SCIENTISTS COULDN'T DO IT IN TWO CENTURIES ---------- AN ENGINEER DID IT IN TWO MONTHS ---------- A TIME-DOMAIN COCHLEAR IMPLANT ---------- RULES ARE MEANT TO BE BROKEN

 

“If at first you don’t succeed, try, try again. Then quit. 

There’s no point in being a damn fool about it.”

– W.C. Fields

If you can’t change an industry – change its foundation.

– A. Doolittle

 

The Bates cochlear implant threatens the implant industry. Two centuries ago, researchers adopted a model of how the ear works that they knew was flawed, but it was workable enough to survive to this day. Their mistake was to use that flawed model to construct an electronic ear, the cochlear implant, where the flaws could no longer be hidden. The sound quality is poor, and cochlear implant wearers will grow up in a world without music.

 

During the Cold War, the electronics engineer John K. Bates Jr. was tasked with designing an electronic “radar” ear. Recognizing that the flawed ear model wouldn’t work, John created a model to describe how to give an electronic ear the capabilities of a natural ear.

 

John recognized the researchers’ mistake and decided to design an alternative implant. In 1999, he introduced a tested cochlear implant design that gave the wearer a life of inclusion and a world of music. However, John and his cochlear implant were ignored, not because the implant didn’t work, but because it did.

 

John’s implant challenged two centuries of acoustic research, devalued researchers’ educations and bodies of work, and made the products of multiple billion-dollar manufacturers obsolete. It was an existential threat to the industry, and the industry chose its self-interest over the cochlear implant wearers it was supposed to help.

 

When John passed away in 2022, his family gave me the records of his decades of cochlear implant research and asked that I try to give his life meaning. I created the Bates Cochlear Implant Project. I have faced the same rejection John did, but as the head of Elegant Disruptions, with a history of introducing disruptive technologies, I am not easily dismissed.

 

Researchers studying ear diseases admit they cannot explain how the ear works, particularly the basilar membrane. They know it behaves as a tapped delay line, but the current model offers no clue as to how that works. I suggest it is more than a coincidence that the core of John’s implant and the basilar membrane both use a tapped delay line, and I know very well how John’s implant works. So far, however, these researchers have not listened to an outsider challenging the foundation of their work.
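
For readers unfamiliar with the term: a tapped delay line is simply a buffer holding the recent history of a signal, read at several delays (“taps”) at once. The sketch below is a minimal Python illustration of that general idea; the class name and interface are mine, not taken from John’s papers.

    import numpy as np

    class TappedDelayLine:
        """Holds the last max_delay samples; taps read the signal at chosen lags."""

        def __init__(self, max_delay):
            self.buffer = np.zeros(max_delay + 1)

        def push(self, sample):
            # Shift the history one step and insert the newest sample at delay 0.
            self.buffer = np.roll(self.buffer, 1)
            self.buffer[0] = sample

        def tap(self, delay):
            # Read the sample that entered `delay` steps ago.
            return self.buffer[delay]

    # Usage: after pushing 1, 2, 3, the tap at delay 2 returns the first sample.
    line = TappedDelayLine(max_delay=4)
    for s in (1.0, 2.0, 3.0):
        line.push(s)
    print(line.tap(0), line.tap(2))  # -> 3.0 1.0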

 

John’s family and I have never considered financial gain our motive. So, I’m giving the math at the core of his implant to the world, particularly to programmers. I’ve used the core of his implant for waveform analysis for decades. A tapped delay line replaces the mathematically intensive standard tool, Fourier analysis. Programming languages have no discipline boundaries, and I am giving programmers the seeds that will eventually destroy the foundation of the cochlear implant industry. In time, cochlear implant wearers will be given the life they were denied.
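
To make the contrast concrete, here is one classical time-domain periodicity detector built on nothing but a delayed copy of the signal: the average magnitude difference function (AMDF). It is a standard textbook technique, shown here as a minimal sketch of the general idea rather than a reconstruction of John’s method, and it needs only subtractions and additions, with no transforms.

    import numpy as np

    def amdf_period(x, min_lag, max_lag):
        # For each candidate lag p, compare the signal with its p-sample-delayed
        # copy. A periodic signal nearly repeats at its true period, so the mean
        # absolute difference dips there.
        x = np.asarray(x, dtype=float)
        scores = [np.mean(np.abs(x[p:] - x[:-p]))
                  for p in range(min_lag, max_lag + 1)]
        return min_lag + int(np.argmin(scores))

    # Usage: a 100 Hz tone sampled at 8 kHz has a period of 80 samples.
    fs = 8000
    tone = np.sin(2 * np.pi * 100 * np.arange(2048) / fs)
    print(amdf_period(tone, min_lag=20, max_lag=400))  # -> 80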

 

However, this transition to open release sacrifices our ability to fund the effort required to support programmers worldwide. We must supply documentation, code examples, online support, and more to ensure success.

 

If we fail, there will be no one to follow us. Who will believe there is anything of value in notebooks rejected by the cochlear implant industry? Who will bother to look? If we fail, current cochlear implant wearers and the millions to follow will be forever denied a life of inclusion.

 

We need your support. It’s not the amount but the act that counts. Go to GoFundMe.com/ and join us in helping to change the future of cochlear implant wearers.

A video of real-time PSM (Periodicity Sorting Matrix) output. The PSM is the preprocessor of the Bates cochlear implant and replaces the filterbanks traditionally used by other implants. The display shows the actual pitch.


A Partial List of Unpublished Papers by John Bates

A Computational Auditory Model Based on Evolutionary Principle
A Modern Atomist’s Theory of Hearing: It Began with Epicurus, 300 B.C.
A Robust Signal Processor for Cochlear Implants
A Selectionist’s Approach to Auditory Perception
A Signal Processor for Cochlear Implants – An application for interstitial waveform sampling
A Systems Approach for Auditory Modeling
A Time-Domain Processing Experiment to Test Fundamental Auditory Principles
Acoustic Source Separation and Localization
An Auditory Model Based on Principles of Survival
An Auditory Theory, the Helmholtzian Mistake, the Cocktail Party Problem
An Experiment on Direction-of-Arrival Finding of Moving Vehicles
Appendix to “How to hear everything and listen to anything”
Can a Zeros-Based Waveform Encoding Explain Two-Tone Interference?
Decoding Hearing: From Cocktail Party to Fundamental Principles
Experiments in Direction Finding
Experiments on Interstitial Waveform Sampling
Hearing Sound as Particles of Meaning
Higher-Level Auditory Processing That Leads to Robust Speech Processing and Other Auditory Applications
How to hear everything and listen to anything
Interpolator Between PRF Periodicity Recognition Gates
Modeling the Haas Effect – A First Step for Solving the CASA Problem
Monaural Separation of Sounds by Their Meanings
My Engineering Mind: How I invented an auditory theory using engineering principles instead of science
Progress Report on AUTONOM, an Autonomic Acoustic Perception System
Solving the Cocktail Party Problem: Unthinkable ideas, luck, and pluck
Solving the Mystery of Hearing: Basic Principles, Ancient Algorithm
The Aural Retina: Hearing Sound as Particles of Meaning
The Microgranule System
The Story of the Aural Retina – Hearing Sound as Particles of Meaning
Time and Frequency: A Closer Look at Filtering and Time-Frequency Analysis
Tonal perception and periodicities
Using Attention and Awareness in a Computational Auditory Model
Zeros-Based Waveform Encoding Experiments in Two-Tone Interference

Future Projects (Unfunded)

Singer Vocal Fault Finder (Being Updated)

UPDATE: We have assigned a programmer to create an iOS version as a “thank you” for your contribution. Contributors will be notified when it becomes available. 

 

The processor in the first-generation Bates cochlear implant was ingeniously repurposed to create a smartphone app that assists singers in visually locating and correcting vocal faults. The app gave the singer a real-time display of their voice’s pitch superimposed on a musical staff, with fault markers inside the display.
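
For the curious, the staff display rests on a standard piece of pitch arithmetic: mapping a detected frequency to the nearest equal-tempered note and a deviation in cents. The sketch below shows that mapping in Python; it is generic music math, not code taken from the app itself.

    import math

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def freq_to_note(freq_hz):
        # Map a detected frequency to the nearest equal-tempered note and its
        # deviation in cents -- the quantity a staff display would plot.
        midi = 69 + 12 * math.log2(freq_hz / 440.0)  # A4 = 440 Hz = MIDI note 69
        nearest = round(midi)
        cents = 100 * (midi - nearest)               # distance from note center
        return f"{NOTE_NAMES[nearest % 12]}{nearest // 12 - 1}", cents

    # A slightly sharp A4: the note is right, but the marker sits ~23 cents high.
    print(freq_to_note(446.0))  # -> ('A4', 23.45...)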

How good was the vocal fault detector? Two reviews follow:

“The most significant characteristic of the application is the visible manifestation of singing sound properties in a convincing mathematical way. You may control almost everything: from the exact pitch of the voice and the shaky voice (wrong vibrato frequency), from the annoying ‘voce caprile’ (he-goat voice, with high frequency) to the unacceptable ‘ballare la voce’ (dancing voice, with low frequency and big pitch intervals), up to realizing the differentiation of simple legato, tenuto, portando, portato, and glissando. The students can easily understand how to control their music phrasing, avoiding exaggerations, merely because they can observe what they sing.”

Zachos Terzakis – Opera Tenor, Vocal Teacher, Athens, Greece

“I have used this application in my studio to visually show my students whether they are singing on pitch. Once they realize that the center of the space or line equals the center of the pitch, it’s easy for them to see their own accuracy and train their ear as well. The accuracy of the program is incredible. I highly recommend it.”

Mark Kent – Vocal Teacher, High Point, North Carolina

Visual Speech Enunciation

A PROPOSED PROJECT:

This is an adaptation of the singer’s vocal fault finder. The application is intended to be an enunciation coach for those with limited hearing. The app scrolls the script of a predetermined lesson plan across the screen. As the user reads the script, the engine in the Bates cochlear implant deconstructs the speech in real time and displays the individual elements in a format suggested by the radar plot shown. Synchronized with the user’s voice is a plot display taken from a reference speaker using the same script. The reference speaker will be of similar gender, age, and register. The user corrects their speaking voice by having their plot match the shape of the reference voice.
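
As a rough illustration of the proposed display, the sketch below overlays a user’s speech-feature polygon on a reference speaker’s polygon in a radar (polar) plot. The feature names and values are placeholders of my own choosing; the elements the Bates engine actually extracts are described in John’s papers, not invented here.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical speech features; placeholder names and values only.
    AXES = ["pitch", "loudness", "duration", "onset", "nasality", "sibilance"]
    reference = np.array([0.8, 0.6, 0.7, 0.5, 0.4, 0.6])
    user = np.array([0.6, 0.7, 0.5, 0.5, 0.6, 0.3])

    angles = np.linspace(0, 2 * np.pi, len(AXES), endpoint=False)
    closed = np.concatenate([angles, angles[:1]])  # repeat first point to close

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for data, label in [(reference, "reference speaker"), (user, "user")]:
        ax.plot(closed, np.concatenate([data, data[:1]]), label=label)
    ax.set_xticks(angles)
    ax.set_xticklabels(AXES)
    ax.legend()
    plt.show()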

Empowering the user with control over their learning process, the application allows them to scroll back and forth through the script, highlight areas for practice, and create loops to focus on difficult sections. This user-centric approach ensures a personalized and effective learning experience.  

Looking ahead, a ‘Pro’ version of the application could offer even more advanced features. This version might include recording capability, allowing users to track their progress over time. Additionally, it could provide a method for downloading recordings for review by speech therapists, enhancing the application’s potential for professional use. 

A “Therapist” version (different platform?) would be able to store recordings from multiple users and annotate each as needed.  

And yes, we know this website needs improvement. We welcome anyone willing to volunteer their website design services.