SCIENTISTS COULDN'T DO IT IN TWO CENTURIES ---------- AN ENGINEER DID IT IN TWO MONTHS ---------- A TIME-DOMAIN COCHLEAR IMPLANT ---------- RULES ARE MEANT TO BE BROKEN

 

“If at first you don’t succeed, try, try again. Then quit. 

There’s no point in being a damn fool about it.”

– W.C. Fields

If you can’t change an industry – change its foundation.

– A. Doolittle

The Bates cochlear implant is a threat to the implant industry. The industry made a grave mistake: it chose the wrong foundation for its implants, and cochlear implant wearers paid the price. Voice quality is poor, and cochlear implant wearers live without music.

The current implant's foundation is the frequency domain, a mathematical abstraction; the Bates cochlear implant works in the domain of the ear itself: the time domain.

The mathematical foundation of the Bates cochlear implant is a simple tapped delay line and third-grade arithmetic. It is more than a coincidence that the ear’s cochlea is also a tapped delay line.
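The actual Bates delay-line design is unpublished, but the idea of a tapped delay line driven by nothing more than addition can be sketched in a few lines. This is a minimal generic illustration, not the Bates processor: each tap is a delayed copy of the input, and the taps are simply summed, so a periodic input reinforces itself whenever a tap delay matches its period.

```python
# A minimal tapped delay line: each tap is a delayed copy of the input,
# and taps are combined with nothing more than addition ("third-grade
# arithmetic"). A generic sketch, NOT the unpublished Bates design.

def tapped_delay_line(samples, tap_delays):
    """For each input sample, return the sum of the delayed taps.

    samples    : list of input sample values
    tap_delays : delay (in samples) of each tap, e.g. [0, 3, 7]
    """
    out = []
    for n in range(len(samples)):
        acc = 0
        for d in tap_delays:
            if n - d >= 0:              # a tap contributes once it has data
                acc += samples[n - d]
        out.append(acc)
    return out

# A periodic input reinforces itself when a tap delay matches its period:
signal = [1, 0, 0, 0] * 4               # period of 4 samples
print(tapped_delay_line(signal, [0, 4]))
# → [1, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0]
```

The pulses that repeat every 4 samples double in amplitude once the 4-sample tap fills, which is the essence of detecting periodicity with delays and sums alone.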

The elegance of the Bates cochlear implant lies in how it processes sound. The current cochlear implant is sound-centric; it receives sounds, mathematically manipulates them, and sends them to the wearer’s cochlea. Its flaws degrade that sound.

The Bates cochlear implant is phoneme-centric: it works with phonemes, the building blocks of speech. It digitally separates sound into three categories: speech, background, and noise. Speech is enhanced to improve understanding and to compensate for the wearer's cochlear damage. Background sounds are balanced so they do not mask speech, and noise is eliminated.

After enhancement, the Bates implant borrows a tool from the music industry: Granular Synthesis (GS). GS gives audio engineers absolute control over vocals and music. The Bates cochlear implant uses GS to digitally reconstruct the sound sent to the wearer’s cochlea.
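Granular synthesis in its textbook form slices a signal into short, windowed "grains" and rebuilds it by overlapping and adding them. The Bates implant's specific GS parameters are not public; the sketch below shows only the generic technique, with grain size and hop chosen arbitrarily for illustration.

```python
import math

def granulate(samples, grain_size, hop):
    """Slice a signal into Hann-windowed grains starting every `hop` samples.
    Grain size and hop here are arbitrary illustrative choices."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_size - 1))
              for i in range(grain_size)]
    grains = []
    for start in range(0, len(samples) - grain_size + 1, hop):
        grains.append([samples[start + i] * window[i]
                       for i in range(grain_size)])
    return grains

def overlap_add(grains, hop, total_len):
    """Reconstruct a signal by overlap-adding grains at `hop` spacing."""
    out = [0.0] * total_len
    for g_index, grain in enumerate(grains):
        start = g_index * hop
        for i, v in enumerate(grain):
            out[start + i] += v
    return out

grains = granulate([1.0] * 32, grain_size=8, hop=4)
rebuilt = overlap_add(grains, hop=4, total_len=32)
```

Because each grain is an independent snippet, an audio engineer can reshape, re-time, or replace grains before the overlap-add step, which is the control over vocals and music that GS provides.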

The current implant is sound-centric; the Bates cochlear implant is conversation-centric. Implant wearers will no longer have to lip-read, and children born deaf will grow up in a world with music. The Bates cochlear implant has the power to transform lives and inspire change.

Sadly, we have lost the battle with an industry that prioritizes its own survival over the quality of life of those it is supposed to help. But we haven't given up. The Bates cochlear implant isn't challenging the industry head-on; it's challenging the industry's foundation, and, to put it bluntly, that foundation can be destroyed from the outside. Our commitment to improving the lives of cochlear implant wearers remains unwavering, and we will continue to fight for their right to a better quality of life.

Researchers studying the ear use the same descriptive ear model as the cochlear implant industry, and it is no surprise that they still don't understand how the cochlea works. If they listened, we could change the foundation. But they haven't listened.

We are now offering the clever tapped delay line, which uses third-grade arithmetic, to the public. It replaces the most common mathematical tool for studying waveforms—Fourier analysis. So, we provide the world with a better tool to understand sound. Someone will use that tool to create a new model of the ear, and that model will undermine the foundation of the cochlear implant industry. Cochlear implant wearers will eventually receive the life they deserve, and families will be made whole. We fulfill our promise to the wearers.

Financial gain has never been our goal. We have always thought of financially assisting those who cannot afford the $20,000 to $40,000 cost of an implant as a better way to honor the creator of the Bates cochlear implant – the late John Kenneth Bates, Jr.

We have sacrificed our ability to fund ourselves and are more than ever dependent on your support.

Please help us complete our mission. Go to GoFundMe.com

 

A video of real-time output from the PSM (Periodicity Sorting Matrix). The PSM is the preprocessor of the Bates cochlear implant; it replaces the filterbanks traditionally used by other implants. The display shows the actual pitch.
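The PSM's internals are unpublished; Bates's paper titles ("Zeros-Based Waveform Encoding...") suggest time-domain processing built on waveform zeros. As a hedged illustration only, here is the simplest possible zero-crossing pitch estimator. It is not the PSM; it merely shows that a pitch number can be read directly from the time-domain waveform without any filterbank.

```python
import math

def zero_crossing_pitch(samples, sample_rate):
    """Estimate pitch (Hz) from the average interval between
    positive-going zero crossings. Works only for simple, clean
    waveforms; a real front end (like the PSM) does far more."""
    crossings = [n for n in range(1, len(samples))
                 if samples[n - 1] < 0 <= samples[n]]
    if len(crossings) < 2:
        return None
    intervals = [b - a for a, b in zip(crossings, crossings[1:])]
    avg = sum(intervals) / len(intervals)
    return sample_rate / avg

sr = 8000
tone = [math.sin(2 * math.pi * 100 * n / sr) for n in range(sr)]  # 100 Hz sine
print(round(zero_crossing_pitch(tone, sr)))  # 100
```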


A Partial List of Unpublished Papers by John Bates

A Computational Auditory Model Based on Evolutionary Principle
A Modern Atomist’s Theory of Hearing: It Began with Epicurus, 300 B.C.
A Robust Signal Processor for Cochlear Implants
A Selectionist’s Approach to Auditory Perception
A Signal Processor for Cochlear Implants – An application for interstitial waveform sampling
A Systems Approach for Auditory Modeling
A Time-Domain Processing Experiment to Test Fundamental Auditory Principles
Acoustic Source Separation and Localization
An Auditory Model Based on Principles of Survival
An Auditory Theory, the Helmholtzian Mistake, the Cocktail Party Problem
An Experiment on Direction-of-Arrival Finding of Moving Vehicles
Appendix to “How to hear everything and listen to anything”
Can a Zeros-Based Waveform Encoding Explain Two-Tone Interference?
Decoding Hearing: From Cocktail Party to Fundamental Principles
Experiments in Direction Finding
Experiments on Interstitial Waveform Sampling
Hearing Sound as Particles of Meaning
Higher-Level Auditory Processing That Leads to Robust Speech Processing and Other Auditory Applications
How to hear everything and listen to anything
Interpolator Between PRF Periodicity Recognition Gates
Modeling the HAAS Effect – A First Step for Solving the CASA Problem
Monaural Separation of Sounds by Their Meanings
My Engineering Mind: How I invented an auditory theory using engineering principles instead of science
Progress Report on AUTONOM, an Autonomic Acoustic Perception System
Solving the Cocktail Party Problem: Unthinkable ideas, luck, and pluck
Solving the Mystery of Hearing: Basic Principles, Ancient Algorithm
The Aural Retina: Hearing Sound as Particles of Meaning
The Microgranule System
The Story of the Aural Retina – Hearing Sound as Particles of Meaning
Time and Frequency: A Closer Look at Filtering and Time-Frequency Analysis
Tonal perception and periodicities
Using Attention and Awareness in a Computational Auditory Model
Zeros-Based Waveform Encoding Experiments in Two-Tone Interference

Future Projects (Unfunded)

Singer Vocal Fault Finder (Being Updated)

UPDATE: We have assigned a programmer to create an iOS version as a “thank you” for your contribution. Contributors will be notified when it becomes available. 

 

The processor in the first-generation cochlear implant was ingeniously used to create a smartphone app to assist singers in visually locating and correcting vocal faults. The app gave the singer a real-time display of their voice’s pitch superimposed on a musical staff. Within the display were fault markers.
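The app's source is not public, but a common way to place a detected frequency on a musical staff is to convert it to the nearest equal-tempered note plus a cents offset, so that the note names the staff position and the cents value shows how far the singer is from its center. The sketch below assumes this standard conversion (A4 = 440 Hz); it is an illustration, not the app's actual code.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz, a4=440.0):
    """Map a frequency to the nearest equal-tempered note and its
    offset in cents (100 cents = one semitone); the offset is what a
    staff display would draw as distance from the line or space."""
    semitones = 12 * math.log2(freq_hz / a4)   # signed distance from A4
    nearest = round(semitones)
    cents = (semitones - nearest) * 100        # deviation from note center
    midi = 69 + nearest                        # MIDI number of nearest note
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return name, cents

print(nearest_note(440.0))   # ('A4', 0.0)
print(nearest_note(452.0))   # A4, sharp by roughly 47 cents
```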

How good was the vocal fault detector? Two reviews follow:

“The most significant characteristic of the application is the visible manifestation of singing sound properties in a convincing mathematical way. You may control almost everything, from the exact pitch of the voice and the shaky voice (wrong vibrato frequency), from the annoying ‘voce caprile’ (‘he-goat voice,’ with high frequency) to the unacceptable ‘ballare la voce’ (‘dancing voice,’ with low frequency and big pitch intervals), up to realizing the differentiation of simple legato, tenuto, portando, portato, and glissando. The students can easily understand how to control their music phrasing, avoiding exaggerations, merely because they can observe what they sing.”

Zachos Terzakis – Opera Tenor and Vocal Teacher, Athens, Greece

“I have used this application in my studio to visually show my students whether they are singing on pitch. Once they realize that the center of the space or line equals the center of the pitch, it’s easy for them to see their own accuracy and train their ear as well. The accuracy of the program is incredible. I highly recommend it.”

 Mark Kent – Vocal Teacher, High Point, North Carolina

Visual Speech Enunciation

A PROPOSED PROJECT:

This is an adaptation of the singer’s vocal fault finder. The application is intended to be an enunciation coach for those with limited hearing. The app scrolls the script of a predetermined lesson plan across the screen. As the user reads the script, the engine in the Bates cochlear implant deconstructs the speech in real time and displays the individual elements in a format suggested by the radar plot shown. Synchronized with the user’s voice is a plot display taken from a reference speaker using the same script. The reference speaker will be of similar gender, age, and register. The user corrects their speaking voice by having their plot match the shape of the reference voice.

Empowering the user with control over their learning process, the application allows them to scroll back and forth through the script, highlight areas for practice, and create loops to focus on difficult sections. This user-centric approach ensures a personalized and effective learning experience.  

Looking ahead, a ‘Pro’ version of the application could offer even more advanced features. This version might include recording capability, allowing users to track their progress over time. Additionally, it could provide a method for downloading recordings for review by speech therapists, enhancing the application’s potential for professional use. 

A “Therapist” version (different platform?) would be able to store recordings from multiple users and annotate each as needed.  

And yes, we know this website needs improvement. We welcome anyone willing to volunteer their website design services.