SCIENTISTS COULDN'T DO IT IN TWO CENTURIES ---------- AN ENGINEER DID IT IN TWO MONTHS ---------- A TIME-DOMAIN COCHLEAR IMPLANT ---------- RULES ARE MEANT TO BE BROKEN


“For years, musicians have been told that the ear acts as a Fourier analyzer. This quarter-truth has increased the distrust of perceptive musicians regarding scientists.”

— W. Dixon Ward (1970) (https://en.wikipedia.org/wiki/Ohm%27s_acoustic_law)

Then that same quarter-truth became the foundation of the cochlear implant, and the implant's abilities were compromised from the start.
But there is hope: The Bates Cochlear Implant.

During the Cold War, electronics engineer John K. Bates, Jr. was assigned to design a passive radar defense system possessing the ear’s abilities. However, he discovered that the Fourier-based model of the ear was not suitable for his electronic radar ears. As a result, he decided to start from scratch and developed a new model to provide his electronic ears with the necessary capabilities.


John's model was groundbreaking because it disregarded tradition entirely. Rather than complex mathematics, he employed third-grade arithmetic. Instead of the frequency domain, which views the world through mathematical lenses, he embraced the time domain: the domain of vibrating strings, audio speakers, changing voltages, and the ear's own perception of sound.
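The site does not publish John's circuits, but the spirit of "third-grade arithmetic in the time domain" can be sketched: estimate a tone's pitch by counting the samples between upward zero crossings and dividing, with no transforms involved. Everything below (the function name, the 440 Hz test tone, the sample rate) is illustrative, not John's design.

```python
import math

def zero_crossing_pitch(samples, sample_rate):
    """Estimate pitch from the spacing of upward zero crossings:
    just counting and one division, no frequency-domain math."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    if len(crossings) < 2:
        return None
    # Average period = span between first and last crossing / cycle count.
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return sample_rate / period

rate = 48000
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate // 2)]
print(round(zero_crossing_pitch(tone, rate), 1))   # → 440.0
```

Averaging over the whole span keeps the estimate accurate even though each individual crossing is only located to the nearest sample.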


John's design was classified and hidden from view. It reemerged in a time of change: electronic components with embedded formulas had replaced the discrete elements of the radar ear's era. John's model was pushed aside and eventually discarded, but John didn't forget.


Approaching retirement, John built a laboratory in his basement and began investigating acoustic uses for his radar ear. Drawing on his electronic-intelligence experience, he started extracting the building blocks of speech (phonemes) from sounds, then went on to manipulate those phonic elements. He deconstructed heavily accented speech, removed the phonic elements associated with the accent, and reconstructed the speech free of it.


However, the introduction of another electronic ear, the cochlear implant, changed John's life. The implant was built on the same frequency-domain ear model he had encountered decades earlier, and John knew it would carry unsolvable problems. He decided to build an alternative implant: a better implant.


A video of a real-time PSM (Periodicity Sorting Matrix) output. The PSM is the preprocessor of the Bates cochlear implant and replaces the filterbanks traditionally used by other implants. The display shows the actual pitch.
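The PSM's internals are not described here, so the following is only a guess at the flavor of sorting by periodicity: if a sound is reduced to event times, a histogram of the gaps between events makes each periodic source pile up at its own period, separating concurrent streams with no filterbank at all. The function name, periods, and time units are invented for illustration.

```python
from collections import Counter

def periodicity_histogram(event_times_ms, max_lag_ms=12):
    """Histogram of all pairwise gaps (in ms) between events, up to max_lag_ms.
    A periodic event stream piles its gaps onto its own period."""
    hist = Counter()
    for i, a in enumerate(event_times_ms):
        for b in event_times_ms[i + 1:]:
            lag = b - a
            if lag > max_lag_ms:
                break   # event times are sorted, so later gaps only grow
            hist[lag] += 1
    return hist

# Two interleaved pulse trains over one second: periods of 7 ms and 10 ms.
events = sorted(set(range(7, 1000, 7)) | set(range(10, 1000, 10)))
hist = periodicity_histogram(events)
top_two = sorted(lag for lag, _ in hist.most_common(2))
print(top_two)   # → [7, 10]: each source shows up at its own period
```

Even though the two trains are fully interleaved in time, the histogram's two dominant bins recover the two underlying periods, which is the kind of separation a periodicity-sorting front end aims for.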

A Partial List of Unpublished Papers by John Bates

A Computational Auditory Model Based on Evolutionary Principle
A Modern Atomist’s Theory of Hearing: It began with Epicurus, 300 B.C.
A Robust Signal Processor for Cochlear Implants
A Selectionist’s Approach to Auditory Perception
A Signal Processor for Cochlear Implants – An application for interstitial waveform sampling
A Systems Approach for Auditory Modeling
A Time-Domain Processing Experiment to Test Fundamental Auditory Principles
Acoustic Source Separation and Localization
An Auditory Model Based on Principles of Survival
An Auditory Theory, the Helmholtzian Mistake, the Cocktail Party Problem
An Experiment on Direction-of-Arrival Finding of Moving Vehicles
Appendix to “How to hear everything and listen to anything”
Can a Zeros-Based Waveform Encoding Explain Two-Tone Interference?
Decoding Hearing: From Cocktail Party to Fundamental Principles
Experiments in Direction Finding
Experiments on Interstitial Waveform Sampling
Hearing Sound as Particles of Meaning
Higher-Level Auditory Processing That Leads to Robust Speech Processing and Other Auditory Applications
How to hear everything and listen to anything
Interpolator Between PRF Periodicity Recognition Gates
Modeling the HAAS Effect – A First Step for Solving the CASA Problem
Monaural Separation of Sounds by Their Meanings
My Engineering Mind: How I invented an auditory theory using engineering principles instead of science
Progress Report on AUTONOM, an Autonomic Acoustic Perception System
Solving the Cocktail Party Problem: Unthinkable ideas, luck, and pluck
Solving the Mystery of Hearing: Basic Principles, Ancient Algorithm
The Aural Retina: Hearing Sound as Particles of Meaning
The Microgranule System
The Story of the Aural Retina – Hearing Sound as Particles of Meaning
Time and Frequency: A Closer Look at Filtering and Time-Frequency Analysis
Tonal perception and periodicities
Using Attention and Awareness in a Computational Auditory Model
Zeros-Based Waveform Encoding Experiments in Two-Tone Interference

Future Projects (Unfunded)

Singer Vocal Fault Finder (Being Updated)

UPDATE: We have assigned a programmer to create an iOS version as a “thank you” for your contribution. Contributors will be notified when it becomes available. 


The processor in the first-generation Bates cochlear implant was ingeniously repurposed as a smartphone app to assist singers in visually locating and correcting vocal faults. The app gave the singer a real-time display of their voice's true pitch superimposed on a musical staff, with fault markers inside the display.
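The app's pitch-to-staff mapping is not specified, but the standard arithmetic such a display would need is easy to sketch: snap a detected frequency to the nearest equal-tempered note and report the leftover in cents, which is the quantity a fault marker could threshold. The function and note naming below are assumptions, not the app's actual code.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_to_staff(freq_hz):
    """Snap a frequency to the nearest equal-tempered note and return the
    note name plus the singer's deviation from it in cents (-50..+50)."""
    midi = 69 + 12 * math.log2(freq_hz / 440.0)   # A4 = MIDI note 69
    nearest = round(midi)
    cents_off = (midi - nearest) * 100
    name = NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1)
    return name, cents_off

# A voice aiming at A4 (440 Hz) but singing slightly sharp:
name, cents = pitch_to_staff(446.0)
print(name, round(cents, 1))   # → A4 23.4
```

On a staff display, the note name fixes the line or space, and the cents offset sets how far above or below its center the pitch trace is drawn.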

How good was the vocal fault detector? Two reviews follow:

“The most significant characteristic of the application is the visible manifestation of singing sound properties in a convincing mathematical way. You may control almost everything, from the exact pitch of the voice and the shaky voice (wrong vibrato frequency), from the annoying ‘voice caprile’ (he-goat voice, with high frequency) to the unacceptable ‘ballare la voce’ (dancing voice, with low frequency and big pitch intervals), up to realizing the differentiation of simple legato, tenuto, portando, portato, and glissando. The students can easily understand how to control their music phrasing, avoiding exaggerations, merely because they can observe what they sing.”

Zachos Terzakis – Opera Tenor, Vocal Teacher, Athens, Greece

“I have used this application in my studio to visually show my students whether they are singing on pitch. Once they realize that the center of the space or line equals the center of the pitch, it’s easy for them to see their own accuracy and train their ear as well. The accuracy of the program is incredible. I highly recommend it.”

 Mark Kent – Vocal Teacher, High Point, North Carolina

Visual Speech Enunciation

This is an adaptation of the singer’s vocal fault finder. The application is intended to be an enunciation coach for those with limited hearing. The app scrolls the script of a predetermined lesson plan across the screen. As the user reads the script, the engine in the Bates cochlear implant deconstructs the speech in real-time and displays the individual elements in a format suggested by the radar plot shown. Synchronized with the user’s voice is a plot display taken from a reference speaker using the same script. The reference speaker will be of similar gender, age, and register. The user corrects their speaking voice by having their plot match the shape of the reference voice.
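How the user's plot is scored against the reference plot is not stated; one simple possibility, sketched here, is the mean gap in cents between the two pitch contours, frame by frame. The function below is hypothetical, not the app's method.

```python
import math

def contour_mismatch_cents(user, reference):
    """Mean absolute gap, in cents, between two pitch contours sampled
    at the same frame times; 0 means the user matches the reference."""
    gaps = [abs(1200 * math.log2(u / r)) for u, r in zip(user, reference)]
    return sum(gaps) / len(gaps)

# A user contour wavering a few cents around a steady 220 Hz reference:
print(round(contour_mismatch_cents([220, 222, 218], [220, 220, 220]), 1))   # → 10.5
```

Measuring in cents rather than hertz keeps the score meaningful across registers, since the same musical error is a larger hertz gap at higher pitches.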

The user can scroll back and forth through the script, highlight areas to practice, and create loops for difficult passages.

A “Pro” version might have a recording capability and a method to download recordings for review by speech therapists. 

A “Therapist” version (different platform?) would be able to store recordings from multiple users and annotate each as needed.  

And yes, we know this website needs improvement. We welcome anyone willing to volunteer their website design services.