The brain does a fantastic job of picking out individual voices in noisy environments, a remarkable ability known as the cocktail party effect. Understanding speech amid noise, however, is challenging for a person with hearing loss, and even advanced hearing aids have trouble deciphering speech in noisy environments. Now a group of engineers has developed innovative technology that mimics the brain’s natural ability to detect and amplify certain sounds: a brain-controlled hearing aid prototype that filters out noise and boosts the voices a person wants to hear.
Distinguishing a signal from noise is a function of cells in the brain’s primary auditory cortex. These cells can turn down the noise and increase the gain on a particular signal, such as a friend’s voice on a noisy street. But there is more to sound filtering than merely boosting the signal: the brain must also dampen the noise. This process involves synaptic depression, in which brain cells fire less in response to certain signals. Any technology that enhances the critical signal while dampening unnecessary noise could significantly improve speech recognition systems, and that technology may arrive shortly.
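This gain-up/gain-down idea can be illustrated with a minimal sketch. The function below is a standard Wiener-style gain rule, not the brain’s actual mechanism or the team’s algorithm: given an estimate of how much power in a frequency band comes from noise, it passes bands where the signal dominates and suppresses bands where the noise dominates.

```python
import numpy as np

def snr_gain(mixture_power, noise_power):
    """Wiener-style gain per frequency band (illustrative only).

    Bands dominated by signal get a gain near 1 (kept);
    bands dominated by noise get a gain near 0 (dampened).
    """
    # Estimate the signal's share of the observed power.
    signal_power = np.maximum(mixture_power - noise_power, 0.0)
    # Gain is the signal's fraction of total power in each band.
    return signal_power / (signal_power + noise_power + 1e-12)
```

Applying this gain to each band of a noisy spectrum enhances the critical signal while dampening the rest, which is the behavior the cortical cells described above achieve biologically.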
Although current hearing aid models do an adequate job of amplifying speech while suppressing background noise, they struggle to boost the volume of one voice over other voices. The resulting blend of many voices drastically reduces the wearer’s ability to communicate effectively, which can lead to isolation. The new experimental hearing aids, still in development, could let wearers communicate far more effectively with those around them while filtering out noise. The engineering team believes that the device, which harnesses the power of the brain itself, may allow the many hearing aid wearers to converse with ease.
The new hearing aid differs from existing models in that it uses the listener’s brain waves rather than relying solely on external inputs such as microphones. The team knew that when two people speak to each other, the brain waves of the listener come to resemble those of the speaker. Building on this knowledge, the engineers combined speech-separation algorithms with neural networks and mathematical models that imitate the brain’s computational abilities. The result is a system that separates the voices of the individual speakers within a group and then compares each voice to the brain waves of the person listening. The speaker whose voice pattern most closely matches the listener’s brain waves is amplified over the rest.
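The matching step described above can be sketched in a few lines. This is a simplified illustration, not the team’s published algorithm: it assumes the speech-separation stage has already produced one waveform per speaker, and it stands in for the neural decoding with a single envelope-like signal derived from the listener’s brain waves. The correlation measure, the `gain` value, and the function names are all assumptions made for the example.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def amplify_attended(sources, neural_envelope, gain=4.0):
    """Boost the separated voice whose amplitude envelope best
    matches the listener's neural envelope; attenuate the others.

    sources         -- list of 1-D arrays, one per separated speaker
    neural_envelope -- 1-D array standing in for the decoded brain signal
    Returns (remixed waveform, index of the amplified speaker).
    """
    # Compare each voice's amplitude envelope to the neural envelope.
    scores = [pearson(np.abs(s), neural_envelope) for s in sources]
    best = int(np.argmax(scores))
    # Amplify the best match, turn the rest down, and remix.
    out = [s * (gain if i == best else 1.0 / gain)
           for i, s in enumerate(sources)]
    return sum(out), best
```

In this sketch, the speaker whose envelope correlates most strongly with the listener’s neural signal is treated as the attended voice, mirroring the article’s description of linking each separated voice to the listener’s brain waves and amplifying the closest match.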
The team is now working to transform the prototype into a non-invasive device that can be worn externally on the scalp or around the ear. They also aim to refine the algorithm so it can function in a broader range of environments. Ultimately, the engineers want hearing aid wearers to experience the world around them fully.