Australian scientists reveal the simplified brain circuitry behind human hearing
In a groundbreaking study, researchers at Macquarie University have disproved a long-held idea about how people process spatial hearing, a finding that could reshape the audio technology and hearing aid industries.
The conventional model, developed in the 1940s, proposes that the brain uses a sophisticated network of neurons to pinpoint a sound's source from minute differences in its arrival time at each ear. Dr. Jaime Undurraga and his team have now refuted this theory, showing that the brain's spatial hearing circuitry is far simpler than previously thought.
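To make the arrival-time cue concrete, here is a minimal sketch (a hypothetical Python example, not code from the study) that estimates the interaural time difference between two simulated ear signals by cross-correlation; the head width, sample rate, and source angle are illustrative assumptions. This timing difference is the cue the 1940s model assumed was read out by an array of delay-tuned neurons.

```python
import numpy as np

FS = 44_100           # sample rate (Hz); illustrative assumption
SPEED_OF_SOUND = 343  # m/s
HEAD_WIDTH = 0.18     # approximate ear-to-ear distance (m); assumption

def simulate_ears(angle_deg, duration=0.05, fs=FS):
    """Simulate left/right ear signals for a noise burst arriving from
    angle_deg (0 = straight ahead, positive = to the right).
    Simplified: a pure time delay, no head filtering."""
    itd = HEAD_WIDTH * np.sin(np.deg2rad(angle_deg)) / SPEED_OF_SOUND
    shift = int(round(itd * fs))  # positive: sound reaches the right ear first
    noise = np.random.randn(int(duration * fs))
    left = np.roll(noise, max(shift, 0))    # delayed ear lags by |shift| samples
    right = np.roll(noise, max(-shift, 0))
    return left, right

def estimate_itd(left, right, fs=FS, max_itd=0.8e-3):
    """Estimate the interaural time difference by cross-correlation,
    searching only physically plausible lags (|ITD| stays under
    roughly 0.7-0.8 ms for a human-sized head)."""
    max_lag = int(max_itd * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(left, np.roll(right, lag)) for lag in lags]
    return lags[int(np.argmax(corr))] / fs  # seconds; positive = right ear leads

left, right = simulate_ears(angle_deg=30)
print(f"Estimated ITD: {estimate_itd(left, right) * 1e6:.0f} microseconds")
```

For a source 30 degrees to the right this prints an ITD of a few hundred microseconds, the scale of timing difference the auditory system has to work with.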
Contrary to the idea of uniquely complex cerebral processing, the study, published in the journal Current Biology, shows that humans localize sounds much as many other species do. The brain uses a multipurpose network of neurons to encode the size and position of a sound source, rather than dedicating a separate neuron to every point in space.
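One common way such a sparse code is modeled, and an assumption of this sketch rather than the paper's own implementation, is with just two broadly tuned channels, one favoring each side of the head; direction is read out from the balance of their activity rather than from a dedicated neuron per location:

```python
import numpy as np

ANGLES = np.linspace(-90, 90, 181)  # candidate source directions (degrees)

def channel_rate(angle_deg, preferred_side, slope=0.05):
    """Firing rate of one broadly tuned 'hemispheric' channel.
    preferred_side is +1 (right-preferring) or -1 (left-preferring);
    each channel responds across the whole field, just more to its own side."""
    return 1.0 / (1.0 + np.exp(-preferred_side * slope * angle_deg))

def decode_direction(left_rate, right_rate):
    """Recover direction from only two rates: pick the candidate angle
    whose predicted channel rates best match the observed pair."""
    errors = [(channel_rate(a, -1) - left_rate) ** 2 +
              (channel_rate(a, +1) - right_rate) ** 2 for a in ANGLES]
    return ANGLES[int(np.argmin(errors))]

# A source 30 degrees to the right: the *balance* of the two channels,
# not any single location-tuned neuron, carries the position.
left, right = channel_rate(30.0, -1), channel_rate(30.0, +1)
print(f"Decoded direction: {decode_direction(left, right):.0f} degrees")
```

In this toy model, two units recover the direction that the older model assigned to a whole bank of location-specific detectors, which is what makes the circuit sparse and energy-efficient.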
"Our brains don't need an over-engineered system to determine where sounds are coming from," says senior author Professor David McAlpine.
"For spatial listening, like other animals, we use a sparse, energy-efficient type of brain circuitry," he adds.
The researchers suggest these results could significantly shape the design of audio technologies and hearing aids. They argue that understanding the brain's streamlined method for distinguishing between sound sources will let engineers build more flexible and effective hearing aids, cochlear implants, and cellphones.
Current hearing aids struggle to separate individual sounds in loud or highly reverberant environments, making it hard for users, especially those with hearing loss, to follow speech and other crucial auditory cues.
The work also highlights the shortcomings of existing machine listening technologies, such as large language models (LLMs), which aim to reproduce high-fidelity signals faithfully without accounting for how the brain localizes sound.
Professor McAlpine stresses the need to focus on the brain's "shallow" processing of sound fragments rather than trying to imitate sophisticated language processing, arguing that a better understanding of how the brain represents sound will lead to improved machine listening systems.
Next, the research team plans to determine how little information is actually needed for effective spatial listening. Working with industry partners including Google Australia, Cochlear, and the National Acoustic Laboratories, the team also aims to investigate ways of incorporating artificial intelligence into hearing aids.
This research challenges established ideas about human hearing and opens the door to new developments in audio technology and assistive devices.