Cognitive Hearing Aid Can Isolate a Single Voice in a Crowded Room

This new AI-powered system developed at Columbia University could show up in commercial hearing aids within five years.

Ask anyone who uses a modern hearing aid and they'll likely tell you that the technology is just this side of miraculous — except in crowds. In busy multi-speaker environments, even the most advanced hearing aids have trouble helping people hear what others are saying.

New research out of Columbia University, however, aims to remedy that situation by combining traditional hearing aid technology with brain scanning and artificial intelligence. Using external sensors that monitor brain activity, the technology determines which person the user is speaking to, isolates the source, and then amplifies that person's voice while suppressing all other background noise.

“We originally decoded a person's attention using invasive recordings in 2012, and we showed in 2014 that the same can be done using non-invasive scalp recording — a cap with electrodes that touch the scalp,” said Nima Mesgarani, an associate professor in the Department of Electrical Engineering at Columbia University in New York.

Processing massive amounts of audio data and isolating the wearer's object of interest based on their brain waves is a heavy lift for a hearing aid system. Performing that high-tech maneuver in real time requires a lot of computing power, which is difficult to scale down to the size of a standard hearing aid. Luckily, advances in materials science are making such tiny computers possible.

“Plenty of research is being done these days to make small dedicated chips that can do the computation that is needed in such devices,” Mesgarani said.

The technology, called auditory attention decoding, relies on a form of artificial intelligence known as a deep neural network. Neural networks mimic the workings of the human brain – learning on the fly and coming up with their own solutions to new problems. Using the neural network approach, the hearing aid computer actually teaches itself, over time, the best way to pluck a single voice out of a crowded room.
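The basic idea behind attention decoding can be illustrated with a toy sketch: separate the audio into candidate voices, reconstruct an amplitude envelope from the listener's brain signals, and amplify whichever voice best matches that envelope. The function names, signals, and the simple correlation-based matching below are illustrative assumptions, not the actual Columbia algorithm, which uses deep neural networks for both separation and decoding.

```python
import numpy as np

def decode_attention(eeg_envelope, source_envelopes):
    """Pick the separated voice whose amplitude envelope correlates best
    with the envelope reconstructed from the listener's brain activity.
    (Hypothetical simplification of auditory attention decoding.)"""
    corrs = [np.corrcoef(eeg_envelope, env)[0, 1] for env in source_envelopes]
    return int(np.argmax(corrs))

def remix(sources, attended_idx, gain_db=12.0):
    """Boost the attended voice and attenuate the others, then
    normalize the mix to avoid clipping."""
    gain = 10 ** (gain_db / 20)
    out = sum(s * (gain if i == attended_idx else 1 / gain)
              for i, s in enumerate(sources))
    return out / np.max(np.abs(out))
```

In a real system the "separated voices" would themselves come from a neural speech-separation network, and the envelope reconstruction from EEG is a learned model rather than a fixed correlation.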

“Our algorithm uses deep neural networks; because they are becoming so widespread, many researchers are developing low-power specialized hardware to implement them in real time,” Mesgarani said. “Also, modern hearing aids are able to do some of their calculation off-board – for example, by syncing to your phone – which helps to manage heavy computation in such a small device.”

The technology is in a very early proof-of-concept phase, but Mesgarani said that if all goes well the system could start showing up in commercial hearing aids within five years.

“There is no theoretical reason prohibiting the implementation of this technology in an actual hearing aid,” he said. “In fact, several hearing aid companies have already started researching this idea and expressed interest in our approach.”


Down the line, the researchers hope to further refine and miniaturize the technology so that everything fits into a more-or-less standard hearing-aid apparatus. For one thing, Mesgarani said, that skullcap of electrodes has got to go.

“Others have shown the feasibility of decoding attention using in-ear recording — an earbud with electrodes placed on it — or a C-shaped array of electrodes that is placed around the ear, a similar shape to a conventional hearing aid,” Mesgarani said.

The research, published this week in the Journal of Neural Engineering, is a collaboration among the Columbia University Medical Center's Department of Neurosurgery, the Hofstra-Northwell School of Medicine, and the Feinstein Institute for Medical Research, and was funded in part by grants from the National Institutes of Health.
