App Lets You Map Rooms Like a Bat
Bats, whales and dolphins, among other animals, use echolocation -- emitting a sound and listening to the echo -- to create a mental map of their environment. They use this map to navigate and find food. Now, researchers have demonstrated a new computer algorithm that could soon give humans a similar ability to map their environments with sound.
The technology could be used to produce a more accurate sound experience in virtual spaces, fine-tune the acoustics of architectural designs and recreate room structure by using only audio files. Eventually, a person’s whereabouts could even be revealed via an echolocation app on her smartphone, down to the very room she occupies, since every room has a unique audio signature. Consider it a hyper-localized audio GPS.
The system works by analyzing the echoes from a simple sound source, such as a person’s voice or finger snap. Those echoes are captured by a few microphones and then processed by the algorithm, which is able to reconstruct the 3-D geometry of complex room shapes.
“The algorithm relies on a certain acoustical model called image source model,” Ivan Dokmanic, of the Audiovisual Communications Laboratory at the Ecole Polytechnique Federale de Lausanne, told Discovery News. “It says that if you have an echo from a wall, you can model this echo -- this sound -- from a point source, which is like a mirror of the original source across a wall.”
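Dokmanic's description of the image source model can be sketched in a few lines: mirror the source across the wall plane, and the wall's first-order echo arrives as if it traveled in a straight line from that mirrored point. The positions, wall, and speed of sound below are illustrative assumptions, not values from the research.

```python
import numpy as np

def image_source(source, wall_point, wall_normal):
    """Mirror a source position across a wall plane (image source model).

    The first-order echo from the wall behaves as if it were emitted by
    this image source, so its arrival delay at a microphone is simply
    the direct-path time ||image - mic|| / c from the mirrored point.
    """
    s = np.asarray(source, dtype=float)
    n = np.asarray(wall_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = np.dot(s - wall_point, n)       # signed distance from source to wall
    return s - 2.0 * d * n              # reflect across the plane

# Hypothetical example: a source 1 m from a wall lying in the x = 0 plane
src = np.array([1.0, 2.0, 1.5])
img = image_source(src,
                   wall_point=np.array([0.0, 0.0, 0.0]),
                   wall_normal=np.array([1.0, 0.0, 0.0]))
mic = np.array([2.0, 2.0, 1.5])
c = 343.0                               # speed of sound in air, m/s
echo_delay = np.linalg.norm(img - mic) / c
```

Here the image source lands at (-1, 2, 1.5), the mirror of the source across the wall, and the echo delay is just the straight-line distance from that point to the microphone divided by the speed of sound.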
Dokmanic said the algorithm he helped create is able to sort the echoes it receives and determine their usefulness. For example, due to a ricocheting effect, some echoes arrive at a microphone only after bouncing off several walls; these third- or fourth-order echoes are comparatively diminished and less useful than “first-order echoes,” which have reflected off just one wall.
“Combined with echo sorting, the algorithm can discard higher-order echoes and construct the walls from first-order echoes,” Dokmanic said.
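The echo-sorting idea Dokmanic describes can be sketched as a consistency test: a combination of per-microphone echo delays is kept only if it could plausibly have come from a single point (an image source). The version below is a simplified stand-in for the published algorithm -- it assumes absolute arrival times and freely chosen microphone positions -- using linearized multilateration.

```python
import numpy as np

def locate_image_source(mics, times, c=343.0):
    """Fit a single point source to absolute arrival times at several
    microphones (linearized least-squares multilateration).

    In echo sorting, each candidate combination of per-microphone echoes
    can be tested this way: a small residual means the echoes are
    consistent with one image source (a genuine first-order echo), while
    a large residual flags a mismatched or higher-order combination.
    """
    mics = np.asarray(mics, dtype=float)
    r = c * np.asarray(times, dtype=float)      # source-to-mic distances
    # Subtract the first equation to linearize ||x - m_i||^2 = r_i^2
    A = 2.0 * (mics[1:] - mics[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(mics[1:] ** 2, axis=1) - np.sum(mics[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.linalg.norm(np.linalg.norm(mics - x, axis=1) - r)
    return x, residual

# Made-up check: delays generated from a known image source are recovered
mics = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
s_true = np.array([-1.0, 2.0, 1.5])
times = np.linalg.norm(s_true - mics, axis=1) / 343.0
x, res = locate_image_source(mics, times)
```

Once a first-order image source is confirmed, the wall it came from is simply the perpendicular bisector plane between the real source and the image source.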
These first-order echoes help create a more authentic acoustic diagram -- one that’s just as important in the virtual world as in the real world. Until a couple of years ago, Dokmanic says, advances in virtual reality were concentrated mainly on the perception of visual space, not aural space. But he argues the sonic landscape is just as important, because false representations of reverberations or echoes in a virtual world can easily confuse the brain.
To give an accurate impression of virtual space using sound, Dokmanic says it’s really important to provide the correct echoes. “If you’re starting your design of the space, then you could say ‘I know how I want my room to sound, so these are the acoustic properties I want it to have,’ ” he said. “The same thing applies to architectural acoustics.”
Conversely, the algorithm could be used as an audio forensics tool to reconstruct an actual space.
“You can imagine that you have a piece of audio that’s recorded in a certain room and you need to reconstruct this room,” explained Dokmanic. “Or you can imagine someone talking on a cell phone. From the cell phone signal or audio recorded by the phone, you could reconstruct the room.”
For example, if an audio engineer wanted to replicate the sound parameters of the legendary Sun Studio where Elvis Presley first recorded “That’s All Right,” he or she could potentially call up the famed Memphis studio and record a phone conversation with someone standing in that room.
Yet another application for the technology is indoor localization, where a person’s position could be mapped or tracked in fine detail, picking up the trail where GPS leaves off.
“Big companies, big office buildings, malls and supermarkets that are interested in knowing how and where people walk around are interested in this,” Dokmanic said. “Museums could also locate a person inside the building and tell if they’re standing next to a piece of art and tailor a presentation.”
Combined with the floor plans and blueprints of buildings, the algorithm could be used to navigate buildings or large spaces -- complete with vertical, three-dimensional mapping -- whereas most GPS devices provide only horizontal, two-dimensional locations.
“If the system knows what the room looks like and knows where the walls are, one microphone will suffice to localize yourself,” said Dokmanic. “Then you could imagine applications like a smartphone app.”
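That one-microphone scenario can be sketched under simplifying assumptions: the phone’s speaker and microphone are treated as co-located, the wall planes are already known, and the first-order echoes have already been identified. The room dimensions and position below are made up for illustration.

```python
import numpy as np

def localize_from_wall_echoes(walls, delays, c=343.0):
    """Locate a co-located source and microphone (e.g. a phone emitting
    a click) inside a room whose wall planes are already known.

    A first-order echo from wall i travels out and back, so the
    perpendicular distance to that wall is c * delay_i / 2.  With each
    wall given as (point_on_wall, inward_normal), stacking the plane
    equations n_i . x = n_i . p_i + d_i gives a small linear system.
    """
    points = np.array([p for p, _ in walls], dtype=float)
    normals = np.array([np.asarray(n, float) / np.linalg.norm(n)
                        for _, n in walls])
    d = c * np.asarray(delays, dtype=float) / 2.0   # wall distances
    b = np.sum(normals * points, axis=1) + d
    x, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return x

# Made-up room: floor plus three walls, listener at a known position
walls = [(np.array([0., 0, 0]), np.array([0., 0, 1])),   # floor
         (np.array([0., 0, 0]), np.array([1., 0, 0])),   # wall x = 0
         (np.array([0., 0, 0]), np.array([0., 1, 0])),   # wall y = 0
         (np.array([4., 0, 0]), np.array([-1., 0, 0]))]  # wall x = 4
x_true = np.array([1.0, 2.0, 1.2])
delays = [2.0 * np.dot(n, x_true - p) / 343.0 for p, n in walls]
x_est = localize_from_wall_echoes(walls, delays)
```

Three non-parallel walls already pin down a 3-D position; the fourth wall here just makes the least-squares fit overdetermined, as it would be with noisy real measurements.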
Laurent Daudet, a professor who studies acoustics at the Institut Langevin in Paris, says reverberation has traditionally been a nuisance in most studies on localization.
"At high levels of reverberation most of the standard techniques would completely fail," he said via email. "Getting the geometry from such measurements is not a new idea -- other people try to do it -- but Dokmanic does this in a very elegant way, turning a big complex geometrical optimization into a number of small and easy linear algebra problems. Basically, he makes it practical."