They Really Didn't Hear You

When people focus on a complex visual task, they can become deaf to other sounds around them.

Your better half is engrossed in the latest episode of Homeland when you ask for help cleaning up the kitchen.

When she doesn't respond, you assume she's conveniently ignoring you. But in reality, she might not have heard you at all.

New research suggests that the ability to process sound and the ability to process sight share the same part of the brain. So when someone focuses on a complex visual task, say, determining whether a character in a tense but wordless TV scene is a bad guy, they can become deaf to other sounds around them.

Anyone who's been staring at an interesting billboard and not seen an approaching car, or who's been reading a page-turner and not heard their child arrive home from school, knows this anecdotally.

And previous research has suggested that the reverse is also true: when people switch their attention to sounds, they lose the ability to focus on what's in front of them. Think of the driver who takes a cell phone call and starts swerving.

Now, by using brain imaging, researchers have uncovered the extent of this "inattentional deafness," or how demanding visual tasks rob us of the ability to hear.

"When volunteers were performing demanding visual task, they were unable to hear sounds that they would normally hear," study coauthor Dr. Maria Chait said in a release. "The brain scans showed that people were not only ignoring or filtering out the sounds, they were not actually hearing them in the first place."

This is great news for spouses the world over who've long pleaded, "But I really didn't hear you!" But it also has broader implications. Doctors in noisy operating rooms may benefit from being aware of this phenomenon, for example.

Pedestrians who text and walk should take heed, too.

"They're prone to inattentional deafness," coauthor Professor Nilli Lavie said in the release. "Loud sounds such as sirens and horns will be loud enough to get through, but quieter sounds like bicycle bells or car engines are likely to go unheard."

Phones can become a dangerous distraction when crossing a street.

It's easy to mistake this photo for some kind of surreal landscape painting, but this image in fact shows off the imagination of Google's advanced image detection software. Similar to an artist with a blank canvas, Google's software constructed this image out of nothing, or essentially nothing, anyway. This photo began as random noise before software engineers coaxed this pattern out of their machines. How is it possible for software to demonstrate what appears to be an artistic sensibility? It all begins with what is basically an artificial brain.

Artificial neural networks are systems consisting of between 10 and 30 stacked layers of synthetic neurons. In order to train the network, "each image is fed into the input layer, which then talks to the next layer, until eventually the 'output' layer is reached," the engineers wrote in a blog post detailing their findings. The layers work together to identify an image. The first layer detects the most basic information, such as the outline of the image. The next layers home in on details about the shapes. The final output layer provides the "answer," or identification of the subject of an image. Shown is Google's image software before and after processing an image of two ibis grazing to detect their outlines.
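To make the stacked-layer idea concrete, here is a minimal sketch in Python, assuming nothing about Google's actual network: each layer hands its result to the next, and the final layer delivers the "answer." The layer sizes, random weights, and the classify function are illustrative placeholders only.

import numpy as np

# A toy stack of layers: 784 input "pixels", two hidden layers, 10 output classes.
# These sizes and the random weights are placeholders, not Google's real network.
rng = np.random.default_rng(0)
layer_sizes = [784, 128, 64, 10]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def classify(image_pixels):
    # Each layer "talks to the next": early layers see raw pixel structure
    # (outlines), later layers combine that into higher-level evidence.
    activation = image_pixels
    for w in weights[:-1]:
        activation = np.maximum(0.0, activation @ w)  # ReLU hidden layer
    scores = activation @ weights[-1]                 # output layer: one score per class
    return int(np.argmax(scores))                     # the network's "answer"

print(classify(rng.random(784)))  # an (untrained) guess for a random image

In a trained network, the weights would be learned from millions of labeled images rather than drawn at random.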

Searching for shapes in clouds isn't just a human pastime anymore. Google engineers trained the software to identify patterns by feeding millions of images to the artificial neural network. Give the software constraints, and it will scout out patterns to recognize objects even in photos where the search targets are not present. In this photo, for example, Google's software, like a daydreamer staring at the clouds, finds all kinds of different animals in the sky. This pattern emerged because the neural network was trained primarily on images of animals.

How the machine is trained determines its bias toward recognizing certain objects within an otherwise unfamiliar image. In this photo, a horizon becomes a pagoda, a tree is morphed into a building, and a leaf is identified as a bird after image processing. The objects may have outlines similar to their counterparts, but none of the objects in the "before" images are part of the software's image vocabulary, so the system improvises.

When the software acknowledges an object, it modifies a photo to exaggerate the presence of that known pattern. Even if the software is able to correctly recognize the animals it has been trained to spot, image detection may be a little overzealous in identifying familiar shapes, particularly after the engineers send the photo back, telling the software to find more of the same, and thereby creating a feedback loop. In this photo of a knight, the software appears to recognize the horse, but also renders the faces of other animals on the knight's helmet, globe and saddle, among other places.

Taken a step further, if the same image is run through several cycles, with the output fed back in each time, the artificial neural network will restructure the image into the shapes and patterns it has been trained to recognize. Again borrowing from an image library heavy on animals, this landscape scene is transformed into a psychedelic dream scene where clouds are apparently made of dogs.
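A rough sketch of that feedback loop, assuming an off-the-shelf pretrained network (VGG16 here, standing in for Google's own model): the image is nudged repeatedly so that whatever one intermediate layer already responds to gets amplified, and the result is fed back in as the next starting point. The layer index, step size, and cycle count are arbitrary illustrative values.

import torch
import torchvision.models as models

# Pretrained convolutional layers; VGG16 is a stand-in, not the network Google used.
net = models.vgg16(pretrained=True).features.eval()
for p in net.parameters():
    p.requires_grad_(False)

def amplify(image, layer_idx=20, steps=20, lr=0.05):
    # Gradient ascent: strengthen the chosen layer's existing response,
    # i.e. "find more of the same" in whatever is already faintly there.
    img = image.clone().requires_grad_(True)
    for _ in range(steps):
        x = img
        for i, layer in enumerate(net):
            x = layer(x)
            if i == layer_idx:        # stop at the chosen intermediate layer
                break
        x.norm().backward()           # how strongly does this layer respond?
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()

# Feeding the output back in over several cycles pushes the picture further
# toward the patterns the network knows; starting from pure random noise is
# how the fully "dreamed" images in this gallery were seeded.
dream = torch.rand(1, 3, 224, 224)
for _ in range(3):
    dream = amplify(dream)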

At its most extreme, the neural network can transform an image that started as random noise into a recognizable but still somewhat abstract kaleidoscopic expression of objects with which the software is most familiar. Here, the software has detected a seemingly limitless number of arches in what was a random collection of pixels with no coherence whatsoever.

This landscape was built up from a series of buildings. Google is developing this technology in order to boost its image recognition software. Future photo services might recognize an object, a location or a face in a photo. The engineers also suggest that the software could one day be a tool for artists that unlocks a new form of creative expression and may even shed light on the creative process more broadly.
