Implant to 'Plug' Brain into Supercomputers

DARPA wants to develop a tiny device that would translate the electrochemical language of the brain into the 1s and 0s of computers.

DARPA wants to get inside your head.

The Defense Advanced Research Projects Agency, the R&D arm of the U.S. Department of Defense, announced plans this week to develop a next-generation neural implant device for connecting the human brain to sophisticated supercomputers.

Dubbed the Neural Engineering System Design (NESD), the program aims to dramatically improve current neurotechnology capabilities through public and private research initiatives. The ultimate goal is to produce a miniaturized brain implant less than one cubic centimeter in size.


The essential problem that DARPA is trying to solve concerns data transfer. The human brain is, of course, an enormously complex system, with roughly 86 billion neurons. Today's most advanced supercomputers, meanwhile, can perform quadrillions of calculations per second.

The trick is getting the two systems to communicate efficiently, say DARPA officials. The proposed device would serve as a translator between digital computer systems and the electrochemical "language" of the brain.

"Today's best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem," said Phillip Alvelda, the NESD program manager, in press materials accompanying the announcement.


"Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics."

According to DARPA, current neural interfaces approved for human use feature around 100 channels, each responsible for aggregating signals from tens of thousands of neurons. The NESD program aims to develop systems that can communicate directly with up to one million individual neurons in a given region of the brain.


It's going to take some doing. DARPA officials say the system will require integrated breakthroughs in multiple disciplines including neuroscience, synthetic biology, low-power electronics, photonics and medical manufacturing.

Potential applications of the technology are vast, but in the short term DARPA is hoping to create devices for those with sight or hearing impairments. For example, the NESD system could feed digital auditory or visual information into the brain with far greater resolution and clarity than current technology.

The NESD program is part of the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) launched in 2013. You can read more about the initiative at the DARPA website.

via Gizmag

It's easy to mistake this photo for some kind of surreal landscape painting, but this image in fact shows off the imagination of Google's advanced image detection software. Similar to an artist with a blank canvas, Google's software constructed this image out of nothing, or essentially nothing, anyway. This photo began as random noise before software engineers coaxed this pattern out of their machines. How is it possible for software to demonstrate what appears to be an artistic sensibility? It all begins with what is basically an artificial brain.


Artificial neural networks are systems consisting of between 10 and 30 stacked layers of synthetic neurons. In order to train the network, "each image is fed into the input layer, which then talks to the next layer, until eventually the 'output' layer is reached," the engineers wrote in a blog post detailing their findings. The layers work together to identify an image. The first layer detects the most basic information, such as the outline of the image. The next layers home in on details about the shapes. The final output layer provides the "answer," or identification of the subject of an image. Shown is Google's image software before and after processing an image of two ibis grazing to detect their outlines.
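As a rough illustration of that stacked-layer idea, here is a toy network written in Python with PyTorch. The layer sizes, the number of labels and the framework are assumptions made for illustration only, not details taken from the engineers' post.

```python
# A minimal sketch of a stacked network (an illustrative assumption, not
# Google's actual model): an image flows from the input layer through
# intermediate layers until the "output" layer produces an answer.
import torch
import torch.nn as nn

toy_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: basic outlines and edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # later layer: details about shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 10),                  # output layer: scores for 10 made-up labels
)

image = torch.randn(1, 3, 224, 224)               # one RGB image, 224x224 pixels
scores = toy_net(image)                           # the network's "answer"
print(scores.argmax(dim=1))                       # index of the best-scoring label
```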


Searching for shapes in clouds isn't just a human pastime anymore. Google engineers trained the software to identify patterns by feeding millions of images to the artificial neural network. Give the software constraints, and it will scout out patterns to recognize objects even in photos where the search targets are not present. In this photo, for example, Google's software, like a daydreamer staring at the clouds, finds all kinds of different animals in the sky. This pattern emerged because the neural network was trained primarily on images of animals.
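In the same spirit, training can be pictured as repeatedly showing the network labeled images and nudging its weights toward the right answers. The random tensors below are stand-ins for real photos and labels, and the loop reuses the hypothetical toy_net from the earlier sketch; none of this is the pipeline Google actually ran.

```python
# A toy training loop (stand-in data, not the millions of images Google used):
# show the network labeled examples and adjust its weights to reduce its errors.
import torch
import torch.nn as nn

images = torch.randn(64, 3, 224, 224)            # stand-in batch of photos
labels = torch.randint(0, 10, (64,))             # stand-in labels (e.g. animal classes)

optimizer = torch.optim.SGD(toy_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(toy_net(images), labels)      # how wrong are the current guesses?
    loss.backward()                              # work out how each layer should change
    optimizer.step()                             # nudge the weights accordingly
```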


How the machine is trained will determine its bias in terms of recognizing certain objects within an otherwise unfamiliar image. In this photo, a horizon becomes a pagoda; a tree is morphed into a building; and a leaf is identified as a bird after image processing. The objects may have similar outlines to their counterparts, but none of the originals in the "before" images are part of the software's image vocabulary, so the system improvises.


When the software recognizes an object, it modifies the photo to exaggerate the presence of that known pattern. Even when it correctly recognizes the animals it has been trained to spot, the detection can be a little overzealous in identifying familiar shapes, particularly after the engineers send the photo back through the network and tell the software to find more of the same, creating a feedback loop. In this photo of a knight, the software appears to recognize the horse, but also renders the faces of other animals on the knight's helmet, globe and saddle, among other places.
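That exaggeration step can be sketched as gradient ascent on the image itself: nudge the pixels so that a chosen layer responds even more strongly to whatever it already detects. The pretrained model, the layer picked and the step sizes below are placeholder assumptions, not the configuration Google described.

```python
# A rough sketch of amplifying what a layer "sees" (assumed setup, not Google's
# code): pick one layer of a pretrained network and adjust the image so that
# layer's activations grow stronger.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()   # any pretrained network would do
for p in model.parameters():
    p.requires_grad_(False)                          # only the image gets updated

captured = {}
model.inception4c.register_forward_hook(             # an arbitrary mid-level layer
    lambda module, inputs, output: captured.update(out=output)
)

def amplify(img, steps=20, lr=0.01):
    img = img.detach().clone().requires_grad_(True)
    for _ in range(steps):
        model(img)
        strength = captured["out"].norm()            # how strongly the chosen layer fires
        strength.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()

dreamed = amplify(torch.rand(1, 3, 224, 224))        # exaggerates whatever the layer finds
```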


Taken a step further, when the output is fed back through the network over several cycles, the artificial neural network restructures the image into the shapes and patterns it has been trained to recognize. Again borrowing from an image library heavy on animals, this landscape scene is transformed into a psychedelic dream scene where the clouds are apparently made of dogs.
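Repeating that step and feeding each result back in as the next input is what pushes a picture toward the network's favorite shapes. A hypothetical continuation of the amplify() sketch above:

```python
# Feed the result back through the same amplification step over several cycles,
# as the caption describes (reuses the amplify() sketch above, an assumption).
img = torch.rand(1, 3, 224, 224)     # any starting image works, even pure noise
for cycle in range(8):
    img = amplify(img, steps=10)     # each pass exaggerates the learned patterns further
```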


At its most extreme, the neural network can transform an image that started as random noise into a recognizable but still somewhat abstract kaleidoscopic expression of objects with which the software is most familiar. Here, the software has detected a seemingly limitless number of arches in what was a random collection of pixels with no coherence whatsoever.


This landscape was created with a series of buildings. Google is developing this technology in order to boost its image recognition software. Future photo services might recognize an object, a location or a face in a photo. The engineers also suggest that the software could one day be a tool for artists that unlocks a new form of creative expression and may even shed light on the creative process more broadly.
