New Computer Chips That ‘See’ Data Will Enable Energy-Efficient Supercomputers
Drawing inspiration from how mammalian brains process sight, researchers have found a way to mimic the functions of biological neural networks on a next-gen “memristor” chip.
Machine learning has an energy problem. It runs on computer chips that store information in one location and process it in another, requiring power-hungry operations to move the data back and forth between the two.
Neural networks in the brain, which machine-learning algorithms emulate, do both at the same time without using much energy at all.
Now, drawing inspiration from how mammalian brains process sight near-instantly, researchers have found a way to mimic the functions of biological neural networks using an efficient algorithm running on an electrical circuit. The artificial network is based on an advanced chip built from “memristors” (electrical resistors with memory), which store and process data simultaneously and enable efficient pattern recognition.
In experiments, the circuit quickly computed complex data from images and video using far less energy than conventional computer circuits require.
“The idea is to perform computations directly in memory, directly where the data is, so that you don’t have to move data,” Wei Lu, a professor of electrical engineering and computer science at the University of Michigan, told Seeker.
Such a system could be built into the hardware of cameras, for example, and used for real-time video analysis from self-driving cars, which could improve their safety.
RELATED: Blueprint for Giant Quantum Computer Promises Mind-Blowing Power
Lu and his team just published a paper in Nature Nanotechnology describing the new circuit and algorithm, which is based on a principle from neuroscience called sparse coding.
According to the principle, the brain’s cortex manages the tremendous amount of sensory information — images, sounds, smells, etc. — flooding it constantly by reformatting the influx into various components called features, so that it takes very few neurons to process it.
For instance, when a person sees a clock, only a small number of neurons become active, enough to do pattern matching that draws upon the brain’s memory of a clock and compares it with the object’s basic features — its round shape, the presence of numbers — to draw a conclusion that this thing is a clock. Such processing by limited neurons saves energy.
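The sparse-coding idea, greedily explaining an input with only a handful of features, can be sketched in a few lines. This is a toy illustration, not the paper's algorithm; the dictionary sizes and random data are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "dictionary" of 64 candidate features for 16-dimensional inputs
# (sizes and data are illustrative assumptions, not from the paper).
D = rng.normal(size=(16, 64))
D /= np.linalg.norm(D, axis=0)           # unit-norm feature columns

x = rng.normal(size=16)                  # an incoming sensory signal

# Greedy matching pursuit: explain the signal with very few features,
# the way only a few neurons fire for a familiar object.
residual = x.copy()
active = {}                              # feature index -> coefficient
for _ in range(3):                       # keep just 3 "neurons" active
    scores = D.T @ residual
    k = int(np.argmax(np.abs(scores)))   # best-matching feature
    active[k] = active.get(k, 0.0) + scores[k]
    residual = residual - scores[k] * D[:, k]

print(len(active), "active features out of", D.shape[1])
```

Each pass subtracts the best-matching feature's contribution, so the leftover signal shrinks while only a tiny fraction of the dictionary is ever activated.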
The researchers wrote an algorithm that does something similar and then tested it on a 32-by-32 grid of memristors. In this so-called neuromorphic network, the electrical output from each column represents a neuron signal.
Memristor circuits are different from conventional computer circuits because they contain materials that, when hit with a specific voltage, can re-arrange their internal atomic composition. The new atomic arrangement stays put, even when the voltage is turned off. That’s the memory part.
The device also regulates electric current based on the history of voltages that have been applied to it, which is where the processing comes in. The two functions happen simultaneously, just as with synapses that connect neurons in a brain.
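The "compute where the data lives" trick comes from basic circuit laws: each memristor's stored conductance multiplies the voltage across it (Ohm's law), and the currents sum along each column wire (Kirchhoff's law). A minimal numerical sketch of that analog operation, with a hypothetical 4-by-4 array standing in for the real 32-by-32 chip:

```python
import numpy as np

rng = np.random.default_rng(1)

# Conductance of each memristor in a small crossbar (siemens).
# The stored matrix IS the data; there is no separate memory to fetch
# from. (Values and array size are purely illustrative.)
G = rng.uniform(1e-6, 1e-4, size=(4, 4))

v = np.array([0.1, 0.0, 0.2, 0.05])      # input voltage pulses (volts)

# Ohm's law per device plus Kirchhoff's law per column wire make the
# column currents a matrix-vector product, computed "in memory".
i_out = G.T @ v                          # one current per column/neuron

print(i_out)
```

In the physical chip this multiply-accumulate happens in a single analog step, which is where the energy savings over shuttling data between memory and processor come from.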
RELATED: Quantum Data Storage in a Single Atom Brings New Computing Era Closer to Reality
For the experiment, Lu and his colleagues started with a collection of natural images, represented as data in the form of different voltage pulses. They applied the voltage pulses to the memristor network, training it to learn patterns in the images. Each neuron retained a bit of voltage information that corresponded to a feature in the image.
Next, they applied new data to the network that represented new images. The sparse coding algorithm forces each column in the network, essentially each neuron, to compete with the others to match the pattern stored in its memory to the incoming data.
“Only the ones that best represent the input become active in the end,” said Lu. “That’s actually a very neat approach.”
The system then uses the best matches — the winning neurons — to reconstruct the image.
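The competition-then-reconstruction step can be sketched as follows. This is a simplified stand-in (score every column, keep only the top matches, rebuild the input from the winners' features); the feature matrix and input are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Learned features, one per crossbar column (sizes are illustrative).
features = rng.normal(size=(16, 32))
features /= np.linalg.norm(features, axis=0)

x = rng.normal(size=16)                  # a new input image patch

# Competition: every column scores the input; only the best few
# matches "win" and stay active, the rest are suppressed.
scores = features.T @ x
winners = np.argsort(np.abs(scores))[-4:]   # keep 4 winning neurons

# Reconstruction uses only the winners' features (least-squares fit).
coeffs, *_ = np.linalg.lstsq(features[:, winners], x, rcond=None)
x_hat = features[:, winners] @ coeffs

err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error with 4 of 32 neurons: {err:.2f}")
```

The point is that a handful of winning neurons, plus their stored features, are enough to get a usable reconstruction of the whole input.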
Lu said that this approach can be considered a kind of data compression scheme that works much better than conventional compression methods, which must decompress the data to recover the original signal rather than letting the compressed data be used directly.
“Here we don’t have to do that,” he said.
The sparse coding represents the data using just a few active neurons and the features associated with those neurons, he noted.
Calculations showed tremendous energy savings as well. Lu said it would take 20 milliwatts of power to process video in real time, a task that would draw about 300 watts on a conventional graphics processing unit.
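Those two quoted figures imply a power gap of roughly four orders of magnitude, which a one-line check confirms:

```python
# Power figures quoted above: 20 mW for the memristor approach versus
# roughly 300 W for a conventional GPU doing the same real-time task.
memristor_watts = 0.020
gpu_watts = 300.0

ratio = gpu_watts / memristor_watts
print(f"~{ratio:,.0f}x less power")      # ~15,000x
```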
Lu thinks these computer chips could be integrated directly into internet-of-things devices, reducing the need to send information to the cloud for processing; the analysis could happen on the device itself.
He also envisions scaling up the system, combining many memristor networks and running them in parallel to solve complex problems that cannot yet be tackled by even the most powerful supercomputers.
WATCH: How Close Are We to Computers That Think Like Humans?