Octopus Inspires AI Robots on a Mission

Distributed intelligence may work best for controlling large groups of robots.

If you want an artificially intelligent system to reason like a person, model it after an octopus.

That's the idea behind an AI project underway at Raytheon in Aurora, Colo.

Jim Crowder, chief engineer at Raytheon Intelligence, Information and Services, and his colleague John Carbone are working toward robotic systems with the kind of distributed intelligence found in octopuses. Such systems could be useful for managing dozens of autonomous vehicles or drones at once, such as a flock of unmanned aerial vehicles sent to scout a disaster zone for survivors.

"Right now, it takes many people to run one UAV," Crowder told DNews. "You want to see one person running several UAVs."

That means giving the machines more autonomy, so that if the controller loses contact with a drone, it can continue its mission.

An octopus is a good model because of its distributed intelligence. Unlike a human, whose single brain controls all bodily functions, an octopus has a central brain plus a bundle of nerves in each of its arms that acts as a mini-brain to control that appendage.

The octopus's brain serves as a central mediator, so that if one arm wants to go off in a particular direction -- maybe there's food that way -- the central brain gets the other arms to follow.

The system Crowder is building works along those lines, though it doesn't look like an octopus. It's made of unconnected bug-like robots, each measuring about 5 ½ by 5 ½ inches. They do not communicate with each other, only with a central mediator.
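To make the arrangement concrete, here is a rough Python sketch of that mediator pattern: agents that never talk to each other, only to a central coordinator that nudges stragglers back toward the shared mission. The class names, the drift threshold and the one-dimensional "position" are illustrative assumptions, not details of Raytheon's actual design.

```python
class Agent:
    """One bug-like robot with its own basic objective."""

    def __init__(self, name, objective):
        self.name = name
        self.objective = objective   # e.g. "reach the end of the room and return"
        self.position = 0.0

    def step(self):
        # Each agent decides its own next move; how it gets there is up to it.
        self.position += 1.0
        return self.position

    def correct_course(self, target):
        # Corrections come only from the mediator, never from other agents.
        self.position = target


class Mediator:
    """Central coordinator, analogous to the octopus's central brain."""

    def __init__(self, agents, max_drift=5.0):
        self.agents = agents
        self.max_drift = max_drift

    def tick(self):
        positions = [agent.step() for agent in self.agents]
        center = sum(positions) / len(positions)
        for agent, pos in zip(self.agents, positions):
            # If one agent wanders too far off track, coax it back on mission.
            if abs(pos - center) > self.max_drift:
                agent.correct_course(center)


swarm = Mediator([Agent(f"bot-{i}", "scout and return") for i in range(4)])
swarm.tick()
```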

Each robot has a kind of mini-brain that's been programmed with a basic objective, such as traveling to the end of a room and coming back. But how each unit accomplishes the task is up to its neural network, which is designed to learn and adapt to the environment.

The robots are powered by solar cells and need to move into the light to charge their batteries.

They've also been programmed to think the light is dangerous -- too much time there and they could be eaten by a predator. The "predator" in this case is just the infrared light on another robot, so there is no real danger, but programming the robots with an internal conflict forces them to solve problems.
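A minimal sketch of that internal conflict might look like the following: the robot needs light to charge, but treats light as risky when a "predator" (another robot's infrared emitter) is nearby, so it has to trade one drive off against the other. The weights and threshold here are assumptions for illustration, not the actual on-board logic.

```python
def choose_action(battery_level, predator_nearby):
    """Return 'seek_light' or 'retreat' by weighing two competing drives."""
    charge_need = 1.0 - battery_level           # 0.0 (full) .. 1.0 (empty)
    perceived_risk = 0.8 if predator_nearby else 0.1
    # The conflict: the two drives pull in opposite directions, so the robot
    # must weigh them against each other instead of following a fixed rule.
    return "seek_light" if charge_need > perceived_risk else "retreat"


print(choose_action(battery_level=0.1, predator_nearby=True))   # seek_light
print(choose_action(battery_level=0.9, predator_nearby=True))   # retreat
```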

"If a new situation comes up they have to figure out how to adapt," said Crowder.

If one of them gets too far off track, the central mediator can coax the machines back on mission.

Interestingly enough, the internal conflict also creates emotion.

"‘Happy' means ‘I know what I'm doing, I can handle the info and make the right decisions,'" explains Crowder.

"Anxious" means, "I don't know what to do."

Emotions are critical to an AI system, says Crowder, because they help the brain decide how to use the resources at hand. Humans do the same thing; it's called cognitive economy.

"Without putting emotions into it, the system will be crippled," says Crowder, because it will have to rethink everything every time.

The challenge is finding the right balance, because too much emotion could cause too much unpredictability.

But unpredictability is necessary, says Crowder. Without it, you limit how much a robot can adapt to its environment.

Jim Crowder with two of his artificially intelligent robots.