The team also looked at how other animals, in this case a bird species called the Arabian babbler, drive off predators. A babbler that spots a predator makes an alarm call, and other babblers join in with calls of their own. The group then mobs the predator, flapping wings and making noise the whole time. The babblers never actually fight the animal they want to drive off; they simply make enough noise and commotion that attacking a babbler no longer seems worth it.
Arkin and Ph.D. student Justin Davis found that the deception works once the group reaches a certain size; essentially, when enough backup arrives to convince the adversary that it's best to back off. Davis modeled that behavior in software using a military scenario and found that the bluff worked even when the group lacked the firepower to confront the enemy directly.
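The core idea can be sketched as a toy simulation. This is not Davis's actual model; the threshold value and the retreat rule are invented for illustration. The point it demonstrates is the one above: the adversary reacts to apparent group size, so the bluff succeeds even though no defender could win a direct fight.

```python
# Toy sketch of the mobbing bluff, NOT the Georgia Tech simulation.
# Assumption (hypothetical): the adversary retreats once the visible mob
# crosses a fixed size threshold, regardless of the mob's real firepower.

def adversary_backs_off(mob_size: int, threshold: int = 4) -> bool:
    """The adversary judges only apparent strength: big enough mob, retreat."""
    return mob_size >= threshold

def simulate_mobbing(defenders: int, threshold: int = 4) -> str:
    mob = 0
    for _ in range(defenders):
        mob += 1  # another 'babbler' joins and adds to the noise
        if adversary_backs_off(mob, threshold):
            return f"adversary retreats at mob size {mob}"
    return "adversary attacks; bluff failed"

print(simulate_mobbing(defenders=6))  # enough backup arrives in time
print(simulate_mobbing(defenders=2))  # too few to be convincing
```

Note that nothing in the model gives the defenders any actual combat ability; only the count matters, which is exactly why the deception works.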
The military is interested in this because a robot that can fool an opponent is a valuable tool. It could lead an enemy down a false trail or make itself look more dangerous than it actually is.
The work is an extension of research Arkin started in 2009, developing a kind of "ethical governor" for robots. In 2010 he worked with Alan Wagner to develop deception algorithms using a kind of hide-and-seek game.
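A minimal sketch of that hide-and-seek idea follows. In Wagner and Arkin's 2010 experiments, a hiding robot left a false trail (knocked-over markers) along a path it did not take. The path names, marker logic, and seeker strategy below are invented for illustration, not taken from their implementation.

```python
import random

# Hypothetical sketch of false-trail deception in a hide-and-seek game.
# The hider commits to one path, then disturbs markers on a DIFFERENT
# path, so a seeker that trusts the physical evidence searches wrongly.

def hide_with_false_trail(paths, rng):
    """Pick a real hiding path, then leave evidence on a decoy path."""
    real = rng.choice(paths)
    decoys = [p for p in paths if p != real]
    fake = rng.choice(decoys)      # evidence planted on a path not taken
    disturbed_markers = {fake}     # the only trail the seeker can observe
    return real, disturbed_markers

def seek(paths, disturbed_markers):
    """A naive seeker follows the trail: search marked paths first."""
    for p in paths:
        if p in disturbed_markers:
            return p
    return paths[0]

paths = ["left corridor", "center corridor", "right corridor"]
real, markers = hide_with_false_trail(paths, random.Random(0))
guess = seek(paths, markers)
print(f"hider in {real!r}, seeker searches {guess!r}, fooled: {guess != real}")
```

Because the decoy is chosen from the paths the hider did not take, a seeker that trusts the markers is fooled on every run; a smarter seeker would need a model of the hider's willingness to deceive.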
If robots can fool other robots, or people, that raises interesting ethical problems. When does fooling people become dangerous?
How do you tell a robot when the right time to deceive is? We won't be seeing anything like the Terminator anytime soon, but we already have drones, and the military has explored the use of autonomous supply vehicles. Human Rights Watch has expressed concern over robots that can make targeting decisions; the ability to deceive would only complicate that.
via Georgia Tech