The U.S. military has been developing robots for years now. For the most part, these 7,000 or so ground-based machines are designed to help soldiers on the battlefield, either with heavy lifting or more dangerous work, such as deactivating or clearing away bombs. But the United States, Britain, Israel and South Korea have robot sentries, which come equipped with machine guns and cameras, thermal imaging and laser range finders capable of detecting intruders up to 2 1⁄2 miles away. These robots are seen as precursors to fully autonomous systems that make decisions on their own and shoot humans without a living being pulling the trigger.
In light of these technological advances, United Nations expert Christof Heyns has called for a “global moratorium on the development and use of armed robots that can select and kill targets without human command,” The New York Times reports.
A Human Rights Watch report compiled in collaboration with the Harvard Law School cites a United States Air Force assessment that “by 2030 machine capabilities will have increased to the point that humans have become the weakest component in a wide array of systems and processes.”
If robots will be better at war than humans, will wars become easier to wage? And it’s unclear whether current international laws are adequate for controlling the use of such machines. These laws require that soldiers engaged in war be able to distinguish between civilians and combatants, and to judge whether any harm to civilians during a military action exceeds the military advantage gained by it.
And suppose a robot could be programmed to make such a decision, but then malfunctions and kills innocent people. Who is held responsible? Who is punished?
“War without reflection is mechanical slaughter,” Heyns told the Human Rights Council in Geneva on Thursday.
Credit: General Dynamics Robotic Systems