Human rights groups are calling for a ban on autonomous battlefield robots, raising larger questions about the ethics of robots from the battlefield to eldercare.
Steve Ranger on TechRepublic reports that governments that are party to the Convention on Certain Conventional Weapons (CCW) plan to meet in Geneva next year to "discuss the issues related to so-called 'lethal autonomous weapons systems,' or what campaigners have dubbed 'killer robots.'"
I had my eyes opened to the potential for battlefield robots by P.W. Singer's 2009 book Wired for War. He makes a strong case that robots and drones are transforming the nature of warfare, just as the inventions of gunpowder and aircraft did. War is a fundamental human activity; most nations, most of the time, are fighting a war, building up their defenses, or recovering from a war. And war has broad ramifications for civilian society, from fashion to the Internet itself, whose fundamental development was funded by the US Department of Defense.
Everybody knows about drones, but few people are aware of how deeply fundamental the technology is to America's wars. That was true in 2009, and I expect it's even more true today.
Right now, drones are remote-control machines. Humans must decide what they will target and when they will fire a missile. "But what concerns many experts is the potential next generation of robotic weapons: ones that make their own decisions about who to target and who to kill," Ranger wrote.
Steve Goose, arms director at Human Rights Watch, calls for an outright ban on killer robots. The US Department of Defense released a policy last year on using autonomous weapons in the battlefield, setting limits on their operation but not banning them. The UK says it has no plans to develop weapons that operate outside of human control.
Noel Sharkey, chairman of the International Committee on Robotic Arms Control and professor of AI and robotics at the University of Sheffield in the UK, says the military overestimates the capabilities of robotic technology. He told Ranger that robot soldiers can't comply with the basic rules of war. "They can't distinguish between a combatant or a civilian or between a wounded soldier and a legitimate target." One UK-built system can tell a person from a car "but has problems with a dancing bear or a dog on its hind legs." Robots won't be able to judge "proportionality" -- whether civilian losses are acceptable and in proportion to military advantage. That's a difficult decision even for human military experts.
And accountability is a huge problem. A robot can't be blamed if things go wrong; the blame will go to military commanders. But that wouldn't be fair, Sharkey told Ranger, because military robots can be hacked or programmed badly. Who should get the blame -- manufacturers, software engineers, hardware engineers, or the commander?
Isaac Asimov addressed these problems more than 70 years ago with his Three Laws of Robotics, which say robots must protect humans, obey humans, and protect themselves -- in that order. (His robot short stories are collected in the book I, Robot; he also wrote several robot novels between the 1950s and the 1980s.) The problem with applying those laws in the real world is that they assume robots make choices. They don't. Today's military drones operate by remote control, and autonomous robots are programmed, with no more free will than a word processor.
Moreover, as the science fiction writer Cory Doctorow pointed out in a 2005 short story, also titled "I, Robot," it would require a police state to truly ensure that only Three Laws-compliant robots get built.
A robot is just a tool, like a gun or a hammer. Human beings must exercise control, Dr. Joanna Bryson of the University of Bath told TechRepublic.
This goes beyond battlefield robots. With Roomba vacuum-cleaning robots in homes, Google buying up robotics companies, and self-driving cars and eldercare robots under development in Japan, robots are entering our civilian, daily lives, and we must figure out how to use them ethically. The UK's Engineering and Physical Sciences Research Council (EPSRC) is concerned that companies might make robots designed to be cute or otherwise lovable, alienating people from one another. The machine nature of robots should be apparent, EPSRC says.
— Mitch Wagner, Editor in Chief, Internet Evolution