Killer robots.
Seeing the two words together is enough to conjure up images of chaos and destruction, images all too familiar from the science fiction of Isaac Asimov or Arthur C. Clarke. It's also a concept many A.I. researchers will gladly tell you they've been pestered about at least once by friends or colleagues. But how much of a real ethical concern do killer robots pose for society?
In November of last year, Human Rights Watch (HRW) and the International Human Rights Clinic (IHRC) at Harvard Law School jointly published a 50-page report on killer robots. The report, titled Losing Humanity: The Case against Killer Robots, outlined legal, ethical, and other concerns about the use of fully autonomous weapons (previously covered by Mike Hamer of Robohub). These include unresolved and important roboethics questions such as "who is legally responsible for a robot's actions?"
Maybe this question doesn't have to be answered (not right away, at least) if we simply stop the use and development of killer robots altogether.
Earlier this week, on April 23, a new global campaign, the Campaign to Stop Killer Robots, was launched with this very idea in mind. Composed of over twenty international NGOs in ten countries, the campaign aims to ban the development, production, and use of future lethal robot weapons, or "killer robots," that could autonomously locate and neutralize human targets.
Addressing countries such as China, Russia, Israel, and the United States, which are currently moving to give combat robots greater autonomy, the campaign argues that such systems would pose a challenge to international human rights and humanitarian law.
Fully autonomous weapons aren't roaming around war zones today (yet), so this campaign is preemptive. But given how many precursor technologies are already out there (who doesn't know about the drone technologies used for targeted attacks today?), developing and deploying autonomous weapons seems like an obvious next step, one that could lead to a tragic robotic arms race. Taking action to ban such weapons now makes more sense than banning them when it's too late.
In 2009, Robots Podcast interviewed Noel Sharkey (Professor of A.I. and Robotics at the University of Sheffield, and spokesperson for the Campaign to Stop Killer Robots) and Ronald Arkin (Regents' Professor and Director of the Mobile Robot Laboratory at Georgia Institute of Technology), and both experts addressed the ethics of robot soldiers. As Sharkey argued, the problem of robots autonomously identifying targets lies in the Principle of Discrimination, part of the international Laws of War described in the Geneva Conventions (listen to his interview here). Under this principle, soldiers must not harm civilians, non-combatants, the seriously ill, or prisoners of war. According to Sharkey, no A.I. system can reliably discriminate between soldiers and civilians; a robot, for the moment, "can't have a sense of ethics" necessary to make the humanistic decisions required of soldiers.
An opposing viewpoint is provided by Arkin. According to Arkin, robot soldiers carry an inherent danger, but depending on their implementation, they could provide better safety for soldiers and non-combatants alike (listen to his interview here). As Arkin argues, robots are not affected by emotions such as fear or anger that can sometimes cloud a soldier's better judgment in the field, although simulating certain emotions, such as guilt, could be used to improve a robot's decision-making algorithms.
Despite their contrasting views, Sharkey and Arkin agreed in 2009 that, as things stand, autonomous robots are not ready for the battlefield. Last year's Losing Humanity report echoes this viewpoint. The launch of the Campaign to Stop Killer Robots is timely, coming one month before Christof Heyns (United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions) delivers his report on lethal autonomous robotics to the UN Human Rights Council.
So what can we, as roboticists, politicians, or members of the public, do to support the campaign?
You can have your say on the matter by voting in The Engineer's poll.
Interested individuals can support the campaign by responding to the call to ban lethal autonomous robots issued by the International Committee for Robot Arms Control (ICRAC), a leading NGO member of the Campaign to Stop Killer Robots. Interested NGOs can join the campaign by contacting the campaign coordinator.
Or if you'd rather not explicitly support the campaign but want to find out more, check out its website or press release.
To keep on top of news from the campaign, follow it on Twitter, Facebook, and Flickr.
This post was prepared jointly by Matthew Ebisu and AJung Moon and first appeared on Roboethics Info Database.