
Campaign to Stop Killer Robots: Let’s all stop them, shall we?

by AJung Moon
27 April 2013




Killer robots.

Looking at the two words together is enough to conjure up images of chaos and destruction. The image is all too familiar from science fiction, such as the works of Isaac Asimov and Arthur C. Clarke. It is also a concept many A.I. researchers will gladly tell you they have been pestered about at least once by friends or colleagues. But how much of a real ethical concern do killer robots pose for society?

Nobel Peace Laureate Jody Williams and Noel Sharkey from the International Committee for Robot Arms Control at the launch of the Campaign to Stop Killer Robots. Photo by Peter Asaro.


In November of last year, Human Rights Watch (HRW) and the International Human Rights Clinic (IHRC) at Harvard Law School jointly published a 50-page report on the topic of killer robots. The report, titled Losing Humanity: The Case against Killer Robots, outlined many legal, ethical, and other concerns pertaining to the use of fully autonomous weapons (previously covered by Mike Hamer of Robohub). Among the concerns are unresolved and important roboethics questions, such as "who is legally responsible for a robot's actions?"

Maybe this question doesn't have to be answered (not right away, at least) if we simply stop the use and development of killer robots altogether.

Earlier this week, on April 23, a new global campaign, the Campaign to Stop Killer Robots, was launched with exactly this goal. Composed of over twenty international NGOs in ten countries, the campaign aims to ban the development, production, and use of future lethal robot weapons, or "killer robots", that could autonomously locate and neutralize human targets.

The campaign addresses countries such as China, Russia, Israel, and the United States, which are currently moving to give combat robots greater autonomy, and argues that such systems would pose a challenge to international human rights and humanitarian law.

Photo by Peter Asaro


Fully autonomous weapons aren't roaming around war zones today (yet), so this campaign is preemptive. But with so many precursor technologies already out there (who doesn't know about the drone technologies used for targeted attacks today?), developing and deploying autonomous weapons seems an obvious next step. That obvious next step could lead to a tragic robotic arms race. Taking action to ban such weapons now makes more sense than trying to ban them when it is too late.

In 2009, the Robots Podcast interviewed Noel Sharkey (Professor of A.I. and Robotics at the University of Sheffield, and spokesperson for the Campaign to Stop Killer Robots) and Ronald Arkin (Regents' Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology), and both experts addressed the ethics of robot soldiers. As Sharkey argued, the problem of robots autonomously identifying targets lies with the Principle of Discrimination, part of the international Laws of War described in the Geneva Conventions (listen to his interview here). Under this principle, soldiers must not harm civilians, non-combatants, the sick and wounded, or prisoners of war. According to Sharkey, no A.I. system can reliably discriminate between soldiers and civilians. A robot, for the moment, "can't have a sense of ethics" necessary to make the humanistic decisions required of soldiers.

An opposing viewpoint is provided by Arkin. According to Arkin, robot soldiers carry an inherent danger, but depending on their implementation, they could provide better safety for soldiers and non-combatants (listen to his interview here). As Arkin argues, robots are not affected by emotions such as fear or anger that can sometimes cloud a soldier's better judgment in the field, though simulations of certain emotions, such as guilt, could be used to improve a robot's decision-making algorithms.

Despite their contrasting views, both Sharkey and Arkin agreed in 2009 that, as things stand, autonomous robots are not ready for the battlefield. Last year's Losing Humanity report echoes this viewpoint. The launch of the Campaign to Stop Killer Robots is timely, coming one month before Christof Heyns, the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, delivers his report on lethal autonomous robotics to the UN Human Rights Council.

So what can we, as roboticists, politicians, or members of the public, do in support of the campaign?

You can have your say on the matter by voting in The Engineer's poll.

Interested individuals can support the campaign by responding to the call to ban lethal autonomous robots issued by the International Committee for Robot Arms Control (ICRAC), a leading NGO member of the Campaign to Stop Killer Robots. Interested NGOs can join the campaign by contacting the campaign coordinator.

Or, if you'd rather find out more before explicitly supporting them, check out the campaign's website or press release.

To keep on top of the news from the campaign, follow the campaign via Twitter, Facebook, and Flickr.

This post was prepared jointly by Matthew Ebisu and AJung Moon and first appeared on Roboethics Info Database.




AJung Moon is an HRI researcher at McGill and publicity co-chair for the ICRA 2022 conference.




