Robohub.org
 

Campaign to Stop Killer Robots: Let’s all stop them, shall we?

by Matthew Ebisu and AJung Moon
27 April 2013




Killer robots.

Looking at the two words together is enough to conjure up images of chaos and destruction. They are images all too familiar from the science fiction of writers such as Isaac Asimov and Arthur C. Clarke, and a concept many A.I. researchers will gladly tell you they have been asked about at least once by friends or colleagues. But how much of a real ethical concern do killer robots pose for society?

Nobel Peace Laureate Jody Williams and Noel Sharkey from the International Committee for Robot Arms Control at the launch of the Campaign to Stop Killer Robots. Photo by Peter Asaro.

In November of last year, Human Rights Watch (HRW) and the International Human Rights Clinic (IHRC) at Harvard Law School jointly published a 50-page report on the topic of killer robots. The report, titled Losing Humanity: The Case against Killer Robots, outlined many legal, ethical, and other concerns pertaining to the use of fully autonomous weapons (previously covered by Mike Hamer of Robohub). Some of the concerns outlined include unsolved and important roboethics questions such as “who is legally responsible for a robot’s actions?”

Maybe this question doesn’t have to be answered (not right away, at least) if we simply stop the use and development of killer robots altogether.

Earlier this week, on April 23, a new global campaign, the Campaign to Stop Killer Robots, was launched with this very goal. Composed of over twenty international NGOs in ten countries, the campaign seeks to ban the development, production, and use of future lethal robot weapons, or “killer robots”: weapons that could autonomously locate and neutralize human targets.

Addressing countries such as China, Russia, Israel, and the United States, which are currently moving to give combat robots greater autonomy, the campaign argues that such systems would pose a challenge to international human rights and humanitarian law.

Photo by Peter Asaro

Fully autonomous weapons aren’t roaming around war zones today (yet), so this campaign is preemptive. But given how many precursor technologies already exist — after all, who hasn’t heard of the drones used for targeted attacks today? — developing and deploying autonomous weapons seems an obvious next step. That obvious next step could lead to a tragic robotic arms race. Taking action to ban such weapons now makes more sense than waiting until it is too late.

In 2009, Robots Podcast interviewed Noel Sharkey (Professor of A.I. and Robotics at the University of Sheffield, and spokesperson of the Campaign to Stop Killer Robots) and Ronald Arkin (Regents’ Professor and Director of the Mobile Robot Laboratory at Georgia Institute of Technology), and both experts addressed the ethics of robot soldiers. As Sharkey argued, the problem of robots autonomously identifying targets lies with the Principle of Discrimination, part of the international laws of war described in the Geneva Conventions (listen to his interview here). Under this principle, soldiers must not harm civilians, non-combatants, the wounded and sick, or prisoners of war. According to Sharkey, no A.I. system can reliably discriminate between soldiers and civilians. A robot, for the moment, “can’t have a sense of ethics” necessary to make the humanistic decisions required of soldiers.

An opposing viewpoint is provided by Arkin. According to Arkin, robot soldiers carry an inherent danger, but depending on their implementation, they could provide better safety for both soldiers and non-combatants (listen to his interview here). As Arkin argues, robots are not affected by emotions such as fear or anger that can sometimes cloud soldiers’ better judgment in the field — although simulating certain emotions, such as guilt, could be used to improve a robot’s decision-making algorithms.

Despite their contrasting views, both Sharkey and Arkin agreed in 2009 that, as things stand, autonomous robots are not ready for the battlefield. The Losing Humanity report from last year echoes this viewpoint. The launch of the Campaign to Stop Killer Robots is timely, coming one month before Christof Heyns, the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, delivers his report on lethal autonomous robotics to the UN Human Rights Council.

So what can we, as roboticists, politicians, or members of the public, do in support of the campaign?

You can have your say on the matter by voting in The Engineer’s poll (see the left-hand column).

Interested individuals can support the campaign by responding to the call to ban lethal autonomous robots issued by the International Committee for Robot Arms Control (ICRAC, a leading NGO member of the Campaign to Stop Killer Robots). Interested NGOs can join the campaign by contacting the campaign coordinator.

Or, if you’d rather not explicitly support the campaign but want to find out more, check out the campaign’s website or the press release.

To keep on top of news from the campaign, follow it on Twitter, Facebook, and Flickr.

This post was prepared jointly by Matthew Ebisu and AJung Moon and first appeared on Roboethics Info Database.



AJung Moon is an HRI researcher at McGill and publicity co-chair for the ICRA 2022 conference.



