
Campaign to Stop Killer Robots: Let’s all stop them, shall we?


by AJung Moon and Matthew Ebisu
27 April 2013




Killer robots.

Looking at the two words together is enough to conjure up images of chaos and destruction. They're an image all too familiar from the science fiction of authors such as Isaac Asimov and Arthur C. Clarke, and a concept many A.I. researchers will gladly tell you they've been quizzed about at least once by friends or colleagues. But how much of a real ethical concern do killer robots pose for society?

Nobel Peace Laureate Jody Williams and Noel Sharkey from the International Committee for Robot Arms Control at the launch of the Campaign to Stop Killer Robots. Photo by Peter Asaro.

In November of last year, Human Rights Watch (HRW) and the International Human Rights Clinic (IHRC) at Harvard Law School jointly published a 50-page report on the topic of killer robots. The report, titled Losing Humanity: The Case against Killer Robots, outlines a range of legal, ethical, and other concerns about the use of fully autonomous weapons (previously covered by Mike Hamer of Robohub). Among them are unresolved and important roboethics questions such as "who is legally responsible for a robot's actions?"

Maybe this question doesn't have to be answered (not right away, at least) if we simply stop the use and development of killer robots altogether.

Earlier this week, on April 23, a new global campaign, the Campaign to Stop Killer Robots, was launched with exactly this goal. Composed of over twenty international NGOs in ten countries, the campaign aims to ban the development, production and use of future lethal robot weapons, or "killer robots", that could autonomously locate and neutralize human targets.

The campaign addresses countries such as China, Russia, Israel and the United States, which are currently moving to give combat robots greater autonomy, and argues that such weapons would pose a challenge to international human rights and humanitarian law.

Photo by Peter Asaro

Fully autonomous weapons aren't roaming around war zones today (yet), so this campaign is preemptive. But given how many precursor technologies are already out there (who doesn't know about the drone technologies used for targeted attacks today?), developing and deploying autonomous weapons looks like an obvious next step. That obvious next step could lead to a tragic robotic arms race. Taking action to ban such weapons now makes more sense than trying to ban them once it's too late.

In 2009, the Robots Podcast interviewed Noel Sharkey (Professor of A.I. and Robotics at the University of Sheffield, and spokesperson for the Campaign to Stop Killer Robots) and Ronald Arkin (Regents' Professor and Director of the Mobile Robot Laboratory at Georgia Institute of Technology), and both experts addressed the ethics of robot soldiers. As Sharkey argued, the problem of robots autonomously identifying targets lies with the Principle of Discrimination, part of the international Laws of War described in the Geneva Conventions (listen to his interview here). Under this principle, soldiers must not harm civilians, non-combatants, the sick and wounded, or prisoners of war. According to Sharkey, no A.I. system can reliably discriminate between soldiers and civilians, and a robot, for the moment, "can't have a sense of ethics" necessary to make the humanistic decisions required of soldiers.

An opposing viewpoint is provided by Arkin. According to Arkin, robot soldiers carry an inherent danger, but depending on their implementation they could provide better safety for both soldiers and non-combatants (listen to his interview here). As Arkin argues, robots are not affected by emotions such as fear or anger that can cloud a soldier's judgment in the field, although simulating certain emotions, such as guilt, could be used to improve a robot's decision-making algorithm.

Despite their contrasting views, both Sharkey and Arkin agreed in 2009 that, as things stand, autonomous robots are not ready for the battlefield. Last year's Losing Humanity report echoes this viewpoint. The launch of the Campaign to Stop Killer Robots is timely, coming one month before Christof Heyns (United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions) delivers his report on lethal autonomous robotics to the UN Human Rights Council.

So what can we, as roboticists, politicians, or members of the public, do in support of the campaign?

You can have your say on the matter by voting in The Engineer's poll.

Interested individuals can support the campaign by responding to the call to ban lethal autonomous robots issued by the International Committee for Robot Arms Control (ICRAC), a leading NGO member of the Campaign to Stop Killer Robots. Interested NGOs can join the campaign by contacting the campaign coordinator.

Or, if you'd rather just find out more without explicitly supporting them, check out the campaign's website or press release.

To keep on top of the news from the campaign, follow the campaign via Twitter, Facebook, and Flickr.

This post was prepared jointly by Matthew Ebisu and AJung Moon and first appeared on Roboethics Info Database.




AJung Moon is an HRI researcher at McGill and publicity co-chair for the ICRA 2022 conference.

