Losing Humanity: The case against killer robots


by Mike Hamer
20 November 2012



Released yesterday by Human Rights Watch (HRW), the report Losing Humanity: The Case against Killer Robots (herein: the report), along with its associated press release and video (below), warns of increasing governmental interest in autonomous “killer” robots, and of the dangers that such technology would pose if ever allowed onto the battlefield.

As mentioned on Robohub before (Rise of the robots: The robotization of today’s military), today’s military is becoming increasingly robotic. In its current iteration, all military strikes carried out by robots are authorized by a human operator, meaning that military decisions – decisions to end human lives – are supported by and traceable through a human chain of accountability.

However, as documented by HRW, governments are showing increasing interest in automating this decision process, thus removing the accountable human from the chain. This line of thought raises the question: who is liable if an autonomous robot kills someone? HRW argues that autonomous robots “would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians”. We’ve discussed some of these ethical issues with Noel Sharkey (also featured in the above video) and Ronald Arkin before in our two-part series on robot ethics (part 1, part 2).

Precursor technologies (discussed in detail on page 4 of the report) are already being developed and, in some cases, deployed on the battlefield. These technologies can identify and target a threat, but wait for human permission before engaging. They include South Korea’s sentry turrets (with a similar installation being carried out by Israel); the US military’s missile defense systems; and various autonomous aircraft currently in development.

The US Government has confirmed its interest in autonomous military robots, through the release of military development roadmaps. The Unmanned Systems Integrated Roadmap FY2011-2036 released by the US Department of Defense states that the department “envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure”.

Similar visions are expressed by the US Army, who state a “current goal of supervised autonomy, but with an ultimate goal of full autonomy”; the US Navy, who state that “while admittedly futuristic in vision, one can conceive of scenarios where UUVs sense, track, identify, target, and destroy an enemy – all autonomously”; and the US Air Force, who foresee that:

Increasingly humans will no longer be “in the loop” but rather “on the loop” – monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.

These excerpts, all taken directly from unclassified military development roadmaps, show a clear governmental interest in reducing dependence on humans, with the final goal of autonomous decision making and execution. Thankfully, these roadmaps do acknowledge that the legal, ethical and political ramifications of fully autonomous “lethal systems” have “not yet been fully addressed and resolved”.

The report addresses these legal and ethical questions and makes the following recommendations:

To All States:

  • Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
  • Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
  • Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.

To Roboticists and Others Involved in the Development of Robotic Weapons:

  • Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.

In closing, I would like to leave you with an extract from page 8 of the report:

In training their troops to kill enemy forces, armed forces often attempt “to produce something close to a ‘robot psychology,’ in which what would otherwise seem horrifying acts can be carried out coldly.” This desensitizing process may be necessary to help soldiers carry out combat operations and cope with the horrors of war, yet it illustrates that robots are held up as the ultimate killing machines.

I personally believe we are at the beginning of the era of robotics, during which robots will become increasingly present in our everyday lives. I only hope that this era, and the robots we develop, will be remembered for more than simply being the ultimate killing machines.

 


