Robohub.org
 

How ethical is your ethical robot?


by Alan Winfield
13 November 2015




If you’re in the business of making ethical robots, then sooner or later you have to face the question: how ethical is your ethical robot? If you’ve read my previous blog posts then you will probably have come to the conclusion ‘not very’ – and you would be right – but here I want to explore the question in a little more depth.

First let us consider whether our ‘Asimovian’ robot can be considered ethical at all. For the answer I’m indebted to philosopher Dr Rebecca Reilly-Cooper, who read our paper and concluded that yes, we can legitimately describe our robot as ethical, at least in a limited sense: the robot implements consequentialist ethics. Rebecca wrote:

“The obvious point that any moral philosopher is going to make is that you are assuming that an essentially consequentialist approach to ethics is the correct one. My personal view, and I would guess the view of most moral philosophers, is that any plausible moral theory is going to have to pay at least some attention to the consequences of an action in assessing its rightness, even if it doesn’t claim that consequences are all that matter, or that rightness is entirely instantiated in consequences. So on the assumption that consequences have at least some significance in our moral deliberations, you can claim that your robot is capable of attending to one kind of moral consideration, even if you don’t make the much stronger claim that it is capable of choosing the right action all things considered.”

One of the great things about consequences is that they can be estimated – in our case using a simulation-based internal model which we call a consequence engine. So, from a practical point of view, it seems that we can build a robot with consequentialist ethics, whereas it is much harder to see how one might build a robot with, say, deontological ethics or virtue ethics.

Having established what kind of ethics our ethical robot has, let us now consider how far the robot goes toward moral agency. Here we can turn to an excellent paper by James Moor, The Nature, Importance and Difficulty of Machine Ethics. In that paper* Moor suggests four categories of ethical agency, starting with the lowest. Let me summarise them here:

  • Ethical impact agents: Any machine that can be evaluated for its ethical consequences.
  • Implicit ethical agents: Designed to avoid negative ethical effects.
  • Explicit ethical agents: Machines that can reason about ethics.
  • Full ethical agents: Machines that can make explicit moral judgments and justify them.

The first category, ethical impact agents, really includes all machines. A good example is a knife, which can clearly be used for good (chopping food, or surgery) or ill (as a lethal weapon). Now think about the blunt plastic knife that comes with airline food: it falls into Moor’s second category, since it has been designed to reduce the potential for misuse – it is an implicit ethical agent. Most robots fall into the first category: they are ethical impact agents, and a subset – those designed to avoid harm by, for instance, detecting when a human steps in front of them and automatically stopping – are implicit ethical agents.

Let’s now skip to Moor’s fourth category, because it helps to frame our question: how ethical is your ethical robot? At present I would say there are no machines that are full ethical agents; the only full ethical agents we know of are adult humans of sound mind. The point is this: to be a full ethical agent you need to be able not only to make moral judgements but also to account for why you made them.

It is clear that our simple Asimovian robot is not a full ethical agent. It cannot choose how to behave, as you or I can, but is compelled to make decisions by the harm-minimisation rules hard-coded into it, and it cannot justify those decisions after the fact. It is, as I’ve asserted elsewhere, an ethical zombie. I would, however, argue that the robot can be said to be reasoning about ethics: its cognitive machinery simulates ahead, modelling and evaluating the consequences of each of its possible next actions, and then applies its safety/ethical logic rules to choose between those actions. I believe our robot is an explicit ethical agent in Moor’s scheme.
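The decision cycle just described – simulate each candidate action, evaluate its predicted consequences, then apply safety/ethical rules to choose – can be sketched in a few lines of Python. This is purely my toy illustration: the function names, the scenario (a human heading toward a hazard) and the harm scores are all invented for this example, not the actual consequence engine.

```python
def simulate(action, world):
    """Toy internal model: predict the outcome of one candidate action.

    Here 'world' records the direction a human is walking and the
    location of a hazard (e.g. a hole in the ground). A real consequence
    engine would run a physics-based simulation of the trajectories.
    """
    robot_pos = action  # each action is simply "move to this position"
    human_harmed = (world["human_heading"] == world["hazard"]
                    and robot_pos != world["hazard"])  # nobody blocks the human
    robot_harmed = robot_pos == world["hazard"]        # robot enters the hazard
    return {"human_harmed": human_harmed, "robot_harmed": robot_harmed}

def evaluate(outcome):
    """Safety/ethical logic: lower scores are preferred.

    Harm to the human dominates harm to the robot, echoing the ordering
    of Asimov's first and third laws.
    """
    return 10 * outcome["human_harmed"] + 1 * outcome["robot_harmed"]

def choose_action(actions, world):
    """Simulate every candidate action and pick the least harmful one."""
    return min(actions, key=lambda a: evaluate(simulate(a, world)))

world = {"human_heading": "east", "hazard": "east"}
actions = ["north", "east", "stay"]
# Moving east intercepts the human before the hazard, at some cost to
# the robot itself, so it scores lowest.
print(choose_action(actions, world))  # → east
```

The interesting behaviour is in the ordering of the harm weights: because human harm outweighs robot harm, the selector will accept damage to the robot to protect the human, which is exactly the intercept behaviour our Asimovian robot exhibits.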

Assuming you agree with me, does the fact that we have reached the third category in Moor’s scheme mean that full ethical agents are on the horizon? The answer is a big NO. The scale of Moor’s scheme is not linear: it is a relatively small step from ethical impact agents to implicit ethical agents, a very much bigger step to explicit ethical agents, which we are only just beginning to take, and a huge gulf from there to full ethical agents, since they would almost certainly need something approaching human-equivalent intelligence.

But maybe that’s just as well. The societal implications of full ethical agents, if and when they exist, would be profound. Explicit ethical agents, by contrast, have huge potential for good.

*Moor JH (2006), The Nature, Importance and Difficulty of Machine Ethics, IEEE Intelligent Systems, 21 (4), 18-21.





Alan Winfield is Professor in robotics at UWE Bristol. He communicates about science on his personal blog.

