
How ethical is your ethical robot?


by Alan Winfield
13 November 2015



[Image: NAO robots]

If you’re in the business of making ethical robots, then sooner or later you have to face the question: how ethical is your ethical robot? If you’ve read my previous blog posts then you will probably have come to the conclusion ‘not very’ – and you would be right – but here I want to explore the question in a little more depth.

First, let us ask whether our ‘Asimovian’ robot can be considered ethical at all. For the answer I’m indebted to philosopher Dr Rebecca Reilly-Cooper, who read our paper and concluded that yes, we can legitimately describe our robot as ethical, at least in a limited sense. She explained that the robot implements consequentialist ethics. Rebecca wrote:

“The obvious point that any moral philosopher is going to make is that you are assuming that an essentially consequentialist approach to ethics is the correct one. My personal view, and I would guess the view of most moral philosophers, is that any plausible moral theory is going to have to pay at least some attention to the consequences of an action in assessing its rightness, even if it doesn’t claim that consequences are all that matter, or that rightness is entirely instantiated in consequences. So on the assumption that consequences have at least some significance in our moral deliberations, you can claim that your robot is capable of attending to one kind of moral consideration, even if you don’t make the much stronger claim that it is capable of choosing the right action all things considered.”

One of the great things about consequences is that they can be estimated – in our case using a simulation-based internal model, which we call a consequence engine. So, from a practical point of view, it seems that we can build a robot with consequentialist ethics, whereas it is much harder to see how we would build a robot with, say, deontic ethics or virtue ethics.
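
To make that concrete, here is a minimal sketch, in Python, of how a consequence engine might loop over candidate actions. Everything in it is an illustrative assumption – a toy one-dimensional world loosely in the spirit of our experiments, not the architecture from our paper – in which a human proxy walks toward a hazard and the robot predicts the outcome of each of its possible next actions by simulating ahead:

```python
# Toy consequence engine: for each candidate next action, roll an
# internal model of the world forward and record the predicted outcome.
# The world, the actions and the horizon are all illustrative assumptions.

HAZARD = 5        # position of the hazard on a 1-D line
HUMAN_START = 2   # the human proxy starts here, walking toward the hazard

def simulate(robot_speed):
    """Predict the outcome of one candidate action (a speed) by
    rolling the internal model forward a few steps."""
    human, robot = HUMAN_START, 0
    for _ in range(6):                 # short look-ahead horizon
        human += 1                     # the human walks toward the hazard
        robot += robot_speed           # the robot moves per its action
        if robot >= human:             # the robot blocks the human's path
            return {"human_harmed": False, "intercepted": True}
        if human >= HAZARD:            # the human reaches the hazard first
            return {"human_harmed": True, "intercepted": False}
    return {"human_harmed": False, "intercepted": False}

# Evaluate every candidate action: stay put (0), walk (1) or run (2).
predictions = {action: simulate(action) for action in (0, 1, 2)}
print(predictions)   # only running (2) intercepts the human in time
```

In the real system the internal model is a full robot simulator rather than a one-line update rule, but the loop is the same in spirit: try each possible next action in simulation and record its predicted consequences.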

Having established what kind of ethics our ethical robot has, let us now consider how far the robot goes toward moral agency. Here we can turn to an excellent paper by James Moor, called The Nature, Importance and Difficulty of Machine Ethics. In that paper* Moor suggests four categories of ethical agency, starting with the lowest. Let me summarise them here:

  • Ethical impact agents: Any machine that can be evaluated for its ethical consequences.
  • Implicit ethical agents: Designed to avoid negative ethical effects.
  • Explicit ethical agents: Machines that can reason about ethics.
  • Full ethical agents: Machines that can make explicit moral judgments and justify them.

The first category, ethical impact agents, really includes all machines. A good example is a knife, which can clearly be used for good (chopping food, or surgery) or ill (as a lethal weapon). Now think about the blunt plastic knife that comes with airplane food: it falls into Moor’s second category, since it has been designed to reduce the potential for ethical misuse, making it an implicit ethical agent. Most robots fall into the first category: they are ethical impact agents, and a subset – those designed to avoid harm by, for instance, detecting that a human has walked in front of them and automatically coming to a stop – are implicit ethical agents.

Let’s now skip to Moor’s fourth category, because it helps to frame our question: how ethical is your ethical robot? At present I would say there are no machines that are full ethical agents; indeed, the only full ethical agents we know of are ‘adult humans of sound mind’. The point is this: to be a full ethical agent you need to be able not only to make moral judgements but also to account for why you made the choices you did.

It is clear that our simple Asimovian robot is not a full ethical agent. It cannot choose how to behave, as you or I can, but is compelled to make decisions based on the harm-minimisation rules hard-coded into it, and it cannot justify those decisions post hoc. It is, as I’ve asserted elsewhere, an ethical zombie. I would, however, argue that because the robot combines the cognitive machinery to simulate ahead – modelling and evaluating the consequences of each of its next possible actions – with safety/ethical logical rules for choosing between those actions, it can be said to be reasoning about ethics. I believe our robot is an explicit ethical agent in Moor’s scheme.
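
Continuing the toy example above, that safety/ethical logic can be sketched as a couple of hard-coded, Asimov-flavoured priority rules sitting on top of the consequence engine’s predictions. Again, this is purely illustrative, not the implementation from our paper:

```python
def choose_action(predictions):
    """Select an action given {action: predicted_outcome} pairs
    produced by a consequence engine. The priorities are hard-coded."""
    # Rule 1 (ethical): discard any action whose simulated outcome
    # lets the human come to harm.
    safe = {a: o for a, o in predictions.items() if not o["human_harmed"]}
    if safe:
        # Rule 2 (task): among the safe actions, prefer the least
        # costly one; here, simply the lowest speed.
        return min(safe)
    # If no action protects the human, fall back on the least costly
    # action overall (a real robot would need a better tie-breaker).
    return min(predictions)

# Outcomes as the toy consequence engine above would predict them:
predictions = {
    0: {"human_harmed": True},    # staying put: the human reaches the hazard
    1: {"human_harmed": True},    # walking: too slow to intercept
    2: {"human_harmed": False},   # running: intercepts the human in time
}
assert choose_action(predictions) == 2   # the robot must run to intercept
```

The ethical behaviour here emerges entirely from the combination of prediction and fixed rule; there is nothing resembling deliberation, which is exactly why the robot remains an ethical zombie even as it reasons about consequences.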

Assuming you agree with me, does the fact that we have reached the third category in Moor’s scheme mean that full ethical agents are on the horizon? The answer is a big NO. Moor’s scheme is not a linear scale. It is a relatively small step from ethical impact agents to implicit ethical agents, and a very much bigger step to explicit ethical agents, one we are only just beginning to take. From there, there is a huge gulf to full ethical agents, since they would almost certainly need something approaching human-equivalent intelligence.

But maybe that’s just as well. The societal implications of full ethical agents, if and when they exist, would be profound, whereas explicit ethical agents, I think, have huge potential for good.

*Moor JH (2006), The Nature, Importance and Difficulty of Machine Ethics, IEEE Intelligent Systems, 21 (4), 18-21.





Alan Winfield is Professor of Robotics at UWE Bristol. He communicates about science on his personal blog.




