
Ethical robots: Some technical and ethical challenges

by Alan Winfield, 19 November 2013




I’ve been talking about robot ethics for several years now, but that’s mostly been about how we roboticists must be responsible and mindful of the societal impact of our creations. Two years ago I wrote – in my Very Short Introduction to Robotics – that robots cannot be ethical. Since then I’ve completely changed my mind*. I now think there is a way of making a robot that is at least minimally ethical. It’s a huge technical challenge which, in turn, raises new ethical questions. For instance: if we can build ethical robots, should we? Must we? Would we have an ethical duty to do so? After all, the alternative would be to build amoral robots. Or would building ethical robots create a new set of ethical problems? An ethical Pandora’s box.

Here are the slides of my keynote at last week’s excellent EUCog meeting: Social and Ethical Aspects of Cognitive Systems. And the talk itself is here, on YouTube.

The talk was in three parts.

Part 1: here I outline why and how roboticists must be ethical. This is essentially a recap of previous talks. I start with the societal context: the frustrating reality that even a meeting to discuss robot ethics can be misinterpreted as scientists fearing a revolt of killer robots. This kind of media reaction is just one of three linked expectation gaps, in what I characterise as a crisis of expectations. I then outline a few ethical problems in robotics – just as examples. Here I argue it’s important to link safe and ethical behaviour – something that I return to later. Then I recap the five draft principles of robotics.

Part 2: here I ask the question: what if we could make ethical robots? I outline new thinking which brings together the idea of robots with internal models, with Dennett’s Tower of Generate and Test, as a way of making robots that can predict the consequences of their own actions. I then outline a generic control architecture for robot safety, even in unpredictable environments. The important thing about this approach is that the robot can generate next possible actions, test them in its internal model, and evaluate the safety consequences of each possible action. The unsafe actions are then inhibited – and the robot controller determines which of the remaining safe actions is chosen, using its usual action-selection mechanism. Then I argue that it is surprisingly easy to extend this architecture to ethical behaviour, allowing the robot to select actions that would minimise harm to a human in its environment. This appears to represent an implementation of Asimov’s 1st and 3rd laws. I outline the significant technical challenges that would need to be overcome to make this work.
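To make the generate-test-inhibit loop more concrete, here is a minimal Python sketch of that kind of architecture. It is an illustrative reconstruction only, not the implementation described in the talk; the names (ConsequenceEngine, Outcome, toy_model, harm_threshold) are hypothetical, and the internal model is reduced to a lookup table.

```python
# Sketch of a generate-and-test safety/ethics layer: generate candidate
# actions, simulate each in an internal model, inhibit those predicted to
# cause harm, then hand the remainder to the usual action-selection mechanism.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Outcome:
    """Predicted result of simulating one candidate action."""
    action: str
    robot_harm: float   # predicted harm to the robot itself (3rd-law concern)
    human_harm: float   # predicted harm to a nearby human (1st-law concern)


class ConsequenceEngine:
    """Test each candidate action in the internal model and inhibit
    actions whose predicted consequences are unsafe or harmful."""

    def __init__(self, simulate: Callable[[str], Outcome],
                 harm_threshold: float = 0.5):
        self.simulate = simulate          # internal model: action -> predicted outcome
        self.harm_threshold = harm_threshold

    def permitted_actions(self, candidates: List[str]) -> List[str]:
        outcomes = [self.simulate(a) for a in candidates]
        # Inhibit any action predicted to harm the human or the robot.
        safe = [o for o in outcomes
                if o.human_harm < self.harm_threshold
                and o.robot_harm < self.harm_threshold]
        # Rank the remainder so actions minimising harm to the human come first.
        safe.sort(key=lambda o: o.human_harm)
        return [o.action for o in safe]


def action_selection(safe_actions: List[str]) -> str:
    """Stand-in for the controller's usual action-selection mechanism:
    it chooses only among actions the consequence engine has not inhibited."""
    return safe_actions[0] if safe_actions else "halt"


if __name__ == "__main__":
    # Toy internal model: moving ahead is safe for the robot but predicted
    # to harm the human; waiting is predicted safe for both.
    def toy_model(action: str) -> Outcome:
        predictions = {
            "move_ahead": Outcome("move_ahead", robot_harm=0.1, human_harm=0.9),
            "wait":       Outcome("wait",       robot_harm=0.0, human_harm=0.0),
        }
        return predictions[action]

    engine = ConsequenceEngine(toy_model)
    print(action_selection(engine.permitted_actions(["move_ahead", "wait"])))  # -> "wait"
```

The key design point the sketch tries to capture is the separation of concerns: the consequence engine only vetoes and ranks candidate actions; it never replaces the controller’s normal action selection.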

But, assuming such a robot could be built, how ethical would it be? I suggest that, with only a subset of Asimovian ethics, it probably wouldn’t satisfy an ethicist or moral philosopher. Nevertheless, I argue there’s a good chance that such a minimally ethical robot could help to increase its users’ trust in the robot.

Part 3: in the final part of the talk I conclude with some ethical questions. The first is: if we could build an ethical robot, are we ethically compelled to do so? Some argue that we have an ethical duty to try to build moral machines. I agree. But the counter-argument, my second ethical question, is: are there ethical hazards? Are we opening a kind of ethical Pandora’s box by building robots that might have an implicit claim to rights, or responsibilities? I don’t mean that such a robot would ask for rights, but rather that, because it has some moral agency, we might think it should be accorded rights. I conclude that we should try to build ethical robots. The benefits, I think, far outweigh any ethical hazards, which in any event can be minimised.


*It was not so much an epiphany as a slow conversion from sceptic to believer. I have my long-term collaborator Michael Fisher to thank for doggedly arguing with me that it was worth thinking deeply about how to build ethical robots.





Alan Winfield is Professor in robotics at UWE Bristol. He communicates about science on his personal blog.




