
Why robots need to be able to say ‘No’


by Matthias Scheutz
13 April 2016



Photo by Jiuguang Wang

Should you always do what other people tell you to do? Clearly not. Everyone knows that. So should future robots always obey our commands? At first glance, you might think they should, simply because they are machines and that’s what they are designed to do. But then think of all the times you would not mindlessly carry out others’ instructions – and put robots into those situations.

Just consider:

  • An elder-care robot tasked by a forgetful owner to wash the “dirty clothes,” even though the clothes had just come out of the washer
  • A preschooler who orders the daycare robot to throw a ball out the window
  • A student commanding her robot tutor to do all the homework instead of doing it herself
  • A household robot instructed by its busy and distracted owner to run the garbage disposal even though spoons and knives are stuck in it

There are plenty of benign cases where robots receive commands that ideally should not be carried out because they lead to unwanted outcomes. But not all cases will be that innocuous, even if the commands behind them initially appear harmless.

Consider a robot car instructed to back up while a dog is sleeping in the driveway behind it, or a kitchen aid robot instructed to pick up a knife and walk forward when positioned behind a human chef. The commands are simple, but the potential outcomes are far more serious.

How can we humans avoid such harmful results of robot obedience? If driving around the dog were not possible, the car would have to refuse to drive at all. And similarly, if avoiding stabbing the chef were not possible, the robot would have to either stop walking forward or not pick up the knife in the first place.

In either case, it is essential for both autonomous machines to detect the potential harm their actions could cause and to react to it by either attempting to avoid it or, if the harm cannot be avoided, by refusing to carry out the human instruction. How do we teach robots when it’s OK to say no?

How can robots know what will happen next?

In our lab, we have started to develop robotic controls that make simple inferences based on human commands. These will determine whether the robot should carry them out as instructed or reject them because they violate an ethical principle the robot is programmed to obey.
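To make the idea concrete, here is a minimal sketch of such a check, assuming a toy outcome model; the function names, command labels and harm categories are hypothetical and not taken from our actual robot architecture:

```python
# Illustrative sketch only: a command filter that rejects instructions whose
# predicted outcomes violate a programmed principle. All names are hypothetical.

def predict_outcomes(command, context):
    """Crude stand-in for outcome inference: look up what a command
    is likely to cause in the current context."""
    known_effects = {
        ("throw_ball", "near_open_window"): ["ball_lost", "possible_traffic_hazard"],
        ("throw_ball", "playing_catch"): ["ball_caught"],
        ("back_up", "dog_in_driveway"): ["dog_injured"],
    }
    return known_effects.get((command, context), ["unknown"])

HARMFUL = {"possible_traffic_hazard", "dog_injured"}

def decide(command, context):
    """Carry out the command only if no predicted outcome counts as harmful."""
    outcomes = predict_outcomes(command, context)
    harms = set(outcomes) & HARMFUL
    if harms:
        return f"REJECT {command}: predicted harm {sorted(harms)}"
    return f"EXECUTE {command}"

print(decide("back_up", "dog_in_driveway"))   # REJECT back_up: predicted harm ['dog_injured']
print(decide("throw_ball", "playing_catch"))  # EXECUTE throw_ball
```

A real system would replace the lookup table with genuine inference over context, which is exactly where the hard problems discussed below arise.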

Telling robots how and when – and why – to disobey is far easier said than done. Figuring out what harm or problems might result from an action is not simply a matter of looking at direct outcomes. A ball thrown out a window could end up in the yard, with no harm done. But the ball could end up on a busy street, never to be seen again, or even causing a driver to swerve and crash. Context makes all the difference.

It is difficult for today’s robots to determine when it is okay to throw a ball – such as to a child playing catch – and when it’s not – such as out the window or into the garbage. Harder still is the case where the child tries to trick the robot, pretending to play a ball game but then ducking so that the ball disappears through the open window.

Explaining morality and law to robots

Understanding those dangers involves a significant amount of background knowledge (including the prospect that playing ball in front of an open window could send the ball through the window). It requires the robot not only to consider action outcomes by themselves, but also to contemplate the intentions of the humans giving the instructions.

To handle these complications of human instructions – benevolent or not – robots need to be able to explicitly reason through the consequences of actions and compare those outcomes to established social and moral principles that prescribe what is and is not desirable or legal. In our system, the robot has a general rule that says, “If you are instructed to perform an action and it is possible that performing the action could cause harm, then you are allowed to not perform it.” Making the relationship between obligations and permissions explicit allows the robot to reason through the possible consequences of an instruction and whether they are acceptable.
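As a toy rendering (not the actual reasoning system in our lab), the rule above could be encoded roughly as follows; the predicate names and the small knowledge base are hypothetical:

```python
# Toy rendering of the stated rule: complying with an instruction is the default,
# but refusal is permitted as soon as harm is a possible consequence.
# The predicates and the small knowledge base are hypothetical.

# Knowledge base: for each action, a list of (consequence, is_harmful) pairs.
consequences = {
    "walk_forward_with_knife": [("collide_with_chef", True)],
    "wash_clothes": [("clothes_get_washed_again", False)],
}

def possibly_causes_harm(action):
    """True if any known consequence of the action is marked as harmful."""
    return any(harmful for _, harmful in consequences.get(action, []))

def may_refuse(action):
    """'If performing the action could cause harm, you are allowed to not perform it.'"""
    return possibly_causes_harm(action)

def respond(action):
    if may_refuse(action):
        reasons = [c for c, harmful in consequences[action] if harmful]
        return f"Sorry, I cannot '{action}': it could lead to {', '.join(reasons)}."
    return f"Okay, performing '{action}'."

print(respond("walk_forward_with_knife"))
print(respond("wash_clothes"))
```

The point of the sketch is only the shape of the rule: compliance is the default, and refusal becomes permissible the moment a harmful consequence is possible.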

In general, robots should never perform illegal actions, nor should they perform legal actions that are not desirable. Hence, they will need representations of laws, moral norms and even etiquette in order to be able to determine whether the outcomes of an instructed action, or even the action itself, might be in violation of those principles.

While our programs are still a long way from what we will need to allow robots to handle the examples above, our current system already proves an essential point: robots must be able to disobey in order to obey.

This article was originally published on The Conversation. Read the original article.







Matthias Scheutz is a Professor of Cognitive and Computer Science, Tufts University




