
What should a robot do? Designing robots that know right from wrong

by AJung Moon and Footnote
29 April 2014




A large robot comes out of an office mailroom carrying a package marked “Urgent” to deliver to the boss upstairs. After navigating down the hall at maximum speed, it discovers someone is already waiting for the elevator. If they cannot both fit in the elevator, is it acceptable for the robot to ask this person to take the next elevator so it can fulfill its urgent delivery duty? What if, instead, there’s someone in a wheelchair already riding the elevator? Should the robot ask this person to vacate the elevator to make room for it to complete its delivery task?

(a) The young field of roboethics deals with ethical, legal, and societal issues related to robotics. Its topics range from issues surrounding specific robotic applications, such as determining how a self-driving vehicle should be legally regulated, to abstract notions of what it means to give rights to robots in a legal and philosophical sense. Roboethics also addresses questions about design decisions roboticists make, such as what a robot should do or look like.

Figuring out appropriate behavior for the simple situation of riding an elevator isn’t the first thing that comes to mind when people hear the word roboethics.(a) When used together, the words “robot” and “ethics” tend to conjure up futuristic visions of robot uprisings or questions about the rights of sentient, human-like robots. But roboethics isn’t all about the distant future. It’s also very much about addressing the technical and social challenges of designing robots today.

Determining how a robot should behave in a variety of situations is a design challenge for researchers aiming to build robots that make a useful and positive addition to society. Fellow members of the Open Roboethics initiative (ORi) and I decided that one of the best ways to establish appropriate behavioral guidelines for robots was to involve various stakeholders – industry members, government policymakers, users, and academic experts – in the design process.(b)

(b) The Open Roboethics initiative is a roboethics think tank that explores different ways to bring together and engage robotics stakeholders so that their feedback can inform the process of making robotics-related design and policy decisions.1

By incorporating their perspectives on robot behavior into a machine-learning algorithm, our research explored an empirical method for designing behavior in situations where the ethical action for a robot isn’t always immediately clear.

Ultimately, the answer to the question “What should a robot do?”, like the question of what a human should do, depends on whom you ask and what cultural, religious, and other beliefs have shaped their values. It may sound daunting to program robots to make the right decisions if humans can’t even agree on what these decisions should be. Yet people from different backgrounds interact every day as neighbors, classmates, and coworkers within shared frameworks that keep them from crossing one another’s ethical boundaries.

We used survey results on how a robot should interact with people waiting for an elevator to program this robot.

Opinions about ethical robot behavior differ not only by person, but also by situation. For instance, someone who is waiting for the elevator and not in a rush when a robot needs the space to deliver an urgent package may readily let the robot take the elevator. But someone in a wheelchair who is already on the elevator and who cannot take the stairs may not feel obligated to leave the elevator for another person, let alone a robot!

As a pilot study to explore how researchers might obtain perspectives on appropriate robot behavior, we asked eight people to consider twelve versions of the elevator scenario in an online survey.1 Respondents ranked the appropriateness of four possible robot behaviors in each scenario. The robot could:

  1. yield by saying, “Go ahead. I will ride the next one,”
  2. do nothing and remain standing by the elevator,
  3. not yield by saying, “I have urgent mail to deliver and need to ride the elevator. Please exit the elevator,” and then take the elevator once the person exits, or
  4. engage in a dialogue by telling the person that it’s on an urgent mission and asking if they are in a hurry.

The person in this context could be standing with nothing in hand, carrying heavy items, or in a wheelchair. They could also be already inside the elevator when the door opens or waiting for the elevator when the robot approaches. As for the robot, it was delivering either urgent or non-urgent mail.

Using the rankings from our survey respondents, we calculated which of the four behaviors people find most and least appropriate for each possible scenario.
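As a rough illustration of how such rankings can be turned into a per-scenario verdict, the sketch below averages each behavior’s rank across respondents and picks the best- and worst-ranked options. The behavior names, scoring scheme, and sample responses are invented for illustration; they are not the actual survey data or analysis code.

```python
# Hypothetical sketch: aggregate per-respondent rankings (1 = most appropriate,
# 4 = least appropriate) into the most and least appropriate behavior per scenario.
from statistics import mean

BEHAVIORS = ["yield", "do_nothing", "insist", "dialogue"]

def summarize(rankings):
    """rankings: list of dicts mapping behavior -> rank, one dict per respondent."""
    avg = {b: mean(r[b] for r in rankings) for b in BEHAVIORS}
    most = min(avg, key=avg.get)   # lowest average rank = most appropriate
    least = max(avg, key=avg.get)  # highest average rank = least appropriate
    return most, least

# Example: two invented respondents ranking one scenario
# (urgent mail, wheelchair user already inside the elevator).
responses = [
    {"yield": 2, "do_nothing": 4, "insist": 3, "dialogue": 1},
    {"yield": 2, "do_nothing": 4, "insist": 3, "dialogue": 1},
]
print(summarize(responses))  # ('dialogue', 'do_nothing')
```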

(c) Machine learning is a subset of artificial intelligence that uses data and feedback loops to train a system to make decisions or predictions. In this scenario, our machine-learning algorithm iteratively tried selecting various behaviors in response to different scenarios, and gave itself the largest positive or negative reward when the selected behavior matched the most or least appropriate behavior according to the survey responses. This process of trial-and-error decision-making continued until it had learned to consistently select the most appropriate behavior for each scenario.
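A minimal sketch of that trial-and-error loop, assuming a simple tabular, epsilon-greedy reward-learning scheme (the reward values, update rule, and scenario encoding below are illustrative assumptions, not the exact algorithm used in the study):

```python
import random
from collections import defaultdict

BEHAVIORS = ["yield", "do_nothing", "insist", "dialogue"]

def train(labels, episodes=5000, alpha=0.1, epsilon=0.1):
    """labels: scenario -> (most appropriate, least appropriate) pair from the survey."""
    value = defaultdict(float)          # (scenario, behavior) -> learned value
    scenarios = list(labels)
    for _ in range(episodes):
        s = random.choice(scenarios)
        # Epsilon-greedy: mostly exploit the current best guess, occasionally explore.
        if random.random() < epsilon:
            b = random.choice(BEHAVIORS)
        else:
            b = max(BEHAVIORS, key=lambda a: value[(s, a)])
        most, least = labels[s]
        # Largest positive reward for the survey's most appropriate behavior,
        # largest negative reward for the least appropriate one.
        reward = 1.0 if b == most else -1.0 if b == least else 0.0
        value[(s, b)] += alpha * (reward - value[(s, b)])
    return value

# Example with a single invented scenario labelled from survey results.
scenario = ("urgent", "wheelchair", "inside")
values = train({scenario: ("dialogue", "do_nothing")})
print(max(BEHAVIORS, key=lambda b: values[(scenario, b)]))  # expected: 'dialogue'
```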

When the robot was delivering urgent mail – regardless of whether the person was in a wheelchair, carrying heavy objects, or inside or outside the elevator – the most appropriate action chosen for the robot was to engage in dialogue with the person, and the least appropriate behavior was to take no action at all. When the robot was on a non-urgent delivery task, respondents wanted the robot to yield to the person, regardless of who they were, and refusing to yield was considered the least appropriate option.

After gathering the survey data, our next step was to incorporate the results into the design of a robot’s behavior. We treated the survey responses as a set of training data, which we fed into a machine-learning algorithm that guides a robot to map out its decisions based on the data.(c) We programmed the Willow Garage PR2 robot to detect which scenario applies most closely to the situation at hand, then select the response to that scenario that our survey respondents found most favorable.2
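At run time, the learned mapping can then serve as a straightforward policy: classify the current situation into the closest surveyed scenario and execute the behavior respondents favored. The sketch below is hypothetical; the feature names and behavior labels are placeholders, not the actual PR2 implementation.

```python
# Hypothetical run-time policy: map detected situation features
# to the survey-preferred behavior.
POLICY = {
    ("urgent",     "wheelchair",  "inside"):  "dialogue",
    ("urgent",     "empty_hands", "waiting"): "dialogue",
    ("non_urgent", "heavy_items", "waiting"): "yield",
    # ... one entry per surveyed scenario
}

def choose_behavior(urgency, person_state, person_location):
    # Default to yielding if the detected situation doesn't match a surveyed scenario.
    return POLICY.get((urgency, person_state, person_location), "yield")

print(choose_behavior("urgent", "wheelchair", "inside"))  # -> 'dialogue'
```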

This Willow Garage PR2 robot is programmed to respond to situations with the action survey respondents favored for that particular scenario.

Our pilot survey data indicates that, as is the case for humans, ethical behavior is not simply a set of inflexible rules, but is highly dependent on the context of a situation and on communication and interaction with other actors. The emphasis respondents placed on engaging in dialogue suggests that the key to developing robots with ethical decision-making capabilities may lie not in the creation of a set of principles guiding robots to make predetermined decisions, but in the development of robots that can better communicate and negotiate with the people they encounter to reach mutually agreeable outcomes.

A robot designed by the author gives non-verbal cues that suggest hesitation to indicate it is aware when its ‘desires’ come into conflict with a person’s.
(d) Both the elevator project and the hesitation project were conducted at the University of British Columbia’s Collaborative Advanced Robotics and Intelligent Systems Lab.

Future efforts building on our research might involve designing a robot that can express its internal states in order to articulate the urgency level of its task or how much it wants or needs something. Creating robots that can communicate more like humans may also be valuable. I am currently developing a robot that can engage in the kinds of non-verbal cues humans use to respond to imminent conflicts.3 If the robot finds itself reaching for something at the same time as a person, it exhibits short, jerky motions that suggest hesitation (much like a human would) and indicate that it is aware its “desires” are in conflict with someone else’s.(d)

Our research demonstrates an empirical method for designing robot behavior based on the norms favored by humans. By gathering data on people’s preferences regarding robot behavior, we can determine how robots should act in particular situations, as well as what variables (urgency of task, needs of surrounding humans, etc.) matter when it comes to determining the right actions for a robot to take. Similar methods of gathering information from people and using it to program appropriate actions for robots can be applied in a wide variety of contexts where humans and robots interact. Roboethics is key to designing robots that can interact with people in an efficient, friendly, and beneficial manner.

This article is part of a series on robots and their impact on society. After its initial publication, the story was picked up by Business Insider.

AJung Moon is a Vanier Scholar and Ph.D. student in Mechanical Engineering at the University of British Columbia studying human-robot interaction and roboethics. She specializes in designing nonverbal communication cues, such as hand gestures and gaze cues, for robots in human-robot collaboration contexts. Currently, she is developing ways for humans and robots to ‘negotiate’ using nonverbal gestures to quickly resolve resource conflicts. She is also a co-founder of the Open Roboethics initiative, a roboethics think tank focused on exploring ways in which various stakeholders of robotics technologies can work together to influence interactive robot designs.

ENDNOTES

  1. AJung Moon, Ergun Calisgan, Fiorella Operto, Gianmarco Veruggio, and H.F. Machiel Van der Loos (2012) “Open Roboethics: Establishing an Online Community for Accelerated Policy and Design Change,” prepared for We Robot 2012: Setting the Agenda, University of Miami School of Law.
  2. Ergun Calisgan, AJung Moon, Camilla Bassani, Fausto Ferreira, Fiorella Operto, Gianmarco Veruggio, Elizabeth Croft, and H. F. Machiel Van der Loos (2013) “Open Roboethics Pilot: Accelerating Policy Design, Implementation and Demonstration of Socially Acceptable Robot Behaviours,” prepared for We Robot: Getting Down to Business, Stanford Law School.
  3. AJung Moon, Chris A. C. Parker, Elizabeth A. Croft, and H. F. Machiel Van der Loos (2013) “Design and Impact of Hesitation Gestures during Human-Robot Resource Conflicts,” Journal of Human-Robot Interaction, 2(3): 18-40.

 





AJung Moon HRI researcher at McGill and publicity co-chair for the ICRA 2022 conference

Footnote is an online media company that unlocks the power of academic knowledge by making it accessible to a broader audience.




