A large robot comes out of an office mailroom carrying a package marked “Urgent” to deliver to the boss upstairs. After navigating down the hall at maximum speed, it discovers someone is already waiting for the elevator. If they cannot both fit in the elevator, is it acceptable for the robot to ask this person to take the next elevator so it can fulfill its urgent delivery duty? What if, instead, there’s someone in a wheelchair already riding the elevator? Should the robot ask this person to vacate the elevator to make room for it to complete its delivery task?
Figuring out appropriate behavior for the simple situation of riding an elevator isn’t the first thing that comes to mind when people hear the word roboethics.(a) When used together, the words “robot” and “ethics” tend to conjure up futuristic visions of robot uprisings or questions about the rights of sentient, human-like robots. But roboethics isn’t all about the distant future. It’s also very much about addressing the technical and social challenges of designing robots today.
Determining how a robot should act and behave in a variety of situations is a design challenge for researchers aiming to build robots that make a useful and positive addition to society. Fellow members of the Open Roboethics initiative (ORi) and I decided that one of the best ways to establish appropriate behavioral guidelines for robots was to involve various stakeholders – industry members, government policymakers, users, and academic experts – in the design process.(b)
By incorporating their perspectives on robot behavior into a machine-learning algorithm, our research explored an empirical method for designing behavior in situations where the ethical action for a robot isn’t always immediately clear.
Ultimately, the answer to the question “What should a robot do?”, like the question of what a human should do, depends on whom you ask and what cultural, religious, and other beliefs have shaped their values. It may sound daunting to program robots to make the right decisions if humans can’t even agree on what these decisions should be. Yet people from different backgrounds interact every day as neighbors, classmates, and coworkers within shared frameworks that keep them from crossing one another’s ethical boundaries.
Opinions about ethical robot behavior differ not only by person, but also by situation. For instance, someone who is waiting for the elevator and not in a rush when a robot needs the space to deliver an urgent package may readily let the robot take the elevator. But someone in a wheelchair who is already on the elevator and who cannot take the stairs may not feel obligated to leave the elevator for another person, let alone a robot!
As a pilot study to explore how researchers might obtain perspectives on appropriate robot behavior, we asked eight people to consider twelve versions of the elevator scenario in an online survey.1 Respondents were asked to rank the appropriateness of four possible robot behaviors in each scenario: the robot could yield the elevator to the person, refuse to yield, engage in dialogue with the person, or take no action at all.
The person in this context could be standing with nothing in hand, carrying heavy items, or in a wheelchair. They could also be already inside the elevator when the door opens or waiting for the elevator when the robot approaches. As for the robot, it was delivering either urgent or non-urgent mail.
Using the rankings from our survey respondents, we calculated which of the four behaviors people find most and least appropriate for each possible scenario.
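To give a concrete flavor of this step, here is a minimal sketch (not our exact pipeline): it assumes each respondent ranked the four behaviors from 1 (most appropriate) to 4 (least appropriate) for every scenario, averages the ranks, and reports the best- and worst-rated behavior per scenario. The behavior labels and data layout are illustrative placeholders rather than the actual survey dataset.

```python
from collections import defaultdict
from statistics import mean

# Illustrative behavior labels (placeholders, not the survey's exact wording).
BEHAVIORS = ["yield", "refuse_to_yield", "engage_in_dialogue", "take_no_action"]

def summarize(responses):
    """Average the ranks per (scenario, behavior) and report which behavior
    respondents found most and least appropriate for each scenario.

    `responses` is an iterable of (scenario, behavior, rank) triples, where a
    scenario is a tuple such as ("urgent", "wheelchair", "inside_elevator")
    and rank runs from 1 (most appropriate) to 4 (least appropriate).
    """
    ranks = defaultdict(list)
    for scenario, behavior, rank in responses:
        ranks[(scenario, behavior)].append(rank)

    summary = {}
    for scenario in {s for s, _, _ in responses}:
        avg = {b: mean(ranks[(scenario, b)]) for b in BEHAVIORS if ranks[(scenario, b)]}
        summary[scenario] = (
            min(avg, key=avg.get),  # lowest mean rank  = most appropriate
            max(avg, key=avg.get),  # highest mean rank = least appropriate
        )
    return summary
```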
When the robot was delivering urgent mail – regardless of whether the person was in a wheelchair, carrying heavy objects, or inside or outside the elevator – the most appropriate action chosen for the robot was to engage in dialogue with the person, and the least appropriate behavior was to take no action at all. When the robot was on a non-urgent delivery task, respondents wanted the robot to yield to the person, regardless of who they were, and refusing to yield was considered the least appropriate option.
After gathering the survey data, our next step was to incorporate the results into the design of a robot’s behavior. We treated the survey responses as a set of training data and fed them into a machine-learning algorithm that maps the situations a robot encounters to decisions.(c) We programmed the Willow Garage PR2 robot to detect which scenario applies most closely to the situation at hand, then select the response to that scenario that our survey respondents found most favorable.2
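As a rough sketch of that mapping (not the exact setup used on the PR2), an off-the-shelf decision tree can learn to associate the three scenario factors with a preferred behavior. The feature names are illustrative placeholders, and the simple labeling rule below just mirrors the urgent-versus-non-urgent pattern described above; in practice the labels would come from the per-scenario “most appropriate” behavior computed from the survey rankings.

```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoding of the three scenario factors described above.
scenarios = [
    (urgency, person, location)
    for urgency in ("urgent", "non_urgent")
    for person in ("empty_handed", "carrying_heavy_items", "wheelchair")
    for location in ("inside_elevator", "waiting_outside")
]

# Placeholder labels reproducing the survey pattern: dialogue when the delivery
# is urgent, yield to the person otherwise.
labels = ["engage_in_dialogue" if s[0] == "urgent" else "yield" for s in scenarios]

encoder = OrdinalEncoder()
X = encoder.fit_transform(scenarios)
model = DecisionTreeClassifier().fit(X, labels)

def choose_behavior(urgency, person, location):
    """Map the robot's current situation to the behavior respondents favored."""
    x = encoder.transform([(urgency, person, location)])
    return model.predict(x)[0]

print(choose_behavior("urgent", "wheelchair", "inside_elevator"))  # engage_in_dialogue
```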
Our pilot survey data indicates that, as is the case for humans, ethical behavior is not simply a set of inflexible rules, but is highly dependent on the context of a situation and on communication and interaction with other actors. The emphasis respondents placed on engaging in dialogue suggests that the key to developing robots with ethical decision-making capabilities may lie not in the creation of a set of principles guiding robots to make predetermined decisions, but in the development of robots that can better communicate and negotiate with the people they encounter to reach mutually agreeable outcomes.
Future efforts building on our research might involve designing a robot that can express its internal states in order to articulate the urgency level of its task or how much it wants or needs something. Creating robots that can communicate more like humans may also be valuable. I am currently developing a robot that can produce the kinds of non-verbal cues humans use to respond to imminent conflicts.3 If the robot finds itself reaching for something at the same time as a person, it exhibits short, jerky motions that suggest hesitation (much like a human would) and indicate that it is aware its “desires” are in conflict with someone else’s.(d)
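As a rough sketch of the idea (the interfaces, thresholds, and motion offsets below are illustrative placeholders, not the actual controller), a hesitation response boils down to two pieces: detecting that the robot’s planned reach and the person’s predicted reach will come too close at the same time, and substituting a short retract-and-pause gesture when they do.

```python
import math

def trajectories_conflict(robot_path, human_path, threshold=0.10):
    """Return True if the two predicted hand paths come within `threshold`
    meters of each other at the same timestep. Paths are lists of (x, y, z)."""
    for robot_point, human_point in zip(robot_path, human_path):
        if math.dist(robot_point, human_point) < threshold:
            return True
    return False

def hesitate(arm):
    """A short retract-and-pause gesture signalling awareness of the conflict.
    `arm` is a hypothetical interface exposing move_along(offsets), hold(seconds),
    and follow(path)."""
    arm.move_along([(0.0, 0.0, -0.05)])  # pull back slightly
    arm.hold(0.3)                        # brief pause, like a human hesitation
    arm.move_along([(0.0, 0.0, 0.02)])   # small, jerky re-approach

def reach_for_object(arm, robot_path, human_path):
    if trajectories_conflict(robot_path, human_path):
        hesitate(arm)           # signal the conflict before deciding how to proceed
    else:
        arm.follow(robot_path)  # no conflict: complete the reach
```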
Our research demonstrates an empirical method for designing robot behavior based on the norms favored by humans. By gathering data on people’s preferences regarding robot behavior, we can determine how robots should act in particular situations, as well as what variables (urgency of task, needs of surrounding humans, etc.) matter when it comes to determining the right actions for a robot to take. Similar methods of gathering information from people and using it to program appropriate actions for robots can be applied in a wide variety of contexts where humans and robots interact. Roboethics is key to designing robots that can interact with people in an efficient, friendly, and beneficial manner.
This article is part of a series on robots and their impact on society. After its initial publication, the story was picked up by Business Insider.