Robohub.org
 

Teaching robots to behave ethically


by Bertram Malle and Footnote
05 May 2015




What should a robot nurse do when a cancer patient begs for more morphine but the supervising doctor is not available to approve the request? Should a self-driving car prevent its owner from taking over manual driving when she is drunk but urgently needs to get her child to the hospital? Which faintly crying voice from the earthquake rubble should a rescue robot follow – the child’s or the older adult’s?

The increasing autonomy of robots raises questions that make scholars pause and ordinary people worry: If robots can make their own decisions, who will ensure they are making moral ones? Will the market economy and robotics industry guarantee that social robots are safe, reliable, and morally competent? Or will these capacities be put in place only after enough attorneys have turned from ambulance chasers to robot chasers?

Scientific research, without the pressure to sell products, can help clarify what moral competence means to humans, design robots that have at least rudimentary levels of such competence, and demonstrate that robots can successfully interact with humans.(a) To begin, we first need to understand how humans make moral decisions through their unique capacity to learn and to obey social and moral norms. Then we can examine how robots might also be able to learn, understand, and abide by such norms.

(a) The design and construction of artificial intelligence has seen steady growth in the past 50 years, but ethical questions about this endeavor and its possible consequences have only recently received broader attention. Before 2004, 41 academic articles, chapters, or books were published on the connection between robots and ethics. Between 2005 and 2009, the number more than doubled to 88, and between 2010 and 2014, it doubled again to 170.16 Numerous conferences are now emerging that deal with the growing ethical concerns raised by quickly progressing robotic technology.

The Necessity of Norms

Norms facilitate social interaction by providing guidance (What should I do?), predictability (What is supposed to happen?), and coordination (Whose job is it to do what?). These functions were indispensable for ancestral groups of nomadic humans who roamed extreme terrains from the burning heat of Africa to the crushing cold of Northern Europe. Norms had to regulate co-living in small spaces, joint hunting, food sharing, and seasonal and generational migration. Then, a qualitative jump in moral life occurred when humans settled down 12,000 years ago, as a plethora of new norms had to regulate novel behaviors. As humans began to practice agriculture and settle in fixed communities, they needed norms to govern possessions (e.g., land, dwellings), new forms of production (e.g., crops, tools to harvest them), and the vast expansion of social roles (e.g., king, carpenter, servant). Today, norms govern an almost infinite number of cultural behaviors, such as eating, speaking, dressing, moving, and working. We may speculate that humans know more social and moral norms than they know words in their native language.(b)

(b) To understand the vastness of our norm system, I invite the reader to conduct a small exercise: pick ten random words from the dictionary and consider how many norms come to mind for each. Then consider how each of these norms might have a number of variations depending on the situation and people involved.

People not only know countless norms and adjust their actions to abide by them; they also know when it is acceptable to break a given norm – for example, when another norm overrides it. Humans also deftly deal with the context-variability of norms. They understand that speaking up is okay in some interactions but not others, or that punching another person is okay only in certain roles or situations. Furthermore, humans immediately know (not necessarily consciously) which norms apply to a given space, object, person, situation, or interaction. Somehow, the right set of norms gets activated by a variety of sensory cues learned from repeated previous encounters.1
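To make this concrete, here is a minimal illustrative sketch of how a norm system keyed to context cues might be represented in software. The names (Norm, NormStore, activate) and the example cues are invented for illustration and are not taken from any existing robot system.

```python
# Purely illustrative sketch: context cues activate a subset of stored norms.
# The names Norm, NormStore, and activate are invented for this example.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    description: str
    strength: float  # closer to 1.0 = strict rule, lower = mild convention

class NormStore:
    """Maps sensory or situational cues (e.g., 'library', 'boxing ring') to norms."""
    def __init__(self):
        self._by_cue: dict[str, set[Norm]] = {}

    def add(self, cue: str, norm: Norm) -> None:
        self._by_cue.setdefault(cue, set()).add(norm)

    def activate(self, observed_cues: list[str]) -> set[Norm]:
        """Return the union of norms triggered by the currently observed cues."""
        active: set[Norm] = set()
        for cue in observed_cues:
            active |= self._by_cue.get(cue, set())
        return active

store = NormStore()
store.add("library", Norm("speak quietly", 0.9))
store.add("boxing ring", Norm("punching an opponent is permitted", 0.7))
print(store.activate(["library"]))  # only the library norm is active here
```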

Given the complexity of human norms (and we have not even discussed the cultural variation of norms), how can we possibly expect robots to understand norms and be able to act effectively on them? Before we address this challenge we must first ask why one would even want a robot to learn social and moral norms.

Why Machines Must Be Moral

If (or when) robots enter our daily lives, these new community members must be suitably adapted to participate in social interaction and the complexity of human culture. And if having and following a sophisticated norm system fundamentally structures human culture, should we not equip robots with such a norm system as well – that is, create moral machines?

The simplest form of machine morality has been around for a while: the safety and protective features of artifacts (think of the hot iron that shuts off automatically after a certain period of time). Such features are designed to prevent harm to the human user, and preventing harm is one of the primary functions of morality. Surely robots must be safe as well, must have this kind of minimal machine morality. But that won’t be enough if we want robots to make autonomous decisions – decisions that go beyond preprogrammed responses and rely on induction, analogy, and other learning processes.

Why, some might ask, should robots have autonomy at all? Because it is the only way for them to master the demanding task of interacting with humans. Collaborating with, taking care of, or teaching humans cannot simply rely on prior programming, because human behavior is too complex and variable. Humans evolved the capacity to be creative and adaptable,2 and this makes their behavior far more difficult to predict than that of other animals. Appropriate robot responses to highly variable and unpredictable human behavior cannot be pre-programmed. A robot will need to monitor a person’s responses to small changes in a situation and in turn flexibly respond to them, a capacity that requires autonomous decision making.

Even though human behavior is complex and varied, and might seem highly unpredictable to a robot, it does seem reasonably predictable to other humans. Why is this?

First, people make a set of fundamental assumptions about the people they interact with.3 They assume that others have mental states (such as sensations, emotions, desires, and beliefs), that they make choices based on those mental states, and that such choices guide their behavior. (c) Importantly, people readily and quickly infer those mental states,4 and such inferences greatly improve their ability to predict and make sense of one another’s behavior. Second, as discussed above, much of human behavior is constrained by the social and moral norms of a given community, and knowing those norms makes people far more predictable to one another.

(c) This human capacity to view other people as agents who have mental states and the capacity to choose and to act intentionally is often called theory of mind.

Now put those two elements together: If a robot shared human assumptions about mind and behavior and was able to infer people’s mental states, and if the robot appreciated social and moral norms, then the robot’s capacity to predict and understand human behavior would vastly improve.

Making human behavior predictable is, of course, not the end but the means to a more important end: ensuring that human-robot interactions are safe, effective, and satisfying for the human.5 Only when robots can understand and respond appropriately to human behavior might people let robots babysit their toddlers, practice reading after school with their children, or take care of their elderly parents.

The demands of social interaction require robots to be cognitively and behaviorally pliable, hence considerably autonomous. To reach such autonomy, they need to have a theory of mind and acquire a norm system. The computational implementation of a theory of mind has been a focus of robotics research for a while.6 A more recent and novel question is how robots could acquire a norm system. To answer that question, we must again take a closer look at humans.

The Moral Education of Humans and Robots

Like robots, humans start off their lives quite clueless about the complex normative demands of society. Infants and toddlers slowly learn rules of social interaction in family life, then expand these rules to contexts and relationships outside the home. By school age, children have acquired a large catalogue of norms, but would still be lost in many adult social contexts, such as business meetings or prize fights. Even adults constantly pick up new norms for new tasks, contexts, roles, and relationships.

How are all these norms acquired? This is a complex and poorly understood process, but we can distinguish two main mechanisms.7 The first begins in the earliest phases of life, when infants detect statistical regularities in the physical and social world.8 They start to understand the pattern of objects placed in space, events ordered in time, and human behaviors responding to situations, especially if these patterns repeat in daily protocols and rituals.9 With such expectations of normality in place, children can then detect violations of normal patterns and observe the response that the violation receives. By noting the extent of the response, they learn to distinguish strict rules (which elicit a severe response) from tendencies born of convenience (which elicit a mild or no response).
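As a rough illustration of this first mechanism, the sketch below (an invented example, not a published model) tallies how often behaviors occur in a given context and grades the responses that deviations receive, separating strict rules from mere conventions by the average severity of the response.

```python
# Purely illustrative sketch of the first learning mechanism: tally behavioral
# regularities per context, and grade responses to violations by their severity.
# NormLearner and its thresholds are invented for this example.
from collections import Counter, defaultdict

class NormLearner:
    def __init__(self):
        self.behavior_counts = defaultdict(Counter)    # context -> behavior -> count
        self.violation_responses = defaultdict(list)   # (context, behavior) -> severities

    def observe(self, context, behavior):
        """Record one observed behavior in a context (a statistical regularity)."""
        self.behavior_counts[context][behavior] += 1

    def observe_response(self, context, behavior, severity):
        """Record the response a deviation received; severity in [0, 1]."""
        self.violation_responses[(context, behavior)].append(severity)

    def is_typical(self, context, behavior, threshold=0.2):
        """A behavior counts as 'normal' if it makes up a sizeable share of observations."""
        counts = self.behavior_counts[context]
        total = sum(counts.values())
        return total > 0 and counts[behavior] / total >= threshold

    def norm_type(self, context, behavior):
        """Severe average responses suggest a strict rule; mild ones a convention."""
        responses = self.violation_responses[(context, behavior)]
        if not responses:
            return "unknown"
        return "strict rule" if sum(responses) / len(responses) > 0.5 else "convention"
```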

But many social and moral norms cannot simply be acquired by observing regularities – for example, if the regularity is the absence of a certain behavior or if one has never entered the context in which the relevant norms apply. A second mechanism handles such cases: being taught rules and norms by others, in ways that range from showing to telling to demanding.10 Human children (and adults) absorb these rules and are willing to do “as they are told” in part because of their intense attachment to others’ approval11 and their sensitivity to disapproval and exclusion.12

How can we implement these two mechanisms in robots, so they can acquire norms in the same way humans do? Observing behavioral regularities is computationally tractable; assessing the strengths of response to norm violations is a greater challenge. The robot’s “mind” would have to process not just pictures but videos,(d) and those videos would have to include responses to norm violations. Unless we expose robots to years of home life, as human children experience, the only option right now is to feed them the human library of film and television. With feedback and categorization help from humans, a robot viewing this footage could acquire a large catalog of behavioral order, deviation, and response to deviation.

(d) DARPA, the U.S. military’s research agency, has a program called “Mind’s Eye” that aims to develop computers with this type of “visual intelligence.” If successful, these machines could view, pick out, and interpret information from video streams similarly to the way humans can.

The second mechanism for learning norms is in principle tractable as well: teaching a robot through demonstration and speech what to do and not to do.13 This learning technique requires flexible storage, because the robot needs to understand that most declared rules apply to the contexts in which they are declared and not necessarily to other contexts.14
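A minimal illustrative sketch of such context-sensitive storage might look like this (the names TaughtRuleStore, teach, and applicable are invented for the example):

```python
# Purely illustrative sketch: rules taught by instruction are stored together
# with the context in which they were declared, so they do not silently
# transfer to other contexts. All names and example rules are invented.
class TaughtRuleStore:
    def __init__(self):
        self._rules = []  # list of (context, rule_text, generalizes) tuples

    def teach(self, context, rule_text, generalizes=False):
        """Record an instructed rule; 'generalizes' marks rules declared as universal."""
        self._rules.append((context, rule_text, generalizes))

    def applicable(self, current_context):
        """Return rules taught for this context or declared to hold everywhere."""
        return [rule for ctx, rule, universal in self._rules
                if universal or ctx == current_context]

store = TaughtRuleStore()
store.teach("kitchen", "sharp knives stay in the drawer")
store.teach("anywhere", "never startle a person from behind", generalizes=True)
print(store.applicable("living room"))  # only the universal rule applies here
```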

If a combination of these approaches allows a robot to acquire norms and the resulting network of norms is connected to the robot’s action system, then the robot would not only recognize normative and non-normative behavior, but could “abide by” norms by integrating them as constraints into its own action planning. The robot would, like a human, know the socially appropriate way to behave in any recognizable scenario.
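In its simplest illustrative form, this integration could treat active norms as constraints that filter or penalize candidate actions before the robot commits to one. The sketch below, with invented names, numbers, and example actions, shows the idea:

```python
# Purely illustrative sketch: active norms enter action planning as constraints
# that filter or penalize candidate actions. Names, thresholds, and the
# scissors example are invented for this illustration.
def choose_action(candidates, norm_violation, utility):
    """
    candidates:     list of action names
    norm_violation: dict action -> violation severity in [0, 1] (0 = no violation)
    utility:        dict action -> how well the action serves the current task
    """
    permitted = [a for a in candidates if norm_violation.get(a, 0.0) < 0.5]
    pool = permitted or candidates  # fall back if every option violates some norm
    return max(pool, key=lambda a: utility.get(a, 0.0) - norm_violation.get(a, 0.0))

action = choose_action(
    candidates=["hand over scissors blade-first", "hand over scissors handle-first"],
    norm_violation={"hand over scissors blade-first": 0.9},
    utility={"hand over scissors blade-first": 1.0,
             "hand over scissors handle-first": 0.95},
)
print(action)  # -> "hand over scissors handle-first"
```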

Could Robots One Day Be More Moral Than Humans?

The potential for robots to behave morally is promising, in large part because robots have some significant advantages over humans.

First, robots, unlike humans, need not be selfish. If we build them right, a robot’s priorities will lie in providing benefits to others, without self-interest getting in the way. For a robot, pursuing its own goals and acting in line with social norms can be in complete harmony rather than in competition, as is so often the case for humans.

Second, robots will not be subject to the influence of intense emotions such as anger, envy, or fear, which can systematically bias moral judgments and decisions. Some have even argued that a robot’s reliable and logically consistent obedience to military and international humanitarian laws renders it superior to human soldiers, who routinely violate these laws.15

Finally, robots of the near future will have specific roles and operate in limited contexts. This greatly reduces the number of norms each given robot has to learn and reduces the challenge of context variability that humans constantly face. Give an elder care robot some time and it will know just how to behave in a senior home; it may have little idea of how to act in a kindergarten, but it does not have to.

The likelihood that robots will operate in circumscribed roles and contexts also helps address one question that people often ask: Whose norms shall a robot learn? The answer is the same one we implicitly give when someone asks which norms a newborn shall learn: obviously, the norms of its community.(e) If robots are to be contributing members of specific social communities, they will be designed and taught to share their communities’ norm systems.

(e) Roboticist AJung Moon is developing a method to determine the norms of particular groups with different value systems and program robots accordingly. Her research demonstrates the complexity that goes into even a simple moral decision such as what a delivery robot should do when a human wants to use the same elevator that the robot needs to deliver a package.

The challenge of designing robots that can understand and act according to human moral norms is daunting, but not impossible. The good news is that robots’ currently limited capabilities give us time to figure out how to build their moral competence – if we start now. Robots will not be flawless, and the best future of human-robot partnerships will lie not in a race for who is more moral but in a symbiosis that lets each of the partners do what they do best, with the other available as a reality check.

This article is part of a series on robots and their impact on society.

ENDNOTES

  1. Cristina Bicchieri (2006) The grammar of society: The nature and dynamics of social norms, New York: Cambridge University Press.
  2. Steven Mithen, editor (1998) Creativity in human evolution and prehistory, New York: Taylor & Francis.
  3. Bertram Malle (2005) “Folk theory of mind: Conceptual foundations of human social cognition,” in Ran R. Hassin, James S. Uleman, & John A. Bargh (editors), The New Unconscious, New York: Oxford University Press, p. 225-255.
  4. Bertram Malle and Jess Holbrook (2012) “Is there a hierarchy of social inferences? The likelihood and speed of inferring intentionality, mind, and personality,” Journal of Personality and Social Psychology, 102: 661-684.
  5. Kerstin Dautenhahn (2007) “Socially intelligent robots: dimensions of human-robot interaction,” Philosophical Transactions of the Royal Society B: Biological Sciences, 362: 679-704.
  6. Brian Scassellati (2002) “Theory of mind for a humanoid robot,” Autonomous Robots, 12: 13-24.
  7. Chandra Sekhar Sripada and Stephen Stich (2006) “A framework for the psychology of norms,” in Peter Carruthers, Stephen Laurence, and Stephen Stich (editors), The Innate Mind (Vol. 2: Culture and Cognition), New York: Oxford University Press, p. 280-301.
  8. Natasha Z. Kirkham, Jonathan A. Slemmer, and Scott P. Johnson (2002) “Visual statistical learning in infancy: Evidence for a domain general learning mechanism,” Cognition, 83: B35-B42.
  9. Padraic Monaghan and Chris Rowson (2008) “The effect of repetition and similarity on sequence learning,” Memory & Cognition, 36: 1509-1514.
  10. Gergely Csibra and Gyorgy Gergely (2009) “Natural pedagogy,” Trends in Cognitive Sciences, 13: 148-153.
  11. Roy F. Baumeister and Mark R. Leary (1995) “The need to belong: Desire for interpersonal attachments as a fundamental human motivation,” Psychological Bulletin, 117: 497-529.
  12. Kipling D. Williams (2009) “Ostracism: A temporal need-threat model,” in Mark P. Zanna (editor), Advances in experimental social psychology, vol. 41, San Diego, CA: Elsevier Academic Press, p. 275-314.
  13. Jesse Butterfield, Sarah Osentoski, Graylin Jay, and Odest Chadwicke Jenkins (2010) “Learning from demonstration using a multi-valued function regressor for time-series data,” in 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010), Nashville, TN: IEEE, p. 328-333. Rehj Cantrell, Kartik Talamadupula, Paul Schermerhorn, J. Benton, Subbarao Kambhampati, and Matthias Scheutz (2012) “Tell me when and why to do it!: Run-time planner model updates via natural language instruction,” in 7th ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA: IEEE, p. 471-478.
  14. Hannes Rakoczy, Felix Warneken, and Michael Tomasello (2008) “The sources of normativity: Young children’s awareness of the normative structure of games,” Developmental Psychology, 44: 875-881.
  15. Ronald C. Arkin (2009) Governing lethal behavior in autonomous robots, Boca Raton, FL: CRC Press. Mental Health Advisory Team IV (2006) Mental Health Advisory Team (MHAT) IV: Operation Iraqi Freedom 05-07: Final Report, Washington, D.C.: Office of the Surgeon General, U.S. Army Medical Command.
  16. These data are from a literature search using the EBSCO Host databases and the keywords “robot*” and “ethic*”.

 





Bertram Malle is Professor of Psychology in the Dept. of Cognitive, Linguistic & Psychological Sciences at Brown University.

Footnote is an online media company that unlocks the power of academic knowledge by making it accessible to a broader audience.




