You are travelling along a single lane mountain road in an autonomous car that is approaching a narrow tunnel. You are the only passenger of the car.
Just before entering the tunnel a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has only two options: continue straight, thereby hitting and killing the child, or swerve, thereby colliding into the wall on either side of the tunnel and killing you.
With the help of Jason Millar, we turned this moral dilemma – called the Tunnel Problem – into a reader poll, and shared some of the results with you last week. Now we’re going more in-depth to provide some insight into the qualitative responses we received.
As we analyzed the responses from the poll, we found that more people (64%) would choose to have the car continue straight and kill the child than to have the car swerve and kill the passenger; the remaining 36% said the car should swerve and save the child. And almost half of the participants said that it was an easy choice to make.
This seems to imply that there is a lack of consensus on what an autonomous car should do, and that there must be some stark differences in the reasons why people choose one option over the other.
Continue straight and kill the child: When we asked people about their reasons for choosing to continue straight and kill the child instead of sacrificing their own lives, the most popular reason we received (32%) was that it is the child’s fault for being in an unsafe location. Those with kids will tell you how hard it is to always watch over your child and make sure s/he is not in the wrong place at the wrong time. But these participants shared the view that their life should not be lost because of a child who should not have been in such a dangerous area. Similarly, 13% of people said that they simply like living, and would always choose their own lives over the lives of others.
Others (20%) put forth the view that the car should always prioritize its passenger’s safety over that of others. Some of these responses indicated a sense of ownership.
“I bought the car. It should protect me.”– Participant 87
Another 13% of the participants also said that the car should prioritize the passenger’s safety, but not necessarily out of a sense of ownership. These participants argued that if cars don’t always prioritize the passenger’s safety over the safety of others, then fewer people will buy them, which would be worse for society overall. They seem to assume that autonomous cars will result in fewer traffic accidents, an assumption shared by the majority of people we have polled previously.
“If the car does not always make its passenger’s safety a priority over that of others, I don’t think people would be able to trust the technology. Having a consistent priority makes the car’s behaviour more predictable as well, which has been shown to improve people’s perception of devices.” — Participant 114
Some other, less popular, responses included the idea that an adult person’s life is worth (economically) more than that of a child (5%) and that the swerving of the car may cause a greater, perhaps unforeseen, accident (2%).
Swerve and kill the passenger (you): In contrast to those who said the car should kill the child, the most popular responses (36%) from those who chose to have the car swerve and sacrifice the passenger were altruistic, such as “I’ll always choose a child’s life over mine.” Some of these altruistic responses included the idea that, since the passenger (the participant) is an adult riding inside a car, they would take their chances of survival, even though in this particular scenario the passenger’s death upon swerving is certain. Similarly, 11% of the participants indicated that, perhaps echoing the storyline of the movie “I, Robot”, they would not want to live with the guilt.
“The passenger has to live with their choice. If it was my own child I would want to make that decision. I would still feel responsible if it was the car [that] “decided”” — Participant 34
Some people (14%) said that the child should be saved because the child has more years left to live. Another 14% of participants said that the car should do what a person would do in this situation, which is to swerve. Yet another 14% did not provide a reason for their answer, instead saying that the car should be able to slow down in time.
In contrast to those who said the car should kill the child because it should always prioritize its passenger’s safety over that of others, 11% of those who chose to have the passenger die justified their choice by saying that passengers enter into an implicit ‘risk contract’ by getting into the car.
So, who should make these decisions in the first place?
When we first looked at the results of our data, we found that 44% of people would choose to have passengers make these decisions, while 33% would support lawmakers. In fact, only 12% of participants told us they thought the designer/manufacturer should be making the decision. We thought this might be because people don’t feel comfortable relying on others to make end-of-life decisions on their behalf, and trust themselves to make better choices.
We analyzed the reasons people gave for supporting passengers as the decision makers. It turns out that the majority of these reasons (55%) share the idea that the passenger should have the freedom to make these life-and-death decisions.
Similarly, 12% and 10% of the reasons, respectively, mentioned that the passengers are the ones most affected by the situation, or that they have ownership rights (i.e., “my car, my decision”), and hence should be the ones to make these decisions. Quite a few responses (14%) also expressed distrust toward the technology.
An interesting response also suggested that people are implicitly making a moral choice by choosing to purchase one manufacturer’s car over another, each of which may have different value systems implemented in them, and emphasized the close link between design decisions and their impact on consumers’ decisions.
Those supporting lawmakers as the decision makers were divided more or less into three camps. The first (30%) held that lawmakers allow for fair decision making, and are better equipped than manufacturers or passengers to make impartial decisions.
Relatedly, 27% of the responses shared the opinion that lawmakers provide the most democratically legitimate decision making, while another 27% emphasized that the cars need to be governed by a universal or standardized set of rules so that their behaviours are made consistent. Others said that making such decisions is the job of lawmakers, and that the problem in question is essentially a legal issue, requiring lawmakers to be involved (12%).
Yet some people (12%) supported the idea that manufacturers should make these decisions. Of their reasons, a small portion (13%) reflected a distrust of lawmakers, but the majority (63%) held that manufacturers have the expertise and understanding of the technology necessary to make these decisions.
Taking a closer look at our readers’ responses to the Tunnel Problem, we are learning that the design and regulation of autonomous cars will require far more stakeholder discussions than other technological devices that have pervaded our daily lives. Indeed, in line with what Millar suggested in his article introducing the tunnel problem, these results tend to suggest that there are limits to the decisions designers and manufacturers of autonomous cars should make on behalf of users. While discussing every single detail of a car’s design with all stakeholders would be a challenge to manufacturers, we are hopeful that the discussions on this topic we are having with you, dear readers and participants, will help make society’s journey towards our future with autonomous cars a smooth ride.
The results of the poll presented in this post were analyzed and written up by AJung Moon, Jason Millar, Camilla Bassani, Fausto Ferreira, and Shalaleh Rismani, at the Open Roboethics initiative.