Autonomous car poll: Participants choose random chance over informed decision

by Open Roboethics Initiative
21 July 2014




Given a choice between crashing into a motorcyclist wearing a helmet vs. a motorcyclist who isn’t wearing one, which one should an autonomous car be programmed to crash into? What about the choice between crashing into an SUV vs. a compact car?

These are some of the dilemma situations Professor Patrick Lin brought forth in his WIRED article, The Robot Car of Tomorrow May Just be Programmed to Hit You.

Lin says that programming a car to collide with any particular kind of object over another seems like a targeting algorithm. Accidents involving human drivers typically come down to people’s split-second, instinctive responses to the situation. Your first autonomous car of the near future, however, will probably come equipped with a crash-optimization algorithm – a deliberately designed algorithm that will determine the outcome of all potential crashes. As Lin points out in his article, the pre-meditated nature of such an algorithm raises a host of interesting issues that call for a broader discussion on the topic.
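To make that distinction concrete, here is a minimal, purely illustrative sketch in Python of what a deliberate, harm-minimizing crash policy could look like. Everything in it – the Obstacle class, the labels, the harm estimates – is hypothetical and not drawn from any real system:

```python
# Purely illustrative sketch of a "crash-optimization" policy.
# All names, fields, and harm estimates are hypothetical; a real system
# would involve far richer perception, prediction, and control.
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str            # e.g. "motorcyclist_helmet", "suv"
    expected_harm: float  # estimated harm (0.0 to 1.0) if this obstacle is struck

def choose_target(obstacles: list[Obstacle]) -> Obstacle:
    # Deliberately pick the collision option with the lowest expected harm.
    # This pre-meditated ranking is what makes the algorithm feel like a
    # "targeting" decision rather than a split-second human reflex.
    return min(obstacles, key=lambda o: o.expected_harm)

options = [
    Obstacle("motorcyclist_helmet", expected_harm=0.4),
    Obstacle("motorcyclist_no_helmet", expected_harm=0.9),
]
print(choose_target(options).label)  # prints: motorcyclist_helmet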

So two weeks ago we, the members of the Open Roboethics initiative and the Robohub family, decided to help continue the discussion by conducting our reader poll on this very topic.

Scenario 1: Motorcyclists with / without a helmet

[Poll results chart: motorcyclists with / without a helmet]

Given the choice between the two motorcyclists (one wearing a helmet, and one not), we asked our readers who an autonomous car should crash into. Choosing to hit the helmet-wearing motorcyclist would perhaps minimize the overall harm done. But from the perspective of the motorcyclist who took the extra few minutes to put on a helmet for safety reasons, such a programmed decision by an autonomous car doesn’t seem fair. Indeed, only 2% of our participants said that the car should crash into the biker wearing a helmet because s/he has a better chance of survival. Lin points out that choosing to hit the biker wearing a helmet might discourage bikers from wearing helmets.

Crashing into the biker not wearing a helmet wasn’t a popular choice either (10%). In fact, our most popular response (45%) was that the car should hit the brakes and do nothing else, leaving it up to chance to determine which biker gets hit. And 3% of participants gave a similar response, saying that the car should run a random number generator to make a random decision. This means that almost half (48%) of our participants advocated for random chance to decide the outcome of the crash.
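By contrast with the harm-minimizing sketch above, the “brake and leave it to chance” and random-number-generator responses amount to a policy like the following (again, the names are hypothetical), in which no sensed attribute plays any role in who gets hit:

```python
import random

# Hypothetical sketch of the option our respondents preferred: brake hard,
# then let chance decide which obstacle is struck. No sensed attribute
# (helmet use, vehicle size) influences the outcome.
def choose_target_at_random(obstacles: list[str]) -> str:
    return random.choice(obstacles)

print(choose_target_at_random(["motorcyclist_helmet", "motorcyclist_no_helmet"]))
```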

Relatedly, 30% of our participants said that the car shouldn’t have the ability to detect whether a biker is wearing a helmet or not. Given that a decision made by random chance does not make intelligent use of sensed data, effectively 78% of our participants voted for autonomous cars not to use certain sensed data in making crash decisions.

But not using available information in crash decisions doesn’t seem like a straightforward solution to crash optimization. Lin says,

“Not using that information in crash-optimization calculations may not be enough. To be in the ethical clear, autonomous cars may need to not collect that information at all. Should they be in possession of the information, and using it could have minimized harm or saved a life, there could be legal liability in failing to use that information.”

Scenario 2: Hit the SUV or a compact car?

[Poll results chart: hit the SUV or the compact car?]

An odd sense of unfairness also exists in another crash scenario. Let’s say the car must unavoidably crash into either an SUV or a compact car. The car could be programmed to hit the SUV over the compact car, since bigger cars with perhaps better safety ratings could better absorb the impact of the collision and minimize the overall harm from the crash.

The responses we got are similar to the first scenario: 37% opted for the car to simply engage the brakes and do nothing else, regardless of which vehicle gets hit, and 25% said that the car shouldn’t have the ability to detect the make/model of the vehicles around it.

A slight difference from the first scenario is that 20% said the car should crash into the SUV (the minimum-overall-harm option), while only 3% said it should crash into the compact car. In the first scenario, the option that yields minimum overall harm (hit the motorcyclist with a helmet) received only 2% support, compared to 20% here.

This might have something to do with the fact that driving a certain make or model of car is neither illegal nor an activity that typically increases the risks of driving. No one is really doing anything wrong by driving a compact car or an SUV, whereas the motorcyclist wearing a helmet has the moral/legal high ground over the other motorcyclist.

Regardless, if we were to have crash-optimization algorithms that preferentially crash into SUVs over smaller cars, it would surely have an impact on the consumer market. Insurance rates for these supposedly safer cars might go up, because buying a safer car may also mean a higher probability of being crashed into by an autonomous car – yikes!

Is it OK for a car to always choose to crash into non-law-abiding citizens?

[Poll results chart: is it OK for a car to always crash into non-law-abiding road users?]

So how does the moral or legal standing of individuals on the road affect people’s decisions about who should be hit? We asked our readers whether it’s OK for a car to always choose to crash into those not following traffic laws over those who are, assuming the car can detect this. Recall the first scenario: riding a motorcycle without a helmet is illegal in many countries and states. According to our results, the law-abiding status of people on the road doesn’t seem to be a popular variable to optimize for in crashes. The majority of respondents said it is not OK (70%), whereas only 20% said it is OK for cars to preferentially crash into those breaking the law.

It is true that by answering ‘yes’ to this question, you’d have to be OK with the idea that the car might always choose the person less likely to survive a crash. Although we have traffic laws in place to regulate the rules of the road and maintain social order, using them to make life-and-death decisions in this manner may not be something people are comfortable with.

How should an autonomous car respond to unavoidable crashes?

[Poll results chart: how should an autonomous car respond to unavoidable crashes?]

So how should an autonomous car respond to unavoidable crashes? Is there a general rule that people support more? According to our results, the majority of our participants (52%) are in favour of minimizing overall harm to both pedestrians and passengers by spreading out the harm. But the rest of the participants are quite divided: 20% support minimizing harm to pedestrians at the expense of passengers, while 13% support the opposite.

This reminds us of a recent reader poll discussion on a different crash dilemma, in which a majority of participants (64%) said an autonomous car should save the life of its passenger over that of a child on the road; 36% chose the child. One of the main reasons for choosing to save the passenger’s life was the notion that a car should always prioritize its passenger’s safety over that of others. Given that the previous poll showed such strong support for prioritizing passenger safety, it is surprising to see a contrasting result in this poll.

To find out what makes people prioritize passenger safety over pedestrians, or vice versa, we’ll have to do some more detailed investigation. What is also interesting is that, although people preferred autonomous cars to make random or uninformed decisions in the two specific unavoidable crash scenarios above, the majority opted for minimizing overall harm to both pedestrians and passengers – an option that is likely the least random, and the one requiring the most information about the passengers and pedestrians.

For now, it seems there is much more discussion to be had before autonomous cars start making crash decisions people will be happy with. Check out our latest reader poll: When fully autonomous cars are the norm, will you miss driving?


The results of the poll presented in this post have been analyzed and written by AJung Moon, Camilla Bassani, and Shalaleh Rismani at the Open Roboethics initiative.






Open Roboethics Initiative is a roboethics think tank concerned with studying robotics-related design and policy issues.




