Robohub.org
 

Slow down that runaway ethical trolley

by Bryant Walker Smith
13 January 2015



Source: Wikipedia

The runaway trolley has chased automated motor vehicles into the new year.

In early 2012, I raised a variation of the classic thought experiment to argue that there is not always a single absolute right choice in the design of automated vehicles — and that engineers should not presume to always know it. While this remains true, the kind of expert comments that concerned me three years ago have since become more the exception than the norm. Now, to their credit, engineers at Google and within the automotive industry openly acknowledge that significant technical hurdles to fully automated vehicles remain and that such vehicles, when they do exist, will not be perfect.

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

When automated vehicles are good enough to reliably replace human drivers across a wide range of driving environments (and we are not there yet), the kinds of incidents that compel a choice among victims will, one hopes, be remarkably rare. In many cases, the prudent strategy for such scenarios will be to drive carefully enough that they either (a) do not arise at all or (b) can be mitigated if they do arise (by, for example, stopping quickly). This is because poor decisions by human drivers — driving too fast, while drunk, while texting, while tired, without braking quickly enough, etc. — contribute to the vast majority of today’s crashes.

In the near term, some crashes might be addressed by automated emergency intervention systems (AEISs) that automatically brake or steer when the human driver fails to act. Because these systems are designed to engage just before a crash (sometimes to lessen rather than to negate the impact), they could conceivably face the kind of dilemmas that are posited for automated vehicles. Nonetheless, some of these systems have already reached the market and are saving lives — as are airbags and electronic stability control and other technologies that necessarily involve safety tradeoffs. At the same time, these intervention systems occasionally detect objects that don’t actually exist (false positives) or fail to detect objects that actually do exist (false negatives).
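To make that tradeoff concrete, here is a minimal, purely illustrative sketch of how a single detection-confidence threshold trades phantom interventions against missed obstacles. The threshold values, the data, and the function names are invented for illustration; they do not describe any manufacturer's actual system.

```python
# Hypothetical illustration only: how a confidence threshold in an automated
# emergency intervention system trades false positives against false negatives.
from dataclasses import dataclass


@dataclass
class Detection:
    confidence: float  # classifier confidence that an obstacle exists (0 to 1)
    is_real: bool      # ground truth, known only in hindsight or in simulation


def should_brake(detection: Detection, threshold: float) -> bool:
    """Intervene only when confidence meets or exceeds the chosen threshold."""
    return detection.confidence >= threshold


def evaluate(detections: list[Detection], threshold: float) -> tuple[int, int]:
    """Count false positives (phantom braking) and false negatives (missed obstacles)."""
    false_positives = sum(
        1 for d in detections if should_brake(d, threshold) and not d.is_real
    )
    false_negatives = sum(
        1 for d in detections if not should_brake(d, threshold) and d.is_real
    )
    return false_positives, false_negatives


# The same invented data, scored at a permissive and a cautious threshold.
sample = [Detection(0.9, True), Detection(0.6, False), Detection(0.4, True)]
print(evaluate(sample, threshold=0.5))  # (1, 1): one phantom brake, one missed obstacle
print(evaluate(sample, threshold=0.7))  # (0, 1): no phantom brakes, one missed obstacle
```

Lowering the threshold catches more real obstacles only at the price of more unnecessary interventions; raising it does the reverse. That is exactly the kind of safety tradeoff that airbags and electronic stability control already embody.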

This is a critical point in itself: Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.

For this reason, a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. As I note in a forthcoming book chapter (“Regulation and the Risk of Inaction”), this simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.
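For illustration only, here is a minimal sketch of what weighting general rules of behavior might look like: each candidate maneuver is scored by the weighted rules it satisfies, and the highest-scoring maneuver is chosen. The maneuvers, rule names, and weights are invented for this example rather than drawn from any real planning system.

```python
# Minimal sketch of weighting general rules of behavior in an emergency.
# Maneuvers, rule names, and weights are invented for illustration; they are
# not drawn from any real vehicle's planning stack.

# Which general rules each candidate maneuver satisfies, in the order:
# (decelerates, avoids_humans, avoids_obstacles, stays_in_lane)
CANDIDATE_MANEUVERS = {
    "brake_hard_in_lane":   (True,  True,  False, True),
    "swerve_onto_shoulder": (False, True,  True,  False),
    "continue_and_brake":   (True,  False, True,  True),
}

# Higher weight = more important rule; avoiding humans dominates the rest.
# Keys are listed in the same order as the tuples above.
WEIGHTS = {
    "decelerates":      3.0,
    "avoids_humans":   10.0,
    "avoids_obstacles": 3.0,
    "stays_in_lane":    1.0,
}


def score(maneuver: str) -> float:
    """Sum the weights of the general rules that a maneuver satisfies."""
    satisfied = CANDIDATE_MANEUVERS[maneuver]
    return sum(
        weight for weight, ok in zip(WEIGHTS.values(), satisfied) if ok
    )


best = max(CANDIDATE_MANEUVERS, key=score)
print(best, score(best))  # brake_hard_in_lane 14.0 under these invented weights
```

The point of such a scheme is not that any particular set of weights is correct, but that simple, predictable rules can be implemented, tested, and audited even under the continuing uncertainty described above.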






Bryant Walker Smith is an expert on the legal aspects of autonomous driving and a fellow at Stanford Law School.




