Slow down that runaway ethical trolley


by Bryant Walker Smith
13 January 2015



Source: Wikipedia

The runaway trolley has chased automated motor vehicles into the new year.

In early 2012, I raised a variation of the classic thought experiment to argue that there is not always a single absolute right choice in the design of automated vehicles — and that engineers should not presume to always know it. While this remains true, the kind of expert comments that concerned me three years ago have since become more the exception than the norm. Now, to their credit, engineers at Google and within the automotive industry openly acknowledge that significant technical hurdles to fully automated vehicles remain and that such vehicles, when they do exist, will not be perfect.

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to that point will involve mistakes and crashes and fatalities. Given these stakes, what is the proper balance between caution and urgency in bringing these systems to market? How safe is safe enough?

When automated vehicles are good enough to reliably replace human drivers across a wide range of driving environments (and we are not there yet), the kinds of incidents that compel a choice among victims will, one hopes, be remarkably rare. In many cases, the prudent strategy for such scenarios will be to drive carefully enough that they either (a) do not arise at all or (b) can be mitigated if they do arise (by, for example, stopping quickly). This is because poor decisions by human drivers — driving too fast, while drunk, while texting, while tired, without braking quickly enough, etc. — contribute to the vast majority of today’s crashes.

In the near term, some crashes might be addressed by automated emergency intervention systems (AEISs) that automatically brake or steer when the human driver fails to act. Because these systems are designed to engage just before a crash (sometimes to lessen rather than to negate the impact), they could conceivably face the kind of dilemmas that are posited for automated vehicles. Nonetheless, some of these systems have already reached the market and are saving lives — as are airbags and electronic stability control and other technologies that necessarily involve safety tradeoffs. At the same time, these intervention systems occasionally detect objects that don’t actually exist (false positives) or fail to detect objects that actually do exist (false negatives).
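To make that detection tradeoff concrete, here is a minimal sketch in Python (the single-threshold design and all the numbers are invented for illustration; no real intervention system is this simple). Raising the confidence threshold at which the system brakes suppresses false positives such as phantom braking, but at the cost of more false negatives; lowering it does the reverse.

```python
# Minimal sketch of the false-positive/false-negative tradeoff in an
# emergency braking system. All numbers are invented for illustration.

def should_brake(confidence: float, threshold: float) -> bool:
    """Trigger emergency braking only when the detector is confident enough."""
    return confidence >= threshold

# Hypothetical detector outputs: (confidence score, object actually present?)
readings = [(0.95, True), (0.40, False), (0.75, True), (0.55, False), (0.30, True)]

for threshold in (0.5, 0.7, 0.9):
    false_pos = sum(1 for score, present in readings
                    if should_brake(score, threshold) and not present)
    false_neg = sum(1 for score, present in readings
                    if not should_brake(score, threshold) and present)
    print(f"threshold={threshold}: {false_pos} phantom brakes, {false_neg} missed objects")
```

The point is not the particular numbers but the shape of the tradeoff: no threshold eliminates both kinds of error at once.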

This is a critical point in itself: Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.
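To illustrate (with invented probabilities and harm values, not a real model), even a vehicle programmed to pick the least harmful maneuver can only rank its options by expected harm over uncertain estimates, so the "ethical" ranking shifts whenever those estimates do:

```python
# Toy illustration: ranking maneuvers by expected harm under uncertainty.
# Probabilities and harm values are invented; a real vehicle would have
# to estimate them in real time, imperfectly.

def expected_harm(outcomes):
    """Expected harm given (probability, harm) pairs for one maneuver."""
    return sum(p * harm for p, harm in outcomes)

# Each maneuver has several possible outcomes the vehicle cannot foresee.
swerve = [(0.7, 0.0), (0.3, 10.0)]   # probably harmless, possibly severe
brake  = [(0.9, 2.0), (0.1, 5.0)]    # probably minor, occasionally moderate

print("swerve:", expected_harm(swerve))  # 3.0
print("brake: ", expected_harm(brake))   # 2.3

# Nudge one estimate (say the swerve path turns out clearer than first
# thought) and the ranking flips: the prospective "right" choice was
# never fixed in the first place.
swerve_revised = [(0.9, 0.0), (0.1, 10.0)]
print("revised swerve:", expected_harm(swerve_revised))  # 1.0
```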

For this reason, a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. As I note in a forthcoming book chapter (“Regulation and the Risk of Inaction”), this simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.
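As a rough sketch of what weighting general rules could look like (every rule, weight, and score below is an assumption for illustration, not a description of any deployed system), each candidate maneuver is scored against the weighted behaviors and the best-scoring one is chosen:

```python
# Sketch of weighted general rules of behavior. The rules, weights, and
# per-maneuver scores are all illustrative assumptions.

RULE_WEIGHTS = {
    "decelerate": 1.0,
    "avoid_humans": 10.0,   # weighted far above the other rules
    "avoid_obstacles": 5.0,
    "stay_in_lane": 2.0,
}

# How well each candidate maneuver satisfies each rule, on a 0-to-1
# scale (hypothetical values for one emergency situation).
candidates = {
    "brake_in_lane":  {"decelerate": 1.0, "avoid_humans": 0.9, "avoid_obstacles": 0.6, "stay_in_lane": 1.0},
    "swerve_right":   {"decelerate": 0.3, "avoid_humans": 1.0, "avoid_obstacles": 0.8, "stay_in_lane": 0.0},
    "maintain_speed": {"decelerate": 0.0, "avoid_humans": 0.5, "avoid_obstacles": 0.2, "stay_in_lane": 1.0},
}

def score(rule_scores):
    """Weighted sum: how well a maneuver satisfies the general rules."""
    return sum(RULE_WEIGHTS[rule] * s for rule, s in rule_scores.items())

best = max(candidates, key=lambda name: score(candidates[name]))
print(best, round(score(candidates[best]), 1))  # brake_in_lane 15.0
```

Notice that no victim-by-victim calculus appears anywhere in this scheme; the hard design work moves into choosing and weighting the rules themselves, which is a far more tractable engineering and regulatory conversation.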



Bryant Walker Smith is an expert on the legal aspects of autonomous driving and a fellow at Stanford Law School.




