
Slow down that runaway ethical trolley


by Bryant Walker Smith
13 January 2015



Source: Wikipedia


The runaway trolley has chased automated motor vehicles into the new year.

In early 2012, I raised a variation of the classic thought experiment to argue that there is not always a single absolute right choice in the design of automated vehicles — and that engineers should not presume to always know it. While this remains true, the kind of expert comments that concerned me three years ago have since become more the exception than the norm. Now, to their credit, engineers at Google and within the automotive industry openly acknowledge that significant technical hurdles to fully automated vehicles remain and that such vehicles, when they do exist, will not be perfect.

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for, or to, these vehicles is deciding precisely whom to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other, by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

When automated vehicles are good enough to reliably replace human drivers across a wide range of driving environments (and we are not there yet), the kinds of incidents that compel a choice among victims will, one hopes, be remarkably rare. In many cases, the prudent strategy for such scenarios will be to drive carefully enough that they either (a) do not arise at all or (b) can be mitigated if they do arise (by, for example, stopping quickly). This is because poor decisions by human drivers (driving too fast, while drunk, while texting, while tired, or without braking quickly enough) contribute to the vast majority of today’s crashes; a system that simply avoids these errors preempts most of the situations in which a tragic choice would otherwise be forced.
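
The physics helps explain why careful driving buys so much. As a rough illustration (using a simple constant-deceleration model with numbers I have chosen for the example, approximating hard braking on dry pavement and a short machine reaction time), braking distance grows with the square of speed, so modest reductions in speed disproportionately shrink the zone in which a dilemma can even occur:

```python
# Illustrative constant-deceleration stopping model; the parameter
# values are assumptions for this sketch, not measured figures.

def stopping_distance_m(speed_kmh: float, decel_ms2: float = 7.0,
                        reaction_s: float = 0.5) -> float:
    """Distance covered during the reaction time plus braking."""
    v = speed_kmh / 3.6                     # km/h -> m/s
    return v * reaction_s + v * v / (2.0 * decel_ms2)

for kmh in (60, 50, 40, 30):
    print(f"{kmh} km/h -> {stopping_distance_m(kmh):.1f} m to stop")
# 60 km/h needs roughly 28 m; 30 km/h needs roughly 9 m.
```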

In the near term, some crashes might be addressed by automated emergency intervention systems (AEISs) that automatically brake or steer when the human driver fails to act. Because these systems are designed to engage just before a crash (sometimes to lessen rather than to negate the impact), they could conceivably face the kind of dilemmas that are posited for automated vehicles. Nonetheless, some of these systems have already reached the market and are saving lives — as are airbags and electronic stability control and other technologies that necessarily involve safety tradeoffs. At the same time, these intervention systems occasionally detect objects that don’t actually exist (false positives) or fail to detect objects that actually do exist (false negatives).
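
This tradeoff can be made concrete. The toy simulation below uses made-up confidence-score distributions and thresholds (no real system’s numbers) to show how tuning a single detection threshold trades missed obstacles against phantom braking:

```python
import random

random.seed(0)

# Synthetic confidence scores: real obstacles tend to score high and
# clear roads tend to score low, but the two distributions overlap,
# so no threshold can eliminate both error types at once.
obstacle = [min(1.0, max(0.0, random.gauss(0.75, 0.15))) for _ in range(1000)]
clear    = [min(1.0, max(0.0, random.gauss(0.35, 0.15))) for _ in range(1000)]

for threshold in (0.4, 0.5, 0.6, 0.7):
    false_neg = sum(s <= threshold for s in obstacle)  # missed obstacles
    false_pos = sum(s > threshold for s in clear)      # phantom braking
    print(f"threshold {threshold:.1f}: {false_neg:3d} missed, "
          f"{false_pos:3d} phantom brakes per 1000 cases")
```

Where to set that threshold is itself a judgment about acceptable risk, made long before any crash occurs.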

This is a critical point in itself: Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.
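
A toy calculation with hypothetical numbers shows how this uncertainty can frustrate even a clearly specified rule such as “choose the action with the least expected harm.” The probabilities and severity scores below are invented for illustration; the point is only that small estimation errors can flip the ranking:

```python
# Hypothetical illustration: expected harm = P(collision) x severity.
# None of these numbers come from a real system.

def expected_harm(p_collision: float, severity: float) -> float:
    return p_collision * severity

# The planner's point estimates for two evasive maneuvers:
print(expected_harm(0.30, 10.0))  # swerve: 3.0
print(expected_harm(0.40,  8.0))  # brake:  3.2  -> swerving looks "right"

# The same comparison if each probability estimate is off by 0.05:
print(expected_harm(0.35, 10.0))  # swerve: 3.5
print(expected_harm(0.35,  8.0))  # brake:  2.8  -> now braking looks "right"
```

With sensing and prediction errors of this magnitude (or larger) in real traffic, a prospectively chosen ethical ranking may simply not be computable at the moment it matters.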

For this reason, a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. As I note in a forthcoming book chapter (“Regulation and the Risk of Inaction”), this simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.
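
A minimal sketch of what “weighted general rules” might look like in code follows; the rule names, weights, and candidate maneuvers are hypothetical, chosen to illustrate the structure rather than to reproduce the book chapter or any deployed system:

```python
# Hypothetical weighted-rules scorer: each candidate maneuver is
# penalized for every general rule it violates, and the maneuver
# with the lowest total penalty is chosen. No case-by-case ethical
# calculus, just ordered priorities expressed as weights.

RULE_WEIGHTS = {
    "endangers_human": 100.0,  # avoid humans above all else
    "hits_obstacle":    20.0,  # then avoid obstacles
    "leaves_lane":       5.0,  # prefer staying in the lane
    "high_speed":        1.0,  # prefer decelerating
}

def penalty(maneuver: dict) -> float:
    """Sum the weights of the rules this maneuver violates."""
    return sum(w for rule, w in RULE_WEIGHTS.items() if maneuver.get(rule))

candidates = [
    {"name": "brake hard in lane"},
    {"name": "swerve onto shoulder", "leaves_lane": True},
    {"name": "maintain speed", "high_speed": True, "hits_obstacle": True},
]

best = min(candidates, key=penalty)
print(best["name"])  # -> "brake hard in lane"
```

Such a scheme will sometimes choose imperfectly, which is exactly the tradeoff the paragraph above accepts in exchange for behavior that is simple, predictable, and implementable today.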






Bryant Walker Smith is an expert on the legal aspects of autonomous driving and a fellow at Stanford Law School.




