Robohub.org
 

Slow down that runaway ethical trolley

by Bryant Walker Smith
13 January 2015



Source: Wikipedia

The runaway trolley has chased automated motor vehicles into the new year.

In early 2012, I raised a variation of the classic thought experiment to argue that there is not always a single absolute right choice in the design of automated vehicles — and that engineers should not presume to always know it. While this remains true, the kind of expert comments that concerned me three years ago have since become the exception rather than the rule. Now, to their credit, engineers at Google and within the automotive industry openly acknowledge that significant technical hurdles to fully automated vehicles remain and that such vehicles, when they do exist, will not be perfect.

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for these vehicles is deciding precisely whom to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues raised by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this tension, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

When automated vehicles are good enough to reliably replace human drivers across a wide range of driving environments (and we are not there yet), the kinds of incidents that compel a choice among victims will, one hopes, be remarkably rare. In many cases, the prudent strategy for such scenarios will be to drive carefully enough that they either (a) do not arise at all or (b) can be mitigated if they do arise (by, for example, stopping quickly). This is because poor decisions by human drivers — driving too fast, while drunk, while texting, while tired, without braking quickly enough, etc. — contribute to the vast majority of today’s crashes.

In the near term, some crashes might be addressed by automated emergency intervention systems (AEISs) that automatically brake or steer when the human driver fails to act. Because these systems are designed to engage just before a crash (sometimes to lessen rather than to negate the impact), they could conceivably face the kind of dilemmas that are posited for automated vehicles. Nonetheless, some of these systems have already reached the market and are saving lives — as are airbags and electronic stability control and other technologies that necessarily involve safety tradeoffs. At the same time, these intervention systems occasionally detect objects that don’t actually exist (false positives) or fail to detect objects that actually do exist (false negatives).
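To make the false-positive/false-negative tradeoff concrete, here is a minimal sketch in Python. The detector confidences and the threshold values are invented for illustration; they do not reflect any real intervention system:

```python
# Illustrative only: invented detector confidences showing how a single
# decision threshold trades false negatives against false positives.

# Hypothetical confidence scores: the first list is real obstacles,
# the second is clutter (objects that are not actually there).
real_obstacles = [0.9, 0.7, 0.55, 0.4]
clutter = [0.6, 0.3, 0.2, 0.1]

def error_counts(threshold):
    """Count both error types at a given confidence threshold."""
    false_negatives = sum(c < threshold for c in real_obstacles)  # missed obstacles
    false_positives = sum(c >= threshold for c in clutter)        # phantom objects
    return false_negatives, false_positives

# Raising the threshold suppresses phantom braking but misses more
# real obstacles; lowering it does the reverse.
print(error_counts(0.5))   # fewer misses, some phantom braking
print(error_counts(0.65))  # no phantom braking, more misses
```

The point of the sketch is that no threshold eliminates both error types at once, which is why these systems necessarily involve safety tradeoffs.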

This is a critical point in itself: Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.

For this reason, a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. As I note in a forthcoming book chapter (“Regulation and the Risk of Inaction”), this simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.
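The weighted-rules idea can be sketched in a few lines of Python. Everything here — the rule names, the weights, and the satisfaction scores for each candidate maneuver — is a hypothetical toy chosen for illustration, not any real vehicle's control logic:

```python
# Illustrative sketch of weighting general rules of behavior.
# All rules, weights, and scores below are invented assumptions.

RULE_WEIGHTS = {
    "avoid_humans": 10.0,   # weighted far above the other rules
    "avoid_obstacles": 3.0,
    "decelerate": 2.0,
    "stay_in_lane": 1.0,
}

def score_maneuver(rule_satisfaction):
    """rule_satisfaction maps each rule to a 0..1 estimate of how well a
    candidate maneuver satisfies it; higher weighted sum is better."""
    return sum(RULE_WEIGHTS[rule] * s for rule, s in rule_satisfaction.items())

# Two hypothetical candidate maneuvers in an emergency.
candidates = {
    "brake_in_lane": {"avoid_humans": 1.0, "avoid_obstacles": 0.4,
                      "decelerate": 1.0, "stay_in_lane": 1.0},
    "swerve_right":  {"avoid_humans": 0.9, "avoid_obstacles": 0.8,
                      "decelerate": 0.3, "stay_in_lane": 0.0},
}

best = max(candidates, key=lambda m: score_maneuver(candidates[m]))
print(best)
```

Note that no maneuver-by-maneuver ethical deliberation happens here: the general rules and their relative weights are fixed in advance, and the vehicle simply picks the candidate that best satisfies them — which is exactly the simplification the paragraph above proposes.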


See all the latest robotics news on Robohub, or sign up for our weekly newsletter.





Bryant Walker Smith is an expert on the legal aspects of autonomous driving and a fellow at Stanford Law School.






©2021 - ROBOTS Association


 











