Uber’s fatal crash
by CIS Blog
March 21, 2018
By Bryant Walker Smith
An automated vehicle in Uber’s fleet fatally struck a woman crossing a street in Arizona. A few points pending more information:
- This sad incident will test whether Uber is becoming a trustworthy company. Uber needs to be unflinchingly candid and unfailingly helpful in the multiple investigations that are likely to result. It shouldn’t even touch its onboard and offboard systems unless credible observers are present. In this crash, a multitude of data will likely be available to help understand what happened—but only if those data can be believed.
- The circumstances of this crash certainly suggest that something went wrong. Was the vehicle traveling at a speed appropriate for the conditions? Did the automated driving system and the safety driver recognize the victim, predict her path, and respond appropriately? The lawfulness of the victim’s actions is only marginally relevant to the technical performance of Uber’s testing system (which includes both vehicle and driver).
- Regardless of whether this crash was unavoidable, serious developers and regulators of automated driving systems understand that tragedies will occur. Automated driving is a challenging work in progress that may never be perfected, and I would be skeptical of anyone who claims that automated driving is a panacea—or who expresses shock that it is not.
- However, this incident was uncomfortably soon in the history of automated driving. In the United States, there’s about one fatality for every 100 million vehicle miles traveled, and automated vehicles are nowhere close to reaching this many real-world miles. This fatality, arguably the first of its kind, may not tell us much statistically, but neither is it reassuring.
- On the same day that this tragic crash happened, about 100 other people died in crashes in the United States alone. Although they won’t make international news, their deaths are also tragedies. And most of them will have died because of human recklessness, including speeding, drinking, aggression, and distraction. This is a public health crisis, and automated driving may play an important role (though by no means the only role) in addressing it. In short: We should remain concerned about automated driving but terrified about conventional driving.
- Technologies are understood through stories—both good and bad. I don’t know how this tragic story will play out with a fickle public. Surprisingly, Tesla’s fatal 2016 crash doesn’t seem to have dramatically shifted attitudes toward driving technologies. But that was Tesla, and this is Uber. And whereas few people use Autopilot, almost everyone is a pedestrian.
- The current tragedy includes a long prologue that does not look good. In 2016, Uber refused to comply with California’s automated vehicle law, the state revoked the company’s vehicle registrations, Arizona’s governor tweeted “This is what OVER-regulation looks like! #ditchcalifornia,” and Uber trucked its vehicles down to his state.
- Developers need to show that they are worthy of the tremendous trust that regulators and the public necessarily place in them. They need to explain what they’re doing, why they believe it is reasonably safe, and why we should believe them. They need to candidly acknowledge their challenges and failures, and they need to readily mitigate the harms caused by those failures. I expand on these principles in a paper (“The Trustworthy Company”) forthcoming at newlypossible.org.
CIS Blog is produced by the Center for Internet and Society at Stanford Law School.