Robohub.org
 

Tesla’s fatal crash

by Bryant Walker Smith
26 April 2018




Source: Tesla

Tesla can do better than its current public response to the recent fatal crash involving one of its vehicles. I would like to see more introspection, credibility, and nuance.

Introspection

Over the last few weeks, Tesla has blamed the deceased driver and a damaged highway crash attenuator while lauding the performance of Autopilot, its SAE level 2 driver assistance system that appears to have directed a Model X into the attenuator. The company has also disavowed its own responsibility: “The fundamental premise of both moral and legal liability is a broken promise, and there was none here.”

In Tesla’s telling, the driver knew he should pay attention, he did not pay attention, and he died. End of story. The same logic would seem to apply if the driver had hit a pedestrian instead of a crash barrier. Or if an automaker had marketed an outrageously dangerous car accompanied by a warning that the car was, in fact, outrageously dangerous. In the 1980 comedy Airplane!, a television commentator dismisses the passengers on a distressed airliner: “They bought their tickets. They knew what they were getting into. I say let ’em crash.” As a rule, it’s probably best not to evoke a character in a Leslie Nielsen movie.

It may well turn out that the driver in this crash was inattentive, just as the US National Transportation Safety Board (NTSB) concluded that the Tesla driver in an earlier fatal Florida crash was inattentive. But driver inattention is foreseeable (and foreseen), and “[j]ust because a driver does something stupid doesn’t mean they – or others who are truly blameless – should be condemned to an otherwise preventable death.” Indeed, Ralph Nader’s argument that vehicle crashes are foreseeable and could be survivable led Congress to establish the National Highway Traffic Safety Administration (NHTSA).

Airbags are a particularly relevant example. Airbags are unquestionably a beneficial safety technology. But early airbags were designed for average-size male drivers—a design choice that endangered children and lighter adults. When this risk was discovered, responsible companies did not insist that because an airbag is safer than no airbag, nothing more should be expected of them. Instead, they designed second-generation airbags that are safer for everyone.

Similarly, an introspective company—and, for that matter, an inquisitive jury—would ask whether and how Tesla’s crash could have been reasonably prevented. Tesla has appropriately noted that Autopilot is neither “perfect” nor “reliable,” and the company is correct that the promise of a level 2 system is merely that the system will work unless and until it does not. Furthermore, individual autonomy is an important societal interest, and driver responsibility is a critical element of road traffic safety. But it is because driver responsibility remains so important that Tesla should consider more effective ways of engaging and otherwise managing the imperfect human drivers on which the safe operation of its vehicles still depends.

Such an approach might include other ways of detecting driver engagement. NTSB has previously expressed concern about relying on steering wheel torque alone as a proxy for driver attention, and GM’s level 2 system, Super Cruise, already tracks driver head position.

Such an approach may also include more aggressive measures to deter distraction. Tesla could alert law enforcement when drivers are behaving dangerously. It could also distinguish safety features from convenience features—and then more stringently condition the convenience features on the driver’s concurrent attention. For example, active lane keeping (which might ping-pong the vehicle between lane boundaries) could enhance safety even if active lane centering is inoperative. Similarly, automatic deceleration could enhance safety even if automatic acceleration is inoperative.

NTSB’s ongoing investigation is an opportunity to credibly address these issues. Unfortunately, after publicly stating its own conclusions about the crash, Tesla is no longer formally participating in NTSB’s investigation. Tesla faults NTSB for this outcome: “It’s been clear in our conversations with the NTSB that they’re more concerned with press headlines than actually promoting safety.” That is not my impression of the people at NTSB. Regardless, Tesla’s argument might be more credible if it did not continue what seems to be the company’s pattern of blaming others.

Credibility

Tesla could also improve its credibility by appropriately qualifying and substantiating what it says. Unfortunately, Tesla’s claims about the relative safety of its vehicles still range from “lacking” to “ludicrous on their face.” (Here are some recent views.)

Tesla repeatedly emphasizes that “our first iteration of Autopilot was found by the U.S. government to reduce crash rates by as much as 40%.” NHTSA reached its conclusion after (somehow) analyzing Tesla’s data—data that both Tesla and NHTSA have kept from public view. Accordingly, I don’t know whether the underlying math actually took only five minutes, but I can attempt some crude reverse engineering to complement the thoughtful analyses already done by others.

Let’s start with NHTSA’s summary: The Office of Defects Investigation (ODI) “analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. [An accompanying chart] shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation”—from 1.3 to 0.8 crashes per million miles.
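For readers who want to check that figure, the arithmetic is a single ratio. Here is a minimal sketch in Python using only the two rates quoted in the report; nothing else from NHTSA’s analysis is reproduced or implied here.

    # Rates quoted in NHTSA's report: airbag-deployment crashes per million miles.
    rate_before_autosteer = 1.3
    rate_after_autosteer = 0.8

    relative_reduction = (rate_before_autosteer - rate_after_autosteer) / rate_before_autosteer
    print(f"relative reduction: {relative_reduction:.0%}")  # ~38%, i.e. "almost 40 percent"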

This raises at least two questions. First, how do these rates compare to those for other vehicles? Second, what explains the asserted decline?

Comparing Tesla’s rates is especially difficult because of a qualification that NHTSA’s report mentions only once and that Tesla’s statements do not acknowledge at all. The rates calculated by NHTSA are for “airbag deployment crashes” only—a category that NHTSA does not generally track for nonfatal crashes.

NHTSA does estimate rates at which vehicles are involved in crashes. (For a fair comparison, I look at crashed vehicles rather than crashes.) With respect to crashes resulting in injury, 2015 rates were 0.88 crashed vehicles per million miles for light trucks and 1.26 for passenger cars. And with respect to property-damage-only crashes, they were 2.35 for light trucks and 3.12 for passenger cars. This means that, depending on the correlation between airbag deployment and crash injury (and accounting for the increasing number and sophistication of airbags), Tesla’s rates could be better than, worse than, or comparable to these national estimates.

Airbag deployment is a complex topic, but the upshot is that, by design, airbags do not always inflate. An analysis by the Pennsylvania Department of Transportation suggests that airbags deploy in less than half of the airbag-equipped vehicles that are involved in reported crashes, which are generally crashes that cause physical injury or significant property damage. (The report’s shift from reportable crashes to reported crashes creates some uncertainty, but let’s assume that any crash that results in the deployment of an airbag is serious enough to be counted.)

Data from the same analysis show about two reported crashed vehicles per million miles traveled. Assuming a deployment rate of 50 percent suggests that a vehicle deploys an airbag in a crash about once every million miles that it travels, which is roughly comparable to Tesla’s post-Autopilot rate.
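That back-of-envelope comparison is easy to reproduce. A rough sketch in Python, with the 50 percent deployment rate treated explicitly as the assumption it is:

    # Pennsylvania figures cited above; the deployment rate is an assumed value.
    crashed_vehicles_per_million_miles = 2.0   # reported crashed vehicles per million miles traveled
    assumed_deployment_rate = 0.5              # assumed share of crashed vehicles that deploy an airbag

    estimated_deployments_per_million_miles = crashed_vehicles_per_million_miles * assumed_deployment_rate
    tesla_post_autosteer_rate = 0.8            # NHTSA's post-Autosteer figure quoted earlier

    print(estimated_deployments_per_million_miles)  # 1.0 per million miles
    print(tesla_post_autosteer_rate)                # 0.8 per million miles, roughly comparable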

Indeed, at least two groups with access to empirical data—the Highway Loss Data Institute and AAA – The Auto Club Group—have concluded that Tesla vehicles do not have a low claim rate (in addition to having a high average cost per claim), which suggests that these vehicles do not have a low crash rate either.

Tesla offers fatality rates as another point of comparison: “In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.”

In 2016, there was one fatality for every 85 million vehicle miles traveled—close to the number cited by Tesla. For that same year, NHTSA’s FARS database shows 14 fatalities across 13 crashes involving Tesla vehicles. (Ten of these vehicles were model year 2015 or later; I don’t know whether Autopilot was equipped at the time of the crash.) By the end of 2016, Tesla vehicles had logged about 3.5 billion miles worldwide. If we accordingly assume that Tesla vehicles traveled 2 billion miles in the United States in 2016 (less than one tenth of one percent of US VMT), we can estimate one fatality for every 150 million miles traveled.
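The estimate above reduces to one division. Here is a sketch in Python, with the 2-billion-mile US figure flagged as the assumption it is:

    # FARS shows 14 fatalities in 2016 crashes involving Tesla vehicles (see above).
    tesla_fatalities_2016 = 14
    assumed_us_miles_2016 = 2_000_000_000      # assumed US miles traveled by Tesla vehicles in 2016

    miles_per_fatality = assumed_us_miles_2016 / tesla_fatalities_2016
    print(f"{miles_per_fatality / 1e6:.0f} million miles per fatality")  # ~143, i.e. roughly 150 million
    # Compare: about one fatality per 85 million vehicle miles traveled across the whole US fleet.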

It is not surprising if Tesla’s vehicles are less likely to be involved in a fatal crash than the US vehicle fleet in its entirety. That fleet, after all, has an average age of more than a decade. It includes vehicles without electronic stability control, vehicles with bald tires, vehicles without airbags, and motorcycles. Differences between crashes involving a Tesla vehicle and crashes involving no Tesla vehicles could therefore have nothing to do with Autopilot.

More surprising is the statement that Tesla vehicles equipped with Autopilot are much safer than Tesla vehicles without Autopilot. At the outset, we don’t know how often Autopilot was actually engaged (rather than merely equipped), we don’t know the period of comparison (even though crash and injury rates fluctuate over the calendar year), and we don’t even know whether this conclusion is statistically significant. Nonetheless, on the assumption that the unreleased data support this conclusion, let’s consider three potential explanations:

First, perhaps Autopilot is incredibly safe. If we assume (again, because we just don’t know otherwise) that Autopilot is actually engaged for half of the miles traveled by vehicles on which it is installed, then a 40 percent reduction in airbag deployments per million miles really means an 80 percent reduction in airbag deployments while Autopilot is engaged. Pennsylvania data show that about 20 percent of vehicles in reported crashes are struck in the rear, and if we further assume that Autopilot would rarely prevent another vehicle from rear-ending a Tesla, then Autopilot would essentially need to prevent every other kind of crash while engaged in order to achieve such a result. (The arithmetic is sketched after the third explanation below.)

Second, perhaps Tesla’s vehicles had a significant performance issue that the company corrected in an over-the-air update at or around the same time that it introduced Autopilot. I doubt this—but the data released are as consistent with this conclusion as with a more favorable one.

Third, perhaps Tesla introduced or upgraded other safety features in one of these OTA updates. Indeed, Tesla added automatic emergency braking and blind spot warning about half a year before releasing Autopilot, and Autopilot itself includes side collision avoidance. Because these features may function even when Autopilot is not engaged and might not induce inattention to the same extent as Autopilot, they should be distinguished from rather than conflated with Autopilot. I can see an argument that more people will be willing to pay for convenience plus safety than for just safety alone, but I have not seen Tesla make this more nuanced argument.
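To make the arithmetic behind the first explanation explicit, here is a sketch in Python under its stated assumptions (Autopilot engaged for half of the miles traveled; roughly 20 percent of crashed vehicles struck in the rear):

    # Both key inputs below are assumptions stated in the first explanation above.
    overall_reduction = 0.40        # NHTSA's "almost 40 percent" across all miles
    assumed_engaged_share = 0.50    # assumed fraction of miles with Autopilot actually engaged

    # If only engaged miles benefit:
    #   engaged_share * (1 - x) + (1 - engaged_share) * 1 = 1 - overall_reduction
    # Solving for x, the reduction while engaged:
    reduction_while_engaged = overall_reduction / assumed_engaged_share
    print(reduction_while_engaged)  # 0.8, i.e. an 80% reduction while engaged

    rear_end_share = 0.20           # share of crashed vehicles struck in the rear (Pennsylvania data)
    max_achievable_reduction = 1 - rear_end_share
    print(max_achievable_reduction) # 0.8: Autopilot would have to prevent essentially
                                    # every non-rear-end crash while it is engaged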

Nuance

In general, Tesla should embrace more nuance. Currently, the company’s explicit and implicit messages regarding this fatal crash have tended toward the absolute. The driver was at fault—and therefore Tesla was not. Autopilot improves safety—and therefore criticism is unwarranted. The company needs to be able to communicate with the public about Autopilot—and therefore it should share specific and, in Tesla’s view, exculpatory information about the crash that NTSB is investigating.

Tesla understands nuance. Indeed, in its statement regarding its relationship with NTSB, the company noted that “we will continue to provide technical assistance to the NTSB.” Tesla should embrace a systems approach to road traffic safety and acknowledge the role that the company can play in addressing distraction. It should emphasize the limitations of Autopilot as vigorously as it highlights the potential of automation. And it should cooperate with NTSB while showing that it “believe[s] in transparency” by releasing data that do not pertain specifically to this crash but that do support the company’s broader safety claims.

For good measure, Tesla should also release a voluntary safety self-assessment. (Waymo and General Motors have.) Autopilot is not an automated driving system, but that is where Tesla hopes to go. And by communicating with introspection, credibility, and nuance, the company can help make sure the public is on board.



Bryant Walker Smith is an expert on the legal aspects of autonomous driving and a fellow at Stanford Law School.




