Comparing the Uber and Tesla fatalities with a table


by Brad Templeton
04 April 2018




The Uber car and Tesla’s Autopilot, both in the news for fatalities, are really two very different things. This table outlines the differences. Also, see below for some new details on why the Tesla crashed, and more.

| Uber ATG Test | Tesla Autopilot |
| --- | --- |
| A prototype full robocar capable of unmanned operations on city streets | A driver assist system for highways and expressways |
| Designed for taxi service | Designed for privately owned and driven cars |
| A full suite of high-end robocar sensors, including LIDAR | Production automotive sensors: cameras and radar |
| 1 pedestrian fatality; other accidents unknown | Fatalities in Florida, China and California; other serious crashes without injury |
| Approximately 3 million miles of testing | As of late 2016: 300 million miles driven, 1.3 billion miles of data gathering |
| A prototype in testing which needs a human safety driver monitoring it | A production product overseen by the customer |
| Designed to handle everything it might encounter on the road | Designed to handle only certain situations; users are expressly warned it doesn’t handle major things like cross traffic, stop signs and traffic lights |
| Still at an early stage, needing intervention every 13 miles on city streets | In production, needing intervention rarely on highways; on city streets it would need intervention very frequently |
| Needs a state license for testing, with rules requiring safety drivers | No government regulation needed, similar to the adaptive cruise control it is based on |
| Only Uber employees can get behind the wheel | Anybody can be behind the wheel |
| Vehicle failed in a manner outside its design constraints: it should have readily detected and stopped for the pedestrian | Vehicles had incidents in ways expected under their design constraints |
| Vehicle was trusted too much by the safety driver, who took their eyes off the road for 5 seconds | Vehicles trusted too much by drivers, who took their eyes off the road for 6 seconds or longer |
| Safety drivers get 3 weeks of training and are fired if caught using a phone | No training or punishments for customers, though the manual and screen describe proper operating procedures |
| Safety driver recorded by camera; no warnings from the software about inattention | Tesla drivers get visible, then audible, alerts if they take their hands off the wheel for too long |
| Criticism that the solo safety driver job is too hard, and that inattention will happen | Criticism that drivers overtrust the system, regularly not looking at the road |
| Killed a bystander, though it had right of way | Killed customers who were ignoring monitoring requirements |
| NTSB investigating | NTSB investigating |

Each company faces a different challenge in fixing its problems. Uber needs to improve the quality of its self-drive software so that a basic failure like the one we saw here is extremely unlikely. Perhaps even more importantly, it needs to revamp its safety driver system so that safety driver alertness is monitored and assured, including going back to two safety drivers in all situations. Further, it should consider some “safety driver assist” technology, such as using the system built into the Volvo (or some other aftermarket system) to alert the safety drivers if it looks like something is going wrong. That’s not trivial: if the system beeps too much it gets ignored, but it can be done.
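
To give a sense of how such an alert could be throttled so it doesn’t become background noise, here is a minimal sketch in Python. The class name, the hazard-score input, the threshold and the cooldown are all invented for illustration; they don’t describe Uber’s, Volvo’s or anyone else’s actual system.

```python
import time

class SafetyDriverAlerter:
    """Illustrative alert policy: only warn the safety driver when the hazard
    estimate is high, and never more often than once per cooldown window,
    so the warning keeps its meaning instead of becoming noise."""

    def __init__(self, hazard_threshold=0.8, cooldown_s=10.0):
        self.hazard_threshold = hazard_threshold
        self.cooldown_s = cooldown_s
        self._last_alert = float("-inf")

    def update(self, hazard_score, now=None):
        """Return True if an alert should sound for this hazard estimate."""
        now = time.monotonic() if now is None else now
        if hazard_score < self.hazard_threshold:
            return False
        if now - self._last_alert < self.cooldown_s:
            return False  # suppress: we alerted very recently
        self._last_alert = now
        return True

# Example: only the first of two near-simultaneous high-hazard frames alerts.
alerter = SafetyDriverAlerter()
print(alerter.update(0.90, now=0.0))   # True  -> beep
print(alerter.update(0.95, now=2.0))   # False -> still in cooldown
print(alerter.update(0.85, now=15.0))  # True  -> beep again
```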

Tesla, for its part, faces a more interesting challenge. Its claim is that, in spite of the accidents, the Autopilot is still a net win: because people who drive properly with the Autopilot have half the accidents of people who drive without it, the total number of accidents is still lower, even when you count the crashes, including these fatalities, that come to those who disregard the warnings about how to use it properly.
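
To see the shape of that argument, here is a tiny back-of-the-envelope calculation. All of the numbers in it (the baseline crash rate, the split between attentive and inattentive drivers, the risk multipliers) are hypothetical placeholders, not Tesla figures; the point is only that a large benefit for proper users can outweigh the added risk from misusers.

```python
# Hypothetical model of the "net win" argument. None of these numbers come
# from Tesla; they are placeholders chosen only to illustrate the arithmetic.

baseline_rate = 1.0           # crashes per million miles, manual driving (normalized)

attentive_share = 0.9         # fraction of Autopilot miles driven with proper attention
inattentive_share = 0.1       # fraction driven while ignoring the monitoring warnings

attentive_multiplier = 0.5    # the claim: proper Autopilot use halves the crash rate
inattentive_multiplier = 2.0  # assumed penalty for misuse (hypothetical)

autopilot_rate = (attentive_share * attentive_multiplier +
                  inattentive_share * inattentive_multiplier) * baseline_rate

print(f"Manual driving:    {baseline_rate:.2f} crashes per million miles")
print(f"Autopilot driving: {autopilot_rate:.2f} crashes per million miles")
# With these placeholder numbers the blended Autopilot rate is 0.65, still
# below 1.0 even though misusers crash twice as often as manual drivers.
```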

That people disregard those warnings is obvious and hard to stop. Tesla argues, however, that turning off the Autopilot because of them would make Tesla driving, and the world, less safe. Options exist to make people drive diligently with the Autopilot, but Tesla must not make it so much less pleasant that people stop using it altogether, even in the proper way. That would actually make driving less safe if enough people did it.

Why the Tesla crashed

A theory, now given credence by some sample videos, suggests the Tesla was confused by the white lines which divide the road at an off-ramp, the expanding triangle known as the “gore.” As the triangle expands, a simple system might think its borders were the edges of a lane. Poor lane markings along the gore might even make the vehicle think the new “lane” is a continuation of the lane the car is in, making the car try to drive that lane, right into the barrier.
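
Here is a toy sketch of that failure mode, under my own simplifying assumption that a naive lane follower just steers toward the midpoint of the strongest line it sees on each side. This is not how Tesla’s Autopilot actually works; it only illustrates how a washed-out line at a gore can shift the perceived lane.

```python
# Toy illustration of the gore-lane theory. The geometry and "strength"
# values are invented; this is not Tesla's actual lane detection.

def lane_center(detections, min_strength=0.3):
    """Pick the strongest line on each side of the car and aim for their
    midpoint. detections: list of (lateral_offset_m, strength) tuples,
    with negative offsets to the left of the car."""
    left = [d for d in detections if d[0] < 0 and d[1] >= min_strength]
    right = [d for d in detections if d[0] > 0 and d[1] >= min_strength]
    if not left or not right:
        return None  # no confident lane estimate
    l = max(left, key=lambda d: d[1])[0]
    r = max(right, key=lambda d: d[1])[0]
    return (l + r) / 2.0

# Normal lane: clear lines at -1.8 m and +1.8 m, so the car aims straight ahead.
print(lane_center([(-1.8, 0.9), (1.8, 0.9)]))  # 0.0

# Near the gore: the true lane line on one side is washed out (strength 0.2,
# below threshold), while the diverging gore line reads strongly further out.
# The computed "center" drifts out of the real lane, toward the gore.
print(lane_center([(-1.8, 0.9), (1.8, 0.2), (3.5, 0.8)]))  # about 0.85
```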

This video, made by Tesla owners near Indiana, shows a Tesla doing this when the right line of the gore is very washed out compared to the left. At 85/101 (the site of the recent Tesla crash) the lines are mostly stronger, but there is a 30-40 foot gap in the right line which could perhaps trick a car into entering and following the gore. The gore at 85/101 is also missing the chevron “do not drive here” stripes often found at such gores. The Autopilot is not good at detecting stationary objects like the crumple barrier, but the barrier’s warning stripes are something that should be in the car’s classification database.

Once again, the Tesla is just a smart cruise control. It is going to make mistakes like this, which is why they tell you that you have to keep watching. Perhaps crashes like this will make people do that.

The NTSB is angry that Tesla released any information. I was not aware they frowned on this. This may explain Uber’s silence during the NTSB investigation there.




Brad Templeton, Robocars.com, is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.




