Comparing the Uber and Tesla fatalities with a table


by Brad Templeton
04 April 2018




The Uber car and Tesla's Autopilot, both in the news for fatalities, are really two very different things. This table outlines the differences. Also, see below for new details on why the Tesla crashed, and more.

Uber ATG Test | Tesla Autopilot
--- | ---
A prototype full robocar capable of unmanned operations on city streets | A driver-assist system for highways and expressways
Designed for taxi service | Designed for privately owned and driven cars
A full suite of high-end robocar sensors, including LIDAR | Production automotive sensors: cameras and radar
One pedestrian fatality; other accidents unknown | Fatalities in Florida, China, and California; other serious crashes without injury
Approximately 3 million miles of testing | As of late 2016: 300M miles driven, 1.3B miles of data gathering
A prototype in testing, which needs a human safety driver monitoring it | A production product overseen by the customer
Designed to handle everything it might encounter on the road | Designed to handle only certain situations; users are expressly warned it does not handle major things like cross traffic, stop signs, and traffic lights
Still at an early stage, needing intervention every 13 miles on city streets | In production, rarely needing intervention on highways; on city streets it would need intervention very frequently
Needs a state license for testing, with rules requiring safety drivers | No government regulation needed, like the adaptive cruise control it is based on
Only Uber employees can get behind the wheel | Anybody can be behind the wheel
Vehicle failed in a manner outside its design constraints; it should have readily detected and stopped for the pedestrian | Vehicles had incidents in ways expected under their design constraints
Vehicle was trusted too much by the safety driver, whose eyes were off the road for 5 seconds | Vehicles trusted too much by drivers, whose eyes were off the road for 6 seconds or longer
Safety drivers get 3 weeks of training and are fired if caught using a phone | No training or punishment for customers, though the manual and screen describe proper operating procedures
Safety driver recorded with a camera; no software warnings of inattention | Tesla drivers get visible, then audible, alerts if they take their hands off the wheel for too long
Criticism that the solo safety driver job is too hard and that inattention will happen | Criticism that drivers overtrust the system, regularly not looking at the road
Killed a bystander, though the vehicle had right of way | Killed customers who were ignoring monitoring requirements
NTSB investigating | NTSB investigating

Each company faces a different challenge to fix its problems. Uber needs to improve the quality of its self-drive software so that a failure as basic as the one we saw here is extremely unlikely. Perhaps even more importantly, it needs to revamp its safety driver system so that safety driver alertness is monitored and assured, including going back to two safety drivers in all situations. Further, it should consider some "safety driver assist" technology, such as the system in the Volvo (or some other aftermarket system), to alert the safety drivers if it looks like something is going wrong. That's not trivial (if the system beeps too much, it gets ignored), but it can be done.
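To make the "beeps too much" problem concrete, here is a minimal sketch of safety-driver alert logic that debounces and escalates instead of chiming at every transient event. The risk score and all thresholds are invented for illustration; this is not Uber's or Volvo's actual system.

```python
# Hypothetical escalating-alert logic for a "safety driver assist" system.
# Assumes some upstream perception module supplies a per-frame risk score
# in [0, 1]; none of these thresholds come from a real product.

RISK_THRESHOLD = 0.6    # ignore low-level noise entirely
SOFT_ALERT_AFTER = 1.0  # seconds of sustained risk before a quiet chime
HARD_ALERT_AFTER = 2.5  # seconds before a loud, hard-to-ignore alarm

class EscalatingAlert:
    def __init__(self):
        self.risk_start = None  # when the current risk episode began

    def update(self, risk: float, now: float) -> str:
        """Return 'none', 'soft', or 'hard' for the current frame."""
        if risk < RISK_THRESHOLD:
            self.risk_start = None  # risk cleared; reset the debounce
            return "none"
        if self.risk_start is None:
            self.risk_start = now   # a sustained-risk episode starts
        elapsed = now - self.risk_start
        if elapsed >= HARD_ALERT_AFTER:
            return "hard"
        if elapsed >= SOFT_ALERT_AFTER:
            return "soft"
        return "none"               # too brief to alert on yet
```

The debounce window is the whole point: brief transients never reach the safety driver, so the alerts that do fire keep their meaning.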

Tesla faces a more interesting challenge. Its claim is that, in spite of the accidents, the Autopilot is still a net win: because people who drive properly with Autopilot have half the accidents of people who drive without it, the total number of accidents is still lower, even counting the crashes, including these fatalities, that come to those who disregard the warnings about how to use it properly.
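A back-of-the-envelope version of that argument, with made-up numbers purely to show its shape:

```python
# Illustrative arithmetic only; these rates are invented, not Tesla data.
baseline = 1.0           # accidents per million miles without Autopilot
attentive_rate = 0.5     # the claim: attentive Autopilot use halves that
inattentive_rate = 1.5   # assumed penalty for ignoring the warnings
inattentive_share = 0.1  # assumed fraction of miles driven inattentively

fleet_rate = ((1 - inattentive_share) * attentive_rate
              + inattentive_share * inattentive_rate)
print(fleet_rate)             # 0.6 accidents per million miles
print(fleet_rate < baseline)  # True: a net win under these assumptions
```

The net win survives only while the inattentive share, and its accident penalty, stay small enough; that is precisely what critics dispute.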

That people disregard those warnings is obvious and hard to stop. Tesla argues, however, that turning off Autopilot because of them would make Tesla driving, and the world, less safe. Options exist to make people drive diligently with the Autopilot, but Tesla must not make the Autopilot so much less pleasant that people decide not to use it at all, even properly. That would actually make driving less safe, if enough people did it.

Why the Tesla crashed

A theory, now given credence by some sample videos, suggests the Tesla was confused by the white lines that divide the road at an off-ramp, forming the expanding triangle known as the "gore." As the triangle expands, a simple system might mistake its borders for the borders of a lane. Poor lane markings along the gore might even make the vehicle think the new "lane" is a continuation of the lane the car is in, so the car tries to drive that lane, right into the barrier.
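As a toy illustration (hypothetical geometry, not Tesla's actual lane tracker): a follower that simply steers toward the midpoint of the two strongest painted lines sees the diverging gore lines as a plausible, widening lane, and the midpoint of that "lane" points straight at the barrier.

```python
# Toy model of gore confusion; hypothetical, not Tesla's lane tracker.
# Values are lateral offsets in meters at increasing distances ahead.

def lane_center(left_x: float, right_x: float) -> float:
    """A naive follower steers toward the midpoint of its two lines."""
    return (left_x + right_x) / 2.0

# A normal lane: parallel lines, center stays put.
print(lane_center(-1.8, 1.8))  # 0.0 -> drive straight, fine

# Inside the gore the painted lines diverge, so they still look like a
# valid (if widening) lane, and the midpoint tracks the middle of the
# gore -- exactly where the barrier sits.
for left, right in [(-0.5, 0.5), (-1.0, 1.0), (-1.5, 1.5)]:
    print(lane_center(left, right))  # 0.0 every time: "keep going straight"
```

A sanity check on lane width (a lane that keeps widening is suspect), or recognition of chevron striping, would flag this case.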

This video, made by Tesla owners in Indiana, shows a Tesla doing exactly this when the right line of the gore is very washed out compared to the left. At 85/101 (the site of the recent Tesla crash) the lines are mostly stronger, but there is a 30-40 foot gap in the right line which could perhaps trick a car into entering and following the gore. The gore at 85/101 also lacks the chevron "do not drive here" stripes often found at these gores. The Tesla is not good at stationary objects like the crumple barrier, but warning stripes are something that should be in its classification database.

Once again, the Tesla is just a smart cruise control. It is going to make mistakes like this, which is why Tesla tells you that you have to keep watching. Perhaps crashes like this will make people do that.

The NTSB is angry that Tesla released any information. I was not aware they frowned on this. This may explain Uber’s silence during the NTSB investigation there.




Brad Templeton, Robocars.com, is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.




