Understanding the massive gulf between the Tesla Autopilot and a real robocar, in light of the crash

by Brad Templeton
11 July 2016



Tesla P85D car. Source: Tesla

Brad Templeton describes Tesla’s Autopilot as a ‘distant cousin of a real robocar’ that primarily uses a MobilEye EyeQ3 camera combined with radars and ultrasonic sensors. Unlike a real robocar, the Tesla doesn’t have a lidar and doesn’t use a map to help it understand the road and environment.

It’s not surprising there is much debate about the fatal Tesla Autopilot crash revealed to us last week. The big surprise to me is that Tesla and MobilEye stock seem entirely unaffected. For many years, one of the most common refrains I would hear in discussions about robocars was, “This is all great, but the first fatality and it’s all over.” I never believed it would all be over, but I didn’t think there would barely be a blip.

There have been lots of blips in the press and online, of course, but most have rested on some pretty wrong assumptions. Tesla’s Autopilot is a distant cousin of a real robocar, which is why the fatality is no big deal for the field, but the coverage shows that people don’t know that.

Tesla’s Autopilot is really a fancy cruise control. It combines several key features from the ADAS (Advanced Driver Assistance Systems) world, such as adaptive cruise control, lane-keeping and forward collision avoidance, among others. All these features have been in cars for years, and they are also combined in similar products in other cars, both commercial offerings and demonstrated prototypes. In fact, Honda promoted such a function over 10 years ago!

Tesla’s Autopilot primarily uses the MobilEye EyeQ3 camera, combined with radars and some ultrasonic sensors. It doesn’t have a lidar (the gold standard in robocar sensors) and it doesn’t use a map to help it understand the road and environment.

Most importantly, it is far from complete. There is tons of stuff it’s not able to handle. Some of those things it can’t do are known, some are unknown. Because of this, it is designed to only work under constant supervision by a driver. Tesla drivers get this explained in detail in their manual and when they turn on the Autopilot.

ADAS cars are declared not to be self-driving cars in many state laws

This is nothing new — lots of cars have lots of features to help drive (including the component features listed above, such as adaptive cruise control, each available on its own) which are not good enough to drive the car, and are only supposed to augment an alert driver, not replace one. Because car companies have been selling things like this for years, when the first robocar laws were drafted, they made sure there was a carve-out in the laws so that their systems would not be subject to the robocar regulations companies like Google wanted.

The Florida law, similar to other laws, says:

“The term [Autonomous Vehicle] excludes a motor vehicle enabled with active safety systems or driver assistance systems, including, without limitation, a system to provide electronic blind spot assistance, crash avoidance, emergency braking, parking assistance, adaptive cruise control, lane keep assistance, lane departure warning, or traffic jam and queuing assistant, unless any such system alone or in combination with other systems enables the vehicle on which the technology is installed to drive without the active control or monitoring by a human operator.”

The Tesla’s failure to see the truck was not surprising

There’s been a lot of writing (and I did some of it) about the particulars of the failure of Tesla’s technology, and what might be done to fix it. That’s an interesting topic, but it misses a very key point. Tesla’s system did not fail. It operated within its design parameters, and according to the way Tesla describes it in its manuals and warnings. The Tesla system, not being a robocar system, has tons of stuff it does not properly detect. A truck crossing the road is just one of those things. It is also poor at detecting stopped vehicles, and there are many other situations it handles badly.

Tesla could (and in time, will) fix the system’s problem with cross traffic. (MobilEye itself has that planned for its EyeQ4 chip coming out in 2018, and freely admits that the EyeQ3 Tesla uses does not detect cross traffic well.) But fixing that problem would not change what the system is, and not change the need for constant monitoring that Tesla has always declared it to have.

People must understand that the state of the art in camera systems is not today anywhere near the level needed for a robocar. That’s why most advanced robocar research projects use LIDAR. There are those (including Tesla and MobilEye) who hope that the day might come soon when a camera can do it, but that day is not yet here. As such, any camera-based car is going to make mistakes like these. Fix this one and there will be another. While the Tesla Autopilot failed to see the truck, that was an error, a tragic one, but not a failure of the Autopilot. It was an expected limitation, one of many. The system performed as specified, and as described in the Tesla user manual and the warnings to drivers. I will say this again: the Tesla Autopilot did not fail; it made an error expected under its design parameters.

The problem of getting too good — and punishing that

No, the big issue here is not what the Tesla Autopilot can’t handle, but the opposite. The issue is that it has gotten good enough that people are mistaking it for a self-driving car, and they are taking their eyes off the road. Many stories abound of people doing e-mail, and there is an allegation that Brown, the deceased driver of this Tesla, might have been watching a movie. Movie or not, Brown was not paying attention; had he been, he could easily have seen the truck and braked in time to avoid hitting it.

Tesla warns explicitly not to do that. Brown was a fairly skilled guy and he should also have known not to do that (if he did). But as the Autopilot has gotten better, there is no question that people are not heeding the warning and are getting a bit reckless. And Tesla knows this, of course.

This brings up all sorts of issues. Does it matter that Tesla knows people are ignoring the warnings, so long as it delivers those warnings sternly? Is there a duty of care to warn people that there is a danger they will ignore the warnings? Is there a duty of care to make sure (with technology) that they don’t ignore them?

Perhaps you see the problem here — the better the system gets, the more likely it is that it will make people complacent. People are stupid. They get away with something a few times and they imagine they will always get away with it. Older supervised products like cruise control needed your attention fairly often; there was no way you could watch a movie when using them. Cruise control needs your intervention to steer a few times a minute. Early autopilot-like systems need your intervention every few minutes. But Tesla got good enough that on open highways, it might not need your intervention for hours, or the entire highway part of a trip. Eventually it will get to not needing it for days, or even a month. Who would not be tempted to look away for a little while, and then a lot, if it gets there — or gets to needing only one intervention a year?

Our minds are bad at statistics. We, human drivers, have a tiny accident perhaps once every 10 years on average and one that is reported to insurance about every 25 years. We have a fatal accident about every 8,000 years of average driving, closer to 20,000 years on the highway. A rate of just one serious intervention every year sounds amazingly trustworthy. It seems like you took a bigger risk just getting on the road. But in fact, it’s very poor performance. Most people agree that to be a true robocar, a car will need to make an accident-causing mistake less often than a human does, perhaps even half as often or better. And the Tesla isn’t even remotely close. Even Google, which is much, much closer, isn’t there yet.
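
A rough back-of-the-envelope check of where figures like these come from. The inputs below (about 12,500 miles driven per year, roughly one US fatality per 100 million vehicle-miles) are approximate assumptions for illustration, not numbers from the text:

    # Approximate inputs (assumptions for illustration, not official figures).
    MILES_PER_YEAR = 12_500                       # typical annual mileage for one driver
    MILES_PER_HUMAN_FATAL_MISTAKE = 100_000_000   # ~1 fatality per 100M vehicle-miles

    years_between_fatal_mistakes = MILES_PER_HUMAN_FATAL_MISTAKE / MILES_PER_YEAR
    print(f"~{years_between_fatal_mistakes:,.0f} years of average driving per fatal mistake")
    # prints ~8,000 years, matching the figure above

    # By contrast, a car that needs one serious intervention per year is making a
    # potentially dangerous mistake roughly every 12,500 miles, thousands of times
    # more often than a human driver makes a fatal one.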

The incremental method

But we want systems to get better. It seems wrong to have to say that the better a system gets, the more dangerous it is. That a company should face extra liability for making the system better than the others. That’s not the direction we want to go. It’s definitely not the way that all the car companies want to go. They want to build their self-driving car in an evolutionary incremental way. They want to put supervised autopilots out on the road, and keep improving them until one day they wake up and say, “Hey, our system hasn’t needed an intervention on this class of roads for a million miles!” That day they can consider making it a real robocar for those roads. That’s different from companies like Google, Uber, Apple, Zoox and other non-car companies who want to aim directly for the final target, and are doing so with different development and testing methods.

Other views on the complacency issue

It should be noted that most other automakers who have made similar products have been much keener on using tools to stop drivers from getting complacent and failing to be diligent in supervising. Some make you touch the wheel every few seconds. Some have experimented with cameras to watch the driver’s eyes. GM has been announcing a “Super Cruise” product for higher-end Cadillacs for several years, but every year has pulled back on shipping it, not feeling it has sufficient “countermeasures” to stop accidents like the Tesla one.

Back in 2012-2013, Google famously had its own employees test a car that was quite a bit superior to the Tesla. Even though Google was working on a car that would not need supervision on certain routes, they required their employees (regular employees, not those on the car team) to nonetheless pay attention. They found that in spite of their warnings, about a week into commuting, some of these employees would do things that made the car team uncomfortable. This is what led Google to decide to make a prototype with no steering wheel, setting the bar higher for the team, requiring the car to handle every situation and not depend on an unreliable human.

Putting people at risk

Tesla drivers are ignoring warnings and getting complacent, putting themselves and others at risk. But cars are full of features that cause this. The Tesla, and many other sports cars, can accelerate like crazy. It can drive 150mph. And every car maker knows that people take their muscle cars and speed in them, sometimes excessively and recklessly. We don’t blame the car makers for making cars capable of this, or for knowing this will happen. Car makers put in radios, knowing folks will get distracted fiddling with them. Car makers know many of their customers will drive their cars drunk, and kill lots of people. Generally, the law and society do not want to blame car makers for not fitting a breath-alcohol test. People drive sleepy and half-blind from cataracts, and we always blame the driver for being reckless.

There is some difference between enabling a driver to take risks, and encouraging complacency. Is that enough of a difference to change how we think about this? If we change how we think, how much will we slow down the development of technology that in the long term will prevent many accidents and save many lives?

In an earlier post, I suggested Tesla might use more “countermeasures” to assure drivers are being diligent. Other automakers have deployed or experimented with such systems, or even held back on releasing their similar products because they want more countermeasures. At the same time, many of the countermeasures are annoying, and people worry they might discourage use of the autopilots. Indeed, my proposal (that if you fail to pay attention, your autopilot is disabled for the rest of the day, and eventually permanently) would frighten customers. They love their autopilot and would value it less if they had to worry about losing it. But such people are also the sort of people who are making the mistake of thinking a car that needs intervention once a year is essentially the same as a robocar.
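
To make that proposal concrete, here is a minimal sketch of what such an escalating countermeasure could look like. The structure and thresholds (a lockout for the rest of the day, permanent disablement after repeated lapses) are hypothetical and purely illustrative, not anything Tesla actually ships:

    from datetime import date
    from typing import Optional

    class AttentionLockoutPolicy:
        """Hypothetical escalating countermeasure: a detected lapse of driver
        attention disables the autopilot for the rest of the day; repeated
        lapses disable it permanently. Thresholds are illustrative only."""

        def __init__(self, permanent_after_lapses: int = 3):
            self.permanent_after_lapses = permanent_after_lapses
            self.total_lapses = 0
            self.locked_out_day: Optional[date] = None
            self.permanently_disabled = False

        def record_attention_lapse(self, today: date) -> None:
            self.total_lapses += 1
            self.locked_out_day = today            # disabled for the rest of today
            if self.total_lapses >= self.permanent_after_lapses:
                self.permanently_disabled = True   # escalate to permanent loss

        def autopilot_allowed(self, today: date) -> bool:
            if self.permanently_disabled:
                return False
            return self.locked_out_day != today

    # Example: a lapse today locks the feature out until tomorrow.
    policy = AttentionLockoutPolicy()
    policy.record_attention_lapse(date(2016, 7, 1))
    print(policy.autopilot_allowed(date(2016, 7, 1)))   # False
    print(policy.autopilot_allowed(date(2016, 7, 2)))   # True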

The unwritten rules of the road

I have often mused on the fact that real driving involves breaking the rules of the road all the time, and this accident might be such a situation. While the final details are not in, it seems this intersection might not be a very safe one. It is possible, by the math of some writers, that the truck began its turn unable to see the Tesla, because there is a small crest in the road 1200’ out. In addition, some allege the Tesla might have been speeding as fast as 90mph — which is not hard to believe since the driver had many speeding tickets.

Normally, if you make an unprotected left turn, your duty is to yield right-of-way to oncoming traffic. Normally, the truck driver would be at fault. But perhaps the road was clear when he started turning, and the Tesla only appeared once he was in the turn? Perhaps the super-wide median offered an opportunity the truck driver didn’t take, namely to pause after starting the turn and check again, and the truck driver remains at fault for not double-checking with such a big rig.

Ordinarily, if a truck did turn and not yield to oncoming traffic, no accident would happen. That’s because the oncoming vehicle would see the truck, and the road has a lot of space for that vehicle to brake. Sure, the truck is in the wrong, but the driver of an oncoming vehicle facing a truck would have to be insane to proudly assert their right-of-way and plow into the truck. No sane and sober human driver would ever do that. As such, the intersection is actually safe, even without sufficient sightlines for a slow truck and fast car.

Because of that, real world driving involves stealing right-of-way all the time. People cut people off, nudge into lanes and get away with it because rational drivers yield the right-of-way that belongs to them. Sometimes this is even necessary to make the roads work.

With the Tesla driver inattentive, the worst happened. The Tesla’s sensors were not good enough to detect the truck and brake for it, and the human (not insane, just not looking at the road) didn’t do the job of compensating for the Autopilot’s inadequacy. The result was death.

Measuring the safety of the Autopilot

Tesla touted a misleading number, saying they had gone 130 million miles on Autopilot before this first fatality. The reality is that on limited-access freeways, the US figure for human drivers is one fatality per 180 million miles, not the 90 million they cited, and the Autopilot is used mostly on freeways, where it functions best.
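
Restated as a simple comparison (one fatality is far too small a sample to draw firm statistical conclusions from, but it shows why the framing is misleading):

    # The rates quoted above, restated as a comparison of miles per fatality.
    autopilot_miles_before_first_fatality = 130_000_000
    human_all_roads_miles_per_fatality = 90_000_000   # the baseline Tesla cited
    human_freeway_miles_per_fatality = 180_000_000    # the apples-to-apples freeway baseline

    print(autopilot_miles_before_first_fatality / human_all_roads_miles_per_fatality)
    # ~1.44: looks better than the all-roads baseline Tesla compared against
    print(autopilot_miles_before_first_fatality / human_freeway_miles_per_fatality)
    # ~0.72: falls short of the freeway baseline where the Autopilot is mostly used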

Tesla will perhaps eventually publish numbers to help judge the performance of the Autopilot. Companies participating in the California self-driving car registration program are required to publish their numbers. Tesla participates but publishes “zero” all the time because the Autopilot does not qualify as a self-driving car.

Here are numbers we might like to see, perhaps broken down by class of road, weather conditions, lighting conditions and more (a sketch of how such records might be tallied follows this list):

  • Number of “safety” disengagements per mile, where a driver had to take the controls for a safety reason (as opposed to just coming to a traffic light or turn).
  • Number of safety disengagements which, after further analysis in simulator, would have resulted in an accident if the driver had not intervened.
  • Number of “late” safety disengagements, or other indications of driver inattention.
  • Number of disengagements triggered by the system rather than the driver.
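
As a sketch of what such reporting could look like, here is one hypothetical way of structuring disengagement records and computing per-mile rates from them. The field names and categories are illustrative only, not anything Tesla or the California program actually prescribes:

    from dataclasses import dataclass

    @dataclass
    class Disengagement:
        """One hypothetical disengagement record (field names are illustrative)."""
        miles_at_event: float
        safety_related: bool      # vs. routine (traffic light, planned turn)
        would_have_crashed: bool  # per later analysis of the scenario in simulation
        late: bool                # driver reacted late / other signs of inattention
        system_triggered: bool    # the system, not the driver, initiated the handoff

    def per_mile_rates(events: list, total_miles: float) -> dict:
        """Tally the four metrics listed above as events per mile driven."""
        return {
            "safety_disengagements": sum(e.safety_related for e in events) / total_miles,
            "would_have_crashed": sum(e.would_have_crashed for e in events) / total_miles,
            "late_disengagements": sum(e.late for e in events) / total_miles,
            "system_triggered": sum(e.system_triggered for e in events) / total_miles,
        }

    # Example: two logged events over 10,000 miles of Autopilot driving.
    log = [
        Disengagement(1200.0, True, False, False, False),
        Disengagement(8400.0, True, True, True, False),
    ]
    print(per_mile_rates(log, 10_000.0))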

The problem is Tesla doesn’t really have a source for many of these numbers. Professional safety drivers always log why they disengage, and detailed car logs allow more data to be built up. Tesla would need to record more data to report figures like these.

Tesla with LIDAR

Folks in Silicon Valley have spotted what appear to be official Tesla test cars with LIDARs on them (one a research LIDAR and one looking more like a production unit). This would be a positive sign for them on the path to a full robocar. Tesla’s problem is that until about 2018, there is no cost-effective LIDAR they can ship in a car, and so their method of gathering data by observing customers drive doesn’t work. They could equip a subset of special cars with LIDARs and offer them to a special exclusive set of customers — Tesla drivers are such rabid fans they might even pay for the privilege of getting the special test car and trying it out. They would still need to supervise, though, and that means finding a way around the complacency issue.




Brad Templeton, Robocars.com is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.




