DARPA challenge mystery solved and how to handle Robocar failures

by Brad Templeton
01 December 2017




A small mystery from robocar history was recently resolved, and the answer was revealed at the DARPA Grand Challenge reunion at CMU.

The story is detailed here at IEEE Spectrum and I won’t repeat it all, but a brief summary goes like this.

In the second Grand Challenge, CMU’s Highlander was a favourite and was doing very well. Mid-race it started losing engine power, and it stalled for long enough that Stanford’s Stanley beat it by 11 minutes.

It was recently discovered that a small computerized fuel-injector controller in the Hummer (one of only two) may have been damaged in a roll-over Highlander had; if you pressed on it, the engine would reduce power or fail.

People have wondered how the robocar world might be different had Highlander not had that flaw. Stanford’s victory was a great boost for its team, and Sebastian Thrun was hired to start Google’s car team. But Chris Urmson, lead on Highlander, was also hired to lead engineering, and Chris would end up staying on the project much longer than Sebastian, who was seduced by the idea of doing Udacity. Google, being where it is, was likely to have closer ties to Stanford people anyway.

CMU’s fortunes might have ended up better, but the team did manage to be the main source of Uber’s first team.

There are many stories of small things making a big difference. Also well known is how Anthony Levandowski, who entered a motorcycle in the race, forgot to turn on a stabilizer. The motorcycle fell over two seconds after he released it, dashing all of his team’s work. Anthony of course did OK (as another leader on the Google team, and then at Uber), though he has recently had some “trouble”.

Another famous incident came when Volvo was doing a press demo of its collision-avoidance system. You could not pick a worse time for a failure, and of course there is video of it.

They had tested the demo extensively the night before. In fact, they tested it too much, and left a battery connected overnight, so that it was drained by the time they showed off to the press in the morning.

These stories remind people of all the ways things go wrong. More to the point, they remind us that we must design expecting things to go wrong, and build systems that are able to handle that. These early demos and prototypes didn’t have that, but cars that go on the road do and will.

Making systems resilient is the only answer when they get as complex as this. Early car computers were pretty simple, but a self-driving system is so complex that it is never going to be formally verified or perfect. Instead, it must be expected that every part will fail, and the failure of every part — or even every combination of parts — should be tested in simulation and, where possible, in reality. What is tested is how the rest of the system handles the failure, and if it doesn’t handle it, that has to be fixed.
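To make that concrete, here is a minimal sketch in Python of what such failure-injection testing can look like. The component list and the simulate_drive callback are hypothetical stand-ins for a real simulation stack; the point is simply that every single failure, and every pair of failures, gets exercised, and anything the system does not handle becomes a bug to fix.

    import itertools

    # Hypothetical component list; a real vehicle has far more parts.
    COMPONENTS = ["lidar", "radar", "camera", "gps", "imu", "brakes"]

    def test_failure_combinations(simulate_drive, max_simultaneous=2):
        """Inject every single failure and every pair of failures.

        simulate_drive(failed) should run the scenario in simulation and
        return True if the vehicle still reached a safe outcome (e.g.
        pulled over) with those components disabled.
        """
        unhandled = []
        for k in range(1, max_simultaneous + 1):
            for combo in itertools.combinations(COMPONENTS, k):
                if not simulate_drive(frozenset(combo)):
                    unhandled.append(combo)
        return unhandled  # every entry here is a failure-handling bug

    # Toy usage: a system that copes with anything except losing brakes.
    print(test_failure_combinations(lambda failed: "brakes" not in failed))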

It does not need to handle it perfectly, though. For example, in many cases the answer to failure will be, “We’re at a reduced safety level. Let’s get off the road, and summon another car to help the passengers continue on their way.”

It might even be a severely reduced safety level. Possibly even, as hard as this number may be to accept, 100 times less safe! That’s because the car will never drive very far in that degraded condition. Consider a car that has one incident every million miles. In degraded condition, it might have an incident every 10,000 miles. You clearly won’t drive home in that condition, but a quarter mile of driving at the degraded level is as risky as 25 miles of ordinary driving at full operational level, which is a risk taken every day. As long as vehicles do not drive more than a short distance at this degraded level, the overall safety record should still be satisfactory.
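A quick back-of-envelope check of that arithmetic, using the illustrative rates from the paragraph above (not measured figures):

    # One incident per million miles normally; 100x worse when degraded.
    normal_rate    = 1 / 1_000_000   # incidents per mile, full capability
    degraded_rate  = 1 / 10_000      # incidents per mile, degraded mode
    pullover_miles = 0.25            # distance driven while degraded

    expected_incidents = degraded_rate * pullover_miles
    equivalent_normal_miles = expected_incidents / normal_rate

    print(equivalent_normal_miles)  # 25.0: same risk as 25 ordinary miles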

Of course, if the safety level degrades to a level that could be called “dangerous” rather than “less safe” that’s another story. That must never be allowed.

An example of this would be failure of a main sensor, such as the LIDAR. Without LIDAR, a car must rely on cameras and radar. Companies like Tesla think they can make a car fully safe with just those two, and perhaps some day they will. But even though those sensors are not yet safe enough for a full trip, they are safe enough for a task like getting off the road, or even getting to the next exit on a highway.
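One way to picture that policy is as a simple mode selector that gates the mission on sensor health. This is an illustrative sketch; the states and rules are assumptions, not any real vehicle’s logic:

    from enum import Enum

    class Mode(Enum):
        FULL_OPERATION = 1   # all sensors healthy: continue the trip
        PULL_OVER = 2        # degraded but driveable: exit the roadway
        EMERGENCY_STOP = 3   # dangerous: stop as safely as possible

    def select_mode(lidar_ok: bool, camera_ok: bool, radar_ok: bool) -> Mode:
        if lidar_ok and camera_ok and radar_ok:
            return Mode.FULL_OPERATION
        if camera_ok and radar_ok:
            # LIDAR lost: camera + radar are not trusted for a full trip,
            # but are judged good enough to reach the shoulder or next exit.
            return Mode.PULL_OVER
        # Too little sensing left to call this merely "less safe".
        return Mode.EMERGENCY_STOP

    # e.g. select_mode(lidar_ok=False, camera_ok=True, radar_ok=True)
    # returns Mode.PULL_OVER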

This is important because we will never get perfection. We will only get lower and lower levels of risk, and the risk will not be constant — it will change with road conditions and with system or mechanical failures. But we can still get the safety level we want, and get the technology on the road.




Brad Templeton, Robocars.com, is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.