DARPA challenge mystery solved and how to handle Robocar failures


by Brad Templeton
01 December 2017




A small mystery from Robocar history was resolved recently, with the answer revealed at the DARPA Grand Challenge reunion at CMU.

The story is detailed at IEEE Spectrum, and I won’t repeat it all here, but a brief summary goes like this.

In the second Grand Challenge, CMU’s Highlander was a favourite and was doing very well. Mid-race it started losing engine power, and it stalled long enough that Stanford’s Stanley beat it by 11 minutes.

It was recently discovered that a small computerized fuel-injector controller in the Hummer (one of only two) may have been damaged in a roll-over Highlander had suffered, and that if you pressed on it, the engine would lose power or fail.

People have wondered how the robocar world might be different if Highlander had not had that flaw. Stanford’s victory was a great boost for its team, and Sebastian Thrun was hired to start Google’s car project. But Chris Urmson, lead on Highlander, was also hired to lead engineering there, and he ended up staying on the project much longer than Thrun, who was drawn away by the idea of building Udacity. Google, based where it is, was always likely to have closer ties to Stanford people anyway.

CMU’s fortunes might have ended up better, though the university still became the main source of Uber’s first self-driving team.

There are many stories of small things making a big difference. Also well known is how Anthony Levandowski, who entered a motorcycle in the race, forgot to turn on its stabilizer. The motorcycle fell over two seconds after he released it, dashing all of his team’s work. Levandowski did fine afterwards (as another leader on the Google team, and then at Uber), though he has recently had some “trouble”.

Another famous incident came when Volvo was demonstrating its collision avoidance system for the press. You could not pick a worse time for a failure, and of course there is video of it.

They had tested the demo extensively the night before. In fact, they tested it too much: a battery was left connected overnight and was drained by the time they showed the system to the press in the morning.

These stories remind people of all the ways things go wrong. More to the point, they remind us that we must design expecting things to go wrong, and have systems that are able to handle that. These early demos and prototypes didn’t have that, but cars that go on the road do and will.

Making systems resilient is the only answer once they get this complex. Early car computers were pretty simple, but a self-driving system is so complex that it is never going to be formally verified or perfect. Instead, it must be expected that every part will fail, and the failure of every part (or even combinations of parts) should be tested in simulation and, where possible, in reality. What is being tested is how the rest of the system handles the failure, and if it doesn’t handle it, that has to be fixed.
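
To make that concrete, here is a minimal sketch in Python of what such a fault-injection test loop can look like. The component names, the simulator call and the outcomes are hypothetical stand-ins, not any particular company’s tooling; real stacks run this kind of check across thousands of scenarios.

# Hypothetical fault-injection sketch: inject single and pairwise component
# failures into a simulated drive and require a safe outcome each time.
from itertools import combinations

COMPONENTS = ["lidar", "radar", "camera", "gps", "primary_compute"]
SAFE_OUTCOMES = {"completed_trip", "pulled_over_safely"}

def run_simulated_drive(failed_components):
    """Stand-in for a real simulator run with the given components disabled."""
    # ... run the scenario and observe what the vehicle does ...
    return "pulled_over_safely"

def test_failure_handling():
    cases = [{c} for c in COMPONENTS]
    cases += [set(pair) for pair in combinations(COMPONENTS, 2)]
    for failed in cases:
        outcome = run_simulated_drive(failed)
        assert outcome in SAFE_OUTCOMES, f"Unhandled failure of {sorted(failed)}: {outcome}"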

It does not need to handle it perfectly, though. For example, in many cases the answer to failure will be, “We’re at a reduced safety level. Let’s get off the road, and summon another car to help the passengers continue on their way.”

It might even be a severely reduced safety level, possibly even, as hard as this number may be to accept, 100 times less safe. That’s because the car will never drive very far in that degraded condition. Consider a car that has one incident every million miles. In degraded condition, it might have an incident every 10,000 miles. You clearly won’t drive home in that condition, but the quarter-mile of driving needed to get off the road at the degraded level is as risky as 25 miles of ordinary driving at the full operational level, a risk people take every day. As long as vehicles do not drive more than a short distance at this degraded level, the overall safety record should still be satisfactory.
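
The arithmetic behind that claim is easy to check. A quick sketch using the numbers above (one incident per million miles normally, one per 10,000 miles degraded, and a quarter-mile pull-over):

# Risk of a short degraded-mode drive, expressed as equivalent normal miles.
normal_rate = 1 / 1_000_000      # incidents per mile at full capability
degraded_rate = 1 / 10_000       # incidents per mile in degraded mode (100x worse)
pull_over_miles = 0.25           # distance driven while degraded

expected_incidents = pull_over_miles * degraded_rate        # 2.5e-05
equivalent_normal_miles = expected_incidents / normal_rate  # 25.0 miles
print(equivalent_normal_miles)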

Of course, if the safety level degrades to something that could be called “dangerous” rather than merely “less safe,” that’s another story. That must never be allowed.

An example of this would be failure of a main sensor, such as the LIDAR. Without a LIDAR, a car would rely on cameras and radar. Companies like Tesla think they can make a car fully safe with just those two, and perhaps they will some day. But even though cameras and radar are not yet safe enough for the full driving task, they are safe enough for a limited one like getting off the road, or even reaching the next exit on a highway.
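
One simple way to express that policy in software is to shrink the set of allowed missions when a primary sensor drops out. A sketch, with hypothetical function and maneuver names:

# Hypothetical degraded-mode policy: without LIDAR, the vehicle no longer
# continues its trip, but may still pull over or reach the next exit.
def allowed_maneuvers(lidar_ok, camera_ok, radar_ok):
    if lidar_ok and camera_ok and radar_ok:
        return ["continue_trip", "exit_highway", "pull_over"]
    if camera_ok and radar_ok:
        # Camera + radar alone: not trusted for the full trip yet,
        # but good enough to get off the road.
        return ["exit_highway", "pull_over"]
    return ["emergency_stop"]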

This is important because we will never get perfection. We will only get lower and lower levels of risk, and the risk will not be constant: it will change with road conditions and with system or mechanical failures. But we can still get the safety level we want, and get the technology on the road.




Brad Templeton, Robocars.com, is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.




