Hot on the heels of my CES Report is the release of the latest article from Chris Urmson on The View from the Front Seat of the Google Car. Chris heads engineering on the project (and until recently led the entire project). Chris reports two interesting statistics.
The first is “simulated contacts” — times when a safety driver intervened, and the vehicle would have hit something without the intervention:
There were 13 [Simulated Contact] incidents in the DMV reporting period (though 2 involved traffic cones and 3 were caused by another driver’s reckless behavior). What we find encouraging is that 8 of these incidents took place in ~53,000 miles in ~3 months of 2014, but only 5 of them took place in ~370,000 miles in 11 months of 2015.
(There were 69 safety disengages, of which 13 were determined to be likely to cause a “contact.”)
The second is detected system anomalies:
There were 272 instances in which the software detected an anomaly somewhere in the system that could have had possible safety implications; in these cases it immediately handed control of the vehicle to our test driver. We’ve recently been driving ~5300 autonomous miles between these events, which is a nearly 7-fold improvement since the start of the reporting period, when we logged only ~785 autonomous miles between them. We’re pleased.
Let’s look at these two numbers, why they are different, and how they compare to humans.
The simulated contacts are events that would have been accidents in an unsupervised or unmanned vehicle, which is serious. Google is now having one every 74,000 miles, though Urmson suggests this rate may not keep going down as they test the vehicle in new and more challenging environments. But as noted, a few were not the fault of the system. Indeed, for the larger set of 69 safety disengagements, the rate is actually going up, with 29 of them in the last 5 months reported.
How does that number compare? Well, regular people in the USA report about 6 million accidents per year to the police, which means about once every 500,000 miles. But for some time, insurance companies have said the number is twice that, or once every 250,000 miles. Google’s own new research suggests even more accidents are taking place that go entirely unreported. For example, how often have you struck a curb, or even had a minor touch in a parking lot that nobody else knew about? Many people would admit to that, and altogether there are suggestions the human number for a “contact” could be as bad as one in 100,000 miles.
This would put the Google cars at close to the same level, though this is from driving in California with no snow and easy driving conditions. In other words, there is still some way to go, but at least one goal seems to be within striking distance. Google even reports going 230,000 miles from April to November of last year without a simulated contact, a cherry-picked stretch that nonetheless matches human levels.
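For readers who want to check the arithmetic, these rates can be worked out directly from the figures quoted above. A quick sketch in Python (the human numbers are the rough estimates discussed above, not precise data):

```python
# Back-of-the-envelope "miles per contact" comparison, using the
# figures quoted in the post. The human numbers are rough estimates.

def miles_per_event(miles: float, events: int) -> float:
    """Average miles driven between events."""
    return miles / events

# Google's 2015 figures: 5 simulated contacts in ~370,000 miles.
google = miles_per_event(370_000, 5)  # ~74,000 miles per contact

# Human benchmarks discussed above (all approximate).
human_rates = {
    "police-reported accident": 500_000,
    "insurance estimate": 250_000,
    "any contact at all": 100_000,
}

print(f"Google (2015): one simulated contact per ~{google:,.0f} miles")
for label, miles in human_rates.items():
    print(f"Humans, {label}: one per ~{miles:,} miles")
```

Even against the most pessimistic human estimate, the 2015 Google rate is within a factor of about 1.4, which is what makes "striking distance" a fair description.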
People often ask “Which is the biggest obstacle to robocar deployment: technology or regulation?” The answer I give is: neither.
The biggest obstacle in my view is testing. We have to figure out just how to test these vehicles so we can know when a safety goal has been met, and we also have to figure out what the safety goal is.
Various suggestions have been made for what the safety goal should be: for example, having a safety record that matches humans; or one that is twice, 10 times or even 100 times as good as humans. Those higher stretch goals will become good targets, but for now the first question is how to get to the level of humans.
One problem is that the way humans have accidents is quite different from how robots have them. Human accidents sometimes have a single cause (such as falling asleep at the wheel), but many arise because two or more things went wrong. Almost everybody I talk to will admit to a time when they looked away from the road to adjust the radio or even play with their phone, looked up to see traffic slowing ahead of them, and hit the brakes just in time, avoiding an accident. Accidents often happen when luck like this runs out. Robotic accidents will probably mostly come from a single cause. A robot doing anything unsafe, even for a moment, will be cause for alarm, and the source of the error will be fixed as quickly as possible.
This leads us to look at the other number: safety anomalies. At first, this sounds more frightening. They range from 39 hardware issues and anomalies to 80 “software discrepancies”, which may include rarer full-on “blue screen” style crashes (if the cars ran Windows, which they don’t). People often wonder how we can trust robocars when they know computers can be so unreliable. (The most common detected fault is a perception discrepancy. The breakdown is not reported, but I presume these include strange sensor data or serious disagreement between different sensors.)
It’s important to note the hidden message: these “safety anomaly” interventions did not generally cause simulated contacts. For human beings, zoning out, taking your eyes off the road, texting, or even briefly falling asleep does not always result in a crash, and nor will similar events for robocars. In the event of a detected anomaly, one presumes that independent (less capable) backup systems will immediately take over. Because they are less capable, they might cause an error, but that should be quite rare.
As such, the 5,300 miles between anomalies, while clearly in need of improvement, may also not be a bad number. Certainly many humans have such an “anomaly” at least that often (about every 6 months of human driving). It depends on how often such anomalies might lead to a crash, and on how severe that crash would be.
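As a sanity check on that “every 6 months” figure, here is the arithmetic, assuming a typical US driver covers on the order of 13,000 miles per year (that annual mileage is my assumption, not a figure from the reports):

```python
# Rough check: how long does it take a typical human driver to cover
# 5,300 miles? Assumes ~13,000 miles/year of driving (my assumption).

miles_per_year = 13_000
miles_between_anomalies = 5_300  # Google's current rate, from above

months = miles_between_anomalies / miles_per_year * 12
print(f"~{months:.0f} months of typical driving between anomalies")
```

This comes out to roughly five to six months, depending on how much the driver in question actually drives.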
The report does not describe something more frightening — such as a problem with the system that it does not detect. This is the sort of issue that could lead to a dangerous “careen into oncoming traffic” style event in the worst case scenario. (While I worked on Google’s car a few years ago, I have no inside data on the performance of the current generations of cars.)
I have particular concern with the new wave of projects hoping to drive with trained machine learning and neural networks. Unlike Google’s car and most others, the programmers of those vehicles have only a limited idea how the neural networks are operating. It’s harder to tell if they’re having an “anomaly,” though the usual things like hardware errors, processor faults and memory overflows are of course just as visible.
Google didn’t publish total disengagements, judging most of them to be inconsequential; safety drivers regularly disengage for lots of reasons. The most interesting is precautionary: drivers are told to take the wheel if anything dangerous is happening on the road, not just with the vehicle. This is the right approach, because you don’t want to use the public as test subjects. You wouldn’t say, “Let’s leave the car auto-driving and see what it does with that group of schoolchildren jaywalking.” Instead the approach is to play out the scenario in a simulator and see if the car would have done the right thing.
Delphi reports 405 disengagements in 16,600 miles — but their breakdown suggests only a few were system problems. Delphi is testing on highways where disengagement rates are expected to be much lower.
Nissan reports 106 disengagements in 1485 miles, most in their early stages. For October-November, their rate was 36 for 866 miles. They seem to be reporting the more serious ones, like Google.
Tesla reports zero disengagements, presumably because they define their vehicle as not having a fully autonomous mode.
VW’s report is a bit harder to discern, but it suggests 5500 total miles and 85 disengagements.
Google’s lead is overwhelming.
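The gap is easy to quantify from the numbers above. A small sketch (the ~423,000-mile Google total is my sum of the ~53,000 and ~370,000 figures quoted earlier, and each company counts disengagements differently, so this is apples to oranges):

```python
# Miles per reported disengagement, computed from the figures above.
# Reporting criteria differ between companies, so compare with caution.

reports = {
    "Delphi": (16_600, 405),
    "Nissan (Oct-Nov)": (866, 36),
    "VW": (5_500, 85),
    # ~53,000 + ~370,000 autonomous miles, 272 detected anomalies
    "Google (anomalies)": (423_000, 272),
}

for name, (miles, events) in sorted(
        reports.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{name}: ~{miles / events:,.0f} miles per disengagement")
```

Even counting every detected anomaly against Google, it is going well over a thousand miles between events while the others measure their intervals in dozens of miles.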
If the number is the 100,000 mile or 250,000 mile figure we estimate for humans, that’s still pretty hard to test. You can’t just take every new software build and drive it for a million miles (about 25,000 hours) to see if it has fewer than 4 or even 10 accidents. You can, and will, test the car over billions of miles in simulator, encountering every strange situation ever seen or imagined. Here the car will be unlike a human: before it ever has an accident, it will probably perform flawlessly, and if it doesn’t, that will be immediate cause for alarm and correction of the problem.
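For the curious, the arithmetic behind that sentence works out as follows (the ~40 mph average speed is what makes a million miles come out to roughly 25,000 hours; it is implied by those figures, not something I know about any test fleet):

```python
# Why a million test miles per software build is impractical.
# An average speed of ~40 mph is assumed (it is what makes a million
# miles come out to ~25,000 hours).

test_miles = 1_000_000
avg_speed_mph = 40
hours = test_miles / avg_speed_mph
print(f"{test_miles:,} miles at {avg_speed_mph} mph = {hours:,.0f} hours")

# Expected accidents over those miles at the human rates above:
for rate in (250_000, 100_000):
    print(f"At one per {rate:,} miles: {test_miles / rate:.0f} accidents expected")
```

That is, at human accident rates you would expect only about 4 to 10 accidents per million miles, which is far too small a sample to validate each new build on real roads.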
Makers of robocars will need to convince themselves, their lawyers and safety officers, their boards, the public and eventually even the government that they have met some reasonable safety goal.
Over time we will hopefully see even more detailed numbers on this. That is how we’ll answer this question.
This does turn out to be one advantage of supervised autopilots, such as what Tesla has released. Because it can count on all the Tesla owners to be the fail-safe for the autopilot system, Tesla is able to quickly gather a lot of data about the safety record of its system over a lot of miles, far more than can be gathered if you have to run the testing operation with paid drivers or your own unmanned cars. This ability to test could help the supervised autopilots get to good confidence numbers faster than expected.
Indeed, though I have often written that I don’t feel there is a good evolutionary path from supervised robocars to unmanned ones, this approach could prove my prediction wrong. For if Tesla or some other carmaker with lots of cars on the road is able to make an autopilot, and then observe that it never fails in several million miles, it might have a legitimate claim to having something safe enough to run unmanned, at least on the classes of roads and situations in which customers tested it. Though a car that does 10 million perfect highway miles is still not ready to drive you door to door on urban streets, as Elon Musk claimed yesterday would happen soon with Tesla.
This post originally appeared on robocars.com.