
The paradox on robocar accidents

by Brad Templeton
14 March 2018




I have written a few times about the unusual nature of robocar accidents. Recently I was discussing this with a former student who is doing some research on the area. As a first step, she began looking at lists of all the reasons that humans cause accidents. (The majority of them, on police reports, are simply that one car was not in its proper right-of-way, which doesn’t reveal a lot.)

This led me, though, to the following declaration, which goes against most early intuitions.

Every human accident teaches us something about the way people have accidents. Every robocar accident teaches us about a way robocars will never have an accident again.

While this statement is not 100% true, it reveals the stark difference between the way people and robots drive. The whole field of actuarial science is devoted to understanding unusual events (with car accidents being the primary subject) and their patterns. When you notice a pattern, you can calculate the probability that other people will repeat it, and insurers use that to price insurance and even to set policy.
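As a toy illustration of that actuarial logic (every number below is invented for the example), the expected cost of one known accident pattern is simply its frequency, times exposure, times the average claim:

```python
# Toy actuarial estimate. All figures here are invented for illustration;
# real actuaries fit rates like these from large bodies of claims data.

CRASHES_PER_MILLION_MILES = 0.12   # hypothetical frequency of one accident pattern
AVERAGE_CLAIM_COST = 18_000        # hypothetical dollars paid out per crash
ANNUAL_MILES = 12_000              # miles driven per insured car per year

# Expected annual cost per insured car from this one pattern.
expected_cost = (ANNUAL_MILES / 1_000_000) * CRASHES_PER_MILLION_MILES * AVERAGE_CLAIM_COST
print(f"Expected annual cost from this pattern: ${expected_cost:.2f}")
```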

When a robocar team discovers their car has made any mistake, and certainly when it has caused an incident, their immediate move is to find and fix the cause of the mistake and update the software. That particular mistake will generally never happen again. We have learned very little about the pattern of robocar accidents, though the teams have definitely learned something to fix in their software. Since for now, and probably forever, the circumstances of robocar accidents will be a matter of public record, all teams, and thus all cars, will also learn portions of the same thing. (More on that below.)

The rule won’t be entirely true. There are some patterns. There are patterns of software bugs too — every programmer knows the risk of off-by-one errors and memory allocation mistakes. We actually build our compilers, tools and even processors to help detect and prevent all the known common programming mistakes we can, and there’s an active field of research to develop AI able to do that. But that does not teach us a lot about what type of car accidents this might generate. We know robocars will suffer general software crashes and any good system has to be designed to robustly handle that with redundancies. There is a broad class of errors known as perception false negatives, where a system fails to see or understand something on the road, but usually learning about one such error does not teach us much about the others of its class. It is their consequences which will be similar, not their causes.
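As a tiny, purely hypothetical illustration of one such bug class (the off-by-one error): the tooling that catches this kind of mistake tells us nothing about what driving consequence it would have had.

```python
# Hypothetical example of a classic off-by-one error: a routine meant to
# consider every detected obstacle silently skips the last one in the list.

obstacles = [12.0, 7.5, 3.2]   # hypothetical distances (metres) to detected objects

def nearest_obstacle_buggy(distances):
    nearest = float("inf")
    for i in range(len(distances) - 1):   # BUG: the final element is never checked
        nearest = min(nearest, distances[i])
    return nearest

def nearest_obstacle_fixed(distances):
    # Idiomatic fix: iterate over the values directly, no index arithmetic.
    return min(distances, default=float("inf"))

print(nearest_obstacle_buggy(obstacles))   # 7.5 -- misses the closest object
print(nearest_obstacle_fixed(obstacles))   # 3.2
```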

Perception errors are the easiest to analogize to human activity. Humans also fail to see things before an accident. In humans, however, this is due to “not paying attention” or “not looking in that direction,” something that robots won’t ever be guilty of. A robot’s perception failure would be more like a human’s mini-stroke temporarily knocking out part of the visual cortex (i.e. not a real-world issue) or a flaw in the “design” of the human brain. In the robot, however, design flaws can usually be fixed.

There are some things that can’t be fixed, and thus teach us patterns. Some things are just plain hard, like seeing in fog or under snow, or seeing hidden vehicles/pedestrians. The fact that these things are hard can help you calculate probabilities of error.

This is much less true for the broad class of accidents where the vehicle perceives the world correctly but decides on the wrong thing to do. These are the mistakes that, once made, will probably never be made again.

There actually have been very few accidents involving robocars that were the fault of the robocar system. In fact, the record is surprisingly good. I am not including things like Tesla autopilot crashes — the Tesla autopilot is designed to be an incomplete system and they document explicitly what it won’t do.

Indeed the only pattern I see from the few reported incidents is the obvious one — they happened in unusual driving situations. Merging with a bus when one wide lane is often used by 2 cars at once. Dealing with a lane splitting motorcycle after aborting an attempt at a lane change. (Note that police ruled against the motorcyclist in this case but it is being disputed in court.) Whatever faults occur here have been fixed by now.

Developers know that unusual situations are an area of risk, so they go out searching for them and use simulators and test tracks to work through them extensively. You may not learn patterns, but you can come up with probability estimates of what fraction of everyday driving involves extraordinary situations. This can give you insurance-style confidence: even if you aren’t sure you handle every unusual situation, you know the overall total risk is low.
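A hedged sketch of that insurance-style calculation, with a hypothetical scenario catalogue and made-up frequencies and failure probabilities:

```python
# Hypothetical scenario catalogue: occurrences per million miles of driving,
# and an assumed probability that the system mishandles each one.
scenarios = {
    "double-parked truck blocking the lane": (40.0, 1e-4),
    "lane-splitting motorcycle":             (15.0, 5e-4),
    "unprotected left across fast traffic":  (25.0, 2e-4),
}

# Expected mishandled events per million miles: sum of frequency * P(failure).
expected_incidents = sum(freq * p_fail for freq, p_fail in scenarios.values())
print(f"Expected incidents per million miles: {expected_incidents:.4f}")
```

Even without knowing which specific situation will go wrong, a total like this can be compared against human accident rates.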

One place this can be useful is in dealing with equipment failures. Today, a car driving only with vision and radar is not safe enough; a LIDAR is needed for the full safety level. If the LIDAR fails, however, the car is not blind, it is just a bit less safe. While you would not drive for miles with the LIDAR off, you might judge that the risk of driving to a safe spot to pull off on just camera and radar is acceptable, simply because the amount of driving in that mode will be very small. We do the same thing with physical hardware — driving with a blown out tire is riskier, but we can usually get off the road. We don’t insist every car have 5 wheels to handle that situation.
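The same arithmetic applies to the degraded-mode example above; the rates and multiplier below are assumptions for illustration, not measured figures:

```python
# Hypothetical degraded-mode risk budget: the LIDAR fails and the car limps
# to a safe stop on camera + radar alone. All rates here are invented.

BASELINE_CRASH_RATE = 1.0e-6   # crashes per mile with the full sensor suite
DEGRADED_MULTIPLIER = 20       # assume camera + radar alone is 20x riskier
PULL_OVER_DISTANCE = 0.5       # miles needed to reach a safe stopping spot

degraded_risk = BASELINE_CRASH_RATE * DEGRADED_MULTIPLIER * PULL_OVER_DISTANCE
normal_trip_risk = BASELINE_CRASH_RATE * 10   # an ordinary 10-mile trip

print(f"Risk of the short degraded drive: {degraded_risk:.1e}")
print(f"Risk of a normal 10-mile trip:    {normal_trip_risk:.1e}")
# Even at 20x the per-mile risk, half a mile of degraded driving adds about
# the same total risk as one ordinary short trip.
```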

Learning from humans

It is also worth noting that the human race is capable of learning from accidents. In a sense, every traffic safety rule, most road signs, and many elements of road engineering are the result of learning from accidents and improving safety. The fact that one human misjudges a left turn doesn’t stop other humans from doing so, but if it causes a “no left turn” sign to go up, the rest of us stop making that particular mistake at that intersection. Ironically, the robot does not need the no-left-turn sign — it will never misjudge the speed of oncoming vehicles and make the turn at the wrong time.

Car design is also guided a great deal by lessons from past accidents. That’s everything from features like crumple zones, which don’t affect driver behaviour, to blind-spot warning systems, which do.

Pure neural network approaches

There are a few teams who hope to make a car drive with only a neural network. That is to say, the neural network takes in sensor data and outputs steering controls. Such a system is more akin to humans, in that a flaw found in that approach might be found again in other similarly designed vehicles. After an accident, such a car would have its network retrained so that it never made that precise mistake again (nor any of the other potential mistakes in its training library). That retraining might be very narrow, however.
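A minimal sketch of the end-to-end idea, using PyTorch: raw camera pixels in, a steering command out. The architecture and sizes are arbitrary illustrations, not any team's actual system.

```python
# Minimal end-to-end sketch: one camera frame in, one steering angle out.
# The layer choices and shapes are arbitrary and purely illustrative.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),          # single output: steering angle
        )

    def forward(self, image):
        return self.head(self.features(image))

model = EndToEndDriver()
frame = torch.randn(1, 3, 120, 160)   # one fake camera frame
steering = model(frame)
print(steering.shape)                 # torch.Size([1, 1])
```

Fixing a mistake in a system like this means adding the scenario to the training data and re-fitting the whole mapping, rather than patching an identifiable rule.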

This is one reason that only smaller teams are trying this approach. Larger teams like Waymo are making very extensive use of neural networks, but primarily in the area of improving perception, not in making driving decisions. If a perception error is discovered, the network retraining to fix it will ideally be quite extensive, to avoid related errors. Neural network perception errors also tend to be intermittent — i.e. the network fails to see something in one frame but sees it in a later frame. The QA effort is to make it see things sooner and more reliably.
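A hypothetical illustration of that intermittency, and the kind of metric a QA team might track: a simple persistence rule that only confirms an object after several consecutive frames of detection.

```python
# Hypothetical frame-by-frame detections of one pedestrian (True = seen this
# frame), plus a persistence rule that confirms an object only after N
# consecutive hits.

frames = [False, True, False, True, True, True, True, False, True, True]

def first_confirmed_frame(detections, n_consecutive=3):
    """Return the index at which the object is first confirmed, else None."""
    streak = 0
    for i, seen in enumerate(detections):
        streak = streak + 1 if seen else 0
        if streak >= n_consecutive:
            return i
    return None

print(first_confirmed_frame(frames))   # 5 -- confirmed only on the sixth frame
# The QA goal described above is to push this confirmation earlier and to
# eliminate the dropped frames, i.e. see things sooner and more reliably.
```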

Sharing crash reports

This raises the interesting question of sharing crash reports. Developers are now spending huge amounts of money racking up test miles on their cars. They want to encounter lots of strange situations and learn from them. Waymo’s 4 million miles of testing hardly came for free. This makes them and others highly reluctant to release to the public all that hard-won information.

The 2016 NHTSA car regulations, though highly flawed, included a requirement that vendors share all the sensor data on any significant incident. As you might expect, vendors resisted that, and the requirement was gone from the 2017 proposed regulations (along with almost all the others).

Vendors are unlikely to want to share every incident, and they need incentives to do lots of test driving, but it might be reasonable to talk about sharing for any actual accident. It is not out of the question that this could be forced by law, though it is an open question how far it should go. If an accident is the system’s fault and there is a lawsuit, the details will certainly come out, but companies don’t want to air their dirty laundry. Most are so afraid of accidents that the additional burden of having their mistake broadcast to all won’t change the incentives.

Generally, companies all want as much test data as they can get their hands on. They might be convinced that while unpleasant, sharing all accident data helps the whole industry. Only a player who was so dominant that they had more data than everybody else combined (which is Waymo at this time) would not gain more from sharing than they lose.




Brad Templeton (Robocars.com) is an EFF board member, Singularity U faculty member, a self-driving car consultant, and an entrepreneur.




