An alternative to specific regulations for robocars: A liability doubling


by Brad Templeton
02 June 2016




Can our emotional fear of machines, and the resulting calls for premature regulation, be mollified by a temporary increase in liability that takes the place of specific regulations to keep people safe?

So far, most new automotive technologies, especially ones that control driving, such as autopilot, forward collision avoidance, lane keeping, anti-lock brakes, stability control and adaptive cruise control, have not been covered by specific regulations. They were developed and released by vendors, sold for years or decades, and when (and if) they got specific regulations, those often took the form of ‘electronic stability control is so useful, we will now require all cars to have it.’ This approach has worked reasonably well.

Just because there are no specific regulations for these things does not mean they are unregulated. There are rafts of general safety regulations on cars, and the biggest deterrent to the deployment of unsafe technology is the liability system and the huge cost of recalls. As a result, while there are exceptions, most carmakers are highly safety-paranoid, purely because of liability. At the same time, they are free to experiment and develop new technologies. Specific regulations tend to come into play when it becomes clear that automakers are doing something dangerous and that liability alone won’t stop them. In part, this is because today it’s easy to assign blame for accidents to drivers, and often harder to assign it to a manufacturing defect or to a deliberate design decision.

The exceptions, like GM’s famous ignition switch problem, arise because of the huge cost of doing a recall for a defect that will have rare effects. Companies are afraid of having to replace parts in every car they made when they know a part will fail, even fatally, just one time in a million. The one person killed or injured does not feel like one in a million, and our system pushes the carmaker (and thus all customers) to bear that cost.

I wrote an article on regulating Robocar Safety in 2015, and this post expands on some of those ideas.

Robocars change some of this equation. First of all, in robocar accidents, the car manufacturer (or the maker of the driving system) is going to be liable by default. Nobody else really makes sense, and indeed some companies, like Volvo, Mercedes and Google, have already accepted that. Some governments are talking about declaring it, but frankly, it could never be any other way. Making the owner or passenger liable is technically possible, but do you want to ride in an Uber where you have to pay if it crashes for reasons that have nothing to do with you?

Due to this, the fear of liability is even stronger for robocar makers.

Robocar failures will almost all be software issues. As such, once a problem is fixed, the fix can be deployed for free; the logistics of the “recall” will cost nothing. GM would have no reason not to send out a software update once they found a problem; they would be crazy not to. Instead, there is the difficult question of what to do between the time a problem is discovered and the time a fix has been declared safe to deploy. Shutting down the whole fleet is not a workable answer; it would kill deployment of robocars if, several times a year, all robocars stopped working.

In spite of all this history, and the prospect of it getting even better, a number of people, including government regulators, think they need to start writing robocar safety regulations today, rather than 10-20 years after the cars are on the road, as has been traditional. This desire is well-meaning and understandable, but it’s actually dangerous, because it will significantly slow down the deployment of safety technologies that will save many lives by making the world’s second most dangerous consumer product safer. Regulations and standards generally codify existing practice and conventional wisdom. They are a bad idea for emerging technologies, where developers are coming up with entirely new ways to do things and entirely new ways to be safe. The last thing you want to tell vendors is that they must apply old-world thinking when they can come up with much better ways of thinking.

Sadly, there are groups who love old-world thinking, namely the established players. Big companies start out hating regulation but eventually come to crave it, because it mandates the way they already do things, and they understand how to comply. This stops upstarts from figuring out how to do it better, and established players love that.

The fear of machines is strong, so it may be that something else needs to be done to satisfy two competing desires: the public’s desire to feel the government is working to keep these scary new robots safe, and developers’ need for unconstrained innovation. (I have no desire to satisfy the incumbents’ wish to protect old ways of doing things.)


One option would be a temporary rule: for accidents caused by robocar systems, the liability, if the system is at fault, shall be double what it would be if a similar accident were caused by driver error. (Punitive damages for willful negligence would not be governed by this rule.) We know the cost of accidents caused by humans; we all pay for it with our insurance premiums, at an average rate of about 6 cents/mile. This rule would double that cost, pushing vendors to make their systems at least twice as safe as the average human driver in order to match that insurance cost.
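To make the arithmetic concrete, here is a minimal sketch of the break-even calculation. The 6 cents/mile baseline is the figure quoted above; the safety factors are hypothetical inputs chosen only to show where doubled liability matches the human baseline.

```python
# Sketch of the liability-doubling arithmetic (illustrative only).
# The 6 cents/mile human baseline comes from the article; the safety
# factors below are hypothetical.

HUMAN_LIABILITY_PER_MILE = 0.06  # USD/mile, average human-driver liability cost
MULTIPLIER = 2.0                 # proposed payout multiplier for robocar-caused accidents

def robocar_liability_per_mile(safety_factor: float) -> float:
    """Expected liability cost per mile for a fleet that crashes
    1/safety_factor as often as the average human driver, with each
    payout doubled under the proposed rule."""
    return MULTIPLIER * HUMAN_LIABILITY_PER_MILE / safety_factor

for factor in (1.0, 1.5, 2.0, 4.0):
    cents = 100 * robocar_liability_per_mile(factor)
    print(f"{factor:.1f}x as safe as a human -> {cents:.1f} cents/mile")
```

At a safety factor of 2.0, the expected cost returns to the 6 cents/mile human baseline: exactly the “twice as safe” bar the rule is meant to set.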

Victims of these accidents (including hapless passengers in the vehicles) would now be doubly compensated. Sometimes no compensation is enough, but for better or worse, we have settled on values, and doubling them is not a bad deal. Creators of systems would have a higher bar to reach, and the public would know it.

While doubling the cost is a high price, I think most system creators would accept it as part of the risk of a bold new venture. You expect such ventures to cost extra as they get started; you invest to make the system sustainable.

Over time, the liability multiplier would decrease, and eventually the rule would go away entirely; I suspect that might take about a decade. The multiplier does present a barrier to entry for small players, and we don’t want something like that around for too long. (One possible phase-out schedule is sketched below.)
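As a sketch only: the article does not specify a schedule, so the linear shape and ten-year horizon below are assumptions, chosen to match the “about a decade” guess above.

```python
def liability_multiplier(years_since_rule: float, phase_out_years: float = 10.0) -> float:
    """Hypothetical linear phase-out of the doubling rule.

    Starts at 2.0 when the rule takes effect and declines to 1.0
    (ordinary liability) once `phase_out_years` have elapsed.
    """
    if years_since_rule >= phase_out_years:
        return 1.0  # rule has expired; ordinary liability applies
    return 2.0 - years_since_rule / phase_out_years

# For example: liability_multiplier(0.0) == 2.0, liability_multiplier(5.0) == 1.5
```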





Brad Templeton (Robocars.com) is an EFF board member, Singularity U faculty member, self-driving car consultant, and entrepreneur.




