
What does the VW scandal mean for robocars?


by Brad Templeton
05 October 2015



Image: John Matthies/flickr

Most of you will have heard about the revelations that Volkswagen put software in their cars to deliberately cheat on emissions tests in the USA and possibly other places. It’s very bad for VW, but what are the implications for robocars?

A lot has been written about the Volkswagen emissions violations, but here’s a short summary. All modern cars have computer-controlled ignition systems, and these can be tuned for different levels of performance, fuel economy and emissions. Because cars must pass emissions tests, most are tuned in ways that sacrifice other things (like engine performance and fuel economy) in order to reduce their pollution. Most cars also attempt to detect the style of driving going on and tune the engine differently for the best results in that situation.

VW went far beyond that. Apparently their system was designed to detect when it was in an emissions test. In these tests, the car sits on rollers in a garage and follows certain speed patterns. VW set their diesel cars to look for this and to tune the engine to produce emissions below the permitted numbers. When the car saw it was in more regular driving situations, it switched the tuning to modes that gave it better performance and better mileage but, in some cases, vastly worse pollution. A commonly reported number is that, in some modes, the cars could emit 40 times the California limit for nitrogen oxides (NOx), a major smog component that is bad for your lungs. Even over a wide range of driving, output was as high as 20 times the California limit (about five times the European limit).
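To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of logic a defeat device could use. VW’s actual code has not been published; the signals (driven wheels turning while the steering wheel stays centred and the undriven wheels stay still, a classic dynamometer signature) and all thresholds below are illustrative assumptions, not the real implementation.

```python
# Hypothetical illustration of defeat-device logic. The real VW code has
# not been published; signal names and thresholds here are invented.

def looks_like_dyno_test(speed_kmh: float, steering_angle_deg: float,
                         undriven_wheel_speed_kmh: float) -> bool:
    """Guess whether the car is strapped to a test dynamometer.

    On a front-wheel-drive car on a two-roller dyno, the driven wheels
    turn while the steering wheel stays centred and the rear (undriven)
    wheels do not move at all: a pattern rarely seen in real driving.
    """
    driving = speed_kmh > 10
    steering_idle = abs(steering_angle_deg) < 1.0
    undriven_still = undriven_wheel_speed_kmh < 0.5
    return driving and steering_idle and undriven_still


def select_engine_map(on_dyno: bool) -> str:
    # Low-NOx calibration only when a test is suspected; otherwise the
    # higher-performance, higher-emissions calibration.
    return "low_emissions_map" if on_dyno else "performance_map"
```

The point of the sketch is how little it takes: a handful of signals the engine controller already reads is enough to tell the test bench apart from the road.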

It has not been revealed just who at VW did this, or whether other car companies have done the same. (All companies do variable tuning, and it’s “normal” to have modestly higher emissions in real driving than in the test, but this was beyond the pale.) The question everybody is asking is: “What the hell were they thinking?”

That is indeed the question, because I think the central issue is why VW would do this. After all, now that they have been caught, the cost is going to be immense, possibly even ruining one of the world’s great brands. Obviously they did not really consider that they might get caught.

Beyond that, they have seriously reduced the trust that customers and governments place not just in VW but in carmakers in general, and in their software offerings in particular. The damage will spread from VW to all German carmakers and possibly all carmakers, and could mean reduced trust in the software in robocars.

What the hell were they thinking?

The motive is the key thing we want to understand. In the broad sense, they did it because they felt customers would like it, and that would lead to selling more cars. At a secondary level, it’s possible that those involved felt they would gain prestige (and compensation) if they pulled off the wizard’s trick of making a diesel car which was clean and also high performance, at a level that turns out to be impossible.

Why would customers want this?

Well, one answer is that there is an underground car-modding culture that already does this. Lots of people hack their cars’ systems to boost performance, with little concern for the increased pollution. “Chip tuning” is not always illegal, but it often does increase emissions. These people believe emissions legislation just gets in the way of their having a fun car.

But many more customers want performance and would not want to pollute the air. VW gave them a different magic solution — a better performing car and the illusion that they were not polluting. It’s not surprising that people might buy that. Another subset of customers will be genuinely upset that they were lied to and ended up hurting the environment. (Some research suggests that a number of deaths can be attributed to this extra smog.)

Who decided to do this?

We all want to know who decided this. It seems really unlikely that a lone rogue engineer would do it; what’s in it for her or him? Ditto for Bosch, the parts supplier. But engineers would have had to collude with whichever managers decided to do this, and what incentive were they given? Promotion? Bonus? Glory for creating an impossibly good engine? And how did the managers decide to trust the colluding engineers with a company-risking secret?

How many levels of management knew?

It needs somebody high enough up that they win big by doing this. That means somebody who gets a serious bonus if they sell more cars, or whatever else this did for VW. That’s usually not a low-level manager; it’s probably a manager for a large part of the car line. And, of course, did top management plan or know this? It boggles the mind that they might have been so stupid, but it’s possible.

It’s also possible that high management asked the engine systems programmers to do this without informing the middle managers in the chain, but how? It seems hard to believe that many people conspired in this and fooled so many others, convinced they would not be caught. Getting caught means huge penalties: the end of not just your job but your career, and possibly jail time. Could the CEO have been involved? The board?

My best guess is a high-level manager, high enough to benefit from increased sales of all the vehicles with this engine but perhaps not at C-level, who was somehow able to create incentives for a key programmer. But we’ll find out eventually, I suspect.

For robocars…

It’s not too surprising that companies might cheat to improve the bottom line, especially when they convince themselves they won’t get caught. Where does that leave the robocar maker?

My prediction is that robocar vendors will end up self-insuring their vehicle fleets, at least while the software is driving. Conventional insurance, in pay-as-you-drive (PAYD) form, may apply to miles driven with a human at the wheel. The vendors or fleet operators may purchase reinsurance to cover major liabilities, but will do so with a very specific contract with the underwriter which won’t protect them in the event of actual fraud.

If they self-insure, they have zero interest in cheating on safety. If they don’t make a car safe enough, they will be responsible for the cost of every accident. There will be nobody to cheat but themselves, though the pain of injury that goes beyond what a court awards still needs to be considered. One reason for self-insurance is that you will actually feel safer getting into a car, knowing it is the vendor who stands to lose if it is not safe enough.

Of course, in the event of an accident, vendors will work as hard as possible to avoid liability, but this comes at a cost of its own.

Cheats are far more likely if they benefit customers and increase sales. Examples might be ways that cars can break traffic laws in order to get you to places faster. Cars might park (strictly speaking, “stand”) where they should not. Already there are cars with a dial that lets the occupant/controller adjust the speed above the speed limit, and, in fact, these dials are necessary. There has been a lot of recent discussion about other ways in which it is necessary to not strictly observe the law in order to drive well on US roads.

One can imagine a number of other tricks that are not specific to robocars. Cars might try to cheat you on the bill for a taxi ride (just as cab drivers are sometimes known to deliberately take bad routes to run up the fare).

VW/Audi have had some decent robocar projects, and VW’s sponsorship of Stanford’s VAIL lab has funded a lot of that work. Now VW must be downgraded to a vendor that customers will not trust. (There is some irony in that, of course, since at this point VW is probably the company least likely to cheat going forward.)

Would suppliers lie?

There may be more risk from suppliers of technology for robocars. Sensor manufacturers, for instance, may be untruthful about their products’ capabilities or, more likely, their reliability. While the integrators will be inherently distrustful, since they will carry the liability, one can see smaller vendors telling lies if they see it as the only way to land a big, business-making sale.

While they would end up liable if caught, they might not have the resources to pay for that liability, and might care more about making the big time in the hope of not being caught. This risk is why the car industry tends to buy only from huge suppliers known as “tier 1” companies. The smaller suppliers, in tiers 2 and 3, generally can’t sell directly to the big auto OEMs, because big auto companies won’t bet a car line on a small company. Instead, the small companies have to partner with a tier 1 that takes on that responsibility, and, of course, a chunk of the profits.

On the plus side, robocar designs generally expect parts and sensors to fail from time to time, so a good design plans for failure and handles it safely, with the ability, at the very least, to pull safely off the road; another car will be on the way quickly.
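As a sketch of what “planning for failure” can look like in software, here is a minimal, hypothetical fallback policy in Python. The mode names, sensor counts and thresholds are assumptions for illustration, not any vendor’s actual design.

```python
# Minimal sketch of a sensor-failure fallback policy; mode names and
# thresholds are illustrative assumptions only.

from enum import Enum, auto


class DrivingMode(Enum):
    NORMAL = auto()
    DEGRADED = auto()      # reduced speed, wider safety margins
    MINIMAL_RISK = auto()  # pull off the road and stop safely


def choose_mode(healthy_sensors: int, required: int, minimum: int) -> DrivingMode:
    """Pick a driving mode from how many redundant sensors still report healthy."""
    if healthy_sensors >= required:
        return DrivingMode.NORMAL
    if healthy_sensors >= minimum:
        return DrivingMode.DEGRADED
    return DrivingMode.MINIMAL_RISK


# Example: a car that needs 3 of its 4 sensors for normal operation and
# at least 2 to limp along; with only 2 healthy it degrades gracefully.
print(choose_mode(healthy_sensors=2, required=3, minimum=2))
```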

However, most designs do not plan for a sensor that deliberately provides false information, other than as part of defending against computer intrusion, which could turn a component into a malicious and untrustworthy device. But people are thinking about this, which offers some comfort with respect to fraud by a supplier.
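One defence being discussed is cross-checking redundant, independent sensors against one another, so that a single device reporting false data stands out and can be out-voted. A toy sketch, with invented sensor names, readings and tolerance:

```python
# Toy sketch of cross-checking redundant sensors so a single lying (or
# compromised) device can be out-voted; all values here are invented.

from statistics import median


def flag_outliers(readings: dict[str, float], tolerance: float) -> list[str]:
    """Return the sensors whose reading disagrees with the group median."""
    centre = median(readings.values())
    return [name for name, value in readings.items()
            if abs(value - centre) > tolerance]


# Three independent range estimates (metres) for the same obstacle; the
# camera unit reports something implausible and gets flagged for distrust.
print(flag_outliers({"lidar": 42.1, "radar": 41.7, "camera": 12.0},
                    tolerance=5.0))  # ['camera']
```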

Self-certification

This scandal will probably raise more questions about the popular (and still probably correct) approach of having vendors self-certify that they have attained functional safety goals for their systems. These are actually unrelated issues: VW was not self-certifying; it was going through a government certification process and cheating on it. If anything, the scandal reduces trust in government certification approaches. In the public eye, however, reduced trust in vendors will extend to everything they do, including self-certification.

Vendors (large, reputable ones, at least) have strong motives not to lie on self-certification, both because they are liable for the safety failures that are their fault, and because they will be extra liable with possible punitive damages if they deliberately lied.

I have a longer article with more debate on the issues around government regulation and certification of robocars.

This article originally appeared on robocars.com.






Brad Templeton (Robocars.com) is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.









