Uber and Volvo announced an agreement under which Uber will buy, over time, up to 24,000 specially built Volvo XC90s, which will run Uber’s self-driving software and, presumably, offer rides to Uber customers. While the rides are still some time away, the announcement has drawn attention for several reasons.
I’m not clear who originally said it (I first heard it from Marc Andreessen), but “the truest form of a partnership is called a purchase order.” In spite of the scores of partnerships and joint ventures announced for PR in the robocar space, this is a big deal, and it’s a sign of the sort of deal car makers have been afraid of. Volvo will be primarily a contract manufacturer here; Uber will own the special sauce that makes the vehicle work, and it will own the customer. You would rather be Uber in this deal, but what company can refuse a $1B order?
It also represents a big shift for Uber. Uber is often the poster child for the company that replaced assets with software. It owns no cars and yet provides the most rides. Now, Uber is going to move to the capital-intensive model of owning the cars, though it will no longer have to pay drivers. There will be much debate over whether it should make such a shift. As noted, it goes against everything Uber represented in the past, but there is not really much choice.
First of all, doing things the “Uber” way would require that a large number of independent parties buy and operate robocars, then contract with Uber to bring them riders when the owners aren’t using them. Like UberX, minus the driving. The problem is that this world is still a long way away. Car companies have focused on cars that can’t drive unmanned (much, or at all) because that’s acceptable for the private car buyer. They are also far behind companies like Waymo and Uber in producing taxi-capable vehicles.
If Uber waited for the pool of available private cars to get large enough, it would miss the boat. Other companies would have moved into its territory and undercut it with cheaper and cooler robotaxi service.
Secondly, you really want to be very sure about the vehicles you deploy in your first round. You want to have tested them, and you need to certify their safety, because you are going to be liable in accidents no matter what you do. You can get private owners to sign contracts accepting liability, but as the deep pocket you will get sued anyway. This means you want to control the whole experience.
The truth is, capital is pretty cheap for companies like Uber. It is even cheaper for companies like Apple and Google, which have the world’s largest pools of spare capital sitting around. The main risk is that these custom robocars may have no resale value if you bet wrong on how to build them. Fortunately, taxis wear out in about 5 years of heavy use, so the capital isn’t committed for very long.
Uber continues to have no fear of telling the millions of drivers who work “for” it that it will be rid of them some day. Driving for Uber is an unusual job, and nobody thinks of it as a career, so Uber can get away with this.
Academic ethicists, when defending discussions of the Trolley Problem, claim that while they understand the problems are not real, they are still valuable teaching tools for examining real questions.
The problem is that the public doesn’t understand this, and is morbidly fascinated, beyond all rationality, with the idea of machines deciding who lives or dies. This led Barack Obama to ask about it in his first statement on robocars, and has prompted many other declarations that we must settle this nonsense question before we deploy robocars on the road. The now-revoked proposed NHTSA guidelines of 2016 included a theoretically voluntary requirement that vendors outline their solutions to this “problem.”
This almost got more real last week, when a proposed UK bill would have demanded trolley solutions. The bill was amended at the last minute, dodging a bullet that would have delayed the deployment of life-saving technology while truly academic questions were being resolved.
It is time for ethical ethicists to renounce the trolley problem. Even if, inside, they still think it’s got value, that value is far outweighed by the irrational fears and actions it triggers in public debate. Real people are dying every day on the roads, and we should not delay saving them to figure out how to do the “right” thing in hypothetical situations that are actually extremely rare to nonexistent. Figuring out the right thing is the wrong thing. Save solving trolley problems for version 4, and get to work on version 0.2.
There is real ethical work to be done, covering situations that happen every day: real-world safety tradeoffs and their morality, driving on roads where breaking the vehicle code is the norm, and weighing cost against safety. These are the places where ethical expertise can be valuable.
For a long time I have promoted the idea of an open source simulator. Now two projects are underway.
The first is the Project Apollo simulator from Baidu; a new entrant called Carla is also in the game.
This is good to see, but I hope the two simulators also work together. One real strength of an open platform simulator is that people all around the world can contribute scenarios to it, and then every car developer can test their system in those scenarios. We want every car tested in every scenario that anybody can think of.
Waymo has developed its own simulator, and fed it with every strange thing its cars have encountered in 5M kilometers of real-world driving. It’s one of the things that gives Waymo an edge. It has also loaded the simulator with everything its team members can think of. This way, its driving system has the experience of seeing and trying out every odd situation that would be encountered in many lifetimes of human driving, and eventually on every type of road.
That’s great, but no one company can really build it all. This is one of the great things to crowdsource. Let all the small developers, all the academics, and even all the hobbyists build simulations of dangerous scenarios. Let people record and build scenarios for driving in every city of the world, in every situation. No one company can do that but the crowd can. This can give us the confidence that any car has at least at some level encountered far more than any human driver ever could, and handled it well.
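To make the crowdsourcing idea concrete, here is a minimal sketch of what a shared scenario format might look like. All of the names and fields below are invented for illustration; neither Apollo nor Carla is assumed to use this schema.

```python
# Hypothetical sketch of a crowd-sourced driving scenario format.
# Field names, units, and the runner interface are all assumptions,
# not the actual Apollo or Carla APIs.
from dataclasses import dataclass, field

@dataclass
class Actor:
    kind: str            # "pedestrian", "cyclist", "vehicle", ...
    start_xy: tuple      # initial position in metres, scenario frame
    velocity_xy: tuple   # initial velocity in m/s

@dataclass
class Scenario:
    name: str
    map_region: str      # e.g. "urban 2-lane" or "4-way unsignalled stop"
    actors: list = field(default_factory=list)

    def add_actor(self, actor: Actor) -> "Scenario":
        self.actors.append(actor)
        return self

# Anyone, anywhere, can contribute a scenario...
jaywalker = (
    Scenario(name="jaywalker-from-parked-cars", map_region="urban 2-lane")
    .add_actor(Actor("pedestrian", (12.0, 2.5), (-1.4, 0.0)))
    .add_actor(Actor("vehicle", (8.0, 0.0), (0.0, 0.0)))  # parked, occluding
)

def run(scenario: Scenario, driving_stack) -> bool:
    """Run one driving stack against one scenario.

    Returns True if the stack reports handling it without incident.
    """
    return driving_stack(scenario)
```

The point of a shared schema like this is that a scenario written once, by anyone, could then be run against every developer’s driving stack.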
Some auto vendors have proposed a liability rule for privately owned robocars that would protect them from some liability. The rule would declare that if you bought a robocar from them and did not maintain it according to the required maintenance schedule, the car vendor would not be liable for any accident it had.
It’s easy to see why automakers would want this rule. They are scared of liability and anything that can reduce it is a plus for them.
At the same time, this will often not make sense. The fact that somebody didn’t change the oil or rotate the tires should not remove liability for a mistake by the driving system that had no relation to those factors.
What’s particularly odd here is that robocars should always be very well maintained. That’s because they will be full of sensors to measure everything that’s going on, and they will also be able to constantly test every system that can be tested.
Consider the brakes, for example. Every time a robocar brakes, it can measure that the braking is happening correctly. It can measure the temperature of the brake discs. It can listen to the sound or detect vibrations. It can even, when unmanned, find itself on an empty street and hit the brakes hard to see what happens.
In other words, unexpected brake failure should be close to impossible (particularly since robocars are being designed with 2 or 3 redundant braking systems.)
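The continuous self-testing described above can be sketched in a few lines: every braking event contributes a measurement, and a drift from the expected baseline flags the car for service long before the brakes can fail outright. This is only an illustration of the idea; the thresholds, field names, and window size are invented, not any vendor’s actual diagnostics.

```python
# Hedged sketch of continuous brake self-testing. Every braking event
# records the measured deceleration; a sustained shortfall against the
# car's known baseline flags it for service. All parameters are
# illustrative assumptions.
from statistics import mean

class BrakeMonitor:
    def __init__(self, baseline_decel: float, tolerance: float = 0.15):
        self.baseline = baseline_decel   # expected deceleration, m/s^2
        self.tolerance = tolerance       # allowed fractional shortfall
        self.recent = []                 # rolling window of measurements

    def record_braking_event(self, measured_decel: float) -> None:
        self.recent.append(measured_decel)
        if len(self.recent) > 20:        # keep only the last 20 events
            self.recent.pop(0)

    def needs_service(self) -> bool:
        if len(self.recent) < 5:         # not enough data to judge yet
            return False
        shortfall = 1.0 - mean(self.recent) / self.baseline
        return shortfall > self.tolerance

monitor = BrakeMonitor(baseline_decel=7.0)
for d in [6.9, 7.1, 6.8, 7.0, 6.95]:     # healthy braking events
    monitor.record_braking_event(d)
print(monitor.needs_service())           # healthy brakes -> False
```

Because every brake application doubles as a test, a degrading system shows up as a trend in the data, not as a surprise on the road.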
More to the point, a robocar will take itself in for service. When your car is not being used, it will drive itself over for an oil change or any other maintenance it needs. You would have to deliberately stop it to prevent it from being maintained on schedule. Certainly no car in a taxi fleet will go unmaintained except through deliberate negligence.