Robohub.org
 

Why Tesla’s Autopilot and Google’s car are entirely different animals

by Brad Templeton
28 October 2015

In the buzz over the Tesla autopilot update, a lot of commentary has appeared comparing this Autopilot with Google's car effort and other attempts at what I would call a "real" robocar: one that can operate unmanned, or with a passenger who is not paying attention to the road. We've seen claims that "Tesla has beaten Google to the punch," but while the Tesla release is a worthwhile step forward, the two should not be confused as all that similar.

Tesla's autopilot isn't even particularly new. Several car makers have had similar products in their labs for several years, and some have released them to the public, at first in a "traffic jam assist" mode, and reportedly in full highway cruise mode outside the USA. The first companies to announce such products were Cadillac, with "Super Cruise," and VW, with its "Temporary Autopilot," though both delayed their releases until much later.

Remarkably, Honda showed off a car doing this sort of basic autopilot (without lane change) ten years ago, and sold it only in the UK; they later decided to discontinue the project. That this was actually promoted as an active product a decade ago gives you some clue as to how different it is from the bigger efforts.

Cruise products like these require constant human supervision. With regular cruise control, you could take your feet off the pedals, but you might have to intervene fairly often, either with the speed-adjust buttons or by taking full control. Interventions could come several times a minute. Later came "Adaptive Cruise Control," which still required you to steer and fully supervise, but would rarely require intervention on the pedals while driving on the highway; a few times an hour might be acceptable.

The new autopilot systems allow you to take your hands off the wheel but still demand your full attention. Users report needing to intervene rarely on some highways, but frequently on other roads. Once again, if you only need to intervene once an hour, the product could make your drive more relaxing.

Now consider a car that drives without supervision …

Human drivers have minor accidents about every 2,500 to 6,000 hours, depending on what figures you are using — that would be about once every 10 to 20 years of driving. A fatal accident takes place every 2,000,000 hours of driving — about once every 10,000 years for the typical driver, thankfully a much longer span than a person’s lifetime.
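The hours-to-years conversion above can be sanity-checked with a quick sketch. The hours-per-year figure here is an assumption of my own (roughly 250 hours of driving per year for a typical driver), not a number from the article, so the results land in the same ballpark as the text rather than matching it exactly:

```python
# Back-of-envelope check of the accident-rate figures,
# assuming (hypothetically) about 250 hours of driving per year.
HOURS_PER_YEAR = 250

minor_low_hours = 2_500       # low estimate: hours between minor accidents
minor_high_hours = 6_000      # high estimate: hours between minor accidents
fatal_hours = 2_000_000       # hours between fatal accidents

print(minor_low_hours / HOURS_PER_YEAR)    # -> 10.0 years
print(minor_high_hours / HOURS_PER_YEAR)   # -> 24.0 years
print(fatal_hours / HOURS_PER_YEAR)        # -> 8000.0 years
```

A slightly different hours-per-year assumption shifts these figures accordingly, which is why the article's "10 to 20 years" and "10,000 years" are best read as orders of magnitude.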

If a full robocar needs human intervention, logic tells you that it’s going to have an accident because there is nobody there to intervene. Just like with human drivers, most of the errors that would cause an accident are minor: running off the road, fender benders. Not every mistake that could cause a crash or a fatality causes one. Indeed, humans make mistakes that might cause a fatality far more often than every 2,000,000 hours, because we “get away” with many of them.

Even so, the difference is staggering. A cruise autopilot (such as Tesla's) is a workable product if you have to correct it a few times an hour, whereas a full robocar product is workable only if it needs correction after decades or even lifetimes of driving. This is not a difference of degree, it is a difference of kind. It is why there is probably not an evolutionary path from the cruise/autopilot systems based on existing ADAS technologies to a real robocar. Doing many thousands of times better will not be done by incremental improvement. It almost surely requires a radically different approach, and probably very different sensors.
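The "difference in kind" can be put in rough numbers. The intervention intervals below are illustrative assumptions, not measurements: a supervised autopilot tolerating a couple of corrections per hour, versus a full robocar that must at least match a human's minor-accident interval:

```python
# Rough sketch of the reliability gap between a supervised cruise
# autopilot and a full robocar. Both intervals are illustrative.

autopilot_interval_hours = 0.5    # acceptable: a couple of corrections per hour
robocar_interval_hours = 2_500    # must at least match human minor-accident rate

improvement_needed = robocar_interval_hours / autopilot_interval_hours
print(improvement_needed)  # -> 5000.0
```

Even under these generous assumptions the gap is thousands-fold, and matching a human's fatal-accident rate would widen it by orders of magnitude more.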

To top it all off, a full robocar doesn’t just need to be great at avoiding accidents. If it’s running unmanned, with no human to help it at all, it needs a lot of other features and capabilities too.

The mistaken belief in an evolutionary path also explains why some people imagine robocars are many decades away. If you wanted evolutionary approaches to take you to 100,000x better, you would expect to wait a long time. When an entirely different approach is required, what you learn from the old approach doesn’t help you predict how the other approaches — including unknown ones — will do.

It does teach you something, though. By simply being on the road, Tesla will encounter all sorts of interesting situations its developers weren’t expecting, and they will use this data to train new generations of software that do better. They will learn things that help them make the revolutionary unmanned product they hope to build in the 2020s. This is a good thing.

Google and others have also been out learning that, and soon more teams will.

This post originally appeared on Robocars.com.







Brad Templeton of Robocars.com is an EFF board member, Singularity U faculty member, self-driving car consultant, and entrepreneur.



©2021 - ROBOTS Association