Robohub.org
 

No, a Tesla didn’t predict an accident and brake for it


by
11 January 2017




You may have seen a lot of press around a dashcam video of a car accident in the Netherlands. It shows a Tesla on Autopilot hitting the brakes about 1.4 seconds before a red car crashes hard into a black SUV that isn’t visible from the dashcam’s viewpoint. Much of the press reported that the Tesla predicted the two cars would collide and, because of the imminent accident, hit the brakes to protect its occupants.

The accident is brutal but apparently nobody was hurt.

https://www.youtube.com/watch?v=lqqN5iRrAiM

The press speculation is incorrect. It gained some fuel because Elon Musk himself retweeted the report, but Tesla has in fact confirmed the alternate, more probable story, which does not involve any prediction of the future accident. In fact, the red car plays little to no role in what took place.

Tesla’s autopilot uses radar as a key sensor. One great thing about radar is that it tells you how fast every radar target is going, as well as how far away it is. Automotive radar doesn’t tell you very accurately where a target is (roughly, it can tell you which lane a target is in). Radar beams bounce off many things, including the road. That means a beam can bounce off the road under the car in front of you and hit a car in front of it, even if you can’t see that car. Because the radar tells you “I see something in your lane 40m ahead going 20mph and something else 30m ahead going 60mph,” you know it’s two different things.
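To make the idea concrete, here is a minimal sketch of that logic. Everything in it (the `RadarReturn` type, the gating thresholds, the clustering) is my own illustration, not Tesla’s implementation: two echoes that differ clearly in range or in Doppler speed must come from different physical objects, even if one of them is visually hidden.

```python
# Hypothetical sketch: why radar range + Doppler speed lets you separate
# two in-lane targets even when one car occludes the other.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float    # distance to the reflecting object, metres
    speed_mps: float  # speed of the object along the beam, m/s
    lane: int         # coarse lateral estimate: which lane it is in

def distinct_targets(returns, range_gate=5.0, speed_gate=2.0):
    """Cluster raw returns: echoes that differ clearly in range or
    speed cannot be the same physical object (gates are invented)."""
    targets = []
    for r in returns:
        for t in targets:
            if (abs(t.range_m - r.range_m) < range_gate and
                    abs(t.speed_mps - r.speed_mps) < speed_gate):
                break  # consistent with an existing target
        else:
            targets.append(r)  # a new, distinct object
    return targets

# One echo from the visible car 30 m ahead at highway speed, one that
# bounced under it off a hidden, braking car 40 m ahead:
echoes = [RadarReturn(30.0, 27.0, 0), RadarReturn(40.0, 9.0, 0)]
print(len(distinct_targets(echoes)))  # two separate objects in our lane
```

The key point is that range and speed together disambiguate: neither alone would prove the second echo is a different vehicle.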

The Tesla’s radar saw just that: the black SUV was braking hard (possibly for a dirt patch that appears on the video) and the red car wasn’t. Regardless of the red car’s presence, the autopilot knew that if a car ahead was braking hard, it should brake hard too, and it did. Yes, it could perhaps also have calculated that the red car, if it kept going, would hit the black car, but that isn’t really relevant: it’s clear the Tesla should stop regardless of what the red car does. Tesla reported on its blog that it was doing more with the radar, including tracking hidden vehicles with it. The ability of automotive radar to do this has been known for some time, and I have always presumed most teams take advantage of it. You don’t always get returns from hidden cars, but it’s worth using them when you do.
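The decision itself is simple, and that simplicity is the point of the paragraph above. A sketch, with an invented `should_brake` function and an invented speed margin (the real system’s logic and thresholds are not public): if any tracked object in your lane is going much slower than you, brake, whether or not the car directly in between has reacted yet.

```python
# Hypothetical sketch of the braking decision: react to any slow or
# braking object tracked in our lane, visible or not.
def should_brake(ego_speed_mps, lane_targets, speed_margin=10.0):
    """lane_targets: list of (range_m, speed_mps) tuples for objects
    the radar places in our lane. Margin of 10 m/s is invented."""
    return any(speed < ego_speed_mps - speed_margin
               for _, speed in lane_targets)

# Ego at 27 m/s; the red car just ahead is still at 27 m/s, but the
# hidden SUV two cars up has slowed to 9 m/s. The radar return from
# the hidden car alone is enough to trigger braking:
print(should_brake(27.0, [(30.0, 27.0), (40.0, 9.0)]))  # True
```

Note that the red car’s state never enters the decision, which matches Tesla’s account: the braking follows from the SUV’s return, not from any prediction about the red car.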

In the future, we will see robocar systems predicting accidents, but I am not aware of any team having announced this. All robocars track all objects ahead of them, estimating position and velocity, extrapolating their trajectories, and predicting where they will go. Those predictions could also detect that two vehicles will hit if they continue on their current courses, and even that past a certain point the collision can no longer be avoided. If an imminent accident is predicted, it would make sense to know that and react to it in advance. A car might even predict a bit of what will happen after the accident, though that is chaotic.
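The extrapolation step described above can be sketched in a few lines. This is my own illustration under stated assumptions (constant-velocity motion along one lane, an invented 2 m proximity threshold and 3-second horizon), not any deployed system: step the predicted positions forward and report the first time the gap between two tracked objects closes below the threshold.

```python
# Hypothetical sketch: predict a collision by extrapolating two tracked
# objects at constant velocity along the lane.
def predict_collision(a, b, horizon_s=3.0, dt=0.1, min_gap_m=2.0):
    """a, b: (position_m, speed_mps) along the lane. Returns the first
    time within the horizon at which the predicted gap closes below
    min_gap_m, or None if no collision is predicted."""
    t = 0.0
    while t <= horizon_s:
        pa = a[0] + a[1] * t  # predicted position of object a
        pb = b[0] + b[1] * t  # predicted position of object b
        if abs(pa - pb) < min_gap_m:
            return t
        t += dt
    return None

# Red car at 30 m doing 27 m/s, braking SUV at 40 m doing 9 m/s:
# the 10 m gap closes at 18 m/s, so impact is predicted in about 0.5 s.
print(predict_collision((30.0, 27.0), (40.0, 9.0)))
```

A real system would, of course, predict braking and steering responses rather than assume constant velocity, which is exactly why the post-impact part is described as chaotic.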

A system like that would outperform the autopilot or any automatic emergency braking system. Presently, those systems largely track objects in their own lane. They don’t brake just because cars are stopped in adjacent lanes; if they did, they could not work in traffic jams or next to carpool lanes where lanes move at different speeds, and they could not deal with stalled cars on the side of the road.

However, a car in the lane to the right that saw what the Tesla saw would still be smart to brake. Tesla has not commented on this, but I presume its system would not have braked had it been in that lane, at least not before the accident. It might have braked afterwards, as other cars like the red car suddenly moved into the right lane.







Brad Templeton, Robocars.com is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.


©2026.02 - Association for the Understanding of Artificial Intelligence