Stanford’s self-driving DeLorean goes drifting for Back to the Future Day


by Brad Templeton
21 October 2015




Last night, I attended Stanford’s unveiling of its newest self-driving research vehicle: an old DeLorean, heavily modified to perform drifting experiments — drifting being the technique where you deliberately let the rear wheels break traction and slide.

Stanford managed to get Jamie Hyneman of Mythbusters to host the event, so there was a good crowd. He asked, “Why a DeLorean?” What they should have said was:

“The way I see it, if you’re going to build a self-driving drifting car, why not do it with some style?”

But, instead, they got into the technical reasons for choosing a Delorean.

They called the car Marty, and it was launched the day before “Back to the Future Day” — Oct 21, 2015, the date to which Marty travels in the future in the second movie.

But back to the present. This car, with its rear-wheel drive and rear-mounted engine, is not a great car to drive. The engineers have removed the engine and replaced it with dual electric motors from Renovo, creating a car that can drive the two rear wheels independently. This means the software can spin the wheels at different rates and do things no human driver ever could, including special types of drifting. The car is already able to turn tighter doughnuts (circles) than a human could.
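To make that concrete, here is a minimal sketch of why independent rear torques matter. This is my own illustration, not Stanford’s or Renovo’s control code, and the wheel radius and track width are typical sports-car guesses rather than the DeLorean’s specs:

    # Illustrative sketch: yaw moment from independent rear-wheel torques.
    # Numbers are assumed, typical sports-car values.

    WHEEL_RADIUS = 0.33  # metres
    TRACK_WIDTH = 1.50   # metres between the rear wheels

    def yaw_moment(torque_left, torque_right):
        """Yaw moment (N*m) about the car's centre from unequal rear torques.

        Each motor torque becomes a longitudinal tire force (torque / radius);
        the left/right difference, acting across the track width, twists the car.
        """
        force_left = torque_left / WHEEL_RADIUS
        force_right = torque_right / WHEEL_RADIUS
        return (force_right - force_left) * (TRACK_WIDTH / 2.0)

    # A conventional drivetrain is stuck near torque_left == torque_right.
    # Dual motors can drive one wheel forward while braking the other:
    print(yaw_moment(-100.0, 400.0))  # ~1136 N*m helping rotate the car

That twisting force, commanded directly by software, is what lets the car hold spins and slip angles that a single engine feeding both wheels never could.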

Normally, drifting is a bad idea. It means a loss of control and a loss of grip — the contact between the tires and the road is the sole tool you have to steer and control the car, and you would only give it up if you absolutely had to. Perhaps the research will show that there are times when you might want to.

Drifting is usually done for show — it will rarely help you in a race — but Stanford’s team wants to discover whether the robot’s ability to do inhuman driving might offer more “outs” in a dangerous situation, like trying to avoid a collision. A car might twist its wheels (perhaps some day all of its wheels) and spin them at different speeds to enable it to take a path which could avoid an accident.

In effect, it’s like making a vehicle that can drive like a Hollywood stunt car. In movies, stunt drivers often make fairly improbable, even impossible, moves to avoid accidents. A classic Hollywood scene involves a car tilting onto two wheels to get through a tiny gap. The Stanford team did not propose this, and it’s a pretty hard thing to do, but it’s one way to envisage the general idea.

Up to now, research on accident avoidance has been fairly low-key. After all, the main task is to be able to drive safely in the lane you are supposed to be in. But eventually, teams will focus on what to do when things go wrong. For now, though, the priority is to make sure things don’t go wrong. Someday, they may even focus on the infamous trolley problem.

Generally, drift or not, robots should become very good at avoiding accidents. They will have detailed knowledge of the physics of their tires, they will calculate without panic, and they will drive with full confidence, missing obstacles by very thin margins while staying safe. A human can’t confidently navigate a space only a few inches wider than the car, but a robot could. A robot will also always use the optimal combination of steering and braking, something humans need a lot of training to approach. Your tires can give you braking force or steering force, but you must reduce one to get more of the other, so often the best strategy is to brake first and then steer, though the human instinct is to do both at once.
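Engineers picture that tradeoff as a “friction circle”: the tires have one total grip budget that braking and steering must share. A toy calculation, using a friction coefficient I’ve assumed as roughly representative of dry asphalt, shows why committing fully to one and then the other beats splitting the budget:

    import math

    # Toy friction-circle model. MU is an assumed value, not measured data.
    MU = 0.9                # tire-road friction coefficient, dry asphalt-ish
    G = 9.81                # m/s^2
    MAX_ACCEL = MU * G      # total acceleration the tires can deliver

    def lateral_budget(braking_accel):
        """Steering acceleration left over while braking this hard.

        Friction circle: sqrt(a_brake^2 + a_steer^2) <= MAX_ACCEL.
        """
        return math.sqrt(max(MAX_ACCEL**2 - braking_accel**2, 0.0))

    print(lateral_budget(0.0))                       # ~8.83 m/s^2: all grip to steering
    print(lateral_budget(MAX_ACCEL / math.sqrt(2)))  # ~6.24 m/s^2: a 50/50 split costs ~30%
    print(lateral_budget(MAX_ACCEL))                 # 0.0: all grip spent on braking

A panicking human who brakes and swerves simultaneously is operating in that middle row; a computer can sequence the two to stay on the edge of the circle.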

Stanford’s car is not super autonomous. It is meant to test algorithms in private, open spaces. So it won’t be avoiding obstacles or plotting lanes on a highway; it will be testing how a computer can get the most use from the car’s tires.

This article originally appeared on robocars.com.





Brad Templeton (Robocars.com) is an EFF board member, Singularity U faculty, a self-driving car consultant, and an entrepreneur.








 
