Robocars at CES: Supervised traffic jam assist


by Brad Templeton
08 January 2015




Delphi’s car bristling with sensors — 6 LIDARS and even more radars. Photo credit: Brad Templeton.

After a short first day looking at robocars at CES, a fuller day was filled with the usual equipment — cameras, TVs, audio and the like — and visits to several car booths.

I’ve expanded my captioned gallery of notable things, covering cars and other technology.

Lots of companies were demonstrating traffic jam assist at CES — simple self-driving at low speeds among other cars. All the demos were of a supervised traffic jam assist. This style of product (along with supervised highway cruising) is the first thing car companies are delivering (though they are also delivering various parking assist and valet parking systems).

This makes sense, as it’s an easy problem to solve. So easy, in fact, that many of them now admit they are working on a real traffic jam assist, which will drive the jam for you while you do e-mail or read a book. This is a readily solvable problem today — you really just have to follow the other cars, and you are going slowly enough that, short of a catastrophic error like going full throttle, you aren’t going to hurt people no matter what you do … at least on a highway where there are no pedestrians or cyclists. As such, a full auto traffic jam assist should be the first product we see from car companies.
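To make the "just follow the other cars" point concrete, here is a minimal sketch of the kind of time-gap following controller such a system might use. This is my own illustration, not any vendor's code; the gains and limits are assumptions chosen for readability.

```python
# Minimal sketch of a time-gap following controller for a traffic jam assist.
# All gains and limits here are illustrative assumptions, not production values.

def follow_control(gap_m, own_speed_mps, lead_speed_mps,
                   time_gap_s=1.5, standstill_m=3.0,
                   kp_gap=0.3, kp_speed=0.8,
                   max_accel=1.0, max_brake=3.0):
    """Return a commanded acceleration (m/s^2) that holds a safe gap at jam speeds."""
    # Desired gap grows with speed: a standstill cushion plus a time gap.
    desired_gap = standstill_m + time_gap_s * own_speed_mps
    gap_error = gap_m - desired_gap               # positive: we are too far back
    speed_error = lead_speed_mps - own_speed_mps  # positive: lead is pulling away
    accel = kp_gap * gap_error + kp_speed * speed_error
    # Clamp hard: at low speed, bounded accel/brake keeps any single error benign.
    return max(-max_brake, min(max_accel, accel))

# Example: 10 m behind a lead car doing 4 m/s while we do 5 m/s -> gentle braking.
print(follow_control(gap_m=10.0, own_speed_mps=5.0, lead_speed_mps=4.0))
```

The clamps matter more than the gains here: at jam speeds, bounding acceleration and braking is what makes any single control error benign.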

None of them will say when they might do this. The barrier is not so much technological as corporate — concern about liability and image. It’s a shame, because frankly the supervised cruise and traffic jam assist products are just in the “pleasant extra feature” category. They may help you relax a bit (if you trust them) as cruise control does, but they give you little else. A “read a book” level system would give people back time, and signal the true dawn of robocars. It would probably sell for lots more money, too.

The most impressive car is Delphi’s, a collaboration with folks out of CMU. The Delphi car, a modified Audi SUV, has no fewer than six 4-plane LIDARs and an even larger number of radars. It helps if you make the radars, as otherwise this is an expensive bill of materials. With all the radars, the vehicle can look left and right, and back left and back right, as well as forward, which is what you need for dealing with intersections where cross traffic doesn’t stop, and for changing lanes at high speed.

As a refresher: Radar gives you great information, including speed on moving objects, and sucks on stationary ones. It goes very far and sees through all weather. It has terrible resolution. LIDAR has more resolution but does not see as far, and does not directly give you speed. Together they do great stuff.
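As a toy illustration of that complementarity (my own sketch; the noise figures are assumptions, not Delphi's specs), a fused track can take its position mostly from the LIDAR and its speed directly from the radar:

```python
# Toy fusion of one LIDAR and one radar return for the same tracked car.
# The sigma values are assumptions for illustration, not real sensor specs.

def fuse_track(lidar_range_m, radar_range_m, radar_speed_mps,
               lidar_sigma_m=0.1, radar_sigma_m=2.0):
    """Inverse-variance weighting: the precise LIDAR range dominates position;
    speed comes straight from the radar's Doppler measurement."""
    w_lidar = 1.0 / lidar_sigma_m ** 2
    w_radar = 1.0 / radar_sigma_m ** 2
    fused_range = (w_lidar * lidar_range_m + w_radar * radar_range_m) / (w_lidar + w_radar)
    return fused_range, radar_speed_mps

# Example: LIDAR says 42.3 m, radar says 44.0 m and closing at 3.1 m/s.
print(fuse_track(lidar_range_m=42.3, radar_range_m=44.0, radar_speed_mps=-3.1))
```

In a real stack this happens inside a Kalman-style tracker, but the intuition is the same: each sensor contributes the quantity it measures best.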

For notes and photos, browse the gallery.

A version of this article originally appeared on robocars.com.





Brad Templeton, Robocars.com, is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.




