
Waymo’s left turns frustrate other drivers


by Brad Templeton
29 August 2018




This week’s hot story was again from Amir at The Information, and there is even more detail in the author’s Twitter thread.

The short summary: Amir was able to find a fair number of Waymo’s neighbours in Chandler, Arizona, who are getting frustrated by the over-cautious driving patterns of the Waymo vans. Several used the words, “I hate them.”

A lot of the problems involve over-hesitation at an unprotected left turn near the Waymo HQ. The car is just not certain when it can turn. There is also additional confirmation of what I reported earlier: operation with no safety driver is still very rare, and only on limited streets.

Unprotected turns, especially left ones, have always been one of the more challenging elements of day-to-day driving. You must contend with oncoming traffic and pedestrians who may be crossing as well. You may have to do this against traffic that is moving at high speed.

While I am generally not surprised that these intersections can be a problem, I am a little surprised they are one for Waymo. Waymo is the only operating car that features a steerable, high-resolution, long-range LIDAR on board. This means it can see out 200m or more (in a narrower field of view), which lets it get a good look at traffic coming at it in such situations. (Radar also sees such vehicles, but with much less resolution.)
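To see why sensing range is not the limiting factor, here is a back-of-the-envelope sketch in Python. The numbers (45 mph oncoming traffic, roughly 4 seconds to clear the intersection, a 2 second margin) are my own illustrative assumptions, not Waymo's actual parameters:

# Illustrative only: how far ahead must a sensor see to clear an
# unprotected left turn against fast oncoming traffic?

def required_sight_distance(oncoming_speed_mph: float,
                            turn_clearance_s: float,
                            safety_margin_s: float) -> float:
    """Distance (metres) an oncoming car covers while we complete the turn."""
    speed_ms = oncoming_speed_mph * 0.44704  # mph -> m/s
    return speed_ms * (turn_clearance_s + safety_margin_s)

# Assumed: 45 mph traffic, ~4 s to clear the turn, 2 s of extra margin.
print(f"{required_sight_distance(45.0, 4.0, 2.0):.0f} m")  # roughly 120 m

Even with a generous margin, that works out to roughly 120 m of required sight distance, comfortably inside a 200 m sensing horizon.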

For Waymo, this is not a problem of sensors but one of being too timid. One reason they are operating in Phoenix is that it’s a pretty easy area to be timid in. The instinct of all teams is to avoid early risks that lead to early accidents. That instinct became even stronger in Arizona after the Uber fatality, which used up a lot of the public’s tolerance for such errors. As such, the “better safe than sorry” philosophy which was already present in Waymo and most teams has been strengthened.

The problem is, it eventually needs to be weakened. Timid drivers won’t make it in the real world, and they won’t make it far at all in non-tame places like Boston. Thus you face a nasty trade-off:

  • The more timid you are, the more problems you have and the more you annoy the drivers behind you.
  • The less timid you are, the greater the risk of a mistake and an accident.

That leaves only a few options: other drivers must adapt better to a timid driver, or the public must get more tolerant of a slightly higher (but still better-than-human) accident risk.

Hype around self-driving cars has pushed some of the public to expect computerized perfection. That’s not coming.

Changing how people react?

Believe it or not, it is not as impossible to change the behaviour of other drivers as you might think. In Manhattan not that many years ago, two things were very common: endless honking and gridlock. Both were the result of well-ingrained aggressive habits of New York drivers. The city decided to crack down on both and got aggressive with fines. It worked, and both are now vastly reduced.

It’s less possible to adjust the patterns of pedestrians. Some recent articles suggesting this must happen have gotten a lot of attention. I think some of it will happen, but much less than hoped; that will be the subject of another article.

A robocar should eventually get very good at driving decisions that involve physics, because computers are very good at physics. Unlike humans, who have difficulty precisely timing when an oncoming car will get to them, robots could in theory do it with frightening precision, making turns through gaps that seem tiny to humans. This would frighten other drivers and cause problems, so it’s not something we’ll see today.
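To make that concrete, here is a minimal, hypothetical gap-acceptance sketch. The OncomingVehicle structure, the 4 second clearance time and the 1.5 second margin are assumptions for illustration, not any real planner's interface:

# Illustrative gap-acceptance check for an unprotected left turn:
# accept only if every oncoming vehicle arrives after we have cleared
# the conflict zone, plus a tunable safety margin.

from dataclasses import dataclass
from typing import List

@dataclass
class OncomingVehicle:
    distance_m: float   # distance to the conflict point
    speed_ms: float     # closing speed in m/s

def gap_is_acceptable(vehicles: List[OncomingVehicle],
                      turn_clearance_s: float = 4.0,
                      margin_s: float = 1.5) -> bool:
    """Return True if all oncoming vehicles arrive after the turn is complete."""
    for v in vehicles:
        if v.speed_ms <= 0:   # stationary or receding traffic is not a conflict
            continue
        time_to_arrival = v.distance_m / v.speed_ms
        if time_to_arrival < turn_clearance_s + margin_s:
            return False      # too tight: wait for the next gap
    return True

# A car 90 m away closing at 20 m/s arrives in 4.5 s; with a 4 s turn plus a
# 1.5 s margin this planner waits, though many humans would take that gap.
print(gap_is_acceptable([OncomingVehicle(90.0, 20.0)]))  # False

The margin_s parameter is essentially the timidity knob discussed above: raise it and the car waits out gaps a human would take; lower it and the accident risk creeps up.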

This isn’t the last story we’ll see of robocars frustrating other drivers. That’s particularly true if we unwisely keep them at the speed limit. My long-term belief is that most of the traffic code should be eliminated for robocars and replaced by, “it’s legal if you can do it safely and without unfairly impeding traffic.” The whole idea of a traffic code only makes sense for humans. With robots, since there will never be more than a few dozen different software stacks on the road, you can just get all the designers together in a room and work out what should be done and what is safe. Traffic codes and other laws exist to deal with humans who can’t be trusted to know the rules, or even to obey the rules they know. While companies can’t be trusted to do anything but look after their own interests, you can easily make it in their interests to follow the rules.




Brad Templeton, Robocars.com is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.




