Robohub.org
 

Self-driving cars for country roads


07 May 2018





A team of MIT researchers tested MapLite on a Toyota Prius outfitted with a range of LIDAR and IMU sensors.
Photo courtesy of CSAIL.


By Adam Conner-Simons | Rachel Gordon

Uber’s recent self-driving car fatality underscores the fact that the technology is still not ready for widespread adoption. The reality is that there aren’t many places where today’s self-driving cars can actually reliably drive. Companies like Google only test their fleets in major cities, where they’ve spent countless hours meticulously labeling the exact 3-D positions of lanes, curbs, and stop signs.

“The cars use these maps to know where they are and what to do in the presence of new obstacles like pedestrians and other cars,” says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “The need for dense 3-D maps limits the places where self-driving cars can operate.”

Indeed, if you live along the millions of miles of U.S. roads that are unpaved, unlit, or unreliably marked, you’re out of luck. Such streets are often much more complicated to map, and get a lot less traffic, so companies aren’t incentivized to develop 3-D maps for them anytime soon. From California’s Mojave Desert to Vermont’s Green Mountains, there are huge swaths of America that self-driving cars simply aren’t ready for.

One way around this is to create systems advanced enough to navigate without these maps. In an important first step, Rus and colleagues at CSAIL have developed MapLite, a framework that allows self-driving cars to drive on roads they’ve never been on before without 3-D maps.

MapLite combines simple GPS data that you’d find on Google Maps with a series of sensors that observe the road conditions. In tandem, these two elements allowed the team to autonomously drive on multiple unpaved country roads in Devens, Massachusetts, and reliably detect the road more than 100 feet in advance. (As part of a collaboration with the Toyota Research Institute, researchers used a Toyota Prius that they outfitted with a range of LIDAR and IMU sensors.)

“The reason this kind of ‘map-less’ approach hasn’t really been done before is because it is generally much harder to reach the same accuracy and reliability as with detailed maps,” says CSAIL graduate student Teddy Ort, who was a lead author on a related paper about the system. “A system like this that can navigate just with on-board sensors shows the potential of self-driving cars being able to actually handle roads beyond the small number that tech companies have mapped.”

The paper, which will be presented in May at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, was co-written by Ort, Rus, and PhD graduate Liam Paull, who is now an assistant professor at the University of Montreal.

For all the progress that has been made with self-driving cars, their navigation skills still pale in comparison to humans’. Consider how you yourself get around: If you’re trying to get to a specific location, you probably plug an address into your phone and then consult it occasionally along the way, like when you approach intersections or highway exits.

However, if you were to move through the world like most self-driving cars, you’d essentially be staring at your phone the whole time you’re walking. Existing systems still rely heavily on maps, only using sensors and vision algorithms to avoid dynamic objects like pedestrians and other cars.

In contrast, MapLite uses sensors for all aspects of navigation, relying on GPS data only to obtain a rough estimate of the car’s location. The system first sets both a final destination and what researchers call a “local navigation goal,” which has to be within view of the car. Its perception sensors then generate a path to get to that point, using LIDAR to estimate the location of the road’s edges. MapLite can do this without physical road markings by making the basic assumption that the road will be flatter than the surrounding terrain.
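The paper itself isn’t reproduced here, but the idea described above — pick a local navigation goal within sensor range and treat the flattest nearby surface as road — can be sketched in a few lines of Python. Everything below (function names, grid size, flatness threshold) is an illustrative assumption, not MapLite’s actual implementation.

```python
from collections import defaultdict
import numpy as np

def estimate_road_mask(points, cell=0.5, flat_thresh=0.05):
    """Label LIDAR points as 'road' when their local neighborhood is flat.

    points: (N, 3) array of x, y, z coordinates in the vehicle frame (meters).
    cell: ground-plane grid cell size used for the flatness test.
    flat_thresh: max height standard deviation (m) for a cell to count as road.
    (All names and thresholds are illustrative, not from the MapLite paper.)
    """
    # Bin points into a 2-D grid on the ground plane.
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = defaultdict(list)
    for idx, key in enumerate(map(tuple, ij)):
        cells[key].append(idx)

    road_mask = np.zeros(len(points), dtype=bool)
    for idx_list in cells.values():
        # A cell is road-like if the heights inside it barely vary,
        # i.e. the surface is flatter than the surrounding terrain.
        if points[idx_list, 2].std() < flat_thresh:
            road_mask[idx_list] = True
    return road_mask

def pick_local_goal(road_points, gps_heading_rad, horizon=30.0):
    """Choose a local navigation goal: the farthest detected road point,
    up to `horizon` meters ahead, roughly in the direction GPS suggests."""
    direction = np.array([np.cos(gps_heading_rad), np.sin(gps_heading_rad)])
    along = road_points[:, :2] @ direction          # progress along the route
    candidates = np.where((along > 0) & (along < horizon))[0]
    if candidates.size == 0:
        return None                                 # no visible road ahead
    best = candidates[np.argmax(along[candidates])]
    return road_points[best, :2]

# Example with synthetic points in the vehicle frame:
pts = np.random.rand(1000, 3) * [40.0, 10.0, 0.3]
goal = pick_local_goal(pts[estimate_road_mask(pts)], gps_heading_rad=0.0)
```

The car would then plan a short path to that local goal and repeat the process as new sensor data arrives, so it never needs a prior 3-D map of the road ahead.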

“Our minimalist approach to mapping enables autonomous driving on country roads using local appearance and semantic features such as the presence of a parking spot or a side road,” says Rus.

The team developed a system of models that are “parameterized,” which means that they describe multiple situations that are somewhat similar. For example, one model might be broad enough to determine what to do at intersections, or what to do on a specific type of road.
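As a rough illustration of what a “parameterized” model might look like in code (the class and field names below are hypothetical, not taken from the paper), a single intersection model can describe a T-junction and a crossroads simply by changing its parameters:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IntersectionModel:
    """One parameterized model that covers a family of similar situations.

    Varying these parameters lets the same model describe many different
    intersections, instead of needing a hand-built map entry for each one.
    (Field names are illustrative assumptions, not from the MapLite paper.)
    """
    num_merging_roads: int        # e.g. 3 for a T-junction, 4 for a crossroads
    approach_width_m: float       # estimated width of the road being driven
    branch_headings: List[float]  # heading (radians) of each merging road

# The same class instantiated for two different intersections:
t_junction = IntersectionModel(3, 6.0, [0.0, 1.57, -1.57])
crossroads = IntersectionModel(4, 7.5, [0.0, 1.57, 3.14, -1.57])

# Because the model is explicit, it can be queried and audited,
# e.g. "how many roads are merging at this intersection?"
print(t_junction.num_merging_roads)   # -> 3
```

Keeping the model explicit in this way is what lets the researchers interrogate the system after the fact, as Ort describes below.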

MapLite differs from other map-less driving approaches, which rely more heavily on machine learning: those systems train on data from one set of roads and are then tested on others.

“At the end of the day we want to be able to ask the car questions like ‘how many roads are merging at this intersection?’” says Ort. “By using modeling techniques, if the system doesn’t work or is involved in an accident, we can better understand why.”

MapLite still has some limitations. For example, it isn’t yet reliable enough for mountain roads, since it doesn’t account for dramatic changes in elevation. As a next step, the team hopes to expand the variety of roads that the vehicle can handle. Ultimately, they aspire for their system to reach levels of performance and reliability comparable to those of mapped systems, but with a much wider range.

“I imagine that the self-driving cars of the future will always make some use of 3-D maps in urban areas,” says Ort. “But when called upon to take a trip off the beaten path, these vehicles will need to be as good as humans at driving on unfamiliar roads they have never seen before. We hope our work is a step in that direction.”

This project was supported, in part, by the National Science Foundation and the Toyota Research Institute.




MIT News




