How do self-driving cars work?

by Shima Rayej
03 June 2014



Nissan’s autonomous car prototype, 2013. Photo: Nissan Global

Tesla CEO Elon Musk recently announced that the car manufacturer will produce self-driving cars within three years. Nissan has announced that it will have a self-driving car available by 2020, and Google has said it will do so by 2018. Over the past decade, the conversation around self-driving cars has evolved from futuristic police-chase sequences in Minority Report to figuring out which auto manufacturer will be first to launch a commercially viable self-driving vehicle. Daimler AG, maker of Mercedes-Benz, recently announced that an S-Class sedan had completed a 62-mile journey on the streets of Germany without a driver. Audi’s self-driving car successfully navigated the 156 turns of the 12-mile Pikes Peak Hill Climb course in Colorado. Car manufacturers see self-driving cars as a way to eliminate road deaths caused by human error, reduce traffic, and free up time spent commuting – but how do these vehicles work?

Self-driving cars in a nutshell

A self-driving car is capable of sensing its environment and navigating without human input. To accomplish this, each vehicle is usually outfitted with a GPS unit, an inertial navigation system, and a range of sensors including laser rangefinders, radar, and video cameras. The vehicle uses positional information from the GPS and inertial navigation system to localize itself, and uses sensor data both to refine its position estimate and to build a three-dimensional image of its environment.

Data from each sensor is filtered to remove noise and is often fused with other data sources to augment the original image of the environment. How the vehicle subsequently uses this data to make navigation decisions is determined by its control system.
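To make the filtering-and-fusion step concrete, here is a minimal Python sketch, with all sensor values and noise variances invented for illustration: a median filter removes an isolated spike from a lidar scan, and inverse-variance weighting blends two range estimates so that the more certain sensor counts more.

```python
import numpy as np

def despike(scan, window=3):
    """Median-filter a 1-D range scan to suppress isolated noise
    returns (rain, dust, dropouts) before the data is used further."""
    padded = np.pad(scan, window // 2, mode="edge")
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(scan))])

def fuse(estimates, variances):
    """Combine independent noisy estimates of the same quantity by
    inverse-variance weighting: more certain sensors count more."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

# A lidar scan of a wall with one spurious 0.3 m return:
scan = np.array([10.1, 10.0, 0.3, 10.2, 10.1])
clean = despike(scan)
print(clean)  # the spike is replaced by its neighbors' median

# Radar and (cleaned) lidar disagree slightly about an obstacle's range;
# the fused estimate leans toward the lower-variance lidar reading.
print(f"{fuse([10.6, clean[2]], [4.0, 0.25]):.2f} m")
```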

The majority of self-driving vehicle control systems implement a deliberative architecture, meaning that they are capable of making intelligent decisions by 1) maintaining an internal map of their world, and 2) using that map to find an optimal path to their destination that avoids obstacles (e.g. road structures, pedestrians and other vehicles) from a set of possible paths. Once the vehicle determines the best path to take, the decision is decomposed into commands, which are fed to the vehicle’s actuators. These actuators control the vehicle’s steering, braking and throttle.

This process of localization, mapping, obstacle avoidance and path planning is repeated multiple times each second on powerful on-board processors until the vehicle reaches its destination.
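A deliberative control system of this kind can be pictured as a fixed-rate sense–plan–act loop. The skeleton below is purely illustrative; every callback is a hypothetical stand-in for a real subsystem, not any manufacturer’s API.

```python
import time

def control_loop(sense, localize, update_map, plan, actuate,
                 at_destination, rate_hz=10):
    """Deliberative sense->plan->act cycle, repeated several times a
    second until the goal is reached. All callbacks are supplied by
    the caller; this is an illustrative skeleton, not a real stack."""
    period = 1.0 / rate_hz
    while not at_destination():
        scan = sense()                  # raw lidar/radar/camera data
        pose = localize(scan)           # fuse GPS/INS with sensor data
        world = update_map(scan, pose)  # internal map incl. obstacles
        commands = plan(world, pose)    # throttle, brake, steering
        actuate(commands)
        time.sleep(period)

# Toy usage: pretend we reach the destination after 3 cycles.
steps = iter(range(3))
control_loop(
    sense=lambda: None,
    localize=lambda scan: (0.0, 0.0),
    update_map=lambda scan, pose: {},
    plan=lambda world, pose: (0.2, 0.0, 0.0),
    actuate=lambda cmd: print("commands:", cmd),
    at_destination=lambda: next(steps, None) is None,
    rate_hz=100,
)
```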

On-board computers of the Stanford/Audi autonomous TTS

The next section focuses on the technical components of each process: mapping and localization, obstacle avoidance and path planning. Although car manufacturers use different sensor suites and algorithms depending on their unique cost and operational constraints, the processes across vehicles are similar. The descriptions below most closely mirror their implementation in state-of-the-art self-driving military vehicles.

 

Breaking Down the Technical Components

Mapping and Localization

Prior to making any navigation decisions, the vehicle must first build a map of its environment and precisely localize itself within that map. The most frequently used sensors for map building are laser rangefinders and cameras. A laser rangefinder scans the environment with swaths of laser beams and calculates the distance to nearby objects by measuring the time it takes each beam to travel to the object and back. While video from cameras is ideal for extracting scene color, an advantage of laser rangefinders is that depth information is readily available to the vehicle for building a three-dimensional map. Because laser beams diverge as they travel through space, it is difficult to obtain accurate distance readings at ranges greater than 100 m with most state-of-the-art laser rangefinders, which limits the amount of reliable data that can be captured in the map. The vehicle filters and discretizes the data collected from each sensor and often aggregates the information to create a comprehensive map, which can then be used for path planning.
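The time-of-flight calculation itself is simple: the beam’s round trip is measured and halved. A small sketch (the 667 ns example value is chosen to land near the 100 m range limit mentioned above):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    """Distance from a lidar time-of-flight measurement: the beam
    travels to the object and back, so halve the round trip."""
    return C * round_trip_seconds / 2.0

# A return after ~667 nanoseconds corresponds to roughly 100 m,
# near the practical range limit mentioned above.
print(f"{tof_distance(667e-9):.1f} m")
```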

An example of a Google car’s internal map at an intersection, tweeted by Idealab founder Bill Gross. Gross claims that Google’s Self-Driving Car gathers almost 1 GB of data per second.

For the vehicle to know where it is in relation to other objects in the map, it must use its GPS, inertial navigation unit, and sensors to precisely localize itself. GPS estimates can be off by many meters due to signal delays caused by changes in the atmosphere and reflections off buildings and surrounding terrain, and inertial navigation units accumulate position errors over time. Localization algorithms therefore often incorporate map or sensor data previously collected from the same location to reduce uncertainty. As the vehicle moves, new positional information and sensor data are used to update its internal map.
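One common way to blend a drifting dead-reckoned estimate with a noisy GPS fix is a Kalman-filter correction step. The scalar sketch below, with invented numbers, shows the idea; it is a simplification, not the algorithm any particular vehicle uses.

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman-filter correction: blend a dead-reckoned
    position prediction (variance p_pred) with a noisy GPS fix z
    (variance r). The gain k weights whichever source is more certain."""
    k = p_pred / (p_pred + r)
    x = x_pred + k * (z - x_pred)
    p = (1.0 - k) * p_pred
    return x, p

# INS predicts the car is 120.0 m along the road (variance 9.0);
# GPS reports 123.5 m but with variance 25.0 (multi-meter error, as above).
# The corrected estimate stays closer to the more certain INS prediction.
x, p = kalman_update(120.0, 9.0, 123.5, 25.0)
print(f"position: {x:.2f} m, variance: {p:.2f}")
```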

Obstacle Avoidance

A vehicle’s internal map includes the current and predicted locations of all static (e.g. buildings, traffic lights, stop signs) and moving (e.g. other vehicles and pedestrians) obstacles in its vicinity. Obstacles are categorized according to how well they match a library of pre-determined shape and motion descriptors. The vehicle uses a probabilistic model to track the predicted future path of each moving object based on its shape and prior trajectory. For example, if a two-wheeled object is traveling at 40 mph rather than 10 mph, it is most likely a motorcycle and not a bicycle, and will be categorized as such by the vehicle. This process allows the vehicle to make more intelligent decisions when approaching crosswalks or busy intersections. The previous, current and predicted future locations of all obstacles in the vehicle’s vicinity are incorporated into its internal map, which the vehicle then uses to plan its path.
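As a toy version of the motorcycle-versus-bicycle example, the sketch below scores a two-wheeled track probabilistically by speed; the logistic shape and its parameters are invented for illustration.

```python
import math

def p_motorcycle(speed_mph, midpoint=22.0, steepness=0.4):
    """Toy probabilistic classifier for a two-wheeled track: the faster
    it moves, the more likely 'motorcycle' vs 'bicycle'. The logistic
    parameters here are invented for illustration."""
    return 1.0 / (1.0 + math.exp(-steepness * (speed_mph - midpoint)))

for speed in (10, 40):
    p = p_motorcycle(speed)
    label = "motorcycle" if p > 0.5 else "bicycle"
    print(f"{speed} mph: P(motorcycle) = {p:.2f} -> {label}")
```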

Path Planning

The goal of path planning is to use the information captured in the vehicle’s map to safely direct the vehicle to its destination while avoiding obstacles and following the rules of the road. Although manufacturers’ planning algorithms differ based on their navigation objectives and the sensors used, the following describes a general path-planning algorithm that has been used on military ground vehicles.

This algorithm determines a rough long-range plan for the vehicle to follow while continuously refining a short-range plan (e.g. change lanes, drive forward 10 m, turn right). It starts from a set of short-range paths that the vehicle would be dynamically capable of completing given its speed, direction and angular position, and removes all those that would either cross an obstacle or come too close to the predicted path of a moving one. For example, a vehicle traveling at 50 mph could not safely complete a right turn 5 meters ahead, so that path would be eliminated from the feasible set. The remaining paths are evaluated based on safety, speed, and any time requirements. Once the best path has been identified, a set of throttle, brake and steering commands is passed on to the vehicle’s on-board processors and actuators. Altogether, this process takes 50 ms on average, although it can be longer or shorter depending on the amount of collected data, the available processing power, and the complexity of the path-planning algorithm.
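The 50 mph example can be made concrete with a feasibility check based on lateral acceleration (a = v²/r): candidate paths whose turn radius is tighter than the vehicle can manage at its current speed are pruned. The 0.3 g limit and the candidate radii below are assumptions for illustration, and the sharp right turn is loosely modeled as a 5 m radius arc.

```python
G = 9.81                # gravitational acceleration, m/s^2
MAX_LATERAL_G = 0.3     # assumed comfort/adhesion limit for illustration

def min_turn_radius(speed_mps):
    """Smallest radius the car can negotiate at this speed without
    exceeding the assumed lateral-acceleration limit (a = v^2 / r)."""
    return speed_mps ** 2 / (MAX_LATERAL_G * G)

def feasible(path_radius_m, speed_mps):
    return path_radius_m >= min_turn_radius(speed_mps)

speed = 50 * 0.44704  # 50 mph in m/s
# Candidate short-range paths: (description, turn radius in meters)
candidates = [("hard right turn", 5.0),
              ("gentle lane change", 250.0),
              ("straight ahead", float("inf"))]
kept = [name for name, r in candidates if feasible(r, speed)]
print(f"min radius at 50 mph: {min_turn_radius(speed):.0f} m")
print("feasible paths:", kept)  # the hard right turn is pruned
```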

The process of localization, mapping, obstacle detection, and path planning is repeated until the vehicle reaches its destination.

 

The Road Ahead

Car manufacturers have made significant advances in the past decade toward making self-driving cars a reality; however, a number of technological barriers remain before self-driving vehicles are safe enough for road use. GPS can be unreliable, computer vision systems have limitations in understanding road scenes, and variable weather conditions can adversely affect the ability of on-board processors to adequately identify or track moving objects. Self-driving vehicles have also yet to demonstrate the same capability as human drivers in understanding and navigating unstructured environments such as construction zones and accident areas.

These barriers, though, are not insurmountable. The amount of road and traffic data available to these vehicles is increasing, newer range sensors are capturing more data, and the algorithms for interpreting road scenes are evolving. The transition from human-operated vehicles to fully self-driving cars will be gradual, with vehicles at first autonomously performing only a subset of driving tasks, such as parking and driving in stop-and-go traffic. As the technology improves, more driving tasks can be reliably outsourced to the vehicle.

The technology for self-driving cars is not quite ready, but I am looking forward to the self-driving cars of Minority Report becoming a reality.
