Sensors for Autonomous Driving with Christoph Stiller

20 February 2015



In this episode, Audrow Nash interviews Christoph Stiller from the Karlsruhe Institute of Technology. Stiller speaks about the sensors required for various levels of autonomous driving, as well as the ethics of autonomous cars and his experience in the Defense Advanced Research Projects Agency (DARPA) Grand Challenge.

Christoph Stiller

Christoph Stiller studied Electrical Engineering in Aachen, Germany and Trondheim, Norway, and received the Diploma degree and the Dr.-Ing. degree (Ph.D.) from Aachen University of Technology in 1988 and 1994, respectively. He spent a post-doctoral year with INRS-Telecommunications in Montreal, Canada as a Member of the Scientific Staff in 1994/1995. In 1995 he joined the Corporate Research and Advanced Development division of Robert Bosch GmbH, Germany. In 2001 he became chaired professor and director of the Institute for Measurement and Control Systems at Karlsruhe Institute of Technology, Germany.

Dr. Stiller serves as immediate Past President of the IEEE Intelligent Transportation Systems Society and as Associate Editor for the IEEE Transactions on Intelligent Transportation Systems (2004-ongoing), the IEEE Transactions on Image Processing (1999-2003), and the IEEE Intelligent Transportation Systems Magazine (2012-ongoing). He served as Editor-in-Chief of the IEEE Intelligent Transportation Systems Magazine (2009-2011). He was Program Chair of the IEEE Intelligent Vehicles Symposium 2004 in Italy and General Chair of the IEEE Intelligent Vehicles Symposium 2011 in Germany. His automated driving team AnnieWAY was a finalist in the DARPA Urban Challenge 2007 and the winner of the Grand Cooperative Driving Challenge in 2011.



Audrow Nash:  Hi, welcome to Robots Podcast. Can you introduce yourself?

Christoph Stiller:  My name’s Christoph Stiller. I’m a professor in Mechanical Engineering at Karlsruhe Institute of Technology in Germany.

Audrow Nash:  Can you tell me the goal and motivation behind your research?

Christoph Stiller:  We’re working on autonomous cars, and in particular we focus on computer vision for vehicles (cars that see their environment), and on trajectory planning for vehicles (the decision on where to drive, given the world that the car sees).

Audrow Nash:  What kind of sensors do you use on your vehicles?

Christoph Stiller:  Our main sensors are video cameras, but we also do experiments with LIDAR and radar sensors.

Audrow Nash:  You have a car named Bertha, and it does not have roof-mounted sensors. Can you tell me a bit about that?

Christoph Stiller:  Bertha was developed together with Mercedes-Benz, and the goal was to use only close-to-market sensors. In particular, we only used cameras looking around the vehicle and radar sensors that came from series production. No expensive GPS, no expensive LIDAR sensors, and in particular, nothing on the roof.

Audrow Nash:  What is the motivation behind not using roof sensors?

Christoph Stiller:  Our other vehicle was driving with roof sensors, and those sensors were quite expensive; we had LIDAR sensors that cost us about $70,000 each, and a high-precision GPS with an inertial measurement unit that cost us about the same amount of money on top. That’s far from series production.

Audrow Nash:  Are there other disadvantages to using roof sensors? For example they get dirty, was that a concern?

Christoph Stiller:  Yes, of course. If a roof sensor is not cleaned it will get dirty, just as the windshield of your car gets so dirty after a few hundred kilometers of driving that you can’t see out anymore. The same happens with a camera lens, or with LIDAR sensors (which are also optical systems): over a long drive the lens gets contaminated with dirt or insects, and the sensor goes blind. In the best case you notice that you can’t drive autonomously anymore.

Audrow Nash:  If your sensors are not on the roof, where are they in the car?

Christoph Stiller:  [Human drivers look out the window. In our system] they’re mounted very close to the roof, behind the windshield. When the driver gets irritated by a dirty windshield and engages the wipers, our cameras are cleaned as well, because they sit in the wiper area.

Audrow Nash:  What other sensors do you have in addition to the vision system that watches what’s happening ahead of the car?

Christoph Stiller:  We also have radar sensors. Many cars already have one radar sensor, but we equipped our vehicle with three. They came from series production, and we needed their larger viewing angle so that we could look into the traffic area and see whether [there was] an oncoming vehicle that we would need to consider.

We also use a standard GPS unit, but it has a precision of about 20 meters, similar to what you have in your smartphone. It’s not an expensive unit; it’s a low-precision unit that gives us only a very coarse position. We also use pre-recorded maps that tell us where to drive, which lanes we’re supposed to take, what possibilities we have, who has precedence at which intersection, where the traffic lights are … all this information is stored in the maps.

Audrow Nash:  You mentioned in your abstract that these sensors, the ones you’re using now on Bertha, are close to market. What do you mean by this?

Christoph Stiller:  [The radar sensors are on the market, but not in high numbers.] The video cameras are on the market already, but ours of course require particular algorithms to analyze the data, and that is currently done on a personal computer in the trunk; we would need to bring that processing down to embedded hardware. That’s something that could be done in a reasonable amount of time.

Audrow Nash:  What are some of the major barriers to bringing these sensors to market?

Christoph Stiller:  The sensors could be brought to market, but the main barrier to bringing the whole system to market is that right now we need a safety driver who can intervene in very rare situations that were not predicted. For example, if an emergency vehicle is coming toward you, or in some other situation that is very difficult to detect, you need human intelligence to keep the vehicle in a safe state.

Audrow Nash:  Can you tell me a bit about the real-time decision-making that your systems do?

Christoph Stiller:  The vehicle first analyzes all the sensor signals. First of all, it determines where it is on the map with high precision: it uses GPS for a coarse position, accurate to about 20 meters, and then it does visual localization, looking for landmarks in the environment such as a window of a house or a tree. Then, like humans do, the system is able to localize itself, except that our visual localization is very precise: we get an accuracy of about five centimeters, so we know with high accuracy where we are. Then we look at the map to see what lane we are in and where the lane boundaries are, and at any obstacle in our driving corridor (such as other cars, pedestrians, bicyclists, parked cars, or other static obstacles), and then we plan a path that, if possible, does not collide with any static or moving obstacle.

We first try to stay in our lane, but if that’s not possible we have a decision unit that allows us to move to another lane. This might be an adjacent lane moving in the same direction, if there’s a gap; if that’s not possible either, we could even decide to go into the [opposing] lane if there is no vehicle coming from that direction. If none of this is possible, of course, we’ll stay and wait until the situation resolves.
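The fallback order Stiller describes can be sketched as a simple priority rule. This is an illustrative sketch only; the function name and inputs are hypothetical, not code from the Bertha system:

```python
# Illustrative sketch of the lane-selection fallback described above.
# All names and inputs are hypothetical, not from the Bertha system.

def choose_maneuver(own_lane_clear: bool, adjacent_gap: bool,
                    opposing_lane_clear: bool) -> str:
    """Pick a maneuver in priority order: own lane, adjacent lane,
    opposing lane, otherwise stop and wait for the situation to resolve."""
    if own_lane_clear:
        return "stay_in_lane"
    if adjacent_gap:
        return "change_to_adjacent_lane"
    if opposing_lane_clear:
        return "use_opposing_lane"
    return "stop_and_wait"
```

In the real system each option would of course also be checked against planned collision-free trajectories; the sketch only captures the order of preference.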

Audrow Nash:  In the real world, the maps that you correlate all of your sensor data with are sometimes unreliable. Can you talk about those challenges?

Christoph Stiller:  Right now, if our map is wrong, for example because a new construction zone has appeared in the area, our supervising driver has to react. We have a safety driver onboard who doesn’t do anything except in an unexpected situation, like when the map is wrong or when an emergency vehicle appears. Of course [in the marketplace] you can’t deliver a safety driver with every car, so the long-term goal is to keep the map up to date, and that will be done with community mapping. A whole crowd of cars equipped with sensors will communicate slight changes in the map to an infrastructure server, and the server will accumulate that information. If many vehicles [report a change to the map], let’s say a lane has moved slightly due to construction, then the map will signal that to the vehicles that follow.

Each vehicle then has an accurate, up-to-date map of the environment.

Audrow Nash:  So it’s essentially crowdsourcing the development of the map for all the cars to pull from?

Christoph Stiller:  Yes, it’s a crowdsourcing method.
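A minimal sketch of how such a crowdsourced map server might accumulate change reports before publishing an update. The class name, API, and confirmation threshold are assumptions for illustration, not part of any system described in the interview:

```python
from collections import Counter

class MapChangeServer:
    """Toy model of community mapping: publish a map update for a road
    segment only once enough independent vehicles report the same change.
    The threshold of 3 is an arbitrary illustrative choice."""

    def __init__(self, threshold: int = 3):
        self.reports = Counter()   # segment_id -> number of reports
        self.threshold = threshold

    def report_change(self, segment_id: str) -> bool:
        """Record one vehicle's report; return True once the change is
        confirmed and should be pushed to following vehicles."""
        self.reports[segment_id] += 1
        return self.reports[segment_id] >= self.threshold
```

A real server would also need to age out stale reports and distinguish conflicting observations, but the accumulate-then-confirm pattern is the core of the crowdsourcing idea.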

Audrow Nash:  What was your experience with the grand cooperative driving challenges, the series of them?

Christoph Stiller:  There’s a long series of challenges, started in the US by DARPA in 2004 and 2005. Those were the Grand Challenges, where the goal was to drive through the desert autonomously. Then in 2007 there was the Urban Challenge, which took place in an area with low houses, almost no trees, very few traffic lights, very few traffic signs, and no bicycles. So it was not a fully normal traffic situation, but more a mock-up of one.

In 2011 there was the first Grand Cooperative Driving Challenge, which took place in Holland. The goal there was to drive in a tight platoon of vehicles, keeping a very short distance – 6 meters – [between the cars]. If you are driving 100 km/h, that is about 28 meters per second, which means there are only about 0.2 seconds [between you and] your predecessor. Of course it’s impossible for a human driver to react in such a short time, so these vehicles could only be driven autonomously.
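The quoted time gap follows directly from the unit conversion:

```python
# Check the platoon arithmetic: a 6 m gap at 100 km/h.
speed_kmh = 100
speed_ms = speed_kmh / 3.6        # convert km/h to m/s: ~27.8 m/s
gap_m = 6.0
time_gap_s = gap_m / speed_ms     # ~0.22 s between vehicles
print(f"{speed_ms:.1f} m/s, {time_gap_s:.2f} s")
```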

The first vehicle would [brake hard], and all the other vehicles then of course had to react in real time to avoid a collision. The vehicles needed to communicate – that’s why it was called a cooperative challenge. They had to communicate their position and velocity, and if they wanted to, they could also communicate their acceleration, which of course is highly beneficial to the following vehicles.

Audrow Nash:  What were some of the major challenges you encountered in the Grand Cooperative Driving Challenge?

Christoph Stiller:  The major challenges were the short reaction time and the fact that the teams were heterogeneous. It was not a systems approach, with one control law developed for all the vehicles, which would have guaranteed stability for the whole platoon. We didn’t know the control laws of the other [vehicles]. The challenge was to build a very robust system that could drive safely without knowing what strategy the other [vehicles] had.

Audrow Nash:  What did you learn from this?

Christoph Stiller:  We learned that it was possible in such scenarios to damp very severe oscillations, allowing the first car to [brake almost to a full stop] while all the vehicles in the platoon (from small cars to trucks) avoided a collision through timely braking and communication. We learned that communication could help [prevent] what we call shockwaves in traffic. [This can happen] when one car brakes in a platoon and each following car has to brake harder than its predecessor: eventually either one car can’t brake as hard as it should, so there’s a collision, or the platoon comes to a full stop and you have traffic congestion.
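The shockwave amplification can be illustrated with a toy kinematic model: each follower starts braking one reaction delay after its predecessor and must stop just behind the predecessor's stopping point. This is not the GCDC controllers; the vehicle parameters, the model, and the function name are all illustrative assumptions. The point it demonstrates is that a long (human-like) delay makes the required deceleration grow down the platoon until it becomes infeasible, while a short (communication-enabled) delay keeps it bounded.

```python
def required_decels(v, gap, delay, a0, n, a_max=9.0):
    """Toy shockwave model: deceleration each platoon car needs if it
    starts braking `delay` seconds after its predecessor and must stop
    within the available distance. Values above a_max (roughly the
    friction limit) mean that car cannot avoid a collision.
    v: speed [m/s], gap: spacing [m], a0: lead car's deceleration [m/s^2]."""
    decels = [a0]
    for _ in range(n - 1):
        # Distance available for braking: the gap, plus the predecessor's
        # stopping distance, minus the distance lost during the delay.
        avail = gap + v**2 / (2 * decels[-1]) - v * delay
        if avail <= 0:
            decels.append(float("inf"))  # collision is unavoidable
            break
        decels.append(v**2 / (2 * avail))
    return decels

v = 100 / 3.6  # 100 km/h in m/s
print([round(a, 2) for a in required_decels(v, 6.0, 1.0, 5.0, 5)])  # human-like delay
print([round(a, 2) for a in required_decels(v, 6.0, 0.1, 5.0, 5)])  # communicated braking
```

With a one-second reaction delay the required deceleration escalates past the friction limit within a few cars; with a 0.1-second delay (plausible for direct vehicle-to-vehicle communication) it stays near the lead car's value, which is the stabilizing effect the platoon relied on.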

Audrow Nash:  I’m interested in your perspective on the ethics of driving autonomous vehicles. What would you think about the delegation of responsibility between the driver and the manufacturers of the autonomous car sensors?

Christoph Stiller:  That will shift. Right now the driver is responsible for most accidents; most accidents in traffic are due to human fault. A very small number of traffic accidents are the responsibility of the car manufacturer, due to construction faults. Of course, the more autonomous maneuvers are introduced into the market, the greater the likelihood that one of those maneuvers is ill-designed and could cause a crash. In that case the car manufacturer would of course be liable. If in the long term the vehicle is fully autonomous, so that [a person] could read a newspaper, then of course the car manufacturer is responsible for all traffic accidents that happen while the car is driving autonomously.

Audrow Nash:  So, while the user is driving, they’re responsible if the car gets in an accident, but when the autonomous car takes over, it’s the manufacturer’s liability?

Christoph Stiller:  Yes, that’s my understanding of liability: you’re only liable for what you really do. If the car manufacturer automates a car, then to my understanding the driver cannot be responsible. There may be a situation with highly automated cars where the driver is allowed to read a newspaper, but before a situation arises that the car cannot handle, like a construction site, the vehicle tells the driver, “You have to take over in 10 seconds.” Then, for the portion where the vehicle drives autonomously, the car manufacturer is responsible, and for the [portion] that the driver drives manually, the driver is responsible. And in between, when the vehicle says you have to take over, it is the driver’s responsibility to be ready: he can’t sleep in the rear seat or be drunk. He has to be ready to take over in such a situation.

Audrow Nash:  What kind of timeline do you believe we will have for autonomous cars?

Christoph Stiller:  This is a very difficult question. We do have some autonomous functions in the market already. Most [high-end] cars already do automated emergency braking before an imminent collision, and for pedestrians they even do automated evasive steering. Those functions will merge. Today there are only very rare situations in which the vehicle is certain it can handle the situation better than the driver; the number of these situations will grow. The vehicle will get into more and more situations and understand them better than the driver, and if it understands that a situation is critical, it will take over and avoid a collision, or at least reduce the collision speed and therefore the severity of the collision. Eventually all those functions will reach the point where the car can drive autonomously everywhere, at any time, and the driver doesn’t even need to be in the car, but that will take a long time.

Audrow Nash:  I know there are many major challenges before autonomous cars are integrated into society. What do you believe is one of the big ones?

Christoph Stiller:  In my experience, the most difficult thing is to understand the situation correctly. An example would be a pedestrian walking to the curb and stopping for a moment. A human driver can predict, with very high precision, whether the pedestrian will walk onto the road or will stop and wait for the vehicle to pass. You look at whether the pedestrian is looking toward you, you consider their age, and you consider whether they are wearing earphones and not paying attention. Taken together, all those small factors tell you whether you should brake, blow your horn, or just drive through. Doing the same would be very difficult for a current state-of-the-art autonomous vehicle.

Audrow Nash:  Wrapping up, what do you think is the future of robotics?

Christoph Stiller:  As far as cars are concerned, I’m certain that autonomy will come, and in parallel, we will see cooperation, with vehicles communicating with each other and using that to harmonize their trajectories. Cars will follow each other at a closer distance and have maneuvers that are communicated with each other, so they will know exactly what the others are doing. That would harmonize traffic flow, and of course improve safety. In the very long term, I expect that traffic would then be harmonized so much that the flow of traffic would look like a fish swarm movement rather than the chaotic movement that we have on the roads today.


*Transcripts are edited for clarity with great care. However, we cannot assume any responsibility for their accuracy.




Audrow Nash is a Software Engineer at Open Robotics and the host of the Sense Think Act Podcast

