
Robots Podcast #170: Mobility Transformation Facility, with Edwin Olson

University of Michigan         


interview by Audrow Nash
November 28, 2014


NEW: Full transcript below.

In this episode, Audrow Nash speaks with Edwin Olson, an Associate Professor at the University of Michigan, about the University’s 32-acre testing environment for autonomous cars and the future of driverless vehicles.

The testing environment, called the “Mobility Transformation Facility,” has been designed to simulate the circumstances an autonomous car would experience on real-world streets. The facility features “one of everything,” says Edwin Olson, including a four-lane highway, road signs, stoplights, intersections, roundabouts, a railroad crossing, building facades, and even mechanical cyclists and pedestrians.

Edwin Olson
Edwin Olson is an Associate Professor of Computer Science and Engineering at the University of Michigan. He is the director of the APRIL robotics lab, which studies Autonomy, Perception, Robotics, Interfaces, and Learning. His active research projects include applications to explosive ordnance disposal, search and rescue, multi-robot communication, railway safety, and automobile autonomy and safety.

In 2010, he led the winning team in the MAGIC 2010 competition by developing a collective of 14 robots that semi-autonomously explored and mapped a large-scale urban environment. The win earned a $750,000 prize from the U.S. Department of Defense. He was named one of Popular Science’s “Brilliant Ten” in September 2012, and in 2013 he was awarded a DARPA Young Faculty Award.

He received a PhD from the Massachusetts Institute of Technology in 2008 for his work in robust robot mapping. During his time as a PhD student, he was a core member of MIT’s DARPA Urban Challenge team, which finished the race in fourth place. His work on autonomous cars continues in cooperation with Ford Motor Company on the Next Generation Vehicle project.

 



Transcript


Audrow: Hi, welcome to Robots Podcast.

Edwin: Hi, nice to be here.

Audrow: Can you introduce yourself?

Edwin: I’m Edwin Olson. I’m an Associate Professor at the University of Michigan in the Computer Science Department. My main research areas are artificial intelligence, computer science and robotics.

Audrow: What is your experience with driverless cars?

Edwin: Well back in 2007, I was on the MIT team working up towards the DARPA Urban Challenge. I was one of a number of students who did a lot of the software development for that project. Ultimately we took our vehicle to the DARPA Urban Challenge and came in fourth. I’ve gotten to play with a lot of the different aspects of an autonomous system, ranging from the physical sensors, to modifying the vehicle for drive-by-wire control, to path planning and obstacle detection.

Audrow: How are you involved with the fake city that’s being built at the University of Michigan?

Edwin: We call it the Mobility Transformation Facility and it’s a 32-acre facility that’s going to be coming online later this fall. This test center is under the umbrella of what we call the Mobility Transformation Center, which is under the umbrella of the University of Michigan. I’m on the Faculty Steering Committee for that, and we’re also likely to be one of the main users of that test facility as we work on our autonomous car.

Audrow: Can you tell us more about this city?

Edwin: It’s really a mock urban environment. If you’re imagining an oval where people are driving around in circles – it’s not that at all. It’s a network of roads with stop signs and rotaries and traffic lights. One of the things that’s interesting about it is that the test facility itself is really a robot. We anticipate that we’ll be writing a huge amount of code for the test facility, and that it will be an active participant in testing our autonomous car. One of the things you can do, for example, is to allow the test facility to know where the car is and to trigger a traffic light to turn red at just the wrong moment. The same thing goes for the artificial pedestrians and the other instrumentation on the facility. These aren’t passive things – these are things that we’re going to be writing code for, to try to make the testing much more interesting and challenging.
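
To make that “wrong moment” behavior concrete, here is a minimal Python sketch of what such a facility-side trigger might look like. The class, coordinates, and thresholds are hypothetical illustrations, not the facility’s actual software:

    import math

    class TrafficLight:
        """Hypothetical stand-in for a networked signal at the facility."""
        def __init__(self):
            self.state = "green"

    INTERSECTION = (120.0, 45.0)  # signal position in facility coordinates (m)
    TRIGGER_BAND = (15.0, 25.0)   # distance band where a red light is maximally awkward

    def maybe_trigger_red(light, vehicle_xy, speed_mps):
        # Flip the light when the car is close enough that braking is
        # uncomfortable but still possible -- "just the wrong moment".
        dist = math.hypot(vehicle_xy[0] - INTERSECTION[0],
                          vehicle_xy[1] - INTERSECTION[1])
        if TRIGGER_BAND[0] <= dist <= TRIGGER_BAND[1] and speed_mps > 5.0:
            light.state = "red"

    light = TrafficLight()
    maybe_trigger_red(light, vehicle_xy=(100.0, 45.0), speed_mps=10.0)
    print(light.state)  # "red": the car was 20 m out, inside the trigger band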

Audrow: Can you talk a bit more about the test cases? You mentioned that [you’ll be triggering events] at exactly the wrong time … are you trying to probe the limits of the software in the autonomous cars?

Edwin: [Because we are constrained by budget], we’ve tried to make the test facility have ‘one of everything’. [For example], there are dirt roads modeled after country roads, and there are urban roads that are two lanes. There are rotaries and four-way intersections, and there’s even a section with four or five lanes, because that’s a testing environment we need to model. There’s a section of freeway, so we can handle merging and higher speeds. We’ve got a variety of different types of crosswalk paint, and we’ve got a range of the dashed lines that separate lanes – some that are brand new, some that are going to be deliberately degraded.

We really want to test everything that we can in this facility, and this is to overcome a huge recurring problem in developing autonomous cars, which is that testing on real roadways is dangerous. You’re putting other people at risk.

One of the challenges that you have in developing an autonomous car is getting enough test coverage to believe that the vehicle is really working. You can go on real roadways, but real roads have two problems. The first is that you’re testing around other people who haven’t had the opportunity to provide informed consent – you’re unwittingly experimenting on them, and there are ethical issues that arise from that. The second issue is that accumulating a representative set of experiences – seeing the full variety of road paint, crossings, and traffic light styles – would require thousands and thousands of road miles. By concentrating all of that in one place, we can really accelerate the development of the technology.

Audrow: With this new test environment, are you going to focus on conquering environmental challenges such as snow?

Edwin: I think one of the great things about being in Michigan is that we do have lousy weather, and that makes things a lot more interesting. For example, one of the first things that we found when we fired up our car in the last year or so was that the tailpipe condensation was triggering our obstacle detector, and so the car thought that it was being eaten by a gigantic monster coming out of its tailpipe.

We’ve had the opportunity, by seeing these weird things that happen in bad weather, to try to build a more robust and more capable system. [For example,] if you’re driving in freshly fallen snow, what do you do? You try to follow the tracks of the guy who went before you. If you can’t see the lane markings anyway, you just follow their tire ruts – [but] this is not what current [autonomous] vehicles would do. They would get hopelessly lost if they couldn’t see the road paint, and if they could figure out where they are using GPS or whatever else, they would try desperately to stay in the legal lanes – but that’s not what people would do. Now, our vehicle can’t handle that situation yet either, but I think being in Michigan helps us think about some of these difficult cases that are going to be really important.

Audrow: Can you talk a bit about your vehicle?

Edwin: Our vehicle is based on a 2014 Ford Fusion Hybrid. It’s a beautiful vehicle. If you sit in it, you would hardly know that it is autonomous, except that there is an emergency stop button embedded near the cup holder. On the outside it looks a little bit different – it kind of looks like a reindeer. It’s got two little antlers sticking up from the middle of the roof, and these antlers are 32-beam LiDARs. We have a total of four of these laser range finders, which we call LiDARs, for a total of 128 range-finder beams. These sensors are really our primary way of understanding what’s around the car. We use them for obstacle detection, we use them for recognizing road paint, and they collect about 2.5 million 3D points per second for us to process.
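
As a rough sanity check on those numbers, a back-of-the-envelope sketch – assuming the load is spread evenly across beams and a 100 ms planning cycle, both of which are assumptions for illustration rather than published specs:

    beams = 128                  # 4 LiDARs x 32 beams each
    points_per_second = 2.5e6    # total throughput quoted above
    per_beam = points_per_second / beams    # ~19,500 returns per beam per second
    per_cycle = points_per_second * 0.1     # ~250,000 points arriving per 100 ms cycle
    print(f"{per_beam:,.0f} returns/beam/s, {per_cycle:,.0f} points per planning cycle")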

The trunk is mostly filled with computers and inertial measurement units and things like that. One of the things about our vehicle is that it doubles as both a survey vehicle and the autonomous car. You might have a more expensive vehicle that would go and map the roadway initially, and then a cheaper vehicle could go through and actually localize and do the driving later on. We only have the one vehicle (just for sanity). We try to remind people that this is not what we’re envisioning as a consumer vehicle.

Audrow: Being a survey vehicle, do you think that it has the onboard sensing capabilities to, for example, follow tracks in snow?

Edwin: That’s a tough question. In principle, yes: humans can look out through the Mark I Eyeball and clearly figure out where to drive. The question is: can the robot do this reliably [enough] to be useful? That’s a question we’re not going to be able to answer until we sit down and give it a go.

Trying to follow that slushy rut carved out by the car in front of you is going to be pretty hard because of course there’s not [usually] just one rut. You’ve got a history of other ruts left by all the cars that came before you. There’s still a judgment call of which rut you should follow, and God help you if one of those people slipped across the center line – you don’t want to follow that rut! There’s a lot of judgment and decision making that has to go into this on top of the basic sensing.
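
A hedged sketch of that judgment call: represent each candidate rut as a polyline in lane coordinates, throw out any rut that strays across the center line, and prefer the one closest to where the lane’s center ought to be. The representation and numbers here are assumptions for illustration, not how any production system works:

    def crosses_center_line(rut, center_y=0.0):
        # A rut is a list of (x, y) points; y > 0 is the oncoming lane's side.
        return any(y > center_y for _, y in rut)

    def pick_rut(ruts, lane_center_y=-1.75):
        legal = [r for r in ruts if not crosses_center_line(r)]
        if not legal:
            return None  # no safe rut: fall back to some other behavior
        # Score by mean lateral distance from the nominal lane center.
        def score(rut):
            return sum(abs(y - lane_center_y) for _, y in rut) / len(rut)
        return min(legal, key=score)

    ruts = [
        [(0, -1.8), (5, -1.7), (10, -1.6)],  # stays in our lane
        [(0, -0.5), (5, 0.4), (10, 1.2)],    # drifts across the center line
    ]
    print(pick_rut(ruts))  # picks the first rut; the second is rejected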

Audrow: Going back to the mock urban city, what has been the reaction of other universities?

Edwin: A lot of people are really excited by the facility. If you look at some of the concept artwork, you can immediately tell that this is not your run-of-the-mill test center. We’ve got a lot of interest from other people who want to come and use our facility.

In terms of other universities, one of the unfortunate consequences of Google being so prominent in the field is that a lot of the academic work has dried up. What used to be a rather large field of universities working on autonomous cars has dwindled down to just a handful. I think with the Mobility Transformation Facility, though, the University of Michigan is going to be one of the leaders as we move forward – not just because of the University itself, but because of all of the industrial activity that is also coming to Michigan (or started off in Michigan, as the case may be). Together, that really forms a nexus for next-generation vehicle development.

Audrow: What kind of major problems do you seek to address in this test facility?

Edwin: There are a few things that we want to ultimately achieve. The first one is the pragmatic aspect of developing an autonomous car: where are we going to test this thing safely, and how are we going to develop some confidence that when we say it’s safe, that it really is safe? Having a mock city where we don’t have civilians potentially in danger is a real part of that strategy.

In the longer view, there are a lot of technologies that will be coming out of autonomous car research, things like connected vehicles. There may even be regulatory aspects [to consider] … suppose car maker X proposes an autonomous driving system. Should there be some testing procedure in place? Maybe a federally mandated testing procedure to validate it? We’re not saying that the Mobility Transformation Facility will be that test site, but we do have a lot of faculty members who are interested in trying to experiment with what these protocols might be, and the test facility is going to be a great laboratory for developing those testing procedures.

Audrow: I want to get your perspectives on a few issues relating to autonomous cars. The first one is cyber security … people are worried about their car being hacked into and controlled or tracked.

Edwin: They should be concerned.

We have some computer security researchers here in the Computer Science Department, and they ran an electronic voting experiment [that showed how easily electronic voting machines could be hacked]. An electronic voting machine is putatively designed to be hack proof, because our very democracy relies on these things being reliable and hack proof, but they’re easily compromised. An autonomous car needs to be similarly hack proof. It’s not acceptable if someone can send a packet to your car and that car decides to leap off the roadway. There’s a real opportunity for mayhem if the security is not handled correctly. Unfortunately the system that we’re talking about here – an autonomous car – is a lot more complicated than a voting machine.

That autonomous car is collecting data from a huge number of data sources … sensors both locally and potentially remotely off the car. It’s combining these things with user input devices from a human driver, like a steering wheel, and it’s trying to make decisions and send these things, possibly over radio, back out to the actuators. From a security perspective, the surface area of this target is enormous. There are hundreds of places where you could potentially attack this system, and securing it is a real point of concern.

That’s one of the reasons why our own research focuses on individual vehicle competence. We think that a really good way to handle the security issue is to make sure that the vehicle is safe on its own, and that it doesn’t require messages from other vehicles in order to know where they are and whether a dangerous situation is coming up. If the vehicle is competent internally, then it doesn’t have to expose nearly so much to the outside world – and that’s something we think is really important.

Take the issue of dedicated short-range communication (DSRC) radios, where there’s been a lot of activity. [If] the cars are all going to be talking to each other, then there’s a fundamental question of whether these radios are going to be participating in safety functions. Will my car send your car a message saying, “Hey, quick, slam on the brakes…”? Or will it be involved in more traffic-optimization types of communication, like, “Hey, traffic light, I’m going to be near you in 30 seconds… It’d just be peachy if the light was green…”? The latter could still provide a lot of benefit, but doesn’t expose nearly the safety and security risk of the former.

Audrow: If an autonomous car causes damage or casualties, who is liable for that damage? Is this a big issue that autonomous cars are facing?

Edwin: I don’t think anybody really knows how the liability question is going to resolve. Some of the companies that are partners in the Mobility Transformation Center are insurance companies, and they’re joining exactly to try to understand what the future might be, but I don’t think anyone really knows right now.

Audrow: I’ve also heard worries about autonomous cars taking up too much of the radio frequency spectrum for their communication, and that being an issue.

Edwin: The DSRC radios sit at about 5.9 GHz – that’s about a 75 MHz-wide channel. You could do a lot with that spectrum if you didn’t need it. On the other hand, there’s a lot of social good that could come out of these radios working properly and contributing to a safer driving environment. We’ve got 32,000 people dying on US roadways every year, and it’s a leading cause of death for a lot of different age groups. I think giving up a little bit of bandwidth to support that is a worthwhile exchange.

Audrow: In December 2013, Michigan legalized driverless cars. How does that affect your research?

Edwin: When Michigan legalized driverless cars, what they effectively did was create an explicit carve-out for manufacturers of automated technology to operate on roadways. Now, something that may be unique to Michigan is that, because it is the home of major automotive manufacturers, there has already been a mechanism for testing vehicles on roadways before they were commercialisable products. Basically, if you had a manufacturer’s plate on your car, you could get away with quite a lot.

Now there’s a legal framework that explicitly enumerates autonomous vehicle testing as one of the things you can do. The reality is it didn’t change a whole lot, but it certainly helped spread the message that you should come to Michigan to test your cars because there’s an explicit carve-out for that. An interesting sidenote is that universities in Michigan qualify for these manufacturer plates.

Audrow: Google is saying that in 2018 they want their systems available to car manufacturers, and the University of Michigan is saying that they want the vehicles in the street by 2021 … what are your thoughts on this? 

Edwin: I think that a lot of the date guessing is really just that, it’s a guess. When will the technology be ready? It’s hard to say … it depends on exactly what problem you’re aiming to solve. If you’re talking about an autonomous vehicle that can handle the full spectrum of driving tasks that a human can handle, and that means everything from urban to freeway with other human drivers and crazy pedestrians jaywalking, we’re talking a long time from now. We’re not talking 2018, or 2020… we’re talking decades. But there’s a lot of good that we can achieve in the shorter term by carving out simpler problem domains and trying to get those to work.

Google has talked a lot about their new small vehicle that lacks a steering wheel. This is something that we can potentially do. If you limit the speed of a vehicle to 25 miles per hour or so, there’s a whole lot less kinetic energy involved, and so the risk of actually killing someone drops dramatically. If the risk of bodily harm goes down, then you can really start thinking about all of the social good that can come out of an autonomous transportation system.
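
The physics behind that claim is just the kinetic energy formula, KE = ½mv²: energy grows with the square of speed, so halving the speed quarters the energy that has to go somewhere in a collision. A quick check in Python, using an illustrative 1,400 kg car:

    def kinetic_energy_kj(mass_kg, speed_mph):
        mps = speed_mph * 0.44704              # mph -> m/s
        return 0.5 * mass_kg * mps ** 2 / 1000.0

    for mph in (50, 25):
        print(f"{mph} mph: {kinetic_energy_kj(1400.0, mph):.0f} kJ")
    # 50 mph: ~350 kJ; 25 mph: ~87 kJ -- a quarter of the energy at half the speed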

We’re not just talking about driver convenience – like reading the newspaper while going to work – we’re also talking about people who are unable to drive because maybe they’re old or they have failing eyesight. We want to connect those people.

In the United States it’s very hard to be someone who can’t get around in a car; we are sort of a car culture. If we can scope down some of these problems and achieve a lot of societal good at 25 miles per hour in a little car made out of foam, then we should do that. I think it’s going to be an ongoing challenge for us to figure out where the technology can make an impact earlier rather than later, because we don’t want to wait decades to start addressing the number of traffic deaths.

The other thing I’ll add is that there’s really a dichotomy in strategies for autonomous cars. There are basically two ways to go: you can go for full autonomy, where there’s no steering wheel and the human is not engaged at all, or you can go in the direction where a human is nominally sitting in the driver’s seat.

Recently there have been a few videos circulating where the person who was supposed to be sitting in the driver’s seat crawled into the back seat (by the way, don’t do that – that’s a terrible, terrible idea). This is really symptomatic of a human factors issue. If the human perceives that the vehicle is getting the job done, they’re going to cognitively disengage from the driving process. That means that when the vehicle does need the help of the human driver, [that person] isn’t going to be ready and able to render that help.

If you’re going to build a vehicle that has a human at the steering wheel, [then] the human [has] to remain cognitively engaged in the driving process. As long as we’re in the realm where the human needs to intervene, we can’t allow them to have super cruise control where they can take their hands off the wheel and their hands off the pedals, because then they’re not going to be ready or able to help when an incident actually arrives.

The alternative is that it’s kind of like going bumper bowling – you’ve got your hand on the steering wheel and you’re actually driving the car, but the car prevents you from doing dumb things, which is a different way of viewing autonomy. The car could drive, but it’s choosing not to in order to address this human factors issue of how to keep the human cognitively engaged in the driving process.
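
One way to read that bumper-bowling model is as a command filter: the driver’s input passes through untouched unless it leaves the envelope of commands the car predicts are safe. A minimal sketch, with a hypothetical precomputed safe steering range standing in for a real dynamics-and-obstacle model:

    def filter_steering(driver_cmd_rad, safe_min_rad, safe_max_rad):
        # Pass the driver's command through, clamped to the safe envelope.
        return max(safe_min_rad, min(safe_max_rad, driver_cmd_rad))

    # Suppose the car predicts that steering angles outside [-0.1, 0.3] rad
    # would leave the lane into an obstacle (hypothetical numbers):
    print(filter_steering(0.50, -0.1, 0.3))  # over-steer clamped to 0.3
    print(filter_steering(0.05, -0.1, 0.3))  # normal input passes through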

Audrow: Cars already do that to some extent. Some of the higher-end brands have cars that make sure that you stay in your lane … they’ll veer you back if you deviate, as well as assisting you in braking should cars be stopped ahead.

Edwin: I think the industry is really experimenting with how much the human should be informed of versus how much the vehicle should just do. For example, there is the forward collision mitigation [scenario] where we’ve got [another car] right in front of the vehicle. If the car thinks it’s closing in too quickly on the vehicle in front, should it beep at the driver so that they know there’s an issue, or should it apply the brakes?

There was a study done recently where they looked at this capability. If you just beep and draw the human driver’s attention to the fact that there’s a dangerous situation emerging, you get a 15% reduction in overall property damage claims. If you allow the vehicle to apply the brakes automatically, that same figure only goes up to 16%. So you get a 1% additional benefit by having the vehicle brake autonomously. I don’t know if that’s noise or signal, but it certainly seems to be the case that if you can keep humans engaged in driving, humans are great drivers.

Audrow: Would you say that it’s a race, or a collaboration, between Google and universities?

Edwin: I think that almost everyone involved in this game is keeping their cards pretty close to their chest, and that really reflects the fact that it’s an incredibly competitive industry to play in, and there are a lot of dollars at play. Really, the future of our transportation system hangs in the balance, and so what you don’t see is a lot of collaboration. Google doesn’t really talk about their technology very much … they’ll give demos but they’re certainly not publishing their source or writing a lot of academic papers. The same thing is true of the OEMs and Tier Ones that are all working on this as well. Everybody is playing it close to their chest.

Audrow: What do you expect for legalization of driverless cars? California, Nevada, Michigan, and more recently Florida and Iowa, as well as the UK, have legalized driverless cars. Do you see the rest of the states and many countries following?

Edwin: I think a lot of the legalization that we’ve seen so far is really a set of government carve-outs for autonomous car research, and I think that’s going to spread very rapidly because no state wants to be known as the state of Luddites that doesn’t want to support research and development.

As far as extending this notion of legal autonomous cars to commercial vehicles, I think the states are appropriately hesitant to start passing legislation on this. Until we have some understanding for what the liability situation is going to be like, who’s responsible for an accident … and until we have some understanding of what the impacts on licensing for the vehicle operator should be, I think it’s really appropriate for the states to say that they will deal with the legalization of commercialisable products that an end user can buy when the technology’s a little bit closer and we can ask the right questions.

Audrow: Do you have any advice for young researchers and aspiring roboticists?

Edwin: I think robotics is a pretty special discipline, in that it is inherently multidisciplinary. You need to be able to code, and you benefit a lot from being able to build your own robots: being able to write firmware, lay out circuit boards, and do the mechanical engineering. A robot is not working at peak form unless you understand how the whole system works together. My advice for someone who wants to get involved in robotics is to go out there and start building robots, play with all aspects of the system, and get a really well-rounded background. At some point you might need to decide that you’re going to focus on robot perception, or you’re going to focus on path planning, but having that background understanding of how the whole system fits together is incredibly useful.

Audrow: What advice do you wish someone had given you when you were 20 years old?

Edwin: When I was 20 years old, I would have just been an undergrad, and at that point I was studying computer science and electrical engineering and I didn’t really know what I wanted to be. In fact, before that I thought I wanted to go into aerospace. I started off doing research in software systems and then I played around with micro-architecture, and I think it just never occurred to me to go and be a roboticist. I’ve been building robots for a long time and I was running robotics competitions at MIT, but for some reason it just never connected in my brain that I could be a roboticist. I wish someone would have gone back and said, “Go and be a roboticist. That’s what you love to do and you’re good at it, so go and do it.”

I guess maybe the advice here that might be applicable to other people would be, look at your hobbies very carefully and ask yourself if there’s a career in that for you. If you can get a job that you love, then that’s a wonderful thing. I just had a blind spot to the fact that I could pursue robotics as a career. 

Audrow: Wrapping up, what do you think is the future of robotics?

Edwin: That’s a broad question. I think we always have this sort of aggressive timeline where there are going to be android butlers walking around … and all of that could eventually come to be. But I think the near-term future – and I’m talking our life spans here – is going to be a little bit more modest. I think a lot of the robotics technologies that we have are already enhancing our lives in one way or another, whether it’s a face recognition algorithm running on Google Glass, or just your car being able to tell where it is in its lane by combining GPS data with an atlas of the buildings around it.

Robotics is often decomposed into three basic components: sensing, thinking, and acting. Combining all three of those into an autonomous car might be a long way out, but the sensing is definitely there, the thinking is definitely there, and the planning is maybe not quite there. But you already have things like electronic stability control and antilock braking systems, where the vehicle is interpreting what you meant to do and is commanding the brakes and the steering wheel to do something different anyway.
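
That sense-think-act decomposition is often written as a simple loop. A toy sketch, where the sensor values and threshold are invented purely for illustration:

    def sense():
        return {"obstacle_dist_m": 12.0}   # stand-in for real sensor drivers

    def think(state):
        # Brake if something is closer than our stopping margin.
        return "brake" if state["obstacle_dist_m"] < 15.0 else "cruise"

    def act(command):
        print(f"actuating: {command}")     # stand-in for throttle/brake control

    act(think(sense()))  # one iteration of the sense-think-act loop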

I think the future is going to have robotics everywhere, but it’s not going to look like what you thought it was going to look like in the age of the Jetsons. It’s not so much that there are going to be actual robots driving around … it’s going to be mostly all those technologies embedded into the products that we’re already accustomed to.

Audrow: Thank you.

Edwin: Thank you.

All audio interviews are transcribed and edited for clarity with great care; however, we cannot assume responsibility for their accuracy.



