
Letting policymakers handle the trolley problem

by Brad Templeton
10 June 2016




When I give talks on robocars, the most common question I get is the one known as the “trolley problem”: what will the car do if it has to choose between killing one person or another, and related dilemmas. I have written frequently about why this is a low-priority question in reality, one much more interesting to philosophy classes. It is a super-rare event, and there are more important everyday ethical questions that self-driving car developers have to solve long before they tackle this one.

In spite of this, the question persists in the public mind. We are fascinated by, and afraid of, the idea of machines making life-or-death decisions. The tiny number of humans who face such dilemmas don’t hold a detailed ethical debate in their minds; they go with their gut or with very quick, simple reasoning. We are troubled because machines make no distinction between instant reactions and carefully pondered ones. The one time in a billion miles(*) that a machine faces such a question, it would presumably make a calculated decision based on its programming. That’s foreign to our nature, and indeed, not a task desired by programmers or vendors of robocars.

There have been calls to come up with “ethical calculus” algorithms and put them in the cars. As a programmer, I could imagine coding such an algorithm, but I certainly would not want to, nor would I want to be held accountable for what it does, because, by definition, it’s going to do something bad. The programmer’s job is to make driving safer. Left to their own devices, I think most builders of robocars would try to punt on the decision if they can. The simplest way to punt is to program the car to follow the law, which generally means staying in its right-of-way. Yes, that means running over people who ran into the road, as opposed to veering onto the sidewalk to run over Hitler, or veering into oncoming traffic to hit an unmanned car — possibly killing the veering car’s passenger. Staying in your lane is what the law says to do; it also strongly forbids moving onto the sidewalk or into another lane to deliberately hit something.

We might not like the law, but we do have the ability to change it.

Thus, I propose the following: driving regulators should create a special panel which can rule on driving ethics questions. If a robocar developer encounters a question that requires some sort of ethical calculation whose answer is unclear, they can submit that question to the panel. The panel can deliberate and provide an answer. If the developer conforms to the ruling, they are absolved of responsibility; they did the right thing.

Source: xkcd.com


The panel would, of course, include people with technical skill, to make sure rulings are reasonable and can be implemented. Petitioners could also appeal rulings that would impede development, though in any petition they would probably suggest candidate answers and describe how difficult each would be to implement.

The panel would not simply be presented with questions like, “How do you choose between hitting two adults or one child?” It might make more sense to propose formulae for evaluating many different situations. In the end, it would need to be reduced to something you can do with code.

Very important to the rulings would be an understanding of how certain requirements could slow down robocar development or raise costs. For example, a ruling that a car must make a decision based on the number of pedestrians it might hit demands that it be able to count pedestrians. Today’s robocars may often be unsure whether a blob is 2 or 3 pedestrians, and nobody cares, because the result is generally the same — you don’t want to hit any number of pedestrians. Likewise, a requirement to know the age of people on the road demands a great deal more of the car’s perception system than anybody would normally develop, particularly if you imagine asking it to tell a dwarf adult from a child.

Writers in this space have proposed questions like “How do you choose between one motorcyclist wearing a helmet and another not wearing one?” (You are less likely to kill the helmet wearer, but the bareheaded rider is the one who accepted greater risk and broke the helmet law.) Hidden in this question is the idea that the car would need to be able to tell whether somebody is wearing a helmet — a much bigger challenge for a computer than for a human. If a ruling demanded the car be able to figure this out, it would make developing the car harder just to solve an extremely rare problem.

This invokes the “meta-trolley problem”: a life-saving technology like robocars is made more difficult, and thus delayed, in order to solve a rare philosophical problem. That delay means more people die, because the robocar technology which could have saved them was not yet available. The panels would be expected to consider this. As such, problems sent to them would not be expressed in absolutes. You might ask, “If the system assigns an 80% probability that rider 1 is wearing a helmet, do I do X or Y?” only after you have determined that that level of confidence is technically doable.
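
To make that concrete, here is a minimal sketch, in Python, of how a panel ruling framed around a confidence level might be reduced to code. Every name, class, and threshold below is invented for illustration; it is not anyone’s actual software, only an indication of the kind of small, testable rule a developer could be handed.

```python
from dataclasses import dataclass

# Hypothetical structure: what the perception system reports about a rider,
# including its confidence in the one attribute this invented ruling cares about.
@dataclass
class DetectedRider:
    track_id: int
    p_helmet: float  # estimated probability the rider is wearing a helmet

# Confidence level the hypothetical panel ruling requires before the
# attribute may influence the choice of maneuver at all.
PANEL_CONFIDENCE_THRESHOLD = 0.8

def choose_maneuver(rider: DetectedRider) -> str:
    """Return the maneuver the invented ruling prescribes.

    If perception is not confident either way, the ruling in this sketch
    falls back to the ordinary behaviour: stay in lane and brake hard.
    """
    if rider.p_helmet >= PANEL_CONFIDENCE_THRESHOLD:
        return "maneuver_X"
    if rider.p_helmet <= 1.0 - PANEL_CONFIDENCE_THRESHOLD:
        return "maneuver_Y"
    return "brake_in_lane"

if __name__ == "__main__":
    print(choose_maneuver(DetectedRider(track_id=7, p_helmet=0.93)))  # maneuver_X
    print(choose_maneuver(DetectedRider(track_id=8, p_helmet=0.55)))  # brake_in_lane
```

The point is not these particular thresholds, but that the ruling arrives already expressed in terms the perception system can actually deliver.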

This is important because a lot of the “trolley problem” questions involve the car departing its right-of-way to save the life of somebody in its path. 99% of the effort going into developing robocars is devoted to making them drive safely where they are supposed to be. There will always be less effort put into making sure the car can do a good job of veering off the road and onto the sidewalk. It will not be as well trained and tested at identifying obstacles and hazards on the sidewalk. Its maps will not be designed for driving there. Any move out of normal driving situations increases the risk and the difficulty of the driving task. People are “general purpose” thinking machines; we can adapt to what we have never done before. Robots are not.

I believe vendors would embrace this idea because they don’t want to be making these decisions themselves, and they don’t want to be held accountable for them if they turn out to be wrong (or even if they turn out to be right). Society is quite uncomfortable with machines deliberately hurting anybody, even if it’s to save others. Even the panel members would not be thrilled with the job, but they would not bear personal responsibility.

Neural Networks

It must be noted that all these ideas (and all other conventional ideas on ethical calculus for robots) are turned upside-down if cars are driven by neural networks trained by machine learning. Some developers hope to run the whole driving process this way. Some may wish to do only the “judgment on where to go” part that way. Almost everybody will use them in perception and classification. You don’t program neural networks, and you don’t know why they do what they do — you only know that when you test them, they perform well, and they are also often better at dealing with unforeseen situations than traditional approaches.

As such, you can’t easily program a rule (including a ruling from the panel) into such a car. You can show it examples of humans following the rule as you train it, but that’s about it. Because many of the scenarios above are dangerous or even fatal, you clearly can’t show it real-world examples easily, though there are some tricks you can do with robotic inflatable dummies and radio-controlled cars. You may need to train it in simulation (which is useful, but runs the risk of it latching onto artifacts of the simulation not seen in the real world).

Neural network systems are currently the AI technology most capable of human-like behaviour. As such, it has been suggested they could be a good choice for ethical decisions, though it is certain they would surprise everybody in some situations, and not always in a good way. They will sometimes do things that are quite inhuman.

It has been theorized that they have a perverse advantage in the legal system because they are not understood. If you can’t point to a specific reason the car did something (such as running over a group of 2 people instead of a single person), you can’t easily show in court that the developers were negligent. The vehicle “went with its gut,” just like a human being.

Everyday ethical situations and the vehicle code

The panels would actually be far more useful not for solving the very rare questions, but the common ones. Real driving today in most countries involves constantly breaking or bending the rules. Most people speed. People constantly cut other people off. It is often impossible to get through traffic without technically cutting people off, which is to say moving into another driver’s path and expecting them to brake. Google caused its first accident by moving into the path of a bus it thought would brake and let it into the lane. In some of the more chaotic places of the world, a driver adhering strictly to the law would never get out of their driveway.

The panels could be asked everyday questions like these; a sketch of how such rulings might be packaged for the cars follows the list.

  • “If 80% of cars are going 10mph over the speed limit, can we do that?” I think yes would be a good answer here.
  • “If a stalled car is blocking the lane, can we go slightly over the double-yellow line to get around that car if the oncoming lane is sufficiently clear?” Again, we need the cars to know that the answer is also yes.
  • “If nobody will let me into a lane, when can I aggressively push my way in even though the car I move in front of would hit me if it maintains its speed?”
  • “If I decide, one time in 100, to keep going and gently bump somebody who cuts me off capriciously in order to stop drivers from treating me like I’m not there, is that OK?”
  • “If I need to make a quick 3 point turn to turn around, how much delay can I cause for oncoming traffic?”
  • “If the intersection allows a left turn only on a green arrow, but my sensors give me 99.99999% confidence the turn is safe, can I make it anyway?” (This actually makes a lot of sense.)
  • “Is it OK for me to park in front of a hydrant, knowing I will leave the spot at the first sound, sight or electronic message about fire crews?”
  • “Can I make a rolling stop at a stop sign if my systems can do it with 99.999999% safety at that sign?”
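
Here is a minimal sketch, with invented names and numbers throughout, of how panel answers to everyday questions like these might be delivered to a car: as a small per-jurisdiction table of rulings that the driving software consults, rather than behaviour each vendor hard-codes on its own.

```python
# Hypothetical table of panel rulings for one jurisdiction. Every key and
# value is made up for illustration; a real ruling set would be far richer.
PANEL_RULINGS = {
    "max_speed_over_limit_mph": 10,        # e.g. allowed to match prevailing traffic
    "may_cross_double_yellow_for_stall": True,
    "min_oncoming_gap_s": 8.0,             # clear gap required before crossing the line
    "rolling_stop_allowed": False,         # per-sign exceptions would need their own ruling
}

def allowed_speed(posted_limit_mph: float, prevailing_mph: float) -> float:
    """Speed the car may drive, per the hypothetical ruling on speeding."""
    cap = posted_limit_mph + PANEL_RULINGS["max_speed_over_limit_mph"]
    return min(max(posted_limit_mph, prevailing_mph), cap)

def may_pass_stalled_car(oncoming_gap_s: float) -> bool:
    """Whether crossing the double-yellow around a stalled car is permitted."""
    return (PANEL_RULINGS["may_cross_double_yellow_for_stall"]
            and oncoming_gap_s >= PANEL_RULINGS["min_oncoming_gap_s"])

if __name__ == "__main__":
    print(allowed_speed(posted_limit_mph=55, prevailing_mph=63))  # 63 (within the +10 cap)
    print(may_pass_stalled_car(oncoming_gap_s=12.0))              # True
```

A table like this also makes the jurisdiction question below easier: crossing a state line could simply mean loading a different table.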

There are plenty more such situations. Cars need answers to these today, because they will encounter these problems every day. The existing vehicle code was written with a strong presumption that human drivers are unreliable. We see many places where things like left turns are prohibited even though they would almost always be safe, because humans can’t be trusted to have highly reliable judgement. In some cases, the code has to assume human drivers will be greedy and obstruct traffic if they are not forbidden from certain activities, whereas robocars can be trusted to promise better behaviour. In fact, in many ways, the entire vehicle code is wrong for robocars and should be completely replaced, but since that won’t happen for a long time, the panels could rule on reasonable exceptions which promote robocars and improve traffic.

How often for the “big” questions?

Above, I put a (*) next to the statement that the “who do I kill?” question comes up once in a billion miles. I don’t actually know how often it comes up, but I know it’s very rare, probably much rarer than that. For example, human drivers kill only about 12 people total in a billion miles of driving, and most fatalities are single-vehicle accidents (the car ran off the road, often because the driver fell asleep or was drunk). If I had to guess, I would suspect real “who do I kill?” questions come up more like once every 100 billion miles, which is to say once in 200,000 lifetimes of human driving — a typical person will drive around 500,000 miles in their life. But even at 100 billion miles it would still mean it happens 30 times a year in the USA, and frankly you don’t see this on the news or in fatality reports very often.
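
For those who want to check the arithmetic, here is the back-of-envelope calculation. The ~3 trillion vehicle-miles-per-year figure for the USA is an approximation added here; the other numbers come from the paragraph above.

```python
# Back-of-envelope check of the numbers above. The ~3 trillion annual
# vehicle-miles figure for the USA is an outside approximation, not a
# number taken from this article.
FATALITIES_PER_BILLION_MILES = 12        # roughly the current US rate
LIFETIME_MILES = 500_000                 # typical miles driven in one lifetime
MILES_PER_DILEMMA_GUESS = 100e9          # guess: one real dilemma per 100 billion miles
US_ANNUAL_MILES = 3e12                   # assumed ~3 trillion vehicle-miles per year

lifetimes_per_dilemma = MILES_PER_DILEMMA_GUESS / LIFETIME_MILES
dilemmas_per_year_usa = US_ANNUAL_MILES / MILES_PER_DILEMMA_GUESS
fatalities_per_year_usa = US_ANNUAL_MILES / 1e9 * FATALITIES_PER_BILLION_MILES

print(lifetimes_per_dilemma)     # 200000.0 lifetimes of driving per dilemma
print(dilemmas_per_year_usa)     # 30.0 such events per year nationwide
print(fatalities_per_year_usa)   # 36000.0, a sanity check on the assumed mileage
```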

There are arguments that put the number at a more frequent level when you consider an unmanned car’s ability to do something a human-driven car can’t do — namely, drive off the road and crash without hurting anybody. In that case, I don’t think the programmers need a lot of guidance — the path with zero injuries is generally an easy one, though driving off the road is never risk-free. It’s also true that robocars would find themselves able to make these decisions in situations where we would never imagine a human doing so, or even being able to do so.

Jurisdictions

These panels would probably exist at many levels. Rules of the road are a state matter in the USA, but safety standards for car hardware and software are a federal matter. Certainly it’s easier for developers to have only national rulings to worry about, but it’s also not tremendously hard to load different software modules when moving from one state to another. As is the case in many other areas of law, states and countries have ways to get together to normalize laws for practical reasons like this. It’s not nearly as much of a problem as it would be if the rulings imposed hardware requirements on the cars. (Though it’s not out of the question that a panel might indirectly demand a superior sensor to help a car make its determinations.)





Brad Templeton, Robocars.com is an EFF board member, Singularity U faculty, a self-driving car consultant, and entrepreneur.