Law-abiding robots? What should the legal status of robots be?

by Anders Sandberg
18 July 2016




News media are reporting that the EU is considering turning robots into electronic persons with rights, and industry spokespeople are apparently concerned that Brussels’ overzealousness could hinder innovation.

The report itself is far more sedate. It is a draft report in the European Parliament, not a bill: a mixed bag of recommendations to the Commission on Civil Law Rules on Robotics. It will be years before anything is decided.

Nevertheless, it is interesting reading when considering how society should adapt to increasingly capable autonomous machines: what should the legal and moral status of robots be? How do we distribute responsibility?

A remarkable opening

The report begins its general principles with an eyebrow-raising paragraph:

whereas, until such time, if ever, that robots become or are made self-aware, Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code;

It is remarkable because it first alludes to self-aware robots – presumably moral agents, a pretty extreme and currently distant possibility – and then brings up Isaac Asimov’s famous but fictional laws of robotics, making a claim that is simultaneously insightful and wrong-headed.

In doing so, the report makes the rhetorical blunder of invoking self-aware robots and Asimov, inviting strange journalistic readings of the text.

Asimov made his laws post hoc to tell interesting stories, writing a number of delightful “robot mysteries” where the mystery typically was how a robot could act in a certain unwanted way despite following the laws. Although fictional, these stories show that rigid rules are no guarantee of predictable behaviour, especially in agents that can learn, reason and interact with the real world. Actual AI safety research begins where these stories end, trying to solve the hard problem of safe and beneficial AI given our limited ethical, programming, and predictive abilities.

The EU report notes that laws, as they are commonly stated, cannot be programmed (an important insight that far too many naïve critics of AI safety research miss). It also recognises that until we have solved all the hard problems, it is up to humans to make sure robots do not harm people, do what they are told (or what they ought to do), and avoid breaking expensive things, like themselves.

This is insightful: the moral and legal onus is entirely on humans right now. Yet it also misses that we can embody parts of our moral and legal codes in the software of our machinery, or at least make that software compatible with our codes.
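To see why the report’s point about machine code bites, here is a purely illustrative sketch (mine, not the report’s) of a naive attempt to encode Asimov’s First Law. The structure is easy to write down; the problem is that the predicates doing all the work (recognising a human, predicting harm) are exactly the parts nobody knows how to implement reliably.

# Illustrative sketch only: a naive encoding of "a robot may not injure a
# human being". The law is only as operational as its undefined predicates.

def violates_first_law(action, world_state) -> bool:
    """Naive check of Asimov's First Law for a single candidate action."""
    for entity in world_state.entities:
        if is_human(entity) and causes_harm(action, entity, world_state):
            return True
    return False

def is_human(entity) -> bool:
    # Perception problem: classifying humans from sensor data is statistical,
    # never certain, and fails in edge cases (mannequins, occlusion, photos).
    raise NotImplementedError

def causes_harm(action, entity, world_state) -> bool:
    # Prediction and value problem: "harm" requires forecasting consequences
    # and weighing physical, economic and psychological injury, which are
    # exactly the open problems AI safety research works on.
    raise NotImplementedError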

Liability

The real core of the report is robot civil law liability. If my cleaning robot breaks your window to take cleaning supplies, do you sue me, the robot maker, or the robot?

Law currently distinguishes two kinds of entities: subjects and objects. Legal subjects are recognised by law as having rights, duties and other capacities, and hence get legal personhood, even if they happen to be things like corporations rather than physical persons. In a sense, subjects are entities that understand (legal) issues. Legal objects may have economic value, but they have no legal rights or duties, cannot enter into legal or commercial transactions, and do not understand anything. Examples include physical things, pets and other animals, human acts, reputations, and intellectual property. You can sue subjects, but not objects.

As long as robots are objects, the question is just which human or corporation to sue. This can be problematic enough, since the code may come from many sources (the “problem of many hands”) and emergent behaviours can show up that nobody could foresee.

Automation blurs the line between legal subjects and objects. A robotic car can be programmed to try to follow traffic rules: in some sense it understands the law, although it is pretty unreflective. Decentralised autonomous organisations (DAOs) are organisations run by rules encoded in computer programs; it is very unclear what legal status a business DAO would have. AI software can learn and adapt, responding to incentives and experiences in ways that make its behaviour less determined by its original code and more by the data it has acquired.

The report proposes an EU legal instrument to handle robot/AI liability and related questions. In particular, the mere fact that the damage was caused by a robot should not, in principle, restrict the kinds of damages or compensation the aggrieved party could claim. More importantly:

the future legislative instrument should provide for the application of strict liability as a rule, thus requiring only proof that damage has occurred and the establishment of a causal link between the harmful behaviour of the robot and the damage suffered by the injured party;

Strict liability means that there is no mens rea requirement: the fact that nobody intended the harm does not mean that there is not going to be a civil or criminal case. But who is responsible?

in principle, once the ultimately responsible parties have been identified, their liability would be proportionate to the actual level of instructions given to the robot and of its autonomy, so that the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibility should be, and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be; notes, in particular, that skills resulting from ‘education’ given to a robot should be not confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot’s harmful behaviour is actually due;

That is, the responsibility for my robot’s misbehaviour might be split according to how much of the (causally relevant) behaviour it learned from me and how much from the robot maker. This makes sense, although figuring out which instructions contributed to which action might be a superhuman task, even if the robot can helpfully print out all the relevant thought processes: we humans tend to get confused by logic trees just three or four layers deep, and there is no reason for the robot to be anywhere near that simple. Note also the problem of autonomous robots learning misbehaviours entirely on their own – if my robot discovered by itself that it could break windows to reach things (for example after a few accidents while cleaning my house), we humans may be off the hook. These problems matter, but are missing from the report.
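As a toy illustration (my own, not the report’s) of the apportionment idea, one could imagine splitting liability in proportion to how much of the causally relevant behaviour came from the manufacturer’s instructions, the owner’s “education”, and the robot’s own self-learning. The numbers below are invented, and the hard part, attributing behaviour to these sources in the first place, is exactly what the sketch assumes away.

# Toy illustration: apportion liability by behavioural contribution.

def liability_shares(instructed: float, taught_by_owner: float,
                     self_learned: float) -> dict:
    """Split responsibility in proportion to assumed behavioural contributions.

    The arguments are assumed fractions of the causally relevant behaviour.
    The self-learned share is the awkward residual that the report does not
    really address: behaviour nobody instructed or taught.
    """
    total = instructed + taught_by_owner + self_learned
    return {
        "manufacturer": instructed / total,
        "owner": taught_by_owner / total,
        "unattributed (self-learned)": self_learned / total,
    }

# Example: a window-breaking habit mostly discovered by the robot itself.
print(liability_shares(instructed=0.2, taught_by_owner=0.1, self_learned=0.7))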

The report tries to fix these complexities by suggesting an obligatory insurance scheme and compensation fund, so that strict liability does not become a major hindrance to societal and business use of robotics. This might work, but it foists much of the responsibility off onto the insurance industry. That industry is already concerned about the huge correlated risks autonomous cars could imply, and struggles with cyber-insurance: most likely the solution would have to be coarse-grained. Yet insurance may be better than a compensation fund at setting incentives that encourage AI safety work. There is clearly a need for deep, interdisciplinary legal, computer science and insurance research here – and the results will be needed in the not too distant future.

There is also a problematic asymmetry in the report between embodied robots and software AI. Most of the text deals with robots. Yet it is the software that makes them autonomous and problematic, and the software can reside in ill-defined places like the cloud, remote jurisdictions or the blockchain. To get robot liability to work we may need stricter software liability, or need to ensure that the software is tied more strongly to the embodied robot. The first is a tough legal and governance challenge, the second a legal-technical trade-off.

Third existence, domesticity, and law-abidingness

Are there other ways of handling these problems? I had an excellent conversation with Dr. Yueh-Hsuan Weng from Peking University (and co-founder of ROBOLAW.ASIA) on this topic.

Weng argues that we should perhaps give robots a special legal status, a “third existence” with partial rights and obligations. This could help create a legal space for robots with high degrees of autonomy, which becomes particularly relevant as robots enter our social space. While one might mandate a “Turing red flag” to signal that they are autonomous devices outside direct human control that might otherwise be mistaken for humans, we likely need to develop richer forms of co-existence.

To some extent, we already treat animals as something between legal subjects and objects: they are objects, but it is recognised that they can do things their owners cannot foresee. Dangerous animals are put down, but sometimes owner and/or animal are sentenced to further training. How much blame we assign depends on whether the owner behaved appropriately for the kind of animal and could foresee its behaviour. In common law, the actions of domestic animals are treated differently from those of wild animals: the risks from domestic animals are more foreseeable, while there is strict liability for wild animals, since the owner should know that they cannot be fully predicted. It is not implausible to handle robots in the same way, but we have far less intuition about and experience with the behaviour of AI than with that of our fellow animals. More transparent “domesticated” AIs might then be handled differently from “wild” AIs.

Weng also pointed out that even if robots lack legal subjecthood (and, more importantly, moral patienthood and agency), they can be designed to be more or less amenable to law. When a group of artists programmed a shopping bot to buy things randomly on a darknet market, it acquired some illegal goods – clearly as intended, in order to provoke debate. Conversely, robots can be equipped with “black boxes” that document their actions and help attribute fault when accidents occur. They can also be designed so that their behaviour is ethically or legally verifiable: action planning is done in such a way that it is possible to prove that certain principles are obeyed, essentially via an internal monitor or “conscience” that can be interrogated in a human-accessible way.
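A minimal sketch of what such law-abiding-by-design machinery could look like (the structure and names are my assumptions, not Weng’s or the report’s): every candidate action is checked against explicit, machine-checkable constraints before execution, and every decision is written to an append-only “black box” log that can later be used to attribute fault.

# Illustrative sketch: constraint-checked action planning with a black-box log.

import json, time

class BlackBox:
    """Append-only action log for post-accident fault attribution."""
    def __init__(self, path="blackbox.log"):
        self.path = path

    def record(self, entry: dict) -> None:
        entry["timestamp"] = time.time()
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

def plan(candidate_actions, constraints, blackbox: BlackBox):
    """Return the first action that satisfies every explicit constraint."""
    for action in candidate_actions:
        verdicts = {c.__name__: c(action) for c in constraints}
        permitted = all(verdicts.values())
        blackbox.record({"action": action, "verdicts": verdicts,
                         "executed": permitted})
        if permitted:
            return action
    return None  # refuse to act rather than act unverifiably

# Hypothetical constraints for the window-cleaning example used earlier.
def does_not_damage_property(action): return action != "break_window"
def stays_on_owner_premises(action): return action != "enter_neighbours_house"

choice = plan(["break_window", "ring_doorbell"],
              [does_not_damage_property, stays_on_owner_premises],
              BlackBox())
print(choice)  # -> "ring_doorbell"

The design choice that matters is the last line of plan(): the robot refuses to act rather than act unverifiably, which is precisely the cost in capability discussed next.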

This kind of law-abiding design does not happen naturally: robots have to be designed for the law, and that means forgoing approaches that cannot be verified (e.g. deep neural networks). There is a clear cost. That might be entirely acceptable for most uses of robots in society: we accept limitations in the capabilities of machines to keep them safe, make crimes harder, or even embody societal values. Nevertheless, there will be a tension between law-abiding and capable design: we should recognise that sometimes the incentives may tilt so strongly that we need legal and ethical mechanisms to keep robot designers doing the right thing – or principles for when we accept unverifiable but useful autonomy.

Codifying ethics

While we may have a hard time coding ethics into our machines (or laws), we also have a hard time legislating ethics for humans. Laws and regulations can mandate certain behaviours, including training and testing for ethical competence, but they cannot easily legislate people into being moral. Morality is generally regarded as standing above the law: it can be morally right to disobey an unjust law, but it is not possible to legislate an action into being morally right. More importantly, it is hard to specify what moral code to follow. Not just in the sense that we have important disagreements about what the right thing to do is (with freedom of thought and belief encoded as basic rights), but in the sense that most moralities have profound complexities and nuances that do not lend themselves to the strict specification laws require.

The report suggests that a guiding ethical framework for the design, production and use of robots is needed as a complement to the legal recommendations. In an appendix it proposes a code of conduct for robotics, a code for ethics review processes, a licence for designers and even a licence for users. It is a fascinating mixture of standard EU mom-and-apple-pie phrases about dignity, freedom and justice, actual ethical principles, and concrete ideas such as mandating opt-out mechanisms and traceability. There is much here that can and should be developed more fully – at present there seems to be little over-arching theory.

In the end, there is tremendous potential for responsible innovation and law-making when it comes to robotics. There is also potential to instead codify rigid rules into directives and mechanisms that merely give the appearance of responsibility, just as Asimov-style laws may look to the naïve like useful ethical rules. This draft is not the final code that will run on the EU legislation robot; it is not even a design specification yet. It is a first napkin sketch.

Published with permission from Oxford Martin School

For more information, please visit TLC Forum for Robots & Society




Anders Sandberg is a research fellow at Future of Humanity Institute, University of Oxford.




