
American law needs a reboot when it comes to robots

by Hallie Siegel
09 March 2016




Nearly 60 years of American case law indicate that while robot technology has been developing by leaps and bounds, the courts’ concept of robots is confused and largely stuck in the past. If we are to depend on our legal systems for clarity — especially as new technologies take us into uncharted territory — the courts will need to partner closely with technology experts to develop a more nuanced understanding of robotics. Legal scholar Ryan Calo shows us the way.

A new paper by Ryan Calo (University of Washington School of Law) suggests that judges and jurists have some catching up to do when it comes to understanding what robots are, and what they can actually do.

Based on an analysis of hundreds of cases spanning roughly 60 years of American case law, the paper offers a clear and accessible account of how competing definitions of robot have blurred the line between people and machines, and have resulted in inconsistent treatment of robots in the courts. Exacerbating this is the narrow conception judges and jurors tend to have of robots as “discretionless machines” that are incapable of spontaneity — even though the success of contemporary robotic systems hinges on their very ability to respond dynamically to their environment. Calo concludes that this growing “mismatch” will make it increasingly difficult for the judicial system to effectively grapple with advanced robotics in the future.

Don’t let the paper’s 44 pages intimidate you. Calo is a brilliant writer and keeps the pace moving — this is sure to be a classic reference in the legal scholarship on robotics.

…there is no doubt that the question ‘What is a robot?’ is at the heart of the issue.

Exciting new technology risks getting mired in legal battles precisely because it doesn’t fit the tight definitions our legal systems depend on. In all, the paper presents a straightforward and compelling case for American legal professionals — and policymakers in general — to get more subtle in their thinking on robots.

It may seem like “just semantics”, but there is no doubt that the question ‘What is a robot?’ is at the heart of the issue. Indeed, in several of Calo’s case studies, the courts relied on dictionary definitions of robot to guide their decisions, even though these definitions often contradicted one another. For example, one case had jurors comparing the definition of robot in two different dictionaries …

… a 1958 Webster’s Dictionary, which defined a robot as:

Any automatic apparatus or device that performs functions ordinarily ascribed to human beings, or operates with what appears to be almost human intelligence. [emphasis mine]

… and a Funk & Wagnalls, which defined it as:

An automaton that performs all hard work; hence, one who works mechanically and heartlessly. [emphasis mine]

Note that Webster focused on what robots have in common with humans, whereas Funk & Wagnalls emphasized their difference.

The question of whether robots are extensions of humans — or mere tools for them to use — is central to many of the cases discussed in Calo’s paper. And while the state of the art in robotics has advanced considerably since 1958, dictionary definitions have not.

The reality of robots today is that they exist along a continuum, and the definition of robot is a can of worms even for robotics practitioners. Some robots, like those found in traditional assembly-line manufacturing, really are best understood as tools. Some are better understood as extensions of our personhood, such as the telepresence device that former-NSA-contractor-turned-whistleblower Edward Snowden used to beam into a TED conference. And some — like robotic surgical devices, robotic prosthetics, and the current crop of collaborative robots — fall into that murky place in between.

The question of whether robots are extensions of humans — or mere tools for them to use — is central to many of the cases discussed in Calo’s paper.

It all seems to depend on where you are standing.

Defining that murky place that exists between ourselves and our technological tools is a difficult process, but sociologists like Sherry Turkle and Clifford Nass have been delving into the broader question of our relationship with computers for over thirty years. In The Second Self (1984), Turkle explores the computer as both an external tool that exists in the world AND an extension of our selves. Nass, a media theorist at Stanford, pioneered our understanding of human-computer and human-robot interaction in The Media Equation (1996, co-authored with Byron Reeves), where he advanced the idea that people perceive and interact with computers and robots as real social actors, even when they logically know that they are not.

This process of defining our relationship to technology is not just happening at a socio-anthropological level; it is taking shape in more practical ways for specific robotics technologies as well. For instance, in 2013 the National Highway Traffic Safety Administration (NHTSA) released a policy identifying five distinct levels of vehicle autonomy, and the following year SAE International (an engineering association focused on developing standards and best practices) released its own report recommending six distinct levels. Various efforts (ANSI, ISO) toward developing international safety standards for manufacturing robots likewise account for varying levels of risk to human operators, particularly as robots and humans come to work more closely together.
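For readers who want a concrete sense of what such a taxonomy looks like, here is a minimal sketch that encodes the six SAE driving-automation levels as a Python enumeration. The level names follow the published SAE taxonomy, but the class name, the helper function, and the one-line descriptions are illustrative assumptions for the sake of the example; neither the standard nor Calo's paper defines any code.

from enum import IntEnum

class SAEDrivingAutomationLevel(IntEnum):
    """Illustrative encoding of the six SAE driving-automation levels (not an official artifact)."""
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # a single assistance feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined steering and speed control, driver still supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, human must take over when requested
    HIGH_AUTOMATION = 4         # system drives within a limited domain, no takeover expected
    FULL_AUTOMATION = 5         # system drives everywhere, under all conditions

def driver_must_supervise(level: SAEDrivingAutomationLevel) -> bool:
    """At levels 0-2 a human remains responsible for monitoring the road."""
    return level <= SAEDrivingAutomationLevel.PARTIAL_AUTOMATION

if __name__ == "__main__":
    # Print each level with a rough indication of who is responsible for supervision.
    for level in SAEDrivingAutomationLevel:
        role = "human supervises" if driver_must_supervise(level) else "system drives"
        print(level.value, level.name, role)

The point of spelling the levels out this way is simply to show that "autonomy" is not a yes-or-no property: the same vehicle can sit at different points on the scale depending on the feature in use, which is exactly the kind of nuance the courts will need to absorb.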

According to Calo’s paper, the courts, too, are grappling with this murky relationship.

Columbus-America Discovery Group, Inc. v. Abandoned Vessel, S.S. Central America

One hundred and thirty years after the S.S. Central America steamship sank in the Atlantic Ocean, the Columbus-America Discovery Group located the sunken ship and used a remotely operated underwater robot equipped with cameras and actuators to explore and document the wreck. Because other search teams were actively looking for the Central America at the time, the Columbus-America Discovery Group submitted a request to the courts to prevent other teams from entering the salvage area. Under maritime law, first salvage rights would have entitled Columbus-America to a significant proportion of the value of the salvage, and the right to exclude other groups from salvaging the wreck.

From Calo’s paper:

The usual way for custody, possession, and control to be achieved at this time was by human divers approaching the vessel and either recovering property over time or, if safe, lifting the wreck out of the water. The salvage team in Columbus-America, however, was not able (or willing) to send anyone that far down — nearly one and one half miles below the surface. It sent down its robots instead.

The court decided that, in light of the conditions, sending the robots counted for purposes of effective control and possession. They were, after all, able to generate live images of the wreck and had the further “capability to manipulate the environment” at the direction of people.

The court fashioned a new test for effective possession through “telepossession,” consisting of four elements: (1) locating the wreckage, (2) real-time imaging, (3) placement of a robot near the wreckage with the ability to manipulate objects therein, and (4) intent to exercise control.

As maritime law scholar Barlow Burke puts it: “This is as close as the court can come to creating a new legal basis for establishing possession without actually doing so.” On the basis of the new test, which has been cited by other courts since, the court granted salvage rights to the Columbus-America Discovery Group and enjoined its competitors.

This case shows the courts working through the human-robot relationship as a series of steps: note that the court decision seemed to hinge on the range of control the operators exerted over the robot. But Calo points out that there is yet more clarification needed: how should the degree of physical risk that humans put themselves in influence decisions related to possession? For example, a high-seas tele-operated mission is inherently more dangerous than a fully autonomous mission conducted from land; should salvage rights apply in the former case but not in the latter?

Most importantly, this example serves to illustrate how cases involving novel technologies set precedent by clarifying language and developing legal tests.

Calo’s paper highlights the role of robots as both objects and subjects of American law. The first six case studies (based on an analysis of over 200 cases involving robots and their analogues) deal with robots as “things” that animate, perform, and manipulate their environment as this relates to legal issues such as taxation, culpability, and ownership … as in Columbus-America Discovery Group, Inc. v. Abandoned Vessel, S.S. Central America, described above, for example.

The second set of case studies examines robots as metaphors and mental analogies that drive the decisions of the courts: for example, whether a witness who acts “robotically” during testimony can be trusted, or whether a defendant acting in subordination to another had free will or was being used as a robotic puppet.

At first glance the leap from court cases involving real robots to court cases involving metaphoric ones may not seem relevant. After all, as Calo acknowledges, “a judge may invoke robots in one way but decide robot related cases in another”.


However, Calo argues that it is reasonable to think that a judge’s mental concept of what a robot is (or is not) could come to bear on their decisions in cases involving robots. Perhaps more importantly, he points out that when it comes to determining how to treat new technologies, judges “rely specifically on metaphor and analogy when reasoning through the protection law should afford” them. Taken in this light, the metaphors and analogies of robots used by the judiciary become highly relevant.

This raises the question: when even cases involving very simple robot toys have the power to confound the courts — Calo’s paper describes a number of taxation-related cases where the courts were confused over whether the robot was a mechanical apparatus or a representation of a person (i.e., a doll), or, even more absurdly, whether the toy robot was a robot itself or just a simulation of one — how will they cope with the much more complex multi-agent dynamic systems that are coming down the pipeline?

Robots are challenging precisely because they can’t be neatly defined by the legal structures we depend upon to keep us safe.

One thing is certain: it will take visionary thinking to chart out the new territory robots are leading us into. Technologists are already developing more complex metaphors — think swarms — to improve their understanding of the technical possibilities and build better tools. The courts, too, will need to develop a new language and new metaphors for thinking about robots if they are to keep pace.

Robots are challenging precisely because they can’t be neatly defined by the legal structures we depend upon to keep us safe. The only answer we have is to clarify our definitions of robots and autonomy — but that requires a partnership between those who really understand the technology and those who really understand the law.

Thank goodness legal scholars like Ryan Calo are already on the case.





Hallie Siegel, robotics editor-at-large




