One day, robots will present difficult legal challenges. This seems to be the consensus among commentators. And who am I to disagree? I have myself argued, right here on the digital pages of Slate, that robotics will generate no fewer puzzles for the law than the last transformative technology of our time—the Internet. Future courts will have to decide, for instance, whether a home robot manufacturer is responsible for the apps that run on it and whether to hold anyone accountable for robot behavior no one intended or foresaw.
So I’m in agreement with the many scholars, journalists, and others who see interesting times ahead for robotics law and policy. It turns out, however, that there are just as interesting times behind.
Robots have been in American society for half a century. And, like most technologies, they have occasioned legal disputes. A small team of research assistants and I went back and looked at hundreds of cases involving robots in one way or another over the past six decades. The cases span a wide variety of legal contexts, including criminal, maritime, tort, immigration, import, tax, and other law. Together they tell a fascinating story about the way courts think about an increasingly important technology. (You can read the full paper, “Robots in American Law,” here.)
In many of the cases I came across, the role of the robot was incidental: The case would likely have come out just the same way had a robot not been at issue. Some of these incidental cases were fascinating. Nannuzzi v. King et al. (1987) involved an injury on a movie set, where a robotic lawnmower malfunctioned and injured a cameraman. The film, written and directed by Stephen King, was Maximum Overdrive, a movie about machines coming alive and attacking people. Nevertheless, the legal issues would have been basically the same had a falling stage light, rather than a robot, caused the injury.
In other cases, however, it really seemed to matter that a robot was at issue. In White v. Samsung (1993), for example, a federal appellate court had to decide whether a robot version of Vanna White in a Samsung print ad “represented” the game show hostess for purposes of the right of publicity. The majority thought it did. The dissent was adamant it did not. “One is Vanna White,” said the dissent, “The other is a robot. No one could reasonably confuse the two.” Just a few years later, the same court encountered a second case of robots emulating people, this time Cliff and Norm from the television show Cheers. Judge Alex Kozinski’s eventual dissent from a decision not to rehear the case began with the words, “Robots again.”
In Comptroller of the Treasury v. Family Entertainment Centers (1987), a Maryland court had to decide whether life-size, animatronic puppets that dance and sing at Chuck E. Cheese restaurants trigger a state tax on food “where there is furnished a performance.” The court went on at length about the nature of the term “performance” and why a robot could not display the requisite spontaneity. In Louis Marx & Co. and Gehrig Hoban & Co., Inc. v. United States (1958), by contrast, a customs court had to decide whether a “mechanical walking robot” being imported represented an animate object (and therefore a doll), which is taxed at a lower rate. The court went on to draw a distinction between a robot, which represents a human, and the toy in question, which represents only a robot.
Read the full piece at Slate.
This post originally appeared at the Center for Internet and Society.