We Robot Conference: 2. Law as algorithm

03 May 2013


On April 8-9, Stanford Law School held the second annual robotics and law conference, We Robot. This year’s event focused on near-term policy issues in robotics and featured panels and papers by scholars, practitioners, and engineers on topics like intellectual property, tort liability, legal ethics, and privacy. The full program is here.

This post is part of Robohub’s We Robot coverage.

Woodrow Hartzog and Greg Conti presented their paper (co-authored with Lisa Shay and John Nelson of West Point) on implementing law in computer systems.

Paper: Do Robots Dream of Electric Laws? An Experiment in Law as Algorithm
Authors: Greg Conti, Woodrow Hartzog
Moderator: Harry Surden

This was an especially interesting presentation, given that last year at We Robot in Miami, the same authors presented a paper on the problems of removing humans from the loop and turning law enforcement over to a computer (Confronting Automated Law Enforcement). This year, the authors took the question a step further, examining in an experiment what happens when laws are converted into algorithms. It turns out that even simple rules (like speed limits) require unexpected interpretive decisions.

In the experiment, 52 programmers were asked to automate the enforcement of traffic speed limits. They were given a set of driving data, and each wrote a program that measured the number of speed limit violations and issued traffic tickets accordingly. Despite having exact data for both vehicle speed and speed limits, the number of issued tickets varied among the programs. The authors attribute the variance to the fact that the programmers had to make assumptions and legal interpretations, for example whether to code according to the letter of the law (100% enforcement of every violation) or the intent of the law (tolerating minor infractions). The study indicates that there can be unanticipated degrees of freedom in the design of enforcement algorithms, even when dealing with seemingly straightforward legal rules.
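The design choice the participants faced can be sketched in a few lines of code. This is a hypothetical illustration, not the study's actual task or data: the function names, the sample readings, and the 5 mph tolerance are all assumptions made for the example.

```python
# Illustrative sketch (not from the paper): the same speed data yields
# different ticket counts depending on how the rule is encoded.

SPEED_LIMIT = 55  # mph; hypothetical posted limit for this road segment

# Hypothetical (vehicle_id, measured_speed) samples
readings = [("A", 55), ("B", 56), ("C", 58), ("D", 62), ("E", 71)]

def letter_of_the_law(samples, limit):
    """Ticket every reading above the posted limit, however slight."""
    return [vid for vid, speed in samples if speed > limit]

def intent_of_the_law(samples, limit, tolerance=5):
    """Tolerate minor infractions, as a human officer often would."""
    return [vid for vid, speed in samples if speed > limit + tolerance]

print(letter_of_the_law(readings, SPEED_LIMIT))  # ['B', 'C', 'D', 'E']
print(intent_of_the_law(readings, SPEED_LIMIT))  # ['D', 'E']
```

Both programs are faithful to "enforce the speed limit," yet they issue different numbers of tickets, which is exactly the kind of variance the experiment surfaced.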

As for the broader implications, the authors drew attention to the following problem areas:
- Culpability: We may not be able to automate laws that include culpability, because this is too difficult to determine automatically.
- Objectivity of the wrongful conduct: Speed, for example, does not necessarily equal recklessness.
- Identification: With automated enforcement, we would have to make particularly sure that someone's identity can be verified.
- Accessibility: To what extent does this lead to a surveillance society? What kind of information is, and should be, accessible?

The results of the study also caution against outsourcing the coding of law to third parties. They indicate that seemingly minor details can make a huge difference, creating more variance than expected and requiring decisions by someone with actual rule-making authority. The authors also noted the need to consider potential second- and third-order effects, such as changes in traffic flow.

Surden pointed out that laws are already being translated into computer code today in ways that many people, including lawmakers, are not fully aware of. For example, the tax preparation software TurboTax embeds many judgment calls and design decisions that have gone under the radar and are accepted more or less unquestioned by the Internal Revenue Service.

To my (personal, IP-nerdy) delight, an audience member mentioned the problems with automated content identification and copyright takedowns on platforms like YouTube. These systems often remove completely legitimate content, causing trouble for fair-use activity. Surden said that one takeaway for automated enforcement is that the appeal process should be just as easy as the takedown process.

The discussion also turned to the question of socially desirable enforcement. If we can embed laws in systems, we could potentially have a system of perfect monitoring and perfect enforcement. But are there costs to that perfection? Both authors argued that imperfect enforcement allows for a healthy amount of discretion and flexibility, and that bureaucracy left to its own devices could be dangerous. Even more problematic than perfect enforcement, said Hartzog, is perfect prevention. As we've seen with red-light camera systems, citizens can become outraged by what they perceive as a lack of value judgment. A disconnect between law and social perception can go so far as to be counterproductive. Conti also postulated that perfect law enforcement would beat out the type of person who is willing to take on some risk to achieve a certain gain, behavior that we might otherwise want to maintain in our society.

The audience was also interested in the question of whether our current system of human error, bias, and corruption in law enforcement is better or worse than a system that leaves no flexibility for human judgment. Can we build room for discretion and community norms into systems? What happens if a human bias becomes systematized? Interestingly, when the programmers were asked after the experiment whether they would want to drive on the roads with the programs they had built, their answer was almost uniformly "no." (One programmer said yes, but only on the condition that she could create a backdoor exception for herself.)

In terms of policy recommendations and reducing the uncertainty of coding law, the authors suggested creating a committee or organization to set standards for automated enforcement systems, and also that the code of such systems be kept transparent and open to examination.

See all the We Robot coverage on Robohub


Kate Darling
