
We Robot Conference: 2. Law as algorithm

by Kate Darling
03 May 2013

On April 8-9, Stanford Law School held the second annual robotics and law conference, We Robot. This year’s event focused on near-term policy issues in robotics and featured panels and papers by scholars, practitioners, and engineers on topics like intellectual property, tort liability, legal ethics, and privacy. The full program is here.

This post is part of Robohub’s We Robot coverage.

Woodrow Hartzog and Greg Conti presented their paper (co-authored with Lisa Shay and John Nelson of West Point) on implementing law in computer systems.

Paper: Do Robots Dream of Electric Laws? An Experiment in Law as Algorithm
Authors: Greg Conti, Woodrow Hartzog
Moderator: Harry Surden

This was an especially interesting presentation, given that last year at We Robot in Miami the same authors presented a paper on the problems of removing humans from the loop and turning law enforcement over to computers (Confronting Automated Law Enforcement). This year, the authors took the question a step further: in an experiment, they looked at what happens when you convert laws into algorithms. It turns out that even simple rules, like speed limits, require unexpected interpretive decisions.

In the experiment, 52 programmers were asked to automate the enforcement of traffic speed limits. Each was given the same set of driving data and wrote a program that measured the number of speed limit violations and issued traffic tickets accordingly. Despite having exact data for both vehicle speed and speed limits, the number of tickets issued varied among the programs. The authors attribute the variance to the assumptions and legal interpretations the programmers were forced to make, for example whether to code to the letter of the law (100% enforcement of every violation) or to its intent (tolerating minor infractions). The study indicates that there can be unanticipated degrees of freedom in the design of enforcement algorithms, even for seemingly straightforward legal rules.
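
To make that divergence concrete, here is a minimal sketch (my own illustration, not code from the paper; the data format, the 5 mph tolerance, and the one-ticket-per-episode rule are all assumptions) of two equally defensible enforcement policies that issue different numbers of tickets from identical data:

```python
# Hypothetical illustration of the paper's finding: two reasonable
# readings of "enforce the speed limit" yield different ticket counts
# from the exact same data.

# (timestamp_seconds, measured_speed_mph, posted_limit_mph)
readings = [
    (0, 64, 65), (1, 66, 65), (2, 67, 65), (3, 64, 65),
    (4, 68, 65), (5, 72, 65), (6, 71, 65), (7, 63, 65),
]

def tickets_letter_of_law(readings):
    """Letter of the law: every reading over the limit is a violation."""
    return sum(1 for _, speed, limit in readings if speed > limit)

def tickets_intent_of_law(readings, tolerance_mph=5):
    """Intent of the law: tolerate minor excursions, and count one
    ticket per continuous violation episode rather than per sample."""
    tickets, in_violation = 0, False
    for _, speed, limit in readings:
        if speed > limit + tolerance_mph:
            if not in_violation:   # a new violation episode begins
                tickets += 1
                in_violation = True
        else:
            in_violation = False   # the episode ends
    return tickets

print(tickets_letter_of_law(readings))  # 5 tickets
print(tickets_intent_of_law(readings))  # 1 ticket
```

Both functions faithfully "enforce the speed limit," yet on the same eight readings they disagree by a factor of five. That gap is exactly the kind of interpretive freedom the experiment measured.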

As for the broader implications, the authors drew attention to the following problem areas:

Culpability: We may not be able to automate laws that involve culpability, because it is too difficult to determine automatically.
Objectivity of the wrongful conduct: Speed, for example, does not necessarily equal recklessness.
Identification: With automated enforcement, we would have to be particularly sure that a person's identity can be verified.
Accessibility: To what extent does this lead to a surveillance society? What kind of information is, and should be, accessible?

The results of the study also caution against outsourcing the coding of the law to third parties. They show that seemingly minor details can make a huge difference, creating more variance than assumed and requiring decisions by someone with actual rule-making authority. The authors also mentioned the need to consider potential second- and third-order effects, such as effects on traffic flow.

Surden pointed out that there are already examples today of laws being translated into computer code that many people, including lawmakers, are not fully aware of. For example, the tax preparation software TurboTax inherently involves a lot of judgment calls and design decisions that have gone under the radar and are accepted more or less unquestioned by the Internal Revenue Service.

To my (personal, IP-nerdy) delight, an audience member mentioned the problems with Content ID and automated copyright takedowns on platforms like YouTube. These programs often remove perfectly legitimate content, causing trouble for fair-use activity. Surden said that one take-away for automated enforcement is that the appeal process should be just as easy as the takedown process.

The discussion also turned to the question of socially desirable enforcement. If we can embed laws in systems, we could potentially have perfect monitoring and perfect enforcement. But are there costs to that perfection? Both authors argued that imperfect enforcement allows for a healthy amount of discretion and flexibility, and that a bureaucracy left to its own devices could be dangerous. Even more problematic than perfect enforcement, said Hartzog, is perfect prevention. As we have seen with red light camera systems, citizens can become outraged by what they perceive as a lack of value judgment, and a disconnect between law and social perception can go so far as to be counterproductive. Conti also postulated that the type of person who is willing to take on some risk to achieve a certain gain would be beaten out by perfect law enforcement, even though that is behavior we might otherwise want to preserve in our society.

The audience was also interested in the question of whether our current system of human error, bias, and corruption in law enforcement is better or worse than a system that leaves no room for human judgment. Can we build discretion and community norms into such systems? What happens if a human bias becomes systematized? Interestingly, when the programmers were asked after the experiment whether they would want to drive on roads governed by the programs they had built, the answer was almost uniformly "no" (one programmer said yes, on condition of creating a backdoor exception for herself).

In terms of policy recommendations and reducing the uncertainty of coding law, the authors suggested creating a committee or organization to set standards for automated enforcement systems, and also that the code of such systems be kept transparent and open to examination.

See all the We Robot coverage on Robohub


