Robohub.org
 

We Robot Conference: 2. Law as algorithm


by Kate Darling
03 May 2013




On April 8-9, Stanford Law School held the second annual robotics and law conference, We Robot. This year’s event focused on near-term policy issues in robotics and featured panels and papers by scholars, practitioners, and engineers on topics like intellectual property, tort liability, legal ethics, and privacy. The full program is here.

This post is part of Robohub’s We Robot coverage.

Woodrow Hartzog and Greg Conti presented their paper (co-authored with Lisa Shay and John Nelson of West Point) on implementing law in computer systems.

Paper: Do Robots Dream of Electric Laws? An Experiment in Law as Algorithm
Authors: Greg Conti, Woodrow Hartzog
Moderator: Harry Surden

This was an especially interesting presentation, given that last year at We Robot in Miami, the same authors presented a paper on the problems of removing humans from the loop and turning law enforcement over to a computer (Confronting Automated Law Enforcement). This year, the authors took the question a step further: in an experiment, they looked at what happens when you convert laws into algorithms. It turns out that even simple rules (like speed limits) require unexpected interpretive decisions.

In the experiment, 52 programmers were asked to automate the enforcement of traffic speed limits. They were given a set of driving data, and each wrote a program that measured the number of speed limit violations and issued traffic tickets accordingly. Despite having exact data for both vehicle speed and speed limits, the number of issued tickets varied among the programs. The authors attribute the variance to the fact that the programmers were faced with having to make assumptions and legal interpretations, for example whether to code according to the letter of the law (100% enforcement of every violation) or the intent of the law (tolerating minor infractions). The study indicates that there can be unanticipated degrees of freedom in the design of enforcement algorithms, even when dealing with seemingly straightforward legal rules.
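The divergence the authors observed can be illustrated with a minimal sketch. The driving data and the tolerance value below are hypothetical, chosen purely for illustration, not taken from the study:

```python
# Two reasonable interpretations of "enforce the speed limit",
# run over identical data, yield different ticket counts.

# Hypothetical driving log: (vehicle speed in mph, posted limit in mph)
readings = [(57, 55), (66, 65), (80, 65), (54, 55), (71, 65)]

def tickets_letter_of_law(log):
    """Letter of the law: any speed above the limit is a violation."""
    return sum(1 for speed, limit in log if speed > limit)

def tickets_intent_of_law(log, tolerance=5):
    """Intent of the law: tolerate minor infractions within a buffer."""
    return sum(1 for speed, limit in log if speed > limit + tolerance)

print(tickets_letter_of_law(readings))  # 4 tickets
print(tickets_intent_of_law(readings))  # 2 tickets
```

The same exact input produces twice as many tickets under one defensible reading of the rule as under another, and the choice of tolerance is itself a policy decision the programmer is quietly making.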

As for the broader implications, the authors drew attention to the following problem areas:
Culpability: We may not be able to automate laws that involve culpability, because it is too difficult to determine automatically.
Objectivity of the wrongful conduct: Speed, for example, does not necessarily equal recklessness.
Identification: With automated enforcement, we would have to be particularly sure that a person's identity can be verified.
Accessibility: To what extent does this lead to a surveillance society? What kind of information is, and should be, accessible?

The results of the study also caution against outsourcing the coding of the law to third parties. They indicate that seemingly minor details can make a huge difference, creating more variance than assumed and requiring decisions by someone with actual rule-making authority. The authors also mentioned the need to consider potential second- and third-order effects, such as effects on traffic flow.

Surden pointed out that there are already examples today of laws being translated into computer code that many people, including lawmakers, are not fully aware of. For example, the tax preparation software TurboTax inherently involves a lot of judgment and design decisions that have gone under the radar and are accepted more or less unquestioned by the Internal Revenue Service.

To my (personal, IP-nerdy) delight, an audience member mentioned the problems with automated content identification and copyright takedowns on platforms like YouTube. These programs often remove completely legitimate content, causing trouble for fair-use activity. Surden said that one takeaway for automated enforcement is that the appeal process should be just as easy as the takedown process.

The discussion also turned to the question of socially desirable enforcement. If we can embed laws in systems, we could potentially have a system of perfect monitoring and perfect enforcement. But are there costs to that perfection? Both authors argued that imperfect enforcement allows for a healthy amount of discretion and flexibility, and that bureaucracy left to its own devices could be dangerous. Even more problematic than perfect enforcement, said Hartzog, is perfect prevention. As we've seen with red light camera systems, citizens can become outraged by what they perceive as a lack of value judgment. A disconnect between law and social perception can go so far as to be counterproductive. Conti also postulated that the type of person who is willing to take on some risk to achieve a certain gain would be beaten out by perfect law enforcement, even though this is behavior we might otherwise want to preserve in our society.

The audience was also interested in the question of whether our current system of human error, bias, and corruption in law enforcement is better or worse than a system that leaves no flexibility for human judgment. Can we build room for discretion and community norms into systems? What happens if a human bias becomes systematized? Interestingly, when the programmers were asked after the experiment whether they would want to drive on roads governed by the programs they had built, their answer was uniformly "no." (Although one programmer said yes, conditioned on creating a backdoor exception for herself.)

In terms of policy recommendations and reducing the uncertainty of coding law, the authors suggested creating a committee or organization to set standards for automated enforcement systems, and also that the code of such systems be kept transparent and open to examination.

See all the We Robot coverage on Robohub





Kate Darling

