Robohub.org
 

We Robot Conference: 2. Law as algorithm


by Kate Darling
03 May 2013




On April 8-9, Stanford Law School held the second annual robotics and law conference, We Robot. This year’s event focused on near-term policy issues in robotics and featured panels and papers by scholars, practitioners, and engineers on topics like intellectual property, tort liability, legal ethics, and privacy. The full program is here.

This post is part of Robohub’s We Robot coverage.

Woodrow Hartzog and Greg Conti presented their paper (co-authored with Lisa Shay and John Nelson of West Point) on implementing law in computer systems.

Paper: Do Robots Dream of Electric Laws? An Experiment in Law as Algorithm
Authors: Greg Conti and Woodrow Hartzog
Moderator: Harry Surden

This was an especially interesting presentation, given that last year at We Robot in Miami, the same authors presented a paper on the problems of removing humans from the loop and turning law enforcement over to a computer (Confronting Automated Law Enforcement). This year, the authors took the question a step further: in an experiment, they looked at what happens when laws are converted into algorithms. It turns out that even simple rules (like speed limits) require unexpected interpretive decisions.

In the experiment, 52 programmers were asked to automate the enforcement of traffic speed limits. They were given a set of driving data, and each wrote a program that measured the number of speed limit violations and issued traffic tickets accordingly. Despite having exact data for both vehicle speed and speed limits, the number of issued tickets varied among the programs. The authors attribute the variance to the fact that the programmers had to make assumptions and legal interpretations, for example whether to code according to the letter of the law (100% enforcement of every violation) or the intent of the law (tolerating minor infractions). The study indicates that there can be unanticipated degrees of freedom in the design of enforcement algorithms, even when dealing with seemingly straightforward legal rules.
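The paper does not reproduce the participants' programs, but the core ambiguity can be sketched in a few lines. In this hypothetical re-creation (the speed limit, tolerance margin, and readings are all invented for illustration), two programs receive identical data and differ only in how they interpret "violation":

```python
SPEED_LIMIT = 55  # mph; illustrative value, not taken from the paper

# Sampled vehicle speeds in mph, one reading per interval (invented data)
readings = [54, 56, 57, 55, 61, 62, 58, 54, 53, 66]

def tickets_letter_of_law(speeds, limit):
    """Letter of the law: every reading above the limit is a violation."""
    return sum(1 for s in speeds if s > limit)

def tickets_intent_of_law(speeds, limit, tolerance=5):
    """Intent of the law: tolerate minor infractions within a grace margin."""
    return sum(1 for s in speeds if s > limit + tolerance)

print(tickets_letter_of_law(readings, SPEED_LIMIT))  # 6 tickets
print(tickets_intent_of_law(readings, SPEED_LIMIT))  # 3 tickets
```

Both programs are faithful implementations of "enforce the speed limit," yet they issue different numbers of tickets from the same data. Multiply this by further judgment calls (how long a violation must last, how to handle sensor noise, whether to ticket repeat readings once or each time) and the variance the study observed follows naturally.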

As for the broader implications, the authors drew attention to the following problem areas:

- Culpability: We may not be able to automate laws that involve culpability, because it is too difficult to determine automatically.
- Objectivity of the wrongful conduct: Speed, for example, does not necessarily equal recklessness.
- Identification: With automated enforcement, we would have to be particularly sure that someone's identity can be verified.
- Accessibility: To what extent does this lead to a surveillance society? What kind of information is, and should be, accessible?

The results of the study also caution against outsourcing the coding of law to third parties. They indicate that seemingly minor details can make a huge difference, creating more variance than assumed and requiring decisions by someone with actual rule-making authority. The authors also mentioned the need to consider potential second- and third-order effects, such as on traffic flow.

Surden pointed out that there are already examples of laws being translated into computer code today that many people, including lawmakers, are not fully aware of. For example, the tax preparation software TurboTax inherently involves a lot of judgment and design decisions that have gone under the radar and are accepted more or less unquestioned by the Internal Revenue Service.

To my (personal, IP-nerdy) delight, an audience member mentioned the problems with content identification and automatic copyright takedowns on platforms like YouTube. Often these programs remove completely legitimate content, causing trouble for fair-use activity. Surden said that one takeaway for automated enforcement is that the appeal process should be just as easy as the takedown process.

The discussion also turned to the question of socially desirable enforcement. If we can embed laws in systems, we could potentially have perfect monitoring and perfect enforcement. But are there costs to that perfection? Both authors argued that imperfect enforcement allows for a healthy amount of discretion and flexibility, and that a bureaucracy left to its own devices could be dangerous. Even more problematic than perfect enforcement, said Hartzog, is perfect prevention. As we've seen with red light camera systems, citizens can become outraged by what they perceive as a lack of value judgment. A disconnect between law and social perception can go so far as to be counterproductive. Conti also suggested that perfect law enforcement would beat out the type of person who is willing to take on some risk to achieve a certain gain, behavior that we might otherwise want to maintain in our society.

The audience was also interested in the question of whether our current system of human error, bias, and corruption in law enforcement is better or worse than a system that leaves no flexibility for human judgment. Can we build room for discretion and community norms into systems? What happens if a human bias becomes systematized? Interestingly, when the programmers were asked after the experiment whether they would want to drive on roads governed by the programs they had built, their answer was almost uniformly "no." (One programmer said yes, conditioned on creating a backdoor exception for herself.)

In terms of policy recommendations and reducing the uncertainty of coding law, the authors suggested creating a committee or organization to set standards for automated enforcement systems, and also that the code of such systems be kept transparent and open to examination.

See all the We Robot coverage on Robohub





Kate Darling


