Artificial people: How will the law adapt to intelligent systems?


by Rob van den Hoven van Genderen
31 March 2017




Robotics technology is no longer limited to industry. Climate controls, 3D printers, surveillance robots, drones, household robots and even sex robots are entering the private market. The more autonomous these systems become, the harder it becomes to resolve conflicts, such as those between humans and software.

The law currently recognizes natural persons, like you and me. Companies, organizations and governments can also enter into agreements and take on liability. These non-natural persons are represented by real people (they must be controlled, after all). But what about autonomous systems that take over tasks and make intelligent decisions that might be interpreted as legal acts?

Social robots will not just vacuum your house: they might pay bills, independently enter into contracts and, who knows, take the car to get your groceries. Civil law as it stands, however, sees the robot as an object, not as a subject with legal capacity. The issue is reminiscent of Bicentennial Man, a 1999 science-fiction film based on a 1976 story by Isaac Asimov. The robot Andrew Martin wants to be recognized as a ‘natural person’, a request the court rejects on the grounds that a robot lives forever. Years later, the robot asks for a review: an update has enabled him to die and, as Andrew points out, the judge himself makes use of non-natural resources.

Industrial robots do not yet have legal capacity; they carry out instructions in a defined process. The need for legal personality arises only when a robot participates in society. A social robot in a care role, for example, must make arrangements with physicians and suppliers, and that cannot happen without some acceptance of legal personality. But at what level would artificial intelligence be “equivalent” to that of natural persons?

The Turing test may have seemed definitive in 1950, but does it still suffice? Under the test, a machine qualifies as intelligent at a “human” level if an interrogator, chatting with it, gets the impression of talking to a real person. A legally capable robot, however, would also have to appreciate statements emotionally and morally and react to them appropriately; in other words, it needs social intelligence. And it should respond appropriately to changing circumstances, which amounts to a dynamic action intelligence.
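To make the mechanics of the test concrete, here is a minimal sketch of the imitation game in Python, for illustration only. The judge object and the respond_human / respond_machine callables are hypothetical placeholders rather than any real chatbot or evaluation API, and the pass criterion used here (judges guessing no better than chance) is one common reading of Turing's proposal, not a definitive implementation.

```python
# A minimal sketch of the imitation game described above; illustrative only.
# The `judge` object and the respond_human / respond_machine callables are
# hypothetical placeholders, not a real chatbot or evaluation API.
import random

def run_imitation_game(judge, respond_human, respond_machine, n_turns=5):
    """One round: the judge chats with two hidden participants and guesses which is the machine."""
    # Randomly assign the hidden labels "A" and "B" to the human and the machine.
    if random.random() < 0.5:
        participants = {"A": respond_human, "B": respond_machine}
    else:
        participants = {"A": respond_machine, "B": respond_human}

    transcript = {"A": [], "B": []}
    for _ in range(n_turns):
        for label, respond in participants.items():
            question = judge.ask(label, transcript[label])  # judge poses a question to one participant
            answer = respond(question)                      # hidden participant replies in text
            transcript[label].append((question, answer))

    guess = judge.guess_machine(transcript)                 # judge names "A" or "B" as the machine
    machine_label = "A" if participants["A"] is respond_machine else "B"
    return guess == machine_label                           # True if the judge identified the machine

# Over many rounds with many judges, the machine "passes" if judges do no
# better than chance, i.e. a correct-identification rate close to 50%.
```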

Could you imagine a robot as your colleague or boss? The Swedish series “Real Humans” has already dramatized the scenario. Can you dismiss employees once a robot does their job better? Can we accept that a robot has control over an individual? Labour law does not give an answer. A topical example is a self-driving car that crashes. Under road traffic law, the driver is responsible. But what if control depends on road management, the vehicle manufacturer, a meteorological service, navigation systems and the algorithm that makes the car self-learning?

Sustainable robot law is therefore particularly complicated in the area of liability. And if a robot is guilty, does it make sense to punish it? Can we pull the plug, as in the movie I, Robot? At the same time, less conflict is to be expected: ninety percent of traffic accidents are caused by human error, and the rest by circumstances such as a falling tree or a flat tire.

And what is a natural person anyway? Is it a man or woman who was not born of another human being but was assembled from an artificial heart, artificial kidneys, artificial limbs, an artificial brain and so on, and brought to life in a laboratory? What about discrimination? And if a human body has been upgraded with robotics, can we just switch it off?

Back to the distinction between a legal object and a subject of law. Add to that sui generis, a legal phenomenon that is the only one of its kind. Asimov already offered three laws:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Good thoughts, but motivated by fear and the desire for human control. Adherence to these laws might actually put the brakes on the development of artificial intelligence. But what we want is progress, right?

It’s time to create a commission for this issue: an international, multidisciplinary committee of lawyers, philosophers, ethicists, computer scientists, political scientists and economists. Otherwise, the robots might in the long run come up with a solution themselves.

Better adapted to societal needs are the laws proposed by Murphy and Woods:

  • A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  • A robot must respond to humans as appropriate for their roles.
  • A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

But there is a precondition: a robot must add value to society by performing its task. From a legal perspective, I would add:

  • An autonomous intelligent robot must be accepted as an equal partner in performing legal acts.

Legally acting robots must be certified as such, based on preconditions such as: a level of intelligence above the Turing test, socially acceptable dynamic intelligence, and a societal and legal understanding of moral and legal norms.




Rob van den Hoven van Genderen is director of the Center for Law and Internet at the Law Faculty of VU University Amsterdam.




