Robohub.org
 

Artificial people: How will the law adapt to intelligent systems?

by Rob van den Hoven van Genderen
31 March 2017




Robotics technology is no longer limited to industry. Climate controls, 3D printers, surveillance robots, drones, household robots and even sex robots are entering the consumer market. The more autonomous these systems become, the harder it is to resolve conflicts, such as those between humans and software.

The law currently recognizes individuals, like you and me. Companies, organizations and governments can also negotiate agreements and bear liability. These non-natural persons are represented by real people (they must be controlled, after all). But what about autonomous systems that take over tasks and make intelligent decisions that might be interpreted as legal acts?

Social robots will not just vacuum your house; they might pay bills, independently enter into contracts and, who knows, take the car to get groceries for you. Civil law as it is written, however, sees the robot as an object, not as a subject with legal capacity. One issue is reminiscent of Bicentennial Man, a science-fiction film from 1999 based on a 1977 story by Isaac Asimov. The robot Andrew Martin wants to be recognized as a ‘natural person’, a request that the court rejects with the argument that a robot lives forever. Years later, the robot asks for a review: an upgrade has enabled him to die. Moreover, Andrew argues, the judge himself makes use of non-natural resources.

Industrial robots do not yet have legal capacity. They carry out instructions in a defined process. The need for legal personality arises only upon participation in society: a social robot in a care role, for example, must make arrangements with physicians and suppliers. That cannot happen without some acceptance of legal personality. But at what level will artificial intelligence be “equivalent” to that of natural persons?

The Turing test may have seemed definitive in 1950, but does it still hold? According to the test, a machine qualifies as intelligent at a “human” level if an interrogator, for example in a chat, gets the impression that she is talking to a real person. The robot should appreciate statements emotionally and morally and react to them appropriately; in other words, it must have social intelligence. To qualify as having dynamic action-intelligence, it should also respond appropriately to changing circumstances.

Could you imagine a robot as your colleague or boss? The Swedish series “Real Humans” has already dramatized the scenario. Can you dismiss individuals once their job is done better by a robot? Can we accept that a robot has control over an individual? Labour law gives no answer. A topical example is a self-driving car that crashes. According to road traffic laws, the driver is responsible. But what if the controller depends on road management, the vehicle manufacturer, a meteorological service, navigation systems and the algorithm that made the car self-learning?

Sustainable robot law is therefore particularly complicated in the area of liability. And if a robot is guilty, does it make sense to punish it? Can we pull the plug, as in the movie I, Robot? At the same time, less conflict is to be expected: ninety percent of traffic accidents are caused by human error, the rest by circumstances such as a falling tree or a flat tire.

And what is a natural person anyway? Is it a man or woman who was not born of another human being, but composed of an artificial heart, artificial kidneys, artificial limbs, an artificial brain, and so on, and brought to life in a laboratory? What about discrimination? If a human body has been upgraded with robotics, can we just switch it off?

Back to the distinction between a legal object and a legal subject. Add to that sui generis: a legal phenomenon that is the only one of its kind. Asimov already offered three laws:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence, as far as such protection does not conflict with the First or Second Law.
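The structure of these laws is a strict priority ordering: each law applies only insofar as it does not conflict with a higher one. As a rough illustration of that precedence (not anything proposed in the article), here is a toy sketch in Python; the `Action` flags and the `permitted` check are purely hypothetical simplifications:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags describing a candidate robot action.
    harms_human: bool = False       # would injure a human, or let one come to harm
    ordered_by_human: bool = False  # fulfils an order given by a human
    self_destructive: bool = False  # endangers the robot's own existence

def permitted(action: Action, pending_order: bool) -> bool:
    """Toy evaluation of Asimov's three laws in strict priority order."""
    # First Law: never harm a human, regardless of the lower laws.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law
    # (already excluded above). Here: reject actions that ignore a pending order.
    if pending_order and not action.ordered_by_human:
        return False
    # Third Law: avoid self-destruction, unless required by Laws One or Two.
    if action.self_destructive and not action.ordered_by_human:
        return False
    return True
```

The point of the sketch is only the ordering: a self-destructive action is permitted when ordered (Second Law outranks Third), but a harmful one never is (First Law outranks both).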

Good thoughts, but motivated by fear and the desire for human control. Adherence to these laws might actually put the brakes on the development of artificial intelligence. But what we want is progress, right?

It’s time to create a commission for this issue: an international multidisciplinary committee of lawyers, philosophers, ethicists, computer scientists, political scientists and economists. Otherwise, the robots might, in the long run, provide a solution themselves.

More adapted to the societal need are the proposed laws of Murphy and Woods:

  • A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  • A robot must respond to humans as appropriate for their roles.
  • A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

But there is a precondition: a robot must add value to society by performing its task. From a legal perspective, I would add:

  • An autonomous intelligent robot must be accepted as an equal partner in performing legal acts.

Legally acting robots must be certified as such, based on preconditions such as: Turing-test-level, socially acceptable dynamic intelligence, and a societal and legal understanding of moral and legal norms.




Rob van den Hoven van Genderen is director of the Center for Law and Internet of the Law Faculty of the VU University of Amsterdam.









