
Artificial people: How will the law adapt to intelligent systems?

by Rob van den Hoven van Genderen
31 March 2017




Robotics technology is no longer limited to industry. Climate-control systems, 3D printers, surveillance robots, drones, household robots and even sex robots are entering the consumer market. The more autonomous these systems become, the harder it is to resolve conflicts, such as those between humans and software.

The law currently recognizes natural persons, like you and me. Companies, organizations and governments can also enter into agreements and incur liability. These non-natural persons are represented by real people (they have to be controlled, after all). But what about autonomous systems that take over tasks and make intelligent decisions that might be interpreted as legal acts?

Social robots will not just vacuum your house; they might pay bills, independently enter into contracts and, who knows, take the car to fetch your groceries. Civil law as it is written, however, sees the robot as an object, not as a subject with legal capacity. The issue is reminiscent of Bicentennial Man, a 1999 science-fiction film based on a 1977 story by Isaac Asimov. The robot Andrew Martin wants to be recognized as a ‘natural person’, a request the court rejects on the grounds that a robot lives forever. Years later, the robot asks for a review: an upgrade has enabled him to die. Moreover, Andrew argues, the judge himself makes use of non-natural parts.

Industrial robots do not yet need legal capacity; they carry out instructions in a defined process. The need for legal personality arises only when a robot participates in society. A social robot in a care role, for example, must make arrangements with physicians and suppliers, which is hard to imagine without some acceptance of legal personality. But at what level would artificial intelligence be “equivalent” to that of natural persons?

The Turing test may have seemed definitive in 1950, but does it still suffice? Under the test, a machine qualifies as intelligent at a “human” level when an interrogator, chatting with it blind, comes away with the impression of talking to a real person. The machine should appreciate statements emotionally and morally and react to them appropriately; in other words, it must have social intelligence. To qualify as having dynamic action-intelligence, it must also respond appropriately to changing circumstances.
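
Turing’s imitation game is simple enough to sketch in code. The toy below is purely illustrative, assuming stand-in respondents rather than a real human and a real conversational AI; the function names and the canned reply are hypothetical.

```python
# A toy sketch of Turing's "imitation game". Everything here is a
# placeholder: a real test pits a hidden human against a conversational
# AI, with the interrogator kept blind to which party is which.
import random

def human_respondent(question: str) -> str:
    # Stand-in for the hidden human party (answered at the keyboard).
    return input(f"(answer as the human) {question}\n> ")

def machine_respondent(question: str) -> str:
    # Trivial canned-reply stand-in for a conversational AI.
    return "Interesting question. What makes you ask?"

def imitation_game(questions: list[str]) -> None:
    """The interrogator questions two hidden parties, A and B, then
    guesses which one is the machine. The machine 'passes' when such
    guesses are no better than chance."""
    parties = [human_respondent, machine_respondent]
    random.shuffle(parties)  # hide which label is the machine
    for q in questions:
        for label, respond in zip("AB", parties):
            print(f"{label}: {respond(q)}")
    guess = input("Which party is the machine, A or B? ").strip().upper()
    actual = "AB"[parties.index(machine_respondent)]
    print("Unmasked." if guess == actual else "The machine passed this round.")

if __name__ == "__main__":
    imitation_game(["How was your weekend?", "What does rain smell like?"])
```

Note that even a machine that fools the interrogator in such a game shows only conversational competence, not the moral and dynamic intelligence argued for here.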

Can you imagine a robot as your colleague, or as your boss? The Swedish series “Real Humans” has already dramatized the scenario. May individuals be dismissed once a robot does their job better? Can we accept a robot having authority over an individual? Labour law gives no answer. A topical case is the self-driving car that crashes. Under road traffic law, the driver is responsible. But what if control depends on road management, the vehicle manufacturer, a meteorological service, navigation systems and the algorithm that made the car self-learning?

Sustainable robot law is therefore particularly complicated in the area of liability. And if a robot is guilty, does it make sense to punish it? Can we pull the plug, as in the movie I, Robot? At the same time, less conflict is to be expected: ninety percent of traffic accidents are caused by human error, the rest by circumstances such as a falling tree or a flat tire.

And what is a natural person anyway? Is it a man or woman who was not born of another human being but assembled from an artificial heart, artificial kidneys, artificial limbs, artificial brains and so on, and brought to life in a laboratory? What about discrimination? If a human body has been upgraded with robotics, can we simply switch it off?

Back to the distinction between a legal object and a legal subject. Perhaps the robot is something sui generis: a legal phenomenon that is the only one of its kind. Asimov offered three laws long ago:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Good thoughts, but motivated by fear and the desire for human control. Adherence to these laws might actually put the brakes on the development of artificial intelligence. But what we want is progress, right?

It’s time to create a commission for this issue: an international, multidisciplinary committee of lawyers, philosophers, ethicists, computer scientists, political scientists and economists. Otherwise, the robots may in the long run come up with a solution themselves.

Better adapted to societal needs are the laws proposed by Murphy and Woods:

  • A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  • A robot must respond to humans as appropriate for their roles.
  • A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

But there is a precondition: a robot must add value to society by performing its task. From a legal perspective, I would add:

  • An autonomous intelligent robot must be accepted as an equal partner in performing legal acts.

Legally acting robots must be certified as such, based on preconditions such as intelligence above Turing-test level, with a socially acceptable dynamic intelligence and an understanding of moral and legal norms.




Rob van den Hoven van Genderen is director of the Center for Law and Internet at the Law Faculty of VU University Amsterdam.




