Robohub.org
 

Artificial people: How will the law adapt to intelligent systems?


by Rob van den Hoven van Genderen
31 March 2017




Robotics technology is no longer limited to industry. Climate controls, 3D printers, surveillance robots, drones, household robots and even sex robots are entering the private market. The more autonomous these systems become, the harder it becomes to resolve conflicts, such as those between humans and software.

The law currently recognizes natural persons, like you and me. Companies, organizations and governments can also negotiate agreements and bear liability. These non-natural persons are represented by real people (they must, after all, be controlled). But what about autonomous systems that take over tasks and make intelligent decisions that might be interpreted as legal acts?

Social robots will not just vacuum your house; they might pay bills, independently enter into contracts and, who knows, take the car to get groceries for you. Civil law as it is written, however, sees the robot as an object, not as a subject with legal capacity. One issue is reminiscent of Bicentennial Man, a science fiction film from 1999 based on a 1977 story by Isaac Asimov. The robot Andrew Martin wants to be recognized as a ‘natural person’, a request the court rejects with the argument that a robot lives forever. Years later, the robot asks for a review: an update has enabled him to die. Moreover, according to Andrew, the judge himself makes use of non-natural resources.

Industrial robots do not yet have legal capacity. They carry out instructions in a defined process. The need for legal personality arises only when a robot participates in society. A social robot in a care function, for example, must make arrangements with physicians and suppliers. That cannot happen without some acceptance of legal personality. But at what level will artificial intelligence be “equivalent” to that of natural persons?

The Turing test may have seemed definitive in 1950, but does it still count? According to the test, a machine qualifies as intelligent on a “human” level if an interrogator, for example in a chat, gets the impression that she is talking to a real person. The robot should interpret statements emotionally and morally and react to them appropriately; in other words, it must have social intelligence. It should also respond appropriately to changing circumstances to qualify as having dynamic action-intelligence.
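Turing's setup can be sketched as a tiny simulation. The following is a purely illustrative Python sketch, not a real certification procedure; the `judge`, `human` and `machine` functions are hypothetical stand-ins for the interrogator and the two respondents:

```python
import random

def imitation_game(judge, human, machine, questions):
    """Toy sketch of Turing's imitation game: a judge questions two
    unlabeled respondents and must guess which one is the machine."""
    # Randomly assign the two respondents to the anonymous slots A and B.
    slots = {"A": human, "B": machine}
    if random.random() < 0.5:
        slots = {"A": machine, "B": human}
    # Each respondent answers every question; the judge sees only the text.
    transcript = {label: [respond(q) for q in questions]
                  for label, respond in slots.items()}
    guess = judge(transcript)  # the judge names the slot it thinks is the machine
    machine_label = "A" if slots["A"] is machine else "B"
    return guess != machine_label  # True if the machine fooled the judge
```

The point of the sketch is that the test is behavioural: the judge never inspects the respondents, only their answers, which is exactly why the article asks whether such a threshold is enough for legal personality.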

Could you imagine a robot as your colleague or boss? The Swedish series “Real Humans” has already dramatized the scenario. Can you dismiss individuals once their job is done better by a robot? Can we accept that a robot has authority over an individual? Labour law does not give an answer. A topical example is a self-driving car that crashes. According to road traffic laws, the driver is responsible. But what if control depends on road management, the vehicle manufacturer, the meteorological service, navigation systems and the algorithm that made the car self-learning?

Sustainable robot law is therefore particularly complicated in the area of liability. And if a robot is guilty, does it make sense to punish it? Can we pull the plug, as in the movie I, Robot? At the same time, fewer conflicts can be expected: ninety percent of traffic accidents are caused by human error, the rest by circumstances such as a falling tree or a flat tire.

And what is a natural person anyway? Is it a man or woman who was not born of another human being but composed of an artificial heart, artificial kidneys, artificial limbs, an artificial brain, and so on, and brought to life in a laboratory? What about discrimination? And if a human body has been upgraded with robotics, can we just switch it off?

Back to the distinction between a legal object and a subject of law. Add to that sui generis, a legal phenomenon that is the only one of its kind. Asimov already offered three laws:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders conflict with the First Law.
  • A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Good thoughts, but motivated by fear and the desire for human control. Adherence to these laws might actually put the brakes on the development of artificial intelligence. But what we want is progress, right?

It’s time to create a commission for this issue—an international multidisciplinary committee consisting of lawyers, philosophers, ethicists, computer scientists, political scientists and economists. Otherwise, the robots might in the long run provide themselves with a solution.

More adapted to the societal need are the proposed laws of Murphy and Woods:

  • A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  • A robot must respond to humans as appropriate for their roles.
  • A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

But there is a precondition: a robot must add value to society by performing its task. From a legal perspective, I would add:

  • An autonomous intelligent robot must be accepted as an equal partner in performing legal acts.

Legally acting robots must be certified as such based on preconditions such as: a Turing-test level of socially acceptable dynamic intelligence and a societal and legal understanding of moral and legal norms.







Rob van den Hoven van Genderen is director of the Center for Law and Internet of the Law Faculty of the VU University of Amsterdam






©2025.05 - Association for the Understanding of Artificial Intelligence


 











