The legal issues of robotics


by Andrea Bertolini
06 April 2017




As human-robot interactions become commonplace, MEPs stress that EU-wide rules are needed to guarantee a standard level of safety and security. © AP Images/European Union – EP

Robots are the technology of the future, but the current legal system is incapable of handling them. This generic statement is often the premise for considerations about the possibility of awarding rights (and liabilities) to these machines at some ill-defined point in the future. Discussing the adequacy of existing regulation in accommodating new technologies is certainly necessary, but the ontological approach is incorrect. Instead, a functional approach needs to be adopted, identifying:

  1. what rules can be applied to robots as they stand;
  2. what incentives such rules provide;
  3. whether those incentives are desirable.

The recent Resolution of the European Parliament (henceforth, the Resolution) has great political relevance and strategic importance for the development of a European robotics industry. Its considerations and conclusions are taken into account in this position paper.


Issues

The first issue when discussing regulation is that of definitions, for one cannot regulate something without first defining it. However, the term robot is not a precise technical one: it encompasses a wide range of applications that have very little in common. For this very reason, it is impossible to develop a unitary body of rules applicable to all kinds of robotic applications; rather, different rules should apply to different classes of devices.

The major issue when discussing civil law rules on robotics is that of liability (for damages). Automation might, to some extent, challenge some of the existing paradigms, and increasing human-machine cooperation might cause different sets of existing rules to overlap, leading to uncertainty, hence increased litigation and difficulties in insuring new products.

Connected to the above is robot testing. A clear legal framework for robot testing outside the restricted environment of the laboratory is needed to assess the kinds of dangers that might emerge with use and their statistical frequency (also for insurance purposes). Similarly, standardization and the development of adequate, narrowly tailored technical standards for different kinds of robots are a major concern, both to ensure product safety and to enable the adoption of possible alternatives to existing liability rules.

A possible non-issue when discussing rules for robotics is that of the attribution of personhood. If intended in an ontological way, this is deprived of any reasonable grounding in technical, philosophical, and legal considerations alike. If understood in a purely functional way, instead, the attribution of legal personhood (as in the case of corporations) might in some cases be open for discussion. Turning to more specific kinds of applications, in particular biorobotic devices, the issue of human enhancement and its regulation and management becomes of the greatest importance and is quite likely the single most relevant bioethical issue of the near future, requiring ad-hoc regulation to be adopted.

Finally, privacy regulation, access to data, and data use are of pivotal importance, not only for the development of a European robotics industry but more broadly for the digital market. All the mentioned issues might fall under some direct or indirect competence of the EU and would certainly benefit from regulation adopted at a supranational (hence European) level.


Responses

The Resolution addresses all the above-mentioned issues with coherent considerations, depicting an adequate framework for a technical and legal debate about what narrowly tailored sets of rules should be adopted at the EU level. Overall, it is of the greatest political and strategic importance for defining a modern legal system, favorable to the emergence of new technologies and the proliferation of new businesses.

More specifically:

Definitions: an inclusive definition of “robot” is needed. What must be avoided are nominalistic discussions, which would inevitably emerge as soon as a regulation was adopted should the notion of robot be too narrow. Debates about whether a robot needs to be autonomous or not, controlled or not, embodied or not are irrelevant from a legal point of view. Instead, such characteristics should serve to distinguish sub-classes of robots that can be regulated unitarily. Hence, next to a broader and all-encompassing definition of robot (one that includes software and non-embodied AI), narrower definitions should be elaborated, pooling together those applications that show relevant similarities and that can be regulated unitarily.

Liability: human-machine cooperation will cause different sets of rules to overlap (namely product liability rules and traditional tort law principles). This will cause high levels of uncertainty and litigation, delaying innovation. With respect to compensation, it is in many cases sensible to separate the function of ensuring product safety from that of providing the victim with compensation. This might justify the adoption of different alternative solutions: liability exemptions for users and/or manufacturers; the creation of automatic compensation funds (privately or publicly funded); compulsory insurance provisions. More broadly, the inadequacies of existing rules (in particular product liability rules) might suggest radically replacing a fault-based rule with a risk-management approach (based on absolute liability rules), holding liable the party best placed to minimize the cost and acquire insurance (Resolution nn. 53, 55). A one-stop-shop approach might be sensible, preventing complex litigation to apportion liability among the different players involved. Which solution is preferable depends on the class of applications considered, the market for such products, and the possibility of addressing those risks through insurance (Resolution nn. 57-59).

Testing: a uniform set of rules allowing testing outside the laboratory, and even in human environments, should be adopted, defining clear standards (in particular with respect to safety, insurance, and management of the experiment) and thus reducing the discretionary powers of local authorities (Resolution n. 23).

Standardization & European Robotics Agency: standards represent the most effective way to ensure high levels of product safety and to provide certainty ex ante to manufacturers who conform to them (Resolution n. 22). However, the time required for the adoption of a new standard, and its breadth, are incompatible with the current pace of technological innovation. A European Robotics Agency, such as the one suggested by the Resolution (nn. 15-17), could have strategic importance in setting a supranational standard that could be of use beyond European borders; otherwise, other leading economies will attempt to do the same.

Electronic Personhood: as set forth by the Resolution, this notion is purely functional and is intended to facilitate the registration, insurance, and management of some devices (in particular non-embodied AI) with a legal tool equivalent to that used for corporations (so-called legal personhood); see Resolution n. 59, lett. e) and f).

Human Enhancement: the use of robotics to overcome human limits might become problematic given the lack of a clear set of rules and criteria that could help discern what kinds of manipulations of the human body should be allowed. The constitutional principles of human dignity, equality, and freedom of self-determination, as understood today in the broader bioethical debate, are per se insufficient, and narrower criteria ought to be adopted. The legal grounds to justify an intervention by the EU in this field are less evident than in all the other matters mentioned; however, they can be found in the freedom of movement of EU citizens, which would suggest, to some extent, a uniform framework. With respect to the content of such principles, human dignity ought to be understood as objective and external, limiting self-determination, and the reversibility of the intervention on the body should also be taken into consideration.

Privacy & Free Flow of Data: privacy cannot be granted simply through informed consent. Consent is hardly ever truly informed, and the very possibility to dissent is limited, should one want to use a service or device that requires the collection of personal data for its operation. On the one hand, the current EU Regulation setting forth the “privacy by design” principle should be narrowed down through the adoption of specific standards, specifying what satisfies that criterion in different classes of applications (see Recommendation nn. 17-21). On the other hand, consumers should be compensated for allowing access to and use of their (private and anonymized) data through post-sale services, enriching the after-sale duties imposed on the producer.





Andrea Bertolini is an assistant professor of private law at the Dirpolis Institute of the Scuola Superiore Sant’Anna in Pisa...




