The legal issues of robotics

by Andrea Bertolini
06 April 2017




As human-robot interactions become commonplace, MEPs stress that EU-wide rules are needed to guarantee a standard level of safety and security. © AP Images/European Union – EP

Robots are the technology of the future, but the current legal system is incapable of handling them. This generic statement is often the premise for considerations about the possibility of awarding rights (and liabilities) to these machines at some, less than clearly identified, point in time. Discussing the adequacy of existing regulation in accommodating new technologies is certainly necessary, but the ontological approach is incorrect. Instead, a functional approach needs to be adopted, identifying:

  1. what rules can be applied to robots (as is);
  2. what incentives such rules provide;
  3. whether those incentives are desirable.

The recent Resolution of the European Parliament (henceforth, the Resolution) has great political relevance and strategic importance for the development of a European robotics industry. Its considerations and conclusions will be taken into account in the current position paper.


Issues

The first issue when discussing regulation is that of definitions, for one cannot regulate something without first defining it. However, the term robot is not a technical one, and it encompasses a wide range of applications that have very little in common. For this very reason, it is impossible to develop a unitary body of rules applicable to all kinds of robotic applications; rather, different rules should apply to different classes of devices.

The major issue when discussing civil law rules on robotics is that of liability (for damages). Automation might, to some extent, challenge some of the existing paradigms, and increasing human-machine cooperation might cause different sets of existing rules to overlap, leading to uncertainty and thence to increased litigation and difficulties in insuring new products.

Connected to the above is robot testing. A clear legal framework for robot testing outside the restricted environment of the laboratory is needed to assess the kinds of dangers that might emerge with use and their statistical frequency (also for insurance purposes). Similarly, standardization and the development of adequate, narrow-tailored technical standards for different kinds of robots is a major concern, both to ensure product safety and to enable the adoption of possible alternatives to existing liability rules.

A possible non-issue when discussing rules for robotics is that of the attribution of personhood. This, if intended in an ontological way, is devoid of any reasonable grounding in technical, philosophical and legal considerations. Instead, if understood in a purely functional way, the attribution of legal personhood (as in the case of corporations) might, in some cases, be open for discussion. Considering some more specific kinds of applications, in particular biorobotic devices and the issue of human enhancement, their regulation and management becomes of the greatest importance, quite likely the single most relevant bioethical issue of the near future, requiring ad hoc regulation to be adopted.

Finally, privacy regulation, access to data and data use are of pivotal importance, not only for the development of a European robotics industry but, more broadly, for the digital market. All the issues mentioned might fall under some – direct or indirect – competences of the EU and would certainly benefit from regulations adopted at a supranational (thence European) level.


Responses

The Resolution addresses all the above-mentioned issues with consistent considerations, depicting an adequate framework for a technical and legal debate about what narrow-tailored sets of rules should be adopted at the EU level. Overall, it is of the greatest political and strategic importance for defining a modern legal system, favorable to the emergence of new technologies and the proliferation of new businesses.

More specifically:

Definitions: there is a need for an inclusive definition of “robot”. What needs to be avoided are nominalistic discussions, which would inevitably emerge as soon as a regulation was adopted, should the notion of robot be too narrow. Debates about whether a robot needs to be autonomous or not, controlled or not, embodied or not are irrelevant from a legal point of view. Instead, such characteristics should serve to distinguish sub-classes of robots that might be regulated unitarily. Thence, next to a broader and all-encompassing definition of robot (that should include software and non-embodied AI), narrower definitions should be elaborated, pooling together those applications that show some relevant similarities and that can be regulated unitarily.

Liability: Human-machine cooperation will cause different sets of rules to overlap (namely product liability rules and traditional tort law principles). This will cause high levels of uncertainty and litigation, delaying innovation. With respect to compensation, it is, in many cases, sensible to separate the function of ensuring product safety from that of providing the victim with compensation. This might justify the adoption of different alternative solutions: liability exemptions for users and/or manufacturers; the creation of automatic compensation funds (privately or publicly funded); compulsory insurance provisions. More broadly, the inadequacies of existing rules (in particular product liability rules) might suggest radically replacing a fault-based rule with a risk-management approach (based on absolute liability rules), holding liable the party who is best placed to minimize the cost and acquire insurance (Resolution nn. 53, 55). A one-stop-shop approach might be sensible, preventing complex litigation to apportion liability among the different players involved. Which solution is preferable depends on the class of applications considered, the market for such products and the possibility to address those risks through insurance (Resolution nn. 57-59).

Testing: a uniform set of rules allowing testing outside the laboratory, and even in human environments, should be adopted, defining clear standards (in particular with respect to safety, insurance and management of the experiment) and thus reducing the discretionary powers of local authorities (Resolution n. 23).

Standardization & European Robotics Agency: standards represent the most effective way to ensure high levels of product safety and provide certainty ex ante to manufacturers who conform to them (Resolution n. 22). However, the time required for the adoption of a new standard and its breadth are incompatible with the current pace of technological innovation. A European Robotics Agency, such as the one suggested by the Resolution (nn. 15-17), could have strategic importance in setting a supranational standard that could be of use beyond European borders. Otherwise, other leading economies will attempt to do the same.

Electronic Personhood: as set forth by the Resolution, this notion is purely functional and intends to facilitate the registration, insurance and management of some devices (in particular non-embodied AI) with a legal tool that is equivalent to that used for corporations (so-called legal personhood); see Resolution n. 59, let. E) and F).

Human Enhancement: the use of robotics to overcome human limits might become problematic given the lack of a clear set of rules and criteria that could help discern what kinds of manipulations of the human body should be allowed. The constitutional principles of human dignity, equality, and freedom of self-determination, as understood today in the broader bioethical debate, are per se insufficient, and narrower criteria ought to be adopted. The legal grounds to justify an intervention by the EU in this field are less evident than in all the other matters mentioned; however, they can be found in the freedom of movement of EU citizens, which would suggest, to some extent, a uniform framework. With respect to the content of such principles, human dignity ought to be understood as objective and external, limiting self-determination, and the reversibility of the intervention on the body should also be taken into consideration.

Privacy & Free Flow of Data: privacy cannot be granted simply through informed consent. Consent is hardly ever truly informed, and the very possibility to dissent is limited, should one want to use the service or device requiring the collection of personal data for its operation. On the one hand, the current EU Regulation, setting forth the “privacy by design” principle, should be narrowed down through the adoption of specific standards, specifying what satisfies that criterion in different classes of applications (see Recommendation nn. 17-21). On the other hand, consumers should be compensated for allowing access to and use of – private and anonymized – data through post-sale services, enriching the after-sale duties imposed on the producer.





Andrea Bertolini is an assistant professor of private law at the Dirpolis Institute of the Scuola Superiore Sant’Anna in Pisa...




