RoboLaw: Why and how to regulate robotics
The issue is often raised whether robotics needs to be regulated. Some believe there is no need to intervene, because regulation may stifle innovation; others believe intervention is indeed necessary, since robotics may otherwise prove disruptive. Both arguments, however, are partial, and for that very reason wrong. Thanks to existing laws, a robot (like any other physical phenomenon) is already regulated the very moment it materializes.
Contrary to popular belief, the law is faster than any technological development.
If a time machine were invented tomorrow and time travel became reality, every aspect of the machine would already be regulated before news of the device could be shared with the world. If the first time traveller did not come back from his or her trip to the past, that person’s spouse could, under existing legal frameworks, sue the inventors of the time machine and hold them liable for the accident.
Such a claim would be filed in a court somewhere in the world, and a judge would have to determine whether a rule of negligence or instead an objective standard of liability applies, whether a valid contract was entered into between the traveller and the inventors of the machine, and whether the rules governing travel across space also apply to travel across time. Some people would deem the judge’s conclusions correct, and others would deem them wrong, but in any case the decision would still be based on some law and some form of legal reasoning, since the only unacceptable answer from a judge is non liquet.
Given that robotics is already regulated, and that any judge’s decision on its use will produce incentives for future parties that find themselves in the same conditions, as lawmakers we must shift our question from whether we should regulate to how regulation can most fruitfully be structured.
Most current robotic applications could be considered “products” and, pursuant to the European Product Liability Directive or the US Restatement (Third) of Torts, could be straightforwardly addressed as such. While the consequences of applying those rules may in some cases lead to reasonable results and thus appear uncontroversial, in other cases they may be problematic, and indeed produce a technology-chilling effect (e.g. in the cases of driverless vehicles and robotic prostheses).
In the problematic cases, ad-hoc regulation may prove reasonable, necessary, and ultimately essential for the correct development of robotic technologies. Precisely because of that, opponents of robotic development would often be better served by legislative inertia rather than intervention. By avoiding the adoption of new rules, the negative incentives triggered by existing regulation would in fact deter the development of those applications.
While product liability rules are normally intended as a tool to incentivize high standards of safety in the design of products, in some cases no actual positive effect is achieved. For instance, after the United States introduced a liability exemption shielding the producers of general aviation aircraft from litigation (the General Aviation Revitalization Act, GARA), the number of accidents did not increase.
For these very reasons, sceptical technologists, engineers and roboticists may come to believe that a thoughtful legislative environment is essential to their work, their research and their business. When adequately pondered and rationally applied, regulation is therefore not an evil, but a useful tool.
How to regulate: The RoboLaw approach
Financed by the European Commission within FP7, the RoboLaw Project adopted a new approach to the problem of how to regulate robotics that is distinctly different from previous methodologies.
Firstly, the term “robot” is not a technical term from either an engineering or a legal perspective, but is derived from science fiction. As such, the RoboLaw project takes the position that there is no purpose in trying to develop an all-encompassing definition of the term “robot”. Rather than attempting to identify a common trait among such varied applications as robotic prostheses, driverless vehicles, softbots, industrial robots, robot companions, and automated vacuum cleaners, the RoboLaw project examines the peculiarities of each and points out the differences between them.
For the purpose of examining the ethical, legal and social implications of robotics, this implies renouncing the idea of developing a uniform solution, a code or set of rules for robots as a single category. RoboLaw did not attempt to further elaborate, substitute, or overcome the laws of robots as Asimov thought of them.
Instead, the RoboLaw project determined that the best approach was to undertake a case-by-case analysis, addressing single kinds – or classes – of applications, pointing out the technical peculiarities of each, and through that, identifying both the ethical and legal implications that the emergence and diffusion of a similar technology may give rise to. Based on these considerations, the RoboLaw guidelines address four different application areas: driverless vehicles, robotic prostheses (and exoskeletons), surgical robots, and robot companions.
Each chapter, devoted to one single kind of device, starts with a technological analysis (conducted by engineers) that summarises the essential aspects of the technology, clarifies how it functions, and points out the most relevant challenges being faced for its advancement. Then, applying existing ethical approaches and theories, the issues are identified and discussed, and policy considerations provided. Finally, a legal analysis considers applicable rules, determines if the incentives provided are desirable, and offers alternatives where needed.
The chapters then formulate recommendations that a legislator could use to enact ad-hoc rules for the specific robot addressed.
Which technologies to choose
In choosing which robotic technologies to address, RoboLaw took into account the novelty of the application, its possible societal impact, and its relevance.
Three of the applications considered by RoboLaw – robotic prosthetics/exoskeletons, surgical robots and robot companions – fall under the area of healthcare and, according to a recent study by McKinsey, are among the robotics technologies that show the most promise in terms of their ability to improve the quality of life of great numbers of people.
The fourth – driverless vehicles – could represent a ground-breaking innovation, reduce the number of fatal road accidents, and eventually reshape the concept of transportation, traffic and the way our cities are designed and function.
Because the analysis required a good understanding of the technologies in question, we also considered the specific competencies of the engineers in our research group. As such, several relevant devices were left out, including drones, which are certain to modify many aspects of our future lives, and military robots with their many complex political implications.
Nonetheless, RoboLaw’s methodology can be applied to any robotic device, and should the project receive further funding, other technologies will most certainly be studied with this approach.
When and how to intervene: A functional approach
The RoboLaw project adopted a functional perspective for deciding when and how to intervene.
It is not a robot’s intrinsic technical quality or characteristic alone that calls for regulation; the robot’s ability to operate autonomously, and even its ability to learn and adapt its functioning, do not per se suffice to justify a change in perspective.
Even a robot that can perform complex tasks without human supervision and take decisions towards that end may still not be deemed an agent in a philosophical sense, let alone a legal one. The robot is still an object, a product, a device, not bearing rights but meant to be used. What would justify a shift on a purely ontological basis (thus forcing us to consider the robot as a being provided with rights and duties) is what Gutman, Rathgeber and Syed call ‘strong autonomy’ – namely the ability to decide for oneself and set one’s own goals. However, at present this belongs to the realm of science fiction, and it can be argued that this is not the direction we desire to take with robots in any case: we want robots to ease our lives, and therefore to do what we decide they should be doing. Should they be free to decide if, how and when to perform what we ask according to their own tastes and preferences, the purpose of developing robotic technologies would be defeated.
If society is to favour or pose some limits to technological development, then the technical aspects of the individual robotic device need to be taken into account, together with other elements, in order to decide when to regulate and in which direction. Market structure, size and condition (which may lead to market failures) on the one hand, and constitutional principles and fundamental rights on the other, have a bearing in this respect.
In the case of robotic prostheses, which certainly qualify as products, the application of existing rules would most likely discourage their development. Considering the costs associated with research, and the limited market of potential users (at least in the early stages of product development), a producer may decide never to develop a technology in the first place if it would be held strictly liable for all damages a prosthesis might cause.
Nonetheless, given the improvement in quality of life such technology could offer amputees, a policy argument could be made for a better liability system in which compensation is granted to the victim and the production of safe devices is ensured without holding researchers and producers liable under all circumstances. Moreover, some legal grounds could be found to justify actively favouring their development (e.g. art. 4 of the UN Convention on the Rights of Persons with Disabilities).
Adopting a functional perspective therefore entails analysing the laws that are currently applicable in light of the technological peculiarities and challenges of the device in order to determine the effect that they would produce, the incentives they trigger, and finally, to elaborate a superior solution to the case. The direction to be favoured is determined through the complex weighing of benefits and risks the device would bring about, pursuant to existing principles and rights, in particular as emerging from constitutions and other fundamental rights declarations.
To summarize some of the most relevant considerations drawn by the guidelines: definitions are a very relevant issue. Unless driverless vehicles are defined, and such definitions are used to modify road traffic codes, such devices will not be allowed to circulate.
Liability rules could in many cases present a substantial obstacle to the development of desirable applications. Nonetheless, the preferred solutions are technology-specific, and take into account technical peculiarities as well as market structures. For instance, while a large market of users might suggest that a compulsory insurance system requiring owners of driverless vehicles to insure themselves against damages to third parties is sufficient, it does not follow that the same rule could be successfully applied to prostheses or robot companions. The initially more limited market for prostheses, and the potentially very high and uncertain costs existing liability rules could give rise to, would probably discourage private insurance companies from offering such contracts. In the case of robot companions, more effective solutions could be conceived, including the attribution of legal personhood, similar to what is done today with corporations.
Since liability pursues safety together with compensation, in some cases the proven inefficacy of existing product liability rules may call for reform. The two issues may be addressed separately: the pursuit of standardization could prove the best approach for safety, while the adoption of automatic compensation mechanisms (no-fault plans) could ensure victims appropriate compensation.
There are several standardization bodies at both the international and European level (respectively, the International Organization for Standardization and the European Standards Organisations). Their role should be strengthened, and specific authorities should eventually be established with the purpose of setting high safety standards that are narrow enough to be tailored to specific applications – ones that producers can actually conform to. The current European directives governing CE marking, for instance, have too broad a scope: the Medical Device Directive applies to anything from a surgical glove to an exoskeleton.
Standardization may also benefit the field of professions that make use of robotic applications, such as surgeons. Requiring doctors to complete specific training in order to obtain a licence to use a specific kind of surgical robot may at once increase the level of performance, reduce accidents and subsequent lawsuits, and encourage the diffusion of such novel techniques.
Can RoboLaw’s guidelines be used outside Europe?
Though the guidelines were developed for the European Commission and address the European legal system, they could easily be transposed to other systems. The problems that emerging robotic technologies give rise to vary little from country to country, and so the solutions proposed by RoboLaw could be broadly similar. Considering that the main aim of the guidelines is to provide narrowly tailored recommendations suggesting how legislators should intervene, such results could be applied, with some necessary adaptation, far beyond European borders.
Moreover, even if the solutions were criticized – and we expect them to be in some cases – they do provide a starting point for a technical debate in legal and philosophical terms around specific solutions to precisely defined problems, which itself is a novel outcome that we truly hope to encourage.