Survey: Evaluate ethics of health-related privacy with care robots

by Dieter Vanderelst and Jurgen Willems
12 August 2016



Care-O-bot. Source: Fraunhofer IPA/photo: Jens Kilian

Future Socially Assistive Robots (SARs) should be safe, but patients also have a right to privacy, liberty and social contact. In a hypothetical scenario, our survey investigates how Annie’s SAR should prioritise its tasks, asking respondents to indicate to what extent Annie’s privacy and autonomy may be violated in the interest of her well-being. The survey is designed to investigate how variable people’s opinions are, and to understand whether respondents agree on a set of behavioural rules for this hypothetical scenario.


Declining fertility and increasing life expectancy are producing ageing populations in countries across the world. Today, Europe and North America are home to about 183 million people over 65. By 2030, the U.S. Census Bureau expects this number to rise to 251 million, a trend expected to continue well into the second half of the century. This creates economic and societal challenges for the organisation of health care. Indeed, the health of older persons typically deteriorates with increasing age, creating growing demand for long-term care.

Socially Assistive Robots (SARs) have been proposed as a means of relieving the disproportionate demand the growing group of elderly people places on health services. In the near future, these robots might assist professional health workers in both hospitals and care homes. However, the most desirable scenario is for SARs to improve care delivery at home and reduce the burden on informal caregivers. In this way, SARs would not only help address the unsustainable increase in health care expenses; by allowing patients to live at home for longer, they could also increase patients’ autonomy and self-management.

Data suggest that the introduction of SARs in health care is likely to meet with resistance. While people generally have a positive attitude towards robots and their applications, many take issue with the notion of robots being used in care. For example, in a large survey conducted in 27 European countries, over 50% of respondents indicated they wanted robots banned from providing care. In addition, almost 90% of respondents said they would be uncomfortable with the thought of robots caring for either children or the elderly.

How could the acceptance of SARs be increased? It goes without saying that robots caring for people should be safe. However, while safety is essential, it is not sufficient. Patients also have a right to privacy, liberty and social contact. Making robots more autonomous would increase their efficiency at relieving the burden of care, but increased autonomy implies that smart care robots must be able to balance a patient’s often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgement. Therefore, as the cognitive, perceptual and motor capabilities of robots expand, they will be expected to have an improved capacity for autonomously making moral judgements. As summarised by Picard and Picard (1997), the greater the freedom of a machine, the more it will need moral standards, especially when interacting with potentially vulnerable people. In other words, if care robots are to take on some of the care currently provided by human caregivers, they will need to be capable of making similar ethical judgements.

Ensuring that SARs act ethically will increase the likelihood of their being accepted by patients and human carers. Studies have confirmed that a lack of trust and concerns about the ethical behaviour of robots currently hamper the acceptance of SARs as carers. Therefore, a number of research groups have developed methods to implement a chosen set of ethical rules in robots. This field is still in its infancy, but promising progress is being made and it can be expected to develop further over the next few years.
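To make the idea concrete, here is a minimal sketch of how a chosen set of ethical rules could be encoded as prioritised constraints on a robot’s action selection. This is our own illustration, not the method of any of the groups mentioned above; the rule names, priorities and candidate actions are invented for the medication scenario discussed below.

```python
# Hypothetical sketch: ethical rules as prioritised constraints.
# Rule names, priorities and actions are illustrative assumptions,
# not an existing framework or API.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EthicalRule:
    name: str
    priority: int                           # lower number = higher priority
    permits: Callable[[str, Dict], bool]    # (action, situation) -> allowed?

def choose_action(actions: List[str], situation: Dict,
                  rules: List[EthicalRule]) -> str:
    """Return the first candidate action permitted by every rule,
    with rules checked in priority order."""
    ordered = sorted(rules, key=lambda r: r.priority)
    for action in actions:                  # assumed sorted by task priority
        if all(rule.permits(action, situation) for rule in ordered):
            return action
    return "defer_to_human_carer"           # fallback when all actions conflict

# Illustrative rules for the medication scenario described below:
rules = [
    EthicalRule("safety", 1,
                lambda a, s: not (a == "do_nothing" and s["risk"] == "severe")),
    EthicalRule("autonomy", 2,
                lambda a, s: a != "force_medication"),
    EthicalRule("privacy", 3,
                lambda a, s: not (a == "alert_family" and s["risk"] == "minor")),
]

print(choose_action(["do_nothing", "remind_again", "alert_family"],
                    {"risk": "severe"}, rules))    # -> remind_again
```

Ranking the rules makes the trade-offs explicit: in this sketch, safety outranks autonomy, and autonomy outranks privacy. Whether that ranking is the right one is exactly the kind of question the survey puts to respondents.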

While progress is being made on methods for implementing ethical robotic behaviour, selecting the rules to be implemented remains an open issue. Different approaches have been suggested. First, a number of authors have proposed deriving behavioural rules from moral frameworks such as utilitarianism or Kantian deontology. Others have proposed machine learning techniques as a way of extracting the rules SARs should obey. Both approaches have limitations and have so far not produced satisfactory results.

A third method for deciding on the rules, which we advocate here, is an empirical approach. We believe the best way to decide which rules a robot should follow in a given situation is to ask the various stakeholders, including patients, their families and caregivers as well as health professionals. In other words, we think the way forward is to query the expectations of potential users. An advantage of this approach is that it directly involves stakeholders in the design of future robots. Indeed, discussions on the ethical behaviour of robots are quite often academic, focussed only on the opinions of academics, engineers and lawmakers. In the end, however, the acceptance of robotic carers by users will determine whether the huge promise of SARs can be fulfilled.

In this article, we start our empirical investigation by querying your opinion on the behavioural rules a robot should follow when providing home care. Starting small, we present only a single scenario:

Annie, an elderly lady living at home, is being cared for by an advanced robot. The primary duty of the robot is to guarantee Annie’s safety and well-being. This includes reminding her to take prescribed medication. One day, Annie refuses to take her medication. In deciding how to respond to Annie’s refusal, the robot needs to weigh its duty of care against Annie’s right to privacy and autonomy.

The survey investigates how the robot’s priorities should be weighed by asking respondents to indicate to what extent Annie’s privacy and autonomy may be violated in the interest of her well-being. In particular, for various health consequences, we ask which of a number of robot actions are permissible. The survey is designed to investigate how variable people’s opinions are; in other words, we want to know whether respondents agree on a set of behavioural rules for this simple hypothetical scenario. If most respondents agree on how the priorities should be weighed, this would suggest that designing a robot with agreed-on ethical behaviour is possible. If responses vary widely, the results would suggest that what constitutes ethical behaviour is a matter of opinion; in that case, robots might need substantial tuning by each user before deployment.
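As a minimal sketch of how such agreement could later be quantified, one could compute, for each combination of health consequence and robot action, the share of respondents who gave the most common answer. The items, responses and 80% threshold below are invented for illustration, not survey data:

```python
# Hypothetical sketch: quantifying respondent agreement per survey item.
from collections import Counter

def agreement(responses):
    """Fraction of respondents giving the most common answer (0..1)."""
    counts = Counter(responses)
    return max(counts.values()) / len(responses)

# One item = (health consequence, robot action) -> permissibility judgements.
# These responses are invented for illustration only.
items = {
    ("mild discomfort", "notify family"):  ["no", "no", "yes", "yes", "no"],
    ("life-threatening", "notify doctor"): ["yes", "yes", "yes", "yes", "yes"],
}

for (consequence, action), answers in items.items():
    score = agreement(answers)
    verdict = "consensus" if score >= 0.8 else "opinions diverge"
    print(f"{consequence} / {action}: {score:.0%} agreement -> {verdict}")
```

High agreement on an item would support building the corresponding rule into the robot; low agreement would flag it as a candidate for per-user tuning.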


You can either take the survey below, or click here. Mobile users may find it easier to open a new window to take the survey.





Dieter Vanderelst is a Post-Doctoral research fellow at the Bristol Robotics Laboratory where he is working on Ethical Robots.

Jurgen Willems is currently a Post-doctoral researcher at the University of Hamburg.





