Robohub.org
 

People favour expressive, communicative robots over efficient and effective ones


by Adriana Hamacher
19 August 2016



BERT2, a humanoid robot assistant. Credit: University of Bristol

Making an assistive robot partner expressive and communicative is likely to make it more satisfying to work with and lead to users trusting it more, even if it makes mistakes, a new study suggests.

But the research also shows that giving robots human-like traits could have a flip side – users may even lie to the robot in order to avoid hurting its feelings.

These were the main findings of the study I undertook as part of my MSc in Human Computer Interaction at University College London (UCL), with the objective of designing robotic assistants that people can trust. I’m presenting the research at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) later this month.

With the help of my supervisors, Professors Nadia Berthouze at UCL and Kerstin Eder at the University of Bristol, I constructed an experiment with a humanoid assistive robot helping users to make an omelette. The robot was tasked with passing the eggs, salt and oil but dropped one of the polystyrene eggs in two of the conditions and then attempted to make amends.

The aim was to investigate how a robot might recover a user's trust after making a mistake, and how it can communicate its erroneous behaviour to somebody working with it, either at home or at work.


The somewhat surprising result suggests that, for the majority of users, a communicative, expressive robot is preferable to a more efficient, less error-prone one, despite taking 50 per cent longer to complete the task.

Users reacted well to an apology from the robot that was able to communicate, and were particularly receptive to its sad facial expression, which is likely to have reassured them that it ‘knew’ it had made a mistake.

At the end of the interaction, the communicative robot was programmed to ask participants whether they would give it the job of kitchen assistant, but they could only answer yes or no and were unable to qualify their answers.

Some were reluctant to answer and most appeared very uncomfortable. One person was under the impression that the robot looked sad when he said ‘no’, when it had not been programmed to appear so. Another complained of emotional blackmail and a third went as far as to lie to the robot.

Their reactions would suggest that, having seen the robot display human-like emotion when the egg dropped, many participants were now pre-conditioned to expect a similar reaction and therefore hesitated to say no; they were mindful of the possibility of a display of further human-like distress.

The research underlines that human-like attributes, such as regret, can be powerful tools in mitigating dissatisfaction, but we must identify with care which specific traits we want to focus on and replicate. If there are no ground rules, then we may end up with robots whose personalities differ as much as those of the people designing them.


“Trust in our counterparts is fundamental for successful interaction,” says Kerstin Eder, who leads the Verification and Validation for Safety in Robots research theme at the Bristol Robotics Laboratory. “This study gives key insights into how communication and emotional expressions from robots can mitigate the impact of unexpected behaviour in collaborative robotics. Complementing thorough verification and validation with a sound understanding of these human factors will help engineers design robotic assistants that people can trust.”

The study was aligned with the EPSRC-funded project Trustworthy Robotic Assistants, in which new verification and validation techniques are being developed to ensure the safety and trustworthiness of the machines that will enhance our quality of life in the future.

The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) takes place from 26 to 31 August in New York City, and the study – ‘Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical Human-Robot Interaction’ by Adriana Hamacher, Nadia Bianchi-Berthouze, Anthony G. Pipe and Kerstin Eder – will be published by the IEEE as part of the conference proceedings, available via the IEEE Xplore Digital Library.

A pre-publication copy of the research paper is available at: https://arxiv.org/pdf/1605.08817.pdf





Adriana Hamacher is Associate Editor at Robohub and the UK's Knowledge Transfer Network, and a contributor to Economist Insights.





