Robohub.org
 

People favour expressive, communicative robots over efficient and effective ones


by Adriana Hamacher | 19 August 2016



BERT2, a humanoid robot assistant. Credit: University of Bristol

Making an assistive robot partner expressive and communicative is likely to make it more satisfying to work with and lead to users trusting it more, even if it makes mistakes, a new study suggests.

But the research also shows that giving robots human-like traits could have a flip side – users may even lie to the robot in order to avoid hurting its feelings.

These were the main findings of the study I undertook as part of my MSc in Human Computer Interaction at University College London (UCL), with the objective of designing robotic assistants that people can trust. I’m presenting the research at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) later this month.

With the help of my supervisors, Professors Nadia Berthouze at UCL and Kerstin Eder at the University of Bristol, I constructed an experiment with a humanoid assistive robot helping users to make an omelette. The robot was tasked with passing the eggs, salt and oil but dropped one of the polystyrene eggs in two of the conditions and then attempted to make amends.

The aim was to investigate how a robot may recover a user’s trust when it makes a mistake, and how it can communicate its erroneous behaviour to somebody who is working with it, either at home or at work.

BERT2, a humanoid robot assistant. Credit: University of Bristol

The somewhat surprising result suggests that, for the majority of users, a communicative, expressive robot is preferable to a more efficient, less error-prone one, despite taking 50 per cent longer to complete the task.

Users reacted well to an apology from the robot that was able to communicate, and were particularly receptive to its sad facial expression, which likely reassured them that it ‘knew’ it had made a mistake.

At the end of the interaction, the communicative robot was programmed to ask participants whether they would give it the job of kitchen assistant, but they could only answer yes or no and were unable to qualify their answers.

Some were reluctant to answer and most appeared very uncomfortable. One person was under the impression that the robot looked sad when he said ‘no’, when it had not been programmed to appear so. Another complained of emotional blackmail and a third went as far as to lie to the robot.

Their reactions would suggest that, having seen the robot display human-like emotion when it dropped the egg, many participants were pre-conditioned to expect a similar reaction and therefore hesitated to say no, mindful that refusing might provoke a further display of human-like distress.

The research underlines that human-like attributes, such as regret, can be powerful tools in mitigating dissatisfaction, but we must identify with care which specific traits we want to focus on and replicate. If there are no ground rules, then we may end up with robots with different personalities, just like the people designing them.

BERT2, a humanoid robot assistant. Credit: University of Bristol

“Trust in our counterparts is fundamental for successful interaction,” says Kerstin Eder, who leads the Verification and Validation for Safety in Robots research theme at the Bristol Robotics Laboratory. “This study gives key insights into how communication and emotional expressions from robots can mitigate the impact of unexpected behaviour in collaborative robotics. Complementing thorough verification and validation with sound understanding of these human factors will help engineers design robotic assistants that people can trust.”

The study was aligned with the EPSRC-funded project Trustworthy Robotic Assistants, in which new verification and validation techniques are being developed to ensure the safety and trustworthiness of the machines that will enhance our quality of life in the future.

The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) takes place from 26 to 31 August in New York City, and the study – ‘Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical Human-Robot Interaction’ by Adriana Hamacher, Nadia Bianchi-Berthouze, Anthony G. Pipe and Kerstin Eder – will be published by the IEEE as part of the conference proceedings, available via the IEEE Xplore Digital Library.

A pre-publication copy of the research paper is available at: https://arxiv.org/pdf/1605.08817.pdf


Adriana Hamacher is Associate Editor at Robohub and the UK's Knowledge Transfer Network, and a contributor to Economist Insights.


©2025.05 - Association for the Understanding of Artificial Intelligence