Robohub.org
 

Gestures improve communication – even with robots


31 March 2016



NAO robot. Photo courtesy: Paul Bremner/UWE

By: Abigail Pattenden

In the world of robot communication, it seems actions speak louder than words. Scientists in the UK have discovered that by getting robot avatars to “talk with their hands,” we understand them as well as we do our fellow human beings.

Avatars have been in existence since the 1980s and today are used by millions of people across the globe. They are big business too: from artificial intelligence to social media, and from psychotherapy to high-end video games, they are used to sell things, to solve problems, to teach us and to entertain us. As avatars become more sophisticated and their use in society grows, research is focusing on how to improve communication with them. Getting your message across through your avatar matters more than ever.

Scientists Paul Bremner and Ute Leonards took on this challenge in a recent study published in Frontiers in Psychology. They built their study around the hypothesis that if avatars were to use “iconic” hand gestures together with speech, we would understand them more easily. Iconic gestures have a distinct meaning, like opening a door or a book, and using gestures together with speech is known as “multi-modal communication.” The aim of the study was to discover if people could understand avatars performing multi-modal communication as well as they could a human actor. The study also investigated if multi-modal communication by an avatar was more understandable than speech alone.

A tele-operator controlling the NAO robot. Photo courtesy: Paul Bremner/UWE

To test their theory, the scientists filmed an actor reading out a series of phrases whilst performing specific iconic gestures. They then filmed an avatar using these recorded phrases and mimicking the gestures. Films of both the actor and the avatar were shown to the experiment participants, who had to identify what the human and the avatar were trying to communicate. The research was a success: the scientists showed that multi-modal communication by avatars is indeed more understandable than speech alone. Not only that, but when avatars use multi-modal communication, we understand them as well as we do humans.

Getting the NAO robot to wave. Photo courtesy: Paul Bremner/UWE

Getting avatars to talk with their hands in the same way that humans do was a challenge in itself. Whilst the actor performed the gestures, his movements were tracked using a Microsoft Kinect sensor, so that his arm gestures could be recorded as data; the avatar then used this data to mimic his gestures. The equipment did have some limitations, however: the avatar does not have the same hand shape or range of movement as a human – something the pair plans to work on in the future.
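The article does not reproduce the study's actual retargeting pipeline, but the core idea – turning tracked 3D joint positions from a sensor like the Kinect into joint angles a robot such as NAO could replay – can be sketched roughly. Everything below (the coordinate values, the helper name `angle_between`) is illustrative, not taken from the paper:

```python
import math

def angle_between(a, b, c):
    """Angle (in radians) at joint b, formed by the segments b->a and b->c.

    Each point is an (x, y, z) tuple, e.g. a tracked skeleton joint position.
    This is the kind of quantity a robot's elbow or shoulder joint would be
    commanded to, after clamping to the robot's own joint limits.
    """
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.acos(dot / (n1 * n2))

# Hypothetical skeleton frame: shoulder, elbow and wrist positions in metres.
shoulder = (0.0, 1.4, 0.0)
elbow = (0.3, 1.4, 0.0)
wrist = (0.3, 1.7, 0.0)

# A right-angled arm pose gives an elbow angle of pi/2 radians (90 degrees).
elbow_angle = angle_between(shoulder, elbow, wrist)
```

A real system would repeat this per joint and per frame, then clamp each angle to the robot's joint limits – which is exactly where the hand-shape and range-of-movement limitations mentioned above come in.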

Despite the limitations, the scientists' research showed that their method of translating human gestures to an avatar was successful. More importantly, they are confident that the avatar's gestures, when used with speech, are as easily understood as those from a human. Now that this is established, the pair plans to carry out more research in the field. Future work will involve looking at more types of gestures in different settings, plus how to make the translation of gestures from human to avatar more efficient. There will be plenty of work to keep them going – they have yet to take on different cultures, and Italy, a nation famed for expressive hand gestures, is still on the horizon.

Read more about the research in Frontiers in Psychology.





Frontiers in Psychology is an open access journal that aims to publish the best research across the entire field of psychology.






 

©2025.05 - Association for the Understanding of Artificial Intelligence


 











