Robohub.org
 

Robot’s Delight: Japanese robots rap about their artificial intelligence

28 March 2017




YouTube: Dylan F Glas

“Robot’s Delight – A Lyrical Exposition on Learning by Imitation from Human-Human Interaction” is a video submission that won Best Video at the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017). The team also provides an in-depth explanation of the techniques and robots featured in the video.

Watch it here:

Select video clips (c) 2016, IEEE. Reused, with permission.


Although social robots are growing in popularity and technical feasibility, it is still unclear how we can effectively program social behaviors. There are many difficulties in programming social robots — we need to design hundreds or thousands of dialogue rules, anticipate situations the robot will face, handle common recognition errors, and program the robot to respond to many variations of human speech and behavior. Perhaps most challenging is that we often do not understand the reasoning behind our own behavior and so it is hard to program such implicit knowledge into robots.

In this video, we present two studies exploring learning-by-imitation from human-human interaction. In these studies, we developed techniques for learning typical actions and execution logic directly from people interacting naturally with each other. We propose that this is more scalable and robust than developing interaction logic by hand, and it requires much less effort.

In the first study, we asked participants to role-play a shopkeeper and a customer in a camera shop scenario, and we recorded their motion and speech in 178 interactions. By extracting typical motion and speech actions using unsupervised clustering algorithms, we created a set of robot behaviors and trained a machine learning classifier to predict which of those actions a human shopkeeper would have performed in any given situation. Using this classifier, we programmed a Robovie robot to imitate the movement and speech behavior of the shopkeeper, e.g. greeting the customer, answering questions about camera features, and introducing different cameras.
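The paper itself does not publish its code, but the pipeline described above can be sketched in miniature: cluster customer utterances into "typical actions" with an unsupervised similarity-based method, then learn which shopkeeper action typically follows each customer cluster. Everything below is invented for illustration — the toy camera-shop log, the bag-of-words representation, the greedy clustering, and the threshold are all simplifications of the actual system, which used richer features, motion data, and a trained classifier.

```python
from collections import Counter, defaultdict
import math

def bow(utterance):
    """Bag-of-words vector for an utterance (token -> count)."""
    return Counter(utterance.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(utterances, threshold=0.5):
    """Greedy clustering: join an utterance to the first cluster whose
    centroid is similar enough, otherwise start a new cluster."""
    clusters = []   # list of (centroid Counter, member indices)
    labels = []
    for i, u in enumerate(utterances):
        v = bow(u)
        for cid, (centroid, members) in enumerate(clusters):
            if cosine(v, centroid) >= threshold:
                centroid.update(v)   # centroid accumulates member counts
                members.append(i)
                labels.append(cid)
                break
        else:
            clusters.append((v, [i]))
            labels.append(len(clusters) - 1)
    return labels, clusters

# Toy "camera shop" log: (customer utterance, shopkeeper response)
log = [
    ("hello", "welcome to the camera shop"),
    ("hi there", "welcome to the camera shop"),
    ("how much does this camera cost", "it costs 300 dollars"),
    ("what does this camera cost", "it costs 300 dollars"),
    ("does it have a zoom lens", "yes it has a 10x zoom"),
]

cust_labels, cust_clusters = cluster([c for c, _ in log])

# For each customer-action cluster, the "typical" shopkeeper action is the
# most frequent response observed for that cluster (majority vote).
responses = defaultdict(Counter)
for lbl, (_, resp) in zip(cust_labels, log):
    responses[lbl][resp] += 1
policy = {lbl: cnt.most_common(1)[0][0] for lbl, cnt in responses.items()}

def predict(utterance):
    """Assign a new customer utterance to its nearest cluster centroid
    and return the learned shopkeeper response."""
    v = bow(utterance)
    best = max(range(len(cust_clusters)),
               key=lambda cid: cosine(v, cust_clusters[cid][0]))
    return policy[best]

print(predict("how much is this camera"))
```

Note that the paraphrase "how much is this camera" never appears verbatim in the log, yet it lands in the price-question cluster — a toy version of the robustness to speech variation described above.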

Experimental results showed that our techniques enabled the robot to perform correct behaviors 84.8% of the time, which was particularly interesting since speech recognition was only 76.8% accurate. This illustrates the robustness of the system to sensor noise, which is one advantage of using noisy, real-world data for training.

In the second study, we used a similar technique to train the android ERICA to imitate people’s behavior in a travel agent scenario. In this case, the challenge was to model the topic of the interaction so that ERICA could learn to answer ambiguous questions like “how much does this package cost?”.

We did this by observing that utterances in the same topic tend to occur together in interactions, so we calculated co-occurrence metrics similar to those used in product recommendation systems for online shopping sites. Using these metrics, we were able to cluster the customer and shopkeeper actions into topics, and these topics were used to improve ERICA’s predictor in order to answer ambiguous questions.
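The co-occurrence idea can also be sketched compactly. Assuming (as an illustration — the action labels, toy data, Jaccard metric, and threshold here are all invented) that each interaction is summarized as the set of action labels observed in it, actions that frequently appear in the same interactions can be linked and the connected components taken as topics:

```python
from itertools import combinations
from collections import defaultdict

# Toy travel-agent data: each interaction is the set of action labels
# observed in it. Actions about the same topic co-occur in interactions.
interactions = [
    {"ask_price_hawaii", "answer_price_hawaii", "ask_hotel_hawaii"},
    {"ask_price_hawaii", "answer_price_hawaii"},
    {"ask_price_paris", "answer_price_paris", "ask_hotel_paris"},
    {"ask_hotel_paris", "answer_price_paris"},
]

# Which interactions does each action occur in?
occ = defaultdict(set)
for i, acts in enumerate(interactions):
    for a in acts:
        occ[a].add(i)

def jaccard(a, b):
    """Co-occurrence similarity: interactions containing both actions
    over interactions containing either (as in item-item recommenders)."""
    either = occ[a] | occ[b]
    return len(occ[a] & occ[b]) / len(either) if either else 0.0

# Link actions whose co-occurrence exceeds a threshold, then take
# connected components as "topics" (simple union-find).
parent = {a: a for a in occ}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

for a, b in combinations(occ, 2):
    if jaccard(a, b) >= 0.5:
        parent[find(a)] = find(b)

topics = defaultdict(set)
for a in occ:
    topics[find(a)].add(a)
for members in topics.values():
    print(sorted(members))
```

With a topic assigned to each action, an ambiguous question like "how much does this package cost?" can be resolved by preferring the answer action that shares a topic with the recent conversation, which is the role these topic clusters played in improving ERICA's predictor.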

In both of these studies, we illustrated a completely hands-off approach to developing robot interaction logic – the robots learned only from example data of people interacting with each other, and no designer or programmer was needed! We think scalable, data-driven techniques like these promise to be powerful tools for developing even richer, more humanlike interaction logic for robots in the future.

The extended abstract for this video can be found here.

For the full details of the Robovie study, please see our IEEE Transactions on Robotics paper.

Phoebe Liu, Dylan F. Glas, Takayuki Kanda, and Hiroshi Ishiguro, Data-Driven HRI: Learning Social Behaviors by Example from Human-Human Interaction, in IEEE Transactions on Robotics, Vol. 32, No. 4, pp. 988-1008, 2016.

Author’s preprint is here.





Dylan Glas is the lead architect for the ERICA android system in the ERATO Ishiguro Symbiotic Human-Robot Interaction Project.




