
Robot’s Delight: Japanese robots rap about their artificial intelligence


by Dylan Glas
28 March 2017




YouTube: Dylan F Glas

“Robot’s Delight – A Lyrical Exposition on Learning by Imitation from Human-Human Interaction” is a video submission that won Best Video at the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017). Below, the team provides an in-depth explanation of the techniques and robots featured in the video.

Watch it here:

Select video clips (c) 2016, IEEE. Reused, with permission.


Although social robots are growing in popularity and technical feasibility, it is still unclear how we can effectively program social behaviors. There are many difficulties in programming social robots — we need to design hundreds or thousands of dialogue rules, anticipate situations the robot will face, handle common recognition errors, and program the robot to respond to many variations of human speech and behavior. Perhaps most challenging is that we often do not understand the reasoning behind our own behavior and so it is hard to program such implicit knowledge into robots.
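To give a sense of why hand-authoring does not scale, here is a small hypothetical sketch (in Python) of what rule-based dialogue logic can look like; the rules, phrasings, and replies are invented for illustration and are not taken from the studies described here.

```python
# Hypothetical, hand-authored dialogue rules. Every phrasing variation,
# recognition error, and situation must be anticipated by a designer,
# which is exactly what makes this approach hard to scale.
RULES = [
    {"if_heard": ["how much is this camera", "what does this cost"],
     "then_say": "This model is 300 dollars."},
    {"if_heard": ["does it have image stabilization"],
     "then_say": "Yes, it has optical image stabilization."},
    # ...hundreds or thousands more rules, each written and tested by hand...
]

def respond(utterance: str) -> str:
    """Return a canned reply, or a fallback when no rule anticipated the input."""
    for rule in RULES:
        if any(phrase in utterance.lower() for phrase in rule["if_heard"]):
            return rule["then_say"]
    return "Sorry, could you repeat that?"

print(respond("How much is this camera?"))  # -> "This model is 300 dollars."
```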

In this video, we present two studies exploring learning-by-imitation from human-human interaction. In these studies, we developed techniques for learning typical actions and execution logic directly from people interacting naturally with each other. We propose that this is more scalable and robust than developing interaction logic by hand, and it requires much less effort.

In the first study, we asked participants to role-play a shopkeeper and a customer in a camera shop scenario, and we recorded their motion and speech in 178 interactions. By extracting typical motion and speech actions using unsupervised clustering algorithms, we created a set of robot behaviors and trained a machine learning classifier to predict which of those actions a human shopkeeper would have performed in any given situation. Using this classifier, we programmed a Robovie robot to imitate the movement and speech behavior of the shopkeeper, e.g. greeting the customer, answering questions about camera features, and introducing different cameras.
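As a rough sketch of this kind of pipeline (not the authors' actual implementation), the Python snippet below clusters pre-extracted speech and motion feature vectors into typical shopkeeper actions and then trains a classifier to predict which action follows a given interaction state. The file names, feature design, cluster counts, and choice of classifier are placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Hypothetical pre-extracted features from the recorded human-human interactions:
# one speech vector per shopkeeper utterance, one motion vector per movement.
speech_vectors = np.load("shopkeeper_speech_vectors.npy")   # placeholder file
motion_vectors = np.load("shopkeeper_motion_vectors.npy")   # placeholder file

# 1. Unsupervised clustering: discover "typical" speech and motion actions.
speech_clusters = KMeans(n_clusters=200, random_state=0).fit(speech_vectors)
motion_clusters = KMeans(n_clusters=20, random_state=0).fit(motion_vectors)

# 2. Supervised prediction: each interaction state (customer speech, positions)
#    is paired with the shopkeeper action that followed it. Assumes the state
#    rows are aligned one-to-one with the shopkeeper utterances above.
states = np.load("interaction_states.npy")    # placeholder file, one row per turn
action_labels = speech_clusters.labels_       # typical-action ID for each turn
predictor = RandomForestClassifier(n_estimators=100, random_state=0)
predictor.fit(states, action_labels)

def choose_robot_action(current_state: np.ndarray) -> int:
    """Return the ID of the typical action a human shopkeeper would likely take."""
    return int(predictor.predict(current_state.reshape(1, -1))[0])
```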

Experimental results showed that our techniques enabled the robot to perform correct behaviors 84.8% of the time, which was particularly interesting since speech recognition was only 76.8% accurate. This illustrates the system's robustness to sensor noise, one advantage of training on noisy, real-world data.

In the second study, we used a similar technique to train the android ERICA to imitate people’s behavior in a travel agent scenario. Here, the challenge was to model the topic of the interaction so that ERICA could learn to answer ambiguous questions like “How much does this package cost?”

We did this by observing that utterances on the same topic tend to occur together within interactions, so we calculated co-occurrence metrics similar to those used by product recommendation systems on online shopping sites. Using these metrics, we clustered the customer and shopkeeper actions into topics, which were then used to improve ERICA’s predictor so it could answer ambiguous questions.
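The snippet below is a rough illustration of that co-occurrence idea, assuming each recorded interaction has already been reduced to a set of utterance-cluster IDs; the toy data, the raw-count similarity, and the use of spectral clustering are assumptions for illustration, not ERICA’s actual implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy data: each set holds the utterance clusters that appeared together in one
# recorded human-human interaction (invented for illustration).
interactions = [{0, 3, 7}, {0, 3, 9}, {1, 4, 7}, {1, 4, 8}]
n_utterance_clusters = 10

# Count how often each pair of utterance clusters appears in the same interaction,
# analogous to "customers who bought X also bought Y" in recommender systems.
cooccurrence = np.zeros((n_utterance_clusters, n_utterance_clusters))
for utterances in interactions:
    for i in utterances:
        for j in utterances:
            if i != j:
                cooccurrence[i, j] += 1

# Group utterance clusters that frequently co-occur into "topics"; an ambiguous
# question can then be interpreted using the topic of the surrounding utterances.
topics = SpectralClustering(
    n_clusters=3, affinity="precomputed", random_state=0
).fit_predict(cooccurrence + 1e-6)  # small offset keeps the affinity graph connected

print(topics)  # topic assignment for each utterance cluster
```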

In both of these studies, we illustrated a completely hands-off approach to developing robot interaction logic – the robots learned only from example data of people interacting with each other, and no designer or programmer was needed! We think scalable, data-driven techniques like these promise to be powerful tools for developing even richer, more humanlike interaction logic for robots in the future.

The extended abstract for this video can be found here.

For the full details of the Robovie study, please see our IEEE Transactions on Robotics paper.

Phoebe Liu, Dylan F. Glas, Takayuki Kanda, and Hiroshi Ishiguro, “Data-Driven HRI: Learning Social Behaviors by Example from Human-Human Interaction,” IEEE Transactions on Robotics, vol. 32, no. 4, pp. 988-1008, 2016.

Author’s preprint is here.





Dylan Glas is the lead architect for the ERICA android system in the ERATO Ishiguro Symbiotic Human-Robot Interaction Project.

