Robohub.org
 

What do teachers mean when they say ‘do it like me’?


17 February 2014




This post is part of our ongoing efforts to make the latest papers in robotics accessible to a general audience.

Teaching robots to do tasks is useful, and teaching them in a way that is quick and easy for the teacher is even more useful. The TRIC algorithm, presented in the latest paper in Autonomous Robots, allows a robot to observe a few motions from a human teacher, extract the essence of the demonstration, and then repeat it and adapt it to new situations.

Robots need to learn to move and perform useful tasks if they are to be helpful to humans. However, tasks that are easy for a human, like grasping a glass, are far from obvious for a machine, and programming a robot by hand takes time and effort. Instead, what if the robot could watch the human and learn both what the human did and why?

This is something we humans do all the time. Imagine you are playing tennis and the teacher says ‘do the forehand like me’ and then shows an example. How should the student interpret this? Should he copy the fingers, or the elbow? Should he watch the ball, the racket, the ground, or the net? All these possible reference points can be described with numbers. The algorithm presented in this paper, called Task Space Retrieval Using Inverse Feedback Control (TRIC), helps a robot identify which aspects of a demonstrated motion actually matter. Afterwards, the robot can reproduce the motion like an expert, even if the task changes slightly.
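To give a rough flavour of the idea, here is a minimal Python sketch. It is not the method from the paper: it simply assumes that candidate task-space features (hand-to-lid distance, wrist angle, and so on) whose final values come out consistent across all demonstrations are the ones the teacher cared about, and scores them accordingly. All function and variable names here are hypothetical.

import numpy as np

def feature_relevance(demos, feature_fns):
    # demos: list of trajectories, each an array of shape (T, state_dim).
    # feature_fns: candidate task-space features, each mapping a state to a
    # scalar (e.g. hand-to-lid distance). Returns one weight per feature.
    # Evaluate every feature at the final state of every demonstration.
    finals = np.array([[f(demo[-1]) for f in feature_fns] for demo in demos])
    # A feature the teacher consistently achieved has low variance across
    # demonstrations; invert the variance to turn it into a relevance score.
    var = finals.var(axis=0)
    relevance = 1.0 / (1.0 + var)
    return relevance / relevance.max()

# Toy usage: ten fake demonstrations whose first two state dimensions
# always end near the same goal, while the third ends randomly.
rng = np.random.default_rng(0)
demos = []
for _ in range(10):
    traj = rng.normal(size=(50, 3))
    traj[-1, :2] = np.array([1.0, 2.0]) + 0.01 * rng.normal(size=2)
    demos.append(traj)

dist_to_goal = lambda s: np.linalg.norm(s[:2])  # consistent -> relevant
wrist_angle = lambda s: s[2]                    # random -> less relevant
print(feature_relevance(demos, [dist_to_goal, wrist_angle]))
# The consistent distance feature receives the top weight.

In the paper itself, the relevance of each task-space feature is instead recovered through an inverse feedback control formulation, and the learned task spaces are then used to control the robot's motion when it reproduces the task.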

The algorithm was successfully tested in simulation on various grasping and manipulation tasks. In one of these tasks, a robot hand must approach a box and open its cover. The robot was shown 10 sets of trajectories from a simulated teacher. After training, it was asked to open a series of boxes that were moved, rotated, or of a different size. Overall, TRIC handled these scenarios well, succeeding in 24 out of 25 trials.

For more information, you can read the paper ‘Discovering relevant task spaces using inverse feedback control’ (N. Jetchev and M. Toussaint, Autonomous Robots, Springer US, February 2014) or ask questions below!





Autonomous Robots Blog Latest publications in the journal Autonomous Robots (Springer).







 
