Robohub.org
 

What do teachers mean when they say ‘do it like me’?


by
17 February 2014




This post is part of our ongoing efforts to make the latest papers in robotics accessible to a general audience.

Teaching robots to do tasks is useful, and teaching them quickly and easily is even more so. TRIC, the algorithm presented in the latest paper in Autonomous Robots, allows a robot to observe a few motions from a human teacher, understand the essence of the demonstration, and then repeat and adapt it to new situations.

To be helpful to humans, robots need to learn to move and carry out useful tasks. However, tasks that are easy for a human, like grasping a glass, are not so obvious for a machine, and programming a robot by hand takes time and effort. Instead, what if the robot could watch a human and learn what the human did, why, and how?

This is something we humans do all the time. Imagine you are learning tennis and the teacher says 'do the forehand like me' and then shows an example. How should the student interpret this? Should they focus on the fingers, or the elbow? Should they watch the ball, the racket, the ground, or the net? All of these possible reference points can be described with numbers. The algorithm presented in this paper, called Task Space Retrieval Using Inverse Feedback Control (TRIC), helps a robot learn which aspects of a demonstrated motion actually matter. Afterwards, the robot should be able to reproduce the motion like an expert, even if the task changes slightly.
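To make the idea concrete, here is a minimal Python sketch. The toy state encoding, the candidate features, and the variance-based scoring rule are assumptions for illustration only, not the paper's formulation: the sketch simply ranks candidate reference points (task-space features) by how consistently the demonstrations constrain them.

    # Rough sketch, NOT the paper's actual method: rank candidate task-space
    # features by how consistently they end up at the same value across a few
    # demonstrations. A feature the teacher deliberately controls (e.g. the
    # hand-to-box distance) should vary little; irrelevant features will not.
    import numpy as np

    def feature_relevance(demonstrations, feature_fns):
        """demonstrations: list of trajectories, each an array of states.
        feature_fns: dict mapping a feature name to a function state -> scalar.
        Returns a score per feature; higher means a more consistent final value
        across demonstrations, i.e. more likely to be task-relevant."""
        scores = {}
        for name, fn in feature_fns.items():
            final_values = np.array([fn(traj[-1]) for traj in demonstrations])
            scores[name] = 1.0 / (1e-6 + final_values.var())
        return scores

    # Toy states: [hand_x, hand_y, box_x, box_y] (hypothetical encoding).
    demos = [
        np.array([[0.0, 0.5, 1.0, 0.0], [0.6, 0.2, 1.0, 0.0], [1.0, 0.0, 1.0, 0.0]]),
        np.array([[0.9, 0.9, 0.3, 0.6], [0.5, 0.8, 0.3, 0.6], [0.3, 0.6, 0.3, 0.6]]),
    ]
    features = {
        "hand_to_box_distance": lambda s: np.linalg.norm(s[:2] - s[2:]),
        "hand_height": lambda s: s[1],
    }
    print(feature_relevance(demos, features))  # the relative distance scores highest

The paper itself frames this as an inverse feedback control problem over many candidate task variables rather than a simple variance test, but the intuition is the same: keep only the task spaces that the demonstrations consistently constrain.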

The algorithm was successfully tested in simulation on various grasping and manipulation tasks. In one of these tasks, a robot hand must approach a box and open its cover. The robot was shown 10 demonstration trajectories from a simulated teacher. After training, it was asked to open a series of boxes that had been moved, rotated, or resized. Overall, TRIC handled these scenarios well, succeeding in 24 out of 25 tries.

For more information, you can read the paper Discovering relevant task spaces using inverse feedback control (N. Jetchev and M. Toussaint, Autonomous Robots – Springer US, Feb 2014) or ask questions below!





Autonomous Robots Blog Latest publications in the journal Autonomous Robots (Springer).








 







 











