Learning tasks across different environments


27 July 2010




In the future, robots will be expected to learn a task and execute it in a variety of realistic situations. Reinforcement-learning and planning algorithms are intended for exactly that purpose. However, one of the main challenges is ensuring that actions learned in one environment can be applied to new and unforeseen situations in real time.

To address this challenge, Stolle et al. have developed a set of algorithms, which they demonstrate on complex tasks such as solving a marble maze and making Boston Dynamics' LittleDog navigate rough terrain (see video below).

The first ingredient of success is to have robots learn what action to take based on local features, meaning features as seen from the robot's point of view (e.g., "there is a wall to the right"). These local features can then be recognized in new environments whenever the robot finds itself in a similar situation. In contrast, many existing algorithms rely on global information, for example "perform this action at position (x, y, z)". Changing the environment typically renders such global policies useless. A toy sketch of the difference is shown below.
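The following minimal sketch (hypothetical names, not code from the paper) illustrates the contrast: a policy keyed on absolute coordinates only works in the environment it was learned in, whereas a policy keyed on robot-centric features can be reused wherever the same local situation recurs.

```python
# Hypothetical sketch: global-coordinate policy vs. local-feature policy.

# A "global" policy ties actions to absolute (x, y) cells; it breaks as
# soon as the maze or terrain layout changes.
global_policy = {
    (2, 3): "move_right",
    (2, 4): "move_up",
}

# A "local" policy ties actions to features as seen from the robot, so the
# same rule transfers to any environment where the situation recurs.
local_policy = {
    ("wall_right", "open_ahead"): "move_forward",
    ("wall_ahead", "open_left"): "turn_left",
}

def act(observed_features):
    """Pick an action from the local policy; fall back to exploring."""
    return local_policy.get(tuple(observed_features), "explore")

print(act(["wall_right", "open_ahead"]))  # -> "move_forward"
```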

The second ingredient has robots build libraries of action sequences (trajectories) that can bring a robot from a given state to a desired goal. To perform a task, the robot then applies the actions from the trajectory nearest to its current state. This strategy is attractive because it is computationally cheap and does not require large amounts of fast memory; a simplified sketch follows.
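The sketch below is a simplified, hypothetical illustration of a trajectory library (the 2-D states, actions, and function names are assumptions for illustration, not taken from the paper): each stored trajectory records the states it visited and the action taken at each one, and at run time the robot finds the stored state nearest to where it is and replays the remaining actions of that trajectory.

```python
import math

# Hypothetical trajectory library: each entry stores visited states and the
# action taken at each state on a recorded path to the goal.
library = [
    {   # trajectory 1
        "states":  [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)],
        "actions": ["forward", "turn_left", "forward"],
    },
    {   # trajectory 2
        "states":  [(3.0, 1.0), (2.0, 1.0)],
        "actions": ["forward", "forward"],
    },
]

def actions_from_nearest(current_state):
    """Find the stored state nearest (Euclidean) to the current state and
    return the remaining actions of that trajectory."""
    best = None  # (distance, trajectory, index)
    for traj in library:
        for i, s in enumerate(traj["states"]):
            d = math.dist(s, current_state)
            if best is None or d < best[0]:
                best = (d, traj, i)
    _, traj, i = best
    return traj["actions"][i:]

# The lookup is just a distance search over stored states, so it is cheap
# at run time and the library itself needs little memory.
print(actions_from_nearest((0.9, 0.1)))  # -> ['turn_left', 'forward']
```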

Finally, don't miss the following video of LittleDog climbing over a fence. This special-purpose behavior can be reused in a variety of situations.




Sabine Hauert is President of Robohub and Associate Professor at the Bristol Robotics Laboratory





