Robohub.org
 

Planning robot motions that humans can relate to


28 August 2014




This post is part of our ongoing efforts to make the latest papers in robotics accessible to a general audience.

Imagine a scenario where a robot and a human are collaborating side by side to perform a tightly coupled physical task together, like clearing a table.

A task like this places an unusual burden on the robot’s motion. Most motion in robotics is purely functional: industrial robots move to package parts, vacuuming robots move to suck dust, and personal robots move to clean up a dirty table. This type of motion is ideal when the robot is performing a task in isolation.

Collaboration, however, does not happen in isolation. In collaboration, the robot’s motion has a human observer, watching and interpreting the motion.

In a recent paper in Autonomous Robots, Dragan et al. move beyond functional motion, and introduce the notion of an observer and their inferences into motion planning, so that robots can generate motion that is mindful of how it will be interpreted by a human collaborator.

When we collaborate, we make two inferences about our collaborator, action-to-goal and goal-to-action, leading to two important motion properties: legibility and predictability.

Legibility is about conveying intent — moving in a manner that makes the robot’s goal clear to the observer; we infer the robot’s goal based on its ongoing action (action-to-goal). Predictability is about matching the observer’s expectation — matching the motion they predict when they know the robot’s goal; if we know the robot’s goal, we infer its future action from it (goal-to-action).
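The action-to-goal inference can be sketched concretely. The toy model below is not the paper’s actual formulation, just a minimal illustration under a simplifying assumption: the observer scores each candidate goal by how efficient the motion seen so far would be for that goal (measured by straight-line distances), then normalizes the scores into probabilities. All names and parameters here are illustrative.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def goal_probabilities(start, current, goals, beta=1.0):
    """Action-to-goal inference: P(goal | motion observed so far).

    A goal is plausible if the observed partial motion looks efficient
    for it: cost already incurred plus cost-to-go, compared against the
    cost of heading straight from start to that goal.  This is a
    simplified, straight-line-cost stand-in for the paper's model.
    """
    scores = []
    for g in goals:
        # extra cost ("detour") the partial motion implies for goal g
        detour = dist(start, current) + dist(current, g) - dist(start, g)
        scores.append(math.exp(-beta * detour))
    total = sum(scores)
    return [s / total for s in scores]

# Two candidate goals close together: a direct reach is ambiguous,
# while a reach that bends away from the distractor disambiguates.
start = (0.0, 0.0)
goals = [(10.0, 1.0), (10.0, -1.0)]
direct = goal_probabilities(start, (5.0, 0.0), goals)   # roughly 50/50
bent = goal_probabilities(start, (5.0, 3.0), goals)     # favors goal 0
print(direct, bent)
```

Note how the same current position can be predictable yet uninformative: the direct reach keeps both goals equally likely until the very end, which is exactly the ambiguity the article describes next.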

Predictable and legible motion can be correlated. For example, consider an unambiguous situation, in which an actor’s observed motion matches what is expected for a given intent (i.e. is predictable); that intent can then be used to explain the motion. If this is the only intent that explains the motion, the observer can immediately infer the actor’s intent, meaning that the motion is also legible. This is why we tend to assume that predictability implies legibility — that if the robot moves in an expected way, then its intentions will automatically be clear. But this isn’t necessarily the case.

The context of handwriting can help us understand what distinguishes these two concepts. Traditionally an attribute of handwritten text, the word “legibility” refers to the quality of being “easy to read”. When we write legibly, we try consciously, and with some effort, to make our writing clear and readable to someone else, the way we would if we were writing an essay for a teacher or a letter for a friend. The word “predictability”, on the other hand, refers to the quality of matching expectation. When we write predictably, we fall back to old habits, and write with minimal effort, the way we might if we were taking lecture notes or writing in a personal diary.

Our legible handwriting is meant to be observed and understood, while our predictable handwriting need only be understood by ourselves. As a consequence, our “legible” and “predictable” handwriting look different. Our friends do not expect to open our personal diary and see our legible writing style … they rightfully assume the diary is meant for personal use, and would expect to see our day-to-day (and perhaps hard to understand) handwriting; our friends might be frustrated, however, if we used this same handwriting to write them a letter.

By formalizing predictability and legibility as directly stemming from the two inferences in opposing directions, goal-to-action and action-to-goal, we show that the two are different in motion as well.
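A rough sketch of the formalization, with notation paraphrased from the paper and its companion work (treat the exact forms as illustrative rather than definitive): predictability rewards a trajectory $\xi$ for having low cost under the observer’s expectation model, while legibility rewards it for making the true goal $G_R$ inferable quickly.

```latex
% Predictability: goal-to-action. A trajectory is predictable if it is
% what a cost-minimizing observer would expect for the known goal.
\mathrm{Predictable}(\xi) \;\propto\; \exp\bigl(-C(\xi)\bigr)

% Legibility: action-to-goal. Score the probability the observer
% assigns to the true goal G_R along the way, weighting early
% timesteps more via a decreasing weight f(t).
\mathrm{Legible}(\xi) \;=\;
  \frac{\int P\bigl(G_R \mid \xi_{S \to \xi(t)}\bigr)\, f(t)\, dt}
       {\int f(t)\, dt}
```

Here $C$ is the observer’s assumed cost function over trajectories, and $P(G \mid \xi_{S \to Q})$ comes from Bayes’ rule over the candidate goals. Because the two criteria optimize inferences in opposite directions, maximizing one need not maximize the other, which is precisely the opposition the article goes on to illustrate.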

[Figure: two robot-hand reaching trajectories toward a green object placed next to a red one: a direct, predictable path (left) and a legible path that bends away from the red object (right).]

Ambiguous situations (occurring often in daily tasks) make this opposition clear: more than one possible intent can be used to explain the motion observed so far, rendering the predictable motion illegible. The figure above exemplifies the effect of this contradiction. The robot hand’s motion on the left is predictable in that it matches expected behavior. The hand reaches out directly towards the target. And yet it is not legible, as it fails to make the intent of grasping the green object clear. In contrast, the trajectory on the right is more legible, making it clear that the target is the green object by deliberately bending away from the red object. But it is less predictable, as it does not match the expected behavior of reaching directly.

Dragan et al. produce predictable and legible motion by mathematically modeling how humans infer motion from goals and goals from motion, and introducing trajectory optimizers that maximize the probability that the right inferences will be made. The figure below shows the robot starting with a predictable trajectory (gray) and optimizing it to be more and more legible (orange).
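The optimization step can also be sketched in miniature. The code below is a toy stand-in for the paper’s trajectory optimizer, under stated simplifications: goal inference uses a straight-line cost model, the legibility score is a time-weighted average of the probability assigned to the true goal, and the optimizer is naive numeric gradient ascent on the inner waypoints of a discretized 2D trajectory. All function names and parameters are illustrative.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def goal_prob(start, current, goals, target_idx, beta=1.0):
    # P(target goal | motion observed up to `current`), straight-line cost model
    scores = []
    for g in goals:
        detour = dist(start, current) + dist(current, g) - dist(start, g)
        scores.append(math.exp(-beta * detour))
    return scores[target_idx] / sum(scores)

def legibility(traj, goals, target_idx):
    # Time-weighted average probability of the true goal along the
    # trajectory; earlier timesteps weigh more (disambiguate early).
    n = len(traj)
    weights = [n - t for t in range(1, n)]
    vals = [goal_prob(traj[0], traj[t], goals, target_idx) for t in range(1, n)]
    return sum(w * v for w, v in zip(weights, vals)) / sum(weights)

def optimize(traj, goals, target_idx, steps=200, lr=0.5, eps=1e-4):
    # Coordinate-wise numeric gradient ascent on inner waypoints;
    # endpoints (start and goal) stay fixed.
    traj = [list(p) for p in traj]
    for _ in range(steps):
        for i in range(1, len(traj) - 1):
            for d in range(2):
                traj[i][d] += eps
                up = legibility(traj, goals, target_idx)
                traj[i][d] -= 2 * eps
                down = legibility(traj, goals, target_idx)
                traj[i][d] += eps
                traj[i][d] += lr * (up - down) / (2 * eps)
    return traj

start, goal_a, goal_b = (0.0, 0.0), (10.0, 1.0), (10.0, -1.0)
# Predictable seed: a straight line to the true goal (goal_a).
seed = [(x, x / 10.0) for x in (0.0, 2.5, 5.0, 7.5, 10.0)]
before = legibility(seed, [goal_a, goal_b], 0)
optimized = optimize(seed, [goal_a, goal_b], 0)
after = legibility(optimized, [goal_a, goal_b], 0)
print(before, after)
```

Running this, the waypoints drift away from the distractor goal, raising the legibility score: the same exaggeration-away-from-the-alternative behavior the figure shows, emerging here from nothing but gradient ascent on the inference model.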

[Figure: a predictable initial trajectory (gray) optimized to become progressively more legible (orange), exaggerating toward the object on the right.]

By exaggerating the motion to the right, it becomes more immediately clear that the robot’s goal is the object on the right. Exaggeration is one of the principles of Disney animation, and it naturally emerges out of the mathematics of legible motion.

For more information, you can read the paper Integrating human observer inferences into robot motion planning (Anca Dragan, Siddhartha Srinivasa, Autonomous Robots – Springer US, August 2014) or ask questions below!





Autonomous Robots Blog Latest publications in the journal Autonomous Robots (Springer).

