Stable visual navigation


by Sabine Hauert
20 April 2011




If you’re trying to get from the couch to the fridge, you’ll probably use vision to navigate and home in on your fresh drink.

To make a camera-equipped robot do something similar, give it an image taken from the location it is trying to reach (the target image). By comparing features between its current camera image and the target image, the robot can determine in which direction to move to make the two images match.
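To make the idea concrete, here is a minimal sketch of feature-based steering, assuming OpenCV and NumPy are available. The ORB detector, brute-force matching, and the mean-horizontal-offset heuristic are illustrative choices, not the method used in the paper:

```python
import cv2
import numpy as np

def steering_error(current_img, target_img):
    """Signed horizontal error in pixels between matched features.

    Positive means matched features in the target image sit to the right
    of those in the current image, suggesting a right turn (a crude
    heuristic used here only for illustration).
    """
    orb = cv2.ORB_create()
    kp_c, des_c = orb.detectAndCompute(current_img, None)
    kp_t, des_t = orb.detectAndCompute(target_img, None)
    if des_c is None or des_t is None:
        return 0.0  # no features detected; hold course

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_c, des_t)
    if not matches:
        return 0.0  # nothing matched between the two views

    # Mean horizontal displacement of matched points (target minus current).
    dx = [kp_t[m.trainIdx].pt[0] - kp_c[m.queryIdx].pt[0] for m in matches]
    return float(np.mean(dx))
```

A controller might feed this error into a proportional steering command and drive forward while the two images still disagree.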

However, challenges arise as the robot nears its goal: when there is little change between the current and target images, the robot’s motion can start to oscillate. To avoid this oscillation, López-Nicolás et al. propose replacing the target image with a smartly chosen virtual image computed at the beginning of the task. Example features in the current, target, and virtual images are shown below, followed by a sketch of the idea.

Left: Initial image of an experiment with the point features detected. Right: Target image with the points matched (circles) and the computed virtual target points (squares).
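One way to picture the virtual-target idea is to precompute, from the matched points, a set of virtual points that keep the image error well-conditioned near the goal. The extrapolation scheme and the overshoot parameter below are assumptions made purely for illustration; the paper’s actual construction of the virtual image differs:

```python
import numpy as np

def virtual_target_points(initial_pts, target_pts, overshoot=0.2):
    """Compute illustrative virtual target points once, at task start.

    initial_pts, target_pts: (N, 2) arrays of matched pixel coordinates
    in the initial and target images. Each virtual point is pushed a
    little past its real target along the initial-to-target direction,
    so the feature error does not shrink toward sensor noise as the
    robot closes in on the goal.
    """
    initial_pts = np.asarray(initial_pts, dtype=float)
    target_pts = np.asarray(target_pts, dtype=float)
    return target_pts + overshoot * (target_pts - initial_pts)
```

The robot would then servo on the virtual points and declare the goal reached once the residual error drops below a threshold, rather than waiting for it to vanish entirely.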

Experiments were carried out on an ActivMedia Pioneer P3-DX robot equipped with a forward-looking Point Grey Research Flea2 camera. Results show that the robot navigates smoothly towards the target.

In the future, the authors hope to equip their robots with omnidirectional cameras, allowing them to reach targets in any direction around them.




Sabine Hauert is President of Robohub and Associate Professor at the Bristol Robotics Laboratory




