Robohub.org
 

Stable visual navigation


by Sabine Hauert
20 April 2011



If you’re trying to get from the couch to the fridge, you’ll probably use vision to navigate and home in on your fresh drink.

To make your camera-equipped robot do something similar, give it an image taken from the location it is trying to reach (the target image). By comparing features between its current camera image and the target image, the robot can determine which direction to move in to make the two images match.
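The idea can be sketched in a few lines. The function below is our own illustration, not the authors' controller: given matched feature coordinates in the current and target images, the mean horizontal offset suggests a turning direction, and the change in how spread out the features are gives a crude forward/backward signal (features spread apart as the camera approaches).

```python
from statistics import mean, pstdev

def steering_from_matches(current_pts, target_pts):
    """Illustrative sketch (not the paper's exact control law): derive a
    turn and an advance command from matched image features.

    current_pts / target_pts: lists of (x, y) coordinates of the same
    features in the current and target images, with x measured from
    the image centre.
    """
    # If features must shift horizontally to line up with the target
    # image, the sign of the mean offset gives the turning direction.
    turn = mean(t[0] - c[0] for t, c in zip(target_pts, current_pts))

    # Features spread apart as the camera approaches the scene, so the
    # ratio of point-cloud spreads is a crude forward/backward signal.
    spread_now = mean((pstdev(p[0] for p in current_pts),
                       pstdev(p[1] for p in current_pts)))
    spread_goal = mean((pstdev(p[0] for p in target_pts),
                        pstdev(p[1] for p in target_pts)))
    advance = spread_goal / max(spread_now, 1e-9) - 1.0  # > 0: move forward

    return turn, advance
```

A real system would obtain the matched points from a feature detector and matcher (e.g. in OpenCV) and feed `turn` and `advance` through gains into wheel commands.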

However, challenges often arise as the robot nears its goal. When there is little change between the current and target images, the robot's motion can start to oscillate. To avoid this, López-Nicolás et al. propose replacing the target image with a smartly chosen virtual image, computed once at the beginning of the task. Example features in the current, target and virtual images are shown below.

Left: Initial image of an experiment with the point features detected. Right: Target image with the points matched (circles) and the computed virtual target points (squares).
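One intuition for why a nearby target causes trouble: the steering direction is read off the displacement of matched features, and once that displacement shrinks towards the pixel-noise floor, its direction becomes unreliable. The toy model below is our illustration, not the paper's analysis (the 18-pixel virtual offset is an arbitrary assumption): keeping the reference displacement large, as a virtual target does, preserves a stable heading estimate.

```python
import math
import random

def heading_std(displacement, pixel_noise, trials=2000, seed=1):
    """Toy model: RMS heading error (radians) when the direction of
    motion is read from a noisy feature displacement of the given
    magnitude (pixels)."""
    rng = random.Random(seed)
    sq_sum = 0.0
    for _ in range(trials):
        dx = displacement + rng.gauss(0.0, pixel_noise)
        dy = rng.gauss(0.0, pixel_noise)
        sq_sum += math.atan2(dy, dx) ** 2
    return (sq_sum / trials) ** 0.5

# Near the goal, the current-to-target displacement approaches the
# pixel-noise floor and the heading estimate degrades; a virtual
# target placed (hypothetically) 18 px beyond keeps it well-behaved.
near_goal_err = heading_std(2.0, pixel_noise=1.0)
virtual_err = heading_std(2.0 + 18.0, pixel_noise=1.0)
```

Here `near_goal_err` comes out roughly an order of magnitude larger than `virtual_err`, mirroring the jittery steering that the virtual target is designed to suppress.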

Experiments were done with a Pioneer P3-DX robot from ActivMedia, equipped with a forward-looking Point Grey Research Flea2 camera. Results show the robot smoothly navigating towards its target.

In the future, the authors hope to equip their robots with omnidirectional cameras, allowing them to reach targets in any direction around them.




Sabine Hauert is President of Robohub and Associate Professor at the Bristol Robotics Laboratory






©2025.05 - Association for the Understanding of Artificial Intelligence
