
Stable visual navigation

April 20, 2011

If you’re trying to get from the couch to the fridge, you’ll probably be using vision to navigate and home in on your fresh drink.

To make your camera-equipped robot do something similar, give it an image taken from the location it is trying to reach (the target image). By comparing features in the image from its camera with those in the target image, the robot can determine in which direction to move to make the two images match.
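The idea above can be sketched in a few lines. This is a toy illustration, not the authors' actual control law: it assumes matched feature points are given as pixel coordinates and steers to reduce their mean horizontal offset, stopping once the images roughly align.

```python
import numpy as np

def steering_from_features(current_pts, target_pts):
    """Toy image-based navigation step (illustrative only).

    current_pts, target_pts: (N, 2) arrays of matched pixel coordinates
    in the current and target images.
    Returns (forward, turn): a stop/go flag and a normalized turn command.
    """
    offsets = target_pts - current_pts
    # Positive mean_dx: target features lie to the right of current ones.
    mean_dx = offsets[:, 0].mean()
    # Normalize by an assumed 100-pixel scale and clamp to [-1, 1].
    turn = np.clip(mean_dx / 100.0, -1.0, 1.0)
    # Keep driving until the images nearly match (2-pixel threshold).
    forward = float(np.linalg.norm(offsets, axis=1).mean() > 2.0)
    return forward, turn
```

The 100-pixel normalization and 2-pixel stopping threshold are arbitrary choices for the sketch; a real controller would use the camera geometry instead.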

However, challenges often arise as the robot nears its goal. If there is little change between the current and target images, the robot's motion may start to oscillate. To avoid this oscillation, López-Nicolás et al. propose replacing the target image with a carefully chosen virtual image computed at the beginning of the task. Possible features in the current, target, and virtual images are shown below.

Left: Initial image of an experiment with the point features detected. Right: Target image with the points matched (circles) and the computed virtual target points (squares).
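To give a feel for the virtual-target idea, here is a minimal sketch. It is not the authors' actual construction: it simply places virtual target points partway between the initial and final feature locations, computed once at the start of the task, so that the image error near the true goal remains well-behaved.

```python
import numpy as np

def virtual_target(initial_pts, target_pts, alpha=0.5):
    """Hypothetical stand-in for a virtual target image.

    initial_pts, target_pts: (N, 2) arrays of matched feature points
    from the initial and target images.
    alpha: blend factor (0 -> initial points, 1 -> true target points).
    Returns the virtual target feature points.
    """
    return (1.0 - alpha) * initial_pts + alpha * target_pts
```

In the paper, the virtual image is chosen from the geometry relating the views rather than by simple interpolation; the sketch only conveys that the substitute target is fixed at the start and differs from the final one.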

Experiments were done using a Pioneer P3-DX from ActivMedia, equipped with a forward-looking Point Grey Research Flea2 camera. Results show the robot navigating smoothly towards its target.

In the future, the authors hope to equip their robots with omnidirectional cameras, allowing them to reach targets in any direction.

Sabine Hauert is a lecturer at the Bristol Robotics Laboratory and co-founder of Robohub, the Robots Podcast and the Autonomous Robots blog.





