
Stable visual navigation

April 20, 2011

If you’re trying to get from the couch to the fridge, you’ll probably be using vision to navigate and home in on your fresh drink.

To make your camera-equipped robot do something similar, give it an image taken from the location it is trying to reach (the target image). By comparing features between its current camera image and the target image, the robot can determine in which direction it should move to make the two images match.

However, challenges often arise as the robot nears its goal. When there is little difference between the current and target images, the robot’s motion may start to oscillate. To avoid this, López-Nicolás et al. propose replacing the target image with a carefully chosen virtual image computed at the beginning of the task. Example features in the current, target, and virtual images are shown below.

Left: Initial image of an experiment with the point features detected. Right: Target image with the points matched (circles) and the computed virtual target points (squares).
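The general idea of steering from matched image features can be sketched as a simple proportional controller. This is a toy illustration, not the controller from the paper: the gains, the use of mean horizontal offset for turning, and the use of feature spread as a proxy for distance are all assumptions made here for clarity.

```python
import numpy as np

def steering_command(current_pts, target_pts, k_turn=0.005, k_fwd=0.5):
    """Toy visual-navigation controller (illustrative only).

    current_pts, target_pts: (N, 2) arrays of matched feature
    positions in pixels, in the current and target images.
    Returns (forward_speed, turn_rate) as unitless commands.
    """
    current_pts = np.asarray(current_pts, dtype=float)
    target_pts = np.asarray(target_pts, dtype=float)

    # Turn to reduce the mean horizontal offset between matched
    # features (sign convention is arbitrary in this sketch).
    turn = k_turn * np.mean(current_pts[:, 0] - target_pts[:, 0])

    # Features spread apart as the robot approaches the target, so
    # drive forward while the current spread is below the target's.
    spread_cur = np.std(current_pts[:, 0])
    spread_tgt = np.std(target_pts[:, 0])
    forward = k_fwd * max(0.0, 1.0 - spread_cur / spread_tgt)

    return forward, turn
```

When the current image matches the target, both commands go to zero; the oscillation problem described above arises because near the goal the feature offsets become tiny and noisy, which is what the virtual target image is designed to counteract.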

Experiments were performed with an ActivMedia Pioneer P3-DX robot equipped with a forward-looking Point Grey Research Flea2 camera. Results show the robot is able to navigate smoothly towards a target.

In the future, the authors hope to equip their robots with omnidirectional cameras, allowing them to reach targets in any direction.

Sabine Hauert
President & Co-founder
Sabine Hauert is President of Robohub and Associate Professor at the Bristol Robotics Laboratory

