If you’re trying to get from the couch to the fridge, you’ll probably use vision to navigate and home in on your fresh drink.
To make your camera-equipped robot do something similar, give it an image taken from the location it is trying to reach (the target image). By comparing features between the image from its camera and the target image, the robot can determine which direction to move in to make the two images match.
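The idea of steering from a feature-error signal can be sketched very simply. The function below is a toy illustration, not the controller from the paper: it assumes the features are already matched between the two images, and it just turns the mean pixel-space error into a proportional steering command.

```python
import numpy as np

def servo_command(current_feats, target_feats, gain=0.5):
    """Toy proportional steering signal from matched image features.

    current_feats, target_feats: (N, 2) arrays of matched feature
    coordinates in the current and target images.
    Returns a 2-vector: the mean feature error scaled by a gain.
    A real visual servoing controller would map this image-space
    error through an interaction matrix to robot velocities.
    """
    error = np.asarray(target_feats, float) - np.asarray(current_feats, float)
    return gain * error.mean(axis=0)

# If the target features sit to the right of the current ones,
# the command points right (positive x in image coordinates).
cmd = servo_command([[0, 0], [1, 0]], [[2, 0], [3, 0]])
```

Note that as the robot closes in on the goal, this error shrinks toward the noise level, which is exactly where the oscillation problem described below comes from.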
However, challenges often arise as the robot nears its goal. If there is little difference between the current and target images, the robot’s motion can start to oscillate. To avoid this, López-Nicolás et al. propose replacing the target image with a smartly chosen virtual image, computed once at the beginning of the task. Possible features in the current, target and virtual images are shown below.
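One way to picture the virtual-target idea (this is an assumed illustration, not the authors’ actual construction) is to place the virtual features slightly beyond the real goal, extrapolated along the initial approach direction. The feature error then stays well above the noise floor near the goal, and the robot simply stops once its true error drops below a threshold.

```python
import numpy as np

def make_virtual_target(initial_feats, target_feats, overshoot=0.2):
    """Hypothetical virtual-target construction (illustration only).

    Extrapolates the target features a fraction `overshoot` past the
    goal, along the line from the initial features to the target
    features, so the tracking error never collapses to sensor noise.
    """
    initial = np.asarray(initial_feats, float)
    target = np.asarray(target_feats, float)
    return target + overshoot * (target - initial)

# A feature seen at x=0 initially and x=1 at the goal gets a
# virtual position at x=1.2, i.e. 20% past the goal.
virtual = make_virtual_target([[0.0, 0.0]], [[1.0, 0.0]], overshoot=0.2)
```

Computing the virtual image once at the start, as the paper proposes, means the controller tracks a single fixed reference for the whole task rather than switching references mid-run.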
Experiments were conducted using a Pioneer P3-DX from ActivMedia, equipped with a forward-looking Point Grey Research Flea2 camera. Results show the robot navigating smoothly toward its target.
In the future, the authors hope to equip their robots with omnidirectional cameras, allowing them to reach targets all around them.