How can a robot, using vision, go back to a previously visited location?
Möller et al. look at this research question, tagged “Local Visual Homing,” in an intuitive manner inspired by social insects returning to their nests. The idea is that a robot, when at an important location, takes a snapshot of the surrounding visual information. To return to that location later (homing), it compares its current view of the world with the stored snapshot.
A technique called “image warping” is used to guide the robot back to the snapshot location. Simply put, the robot considers every movement it could make and simulates its effect on the current view of the world. It then selects the movement that would bring this view closest to the stored snapshot. The method outputs a homing vector for the robot to follow, together with an estimate of how much its orientation has changed.
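To make the idea concrete, here is a minimal sketch of warping-based homing, not Möller et al.'s actual implementation: it assumes a 1-D grayscale panoramic view (one brightness value per azimuth) and the usual warping simplification that all landmarks lie at the same (unit) distance. The movement grid and error measure are illustrative choices.

```python
import numpy as np

def warp(view, rho, alpha, psi):
    """Predict the 1-D panoramic view after a hypothetical movement of
    distance rho in direction alpha, followed by a rotation psi.
    Assumes all landmarks lie at unit distance (warping simplification)."""
    n = len(view)
    theta = 2 * np.pi * np.arange(n) / n          # bearings in the predicted view
    # For each output bearing, compute the bearing the same (unit-distance)
    # landmark had in the original view (a gather, so no holes appear).
    src = np.arctan2(rho * np.sin(alpha) + np.sin(theta + psi),
                     rho * np.cos(alpha) + np.cos(theta + psi))
    idx = np.round(src * n / (2 * np.pi)).astype(int) % n
    return view[idx]

def home(current, snapshot, rhos=(0.0, 0.1, 0.2, 0.4), n_dirs=36, n_rots=72):
    """Brute-force search over movement hypotheses; return the
    (rho, alpha, psi) whose warped view best matches the snapshot.
    alpha is then the homing direction, psi the orientation change."""
    best_err, best_params = np.inf, (0.0, 0.0, 0.0)
    for rho in rhos:
        for alpha in 2 * np.pi * np.arange(n_dirs) / n_dirs:
            for psi in 2 * np.pi * np.arange(n_rots) / n_rots:
                err = np.sum((warp(current, rho, alpha, psi) - snapshot) ** 2)
                if err < best_err:
                    best_err, best_params = err, (rho, alpha, psi)
    return best_params
```

For instance, if the current view is simply a rotated copy of the snapshot, the search should report zero displacement and recover the rotation angle.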
Using three different implementations of image warping, Möller et al. show that a robot equipped with a panoramic camera can home effectively with reasonable computational effort. The experiments were conducted on a database of real-world images recorded by a robot (see example images below).
In the future, robots could use visual homing to go from snapshot to snapshot, thereby navigating through large environments.
Finally, don’t miss the author’s website for an extensive overview of visual navigation techniques.