Imagine arriving in a new city without a map. Starting from the train station, you might take a walk around the block before returning to your starting point. As you go, you'll probably start building a mental map of the interesting shops, restaurants and streets. Since you don't want to get lost, you also have to place yourself on this map (localization). This problem of mapping while simultaneously localizing is one of the main challenges in robotics, and solving it is essential if robots are to be deployed in new environments.
Simultaneous localization and mapping (SLAM) problems typically assume that robots have no prior information about their environment. This means they can rely only on their own sensing and odometry, and small measurement errors tend to accumulate into large mapping errors over time.
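To get a feel for why this matters, here is a minimal Python sketch (our own illustration, with assumed step length and noise values, not taken from the paper) of a robot estimating its position from odometry alone: even unbiased noise at every step adds up to a noticeable drift.

```python
# Minimal sketch of dead-reckoning drift (assumed values, for illustration only).
import random

random.seed(0)

true_x, est_x = 0.0, 0.0
step, noise_std = 0.1, 0.01  # metres per step, odometry noise (assumptions)

for _ in range(10_000):
    true_x += step                                 # what the robot really did
    est_x += step + random.gauss(0.0, noise_std)   # each step adds a small error

print(f"Travelled {true_x:.0f} m, dead-reckoning drift: {abs(est_x - true_x):.2f} m")
```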
However, with the advent of tools such as Google Earth, there is a huge amount of geographic information available that can help robots figure out where they are. Building on this idea, Kümmerle et al. propose localizing a robot by matching its sensor data against aerial images of the environment. These global constraints prevent mapping errors from accumulating.
More precisely, the robot combines information from a 3D laser range finder and a stereo camera with global constraints extracted from aerial images. The video below shows a MobileRobots Powerbot navigating indoors and outdoors while performing SLAM.
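To illustrate how such global constraints help, here is a toy 1D pose-graph sketch in Python (our own simplification, not the authors' implementation): noisy odometry links chain consecutive poses together, while a few absolute position measurements, standing in for aerial-image matches, anchor the whole trajectory in a least-squares solve.

```python
# Toy 1D pose graph: relative odometry constraints plus sparse global constraints.
# All indices and noise levels are assumptions made for this illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100                               # number of poses along a straight path
truth = np.arange(n, dtype=float)     # ground-truth positions, 1 m apart

# Noisy relative (odometry) and sparse absolute ("aerial match") measurements.
odom = np.diff(truth) + rng.normal(0, 0.05, n - 1)
global_idx = [0, 50, 99]                          # poses matched against the aerial map
global_z = truth[global_idx] + rng.normal(0, 0.2, len(global_idx))

# Build a linear least-squares system: one row per constraint on the pose vector.
rows, b = [], []
for i, u in enumerate(odom):                      # odometry: x[i+1] - x[i] = u
    r = np.zeros(n); r[i], r[i + 1] = -1.0, 1.0
    rows.append(r); b.append(u)
for k, z in zip(global_idx, global_z):            # global constraint: x[k] = z
    r = np.zeros(n); r[k] = 1.0
    rows.append(r); b.append(z)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
print("max error with global constraints:", np.abs(x - truth).max())
```

With only the relative odometry constraints the estimate would be free to drift, whereas the few absolute matches pin the whole chain close to the ground truth, which is the intuition behind adding aerial-image constraints to the SLAM problem.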
Results demonstrate that the maps acquired with this method are closer to reality than those generated using state-of-the-art SLAM algorithms or GPS.