We saw the need for good underwater robots during the Deepwater Horizon spill last summer. In such scenarios, a remote operator controls a robot equipped with a camera and sensors to build a 2D map of the environment. However, if you want your robot to inspect non-trivial structures such as oil- and gas-production and transport equipment, or if you want it to be more autonomous in challenging environments, 3D mapping is essential.
As seen in previous posts, to make a 3D map for a ground robot you might use a laser range finder. However, similar sensors are not available for underwater environments, and researchers are left coping with low-resolution, noisy measurement systems. To solve this problem, Bülow et al. propose a new method to combine sensory information from noisy 3D sonar scans that partially overlap. The general idea is that the robot scans the environment, moves a little, and then scans the environment again so that the scans overlap. By comparing the overlapping scans, the researchers can figure out how the robot moved, and from that infer where each scan was taken. This means there is no need to add the expensive motion sensors typically required by other state-of-the-art strategies (Inertial Navigation Systems and Doppler Velocity Logs).
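The paper's own registration algorithm isn't reproduced here, but the core idea of recovering motion by comparing overlapping scans can be illustrated with plain phase correlation. The sketch below is a minimal, hypothetical 2D example (the grid size, data, and shift are assumptions for illustration, not taken from the paper): it estimates the translation between two overlapping grid-like scans from the peak of their normalized cross-power spectrum.

```python
import numpy as np

def estimate_translation(scan_a, scan_b):
    """Estimate the (row, col) shift of scan_b relative to scan_a
    via phase correlation: the normalized cross-power spectrum of
    two shifted signals inverse-transforms to a peak at the shift."""
    Fa = np.fft.fft2(scan_a)
    Fb = np.fft.fft2(scan_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12   # normalize; guard against zeros
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past the grid midpoint wrap around; map them to negatives.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# Two synthetic "scans": the same noisy grid, the second shifted by
# (5, -3) cells, standing in for two partially overlapping sonar scans.
rng = np.random.default_rng(0)
scan_a = rng.random((128, 128))
scan_b = np.roll(scan_a, shift=(5, -3), axis=(0, 1))
print(estimate_translation(scan_a, scan_b))   # -> (5, -3)
```

The authors' actual method works on full 3D sonar scans and also recovers rotation; this snippet only illustrates the principle of inferring motion directly from the overlap between two scans.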
The approach was first tested in simulation on virtual images with controllable levels of noise. Results show that the method is computationally inexpensive, can deal with large spatial distances between scans, and is very robust to noise. The authors then plunged a Tritech Eclipse sonar into a river in Germany to generate 18 scans of the Lesumer Sperrwerk, a river flood gate. Results from that experiment, shown in the video below, compared well to other approaches described in the literature.
In the future, Bülow et al. hope to combine this approach with Simultaneous Localization and Mapping (SLAM) to avoid the accumulation of relative localization errors.
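To see why that matters, here is a hypothetical back-of-the-envelope simulation (the step size, noise level, and step count are illustrative assumptions, not figures from the paper): when each scan-to-scan registration carries a small error, chaining the estimates makes the position error grow with the number of steps, which is exactly the drift a SLAM back-end corrects via loop closures.

```python
import numpy as np

# Hypothetical illustration of drift: chaining noisy relative motion
# estimates (dead reckoning) accumulates error over time.
rng = np.random.default_rng(1)
true_step = np.array([1.0, 0.0])   # assume the robot moves 1 m per scan
noise_std = 0.05                   # assume 5 cm error per registration
steps = 100

pose = np.zeros(2)
for _ in range(steps):
    # Each update adds the true motion plus independent Gaussian noise.
    pose += true_step + rng.normal(0.0, noise_std, size=2)

drift = np.linalg.norm(pose - steps * true_step)
print(f"drift after {steps} steps: {drift:.2f} m")
# Independent errors grow roughly as noise_std * sqrt(steps) ≈ 0.5 m.
```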