Robots often need to know where they are in the world to navigate efficiently. One of the cheapest ways to localize is to mount a camera on-board and extract visual features from the environment. However, this approach breaks down when robots move fast enough to cause motion blur: blurry images degrade localization accuracy, so a robot that moves too fast may get lost or be forced to stop and re-localize.
Instead, Hornung et al. propose to use reinforcement learning to find a policy that keeps the robot at speeds safe enough for reliable localization while still reaching its destination as fast as possible. The navigation task itself is modeled as an augmented Markov decision process (MDP).
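To make the idea concrete, here is a minimal, hypothetical sketch of such an MDP, not the authors' actual formulation: states are waypoints along a route, actions are discrete speed levels, and motion blur is modeled as a speed-dependent chance of losing localization, which costs extra time to recover from. All names, constants, and the blur model below are illustrative assumptions.

```python
import random

SPEEDS = [0.2, 0.5, 1.0, 1.5]   # m/s, assumed discrete speed levels
N_WAYPOINTS = 10                # length of a toy route
SEGMENT = 2.0                   # metres between consecutive waypoints
RELOC_PENALTY = 5.0             # seconds lost when localization fails

def blur_failure_prob(speed):
    """Assumed model: the chance of blur-induced failure grows with speed."""
    return min(0.9, 0.05 + 0.3 * speed)

def step(waypoint, speed_idx):
    """Advance one waypoint at the chosen speed; return (next state, time cost)."""
    speed = SPEEDS[speed_idx]
    cost = SEGMENT / speed                      # travel time for this segment
    if random.random() < blur_failure_prob(speed):
        cost += RELOC_PENALTY                   # blur -> lost -> re-localize
    return waypoint + 1, cost

def q_learning(episodes=20000, alpha=0.1, gamma=1.0, eps=0.1):
    """Tabular Q-learning over (waypoint, speed); minimizes expected travel time."""
    Q = [[0.0] * len(SPEEDS) for _ in range(N_WAYPOINTS)]
    for _ in range(episodes):
        w = 0
        while w < N_WAYPOINTS:
            a = (random.randrange(len(SPEEDS)) if random.random() < eps
                 else min(range(len(SPEEDS)), key=lambda i: Q[w][i]))
            w2, cost = step(w, a)
            future = 0.0 if w2 >= N_WAYPOINTS else min(Q[w2])
            Q[w][a] += alpha * (cost + gamma * future - Q[w][a])
            w = w2
    return Q

if __name__ == "__main__":
    Q = q_learning()
    policy = [SPEEDS[min(range(len(SPEEDS)), key=lambda i: Q[w][i])]
              for w in range(N_WAYPOINTS)]
    print("Learned speed per waypoint:", policy)
```

The learned policy naturally slows down wherever the cost of a localization failure outweighs the time saved by driving faster, which is exactly the trade-off the paper targets.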
The learned policy is then compressed using a clustering technique so that it does not become too memory-hungry, which would be a major limitation for robots with low storage capacity.
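The compression idea can be sketched as follows, again hypothetically: cluster states with similar features, store a single representative action per cluster, and answer queries by nearest centroid. The feature layout, the k-means routine, and the majority-vote rule here are illustrative assumptions, not the paper's exact technique.

```python
import random
from collections import Counter

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50):
    """Minimal k-means; returns centroids and the cluster label of each point."""
    centroids = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist2(p, centroids[c]))
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    # final assignment so labels match the returned centroids
    for i, p in enumerate(points):
        labels[i] = min(range(k), key=lambda c: dist2(p, centroids[c]))
    return centroids, labels

def compress_policy(states, actions, k):
    """Replace a full state->action table by k centroids plus one action each."""
    centroids, labels = kmeans(states, k)
    cluster_action = []
    for c in range(k):
        votes = Counter(a for a, l in zip(actions, labels) if l == c)
        # empty clusters fall back to the globally most common action
        cluster_action.append(votes.most_common(1)[0][0] if votes
                              else Counter(actions).most_common(1)[0][0])
    return centroids, cluster_action

def lookup(state, centroids, cluster_action):
    """At run time, the nearest centroid decides which stored action to use."""
    c = min(range(len(centroids)), key=lambda i: dist2(state, centroids[i]))
    return cluster_action[c]

if __name__ == "__main__":
    # Toy demo: 100 random 2-D states with speed-level actions 0..3
    states = [[random.random(), random.random()] for _ in range(100)]
    actions = [random.randrange(4) for _ in range(100)]
    cents, acts = compress_policy(states, actions, k=8)
    print("Stored entries:", len(cents), "instead of", len(states))
    print("Action for a query state:", lookup([0.5, 0.5], cents, acts))
```

The memory saving comes from storing k centroids instead of one entry per state, at the cost of slightly coarser speed decisions near cluster boundaries.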
Experiments were successfully conducted on two different robots in indoor and outdoor scenarios (see video), and the robots reached their destinations faster than if they had navigated at a constant speed. In the future, Hornung et al. hope to implement their system on fast-moving robots, such as unmanned aerial vehicles!