Robohub.org
 

Place recognition and localization with omnidirectional vision


by Sabine Hauert
09 May 2011




Let’s say you just purchased a new service robot and you want it to find its way around your apartment. The obvious thing to do would be to show it around, going from room to room saying “this is the living room” and “this is the kitchen”. The robot, equipped with an omnidirectional camera, could then take pictures along the way while recording its location, building up a visual memory of the apartment. The challenge for the robot next time around is to figure out what room it is in (place recognition) and where it is in that room (localization) based on its current view of the world.

This requires finding a good way to compare new images to the robot’s visual memory. The comparison needs to be robust to robot motion, to objects changing place, and to the transformations required to use omnidirectional images. As a solution, Labbani-Igbida et al. propose to compute a signature for each omnidirectional image based on invariant Haar integrals. Signatures are compact numerical descriptors that capture distinctive features of the image (color, shape, texture, interest points…). By comparing signatures between images, the robot can determine which room it is in, and where, much faster than by processing the raw images.
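To give a flavor of the idea, here is a toy sketch in Python (not the authors' actual signature from the paper). For an unwrapped omnidirectional image, a rotation of the robot is a circular shift along the azimuth axis, so a signature built by summing a pixel-pair kernel over the whole image is unchanged by that shift, in the spirit of a Haar integral over the rotation group. Localization then reduces to a nearest-neighbor search over stored signatures. The kernel, offsets, and memory layout are illustrative assumptions.

```python
import numpy as np

def signature(panorama, offsets=(1, 5, 10)):
    """Toy rotation-invariant signature for an unwrapped omnidirectional
    image (rows = elevation, columns = azimuth).  A rotation of the robot
    is a circular shift along the column axis; averaging a pixel-pair
    kernel over every column makes the result independent of that shift."""
    img = np.asarray(panorama, dtype=float)
    return np.array([
        (img * np.roll(img, d, axis=1)).mean()  # pair pixels d columns apart
        for d in offsets
    ])

def localize(query_sig, memory):
    """memory: list of (room, position, signature) entries recorded during
    the tour.  Return the (room, position) of the closest stored signature."""
    room, pos, _ = min(memory, key=lambda e: np.linalg.norm(e[2] - query_sig))
    return room, pos
```

A rotated view taken at the same spot yields the same signature, so a query image matches the memory entry recorded there regardless of the robot's heading.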

Experiments were conducted using a Koala robot equipped with a paracatadioptric omnidirectional sensor. The robot was first placed in different rooms of an office environment, where it took images to build a visual memory. The robot was then set loose to explore the office, including places that had not been visited during the memory-building phase.

Results show that the robot performs place recognition and localization in ways that match or outperform state-of-the-art algorithms while being very time and memory efficient. In the future, the authors would like to reduce the number of images needed for the robot to build its visual memory.




Sabine Hauert is President of Robohub and Associate Professor at the Bristol Robotics Laboratory






©2025.05 - Association for the Understanding of Artificial Intelligence