Learning Bayesian filters without precise ground truth


by Sabine Hauert
21 January 2011




As seen in a previous post, Bayes filters such as Kalman filters can be used to estimate the state of a robot. Usually, this requires two models: a measurement model that relates the robot’s sensor readings to the state you want to estimate, and a prediction (motion) model that describes how the robot’s control inputs change that state. When little is known about these models, machine learning techniques can help learn them automatically from data.
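
To make these two roles concrete, here is a minimal sketch (not taken from the paper) of a bootstrap particle filter in Python. The functions predict_fn and measure_fn stand in for the motion and measurement models; they are exactly the pieces that can either be hand-built or, as described below, learned from data. All names here are illustrative.

    import numpy as np

    def particle_filter_step(particles, control, observation, predict_fn, measure_fn):
        # Prediction: push every particle through the motion model p(x_t | x_{t-1}, u_t)
        particles = np.array([predict_fn(p, control) for p in particles])
        # Correction: weight each particle by the measurement model p(z_t | x_t)
        weights = np.array([measure_fn(p, observation) for p in particles])
        weights = weights / weights.sum()
        # Resample particles in proportion to their weights
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]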

In the scenario studied by Ko et al., the goal is to infer a car’s position (state) on a race track from its remote control commands and from measurements of an Inertial Measurement Unit (IMU, see red box in the picture below) that provides turn rates in roll, pitch, and yaw and accelerations in three dimensions. The car is only controlled in speed, since a “rail” in the track steers it. As a first step, Ko et al. collect data by driving the car around the track while recording its remote control commands, IMU measurements, and position (ground truth) estimated using an overhead camera. The data is then used to train probabilistic models (Gaussian Process models) that are finally integrated into a Bayes filter.
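
As a rough illustration of this training step, the sketch below fits two Gaussian Process regressors on placeholder logs using scikit-learn. The data shapes, variable names, and the choice of library are assumptions made for illustration; they are not the authors’ implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Placeholder logs: track positions from the overhead camera (ground truth),
    # remote-control speed commands, and 6-D IMU readings (3 turn rates + 3 accelerations).
    positions = np.random.rand(200, 1)     # state x_t
    controls  = np.random.rand(200, 1)     # speed command u_t
    imu       = np.random.randn(200, 6)    # measurement z_t

    kernel = RBF() + WhiteKernel()

    # Motion model: (x_t, u_t) -> x_{t+1}
    gp_motion = GaussianProcessRegressor(kernel=kernel).fit(
        np.hstack([positions[:-1], controls[:-1]]), positions[1:])

    # Measurement model: x_t -> z_t
    gp_obs = GaussianProcessRegressor(kernel=kernel).fit(positions, imu)

    # Each prediction comes with an uncertainty estimate.
    mean, std = gp_motion.predict(np.array([[0.3, 0.5]]), return_std=True)

A useful property of Gaussian Process models in this setting is that every prediction comes with an uncertainty estimate, which the Bayes filter can propagate instead of relying on hand-tuned noise parameters.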

However, obtaining ground truth requires extra effort and additional hardware, such as the overhead camera. To overcome this limitation, Ko et al. extend their method to handle situations where the training data contains only sparse or noisy ground truth, or none at all.
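
One way to picture the ground-truth-free setting is as a latent-variable problem: the positions themselves become unknowns that are optimized so that a Gaussian Process maps them well onto the recorded IMU data. The sketch below is a heavily simplified illustration of that idea (1-D latent states, a fixed RBF kernel, no control inputs) using only the standard GP marginal likelihood; it is not the authors’ actual formulation.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_marginal_likelihood(x_flat, Y, noise=1e-2):
        # x_flat: hypothetical 1-D latent "positions", one per timestep
        X = x_flat.reshape(-1, 1)
        T = X.shape[0]
        # RBF kernel between latent states (unit length scale for simplicity)
        K = np.exp(-0.5 * (X - X.T) ** 2) + noise * np.eye(T)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))   # K^{-1} Y
        # Negative GP log marginal likelihood, summed over IMU dimensions (constants dropped)
        return 0.5 * np.sum(Y * alpha) + Y.shape[1] * np.sum(np.log(np.diag(L)))

    T, d_obs = 50, 6                      # timesteps and IMU dimensions (toy sizes)
    Y = np.random.randn(T, d_obs)         # placeholder for logged IMU readings
    x0 = np.linspace(0.0, 1.0, T)         # smooth initial guess for the latent positions
    res = minimize(neg_log_marginal_likelihood, x0, args=(Y,), method="L-BFGS-B")
    latent_positions = res.x              # recovered states, only up to scale and shift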

Results show successful tracking of the car, with better performance than state-of-the-art approaches, even when no ground truth is given. The authors also show how the method can be used to make the car, or a robot arm, replay trajectories based on expert demonstrations.




Sabine Hauert is President of Robohub and Associate Professor at the Bristol Robotics Laboratory






 
