Robohub.org
 

Video: Quadrocopter learns from its mistakes, perfects air racing


by Markus Waibel
08 November 2012




First-person view of the quadrocopter racing through a pylon slalom course.

Manual programming of robots only gets you so far. And, as you can see in the video, for quadrocopters that’s not very far at all (see the “Without Learning” part starting at 1:30):

On its first run, the flying robot attempts to navigate the obstacle course along a pre-computed flight path. The path is derived from a basic mathematical (“first principles”) model. But quadrocopters have complex aerodynamics: the force produced by the propellers changes with the vehicle’s velocity and orientation, so the actual force differs considerably from what the simple model predicts.
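To make this concrete, here is a toy sketch (hypothetical coefficients, not the Flying Machine Arena's actual model): an idealized propeller model predicts thrust from rotor speed alone, while even a crude velocity-dependent inflow correction shifts the predicted force noticeably.

```python
# Toy illustration with hypothetical numbers: idealized vs. velocity-aware thrust.
# A simple "first principles" model assumes thrust depends only on rotor speed:
#   T = k * omega^2
# In forward flight, airflow through the rotor reduces effective thrust; here a
# crude linear correction scales thrust down with airspeed (coefficient arbitrary).

def thrust_ideal(omega, k=1.0e-7):
    """Static thrust model: proportional to rotor speed squared."""
    return k * omega ** 2

def thrust_with_inflow(omega, airspeed, k=1.0e-7, c=0.02):
    """Same model with a simple linear inflow correction for airspeed (m/s)."""
    return thrust_ideal(omega, k) * max(0.0, 1.0 - c * airspeed)

omega = 9000.0  # rotor speed, rad/s (hypothetical)
for v in (0.0, 5.0, 10.0):
    t0 = thrust_ideal(omega)
    t1 = thrust_with_inflow(omega, v)
    print(f"airspeed {v:4.1f} m/s: ideal {t0:.2f} N, corrected {t1:.2f} N")
```

The gap between the two predictions grows with airspeed, which is exactly the kind of modeling error a flight path pre-computed from the idealized model cannot account for.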

What’s worse, these flying vehicles use soft propellers for safety, which bend differently depending on how much thrust is applied and wear rapidly with use (and even more rapidly when crashing).

Even with continuous feedback on the robot’s position from the motion capture system, manually programming the robots with a control sequence that takes all these imperfections into account is impractical.

 

My colleague Angela Schoellig and the Flying Machine Arena team here at ETH Zurich have now developed and implemented algorithms that allow their flying robots to race through an obstacle course – and learn to improve their performance with each run.

Here is how Angela described the process to me:

The learning algorithm is applied to a quadrocopter that is guided by an underlying trajectory-following controller. The result of the learning is an adapted input trajectory in terms of desired positions. The algorithm has been equipped with several unique features that are particularly important when pushing vehicles to the limits of their dynamic capabilities and when applying the learning algorithm to increasingly complex systems:

1. We designed an input update rule that explicitly takes actuation and sensor limits into account by solving a constrained convex problem.
2. We developed an identification routine that extracts the model data required by the learning algorithm from a numerical simulation of the vehicle dynamics. That is, the algorithm is applicable to systems for which an analytical model is difficult (or impossible) to derive.
3. We combined model data and experimental data, traditional filtering methods and state-of-the-art optimization techniques to achieve an effective and computationally efficient learning strategy that converges in fewer than ten iterations.
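The loop these three ingredients describe is a form of iterative learning control: replay the task, measure the tracking error, correct the input trajectory, repeat. Here is a minimal sketch – not the team's implementation (their update solves a constrained convex program, approximated below by a simple gain-and-clip rule, and the plant is a hypothetical scalar system):

```python
import numpy as np

# Minimal iterative learning control (ILC) sketch. Hypothetical scalar "plant":
# the vehicle responds to input u with an unknown but repeatable disturbance d.
# Each iteration replays the task, measures the tracking error, and corrects
# the input trajectory. Actuator limits are enforced by clipping, a crude
# stand-in for the constrained convex update described in the article.

rng = np.random.default_rng(0)
T = 50                                                     # time steps per run
reference = np.sin(np.linspace(0, 2 * np.pi, T))           # desired positions
disturbance = 0.3 * np.cos(np.linspace(0, 4 * np.pi, T))   # unknown, repeatable
u_max = 1.5                                                # actuator limit

def run_trial(u):
    """Plant: output = input + repeatable disturbance + small sensor noise."""
    return u + disturbance + 0.005 * rng.standard_normal(T)

u = reference.copy()          # initial input: naive feedforward of the reference
learning_gain = 0.8
errors = []
for iteration in range(10):
    y = run_trial(u)
    e = reference - y                                   # tracking error this run
    errors.append(np.abs(e).max())
    u = np.clip(u + learning_gain * e, -u_max, u_max)   # constrained update

print(f"max error, iteration 1:  {errors[0]:.3f}")
print(f"max error, iteration 10: {errors[-1]:.3f}")
```

Because the disturbance repeats from run to run, each correction removes a fixed fraction of it, and the tracking error shrinks toward the sensor-noise floor within a handful of iterations – the same qualitative behavior as the sub-ten-iteration convergence described above.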

 

The result is a robot that learns and improves each time it tries to perform a task.

In this example the robot races through a pylon slalom course, calling to mind air races such as the Red Bull Air Race World Championship or the Reno Air Races – except that there are no human pilots who spent their lives learning to fly; here it’s the robots doing the learning. And they are efficient, taking fewer than ten training runs to find the optimal steering commands!

Moreover, the learning algorithms are not specific to slalom racing; they can be used to learn other tasks. As Angela points out:

Our goal is to enable autonomous systems – such as the quadrocopter in the video – to ‘learn’ the way humans do: through practice.

 

The videos below show how learning algorithms can be used for other robotic tasks:

 

 

 

 

Full disclosure: Angela and the Flying Machine Arena team work in the same lab as I do. I’m also working on RoboEarth, which aims to enable robot learning on a much larger scale.

 





Markus Waibel is a Co-Founder and COO of Verity Studios AG, Co-Founder of Robohub and the ROBOTS Podcast.
