Robohub.org
 

Learning acrobatic maneuvers for quadrocopters


by Sabine Hauert
17 April 2012




Have you ever seen those videos of quadrocopters performing acrobatic maneuvers?

The latest paper on the Autonomous Robots website presents a simple method for making a robot achieve adaptive fast open-loop maneuvers, whether it is performing multiple flips or fast translational motions. The method is designed to be straightforward to implement and understand, and general enough to apply to problems beyond aerial acrobatics.

Before the experiment, an engineer with knowledge of the problem defines a maneuver by an initial state, a desired final state, and a parameterized control function responsible for producing the maneuver. A model of the robot's motion is used to initialize the parameters of this control function. Because models are never perfect, the parameters must then be refined experimentally: the error between the robot's desired final state and the state it actually achieves after each maneuver is used to iteratively correct the parameter values. More details can be found in the figure below and in the paper.

Method to achieve adaptive fast open-loop maneuver. p represents the parameters to be adapted, C is a first-order correction matrix, γ is a correction step size, and e is a vector of error measurements. (1) The user defines a motion in terms of initial and desired final states and a parameterized input function. (2) A first-principles continuous-time model is used to find nominal parameters p0 and C. (3) The motion is performed on the physical vehicle, (4) the error is measured and (5) a correction is applied to the parameters. The process is then repeated.
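The five numbered steps in the caption amount to a simple fixed-point iteration on the parameters. The toy sketch below illustrates the idea, assuming an update rule of the form p ← p − γCe; the dynamics, matrices, and gain are made-up illustrative values, not the paper's implementation.

```python
import numpy as np

# "Real" maneuver outcome (unknown to the learner): the true dynamics differ
# slightly from the model, standing in for unmodeled effects on the vehicle.
def real_final_state(p):
    return np.array([1.10 * p[0] + 0.10 * p[1],
                     -0.05 * p[0] + 0.90 * p[1]])

# (2) First-principles model: here simply the identity map from parameters to
# final state. It supplies nominal parameters p0 and the correction matrix C.
A_model = np.eye(2)
desired = np.array([2.0, -1.0])          # (1) desired final state

p = np.linalg.solve(A_model, desired)    # nominal parameters p0
C = np.linalg.inv(A_model)               # first-order correction matrix
gamma = 0.8                              # correction step size

for _ in range(50):
    e = real_final_state(p) - desired    # (3)+(4) perform maneuver, measure error
    p = p - gamma * C @ e                # (5) first-order parameter correction

print(np.allclose(real_final_state(p), desired))  # → True
```

Even though C is derived from an imperfect model, the iteration drives the error to zero on the "real" system, as long as the correction step keeps the update contractive.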

Experiments were performed in the ETH Flying Machine Arena, which is equipped with an 8-camera motion-capture system providing the robot position and rotation measurements used for parameter learning.




Sabine Hauert is President of Robohub and Associate Professor at the Bristol Robotics Laboratory

            AUAI is supported by:



Subscribe to Robohub newsletter on substack







 















©2026.02 - Association for the Understanding of Artificial Intelligence