Robohub.org
 

Drones learn acrobatics by themselves


24 June 2020





Researchers from NCCR Robotics at the University of Zurich and Intel developed an algorithm that pushes autonomous drones to their physical limit.
Since the dawn of flight, acrobatics has been a way for pilots to prove their bravery and worth. It is also a way to push the envelope of what can be done with an aircraft, learning lessons that are useful to all pilots and engineers. The same is true for unmanned flight. Professional drone pilots perform acrobatic maneuvers in dedicated competitions, pushing drones to their physical limits and perfecting their control and efficiency.

Now a collaboration between researchers from the University of Zurich (part of the NCCR Robotics consortium) and Intel has developed a quadcopter that can learn to fly acrobatics autonomously, paving the way to drones that can fully exploit their agility and speed and cover more distance within their battery life. Though no drone mission is ever likely to require a power loop or a Matty flip – typical acrobatic maneuvers – a drone that can perform them autonomously is likely to be more efficient at all times.

A step forward towards integrating drones into our everyday life

Researchers from the University of Zurich and Intel developed a novel algorithm that pushes autonomous drones, using only on-board sensing and computation, close to their physical limits. To prove the efficiency of the developed algorithm, the researchers made an autonomous quadrotor fly acrobatic maneuvers such as the Power Loop, the Barrel Roll, and the Matty Flip, during which the drone incurs accelerations of up to 3g. “Several applications of drones, such as search-and-rescue or delivery, will strongly benefit from faster drones, which can cover large distances in limited time. With this algorithm we have taken a step forward towards integrating autonomously navigating drones into our everyday life”, says Davide Scaramuzza, Professor and Director of the Robotics and Perception Group at the University of Zurich, and head of the Rescue Robotics Grand Challenge for NCCR Robotics.
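As a rough back-of-the-envelope illustration of what a 3g limit means (the speed below is an assumed example, not a figure from the paper): the centripetal acceleration in a loop of radius r flown at speed v is a = v²/r, so the acceleration limit bounds how tight a loop can be at a given speed.

```python
G = 9.81  # gravitational acceleration, m/s^2

def min_loop_radius(speed_mps: float, max_accel_g: float) -> float:
    """Smallest loop radius flyable at a given speed without exceeding
    the acceleration limit: a = v^2 / r  =>  r = v^2 / a."""
    return speed_mps ** 2 / (max_accel_g * G)

# Hypothetical numbers: at 5 m/s with a 3g limit,
# the loop radius must be at least ~0.85 m.
print(round(min_loop_radius(5.0, 3.0), 2))
```

Flying the same loop faster therefore requires a quadratically larger radius, which is one reason aggressive maneuvers stress both the platform and the controller.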

Simulation for training, real-world for testing
The navigation algorithm that allows drones to fly acrobatic maneuvers is an artificial neural network that directly converts observations from the on-board camera and inertial sensors into control commands. This neural network is trained exclusively in simulation. Learning agile maneuvers entirely in simulation has several advantages: (i) maneuvers can simply be specified by reference trajectories and do not require expensive demonstrations by a human pilot, (ii) training is safe and poses no physical risk to the quadrotor, and (iii) the approach scales to a large number of diverse maneuvers, including ones that only the very best human pilots can perform.
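A minimal sketch of this observation-to-command mapping (all names, sizes, and weights below are illustrative assumptions, not the authors' trained architecture): a small feed-forward network takes the abstracted sensor observation and outputs four control commands, e.g. collective thrust plus three body rates.

```python
import math
import random

random.seed(0)

OBS_DIM = 16  # abstracted observation, e.g. feature tracks + integrated IMU (illustrative size)
HIDDEN = 32   # hidden-layer width (illustrative)
CMD_DIM = 4   # collective thrust + three body rates

# Randomly initialised weights stand in for the parameters learned in simulation.
W1 = [[random.gauss(0, 0.1) for _ in range(OBS_DIM)] for _ in range(HIDDEN)]
W2 = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(CMD_DIM)]

def policy(obs):
    """Map one observation vector to control commands with a tiny MLP."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, obs))) for row in W1]
    return [sum(w * h for w, h in zip(row, hidden)) for row in W2]

commands = policy([0.1] * OBS_DIM)
print(len(commands))  # 4 commands: thrust + body rates
```

In training, such a network would be fitted to imitate the reference trajectory in simulation; here the point is only the shape of the mapping from abstracted observations to low-level commands.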

The algorithm transfers its knowledge to reality by using appropriate abstractions of the visual and inertial inputs (i.e., feature tracks and integrated inertial measurements), which narrows the gap between the simulated and the physical world. Indeed, without physically accurate modeling of the world or any fine-tuning on real-world data, the trained neural network can be deployed on a real quadrotor to perform acrobatic maneuvers.
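One common form of such an inertial abstraction (a hedged sketch; the paper's exact formulation may differ) is to integrate gyroscope readings over a short window into a single accumulated-rotation summary, so the network sees a compact quantity that looks the same in simulation and on the real sensor, rather than raw noisy samples.

```python
def integrate_gyro(samples, dt):
    """Sum body-rate samples over a window into one accumulated-rotation
    vector (small-angle approximation): a compact abstraction of raw IMU data."""
    acc = [0.0, 0.0, 0.0]
    for wx, wy, wz in samples:
        acc[0] += wx * dt
        acc[1] += wy * dt
        acc[2] += wz * dt
    return acc

# A constant 1 rad/s roll rate over a 0.1 s window (10 samples at 100 Hz)
# integrates to roughly 0.1 rad of accumulated roll.
window = [(1.0, 0.0, 0.0)] * 10
print(integrate_gyro(window, 0.01))
```

Because the integrated quantity depends far less on sensor-specific noise and sampling artifacts than the raw stream does, it transfers from simulation to hardware much more gracefully.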

Towards fully autonomous drones
Within a few hours of training in simulation, the algorithm learns to fly acrobatic maneuvers with an accuracy comparable to that of professional human pilots. Nevertheless, the research team warns that there is still a significant gap between what human pilots and autonomous drones can do. “The best human pilots still have an edge over autonomous drones given their ability to quickly interpret and adapt to unexpected situations and changes in the environment,” says Prof. Scaramuzza.

Paper: E. Kaufmann*, A. Loquercio*, R. Ranftl, M. Müller, V. Koltun, D. Scaramuzza, “Deep Drone Acrobatics”, Robotics: Science and Systems (RSS), 2020
Paper
Video
Code



tags:


NCCR Robotics




