Drones learn acrobatics by themselves


24 June 2020



Researchers from NCCR Robotics at the University of Zurich and Intel have developed an algorithm that pushes autonomous drones to their physical limits.
Since the dawn of flight, acrobatics has been a way for pilots to prove their bravery and worth. It is also a way to push the envelope of what can be done with an aircraft, learning lessons that are useful to all pilots and engineers. The same is true for unmanned flight. Professional drone pilots perform acrobatic maneuvers in dedicated competitions, pushing drones to their physical limits and perfecting their control and efficiency.

Now a collaboration between researchers from the University of Zurich (part of the NCCR Robotics consortium) and Intel has developed a quadcopter that can learn to fly acrobatics autonomously, paving the way to drones that can fully exploit their agility and speed and cover more distance within their battery life. Though probably no drone mission will ever require a power loop or a Matty flip, the typical acrobatic maneuvers, a drone that can perform them autonomously is likely to be more efficient at all times.

A step forward towards integrating drones into our everyday life

Researchers from the University of Zurich and Intel developed a novel algorithm that pushes autonomous drones with only on-board sensing and computation close to their physical limits. To prove the efficiency of the developed algorithm, the researchers made an autonomous quadrotor fly acrobatic maneuvers such as the power loop, the barrel roll, and the Matty flip, during which the drone incurs accelerations of up to 3g. “Several applications of drones, such as search-and-rescue or delivery, will strongly benefit from faster drones, which can cover large distances in limited time. With this algorithm we have taken a step forward towards integrating autonomously navigating drones into our everyday life”, says Davide Scaramuzza, Professor and Director of the Robotics and Perception Group at the University of Zurich, and head of the Rescue Robotics Grand Challenge for NCCR Robotics.

Simulation for training, real-world for testing
The navigation algorithm that allows drones to fly acrobatic maneuvers is an artificial neural network that directly converts observations from the on-board camera and inertial sensors into control commands. This neural network is trained exclusively in simulation. Learning agile maneuvers entirely in simulation has several advantages: (i) maneuvers can simply be specified by reference trajectories and do not require expensive demonstrations by a human pilot, (ii) training is safe and poses no physical risk to the quadrotor, and (iii) the approach scales to a large number of diverse maneuvers, including ones that only the very best human pilots can perform.
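To make that concrete, here is a minimal sketch of what such a sensorimotor policy could look like, assuming PyTorch. The class name, input dimensions, layer sizes, and the explicit reference-trajectory input are illustrative assumptions for this article, not the exact architecture reported in the paper.

import torch
import torch.nn as nn

class AcrobaticsPolicy(nn.Module):
    # Illustrative policy: abstracted sensor observations -> control command.
    # All sizes below are assumptions; the published network differs in its details.
    def __init__(self, feature_track_dim=40, imu_dim=30, ref_dim=12):
        super().__init__()
        # Separate encoders for the visual abstraction (feature tracks),
        # the inertial abstraction (an integrated IMU window), and the
        # reference trajectory the drone is asked to fly.
        self.vision = nn.Sequential(nn.Linear(feature_track_dim, 64), nn.ReLU())
        self.inertial = nn.Sequential(nn.Linear(imu_dim, 64), nn.ReLU())
        self.reference = nn.Sequential(nn.Linear(ref_dim, 64), nn.ReLU())
        # Fused head outputs a four-dimensional control command (read here as a
        # collective thrust plus three body rates, one common parameterization).
        self.head = nn.Sequential(nn.Linear(192, 128), nn.ReLU(), nn.Linear(128, 4))

    def forward(self, feature_tracks, imu_window, reference):
        z = torch.cat([self.vision(feature_tracks),
                       self.inertial(imu_window),
                       self.reference(reference)], dim=-1)
        return self.head(z)

At flight time a policy of this kind would be queried at every control step with the latest sensor abstractions and the current point on the reference maneuver; turning its outputs into individual motor commands is the job of a conventional low-level controller and is not part of the sketch.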

The algorithm transfers its knowledge to reality by using appropriate abstractions of the visual and inertial inputs (i.e., feature tracks and integrated inertial measurements), which narrows the gap between the simulated and physical world. Indeed, without physically accurate modeling of the world or any fine-tuning on real-world data, the trained neural network can be deployed on a real quadrotor to perform acrobatic maneuvers.
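One simple way to picture the inertial side of such an abstraction: instead of feeding raw, noisy gyroscope samples to the network, a short window of measurements can be integrated into a single rotation increment, which looks much the same whether it comes from the simulator or from the real IMU. A minimal sketch, assuming NumPy and a plain body-rate integration (the exact preprocessing used in the paper may differ):

import numpy as np

def integrate_gyro(gyro_samples, dt):
    # Collapse a short window of raw gyroscope readings (rad/s, shape (N, 3))
    # into an approximate rotation increment (rad) about each body axis.
    gyro_samples = np.asarray(gyro_samples)
    return np.sum(gyro_samples * dt, axis=0)

# Example: 10 samples at 100 Hz of a roughly constant 1 rad/s roll rate
window = np.tile([1.0, 0.0, 0.0], (10, 1))
print(integrate_gyro(window, dt=0.01))  # ~[0.1, 0.0, 0.0] rad

Feature tracks play the analogous role on the visual side: rather than raw pixels, the network sees how sparse image points move between frames, a quantity that is much easier to reproduce faithfully in simulation.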

Towards fully autonomous drones
Within a few hours of training in simulation, the algorithm learns to fly acrobatic maneuvers with an accuracy comparable to that of professional human pilots. Nevertheless, the research team warns that there is still a significant gap between what human pilots and autonomous drones can do. “The best human pilots still have an edge over autonomous drones given their ability to quickly interpret and adapt to unexpected situations and changes in the environment,” says Prof. Scaramuzza.

Paper: E. Kaufmann*, A. Loquercio*, R. Ranftl, M. Müller, V. Koltun, D. Scaramuzza, “Deep Drone Acrobatics”, Robotics: Science and Systems (RSS), 2020.
Paper
Video
Code


