Self-flying drone dips, darts and dives through trees at 30 mph: Video demo


03 November 2015




By Adam Conner-Simons, MIT CSAIL

A researcher from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has developed an obstacle-detection system that allows a drone to autonomously dip, dart and dive through a tree-filled field at upwards of 30 miles per hour.

“Everyone is building drones these days, but nobody knows how to get them to stop running into things,” says CSAIL PhD student Andrew Barry, who developed the system as part of his thesis with MIT professor Russ Tedrake. “Sensors like lidar are too heavy to put on small aircraft, and creating maps of the environment in advance isn’t practical. If we want drones that can fly quickly and navigate in the real world, we need better, faster algorithms.”

Running 20 times faster than existing software, Barry’s stereo-vision algorithm allows the drone to detect objects and build a full map of its surroundings in real time. Operating at 120 frames per second, the software – which is open-source and available online – extracts depth information in just 8.3 milliseconds per frame.

The drone, which weighs just over a pound and has a 34-inch wingspan, was made from off-the-shelf components costing about $1,700, including a camera on each wing and two processors no fancier than the ones you’d find on a cellphone.

How it works

Traditional algorithms for this problem use the images captured by each camera and search through the depth field at multiple distances – 1 meter, 2 meters, 3 meters, and so on – to determine whether an object is in the drone’s path.

Such approaches, however, are computationally intensive, meaning that the drone cannot fly any faster than five or six miles per hour without specialized processing hardware.
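
To see why, here is a minimal, hypothetical Python/NumPy sketch of that kind of exhaustive search. The camera parameters, block size, matching threshold, and function names are illustrative assumptions for this article, not values or code from Barry’s system:

```python
import numpy as np

# Illustrative camera parameters (assumed for this sketch, not from Barry's drone)
FOCAL_PX = 400.0        # focal length, in pixels
BASELINE_M = 0.86       # spacing between the two wing cameras, in meters
BLOCK = 8               # size of the pixel blocks being compared
MATCH_THRESHOLD = 20.0  # mean absolute difference below which two blocks "match"


def disparity_for_depth(depth_m):
    """Horizontal pixel shift between the two views for a point depth_m away."""
    return int(round(FOCAL_PX * BASELINE_M / depth_m))


def obstacles_at_depth(left, right, depth_m):
    """Return (x, y, depth) for blocks whose left/right patches agree at this depth."""
    d = disparity_for_depth(depth_m)
    h, w = left.shape
    hits = []
    for y in range(0, h - BLOCK, BLOCK):
        for x in range(d, w - BLOCK, BLOCK):
            diff = np.abs(left[y:y + BLOCK, x:x + BLOCK].astype(float)
                          - right[y:y + BLOCK, x - d:x - d + BLOCK].astype(float))
            if diff.mean() < MATCH_THRESHOLD:
                hits.append((x, y, depth_m))
    return hits


def full_depth_sweep(left, right, depths_m=range(1, 11)):
    """The 'traditional' approach: repeat the whole image search once per candidate depth."""
    detections = []
    for depth in depths_m:          # this outer loop is what makes the method slow
        detections += obstacles_at_depth(left, right, float(depth))
    return detections
```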

Barry’s realization was that, at the speeds his drone travels, the world simply does not change much between frames. Because of that, he could get away with computing just a small subset of measurements – specifically, only those at a single distance of 10 meters ahead.

“You don’t have to know about anything that’s closer or further than that,” Barry says. “As you fly, you push that 10-meter horizon forward, and, as long as your first 10 meters are clear, you can build a full map of the world around you.”
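
In code, that single-depth idea collapses the per-depth loop from the sketch above into one fixed check. This reuses the hypothetical obstacles_at_depth() helper defined earlier and is likewise only an illustration:

```python
def obstacles_at_horizon(left, right, horizon_m=10.0):
    """Single-depth check: only ask whether anything sits roughly 10 m ahead.

    Anything closer was already spotted on an earlier frame, back when it was
    itself 10 m away; anything farther can wait until the drone gets closer.
    The per-depth loop from full_depth_sweep() disappears entirely.
    """
    return obstacles_at_depth(left, right, horizon_m)
```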

While such a method might seem limiting, the software can quickly recover the missing depth information by integrating results from the drone’s odometry and previous distances.
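
A rough sketch of that bookkeeping, under the same assumptions as the snippets above: obstacles detected at the 10-meter horizon are kept in a drone-relative list and pulled closer as odometry reports forward motion, so the space inside the horizon stays populated without being re-measured.

```python
def propagate_detections(detections, forward_motion_m):
    """Pull previously detected obstacles closer as the drone flies forward.

    `detections` holds (x, y, depth_m) tuples from earlier frames;
    `forward_motion_m` is the distance flown since the last frame, as
    reported by the drone's odometry. Obstacles already passed are dropped.
    """
    moved = [(x, y, depth - forward_motion_m) for (x, y, depth) in detections]
    return [(x, y, depth) for (x, y, depth) in moved if depth > 0.0]


# One hypothetical frame of the loop: shift the old map forward, then add
# fresh detections made at the 10-meter horizon.
# world_map = propagate_detections(world_map, odometry_delta_m)
# world_map += obstacles_at_horizon(left_frame, right_frame)
```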

Barry says that he hopes to further improve the algorithms so that they can work at more than one depth, and in environments as dense as a thick forest.

“Our current approach results in occasional incorrect estimates known as ‘drift,’” he says. “As hardware advances allow for more complex computation, we will be able to search at multiple depths and therefore check and correct our estimates. This lets us make our algorithms more aggressive, even in environments with larger numbers of obstacles.”

LINKS

Paper: “Pushbroom Stereo for High-Speed Navigation in Cluttered Environments”

CSAIL’s Robot Locomotion Group

Andrew Barry

Russ Tedrake

RELATED NEWS STORIES

MIT News: “Charging solution for delivery drones: take after our feathered friends”

MIT News: “Autonomous robot flies indoors”



CSAIL MIT: The Computer Science and Artificial Intelligence Laboratory – known as CSAIL – is the largest research laboratory at MIT and one of the world’s most important centers of information technology research.




