Faster, more nimble drones on the horizon


by MIT News
31 May 2017



Engineers at MIT have come up with an algorithm to tune a Dynamic Vision Sensor (DVS) camera, simplifying a scene to its most essential visual elements and potentially enabling the development of faster drones. Image: Jose-Luis Olivares/MIT

There’s a limit to how fast autonomous vehicles can fly while safely avoiding obstacles. That’s because the cameras used on today’s drones can only process images so fast, frame by individual frame. Beyond roughly 30 miles per hour, a drone is likely to crash simply because its cameras can’t keep up.

Recently, researchers in Zurich invented a new type of camera, known as the Dynamic Vision Sensor (DVS), that continuously visualizes a scene in terms of changes in brightness, at extremely short, microsecond intervals. But this deluge of data can overwhelm a system, making it difficult for a drone to distinguish an oncoming obstacle through the noise.

Now engineers at MIT have come up with an algorithm to tune a DVS camera to detect only specific changes in brightness that matter for a particular system, vastly simplifying a scene to its most essential visual elements.

The results, which they presented at the IEEE American Control Conference in Seattle, can be applied to any linear system that directs a robot to move from point A to point B as a response to high-speed visual data. Eventually, the results could also help to increase the speeds for more complex systems such as drones and other autonomous robots.

“There is a new family of vision sensors that has the capacity to bring high-speed autonomous flight to reality, but researchers have not developed algorithms that are suitable to process the output data,” says lead author Prince Singh, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We present a first approach for making sense of the DVS’ ambiguous data, by reformulating the inherently noisy system into an amenable form.”

Singh’s co-authors are MIT visiting professor Emilio Frazzoli of the Swiss Federal Institute of Technology in Zurich, and Sze Zheng Yong of Arizona State University.

Taking a visual cue from biology

The DVS camera is the first commercially available “neuromorphic” sensor — a class of sensors that is modeled after the vision systems in animals and humans. In the very early stages of processing a scene, photosensitive cells in the human retina, for example, are activated in response to changes in luminosity, in real time.

Neuromorphic sensors are designed with multiple circuits arranged in parallel, similarly to photosensitive cells, that activate and produce blue or red pixels on a computer screen in response to either a drop or spike in brightness.

Instead of a typical video feed, a drone with a DVS camera would “see” a grainy depiction of pixels that switch between two colors, depending on whether that point in space has brightened or darkened at any given moment. The sensor requires no image processing and is designed to enable, among other applications, high-speed autonomous flight.
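To make this concrete, the short Python sketch below mimics how such a sensor might report its two kinds of events: it compares per-pixel log-brightness between two instants and fires an event only where the change exceeds a contrast threshold. The function, frame values, and threshold here are illustrative assumptions, not the camera's actual circuitry.

import numpy as np

def dvs_events(prev_frame, curr_frame, threshold=0.15):
    """Toy model of DVS event generation (an illustrative assumption, not the real sensor).

    Emits +1 ("ON", brightened), -1 ("OFF", darkened), or 0 (no event)
    wherever the per-pixel change in log-intensity exceeds the threshold.
    """
    eps = 1e-6  # avoid log(0)
    delta = np.log(curr_frame + eps) - np.log(prev_frame + eps)

    events = np.zeros_like(delta, dtype=np.int8)
    events[delta > threshold] = 1    # brightness increased -> ON event
    events[delta < -threshold] = -1  # brightness decreased -> OFF event
    return events

# Example: a bright spot moving one pixel to the right between two instants.
prev = np.zeros((5, 5)); prev[2, 1] = 1.0
curr = np.zeros((5, 5)); curr[2, 2] = 1.0
print(dvs_events(prev, curr))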

Researchers have used DVS cameras to enable simple linear systems to see and react to high-speed events, and they have designed controllers, or algorithms, to quickly translate DVS data and carry out appropriate responses. For example, engineers have designed controllers that interpret pixel changes in order to control the movements of a robotic goalie to block an incoming soccer ball, as well as to direct a motorized platform to keep a pencil standing upright.

But for any given DVS system, researchers have had to start from scratch in designing a controller to translate DVS data in a meaningful way for that particular system.

“The pencil and goalie examples are very geometrically constrained, meaning if you give me those specific scenarios, I can design a controller,” Singh says. “But the question becomes, what if I want to do something more complicated?”

Cutting through the noise

In the team’s new paper, the researchers report developing a sort of universal controller that can translate DVS data in a meaningful way for any simple linear, robotic system. The key to the controller is that it identifies the ideal value for a parameter Singh calls “H,” or the event-threshold value, signifying the minimum change in brightness that the system can detect.

Setting the H value for a particular system can essentially determine that system’s visual sensitivity: A system with a low H value would be programmed to take in and interpret changes in luminosity that range from very small to relatively large, while a high H value would exclude small changes, and only “see” and react to large variations in brightness.
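As a rough illustration of that trade-off, the Python sketch below counts how many events would fire at a few candidate H values for a hypothetical batch of per-pixel brightness changes: mostly small sensor noise plus a handful of large changes from a real object. All of the numbers are invented for illustration and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel changes in log-brightness at one instant:
# mostly small sensor noise, plus a few large changes from a moving object.
noise = rng.normal(0.0, 0.05, size=10_000)
signal = rng.normal(0.8, 0.1, size=50)
delta = np.concatenate([noise, signal])

for H in (0.02, 0.1, 0.5):  # candidate event-threshold values
    n_events = np.sum(np.abs(delta) >= H)
    print(f"H = {H:.2f}: {n_events} events fired")
# A low H fires on nearly every pixel (noise included); a high H keeps
# only the large changes produced by the actual object.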

The researchers formulated an algorithm first by taking into account the possibility that a change in brightness would occur for every “event,” or pixel activated in a particular system. They also estimated the probability for “spurious events,” such as a pixel randomly misfiring, creating false noise in the data.

Once they derived a formula with these variables in mind, they were able to work it into a well-known control technique, the H-infinity robust controller, to determine the H value for that system.
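The paper folds these probabilities into the H-infinity synthesis itself; the toy Python sketch below only illustrates the underlying trade-off in the simplest way, sweeping candidate thresholds and balancing the chance of missing a genuine brightness change against the chance of firing on a spurious pixel. The noise model, probabilities, and cost are assumptions made up for illustration, not the authors' formulation.

import numpy as np
from scipy.stats import norm

# Assumed (made-up) models: genuine brightness changes ~ N(0.6, 0.2);
# spurious misfires have magnitude ~ N(0, 0.1) and occur with
# probability 0.05 per pixel per interval.
p_spurious = 0.05
true_mean, true_sd = 0.6, 0.2
noise_sd = 0.1

def tradeoff(H):
    """Probability of missing a real event vs. firing on noise, at threshold H."""
    p_miss = norm.cdf(H, loc=true_mean, scale=true_sd)                   # real change falls below H
    p_false = p_spurious * 2 * (1 - norm.cdf(H, loc=0, scale=noise_sd))  # |noise| exceeds H
    return p_miss, p_false

# Sweep candidate thresholds and keep the one with the lowest combined cost.
candidates = np.linspace(0.05, 1.0, 96)
costs = [sum(tradeoff(H)) for H in candidates]
best_H = candidates[int(np.argmin(costs))]
print(f"Best H under this toy model: {best_H:.2f}")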

The team’s algorithm can now be used to set a DVS camera’s sensitivity to detect the most essential changes in brightness for any given linear system, while excluding extraneous signals. The researchers performed a numerical simulation to test the algorithm, identifying an H value for a theoretical linear system, which they found was able to remain stable and carry out its function without being disrupted by extraneous pixel events.

“We found that this H threshold serves as a ‘sweet-spot,’ so that a system doesn’t become overwhelmed with too many events,” Singh says. He adds that the new results “unify control of many systems,” and represent a first step toward faster, more stable autonomous flying robots, such as the RoboBee, developed by researchers at Harvard University.

“We want to break that speed limit of 20 to 30 miles per hour, and go faster without colliding,” Singh says. “The next step may be to combine DVS with a regular camera, which can tell you, based on the DVS rendering, that an object is a couch versus a car, in real time.”

This research was supported in part by the Singapore National Research Foundation through the SMART Future Urban Mobility project.


