Bees’ ‘waggle dance’ may revolutionize how robots talk to each other in disaster zones


18 July 2022




Image credit: rtbilder / Shutterstock.com

By Conn Hastings, science writer

Honeybees use a sophisticated dance to tell their sisters about the location of nearby flowers. This phenomenon inspired a form of robot-to-robot communication that does not rely on digital networks. A recent study presents a simple technique in which robots view and interpret each other’s movements, or a gesture from a human, to communicate a geographical location. This approach could prove invaluable when network coverage is unreliable or absent, such as in disaster zones.

Where are those flowers and how far away are they? This is the crux of the ‘waggle dance’ performed by honeybees to alert others to the location of nectar-rich flowers. A new study in Frontiers in Robotics and AI has taken inspiration from this technique to devise a way for robots to communicate. The first robot traces a shape on the floor, and the shape’s orientation and the time it takes to trace it tell the second robot the required direction and distance of travel. The technique could prove invaluable in situations where robot labor is required but network communications are unreliable, such as in a disaster zone or in space.

Honeybees excel at non-verbal communication

If you have ever found yourself in a noisy environment, such as a factory floor, you may have noticed that humans are adept at communicating using gestures. Well, we aren’t the only ones. In fact, honeybees take non-verbal communication to a whole new level.

By wiggling their backside while parading through the hive, they can let other honeybees know about the location of food. The direction of this ‘waggle dance’ lets other bees know the direction of the food with respect to the hive and the sun, and the duration of the dance lets them know how far away it is. It is a simple but effective way to convey complex geographical coordinates.

Applying the dance to robots

This ingenious method of communication inspired the researchers behind this latest study to apply it to the world of robotics. Robot cooperation allows multiple robots to coordinate and complete complex tasks. Typically, robots communicate using digital networks, but what happens when these are unreliable, such as during an emergency or in remote locations? Moreover, how can humans communicate with robots in such a scenario?

To address this, the researchers designed a visual communication system for robots with on-board cameras, using algorithms that allow the robots to interpret what they see. They tested the system using a simple task, where a package in a warehouse needs to be moved. The system allows a human to communicate with a ‘messenger robot’, which supervises and instructs a ‘handling robot’ that performs the task.

Robot dancing in practice

In this situation, the human can communicate with the messenger robot using gestures, such as a raised hand with a closed fist. The robot can recognize the gesture using its on-board camera and skeletal tracking algorithms. Once the human has shown the messenger robot where the package is, it conveys this information to the handling robot.
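How might such gesture recognition look in code? Below is a minimal sketch, not the authors’ implementation: it assumes a pose-estimation model has already produced named skeletal keypoints and a hand-openness score, and simply checks that the wrist is above the shoulder while the hand is closed. The keypoint names, threshold, and data structures are all illustrative assumptions.

```python
# Minimal sketch (not the study's code) of detecting a "raised hand,
# closed fist" gesture from skeletal-tracking output. The Keypoint type,
# keypoint names, and threshold are assumptions for illustration; a real
# system would obtain them from a pose-estimation model.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Keypoint:
    x: float  # image coordinates in pixels; y grows downward
    y: float

def is_raised_fist(kp: dict[str, Keypoint], hand_openness: float) -> bool:
    """Return True if the wrist is above the shoulder and the hand is closed.

    hand_openness is a hypothetical score in [0, 1] from a hand-shape
    classifier: 0 = closed fist, 1 = open palm.
    """
    wrist, shoulder = kp["right_wrist"], kp["right_shoulder"]
    raised = wrist.y < shoulder.y   # smaller y means higher in the image
    closed = hand_openness < 0.3    # assumed threshold for a closed fist
    return raised and closed

# Example: wrist above the shoulder, hand nearly closed -> gesture detected
pose = {
    "right_wrist": Keypoint(320, 140),
    "right_shoulder": Keypoint(300, 260),
}
print(is_raised_fist(pose, hand_openness=0.1))  # True
```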

To convey the location, the messenger robot positions itself in front of the handling robot and traces a specific shape on the ground. The orientation of the shape indicates the required direction of travel, while the length of time it takes to trace it indicates the distance. This robot dance would make a worker bee proud, but did it work?
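Before turning to the results, here is a minimal sketch of how the encoding itself might work. The time-to-distance scale factor and the angle convention below are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch of the dance-like encoding: the traced shape's
# orientation gives the direction of travel, and the time spent tracing
# gives the distance. SECONDS_PER_METER and the angle convention are
# assumed for illustration only.

import math

SECONDS_PER_METER = 2.0  # assumed: 2 s of tracing encodes 1 m of travel

def encode(direction_deg: float, distance_m: float) -> tuple[float, float]:
    """Messenger robot: choose the trace orientation and duration."""
    return direction_deg, distance_m * SECONDS_PER_METER

def decode(orientation_deg: float, duration_s: float) -> tuple[float, float]:
    """Handling robot: recover direction and distance, then a goal offset."""
    distance = duration_s / SECONDS_PER_METER
    theta = math.radians(orientation_deg)
    goal = (distance * math.cos(theta), distance * math.sin(theta))
    print(f"goal offset: ({goal[0]:.2f} m, {goal[1]:.2f} m)")
    return orientation_deg, distance

# The messenger encodes "4 m away at 30 degrees"; the handler decodes it
# from what its camera observed (here, passed directly for simplicity).
orientation, duration = encode(30.0, 4.0)
decode(orientation, duration)
```

One appeal of encoding distance in the duration of the trace rather than in the size of the shape is that, presumably, the message does not depend on how far away the observing camera is.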

The researchers put it to the test in a computer simulation and in real-world trials with robots and human volunteers. The robots interpreted the gestures correctly 90% of the time in simulation and 93.3% of the time in the real-world trials, highlighting the potential of the technique.

“This technique could be useful in places where communication network coverage is insufficient and intermittent, such as robot search-and-rescue operations in disaster zones or in robots that undertake space walks,” said Prof Abhra Roy Chowdhury of the Indian Institute of Science, senior author on the study. “This method depends on robot vision through a simple camera, and therefore it is compatible with robots of various sizes and configurations and is scalable,” added Kaustubh Joshi of the University of Maryland, first author on the study.


Video credit: K Joshi and AR Chowdhury


This article was originally published on the Frontiers blog.


