Bees’ ‘waggle dance’ may revolutionize how robots talk to each other in disaster zones


18 July 2022




Image credit: rtbilder / Shutterstock.com

By Conn Hastings, science writer

Honeybees use a sophisticated dance to tell their sisters about the location of nearby flowers. This phenomenon is the inspiration for a form of robot-to-robot communication that does not rely on digital networks. A recent study presents a simple technique whereby robots view and interpret each other's movements, or a gesture from a human, to communicate a geographical location. This approach could prove invaluable where network coverage is unreliable or absent, such as in disaster zones.

Where are those flowers and how far away are they? This is the crux of the ‘waggle dance’ performed by honeybees to alert others to the location of nectar-rich flowers. A new study in Frontiers in Robotics and AI has taken inspiration from this technique to devise a way for robots to communicate. The first robot traces a shape on the floor, and the shape’s orientation and the time it takes to trace it tell the second robot the required direction and distance of travel. The technique could prove invaluable in situations where robot labor is required but network communications are unreliable, such as in a disaster zone or in space.

Honeybees excel at non-verbal communication

If you have ever found yourself in a noisy environment, such as a factory floor, you may have noticed that humans are adept at communicating using gestures. Well, we aren’t the only ones. In fact, honeybees take non-verbal communication to a whole new level.

By wiggling their backside while parading through the hive, they can let other honeybees know about the location of food. The direction of this ‘waggle dance’ lets other bees know the direction of the food with respect to the hive and the sun, and the duration of the dance lets them know how far away it is. It is a simple but effective way to convey complex geographical coordinates.

Applying the dance to robots

This ingenious method of communication inspired the researchers behind this latest study to apply it to the world of robotics. Robot cooperation allows multiple robots to coordinate and complete complex tasks. Typically, robots communicate using digital networks, but what happens when these are unreliable, such as during an emergency or in remote locations? Moreover, how can humans communicate with robots in such a scenario?

To address this, the researchers designed a visual communication system for robots with on-board cameras, using algorithms that allow the robots to interpret what they see. They tested the system using a simple task, where a package in a warehouse needs to be moved. The system allows a human to communicate with a ‘messenger robot’, which supervises and instructs a ‘handling robot’ that performs the task.

Robot dancing in practice

In this situation, the human can communicate with the messenger robot using gestures, such as a raised hand with a closed fist. The robot can recognize the gesture using its on-board camera and skeletal tracking algorithms. Once the human has shown the messenger robot where the package is, it conveys this information to the handling robot.
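As a rough illustration of this step, the sketch below classifies a "raised hand with a closed fist" from skeletal keypoints. It assumes a pose tracker upstream has already extracted normalised 2D keypoints and a fingertip-spread measure; every name and threshold here is hypothetical, not the study's actual pipeline.

```python
# Minimal sketch of the gesture-classification step, assuming a skeletal
# tracker has already produced normalised 2D keypoints (names and thresholds
# are illustrative only; this is not the study's actual pipeline).

def is_raised_fist(keypoints: dict[str, tuple[float, float]],
                   fingertip_spread: float,
                   spread_threshold: float = 0.05) -> bool:
    """Return True if the wrist is above the shoulder (hand raised) and the
    fingertips are bunched together (closed fist).

    keypoints: image coordinates normalised to [0, 1], with y increasing downward.
    fingertip_spread: mean distance of the fingertips from their centroid,
    assumed to come from a hand tracker upstream.
    """
    wrist_y = keypoints["right_wrist"][1]
    shoulder_y = keypoints["right_shoulder"][1]
    hand_raised = wrist_y < shoulder_y   # smaller y means higher in the image
    fist_closed = fingertip_spread < spread_threshold
    return hand_raised and fist_closed

# Example with made-up keypoints: wrist above shoulder, fingertips bunched.
pose = {"right_shoulder": (0.60, 0.40), "right_wrist": (0.62, 0.25)}
print(is_raised_fist(pose, fingertip_spread=0.02))  # True
```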

This involves positioning itself in front of the handling robot and tracing a specific shape on the ground. The orientation of the shape indicates the required direction of travel, while the length of time it takes to trace it indicates the distance. This robot dance would make a worker bee proud, but did it work?
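Before getting to the results, the mapping itself can be made concrete. The following is a minimal sketch of how a bearing and distance might be encoded into a traced shape and decoded by the observing robot. The time-per-distance scale (SECONDS_PER_METRE) and the function names are invented for illustration; the paper's actual parameters may differ.

```python
# Assumed scale: seconds of tracing per metre of travel (illustrative only).
SECONDS_PER_METRE = 2.0

def encode_waggle(direction_deg: float, distance_m: float) -> tuple[float, float]:
    """Encode a target as (shape orientation in degrees, trace duration in seconds).

    The messenger robot orients its traced shape along direction_deg and
    spends distance_m * SECONDS_PER_METRE seconds tracing it.
    """
    return direction_deg % 360.0, distance_m * SECONDS_PER_METRE

def decode_waggle(observed_orientation_deg: float, observed_duration_s: float) -> tuple[float, float]:
    """Recover (direction in degrees, distance in metres) from an observed trace."""
    return observed_orientation_deg % 360.0, observed_duration_s / SECONDS_PER_METRE

# Example: the package lies 3 m away at a bearing of 45 degrees.
orientation, duration = encode_waggle(45.0, 3.0)
direction, distance = decode_waggle(orientation, duration)
print(f"trace at {orientation:.0f} deg for {duration:.1f} s -> travel {distance:.1f} m at {direction:.0f} deg")
```

As with the bees' dance, the receiver needs no shared network or language, only line of sight and an agreed convention for converting trace duration back into distance.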

The researchers put it to the test in a computer simulation and with real robots and human volunteers. The robots interpreted the gestures correctly 90% of the time in simulation and 93.3% of the time in the real-world trials, highlighting the potential of the technique.

“This technique could be useful in places where communication network coverage is insufficient and intermittent, such as robot search-and-rescue operations in disaster zones or in robots that undertake space walks,” said Prof Abhra Roy Chowdhury of the Indian Institute of Science, senior author on the study. “This method depends on robot vision through a simple camera, and therefore it is compatible with robots of various sizes and configurations and is scalable,” added Kaustubh Joshi of the University of Maryland, first author on the study.


Video credit: K Joshi and AR Chowdhury


This article was originally published on the Frontiers blog.


