ManyEars: open source framework for sound processing


by Sabine Hauert
08 February 2013



One of the latest papers in the journal Autonomous Robots presents ManyEars, an open framework for robot audition.

Making robots that are able to localize, track and separate multiple sound sources, even in noisy places, is essential for their deployment in our everyday environments. This could, for example, allow them to process human speech, even in crowded places, or identify noises of interest and where they came from. Unlike vision, however, there are few software and hardware tools that can easily be integrated into robotic platforms.

The ManyEars open source framework allows users to easily experiment with robot audition. The software is available for download and is compatible with ROS (Robot Operating System). Its modular design makes it possible to interface with different microphone configurations and hardware, thereby allowing the same software package to be used for different robots. A graphical user interface is provided for tuning parameters and visualizing information about the sound sources in real time. The figure below illustrates the architecture of the software library, which is composed of five modules: Preprocessing, Localization, Tracking, Separation and Postprocessing.
Figure: Structure of the ManyEars software library.
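To make the module chain concrete, here is a minimal conceptual sketch of how one block of multichannel audio might flow through the five stages. It is written in Python for readability and does not use the library's actual C API; every function below is a hypothetical placeholder standing in for the corresponding ManyEars module, not a real identifier from the package.

```python
import numpy as np

# Conceptual sketch of the five-stage pipeline described above:
# Preprocessing -> Localization -> Tracking -> Separation -> Postprocessing.
# All function names are hypothetical placeholders, not ManyEars identifiers.

def preprocess(frames):
    # Window each microphone channel and move to the frequency domain.
    return np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1)

def localize(spectra):
    # Produce candidate source directions (dummy single estimate here).
    return [{"azimuth_deg": 0.0, "energy": float(np.abs(spectra).sum())}]

def track(candidates, tracked):
    # Associate new candidates with sources already being tracked
    # (naive placeholder: accept every candidate as a new source).
    tracked.extend(candidates)
    return tracked

def separate(spectra, tracked):
    # Extract one spectrum per tracked source (placeholder: channel average).
    return [spectra.mean(axis=0) for _ in tracked]

def postprocess(streams):
    # Return each separated stream to the time domain, e.g. for speech recognition.
    return [np.fft.irfft(s) for s in streams]

# Stand-in for one block of 8-channel audio captured from the sound card.
frames = np.random.randn(8, 512)
spectra = preprocess(frames)
tracked = track(localize(spectra), [])
outputs = postprocess(separate(spectra, tracked))
print(len(outputs), "separated stream(s) of", len(outputs[0]), "samples")
```

In the library itself these stages are implemented as portable C modules, which is what allows the same package to be reused across different robots and microphone configurations, as noted above.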
To use the software, a computer, a sound card and microphones are required. ManyEars works with commercially available sound cards and microphones. However, commercial sound cards have limitations when used for embedded robotic applications: they can be expensive, they include functionality that is not required for robot audition, and they take up significant power and space. For these reasons, the authors introduce a customized microphone board and sound card, available as an open hardware solution, that can be mounted on a robot and interfaced with the software package. The board uses an array of microphones, instead of only one or two, thereby allowing a robot to localize, track, and separate multiple sound sources.

The framework is demonstrated using the microphone array on the IRL-1 robot shown below; the placement of the microphones is marked by red circles. Results show that the robot is able to track two human speakers producing uninterrupted speech sequences, even when they are moving and crossing paths. For videos of the IRL-1, check out the lab’s YouTube channel.
Figure: The IRL-1 robot, with microphone positions marked by red circles.
For more information, you can read the following paper:
F. Grondin, D. Létourneau, F. Ferland, V. Rousseau and F. Michaud, “The ManyEars open framework,” Autonomous Robots, Springer US, February 2013.





Sabine Hauert is President of Robohub and Associate Professor at the Bristol Robotics Laboratory.




