Researchers use single joystick to control swarm of RC robots

09 September 2013


What can you do with 12 RC robots all slaved to the same joystick remote control? Common sense might say you need 11 more remotes, but our video demonstrates that you can steer all the robots to any desired final position using an algorithm we designed. The algorithm exploits rotational noise: each time the joystick tells the robots to turn, every robot turns a slightly different amount due to random wheel slip. We use these differences to slowly push the robots to their goal positions. The current algorithm is slow, so we’re designing new algorithms that are 200x faster. You can help by playing our online game:

The algorithm extends to any number of robots; this video shows a simulation with 120 robots and a more complicated goal pattern.

A swarm of r-one robots designed and built by Rice undergraduates forming the Rice University logo.  The robots are all connected to the same joystick remote control – so each robot gets exactly the same commands, but by using an algorithm we designed, they can be steered to any formation.

Our research is motivated by real-world challenges in microrobotics and nanorobotics, where often all the robots are steered by the same control signal (IROS 2012 paper). Our colleagues Yan Ou and Agung Julius at RPI and Paul Kim and MinJun Kim at Drexel use an external magnetic field to steer single-celled protozoa swimming in a Petri dish. The same magnetic field is applied to every protozoan. We want controllers that steer them to do useful tasks such as targeted drug delivery and mobile sensing.

Electromagnets (left) create a magnetic field that applies the same control signal to every protozoan in the beaker (right).

Other examples include bacteria that move toward a light source (phototaxis), single-celled organisms attracted by a chemical source (chemotaxis), microrobots driven by an external magnetic field (magmites) or capacitive charge (scratch-drive robots), and synthetic molecules with light-driven motors (nanocars).

How it works
To emulate micro- or nanorobots, our robots are programmed to behave as simple remote-control cars, all tuned to listen on the same frequency.

Our robots all receive the same broadcast signal, and all either turn in place or drive x mm forwards or backwards.

We can then either drive the robots around with a simple joystick or let a computer apply the control. Regardless, the commands are the same, consisting merely of “go forwards/backwards x seconds” or “turn left for 2 seconds”. The computer has an advantage over human players because it can precisely measure the position and orientation of every robot, and compute the position error, using an overhead camera and a printed AprilTag barcode attached to each robot.
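The broadcast model is easy to sketch in code. Below is a minimal Python toy of one shared command going to every robot at once; the names, units, and noise model are our own illustration, not the paper's implementation:

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Robot:
    x: float
    y: float
    theta: float  # heading in radians

def broadcast(robots, command, value, turn_noise=0.05):
    """Apply one broadcast command to every robot at once.

    command is 'forward' (value = distance in mm) or 'turn'
    (value = angle in radians). Turns pick up per-robot rotational
    noise, modeling random wheel slip.
    """
    for r in robots:
        if command == 'forward':
            r.x += value * math.cos(r.theta)
            r.y += value * math.sin(r.theta)
        elif command == 'turn':
            # every robot turns a slightly different amount
            r.theta += value * (1.0 + random.gauss(0.0, turn_noise))

random.seed(0)  # reproducible noise for this sketch
robots = [Robot(0.0, 20.0 * i, 0.0) for i in range(3)]
broadcast(robots, 'turn', math.pi / 2)   # all turn roughly 90 degrees
broadcast(robots, 'forward', 100.0)      # all drive roughly 100 mm
```

After the turn command, every robot has a slightly different heading; the algorithm exploits exactly those differences.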

The signal broadcast to every robot is either “go forwards/backwards x seconds” or “turn left for 2 seconds”.

Our control law is globally asymptotically stable, which means we can start the robots in any configuration and they will be guided to our desired formation. In the video, the robots start in a rectangle and are guided to form the letter ‘R’. Next, 120 simulated robots start arranged as the word ‘Robotics’ and are guided to form our university logo and beloved owl mascot.

The math

Brief introductory video on Ensemble Control of Robotic Systems.

Our initial work, with Tim Bretl at the University of Illinois, investigated a parallel problem: robust, open-loop control of a robot with unknown parameters. We chose two classical robot platforms to demonstrate our approach, the nonholonomic unicycle and the plate-ball manipulator.

The nonholonomic unicycle is a canonical model for mobile robotics and can model robots including Roombas, tanks, and cars. It has two inputs: forward speed and turning rate. It is easy to design an open-loop input sequence to steer the robot if the wheel size is known, but if the wheel size is unknown, that same input sequence is scaled by the wheel size and can move the robot to drastically different positions (as shown in the video below). In our paper we showed how to generate open-loop input sequences that are robust to an unknown wheel size. The key insight is to design the sequence to move an imaginary continuum of robots, containing every possible wheel size, from start to goal. The steering algorithm is based on piecewise-constant inputs and Taylor series approximations, which give us a clear method for increasing precision: if we want the robot continuum to be closer to the goal, we simply increase the order of the Taylor series approximation.
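A quick simulation makes the scaling effect concrete. This is a sketch of our own (the function name and command format are illustrative): the same open-loop sequence, with all speeds scaled by wheel radius, lands robots of different wheel sizes in very different places.

```python
import math

def drive_unicycle(r_wheel, inputs, dt=0.01):
    """Euler-integrate a unicycle whose linear and angular speeds both
    scale with wheel radius.

    inputs: list of (forward_rate, turn_rate, duration) commands given
    in wheel rotations; actual speeds are r_wheel times the command.
    """
    x = y = theta = 0.0
    for v_cmd, w_cmd, duration in inputs:
        v, w = r_wheel * v_cmd, r_wheel * w_cmd
        t = 0.0
        while t < duration:
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += w * dt
            t += dt
    return x, y, theta

# the same open-loop command sequence...
seq = [(1.0, 0.0, 1.0), (0.0, 1.0, 1.0), (1.0, 0.0, 1.0)]
# ...produces different final poses for different wheel sizes
small = drive_unicycle(0.5, seq)
large = drive_unicycle(1.0, seq)
```

Note that the final heading scales exactly with wheel radius, which is why a single sequence cannot be correct for all wheel sizes at once.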

A new approach to the classic plate-ball manipulator allows us to independently orient multiple spheres with the same control inputs (see our IROS 2012 paper).

We applied a similar approach to the plate-ball manipulator, a canonical model for robotic manipulation by rolling. In the classical version of this system, a ball is held between two parallel plates and manipulated by maneuvering the upper plate while holding the lower plate fixed. The ball can be brought to any position and orientation through translations of the upper plate. The two inputs are the speed of the ball center along the x-axis and the speed along the y-axis. Changing the ball size inversely affects the rate of orientation change.
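The inverse dependence on ball size is a one-liner to check. This sketch assumes the standard rolling-without-slipping model, where the ball's center moves at half the upper plate's speed:

```python
import math

def roll_ball(radius, plate_dx):
    """Rotation (radians) a ball of the given radius picks up when the
    upper plate translates by plate_dx while the lower plate stays fixed.

    Rolling without slipping on both plates: the ball's center moves
    plate_dx / 2, so the ball rotates through (plate_dx / 2) / radius.
    """
    return (plate_dx / 2.0) / radius

# the same plate motion rotates a small ball twice as far as a ball
# of twice the radius
theta_small = roll_ball(1.0, math.pi)  # pi/2 radians
theta_large = roll_ball(2.0, math.pi)  # pi/4 radians
```

Doubling the radius halves the orientation change for the same plate motion, which is the lever the ensemble controller uses to orient balls of different sizes independently.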

Key insights
For robust control, we steered one real robot with a particular wheel size by imagining steering a continuum of robots of all possible wheel sizes. Later, with Cem Onyuksel and Tim Bretl, we realized we could instead use the same input to steer many real robots. Our approach, based on a control-Lyapunov function, allowed us to control the position of any number of robots using the same broadcast control signal. You can test this yourself by purchasing several RC cars tuned to the same radio frequency. If you command the cars to go forward, all move forward. If you command them to turn, all turn – but due to process noise each turns a slightly different amount. Our analysis, published in another 2012 IROS article, shows that rotational noise improves control, but translational noise impairs control.
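As a toy illustration of why rotational noise helps, here is a greedy sketch of our own devising (a simplification, not the control-Lyapunov law from the paper): broadcast a turn, let each robot slip by a different amount, and broadcast the shared forward step only when it lowers the summed squared position error.

```python
import math
import random

def total_error(robots, goals):
    """Summed squared distance from each robot to its goal."""
    return sum((x - gx) ** 2 + (y - gy) ** 2
               for (x, y, _), (gx, gy) in zip(robots, goals))

def step(robots, goals, turn=0.3, turn_noise=0.1, dist=0.05):
    # broadcast a turn: every robot slips a little differently
    for r in robots:
        r[2] += turn * (1.0 + random.gauss(0.0, turn_noise))
    # broadcast the shared forward step only when it reduces the
    # summed error (a greedy stand-in for the Lyapunov-decrease test)
    moved = [[x + dist * math.cos(t), y + dist * math.sin(t), t]
             for x, y, t in robots]
    if total_error(moved, goals) < total_error(robots, goals):
        robots[:] = moved

random.seed(1)  # reproducible noise for this sketch
robots = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]  # (x, y, heading)
goals = [(0.0, 1.0), (2.0, 1.0)]
for _ in range(2000):
    step(robots, goals)
```

Because accepted steps never increase the error and the drifting headings keep opening new directions to exploit, the total error shrinks over time, although this greedy variant is far slower than the published controller.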

The three hands of a pocket watch are all set by the same knob. In the same way, our turning command is sent to every robot. Unlike position, the orientation of each robot cannot be independently controlled. Image credit: Alexander T. Carroll.

Our algorithm allowed us to control the final position of n robots, but we could not control the final orientation. To understand why, consider setting a pocket watch with hour, minute, and second hands. The hour and minute hands overlap 22 times per day, every 12/11 hours (the first crossing is at 1:05:27), but the hour, minute, and second hands overlap only twice: midnight and noon.  In the same way, our turning command is sent to every robot. Even steering 10 robots to point in the same direction is like flipping a coin until you get 10 heads in a row. This seemed a closed door. Then I saw Bill Amend’s Foxtrot cartoon on January 1st 2012, where the strip’s younger brother steers 64 differential-drive robots to launch rubber darts at his older sister. The younger brother was holding a simple remote controller – just like our problem.  After a day of effort, we proved we could both predict the final orientation of each robot and arbitrarily pick the final position of each robot – sufficient for attacking, surrounding, or defending targets. The video for our upcoming IROS 2013 paper illustrates this algorithm using robots equipped with laser turrets.

With James McLurkin, we mounted lasers on robots and showed that even with only one joystick controller, it is possible to steer the robots to all attack a target.

Future directions

With Chris Ertel, we’ve just launched SwarmControl, an online game where players can steer a swarm of robots to complete challenges. The goal is to test several different control and visualization schemes for swarms of robots, and to gather quantitative data about which mechanisms help people work with these swarms most effectively. Secondarily, we’ve created a simple platform for publishing academic user-experiment projects online – why settle for sampling a handful of undergraduates? We’d like to advance the state of the art in experimentation.

SwarmControl, an online game where you control a robotic swarm and help answer research questions.

All results are available as CSV or JSON from the results page.  SwarmControl is an open-source project hosted at


Aaron Becker is a Postdoctoral Research Associate at Rice University's Multi-Robot Systems Lab.
