Robohub.org
Podcast Episode 197

Multi-Agent Systems and Human-Swarm Interaction with Magnus Egerstedt

by Andrew Vaziri
11 December 2015






Transcript below.

In this episode, Andrew Vaziri interviews Magnus Egerstedt, Professor at Georgia Tech, about his research in swarm robotics and multi-agent systems. They discuss privacy and security concerns, as well as research into interfaces designed to enable a single operator to control large swarms of robots.

The video below shows some of the strategies used by Magnus’ lab.

Magnus Egerstedt


Magnus Egerstedt is Schlumberger Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, where he serves as Associate Chair for Research. He received an MSc in Engineering Physics and a PhD in Applied Mathematics from the Royal Institute of Technology, Stockholm, Sweden, and a BSc in Philosophy from Stockholm University. He was then a Postdoctoral Scholar at Harvard University. Dr. Egerstedt is the director of the Georgia Robotics and Intelligent Systems Laboratory (GRITS Lab), a Fellow of the IEEE, and a recipient of a number of research and teaching awards, including the Ragazzini Award from A2C2.


Transcript

Andrew: Welcome to the Robots Podcast. Can you introduce yourself to our listeners and tell us about your research focus?

Magnus: I’m Magnus Egerstedt and I’m a swarm robotics professor from Georgia Tech. I’m interested in finding out how to get large teams of robots together, in particular when the individual robots may not be all that talented. The question is: how can they make decisions so that beautiful, effective and exciting global patterns of behavior emerge?

Andrew: Is this limited to physical robots, or are there other systems where this can be applied?

Magnus: I’m mainly interested in robotics, but this question of distributed decision making is much broader. It could range from smart inverters on the smart grid, to computational units in video games, to biological systems. We have cells, we have birds, and we even have human societies; they’re all distributed systems, as we walk through life with limited information. And yet, we’re able to build anthills and human societies.

Andrew: Swarms of robots … distributed systems … these are very complicated things. How do you generally go about trying to model those?

Magnus: I always start by pretending that I’m a somewhat stupid robot with very limited information, and start by asking: “What would I do? How would I think about this?” After I’ve tried to do that, step two is almost always to look to nature, because nature is filled with examples of elegant solutions to our problems. Now, I’m not a biologist and I am by no means trying to mimic nature. I’m shamelessly stealing good ideas that social and behavioral biologists have had. Then, when I’m done stealing from nature, I sit down with pen and paper, work out the math, put it on the robots and find it doesn’t work, because it never works when you go from math to robots. You have to iterate on the actual robots a few times.

Andrew: When you’re working with these robots there are constraints that we don’t have in nature, in terms of the way they’re able to communicate with each other. Is it difficult to model that? What is the approach?

Magnus: When you actually start deploying large teams of robots there are some things that are just impossible to model.

Things happen that you didn’t foresee: robots bumping into each other, saturating the communication channels, weird occlusions, and the infrared sensors messing with each other. There are always things you cannot model. One of the great arts of swarm robotics is: how do you go from rudimentary models that are only somewhat relevant, to things that can be deployed on a large system? This transition is often tricky, but it’s also deeply satisfying. Swarm robotics is exciting because in the last ten or so years we’ve gone from a spotty, anecdotal understanding of how to structure these things, to large teams of robots on the ground, in the water, and in the air that can actually coordinate their actions and build interesting geometries.

We now understand certain things that we didn’t ten years ago. Now it’s common to see 25 robots coordinating actions on YouTube. Ten years ago this would have been unheard of. The basic functionality, how we build these geometries or get these global behaviors out, is something we’ve come to understand on both a practical and a mathematical level. Now there are a huge number of issues we don’t understand. Let’s say, for instance, one of the robots is not playing by the right rules, or let’s say it’s broken, or, even worse, it’s malicious. How do we identify this? How do we not let our robots be duped by these bad robots? There are a huge number of questions that we clearly don’t understand yet, but we do understand the basics of how to assemble these kinds of structures.
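To make that basic functionality concrete, here is a minimal sketch (an illustration added for this transcript, not code from the GRITS Lab) of the classic rendezvous rule in Python: each robot repeatedly takes a small step toward the average position of the neighbors it can currently sense, and with only that local rule the whole team gathers at a common point.

```python
import numpy as np

def rendezvous(positions, sensing_radius=2.0, step=0.1, iters=300):
    """Each robot nudges itself toward the mean of its visible neighbors."""
    x = np.asarray(positions, dtype=float)
    for _ in range(iters):
        new_x = x.copy()
        for i in range(len(x)):
            # Limited information: only robots within the sensing radius count.
            dists = np.linalg.norm(x - x[i], axis=1)
            nbrs = (dists > 0) & (dists < sensing_radius)
            if nbrs.any():
                # Local rule: move a small step toward the neighbor average.
                new_x[i] += step * (x[nbrs].mean(axis=0) - x[i])
        x = new_x
    return x

# 25 robots scattered at random gather without any central coordinator.
print(rendezvous(np.random.rand(25, 2) * 3.0).round(2))
```

With different weights or offsets, the same neighbor-averaging skeleton is the usual starting point for the formation and coverage behaviors discussed later in the interview.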

Andrew: I think that’s interesting you mentioned malicious actors potentially being in the system. I know you recently published a paper about differential privacy in multi-agent systems. What is differential privacy, and what is it good for?

Magnus: Let’s say you and I and other robots are trying to do something. But what we’re doing individually is really having our own cost or utility function that we’re trying to minimize. I may like to be where it’s light and you want to be positioned so that you’re not too close to flying robots, because you’re afraid of flying robots. We have to live in this space together, but we have our own objectives. Differential privacy is a way of hiding certain information by sprinkling noise onto what we do. For instance, I may move in a little bit of a noisy way so that you can’t figure out what I’m up to. We’re still respecting each other’s actions, but no one can infer what I am trying to do. The trick here is to insert the noise in a clever way so that we don’t end up doing erratic things. We’re achieving what we set out to do as individuals, but no one can tell what’s going on.

This idea of differential privacy came from the database literature. Let’s say we have a database of all our salaries. I add my salary to the database. If you know how many people are already in the database, and you know how it changed, then you can back out my salary. But if all of us sprinkle in a little bit of clever noise, then certain things about the salaries, like what a typical salary is, you can still get from the database, but you cannot back out any individual pieces of information. It’s the same idea with swarm robotics. You sprinkle noise on the motions to hide your intentions. This is the flip side of the malicious-behavior problem, where you want to be able to detect that a robot isn’t playing by the rules. They are two sides of the same coin. One is to hide your intentions, and the other is that you want to be able to figure out what bad robots are doing. Bad robot!
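Here is a loose illustration of the database version of the idea (a minimal sketch added for this transcript, not code from Egerstedt’s paper): each individual adds calibrated Laplace noise to their own entry, so aggregates like the mean stay useful while any single entry reveals little. In the swarm setting, the same kind of noise would be sprinkled on reported positions or control inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
true_salaries = rng.uniform(40_000, 120_000, size=1_000)  # hypothetical data

epsilon = 1.0          # privacy budget: smaller means more privacy, more noise
sensitivity = 80_000   # roughly the range any one salary can span

# Each person perturbs their own entry before it ever reaches the database.
noisy_salaries = true_salaries + rng.laplace(
    scale=sensitivity / epsilon, size=true_salaries.shape)

# Aggregates stay useful because the independent noise largely averages out...
print(round(true_salaries.mean()), round(noisy_salaries.mean()))
# ...but any single noisy entry says very little about the person behind it.
print(round(true_salaries[0]), round(noisy_salaries[0]))
```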

Andrew: I think a theme that we’ve all seen in our local big box stores is that everything wants to be smart these days: smart grid, smart homes. Is that something we should maybe take with a grain of salt, given the concern that someone could peer into these systems and learn things that maybe they shouldn’t know?

Magnus: Fortunately, yes. I think one thing in general when we talk about technology is that it’s not going to be decoupled from us; we’re going to be users of technology. We need to, in my mind, build our systems so that they respect us as individuals and maintain our privacy. People are concerned about self-driving cars. We’re not going to wake up tomorrow and be sitting in self-driving cars. This is sneaking up on us, whereby we get more and more autonomy in our cars, but people stay engaged. People are almost always going to be in the mix when it comes to distributed robotics systems. The same goes for distributed gadgets and the internet of things; people are going to be on the internet of things together with their things.

Andrew: Differential privacy could be one way to make the internet of things a little bit more private. You mentioned self-driving cars. I know that you’ve done research on the other side of the coin where malicious agents are actively trying to change or affect what the system does. It sounds like a plot from a spy movie. Could you tell us a little bit about controlling an enemy unmanned aerial vehicle by exploiting its own safety systems? How did that work?

Magnus: We were really interested in this question, as we’re about to see aerial drone delivery systems. Google and Amazon, if you believe the talk, are going to be delivering things by aerial drones. The question is then: is it possible to take over these systems, and to do so in such a way that the systems themselves do not know they’re being taken over? This is not a classic cyber security question; it’s a cyber-physical security question.

A lot of these systems, including the aerial drones we were playing with, have safeguards to keep them within reasonable physical behaviors. Basically it’s: here is the stuff that is within reason, and as long as we are within reason we’re going to be fine. Our question became: can we hide malicious intent inside the noise of what is considered a reasonable behavior, and yet get the systems to do completely the wrong things? I don’t want to say that we completely demonstrated this is the case, but we’ve certainly demonstrated examples where it is possible to hide and trick the system into thinking that it’s doing the right thing. We’re camouflaging the malicious trajectories inside reasonable and rational behavior.

Andrew: In this particular example, how did you inject that signal? What was the mechanism by which you influenced the aircraft?

Magnus: In my lab we’re not hackers. Since we designed the aircraft, we had access to the error system, or safety system, that detects when something is wrong. We then transmitted wirelessly over the network and injected new signals. If someone is really clever they could probably build something on the cyber security side to detect that this is going on. That was not what we were interested in. What we were interested in was to see if it is possible to hide the malicious takeover from the cyber-physical security system. This is similar to the idea behind Stuxnet, where the control systems of nuclear centrifuges were made to believe they were behaving correctly. Not because someone hacked in, but because the signals were made to look plausible to the security systems.

We actually drew a lot of inspiration from, and this is kind of amusing, dragonflies. Dragonflies will move in a particular way. Let’s say you’re a male dragonfly and I’m another male dragonfly and now we’re going to fight. I move so as to keep the sun in a straight line behind me, so you can’t tell where I am. No matter how you move, I move so that you only see the sun, and then BAM! I’m going to get you. Similarly, in control theory there’s a notion of an unobservable subspace: things the trajectory of the system can do that are hidden from the output. We exploited this by restricting the motion to these unobservable modes of the system.
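To illustrate the control-theoretic idea, here is a toy sketch (added for this transcript, not the attack from the paper): for a linear system with dynamics matrix A and output matrix C, the unobservable subspace is the null space of the observability matrix, and any deviation confined to that subspace leaves the measured output exactly unchanged, so a monitor watching only the output cannot see it.

```python
import numpy as np
from scipy.linalg import expm, null_space

# Toy system: the safety monitor only ever sees y = C x.
A = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., -1.]])
C = np.array([[1., 0., 0.]])
n = A.shape[0]

# Observability matrix [C; CA; CA^2]; its null space is the unobservable subspace.
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
hidden = null_space(O)          # directions the output can never reveal
print("unobservable direction:", hidden.ravel().round(2))

def sampled_output(x0, horizon=1.0, steps=50):
    """Output y(t) = C exp(A t) x0 sampled along an autonomous trajectory."""
    return np.array([C @ expm(A * horizon * k / steps) @ x0
                     for k in range(steps)]).ravel()

x_nominal = np.array([1.0, 0.5, 0.0])
x_attacked = x_nominal + 3.0 * hidden[:, 0]   # large deviation, hidden from y

# The monitor sees identical measurements for both trajectories.
print(np.allclose(sampled_output(x_nominal), sampled_output(x_attacked)))
```

A real attack would also have to be driven through the vehicle’s inputs, but the unobservable subspace is the mathematical hiding place the dragonfly analogy points to.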

Andrew: That’s very interesting. This adds a whole other layer. It’s not just cyber warfare; it’s not that you hacked in and changed the code running on the target. The code is exactly intact, as designed by its creator. You’ve changed what it observes and what it thinks it observes. That seems like a very hard thing to defend against in general: you have to take into account not only the cyber aspects but also all the ways things could physically interact. Is there a principled way to make sure that systems aren’t vulnerable in this fashion?

Magnus: We have been looking into this question, and at this point there isn’t a fully principled way, but here are ways people have done it. We looked at what we called motion probes. Every now and then the system performs a somewhat strange manoeuvre, basically a signature manoeuvre. It could make a small sidestep, or the quadcopter makes something that resembles a figure eight. This is unexpected; if the attacker isn’t expecting it, then you can actually excite these previously unobservable modes and say, “Hey, something is actually not behaving the way it should be!”

This is how we are thinking about it, and it also goes back to this question of detecting not malicious takeovers, but malicious agents. If we are swarm robots, we’re supposed to do something together. If I then make a figure eight, I know what you’re supposed to do in response, if you are one of my people. If you don’t respond correctly, I can detect that perhaps something is fishy here.
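A minimal sketch of that detection idea (a simplification added for this transcript, not the lab’s implementation): the leader executes a known probe, predicts how a cooperative follower running the agreed protocol should respond, and flags the follower if its observed response deviates beyond a noise tolerance.

```python
import numpy as np

def nominal_response(leader_traj, follower_start, gain=0.5):
    """Expected follower positions if it runs the agreed follow-the-leader rule."""
    pos, out = np.asarray(follower_start, dtype=float), []
    for leader_pos in leader_traj:
        pos = pos + gain * (leader_pos - pos)   # the protocol everyone should run
        out.append(pos.copy())
    return np.array(out)

def is_suspicious(leader_traj, observed, follower_start, noise_tol=0.2):
    """Compare the observed response to a probe with the protocol's prediction."""
    expected = nominal_response(leader_traj, follower_start)
    return float(np.max(np.linalg.norm(observed - expected, axis=1))) > noise_tol

# Probe: the leader traces a small figure-eight-like loop.
t = np.linspace(0, 2 * np.pi, 40)
probe = np.stack([0.3 * np.sin(2 * t), 0.3 * np.sin(t)], axis=1)
start = np.array([1.0, 0.0])

honest = nominal_response(probe, start) + 0.01 * np.random.randn(40, 2)
rogue = np.tile(start, (40, 1))              # ignores the probe entirely

print(is_suspicious(probe, honest, start))   # False: plays by the rules
print(is_suspicious(probe, rogue, start))    # True: something is fishy
```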

Andrew: I’d like to switch gears a little bit and ask about something that’s high risk, but in a different sense. I recall reading, maybe four years ago in a systems magazine, a little blurb in which you wrote about your research philosophy and research style. You mentioned liking to keep a few high-risk, high-payoff topics on the horizon. I just wanted to ask, what does that mean? What does a high-risk, high-payoff topic look like to you?

Magnus: The wonderful thing about being a professor is that you’re largely free to explore things you find interesting. I have to pay the bills and, more importantly, make sure my graduate students don’t starve. I need to do things that are five years out which people care about. But I also have the freedom to think about things I truly am interested in that are way out there. If they come to fruition they’re going to be important, but chances are they won’t. I always like to have a few of these questions that I think are fundamentally important and interesting down the line on the back burner.

Every now and then I return to them, so right now I have a few. One thing that I’m really fascinated by is this question of human-swarm interaction. Let’s say you’re a person with a joystick surrounded by a million robot mosquitoes and you want them to do something. What are the right ways to engage with truly large teams of things? That’s one thing that I’m fascinated by right now. Another thing that drives me absolutely nuts is that people now talk about heterogeneous multi-robot systems.

Andrew: What does that mean?

Magnus: It means different kinds of robots. In my lab, we have things on the ground and things in the air. But fundamentally, why is it good to be different? Why should I have fast and slow robots together? What is it that makes that better than just having a bunch of fast robots?

To me, this is an interesting question because then we can understand what kinds of robots we should deploy. If you have an earthquake and you want to send out robots to figure out what’s going on in the rubble, you can say the optimal thing to do is to have nine snake robots, 15 aerial robots and 24 ground robots, and here’s what they should be doing. To me this is fascinating. Again I look to nature, to ecosystems, which are highly diverse, and ask: why is that? How come hummingbirds live next to sloths? Is there something fundamentally good about that?

Andrew: I recall from some years ago you mentioned one of these high-risk areas was human-swarm interaction. How has it worked out over the past few years? Were there moments where it seemed dire?

Magnus: First of all, human-swarm interaction has become an acceptable thing to spend your time on, and there are other people engaged with it. I think we’ve understood some things: if you have small teams of robots, then an effective way of engaging with them is actually to grab hold of a leader and inject information through the leader. That works well up to maybe four or five robots, then it kind of collapses. The reason why is that people get bad at imagining what happens. Let’s say I have ten robots around me and I grab one and I shake it. How does that propagate through the ten robots? We just lose track.

For example, look at universities, militaries, big companies: we’re all hierarchically organized. I think the reason is that we don’t know what happens if we have 250 people to manage, all at the same time. I looked at sheep herding and found that you’re not controlling individual sheep. You’re pushing things at the boundary with herding dogs. Instead of controlling individuals, you’re exerting forces on the outskirts of the swarm. It turns out we can do some things like that quite well. I have worked with swarming tractors, so the question becomes: what does the farmer do at the sideline of the field with an iPad? Maybe one way of engaging is not to engage with the tractors at all, but to engage with the map and say, “Here are regions of interest. Now you tractors, go figure it out!”

Andrew: You did some research with a leader-follower system, and in that case it was a haptic interface. Could you tell us a little bit about that?

Magnus: Yeah, this goes back to the question that it’s very unclear to people what it means to control large teams of things. We needed some way of making this information concrete to a person.

We basically computed something that we called the manipulability, which is a measure of how easy it is to move the swarm in particular directions, and this we could compute. Then we used that to generate forces. The idea is that if you try to push the swarm in a direction in which it is hard to move, then you experience large forces. It turned out that this was fairly useful. We did user studies showing that tasks that were very hard to solve with only a joystick could be solved more effectively with a haptic joystick. This goes back to the question of what information you actually need about a large swarm. In this case it was just force. If you want them to do more elaborate things you probably need more information. But if in general you just want to move the swarm from point A to point B, then the finding in this study is that haptic feedback serves a strong and positive purpose.
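As a rough sketch of the rendering loop (a simplification added for this transcript; the study’s actual manipulability measure comes from the swarm’s formation model, and the matrix below is a hypothetical stand-in): manipulability says how easily the swarm moves along a commanded direction, and the joystick force is made to grow as that manipulability shrinks.

```python
import numpy as np

def manipulability(M, direction):
    """How easy it is to move the swarm along `direction` (larger = easier),
    given a positive-definite manipulability matrix M from the swarm model."""
    d = np.asarray(direction, dtype=float)
    d = d / (np.linalg.norm(d) + 1e-9)
    return float(d @ M @ d)

def haptic_feedback(M, commanded_velocity, gain=1.0):
    """Resistive force rendered on the joystick: pushing the swarm along a
    hard-to-move direction (low manipulability) produces a large force."""
    v = np.asarray(commanded_velocity, dtype=float)
    if np.linalg.norm(v) < 1e-9:
        return np.zeros_like(v)
    return -gain * v / manipulability(M, v)

# Hypothetical matrix for a long, skinny formation aligned with the x axis:
# hard to drag along the line (x), easy to drag sideways (y).
M = np.array([[0.2, 0.0],
              [0.0, 1.5]])

print(haptic_feedback(M, [1.0, 0.0]))   # large opposing force along the line
print(haptic_feedback(M, [0.0, 1.0]))   # small opposing force sideways
```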

Andrew: Could you paint us a picture of what the experimental setup looked like? What were the robots like? What were the users looking at and what were they holding?

Magnus: The user was holding a Phantom Omni; the brand doesn’t matter. It was a haptic interface; it looks like a little pen connected to an articulated arm. You’re moving it up, to the side, or in, and you’re experiencing a force depending on what feedback the force-generating device is giving you. That was what the user was holding. Then the users were in the lab and we had 10 robots. The robots were arranged in a particular formation. What the user could do is select which robot he or she wanted to use as the leader and then drag it in various directions.

You can imagine having a long and skinny formation, with the leader robot embedded in the middle of it. If you move along the direction of the long and skinny thing, then that’s going to be hard! You have to move all these things out of the way and you experience lots of force. It has what’s called low manipulability. If you move away from the long and skinny direction, then it’s easier for the formation to melt into a shape that follows the leader. The user was looking at the 10 robots, did not know how the robots were organized, and basically tried to get them to go from one point to another point while navigating a little bit of an obstacle course; no robots were allowed to actually get stuck behind an obstacle.

Andrew: How did you assess whether it was effective, and whether the person felt it was a good interface or not?

Magnus: Yeah, that is always the question when you do user studies: how do you know if it’s good or not? We used three metrics. One, we just timed them: how long does it take? Two, we looked at how much the user had to work; we summed up the control signals that were sent to the swarm. And third, we asked them! Basically we asked: did you find this annoying? There are probably more scientific ways of asking that question, but that’s basically what it boils down to: what level of stress, engagement, or discomfort did you feel when you were doing this?

Andrew: A more advanced interface might be something you alluded to earlier: the farmer by the field with a tablet, and algorithms suited to that. Could you tell us a little bit about that?

Magnus: The backdrop to this is rather interesting. Tractors now are really advanced pieces of machinery. They can largely drive themselves: put in the GPS path and they can, more or less, follow it by themselves. Moreover, cornfields are really not that complicated; they’re nice and straight. In terms of the types of environments you’re encountering, they’re not that complicated. The manufacturing floor is way more complicated, or my daughter’s bedroom: way more complicated. The farm field is not that complicated. So this tractor manufacturer, knowing all of this, came to me and asked: how do we make these autonomous, which means we don’t need a driver in the tractor?

That means we can make the tractors smaller, and that’s good, maybe because it has less environmental impact and you don’t have to go back and re-till afterwards. These are all good reasons why small is better, except that a single small tractor is less effective. So instead of having one small tractor, let’s have ten small tractors. Then the question is: how do you actually engage with these? They literally came and asked, “We want to give the farmer an iPad; what should the farmer do?” That was the thing we got rather excited about.

Andrew: What did you end up making? What algorithm, and what kind of user interface or display?

Magnus: We ended up going with something that’s fundamentally a good idea, which is that the farmer should actually not deal with the tractors. What the farmer should be dealing with are higher-level tasks. In this case what we had is a map of the world, and the farmer could outline things that were interesting, or things that were urgent, or where there was lots of moisture, or not a lot of moisture. We used what we call density functions to describe areas of interest. Then the robots run what’s called a dynamic coverage algorithm. That means the robots disperse themselves in such a way that they cover the areas of interest, with each robot in charge of roughly the same amount of stuff. Technically, this is called a Voronoi partition. The robots computed a Voronoi partition.

Then, inside their cells of that partition, they did their driving around. What the farmer could do is move their finger around, and on the fly the robots dynamically reconfigured themselves. We have taken this further, and by now it has nothing to do with tractors anymore, but we have this general way of painting event densities on areas of interest and then having the robots dynamically respond to that in a distributed manner.
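Here is a minimal sketch of that pipeline (added for this transcript, assuming the field is discretized into a grid): the user paints a density over the map, each robot claims the grid points nearest to it (its Voronoi cell), computes the density-weighted centroid of its cell, and moves toward it. Repainting the density and re-running the loop is what lets the robots reconfigure on the fly.

```python
import numpy as np

def coverage_step(robots, grid_pts, density, gain=0.3):
    """One round of density-weighted coverage (a Lloyd's-algorithm-style step)."""
    # Voronoi assignment: each grid point belongs to its nearest robot.
    d2 = ((grid_pts[:, None, :] - robots[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    new_robots = robots.copy()
    for i in range(len(robots)):
        cell = owner == i
        w = density[cell]
        if w.sum() > 0:
            centroid = (grid_pts[cell] * w[:, None]).sum(axis=0) / w.sum()
            new_robots[i] += gain * (centroid - robots[i])  # move toward centroid
    return new_robots

# Discretized field, and a user-"painted" area of interest in one corner.
xs, ys = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
density = np.exp(-8 * ((grid[:, 0] - 0.8) ** 2 + (grid[:, 1] - 0.8) ** 2))

robots = np.random.rand(5, 2)
for _ in range(50):
    robots = coverage_step(robots, grid, density)
print(robots.round(2))   # the team has drifted toward the high-interest region
```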

Andrew: Have you received any feedback from either farmers or researchers about some of these advancements?

Magnus: So I have! I did a photo shoot where I was wearing a weird baseball cap and rubber boots in the lab. Don’t make fun of me; I pretended to be a robot farmer. But seriously, people are excited about it. The interesting thing about swarm robotics is that it’s up and coming, and these kinds of environmental surveillance tasks, be it farming or just trying to figure out what’s going on in an area, are a natural fit. I had a NASA project where we were monitoring effects of climate change down in Antarctica. We actually never ended up going to Antarctica, unfortunately.

There, you deploy robots for fairly long periods of time and they have to just respond to what’s going on in the environment, and this way of painting the world at the user level with areas of interest is a very natural thing. Now, if you’re flying two drones, that’s not how to do it; then you need to worry about the individual aircraft. But if you’re just interested in covering areas, then this is a really effective way of doing it. We’ve had quite a lot of funding from the Air Force Office of Scientific Research to pursue this further, because in a way it’s a very simple way of describing missions that robots can immediately execute.

Andrew: In closing, I wanted to ask you a philosophical question. Would it be fair to say that you’re a fan of soccer?

Magnus: Ha! Yes, it would be fair to say that I’m a fan of soccer. In fact as soon as we’re done with this I’m going to go put on my soccer gear and go play a soccer game for old men; you have to be over 40 to play in that league.

Andrew: You must be aware of RoboCup’s robotic soccer league. Currently, they’re pitting teams of robots against each other, and their goal is to field a team of robots that can beat the World Cup champion human team by 2050. I think that shows a high level of optimism about the future of multi-agent teams, and perhaps of swarm robotics as well. What do you see the future as? How will we get there?

Magnus: Robots are already better than people at many things. Robots are really good at not getting bored. They’re really good at responding quickly to things. They’re really good at crunching big numbers. I think if you ran a robotic 100-meter dash, we’re not far away from having bipedal robots that are faster than Usain Bolt, the fastest runner in the world. But as a soccer player, if you look at the artistry of Messi, we’re very far away from that. I think part of that is actually not an algorithmic question. Where we’re lagging right now, a lot of the time, is actuation and power supplies. The humanoids of today that are not tethered have to schlep very large battery packs with them and have, if you’re lucky, a foot with three degrees of freedom. Messi’s foot has many more degrees of freedom, and he’s small and agile.

Andrew: Perhaps the World Cup is a mechatronics challenge. What to you would be the coming-of-age moment for swarm robotics, where you would say: that’s the goal?

Magnus: That’s actually a really good question. A big part of what’s driving me is purely intellectual curiosity. I think there are deep scientific questions about how you get truly emergent behaviors out of simple local rules.

Andrew: What do you mean by an emergent behavior?

Magnus: A simple emergent behavior is how you send out robots to cover an area when all they’re doing is seeing, very locally, what’s going on. A higher-level question is that you have these little neurons, and all they’re doing is sending spike trains around, and, all of a sudden, you have conscious human beings. That’s a very high-level emergent behavior. Understanding that is, to me, a really interesting question, and it has repercussions way beyond robotics. The big question is not: how do I get 200 robots to mow my lawn more effectively? Even though I do think we’re going to have teams of robotic lawn mowers, small little guys, which are more agile than the big robotic lawn mowers that are out there now.

But the big question is to scientifically understand how we make these distributed decisions. Along the way we’re going to have robots on Mars. We’re going to have swarms of robots on Mars. We’re going to have robot helpers in hospitals and on the manufacturing floor. These things are coming. Ultimately, computers are useful tools, but they’re only useful in as much as we do useful things with them. I think people tend to think of robots as, “Oh my God, we’re going to have an army of terminators!” I hope not. I hope we’re going to use robots to make our lives better, and use them to find energy, food, and to travel to remote locations around the globe. I hope that we, as people, will be able to use them as tools for good as opposed to tools for evil … like an army of terminators.

Andrew: A fascinating discussion, thank you for joining us!

Magnus: You bet, thank you for having me!

This audio interview was transcribed and edited for clarity. 


