
DARPA’s Gill Pratt on Google’s robotics investments

19 March 2014



DARPA LS3 - Boston Dynamics

When Google bought Boston Dynamics last December, the news made headlines, but it was not the first time the Internet giant had invested in DARPA-funded robotics. As part of Robohub’s Big Deals series, we asked Gill Pratt, Program Manager of DARPA’s Defense Sciences Office, to shed some light on what DARPA thinks about Google’s robotics acquisitions, and what it might mean for the robotics and open source communities.

Congratulations, Gill, on the DARPA Robotics Trials – what was your personal assessment of the event?

I was very pleased with how well the teams performed, and in particular that the hardware for the majority of the teams worked so reliably. In general, the teams did slightly better than we had expected and this was probably the result of a lot of very careful preparation. The infrastructure that we set up also worked reliably in the trials. Still, the DRC Trials were really just a beginning and showed us the potential of what can be done in disaster scenarios with robots.

Cloud computing was an important facet of the DARPA Robotics Challenge and especially the Virtual Robotics Challenge … How do you see cloud computing helping the development of robots?

The importance of cloud computing for robotics falls into two different categories. The first has to do with simulation environments. We used cloud computing for the first time with the Virtual Robotics Challenge, which we held back in June as a precursor to the DRC Trials, to do real-time simulation of tasks in unstructured environments. We rented a whole lot of cloud computing space, we ran a very good simulator developed for DARPA by the Open Source Robotics Foundation, and we adjusted the latency – the time delay – so that it would be the same for all the teams, even though they were located in different parts of the world. It was a tremendous enabler to run a real-time competition based on simulation that also involved human-robot collaboration. Doing so required that the simulation run in real time. That was a big first and we felt it was very successful.
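As a rough illustration of one piece of such a setup, the sketch below shows how a fixed latency target can be enforced in software: each message is held until a constant delay has elapsed since it was sent, so fast and slow network paths become indistinguishable to the competitors. The class name, target value, and message format here are illustrative, not details of the actual VRC infrastructure.

```python
import time
from collections import deque

TARGET_LATENCY = 0.500  # seconds; an illustrative value, not the actual DRC figure


class LatencyEqualizer:
    """Pad each message's observed network delay up to a fixed target,
    so every team experiences the same total latency."""

    def __init__(self, target=TARGET_LATENCY):
        self.target = target
        self.queue = deque()  # (release_time, message), in arrival order

    def submit(self, message, sent_at):
        # sent_at: timestamp the sender stamped on the message at origin.
        # The message is released only once `target` seconds have elapsed
        # since it was sent, regardless of how fast the network was.
        self.queue.append((sent_at + self.target, message))

    def poll(self, now=None):
        """Return all messages whose padded delay has elapsed."""
        now = time.monotonic() if now is None else now
        ready = []
        while self.queue and self.queue[0][0] <= now:
            ready.append(self.queue.popleft()[1])
        return ready


# Example: a message that arrived quickly is still held until the target delay.
eq = LatencyEqualizer()
t0 = time.monotonic()
eq.submit("move_arm", sent_at=t0)
print(eq.poll(now=t0 + 0.1))  # [] -- held back, network was "too fast"
print(eq.poll(now=t0 + 0.6))  # ['move_arm'] -- released after padding
```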

Another important aspect of cloud computing is something that we have not used to date, but I think has a lot of potential for the future: the ability to have a robot’s computing and data hosted remotely. There’s a lot of potential for this, in particular for doing perception. Forty percent of the human brain is used for perception, and perception is one of the most difficult tasks for an autonomous robot. It’s very difficult to fit a computer with the size, weight, and power that you need to achieve really good perception onto a robot: the computer just gets too large, it consumes too much power, and it weighs too much. However, if you have access to cloud resources with lots of data and lots of computing cycles, they can help move that burden off of the physical robot.
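To make the offloading idea concrete, here is a minimal sketch in which a robot ships a camera frame to a remote perception service and gets structured detections back. The endpoint URL, request shape, and response fields are all hypothetical; this is not any actual DARPA or Google API, just the general pattern.

```python
import requests  # third-party HTTP client: pip install requests

CLOUD_URL = "https://example.com/perception/detect"  # hypothetical endpoint


def detect_objects(image_bytes, timeout_s=2.0):
    """Send a camera frame to the cloud and return a list of detections.

    Offloading keeps the heavy perception model off the robot, at the
    cost of network latency and a dependency on connectivity."""
    resp = requests.post(
        CLOUD_URL,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=timeout_s,
    )
    resp.raise_for_status()
    # Assumed response shape, e.g. [{"label": "door", "score": 0.92}]
    return resp.json()["detections"]
```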

Perhaps most exciting for the future of cloud computing in robotics is that when one robot learns how to perceive something, or learns how to do a particular task, that learning can be instantly shared with other robots. This sharing could have a catalytic effect on the capabilities of robots, particularly in structured environments.
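A minimal sketch of that sharing idea, assuming the fleet has access to a common store (here an in-memory dict stands in for a cloud database): one robot publishes a labeled example, and any other robot can classify a new observation against everything the fleet has learned, using a simple nearest-neighbor lookup.

```python
# All names here are illustrative; a real system would use a cloud
# database and learned feature embeddings rather than raw lists.

shared_store = {}  # label -> list of feature vectors, shared by the fleet


def publish_example(label, feature_vector):
    """Robot A learned what `label` looks like; make it available to all."""
    shared_store.setdefault(label, []).append(feature_vector)


def nearest_label(feature_vector):
    """Robot B classifies a new observation against the fleet's knowledge."""
    best_label, best_dist = None, float("inf")
    for label, examples in shared_store.items():
        for ex in examples:
            dist = sum((a - b) ** 2 for a, b in zip(feature_vector, ex))
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label


publish_example("door_handle", [0.1, 0.8, 0.3])
publish_example("valve", [0.9, 0.2, 0.5])
print(nearest_label([0.15, 0.75, 0.35]))  # -> 'door_handle'
```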

This is really an up-and-coming field. How likely are we to see commercial interest come out of cloud computing for robotics?

If you look at the research being done, you see a lot of possibilities with cloud computing for robots. In the commercial world, I think we are going to see it applied first to structured environments where there are lots of human artifacts — things like doors, stairs, furniture, tools — and where a large database of stored examples of these artifacts can help with the recognition problem.

The Atlas robot, created by Boston Dynamics and DARPA, was used by several teams in the DRC Trials. Source: DARPA


This is an interview for our Big Deals series, so we’re wondering, what does DARPA think about Google buying up some of the technologies that DARPA has spent years building? We’re thinking of the cars, the humanoids …

We are thrilled to see commercial interest. It’s one of the signs of success for the investments that we have made in future technology.

We think that it takes commercial investment to drive down the cost of technology. A very well-known example is the cellphone. Cellphones have microprocessors in them whose development began many years ago with investments by DARPA’s Microsystems Technology Office and others. Cellphones have inertial measurement units to figure out the tilt of the phone. They have displays; they have GPS receivers; they have radio devices that work at low power. I can point to any number of DARPA investments that helped start those technologies, and of course cellphones now talk to the Internet, which is probably the best known output of all of DARPA’s investments. But if the Department of Defense wanted to produce a cellphone without the commercial world having picked it up and turned it into a product that billions of people use, it would cost many orders of magnitude more than it does now. It’s really because of the commercial world that we’ve seen the price of cellphones go down.

But DARPA is not only interested in seeding technologies with these early investments; we are also committed to solving real problems for national security. In fact, we have a variety of programs that use cellphones. One of these is called TransApps, where soldiers use smart phones to plan and carry out missions and layer and share data between teams. This is an example where investment by the commercial world made possible a national security capability that otherwise might not be feasible because of excessive cost.

By analogy, what’s going on in robotics is that we’ve seen some interest in the commercial world in taking this to the next step. And they have resources far beyond what DARPA has to further develop a technology and, most importantly, to drive down cost.

So these big deals are a good thing for DARPA?

Yes, they are a wonderful thing for DARPA and, more broadly, for our nation.

Do you think it’s a good thing for the field of robotics in general?

Absolutely, but I understand why there’s some anxiety. It’s true that, for a very short time, some talented people in the field will be working for a commercial firm rather than for DARPA. However, that effect is transient and very small compared to the much more important thing, which is that these corporate investments both accelerate the field and show that it is going to be real.

This anxiety is a lot like the kind you get when your kids leave home. It’s sad, and for a while you’re going to miss them, but wait a while and then the grandkids come. The important thing is for robotics to become a field that is not only about prototypes or lab demonstrations, but a field where significant resources are invested to make products that really improve people’s lives. We saw it with the Roomba vacuum cleaner, but we want to go beyond that. These kinds of commercial investments will make the field far more attractive to startups and to students who are planning their careers. The net effect is strongly positive.

How do you think this corporate interest in robotics will impact the open source community?

Open source is very important. It allows us to develop pre-competitive catalysts to help the whole field move forward.

In the integrated circuits world it was recognized that you couldn’t build experimental integrated circuits as easily as you could wire up a circuit with discrete components like capacitors and resistors and transistors that you solder by hand. As a result, people working in the area at that time needed a simulator that let them experiment with how a chip would work without actually having to make the chip. The idea was to do the design and debug cycle on the computer, in simulation, instead of on the bench: when you were done you could press a button and have confidence that the chip would actually work. The open-source simulator SPICE filled that niche.

The very same kind of thing is true in the robotics field. The key is to recognize that simulation is really a pre-competitive technology now. We don’t need to have proprietary closed simulation systems and keep reinventing the simulator over and over again. We’ve done it enough times now to know that it would be useful to apply open source. In the same way, I think that an operating system is pre-competitive – it has been done enough times that a new company should not try to distinguish itself based on a proprietary version of an OS.

Rather, let’s just move on to the next thing where the real innovation is going to occur. DARPA funded the Open Source Robotics Foundation to develop an open source simulator for the DARPA Robotics Challenge because we think that it will catalyze the whole field to everyone’s advantage.

With the purchase of Boston Dynamics, some headlines were saying that Google bought military robotics. Is this an accurate assessment?

I think it’s very inaccurate. It’s based on the confusion – and the conflation – of two aspects of robotics: remote bodies and remote brains. A robot allows you to have a mechanical body take the place of a human being; for example, a robot can work in a dangerous environment while a human supervisor stays in a safe place. A robot also allows you to perform some of the functions of the human brain autonomously in a remote location. When you conflate bodies and brains, and say that there is only one idea here, it’s like saying, “The robot looks like a person, so it must be as intelligent as a person, and therefore I should be scared of all robots, particularly those funded by the Department of Defense.”

All of the robots that I’m aware of, except for the ones that actually have weapons or specialized sensors on them, are generic. They are neither military nor commercial. They are just robots that have mobility and perhaps some manipulation, but they are not classed as one type or the other. And they are all almost completely empty-headed: their bodies may look like ours, but their heads are almost totally empty.

In terms of autonomy, the technology has such a long way to go that the fears that have been generated are way out of proportion to the state of the art. At the DRC Trials, for example, we had people supervising the robots in pretty tight loops, telling them what to do. While autonomy may be able to handle very simple environments, close human supervision is going to be needed in unstructured environments for some time.

What do you say to people who are uncomfortable with the use of military robots in a commercial context?

I think there are legitimate concerns, but from a rational point of view it’s important to distinguish between a weapon, which none of the robots we are talking about are, and a machine that makes us feel scared because it looks like us but it’s hard to understand exactly how it works. Of course, science fiction has stoked that fear with movies like Terminator and so forth, but that reaction actually goes back a long, long way to stories like Frankenstein – anything that sort of looks like a person but is hard to understand will generate fear. It’s an emotional reaction. However, from a rational perspective, if you consider how little autonomy these real machines actually have, you realize that the worry is way out of proportion to reality.

What are some of the benefits that could come out of combining Google’s expertise in data with hardware from Boston Dynamics?

I don’t know the specific applications that Google is thinking of – it’s their prerogative to discuss them or not, and they haven’t told DARPA.

Generally speaking, when you think about data and robotics, there are a lot of exciting things that could happen. For example, you could do perception better by using a lot of data and sophisticated search. This is not necessarily the kind of search we think of when we type words into our browsers, but the kind that involves images, where a robot could say, “Okay, I see something and because of many prior examples I know what to do,” and share that data in a useful way.

Would voice commands play into this dynamic, in addition to visual search? Would it make it easier for humans to interface with robots?

I’m not an expert on voice, so I don’t want to speculate, but I know that you can train better when you have a lot of data. Voice and natural language recognition have the potential to help with planning and perception, and I think they could also facilitate more intuitive interfaces for sharing information between robots and humans.

Team SCHAFT raises the arms of its S-One robot in victory after successfully completing the Climb Industrial Ladder task at the DRC Trials. SCHAFT won that task and three others, and scored the most points of any team at the event. Source: DARPA.


What new robotic technologies will DARPA be pushing that might appeal to big companies that are not traditionally robotics focused?

We’re really interested in cooperation directly between robots and between human beings and machines. We are also interested in dealing with dynamic environments. What I would like to do for the future of the DRC, and what I think that DARPA is going to push even beyond that, is to answer questions like “What if the communication is intermittent?” and “What if the environment is beyond a set of familiar things that I’ve seen before?” … What do we do if we’re actually outdoors? How do we handle this piece of vegetation that’s in front of us? Do we go through it or around it? If we go through it, do we move the branches out of the way?

We also still have a lot of research to do to figure out how to apply cloud computing when the environment is unstructured, like it would be in disasters or outdoor scenarios. And if the communications are intermittent, we have to figure out how to cache enough of the intelligence on the robot itself so that it can continue to do what it needs to do even when it’s disconnected from the cloud. Of course, disconnected operation may be at a reduced level of effectiveness, but we don’t want the robot to stop altogether when communications go down for a little while. We want the robot to have at least some ability to complete the tasks it was given to do.
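A minimal sketch of that fallback pattern, reusing the hypothetical cloud endpoint from the earlier sketch: try the remote service with a short timeout, and if the link is down, fall back to a small cached onboard model so the robot degrades gracefully rather than stopping. All names here are illustrative.

```python
import requests  # third-party HTTP client: pip install requests

CLOUD_URL = "https://example.com/perception/detect"  # hypothetical endpoint


def onboard_detect(image_bytes):
    """Stand-in for a smaller, less accurate model cached on the robot."""
    return [{"label": "obstacle", "score": 0.5}]


def robust_detect(image_bytes):
    """Prefer the cloud, but keep working (at reduced effectiveness)
    when communications drop, rather than stopping altogether."""
    try:
        resp = requests.post(CLOUD_URL, data=image_bytes, timeout=0.5)
        resp.raise_for_status()
        return resp.json()["detections"]
    except requests.RequestException:
        # Link is down or slow: fall back to the cached onboard model.
        return onboard_detect(image_bytes)
```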

How do you actually put behavior in the cloud? How do you take a data-driven approach to behavior? I think that’s very much an unknown right now. Data-driven approaches to perception are easier to understand, but we are going to try to take the next step and look at behavior as well.

 



Gill Pratt is a Program Manager in the Defense Sciences Office at DARPA.

Robohub Editors





