Is surveillance the new business model for consumer robotics?

by Mark Stephen Meadows
06 May 2014




A discussion with Ryan Calo on Google, your personal data, and the consumer robotics market

Can someone tell me: What’s Google doing, making robots? Or at least: What is it doing, making such acquisitions?

As you probably heard, beginning in December of last year they bought some pretty heavy companies (house specialties in parentheses):

  • Schaft (Navigation)
  • Industrial Perception (Vision)
  • Redwood Robotics (Manipulation)
  • Meka Robotics (People-safe designs)
  • Holomni (Navigation)
  • Bot & Dolly (Camera tracking)
  • Boston Dynamics (Navigation)
  • DeepMind (General AI & abstracted reasoning)

This list covers AI, robotic vision, and a famous four-legged ‘bot that can break into a sprint fast enough to beat Usain Bolt. So it’s safe to say that the focus is on navigation and vision. This is nice for the roboticists who have been waiting for market validation in the consumer space; Google is providing that. They’re also providing us with a fleet of robots that can navigate, see, and use the network.

What’s Google’s motivation?

Presumably, since Google is a publicly traded company, all of this is intended to make it more money.

They did a pretty good job last year. Sitting between Apple and Coca-Cola, they’re the second most valuable brand in the world. Paid clicks were up approximately 31 percent over 2012, Google employed 47,756 full-time people as of year end, and 2013 advertising revenues came to approximately $50 billion.

Google exchanges personal information for online services. Expanding its horizons beyond online tools, a few years ago Google gave Android away to handset makers for the same reason it gave free web search and email away to you and me: to collect personal information and display other information back. This is Google’s job, and their profits show that they are very good at it.

It makes sense, then, that Andy Rubin, who headed up the Android effort, is now heading up Google’s robotics effort. After all, robots – like smartphones – are not just a platform for providing products and services; they are also a platform for collecting data.

Personal information as currency

Nothing is free, so if you aren’t paying cash, you’re paying with your personal information. And that information, in 2013, was worth $50B … so it stands to reason that Google’s motive for getting into robotics is that it offers yet another means of harvesting more information about you. Google will be using robots to collect and display personal information, just as Android did, and before that, email and web search did. If Rubin has moved on from phones to robots, it’s probably because he thinks robots offer the greater opportunity.

There are three reasons why this makes sense:

  1. Robots will give Google access to the physical world at a reduced cost. Like Apple, they are seeking to produce advanced hardware with companies that have proven track records in manufacturing, and Google is evidently partnering with Foxconn to make this happen.
  2. Robots will allow Google access to home and business information with unprecedented fidelity. By providing us with robots that are cheap (if not free; think of how smartphones are subsidized), Google can eventually build data conduits into our homes that will let the company know what we read, what dog food we buy, what we wear, and when we turn on the Nest thermostat (and, yes, the business model is the same for Nest as it is for robotics: more consumer data).
  3. As with an open source OS for smartphones, by producing an open source OS for robots, Google will have direct access to all of the information that Android offered the company.

A couple of weeks ago, while attending RoboMadness at my old stomping grounds of SRI, the CTO and EVP of iRobot, Dr. Paolo Pirjanian, made a curious statement on stage. He outlined a ‘holy trinity’ for robotics: vision, navigation, and the cloud, and pointed out that “navigation is key” because it builds the environment maps that make problem solving possible. A 3D environment map, for example, allows objects to be indexed, and it supports conversational interfaces for voice and gesture control. If a book is out of place and you want your robot to put it back, then the idea of “back” needs to be indexed somewhere. That somewhere is the cloud, so this information will end up attached to the cloud.
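To make that concrete, here is a minimal sketch, entirely my own illustration with hypothetical names and fields rather than anything iRobot or Google has published, of how a cloud-hosted object index over a 3D environment map could answer the “put it back” question:

# A minimal, hypothetical sketch of a cloud-hosted object index built on top
# of a 3D environment map. Nothing here is a real Google or iRobot API; the
# class names, fields, and storage model are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Pose:
    """A position (meters) and heading (radians) in the home's map frame."""
    x: float
    y: float
    z: float
    yaw: float = 0.0


@dataclass
class IndexedObject:
    """An object the robot has recognized, with where it is and where it belongs."""
    object_id: str   # e.g. "book:moby-dick" (hypothetical identifier scheme)
    label: str       # what the vision system recognized, e.g. "book"
    last_seen: Pose  # where the robot last observed the object
    home: Pose       # where the object belongs: the indexed idea of "back"


class EnvironmentIndex:
    """The per-home object index that, in this scenario, lives in the cloud."""

    def __init__(self) -> None:
        self._objects = {}  # object_id -> IndexedObject

    def update_observation(self, obj: IndexedObject) -> None:
        """Record or refresh what the robot just saw."""
        self._objects[obj.object_id] = obj

    def where_does_it_go(self, object_id: str) -> Pose:
        """Answer the 'put it back' query by returning the indexed home pose."""
        return self._objects[object_id].home


# Usage: the robot spots a misplaced book and asks the cloud where "back" is.
index = EnvironmentIndex()
index.update_observation(IndexedObject(
    object_id="book:moby-dick",
    label="book",
    last_seen=Pose(x=2.1, y=0.4, z=0.8),
    home=Pose(x=5.0, y=1.2, z=1.5),
))
print(index.where_does_it_go("book:moby-dick"))  # Pose(x=5.0, y=1.2, z=1.5, yaw=0.0)

Note that even this toy index amounts to a room-by-room inventory of what is in your home, held on someone else’s servers, which is exactly what the question in this article’s headline is about.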

All that personal data …  all those clouds …

Here we have an example of mapping, navigation and cloud robotics, which, if I try to look at it objectively, sounds a lot like what Google appears to be doing.

Is surveillance the new business model for consumer robotics?

In my desperation to get to the bottom of this I called up Ryan Calo, for he once wrote that “robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument.”

Professor Calo researches the intersection of law, robotics, and the Internet. His work on drones, driverless cars, privacy, and other topics has appeared in numerous publications, and he serves on a number of advisory boards, including those of the Electronic Privacy Information Center (EPIC) and the Electronic Frontier Foundation (EFF).

I threw these ideas at him and he took issue with the angle. Here was Calo’s response:

The frame is right, though I would quibble with characterizations of profit – there is a potential for win-win and it’s not necessarily a transgression of privacy. I would love that.  In fact I kind of wish that Google and Facebook would consider creating a robot platform for the home. The robotics industry really needs a personal robot in much the same way as a personal computer. We need something that is very versatile – something we could run all kinds of different code on – a platform or PC that is open to 3rd party innovation.

Then what are these companies that collect our data, like Google or Facebook, up to?

They want to bring in the Internet to more places – to where it is hard to bring it.  Amazon wants robots that can shave seconds, or fractions of a second, in the time it takes to get a package from the warehouse. This is how they’re thinking of using robots.

So, in other words, these companies are so far leaving a lot on the table. It may be that, cynically, they end up building the platform that personal robotics needs and that their motivation is to get a look inside the home, but that doesn’t seem to be the case. I understand there are things like Nest that give some perspective on the home, so it is a natural fit for Google’s other enterprises. But it is a closed platform that does one thing.

Isn’t that why Google has traditionally offered more services? To get more user information?

No, it’s to get more users. Google doesn’t just rely on users, but also on inventory – it needs a place to display those ads. They want more screens. They created email to bring more people into the fold. But I don’t think Google does everything they do just to get more information. They want to go to advertisers to say, “We have X users” and that increases the value.

You could explain a lot by saying, “They think that smart objects of various kinds are the future and they want to use all these new screens.” Are they building driverless cars to collect personal information? Well, they’ve been doing that with people. Maybe they want drones to fly around for the same reason. But Google appreciates that contemporary life is shifting. Once you have driverless cars, the screen can be as big as you want, so Google can own that screen. If everyone is being driven by cars, maybe Google will have the credible map, so that people have to build on that infrastructure. They want to be the ecosystem.

Mr. Calo and I have more discussions ahead, but for now, like all good underlings, we’ll just have to keep an eye on the surveillance system to see when it wakes up.

 



Mark Stephen Meadows is President of BOTanic, a company that provides natural language interfaces for conversational avatars, robots, IoT appliances, and connected systems.