On April 8-9, Stanford Law School held the second annual robotics and law conference, We Robot. This year’s event focused on near-term policy issues in robotics and featured panels and papers by scholars, practitioners, and engineers on topics like intellectual property, tort liability, legal ethics, and privacy. The full program is here.
This post is part of Robohub’s We Robot coverage.
Robot Demo
Speaker: Ian Danforth
What was billed as a robot demonstration came paired with a talk by Ian Danforth on household robots.
After introducing the audience to his robotic companion, Danforth spoke about the current rise of personal robots. What we had been waiting for, he says, were factors like the underlying science (machine learning, neuroscience), computation, sufficient bandwidth, affordable components, a global market, societal readiness (i.e. for people to get over the assumption that all robots look like Rosie), and sufficient data (because robots require lots of data to capture human-like experiences). Now that all of the above are coming into existence, he estimates an explosion of home robotics within the next three years.
It starts small
It starts cute
It starts now
Why small and cute? Because of expectations, Danforth says. Designing household robots to be adorable creates tolerance for mistakes where the expectation would otherwise be flawless performance. We never expect our pets to act perfectly, he says, or small children. And when they don’t, sometimes we actually like it: it is an intrinsically enjoyable experience to try to teach something that is cute and small and makes mistakes. The “now” part stresses that when we talk about home robots, we are talking about existing technology. “This is not the future – this is today.”
Five years from now, should we want our children to have robotic pets, or Lassie? Danforth says it depends. “Is your kid allergic to Lassie? Do you mind cleaning up poop?” In the United States, 4 million cats and dogs are killed every year. For people who would like to be pet owners but can’t or shouldn’t be, artificial pets may be a viable alternative.
Someone asked whether we should give these robots “rights”. Danforth asserted that people will certainly want to. This could start at the level of company policy: abusing your robotic pet, for example, could cost you access to the service that enables it. But he thinks it will be a long time before the law recognizes any artificial entity as sufficiently complex to deserve legal protection.
When asked about data collection and privacy, Danforth said that, as a developer, he is aware of the large amounts of different data he is collecting, and that he is thinking about how to ensure people’s privacy (e.g. by encrypting video and audio streams). He sees two challenges to address: (1) how informed the end user is about what happens with which data, and (2) the technological responsibility of preventing unnecessary data retention and developing sufficient security around the data that is necessary. But he also expressed the hope that non-developers were thinking more deeply about these issues and would be able to help him both “be a radical innovator” and “not get sued.”
The discussion also covered expectation management, projection, artificial intelligence, character gender, and different methods of encoding personality.
At some point during Danforth’s talk, his robot somewhat impolitely fell asleep. The rest of the audience most certainly did not.
See all the We Robot coverage on Robohub →