The 1953 New Yorker cartoon that started the “Take me to your leader” meme showed two aliens, newly arrived on Earth, asking a donkey to, in effect, give them policy guidance. This is exactly what our ‘brave new’ human-robot world looks like. Complex technologies can have profound and subtle impacts on the world, and robotics is not only a multidisciplinary field, but one that will have an impact on every area of life. Where do we go for policy?
Ryan Calo’s recent report for the Brookings Institution, “The Case for a Federal Robotics Commission”, calls for a central body to address the lack of competent and timely policy guidance in robotics. For example, the US risks falling far behind other countries in the commercial UAV field because of the FAA’s failure to produce regulations governing drones. Calo points out the big gap between policy set at the research end of the scale (e.g. the OSTP) and policy set at the commercial application end (e.g. the FAA).
However, with robotics being a technology applicable in almost every domain, there will always be a need for multiple governing bodies; one central agency is insufficient. Perhaps the answer lies in central information points, like the Brookings Institution or Robohub, which bridge the gap between robotics researchers and the ‘rest of the world’. Informed discussion is at the heart of democracy, and in a complex technical world, scientists, social scientists and science communicators must lead the debate.
I suggest that our current robotics policy agenda needs to be reformed and better informed. This article reviews some recent policy reports and considers the changing shape of 21st century scientific debate, before concluding with several recommendations for change.
The Pew Report and the problem with popular opinion
Much of today’s information comes via the media and popular opinion, from policy, analysis or government groups that are simply out of touch, or unable to absorb and use information across disciplines. In the worst cases a feedback loop is created, with poorly founded opinions repeated until they are accepted as truth. Recent reports from the Brookings Institution and the Pew Research Center demonstrate both the good and the bad of current policy debates.
The recent, widely reported Pew Research Center report on “AI, Robotics and the Future of Jobs” highlights the absurdity of the situation. The report canvassed more than 12,000 experts sourced from previous reports, targeted listservs and subscribers to Pew’s research, largely professional technology strategists. Eight broad questions were presented, covering various technology trends, and 1,896 experts and members of the interested public responded to the question on AI and robotics.
The problem is that very few of the respondents have more than a glancing knowledge of robotics. To anyone in robotics, the absence of people with expertise in robotics and AI is glaringly obvious. While there are certainly insightful people and opinions in the report, the net weight of this report is questionable, particularly as findings are reduced to executive summary level comments such as:
“Half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers – with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.”
These findings are simply popular opinion without basis in fact. Yet the Pew Research Center is well respected and considered relevant. The center is a non-partisan organization that provides all of its findings freely “to inform the public, the press and policy makers”, not just on the internet and the future of technology, but on religion, science, health, even the impact of the World Cup.
How do you find the right sort of information to inform policy and public opinion about robotics? How do you strike a balance between understanding technology and understanding the social implications of technology developments?
Improving the quality of public policy through good design
Papers like Heather Knight’s “How Humans Respond to Robots” and Ryan Calo’s “The Case for a Federal Robotics Commission”, both part of the Brookings Institution series on “The Future of Civilian Robotics”, and organizations like Robohub and the Robots Association, are good examples of initiatives that improve public policy debate. At one end of the spectrum, an established policy organization is sourcing from established robotics experts. At the other, a peer group of robotics experts is providing open access to the latest research and opinions within robotics and AI, including explorations of ethical and economic issues.
Heather Knight’s report “How Humans Respond to Robots: Building Public Policy through Good Design” for the Brookings Institution is a good example of getting it right. The Brookings Institution, founded in Washington D.C. in 1916, is one of the oldest and most influential think tanks in the world, and the most frequently cited. It is non-partisan and generally regarded as centrist in agenda. Although based in the US, the institution has global coverage and attracts funding from both philanthropic and government sources, including the governments of the US, UK, Japan and China.
Heather Knight is conducting doctoral research in human-robot interaction at CMU’s Robotics Institute. She has worked at NASA JPL and Aldebaran Robotics, she cofounded the Robot Film Festival, and she is an alumna of the Personal Robots Group at MIT. She holds degrees in Electrical Engineering, Computer Science and Mechanical Engineering. Here is a person well anchored in robotics with a broad grasp of the issues, who has prepared an overview of social robotics and robot/society interaction. Her report is a great example of building public policy through good design, if it does indeed make its way into the hands of people who can use it.
As Knight explains, “Human cultural response to robots has policy implications. Policy affects what we will and will not let robots do. It affects where we insist on human primacy and what sort of decisions we will delegate to machines.” Automation, AI and robotics are entering the world of human-robot collaboration, and we need to support and complement the full spectrum of human objectives.
Knight’s goal was not to be specific about policy but rather to sketch out the range of choices we currently face in robotics design and how they will affect future policy questions. She provides many anecdotes and examples where thinking about “smart social design now, may help us navigate public policy considerations in the future.”
Summary: “How Humans Respond to Robots”
Firstly, people require very little prompting to treat machines or personas as having agency. Film animators have long understood just how easy it is to turn squiggles on a screen into expressive characters in our minds and eyes. We are neurologically wired to follow motion and to interpret even inanimate objects as acting socially or intentionally. This has implications for future human relationships as our world becomes populated with smart moving objects: many studies show that we can bond with devices and even enjoy taking orders from them.
There is also the impact of the “uncanny valley” – a term that describes the cognitive dissonance created when something is almost, but not quite, human. This is still a fluid and far from well-understood effect, but it foreshadows our need for familiarity, codes and conventions around human-robot interactions. Film animators have created a vocabulary of tricks that create the illusion of emotion. So, too, have robot designers, who are developing tropes of sounds, colors, and prompts (that may borrow from other devices like traffic lights or popular culture) to help robots convey their intentions to people.
With regard to our response to robots, Knight draws attention to the fallacy of generalizing across cultures. Most HRI (human-robot interaction) studies show that we also have very different responses along other axes, such as gender, age, experience and engagement, regardless of culture.
Similarly, our general responses have undergone significant change as we’ve adapted to precursor technologies such as computers, the internet and mobile phones. Our willingness to involve computers and machines in our personal lives seems immense, but it raises issues of privacy and social isolation alongside the more benign prospects of utility, therapy and companionship.
As well as perhaps regulating or monitoring the uses of AI, automation and robots, Knight asks: do we need to be proactive in considering the rights of machines? Or at least in considering conventions for their treatment? Ethicists are doing the important job of raising these issues, ranging from what choices an autonomous vehicle should make in a scenario where all possible outcomes involve human injury, to whether we should ‘protect’ machines in order to protect our social covenants with real beings. As Kant argued in his lectures on ethics, we have no moral obligation towards animals, and yet our behavior towards them reflects our humanity.
“If he is not to stifle his human feelings, he must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men.” – Kant
This suggests that, as a default, we should create more machines that are machine-like, machines that by design and appearance telegraph their constraints and behaviors. We should avoid the urge to anthropomorphize and personalize our devices, unless we can guarantee our humane treatment of them.
Knight outlines a human-robot partnership framework across three categories: Telepresence Robots, Collaborative Robots and Autonomous Vehicles. A telepresence robot is comparatively transparent, acting as a proxy for a person, who provides the high level control. A collaborative robot may be working directly with someone (as in robotic surgery) or be working on command but interacting autonomously with other people (e.g. a delivery robot). An autonomous vehicle extends the previous scenarios and may be able to operate at a distance or respond directly to the driver, pilot or passenger.
The ratio of shared autonomy is shifting towards the robot, and the challenge is to create patterns of interaction that minimize friction and maximize transparency, utility and social good. In conclusion, Knight calls for designers to better understand human culture and practices in order to frame issues for policy makers.
Brookings Institution and NY Times: Creating a place for dialogue
The Brookings Institution also released several other reports on robotics policy directions as part of its series on The Future of Civilian Robotics, which culminated in a panel discussion. This format is similar to the NY Times’ Room for Debate, which brings outside experts together to discuss timely issues. However, there is a preponderance of experts in law, governance, education and journalism on the panels, perhaps because these disciplines attract multidisciplinary or “meta” thinkers.
Is this the right mix? Are lawyers the right people to be defining the policy scope of robotics? Ryan Calo’s contribution to robotics as a law scholar has been both insightful and pragmatic, and well beyond the scope of any one robotics researcher or robot business. However, Calo has made robotics and autonomous vehicles his specialty area and has spent years engaged in dialogue with many robotics researchers and businesses.
Before moving to the University of Washington as Faculty Director of its new Tech Policy Lab, Calo was Director of Robotics and Privacy at Stanford Law School’s Center for Internet & Society. Calo has an AB in Philosophy from Dartmouth College and a Doctorate in Law, cum laude, from the University of Michigan. His writings have won best-paper awards at conferences, been read to the Senate, prompted research grants, and been republished in many top newspapers and journals.
Which comes first, the chicken or the egg? As technologies become more complex, can social issues be considered without a deep understanding of the technology and what it can or can’t enable? Equally, is it the technology that needs to be addressed or regulated, or is it the social practices, which might or might not be changed as we embrace new technologies?
It’s not surprising that lawyers are setting the standard for the policy debate, as writing and enacting policy is their bread and butter. But the underlying conclusion seems to be that we need deep engagement across many disciplines to develop good policy.
Summary: “The Case for a Federal Robotics Commission”
When Toyota customers claimed that their cars were causing accidents, the various government bodies involved called on NASA to investigate the complex technology interactions and separate mechanical issues from software problems. Ryan Calo takes the position that robotics, as a complex emerging technology, needs an organization capable of investigating potential future issues and shaping policy accordingly.
Calo calls on the US to create a Federal Robotics Commission, or risk falling behind the rest of the world in innovation. Current bodies are ill-equipped to tackle “robotics in society” issues other than in piecemeal fashion. Understanding robotics requires cross-disciplinary expertise, and the technology itself may make possible new human experiences across a range of fields.
“Specifically, robotics combines, for the first time, the promiscuity of data with physical embodiment – robots are software that can touch you,” says Calo.
Society is still integrating the internet and now “bones are on the line in addition to bits”. There may be more victims, but how do we identify the perpetrators in a future full of robots? Law is, by and large, defined around human intent and foreseeability, so current legal structures may require review.
Calo considers the first robot-specific law, passed by Nevada in 2011 for “autonomous vehicles”, which defined autonomous activity so broadly that it covered most modern car behaviors and thus had to be repealed. Where that error was due to a lack of technical expertise, Calo foresees further problems as robotics introduces a genuinely new class of behaviors.
Human driving error accounts for tens of thousands of fatalities. While autonomous vehicles will almost certainly reduce accidents, they might create some accidents that would not have occurred if humans were driving. Is this acceptable?
Calo also describes the ‘underinclusive’ nature of robotics policy, citing the FAA’s development of regulations for drones, which often serve as delivery mechanisms for small cameras. The underlying issue of privacy, however, is raised any time small cameras are badly deployed: in trees, on phones, on poles, on planes or on birds, not just on drones.
Other issues raised by Calo include: the impact of high frequency automated activity with real world repercussions; the potential for adaptive, or ‘cognitive’, use of communications frequencies; and potential problems swapping between automated and human control of systems, if required by either malfunction or law.
Calo then describes his vision for a Federal Robotics Commission modeled on similar previous organizations. This FRC would advise other agencies on policy relating to robots, drones or autonomous vehicles, and also advise federal, state and local lawmakers on robotics law and policy.
The FRC would convene domestic and international stakeholders across industry, government, academia and NGOs to discuss the impact of robotics and AI on society, and could potentially file ‘friend of the court’ briefs in complex technology matters.
Does this justify the call for another agency? Calo admits that there is overlap with the National Institute of Standards and Technology, the White House Office of Science and Technology Policy, and the Congressional Research Service. However, he believes that none of these bodies speaks to the whole of the “robotics in society” question.
Calo finishes with an interesting discussion with Cory Doctorow about whether or not robotics can be considered separate from computers “and the networks that connect them”. Calo posits that the physical harm an embodied system, or robot, could do is very different from the economic or intangible harm done by software alone.
In conclusion, Calo calls for a Federal Robotics Commission to take charge of early legal and policy infrastructure for robotics. It was the decision to apply the First Amendment to the internet, and to immunize platforms for what their users do, that allowed internet technology to thrive and, in turn, to create new 21st century platforms for legal and policy debate.
Robohub – Using 21st century tools for science communication
In the 21st century, science has access to a whole new communications toolbox. Where 19th century science was presented as theater, in the form of public lectures and demonstrations, 20th century science grew an entire business of showcases, primarily conferences and journals. New communication media are now disrupting established science communication.
There is an increasing expectation that science can be turned into a top-500 YouTube channel, like Minute Physics, or an award-winning Twitter account, like Neil deGrasse Tyson’s @neiltyson, which has 2.34 million followers. We are witnessing the rise of MOOCs (massive open online courses) like the Khan Academy, and of open access journals like PLOS, the Public Library of Science.
UC Berkeley has just appointed a ‘wikipedian-in-residence’, Kevin Gorman. The ‘wikipedian-in-residence’ initiative started with museums, libraries and galleries, making information about artifacts and exhibits available to the broader public. This is a first for a university, however, and the goal is twofold: to extend public access to research that is usually behind paywalls or simply obscure, and to improve students’ writing, researching and publishing skills. Students are encouraged to find gaps in Wikipedia and fill them, with reference to existing research.
In between individual experts and global knowledge banks there is space for curated niche content. Robohub is one of the sites that I think can play an integral role both in shaping the quality of debate in robotics and in expanding the science communication toolbox. (Yes, I’m deeply involved in the site, so I am certainly biased. But the increasing number of experts who give their time voluntarily to the site, and the rising number of visitors, give weight to my assertions.)
Robohub had its inception in 2008 with the birth of the Robots Podcast, a biweekly feature on a range of robotics topics, now numbering more than 150 episodes. As the number of podcasts and contributors grew, the non-profit Robots Association was formed to provide an umbrella group tasked with spinning off new forms of science communication, sharing robotics research and information across the sector, across the globe and to the public.
Robohub is an online news site with high quality content, more than 140 contributors and 65,000 unique visitors per month. Content ranges from one-off stories about robotics research or business, to ongoing lecture series and micro lectures, to forums for debate on robotics issues, like the ‘Robotics by Invitation’ panels and the Roboethics polls. Other initiatives are in development, including report production, research video dissemination, and serving as a hub for robotics jobs, crowdfunding campaigns, research papers and conference information.
In lieu of a global robotics policy think tank, organizations like Robohub can do service by developing a range of broad policy reports, or by providing public access to a curated selection of articles, experts and reports.
In Conclusion
“Take me to your leader?” Even if we can identify our leaders, do they know where we are going? I suggest that our current robotics policy agenda needs to be reformed and better informed. This article provides a review of some recent policy reports and considers the changing shape of 21st century scientific debate. In conclusion, I make several recommendations for change:
The creation of a global robotics policy think tank.
I believe that a global robotics policy think tank would create informed debate across all silos and all verticals – a better solution than regulation or the precautionary principle.
That the CTO of the USA and the global equivalents make robotics a key strategy discussion.
Robotics has been identified as an important global and national economic driver. The responsibility, or impetus, for bridging the silos that hold back both policy and innovation must come from the top.
That a US Robotics Commission be created – while robotics is still an emerging field – to implement a cross-disciplinary understanding of this technological innovation and its impacts at all levels of society.
At a national rather than a global level, NASA is stepping in to bridge the gaps between technology developed under the aegis of bodies like the OSTP, NSF and DARPA, and the regulatory bodies at the application end – the ‘end effectors’ – like the DOF, DOA and DOT. Perhaps a robotics-specific organization or division within NASA is called for.
That funding bodies make grants available for cross-disciplinary organizations engaged in creating platforms for informed debate on emerging technologies.
Organizations that are cross-disciplinary and have global reach find it difficult to qualify for funding, as most funding agencies restrict their contributions either locally or by discipline. A far-reaching technology like robotics needs a far-reaching policy debate.