As part of the UK’s National Robotics Week, The University of Sheffield hosted the 17th Towards Autonomous Robotic Systems (TAROS) conference from 28-30 June. Among the papers and discussions on the development of autonomous robotics research, two sessions on the last day looked at robots in the public eye and explored the issue of responsible research in robotics.
Speakers at these panels included Sheffield Robotics’ Director Tony Prescott, Amanda Sharkey from Sheffield’s Department of Computer Science, and Bristol’s Professor Alan Winfield. The sessions looked broadly at the ethical issues confronting robotics research, but a particularly useful discussion, led by Hilary Sutcliffe, Director of MATTER, examined how robots are regarded in the public imagination, and the vital need to confront these sometimes negative perceptions as we move forward with responsible research.
When I first started researching the public perception of robotics, I was tasked with answering a specific question: ‘Why are we afraid of robots?’ The – very legitimate – concern among many roboticists was that their research and innovations would crash upon the rocks of public resistance. Realising that robots are held in deep suspicion by a public fed a constant stream of dystopian science-fiction and newspapers that can’t get enough of THAT picture of the Terminator, many people researching robotics and AI were desperate to avoid the fraught battles faced by research into genetically modified organisms: dubbed ‘Frankenstein foods’, GMOs have met such hostility in the UK, and the EU more generally, that they have been largely rejected by a public caught in a war of competing interests and a cacophony of voices from scientists, corporations, environmentalists, the media and politicians.
When we look at the case of GMOs, we are not seeking to determine whether GMOs are ‘good’ or ‘bad’, but merely how the debate was shaped by particular interests, how the reassurances of scientists and experts working in the field can be met by sceptical audiences and, especially, how public fear has driven decision-making.
Hilary Sutcliffe and MATTER have been working with the University of Sheffield across a number of departments and faculties to create an agenda for future responsible research and innovation; more than merely putting plasters over public concerns, we are trying to genuinely work with both internal and external stakeholders, bringing together researchers, industry and specific public interests, to include all concerns at every level of decision-making and research initiatives. The case of GMOs has long been one of keen interest to us as we try to learn lessons from the past and move forward with new ideas about how to do things differently. But the case of Brexit – the British vote to leave the European Union – has provided a new set of questions, and some new lessons, and made clear that a new set of challenges faces public attitudes to innovation.
This is a summary of some of Sutcliffe’s ideas, to which I have added some more specific and historical context, the core of my own research into the cultural and social influences and impacts of robotics, in an attempt to understand not only where we are, but how we got here.
Sutcliffe pointed out that ‘protests are proxies for other issues’. Brexit wasn’t just about the people of Britain being unhappy with the direction of the EU, or with the EU’s regulation of trade, the European Convention on Human Rights, or even, arguably, immigration to Britain from the rest of the EU. The vote in favour of Brexit, and the debate that led up to that vote, was clearly about wider concerns among the British population: a feeling of loss of control, of being ignored by a distant, elite class of politicians and ‘experts’ who too often have little time for the concerns of the public.
The principal slogan for the pro-Brexit campaigners was ‘Take Control’ or ‘Take Back Control’, and it’s clear that a primary motivation for people voting in favour of Brexit was reclaiming some imagined powers that were lost. Whether or not there has been a genuine loss of power or control is not the point: for now, let us set aside criticisms that the Leave campaign flatly lied by misrepresenting the powers claimed by the EU and the extent to which the European Commission, for example, controls aspects of life in Britain. (Though make no mistake – these were terrible, consciously manipulated falsehoods, and the Leave campaign exploited both fear and ignorance in propagating these lies.) We also need to set aside, for now, the challenge that people did not have ‘control’, i.e. political authority, over their lives before Britain joined the EEC in 1973.
Whatever the validity of the feelings, sentiments and ideas that give rise to anxiety, it is important for us to remember that the anxiety itself is real. We can and should counter fears of immigration with the genuine statistics about the levels and impacts of immigration (especially since so many in Britain, and particularly Leave voters, overestimate the levels of immigration in their country), just as we should reassure people that the increasing automation of workplaces will mean more job opportunities elsewhere (or, perhaps, even more leisure time) in a society that can be more, not less, equal. But to provide these arguments while dismissing the public’s fears, and assuming that reasonable arguments will easily triumph over the irrationality of anxiety, is both to underestimate the power of storytelling and the imagination and to patronise whole sections of our societies that have genuine worries that the rosy utopia promised by politicians, policy-makers and scientists might not come to pass.
While it is clear that this loss of control, real or imagined, can be traced to the vicissitudes of global (late-)capitalism, attributing it to such a nebulous, complex force complicates the argument, and so this genuine problem often escapes blame for much of what it has wrought upon modern societies: it is hard to explain and, most problematically, it is faceless. It is much easier to identify and blame a bogeyman – and it is much easier for politicians and the media to ‘sell’ bogeymen as responsible for stealing control away from people and their traditional communities. The greatest of these bogeymen, the one that bears the brunt of people’s frustration, is the immigrant, as the Brexit debate demonstrated. The face that has to bear the blame is dark-skinned, with a beard or a hijab (and not at all representative of the dominant ethnic make-up of the EU itself).
However, after the dreaded ‘Schrödinger’s immigrant’, little in modern life threatens to take control away from people and traditional communities as much as robots. Like immigrants, robots are imagined to be mysterious beings with inexplicable motivations, and though we may at first welcome them and the promise they offer, we will soon rue our decision as they move to steal our jobs and leave our communities empty and bereft of the power to sustain themselves. The robot even has a face that we can identify and fear: just as so many stories about immigrants – including UKIP’s shameless poster of refugees trying to enter Eastern Europe – are accompanied by pictures of Muslims in order to associate all immigration with the image most feared by the British public (the radical, mysterious, religious terrorist), so many stories about robots are accompanied by pictures of the Terminator, associating all robots in the minds of the British public with a mercilessly rational killing machine bent on the extermination of the human race.
This is why, for example, robots – with their humanoid appearance, not to mention how well they can be cast as villains in a Hollywood plot – cause more anxiety in the population than much more immediate threats such as climate change.
Part of the problem faced by the Remain camp was that their arguments, warning of the negative consequences of Brexit (many of which have already come to pass, incidentally, or look soon to do so), were easily branded as ‘Project Fear’ by Brexit supporters (ignoring the irony, of course, of the Leave campaign’s own scaremongering, particularly around the issue of immigration). By thus portraying the Remain camp’s warnings, pro-Brexit campaigners were not only able to largely discredit these potential consequences, but also to increase hostility towards Remain more generally.
The lesson from this for robotics is simply that hype breeds distrust. Big, shouty arguments, made in CAPITAL LETTERS! seem to be losing their effectiveness. Again, if we look at this historically, we might notice how we have become a society not only inured to sensationalism – the grandiose promise of New! Improved! and the dire warnings of the consequences of our clothes being anything less than spotlessly white – but perhaps increasingly angered by this hysterical consumerism.
As academics, we are sometimes our own worst enemies in this regard, by hyping either what we can do (e.g. ‘if you provide us with the £80K grant, in two years we will have a fully cognizant humanoid robot’), or by overstating the threat posed by what is on the horizon (e.g. we’ve often wondered how much money we could attract if we ran a Sheffield Centre for Robots Will Kill Us All, Run! Run for Your Lives!).
This apparent immunity to hype and hyperbole is compounded by confusion as to where reliable information can be found. Time and again in my crusades on social media, people expressed a desire to find ‘objective’ facts. There was a frustration that the government was not providing such objectivity, and was instead, in leading the campaign to Remain, mired in the same games of spin and hyperbole as the Leave campaign. But while the government, and by extension the Remain campaign, were subject to blame and frustration for their campaign, the Leave campaign, much more demonstrably based on lies and falsehoods, was excused and its fallacious arguments indulged. It is as if such deceitful behaviour was expected of the pro-Brexiters, and the anger directed at the pro-EU leaders was less about what they said and more about their perceived betrayal of the electorate, for daring to take sides at all. This allowed the Brexiters to portray themselves – laughably, given their funding, leadership and core demographic – as ‘anti-establishment’, and somehow, therefore, blameless for the feelings of confusion, powerlessness and anxiety that lay behind much of the Brexit support among the electorate.
Sutcliffe noted that Brexit demonstrated that people simply aren’t listening to ‘experts’ anymore. The distrust of politicians and corporations, it seems, has infected other professions, including economists and scientists (despite the courageous efforts of groups like Scientists for the EU). In this respect alone, academics seeking to engage the public about robotics research are facing an uphill struggle as we try to counter the mythologies that have so grabbed the popular imagination, from Frankenstein-styled genocidal killing machines to completely empathetic, mass-unemployment-inducing humanoids with vastly superior intellects.
The loss of faith in expertise is a complex issue with a long-festering history, little of which is easily remedied. Cultural theorists have pointed out for decades how, since the catastrophes of WWII, we have moved to a ‘postmodern condition’, a general mistrust of ‘metanarratives’ – any argument that poses as ‘The Truth’. (To what extent academics themselves, with a healthy ‘hermeneutics of suspicion’ or in promoting a radical relativism, have sown the seeds of this popular uprising is a topic to be investigated in more depth another time.)
This questioning of science also has historical foundations in the fear of hubris, long-nurtured in the Romantic imagination in the Frankenstein mythology and what Tony Stark – yes, Iron Man – describes in Avengers: Age of Ultron as the ‘man was not meant to meddle medley’. The public (such as ‘they’ can be described as a homogeneous entity) has had over 200 years to get used to the idea that scientists’ ambition knows no bounds, and that their arrogance will lead not only to their own downfall but, eventually, inevitably, to the downfall of the entire human race. Scientists, we have been told over and over again in novels, in film, and now in video games, are so enamoured of technology, of what is possible, that they never stop to ask if they should.
And while these fears were at the heart of the GMO debates, no ‘experts’ suffer more from this fear of hubris than roboticists, who are very literally building the machines that will, of course, destroy the entire human race. We see parallels in the way that the Faustian and Frankenstein mythologies have been employed so widely in the popular conception of both GMOs (‘Frankenstein foods’) and robotics: time and again our films about robots, from Metropolis to Terminator to last year’s Ex_Machina and Chappie, are just versions of this mythology.
(There has been a noticeable shift, which is interesting to note, too, from the fear of the hubris of the individual scientist – as evident in Frankenstein and Metropolis – to mistrust of corporations, such as in Chappie, which might give scientists some hope that public opinion is moving, or can be swayed, in their favour.)
Sutcliffe noted how both the anxiety about the loss of personal control and the mistrust of expert opinion can be seen as a nostalgia for simpler times, for an era when people enjoyed the comfort of metanarratives and felt that they had a more direct stake in the direction of their lives. What historical analysis demonstrates, however, is that such an era is a fantasy. Communities may have enjoyed the illusion of stability under earlier, less sophisticated forms of capitalism, but they were always ultimately subject to the whims of the market and the decision-making of industrial powers.
And while it may have taken time for the postmodern ideas of academics to seep down into the popular imagination, mistrust of those in power, or perceived to be in power, is a process that started with the hermeneutics of suspicion undertaken in the nineteenth century (thinking of Marx and Freud, for example). We could go even further back, and note how the lionisation of rebellion, opposition to authority and anti-establishment sentiment were to be found at the very heart of the Romantic project, at the very birth of our new, brave (isolated) modern individual. (Consider, for example, Blake’s re-conception of Milton’s Satan as a Romantic hero). All of this happened long before even the Second World War which, according to the ‘official’ postmodern line, marked the ‘death of metanarratives’ – so we’ve had at least 50 years, if not 250 years, to get used to these ideas.
There is perhaps little that we can do, immediately, to overturn 200 years of post-Romantic ideology (which would involve nothing less than a complete reconception of what it means to be human). However, there are some lessons from Brexit that we can more readily implement with regard to robotics research and innovation:
As we’ve seen from Brexit, because of the coming together of these factors – history, popular mood and cultural climate – if you wait to engage the public and other stakeholders until there is a clearly defined problem that you need to counteract, it’s probably already too late.
But messages also need to be communicated authentically: many people seem unable to discriminate between competing claims of truth (they cannot, for example, judge what is ‘true’ between an academic study supported by strong evidence and a screeching headline at the top of an opinion piece in an ideologically-interested tabloid). In the absence of the skills that allow such distinctions to be made, people seem more prepared to believe the voice that is most akin to their own. Nigel Farage and his folksy populism are more readily perceived as ‘authentic’ than David Cameron’s convoluted and carefully focus-grouped statements. Boris the Bumbler is regarded as more honest than Ed Miliband and his carefully argued ideas. ‘Authenticity’, or the perception of authenticity, matters. People feel as though they are capable of sniffing out bullshit, even if they are, in fact, not.
This makes a strong case for more, and more varied, people to enter STEM subjects, so that we have more, and more varied, voices speaking for these fields. And we need to learn how to speak in a language that is understood by wider audiences. This is not the same as ‘dumbing down’, and it is important not to be seen to be patronising those stakeholders with whom we need to engage. But if we do not learn how to articulate ourselves in a language that can be understood, then our messages will always fall on deaf ears, and we will always lose to those who are more effective at communicating.