
Lessons from Brexit and learning to better communicate robotics research and innovation


by Michael Szollosy
27 July 2016



Image: EU flag on a broken wall.

As part of the UK’s National Robotics Week, The University of Sheffield hosted the 17th Towards Autonomous Robotic Systems (TAROS) conference from 28-30 June. Among the papers and discussions on the development of autonomous robotics research, two sessions on the last day looked at robots in the public eye and explored the issue of responsible research in robotics.

Speakers at these panels included Sheffield Robotics’ Director Tony Prescott, Amanda Sharkey from Sheffield’s Department of Computer Science, and Bristol’s Professor Alan Winfield. The sessions looked broadly at the ethical issues confronting robotics research, but a particularly useful discussion, led by Hilary Sutcliffe, Director of MATTER, examined how robots are regarded in the public imagination, and the vital need to confront these sometimes negative perceptions as we move forward with responsible research.

When I first started researching the public perception of robotics, I was tasked with answering a specific question: ‘Why are we afraid of robots?’ The – very legitimate – concern among many roboticists was that their research and innovations would crash upon the rocks of public resistance. Realising that robots are held in deep suspicion by a public fed a constant stream of dystopian science-fiction and newspapers that can’t get enough of THAT picture of the Terminator, many people researching robotics and AI were desperate to avoid the fraught battles faced by research into genetically modified organisms: dubbed ‘Frankenstein foods’, GMOs have met with such hostility in the UK, and in the EU more generally, that they have been largely rejected by a public caught in a war of competing interests and a cacophony of voices from scientists, corporations, environmentalists, the media and politicians.

When we look at the case of GMOs, we are not seeking to determine whether GMOs are ‘good’ or ‘bad’, but merely how the debate was shaped by particular interests, how the reassurances of scientists and experts working in the field can be met by sceptical audiences and, especially, how public fear has driven decision-making.

Hilary Sutcliffe and MATTER have been working with the University of Sheffield across a number of departments and faculties to create an agenda for future responsible research and innovation; more than merely putting plasters over public concerns, we are trying to genuinely work with both internal and external stakeholders, bringing together researchers, industry and specific public interests, to include all concerns at every level of decision-making and research initiatives. The case of GMOs has long been one of keen interest to us as we try to learn lessons from the past and move forward with new ideas about how to do things differently. But the case of Brexit – the British vote to leave the European Union – has provided a new set of questions, and some new lessons, and made clear that there is a new set of challenges facing public attitudes to innovation.

This is a summary of some of Sutcliffe’s ideas, to which I have added some more specific and historical context, the core of my own research into the cultural and social influences and impacts of robotics, in an attempt to understand not only where we are, but how we got here.

Anxieties about loss of control

Sutcliffe pointed out that ‘protests are proxies for other issues’. Brexit wasn’t just about the people of Britain being unhappy with the direction of the EU, or with the EU’s regulation of trade, the European Convention on Human Rights, or even, arguably, immigration to Britain from the rest of the EU. The vote in favour of Brexit, and the debate that led up to that vote, were clearly about wider concerns among the British population: a feeling of loss of control, of being ignored by a distant, elite class of politicians and ‘experts’ who too often have little time for the concerns of the public.

Boris Johnson at Conservative Party Conference, 2011. Credit: Flickr

The principal slogan for the pro-Brexit campaigners was ‘Take Control’ or ‘Take Back Control’, and it’s clear that a primary motivation for people voting in favour of Brexit was reclaiming some imagined powers that were lost. Whether or not there has been a genuine loss of power or control is not the point: for now, let us set aside criticisms that the Leave campaign flatly lied by misrepresenting the powers claimed by the EU and the extent to which the European Commission, for example, controls aspects of life in Britain. (Though make no mistake – these were terrible, consciously manipulated falsehoods, and the Leave campaign exploited both fear and ignorance in propagating these lies.) We also need to set aside, for now, the challenge that people did not have ‘control’, i.e. political authority, over their lives before Britain joined the EEC in 1973.

Whatever the validity of the feelings, sentiments and ideas that give rise to anxiety, it is important for us to remember that the anxiety itself is real. We can and should counter fears of immigration with genuine statistics about the levels and impacts of immigration (especially since so many in Britain, and particularly Leave voters, overestimate the levels of immigration in their country), just as we should reassure people that the increasing automation of workplaces will mean more job opportunities elsewhere (or, perhaps, even more leisure time) in a society that can be more, not less, equal. But to provide these arguments while dismissing the public’s fears, assuming that reasonable arguments will easily triumph over the irrationality of anxiety, is both to underestimate the power of storytelling and the imagination, and to patronise whole sections of our societies that have genuine worries that the rosy utopia promised by politicians, policy-makers and scientists might not come to pass.

While it is clear that this loss of control, real or imagined, can be traced to the vicissitudes of global (late-)capitalism, attributing it to such a nebulous, complex force complicates the argument, and so this genuine problem often escapes blame for much of what it has wrought upon modern societies, because it is hard to explain and, most problematically, it is faceless. It is much easier to identify and blame a bogeyman – and it is much easier for politicians and the media to ‘sell’ bogeymen as responsible for stealing control away from people and their traditional communities. The greatest of these bogeymen, the one that bears the brunt of people’s frustration, is the immigrant, as the Brexit debate demonstrated. The face that has to bear the blame is dark-skinned, with a beard or a hijab (and not at all representative of the dominant ethnic make-up of the EU itself).

However, after the dreaded ‘Schrödinger’s immigrant’, little in modern life threatens to take control away from people and traditional communities as much as robots. Like immigrants, robots are imagined to be mysterious beings with inexplicable motivations, and though we may at first welcome them and the promise they offer, we will soon rue our decision as they move to steal our jobs and leave our communities empty and bereft of the power to sustain themselves. The robot even has a face that we can identify and fear: just as so many stories about immigrants – including UKIP’s shameless poster of refugees trying to enter Eastern Europe – are accompanied by pictures of Muslims in order to associate all immigration with the image most feared by the British public (the radical, mysterious, religious terrorist), so many stories about robots are accompanied by pictures of the Terminator, associating all robots in the minds of the British public with a mercilessly rational killing machine bent on the extermination of the human race.

This is why robots – with their humanoid appearance, not to mention how well they can be cast as villains in a Hollywood plot – cause more anxiety in the population than much more immediate threats such as climate change.

The danger of hype

Part of the problem faced by the Remain camp was that their arguments, warning of the negative consequences of Brexit (many of which have already come to pass, incidentally, or look soon to do so) were easily branded as ‘Project Fear’ by Brexit supporters (ignoring the irony, of course, of the Leave campaign’s own scaremongering, particularly around the issue of immigration). By thus portraying the Remain camp’s warnings, pro-Brexit campaigners were not only able to largely discredit these potential consequences, but also to increase hostility towards Remain more generally.

The lesson from this for robotics is simply that hype breeds distrust. Big, shouty arguments, made in CAPITAL LETTERS!, seem to be losing their effectiveness. Again, if we look at this historically, we might notice how we have become a society not only inured to sensationalism – the grandiose promise of New! Improved! and the dire warnings of the consequences of our clothes being anything less than spotlessly white – but one that may be becoming increasingly angered by this hysterical consumerism.

As academics, we are sometimes our own worst enemies in this regard, by hyping either what we can do (e.g. ‘if you provide us with the £80K grant, in two years we will have a fully cognizant humanoid robot’), or by overstating the threat posed by what is on the horizon (e.g. we’ve often wondered how much money we could attract if we ran a Sheffield Centre for Robots Will Kill Us All, Run! Run for Your Lives!).

Lack of faith in ‘expertise’

This apparent immunity to hype and hyperbole is compounded by an apparent confusion as to where reliable information can be found. Time and again in my crusades on social media, people expressed a desire to find ‘objective’ facts. There was a frustration that the government were not providing such objectivity, and were instead, in leading the campaign to Remain, mired in the same games of spin and hyperbole as the Leave campaign. But while the government, and by extension the Remain campaign, were subject to blame and frustration for their campaign, the Leave campaign, much more demonstrably based on lies and falsehoods, were excused and their fallacious arguments indulged. It is as if such deceitful behaviour was expected of the pro-Brexiters, and the anger directed at the pro-EU leaders was less about what they said and more about their perceived betrayal of the electorate, for daring to take sides at all. This allowed the Brexiters to portray themselves – laughably, given their funding, leadership and core demographic – as ‘anti-establishment’, and somehow, therefore, blameless for all of the feelings of confusion, powerlessness and anxiety that lay behind much of the Brexit support among the electorate.

Sutcliffe noted that Brexit demonstrated that people simply aren’t listening to ‘experts’ anymore. The distrust of politicians and corporations, it seems, has infected other professions, including economists and scientists (despite the courageous efforts of groups like Scientists for the EU). In this respect alone, academics seeking to engage the public about robotics research are facing an uphill struggle as we try to counter the mythologies that have so grabbed the popular imagination, from Frankenstein-styled genocidal killing machines to completely empathetic, mass-unemployment-inducing humanoids with vastly superior intellects.

The loss of faith in expertise is a complex issue that has a long-festering history, little of which is easily remedied. Cultural theorists have pointed out for decades how, since the catastrophes of WWII, we have moved to a ‘postmodern condition’, a general mistrust of ‘metanarratives’ – any argument that poses as ‘The Truth’. (To what extent academics themselves, with a healthy ‘hermeneutics of suspicion’ or by promoting a radical relativism, have sown the seeds of this popular uprising is a topic to be investigated in more depth another time.)

This questioning of science also has historical foundations in the fear of hubris, long nurtured in the Romantic imagination in the Frankenstein mythology and what Tony Stark – yes, Iron Man – describes in Avengers: Age of Ultron as the ‘man was not meant to meddle medley’. The public (in so far as ‘they’ can be described as a homogeneous entity) has had over 200 years to get used to the idea that scientists’ ambition knows no bounds, and that their arrogance will lead not only to their own downfall but, eventually, inevitably, to the downfall of the entire human race. Scientists, we have been told over and over again in novels, in film, and now in video games, are so enamoured of technology, of what is possible, that they never stop to ask if they should.

Image: ‘Your scientists were so preoccupied…’ meme.

And while these fears were at the heart of the GMO debates, no ‘experts’ suffer more from this fear of hubris than roboticists, who are very literally building the machines that will, of course, destroy the entire human race. We see parallels in the way that the Faustian and Frankenstein mythology has been employed so widely in the popular conception of both GMOs – ‘Frankenstein foods’ – and robotics: time and again, for example, even our most recent films about robots are just versions of this mythology, from Metropolis to Terminator to last year’s Ex_Machina and Chappie.

(There has been a noticeable shift, which is interesting to note, too, from the fear of the hubris of the individual scientist – as evident in Frankenstein and Metropolis – to mistrust of corporations, such as in Chappie, which might give scientists some hope that public opinion is moving, or can be swayed, in their favour.)

Nostalgia for simpler times

Sutcliffe noted how both the anxiety about the loss of personal control and the mistrust of expert opinion can be seen as a nostalgia for simpler times, for an era when people enjoyed the comfort of metanarratives and felt that they had a more direct stake in the direction of their lives. What historical analysis demonstrates, however, is that such an era is a fantasy. Communities may have enjoyed the illusion of stability under earlier, less sophisticated forms of capitalism, but they were always ultimately subject to the whims of the market and the decision-making of industrial powers.

And while it may have taken time for the postmodern ideas of academics to seep down into the popular imagination, mistrust of those in power, or perceived to be in power, is a process that started with the hermeneutics of suspicion undertaken in the nineteenth century (thinking of Marx and Freud, for example). We could go even further back and note how the lionisation of rebellion, opposition to authority and anti-establishment sentiment were to be found at the very heart of the Romantic project, at the very birth of our new, brave (isolated) modern individual. (Consider, for example, Blake’s re-conception of Milton’s Satan as a Romantic hero.) All of this happened long before even the Second World War which, according to the ‘official’ postmodern line, marked the ‘death of metanarratives’ – so we’ve had at least 50 years, if not 250 years, to get used to these ideas.

The lessons, and what to do

There is perhaps little that we can do, immediately, to overturn 200 years of post-Romantic ideology (which would involve nothing less than a complete reconception of what it means to be human). However, there are some lessons from Brexit that we can more readily and easily implement with regard to robotics research and innovation:

  • Get a good soundbite! It may sound trivial, but in our media age this is a potent weapon in the war for hearts and minds. GMOs were permanently tarred with the label ‘Frankenfoods‘, and the Brexit campaign used the simple, if wholly vague and inaccurate, slogan of #TakeControl.
  • Develop a strong narrative. There is little roboticists can do to counter such terrific stories as Frankenstein, Metropolis, The Terminator and the scores of other films, books and video games with such mass appeal without slipping into the hyperbole of utopias, and so falling victim to hype. For every Andrew from Bicentennial Man, there are a thousand Terminators (and we all know who would win that fight). You do not need to be a relativist to understand and accept the importance of the popular imagination in constructing reality. Do not underestimate the power of a really good story.
  • Be careful what you promise. Sutcliffe cites Richard Jones and the ‘economy of promises’: the balance between ‘optimism’ and ‘hype’ from research proposals to headlines in popular tabloids.
  • It is vital, therefore, to communicate your vision carefully, authentically and early. It is important to be engaging from the outset, to break up counter-narratives and prevent them from catching hold of the popular imagination, from becoming ‘common sense’. Try countering common beliefs such as ‘GMO foods harm the environment’ or ‘There are too many immigrants in this country’. Once a belief takes hold, and has been supported by a large number of people, the dynamics of cognitive dissonance make it very hard to shift opinions.

As we’ve seen from Brexit, because of the coming together of these factors – history, popular mood and cultural climate – if you wait to engage the public and other stakeholders until there is a clearly defined problem that you need to counteract, it’s probably already too late.

But messages also need to be communicated authentically: many people seem unable to discriminate between competing claims of truth (e.g. they cannot judge what is ‘true’ between an academic study supported by strong evidence and a screeching headline at the top of an opinion piece in an ideologically-interested tabloid). In the absence of the skills that allow such distinctions to be made, people seem more prepared to believe the voice that is most akin to their own. Nigel Farage and his folksy populism are more readily perceived as ‘authentic’ than David Cameron’s convoluted and carefully focus-grouped statements. Boris the Bumbler is regarded as more honest than Ed Miliband’s carefully argued ideas. ‘Authenticity’, or the perception of authenticity, matters. People feel as though they are capable of sniffing out bullshit, even if they are, in fact, not.

This makes a strong case for more and varied people to enter into STEM subjects, so that we have more and varied voices speaking on behalf of those fields. And we need to learn how to speak in a language that is understood by wider audiences. This is not the same as ‘dumbing down’, and it is important not to be seen to be patronising those stakeholders with whom we need to engage. But if we do not learn how to articulate ourselves in a language that can be understood, then our messages will always fall on deaf ears, and we will always lose to those who are more effective at communicating.





Michael Szollosy is a research fellow at Sheffield Robotics, at the University of Sheffield, and is attached to the Department of Psychology.




