Robohub.org
 

How do we regulate robo-morality?

by Yueh-Hsuan Weng
15 December 2016




In April 2016, the British Standards Institution (BSI) published the world’s first ethical standard for the design, production, sale, and use of social robots. “BS 8611:2016 Robots and robotic devices” gives guidelines for identifying ethical hazards and reducing the associated risks to acceptable levels. BS 8611 could be crucial in determining the nature of emerging human-robot relationships. However, the publication also raises many new questions. Are there any conflicts with ISO 13482 when incorporating BS 8611 into the design of safer robots? Should we consider regulating robot ethics? If so, how?

In this interview, we invite Prof. Joanna Bryson to share her insights on how to regulate future human-robot relationships.


Date: December 9th 2016.

Interviewee: Prof. Joanna J. Bryson, Reader at the Department of Computer Science, University of Bath, and affiliate of the Center for Information Technology Policy, Princeton University.

Interviewer: Dr. Yueh-Hsuan Weng, Co-founder of ROBOLAW.ASIA and Fellow at CLAST AI & Law Research Committee (Beijing) and Tech and Law Center (Milan).


Prof. Joanna Bryson. Image: University of Bath

Q1 WENG: Thank you for agreeing to an interview with us. Can you tell us a little about your background?

BRYSON: My primary academic interest is comparative intelligence. I was particularly interested in animal behaviour as a student, and took a liberal arts degree from Chicago with a major in the Behavioural Sciences. I now have two degrees in Artificial Intelligence (including my PhD at MIT) and another master’s degree in Psychology (both master’s degrees are from Edinburgh). I’m currently an Associate Professor at Bath, where I founded the Intelligent Systems group and organised Artificial Models of Natural Intelligence, which is basically my own laboratory.

During my PhD I was working on a humanoid robot project and was astonished that people thought they had moral obligations to the robot, even though it did not function at all. It was just a bunch of motors welded together in the shape of a human. I began publishing about this problem, thinking that people didn’t know enough about AI, but in recent years I’ve realised they don’t understand enough about ethics. They are confused about what is really the root of ethical obligation, and they do not understand the nature of artefacts. They do not understand that we can construct both AI and ethics so that we either are or aren’t obliged to AI. I have become well-known for arguing that we are ethically obliged to construct AI to which we are not ethically obliged.

Q2 WENG: What is the meaning of “ethical hazards”? Besides deception and anthropomorphism, what other ethical hazards are identified in BS 8611:2016?

BRYSON: I have to admit I haven’t memorised that document and don’t have time to go back to read it now. Maybe you are talking about a moral hazard? A moral hazard is something likely to make you do the wrong thing. In particular, I have argued that attributing ethical status to AI is a moral hazard, because it allows us to escape the responsibilities we ourselves have as the manufacturers, builders and operators of AI. This is particularly obvious in the case of military weapons – we have enough trouble distributing blame between soldiers, their commanders, and the government that commissions them. Making the weapons themselves “sinks” for guilt is not honest or beneficial. But there are also moral hazards associated with owning “friends” and “lovers” – if children relate mostly to things they own, how will they grow up able to be equal partners with their spouses or coworkers, or members of a democracy?

Q3 WENG: Some members of the Bristol Robotics Laboratory are particularly interested in “embedded intelligence”. Why is this concept important? Does it have any impact on BS 8611:2016?

BRYSON: I’m sorry, I don’t think it matters particularly whether AI is embedded in an artefact or “channelled” through the cloud, except that there are obviously privacy issues when we connect AI to the Internet or to any shared digital memory. In fact, one of the ways I recommend ensuring that we have no ethical obligation to robots or other AI is to ensure there is never a unique copy: it should be backed up to remote sites at all times.

Q4 WENG: Prof. John Holland said, “…an internal model allows a system to look ahead to the future consequences of current actions, without actually committing itself to those actions”. Can you tell us about the concept of an “internal model”? Does a passive dynamic robot have its own internal model as well?

BRYSON: Ha, that’s a great question! The idea of an internal model is just a representation that allows, as Prof. Holland says, a means for planning around and anticipating the world. Rodney Brooks famously recommended that AI should not have a model, that it should use the world as its own best model, but in fact that is not practical: there is not enough information in a sensory signal to disambiguate the world sufficiently to create animal-like intelligence, so we need prior expectations to interpret the sensory input. Still, Brooks was a genius of AI, and the field moved forward much faster when we realised we could use minimalist representations, just those necessary for action. We realised that humans “flush” our memory quickly if sensory information contradicts it, and operate on partial knowledge all the time. Previously we thought AI had to have perfect knowledge so it could make perfect plans, but now we understand that that isn’t tractable.

So does a passive dynamic walker have an internal model? No. It is a model of an animal walking system, and to some extent it embodies expectations about, for example, gravity, but by and large it can do no planning or control. It arrives only where it goes and never knows that it has gotten there or been anywhere else, so it certainly cannot anticipate anything. But its design anticipates particular regularities about the world, so its designers had models that are partly embodied in the walker.
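To make Holland’s idea of lookahead concrete, here is a minimal sketch (an illustration of mine, not something from the interview or from BS 8611): an agent whose internal model predicts the outcome of each candidate action, so it can compare predicted consequences before committing to any action in the real world. All names here (the grid world, `step_model`, `plan_one_step`) are purely illustrative assumptions.

```python
# Minimal sketch of one-step lookahead with an internal model.
# A purely reactive controller has no such model and cannot do this.
from typing import Callable, Iterable, Tuple

State = Tuple[int, int]   # e.g. an (x, y) position on a grid
Action = Tuple[int, int]  # e.g. a move such as (1, 0)

def step_model(state: State, action: Action) -> State:
    """Internal model: predicts the next state WITHOUT acting in the world."""
    return (state[0] + action[0], state[1] + action[1])

def plan_one_step(state: State,
                  actions: Iterable[Action],
                  cost: Callable[[State], float]) -> Action:
    """Simulate each candidate action with the model and pick the one whose
    predicted outcome is cheapest; only that action is actually committed to."""
    return min(actions, key=lambda a: cost(step_model(state, a)))

if __name__ == "__main__":
    goal = (3, 0)
    dist_to_goal = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    print(plan_one_step((0, 0), moves, dist_to_goal))  # -> (1, 0)
```

The model here is deliberately minimalist in Brooks’ sense: it only encodes the regularity needed to choose the next action, not a complete description of the world.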

Q5 WENG: I conducted an empirical case study on humanoid robots and regulation with researchers from Waseda University. We found that inside the “Tokku” Special Zones there were strong demands for regulating robotics in road traffic, privacy, tele-communication, safety, tax, highway traffic, etc. However, it is not clear whether robot ethics should be regulated. Do you have any comments on this?

BRYSON: Unfortunately I also haven’t had time to read your study yet, but I can talk a bit about regulation. There are two important things to recognise. First, government should not be thought of as an imposition, particularly in a democracy, and even in an autocracy government only functions to the extent it is tolerated. Government and regulation are the way we citizens coordinate our desires and vision. Many people in technology seem to think of regulation only as limits on their freedom, and indeed even the word compromise can be used in both a negative and a positive way. But technologists need to understand that when people come together and work together, they often create something of mutual benefit, public goods that are greater than the summed contributions of those who are cooperating. Regulations can include those that mandate investment of government resources (our taxes) in innovation. They can be liberating as well as constraining. Coming to agreements with fellow citizens about essential matters on which AI has a huge impact, such as privacy and the distribution of wealth, is essential, and yes, agreements require compromise, but it is through these that we create peace, stability, and affluence.

Having said all that, what I get from surveys of citizens is not information about what regulations would really work. Governance and regulation are extremely complex and have been studied for centuries; it is as crazy to suggest that ordinary citizens construct regulations as to suggest that they construct nuclear weapons, or that random people be drafted to represent a country in the World Cup or the Olympics. We need to respect expertise as well as the desires of ordinary citizens. What the fears and goals of citizens tell us is how easy or hard it would be to implement particular regulations, and how likely they are to be accepted and respected. Most people on the planet believe that their ancestors still have concerns with their own behaviour, but I wouldn’t want to build policy relying on dead ancestors as actors. It does, however, make sense to know that people think this if you are governing.

Q6 WENG: What’s your definition of “ethical robots”? Should we teach them right from wrong?

BRYSON: I think it makes more sense to program than to teach; we have enough trouble teaching citizens and governments to do good. Most actions have ethical consequences, and there is no problem in building AI to recognise and reason about these, but I would never recommend holding the AI morally responsible, even though I think AI will necessarily take actions with moral consequences. As the British EPSRC Principles of Robotics make clear, the responsibility for commercial products lies either with their manufacturer (if they fail to meet an advertised specification) or with their owner or operator (if they are improperly used). There are ethical implementations of AI, for example protecting the privacy of the individual, or limiting the damage an individual can do with an autonomous vehicle, but the AI itself should not be designated the responsible party in any circumstance.

Q7 WENG: Do you believe that Asimov’s Three Laws of Robotics will be guiding principles to ensure the safety and security of human-robot interactions?

BRYSON: Asimov’s laws certainly impact how people think about AI ethics; the first three (of five) Principles of Robotics reflect (and correct) Asimov’s laws. But we are working hard to make people understand that these laws are by no means necessary or sufficient for AI ethics. First of all, Asimov’s laws are intractable: it is computationally impossible to implement them as described in the books, because we can never know all the consequences of our actions. Second, the books themselves show repeatedly that the laws are insufficient to create truly moral behaviour; that’s the fun of the short stories. And third, Asimov, like most science fiction writers, uses AI as a stand-in for humans, for examining what it is to be human. The laws describe the robot as the responsible party, and that, as I said earlier, is a mistake.

 Q8 WENG: Do you agree that robots should not be legal persons? Why?

BRYSON: I don’t know for sure whether people will make them legal persons, but I am certain that they should not, and in the long run will not be. A legal person is something that is subject to human justice, and I do not believe it is possible in a well-designed system to create the kind of suffering that is intrinsic to human punishment. There is no necessity for robots or AI to know shame, pain, or regret for the passage of time. These things are completely intrinsic to being human; we do not consider people with deficits in these concerns to be mentally healthy. That is because we have evolved to be social organisms, and there is no way to extract these properties from our intelligence. But well-designed AI is modular: anything that is deliberately added by a designer can be removed. So that’s one reason it would be incoherent for AI to be a legal person. Another reason is that there is no discrete unit of AI or robots as there is for humans. We know what it is to be a human being, though we are sometimes a little challenged by people who are brain dead or by conjoined twins. But AI cannot be neatly counted like humans, and therefore the idea of taxing a robot like a person is ludicrous. And, again, it is a moral hazard. The companies who decide to employ robotics should be the ones who have legal and economic liability for whatever their robots do. We should not let them wiggle out of their responsibility because of our love for science fiction or our deep desire to construct perfect, obedient, cyber “children”.

Q9 WENG: What is the definition of “well-designed robots” in your opinion?

BRYSON: A robot is well-designed if the company that produces it is willing to take full responsibility for its actions, and further if governments are willing to license it as safe for human society. That second requirement is necessary because some corporations have more resources than many countries, so they may be willing to accept risks that individual citizens would prefer not to be subject to.

Q10 WENG: Finally, what will be the biggest challenge to developing a human-robot co-existence society in the coming decade?

BRYSON: We still have many challenges. I think we will solve dexterity and physical safety before we solve cyber security. I don’t think we have a full grasp yet of what impact we are already having on human society and human autonomy. It’s difficult to measure this because so many things are changing at once; we also have the problems of climate change and of massive increases in human numbers and in access to communication. We are currently in a period of grotesque income inequality and political polarisation. Some people blame these on AI, but in fact the same thing happened only a century ago, and probably contributed to the two world wars and the Great Depression. So getting on top of understanding how we are influencing each other, and how we are compromising our own health along with our planet’s, is essential to answering this question.







Yueh-Hsuan Weng is the Co-founder of ROBOLAW.ASIA Initiatives and Assistant Professor at Frontier Research Institute for Interdisciplinary Sciences, Tohoku University