 

Thoughts on the EU’s draft report on robotics


by Alan Winfield
08 May 2017




NAO robot. Photo courtesy: Paul Bremner/UWE

I was asked to write a short op-ed on the European Parliament Law Committee’s recommendations on civil law rules for robotics. In the end, the piece didn’t get published, so I am posting it here:


It is a great shame that most reports of the European Parliament's Committee on Legal Affairs' vote on its Draft Report on Civil Law Rules on Robotics led with ‘personhood’ for robots, because the report has much else to commend it. Most important among its several recommendations is a proposed code of ethical conduct for roboticists, which explicitly asks designers to research and innovate responsibly. Some may wonder why such an invitation even needs to be made but, given that engineering and computer science education rarely includes classes on ethics (it should), it is really important that robotics engineers reflect on their ethical responsibilities to society – especially given how disruptive robot technologies are. This is not new – great frameworks for responsible research and innovation already exist. One such is the 2014 Rome Declaration on RRI, and in 2015 the Foundation for Responsible Robotics was launched.

Within the report’s draft Code of Conduct is a call for robotics funding proposals to include a risk assessment. This too is a very good idea, and guidance already exists in British Standard BS 8611, published in April 2016, which sets out a comprehensive set of ethical risks and offers advice on how to mitigate them. It is also very good to see that the Code stresses that humans, not robots, are the responsible agents; this is something we regarded as fundamental when we drafted the Principles of Robotics in 2010.

For me, transparency (or the lack of it) is an increasing worry in both robots and AI systems. Labour’s industry spokesperson Chi Onwurah is right to say, “Algorithms are part of our world, so they are subject to regulation, but because they are not transparent, it’s difficult to regulate them effectively” (and don’t forget that it is algorithms that make intelligent robots intelligent). So it is very good to see the draft Code call for robotics engineers to “guarantee transparency … and right of access to information by all stakeholders”, and then, in the draft ‘Licence for Designers’, the requirements that designers should ensure “maximal transparency” and, even more welcome, that they “should develop tracing tools that … facilitate accounting and explanation of robotic behaviour … for experts, operators and users”. Within the IEEE Standards Association Global Initiative on Ethics in AI and Autonomous Systems, launched in 2016, we are working on a new standard on Transparency in Autonomous Systems.

This brings me to standards and regulation. I am absolutely convinced that regulation, together with transparency and public engagement, builds public trust. Why is it that we trust our tech? Not just because it’s cool and convenient, but also because it’s safe (and we assume that the disgracefully maligned experts will take care of assuring that safety). One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. The reason commercial aircraft are so safe is not just good design; it is also the tough safety certification processes and, when things do go wrong, the robust processes of air accident investigation. So the Report’s call for a European Agency for Robotics and AI to recommend standards and a regulatory framework is, as far as I’m concerned, not a moment too soon. We urgently need standards for the safety certification of a wide range of robots, from drones and driverless cars to robots for care and assisted living.

Like many of my robotics colleagues, I am deeply worried by the potential for robotics and AI to increase levels of economic inequality in the world. Winnie Byanyima, executive director of Oxfam, writes for the WEF: “We need fundamental change to our economic model. Governments must stop hiding behind ideas of market forces and technological change. They … need to steer the direction of technological development”. I think she is right – we need a serious public conversation about technological unemployment and how we ensure that the wealth created by AI and Autonomous Systems is shared by all. A Universal Basic Income may or may not be the best way to do this – but it is very encouraging to see this question raised in the draft Report.

I cannot close this piece without at least mentioning artificial personhood. My own view is that personhood is the solution to a problem that doesn’t exist. I can understand why, in the context of liability, the Report raises this question for discussion, but – as the report itself asserts later in the Code of Conduct – humans, not robots, are the responsible agents. Robots are, and should remain, artefacts.







Alan Winfield is Professor in robotics at UWE Bristol. He communicates about science on his personal blog.







