Icelandic research institute unveils ethical robotics policy
The Icelandic Institute of Intelligent Machines (IIIM) has become the first R&D centre in the world to adopt a policy that repudiates development of robotic technologies intended for military operations.
The IIIM’s new ethics policy has been unanimously agreed by its staff and Board of Directors and came into force at the end of last month. It aims for the peaceful use of artificial intelligence and draws a firm line against collaboration with any organisation “even partially funded by military means within the last five years.”
“It is only fitting that a research centre in Iceland should field such a policy – a nation without a standing army and virtually no history of war in its 1100 years”, said Kristinn R. Thórisson, IIIM’s Managing Director.
“Like any other technology, AI can be abused at everyone’s expense, escalating the dangers associated with tensions between groups, governments, and nations. Researchers stand at the threshold of new technology; they should actively participate by preventing the abuse of knowledge they produce. This is, in essence, what we are doing with our new policy.”
It’s clear that the non-profit IIIM has launched its policy not only to publicise its anti-military stance, but also to mobilise other researchers and R&D centres to take similar action.
At least 87 countries are now known to use military robotics of some sort and the IIIM’s Ethics Policy joins a growing opposition to what is seen by many as an inevitable development.
Many experts and various pressure groups, including the Campaign to Stop Killer Robots, adamantly and publicly oppose autonomous weapons.
Last May, a meeting was organised at the UN to assess the ethical and sociological questions that arise from their development and deployment, as well as the adequacy and challenges to international law.
In July, the Future of Life Institute released an open letter calling for a ban on autonomous weapons. Stephen Hawking, Elon Musk and Steve Wozniak were among the prominent names on the list of signatories.
There are, as yet, no agreements or even proposals to ban autonomous weapons, but discussions in the UN are ongoing.
Whether or not others will follow the IIIM’s lead remains to be seen, but the institute has undoubtedly made a brave decision in turning away from the billions of dollars in contracts that flow through the military weapons industry.
The moral, ethical and legal arguments against weapons that can make decisions about whom to kill will rumble on. But the rapid pace of change in robotics and AI calls for safeguards and controls to be put in place before the technology reaches fruition, so the IIIM’s decision is surely a step in the right direction.