Robohub.org
 

Isaac Asimov’s 3 laws of AI – updated


by Frank Tobe
05 September 2017




In an op-ed in The New York Times, and in a TED Talk late last year, Oren Etzioni, PhD, author and CEO of the Allen Institute for Artificial Intelligence, suggested an update to Isaac Asimov's three laws of robotics for the age of artificial intelligence. Given the widespread media attention generated by Elon Musk's (and others') warnings, these updates are worth reviewing.

The Warnings

In an open letter, a group of specialists from 26 nations, led by Elon Musk, called on the United Nations to ban the development and use of autonomous weapons. The signatories included Musk, DeepMind co-founder Mustafa Suleyman, and more than 100 other leaders of robotics and artificial-intelligence companies. They write that AI technology has reached a point where the deployment of autonomous weapons is feasible within years, not decades, and many in the defense industry say that autonomous weapons will be the third revolution in warfare, after gunpowder and nuclear arms.

Another more political warning was recently broadcast on VoA: Russian President Vladimir Putin, speaking to a group of Russian students, called artificial intelligence “not only Russia’s future but the future of the whole of mankind… The one who becomes the leader in this sphere will be the ruler of the world. There are colossal opportunities and threats that are difficult to predict now.”

Asimov’s Three Rules

Isaac Asimov's 1942 short story “Runaround” described a government Handbook of Robotics (dated 2058) containing the following three rules:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Etzioni’s Updated Rules

Etzioni has updated those three rules in his NY Times op-ed piece to:

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

Etzioni offered these updates to begin a discussion that could lead to a non-fictional Handbook of Robotics issued by the United Nations, and sooner than the sci-fi date of 2058: one that would regulate, but not thwart, the already growing global AI business.

And growing it is!

China’s Artificial Intelligence Manifesto

China has recently announced its long-term goal of becoming #1 in A.I. by 2030. It plans to grow its A.I. industry to over $22 billion by 2020, $59 billion by 2025 and $150 billion by 2030. China did the same type of long-term strategic planning for robotics, aiming to build an in-country industry and to transform the country from a low-cost labor source into a high-tech manufacturing resource… and it's working.

With this major strategic long-term AI push, China is looking to rival U.S. market leaders such as Alphabet/Google, Apple, Amazon, IBM and Microsoft. China is keen not to be left behind in a technology that is increasingly pivotal — from online commerce to self-driving vehicles to energy to consumer products. China aims to catch up by solving issues including a lack of high-end computer chips, software that writes software, and trained personnel. Beijing will play a big role in policy support and regulation as well as providing and funding research, incentives and tax credits.

Premature or not, the time is now

Many in AI and robotics feel that the present state of AI development, including recent improvements in machine learning and deep learning methods, is primitive and decades away from independent thinking. Siri and Alexa, as fun and capable as they are, are still programmed by humans and cannot initiate a conversation or truly understand the content of one. Nevertheless, many people sense what may become possible in a future where an artificial intelligence decides what 'it' thinks is best for us. Consequently, global regulation can't hurt.




Frank Tobe is the owner and publisher of The Robot Report, and is also a panel member for Robohub's Robotics by Invitation series.





©2025.05 - Association for the Understanding of Artificial Intelligence


 










