

Part 2: Autonomous Systems and Transparency

With machine intelligence emerging as an essential tool in many aspects of modern life, Alan Winfield discusses autonomous systems, safety and regulation.

In my previous post I argued that a wide range of AI and autonomous systems (from now on I will just use AS as shorthand for both) should be regarded as safety critical. I include both autonomous software AI systems and hard (embodied) AIs such as robots, drones and driverless cars. Many will be surprised that I include apparently harmless systems such as search engines in the soft AI category. Of course no-one is seriously inconvenienced when Amazon makes a silly book recommendation, but consider the effect on very large groups of people. If a truth (such as global warming) is, because of accidental or willful manipulation, presented as false, and that falsehood is believed by a very large number of people, then serious harm to the planet (and to us humans who depend on it) could result.


