Robohub.org
 

The infrastructure of life part 1: Safety


by Alan Winfield
26 January 2017




Part 1: Autonomous Systems and Safety

We all rely on machines. All aspects of modern life, from transport to energy, work to welfare, play to politics, depend on a complex infrastructure of physical and virtual systems. How many of us understand how all of this stuff works? Very few, I suspect. But it doesn’t matter, does it? We trust the good men and women (the disgracefully maligned experts) who build, manage and maintain the infrastructure of life. If something goes wrong they will know why, and (we hope) make sure it doesn’t happen again.

All well and good, you might think. But the infrastructure of life is increasingly autonomous; many decisions are now made not by a human but by the systems themselves. When you search for a restaurant near you, the recommendation isn’t made by a human but by an algorithm. Many financial decisions are not made by people but by algorithms, and I don’t just mean city investments: it’s possible that your loan application will be decided by an AI. Machine legal advice is already available, a trend that is likely to increase. And of course, if you take a ride in a driverless car, it is algorithms that decide when the car turns, brakes and so on. I could go on.

 

Algorithmic trading is now commonplace

These are not trivial decisions. They affect lives. The real-world impacts are human and economic, even political (search engine results may well influence how someone votes). In engineering terms, these systems are safety critical. Examples of safety critical systems that we all rely on from time to time include aircraft autopilots and train braking systems. But, and this may surprise you, the difficult engineering techniques used to prove the safety of such systems are not applied to search engines, automated trading systems, medical diagnosis AIs, assistive living robots, delivery drones, or (I’ll wager) driverless car autopilots.

Why is this? Well, it’s partly because the field of AI and autonomous systems is moving so fast. But I suspect it has much more to do with an incompatibility between the way we have traditionally designed safety critical systems and the design of modern AI systems. There is, I believe, one key problem: learning. There is a very good reason that current safety critical systems (like aircraft autopilots) don’t learn. Current safety assurance approaches assume that the system being certified will never change, but a system that learns does, by definition, change its behaviour, so any certification is rendered invalid once the system has learned.
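To make that incompatibility concrete, here is a minimal sketch in Python. It is my own illustration, with entirely invented parameter names and values, not any real certification process: the certificate is bound to an exact, frozen snapshot of the system, so even a tiny parameter update made by on-line learning leaves the certificate describing a system that no longer exists.

import hashlib
import json

def fingerprint(parameters: dict) -> str:
    """Hash the controller's parameters; the safety certificate refers to this exact value."""
    canonical = json.dumps(parameters, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# The snapshot that was verified and certified (illustrative values only).
certified_params = {"gain_p": 0.8, "gain_i": 0.05, "brake_threshold_m": 5.0}
certificate = {"id": "CERT-001", "fingerprint": fingerprint(certified_params)}

def may_operate(current_params: dict, cert: dict) -> bool:
    """Refuse to run the controller if it no longer matches the certified snapshot."""
    return fingerprint(current_params) == cert["fingerprint"]

# In service the system "learns": even one small parameter update breaks the match.
learned_params = dict(certified_params, brake_threshold_m=4.9)

print(may_operate(certified_params, certificate))   # True: this is the system the certificate describes
print(may_operate(learned_params, certificate))     # False: the certification no longer applies

The point of the sketch is not the hashing but the binding: certification is a statement about one fixed behaviour, and learning changes that behaviour out from under it.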

And as if that were not bad enough, the particular method of learning that has caused such excitement, and such rapid progress, in the last few years is based on Artificial Neural Networks (more often these days referred to as Deep Learning). A characteristic of ANNs is that, once the ANN has been trained with datasets, examining its internal structure tells us essentially nothing about why or how it makes a particular decision. The decision-making process of an ANN is opaque: AlphaGo’s moves were beautiful but puzzling. We call this the black box problem.
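Here is a toy illustration of that opacity (again my own sketch, with arbitrary numbers, nothing to do with AlphaGo itself): the only artefact a trained ANN offers for any given decision is arithmetic over learned weight matrices, and inspecting those weights yields coefficients rather than reasons.

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for weights produced by training on some dataset (the values here are arbitrary).
W1 = rng.normal(size=(4, 8))   # input features -> hidden layer
W2 = rng.normal(size=8)        # hidden layer   -> single output

def decide(x: np.ndarray) -> float:
    """The 'decision' is nothing but nested matrix products and nonlinearities."""
    hidden = np.tanh(x @ W1)
    logit = hidden @ W2                       # one number, computed from 40 learned coefficients
    return float(1.0 / (1.0 + np.exp(-logit)))

x = np.array([0.2, -1.3, 0.7, 0.05])          # some input: sensor readings, pixels, whatever
print(f"decision score: {decide(x):.3f}")

# "Opening the box" gives only numbers, none of which carries an individual, human-readable meaning:
print(W1.round(2))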

Safety critical systems such as aircraft autopilots do not learn

Does this mean we cannot assure the safety of learning autonomous/AI systems at all? No, it doesn’t. The problem of safety assurance of systems that learn is hard but not intractable, and it is the subject of current research*. The black box problem may be intractable for ANNs, but it could be avoided by using approaches to AI that do not use ANNs.

But here’s the rub: this means slowing down the juggernaut of autonomous systems and AI development. It means taking a much more cautious and incremental approach, and it almost certainly involves regulation (making it illegal, for instance, to run a driverless car unless its autopilot has been certified as safe, which would require standards that don’t yet exist). Yet the commercial and political pressure is to be more permissive, not less; no country wants to be left behind in the race to cash in on these new technologies.

This is why work toward AI/Autonomous Systems standards is so vital, together with the political pressure to ensure our policymakers fully understand the public safety risks of unregulated AI.

In my next blog post I will describe one current standards initiative towards introducing transparency in AI and Autonomous Systems, based on the simple principle that it should always be possible to find out why an AI/AS system made a particular decision.
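As a rough sketch of what that principle could look like at the software level (my own illustration, not the initiative itself), one ingredient is simply that every decision is recorded together with its inputs and the rule or rationale behind it, so the question “why did it do that?” has somewhere to be answered:

import datetime
import json

decision_log = []

def log_decision(inputs: dict, decision: str, reason: str) -> None:
    """Record what the system saw, what it chose, and why, so the decision can be audited later."""
    decision_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,
        "decision": decision,
        "reason": reason,
    })

# Hypothetical example: a driverless car's braking rule records its own rationale.
obstacle_distance_m = 4.2
if obstacle_distance_m < 5.0:
    log_decision(
        inputs={"obstacle_distance_m": obstacle_distance_m},
        decision="apply_brakes",
        reason="obstacle closer than the 5 m safety threshold",
    )

print(json.dumps(decision_log, indent=2))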

Swimming against the tide over the next few years is going to be hard work. As Luke Muehlhauser writes in his excellent essay on transparency in safety-critical systems, “…there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent”.

*Some, but nowhere near enough. See, for instance, Verifiable Autonomy.

 







Alan Winfield is Professor in robotics at UWE Bristol. He communicates about science on his personal blog.




