Robohub.org
 

The infrastructure of life part 1: Safety

by Alan Winfield
26 January 2017




Part 1: Autonomous Systems and Safety

We all rely on machines. All aspects of modern life, from transport to energy, work to welfare, play to politics, depend on a complex infrastructure of physical and virtual systems. How many of us understand how all of this stuff works? Very few, I suspect. But it doesn’t matter, does it? We trust the good men and women (the disgracefully maligned experts) who build, manage and maintain the infrastructure of life. If something goes wrong, they will know why and (we hope) make sure it doesn’t happen again.

All well and good, you might think. But the infrastructure of life is increasingly autonomous – many decisions are now made not by a human but by the systems themselves. When you search for a restaurant near you, the recommendation isn’t made by a human but by an algorithm. Many financial decisions are not made by people but by algorithms, and I don’t just mean city investments – it’s possible that your loan application will be decided by an AI. Machine legal advice is already available, a trend that is likely to grow. And of course, if you take a ride in a driverless car, it is algorithms that decide when the car turns, brakes and so on. I could go on.

 

Algorithmic trading is now commonplace


These are not trivial decisions. They affect lives. The real-world impacts are human and economic, even political (search engine results may well influence how someone votes). In engineering terms, these systems are safety critical. Examples of safety critical systems that we all rely on from time to time include aircraft autopilots and train braking systems. But – and this may surprise you – the rigorous engineering techniques used to prove the safety of such systems are not applied to search engines, automated trading systems, medical diagnosis AIs, assistive living robots, delivery drones, or (I’ll wager) driverless car autopilots.

Why is this? Partly because the field of AI and autonomous systems is moving so fast. But I suspect it has much more to do with an incompatibility between the way we have traditionally designed safety critical systems and the design of modern AI systems. There is, I believe, one key problem: learning. There is a very good reason that current safety critical systems (like aircraft autopilots) don’t learn. Current safety assurance approaches assume that the system being certified will never change, but a system that learns does – by definition – change its behaviour, so any certification is rendered invalid after the system has learned.
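To make that point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – a toy braking controller, a toy `certify` check – not a real certification process. The controller passes its safety check when deployed; a single online "learning" update changes its behaviour, and exactly the same check now fails.

```python
# Toy illustration: certification assumes fixed behaviour; learning changes it.
# All names, numbers and the "learning" rule are invented for this sketch.

class BrakeController:
    """Brakes when an obstacle is closer than a threshold (metres)."""
    def __init__(self, threshold=5.0):
        self.threshold = threshold

    def should_brake(self, distance):
        return distance < self.threshold

    def learn(self, field_reports):
        # A crude online update: lower the threshold to cut the false alarms
        # reported from the field. Real learning is far subtler, but the
        # effect is the same -- the certified behaviour changes.
        false_alarms = [d for d, brake_was_needed in field_reports
                        if not brake_was_needed]
        if false_alarms:
            self.threshold = min(false_alarms)

def certify(controller, test_distances):
    """Safety property: the car must brake whenever distance < 5 m."""
    return all(controller.should_brake(d) for d in test_distances if d < 5.0)

test_distances = [1.0, 2.5, 4.9, 6.0, 10.0]
car = BrakeController()
print(certify(car, test_distances))   # True: certified at deployment

# Field data claims a 4 m obstacle was a false alarm; the system adapts...
car.learn([(4.0, False)])
print(certify(car, test_distances))   # False: the certificate no longer holds
```

The certificate was a statement about one fixed behaviour; the moment the system updates itself, the statement is about a system that no longer exists.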

And as if that were not bad enough, the particular method of learning that has caused such excitement – and rapid progress – in the last few years is based on Artificial Neural Networks (more often these days referred to as Deep Learning). A characteristic of ANNs is that, after the ANN has been trained with datasets, it is in practice impossible to examine its internal structure and understand why and how it makes a particular decision. The decision-making process of an ANN is opaque. AlphaGo’s moves were beautiful but puzzling. We call this the black box problem.
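A toy example makes the opacity tangible. The network below is a tiny two-input neural network that computes XOR ("one input but not both") correctly on every case – yet its "explanation" for any decision is nothing more than a handful of weights. To keep the sketch readable the weights are hand-picked rather than learned; a genuinely trained network’s weights are messier still, and a deep network has millions of them.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 2-2-1 neural network that computes XOR. The weights are hand-picked
# for this illustration; real trained weights would be even less legible.
HIDDEN_W = [[20.0, 20.0], [-20.0, -20.0]]   # hidden-layer weights
HIDDEN_B = [-10.0, 30.0]                    # hidden-layer biases
OUT_W = [20.0, 20.0]                        # output-layer weights
OUT_B = -30.0                               # output-layer bias

def predict(x1, x2):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + b)
         for w, b in zip(HIDDEN_W, HIDDEN_B)]
    return sigmoid(OUT_W[0] * h[0] + OUT_W[1] * h[1] + OUT_B)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(predict(a, b)))   # behaves exactly like XOR

# The full "reason" for each decision is just these numbers -- nothing in
# [[20, 20], [-20, -20]] reads as a rule a human could audit.
print(HIDDEN_W, HIDDEN_B, OUT_W, OUT_B)
```

Even here, with six weights and three biases, the mapping from numbers to behaviour only becomes clear by running the network; there is no rule to inspect.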

Safety critical systems such as aircraft autopilots do not learn


Does this mean we cannot assure the safety of learning autonomous/AI systems at all? No it doesn’t. The problem of safety assurance of systems that learn is hard but not intractable, and is the subject of current research*. The black box problem may be intractable for ANNs, but could be avoided by using approaches to AI that do not use ANNs.

But – here’s the rub. This involves slowing down the juggernaut of autonomous systems and AI development. It means taking a much more cautious and incremental approach, and it almost certainly involves regulation (that, for instance, makes it illegal to run a driverless car unless the car’s autopilot has been certified as safe – and that would require standards that don’t yet exist). Yet the commercial and political pressure is to be more permissive, not less; no country wants to be left behind in the race to cash in on these new technologies.

This is why work toward AI/Autonomous Systems standards is so vital, together with the political pressure to ensure our policymakers fully understand the public safety risks of unregulated AI.

In my next blog post I will describe one current standards initiative, which aims to introduce transparency in AI and Autonomous Systems based on the simple principle that it should always be possible to find out why an AI/AS system made a particular decision.

The next few years of swimming against the tide is going to be hard work. As Luke Muehlhauser writes in his excellent essay on transparency in safety-critical systems: “…there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent”.

*Some, but nowhere near enough. See, for instance, Verifiable Autonomy.

 







Alan Winfield is Professor in robotics at UWE Bristol. He communicates about science on his personal blog.










©2021 - ROBOTS Association


 











