Sunday, January 01, 2017

The infrastructure of life 1 - Safety

Part 1: Autonomous Systems and Safety

We all rely on machines. All aspects of modern life, from transport to energy, work to welfare, play to politics depend on a complex infrastructure of physical and virtual systems. How many of us understand how all of this stuff works? Very few I suspect. But it doesn't matter, does it? We trust the good men and women (the disgracefully maligned experts) who build, manage and maintain the infrastructure of life. If something goes wrong they will know why. And (we hope) make sure it doesn't happen again.

All well and good you might think. But the infrastructure of life is increasingly autonomous - many decisions are now made not by a human but by the systems themselves. When you search for a restaurant near you the recommendation isn't made by a human, but by an algorithm. Many financial decisions are not made by people but by algorithms; and I don't just mean city investments - it's possible that your loan application will be decided by an AI. Machine legal advice is already available; a trend that is likely to increase. And of course if you take a ride in a driverless car, it is algorithms that decide when the car turns, brakes and so on. I could go on.

These are not trivial decisions. They affect lives. The real-world impacts are human and economic, even political (search engine results may well influence how someone votes). In engineering terms these systems are safety critical. Examples of safety critical systems that we all rely on from time to time include aircraft autopilots and train braking systems. But - and this may surprise you - the demanding engineering techniques used to prove the safety of such systems are not applied to search engines, automated trading systems, medical diagnosis AIs, assistive living robots, delivery drones, or (I'll wager) driverless car autopilots.

Why is this? Well, it's partly because the field of AI and autonomous systems is moving so fast. But I suspect it has much more to do with an incompatibility between the way we have traditionally designed safety critical systems and the design of modern AI systems. There is, I believe, one key problem: learning. There is a very good reason that current safety critical systems (like aircraft autopilots) don't learn: current safety assurance approaches assume that the system being certified will never change, but a system that learns does - by definition - change its behaviour, so any certification is rendered invalid after the system has learned.
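The certification problem described above can be caricatured in a few lines of code. This is a hypothetical sketch - real safety certification is vastly richer than a checksum - but it shares the key assumption that the certified artefact never changes:

```python
import hashlib
import json

def certify(model_params):
    """Issue a 'certificate' tied to the exact frozen parameters.

    Hypothetical sketch: real certification is a far richer process,
    but it likewise assumes the certified system is fixed.
    """
    blob = json.dumps(model_params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def is_certified(model_params, certificate):
    """The certificate is only valid for the exact parameters it was issued against."""
    return certify(model_params) == certificate

# A frozen controller is certified... (weights are invented for illustration)
params = {"w1": 0.42, "w2": -1.3}
cert = certify(params)
assert is_certified(params, cert)

# ...but a single learning step in the field changes the parameters,
# and the certificate no longer applies.
params["w1"] += 0.01
assert not is_certified(params, cert)
```

The point of the sketch is only this: any assurance argument anchored to a fixed system is voided the moment the system modifies itself.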

And as if that were not bad enough, the particular method of learning which has caused such excitement - and rapid progress - in the last few years is based on Artificial Neural Networks (more often these days referred to as Deep Learning). A characteristic of ANNs is that, once the network has been trained with datasets, examining its internal structure in order to understand why and how it makes a particular decision is effectively impossible. The decision-making process of an ANN is opaque. AlphaGo's moves were beautiful but puzzling. We call this the black box problem.
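To make the black box problem concrete, here is a toy sketch. The network, its random weights and the "brake" interpretation are all invented for illustration - but the moral holds for real deployed networks, which have millions of such weights: every parameter is open to inspection, yet the numbers offer no human-readable rationale for any individual decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network (a stand-in for a trained ANN).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def decide(x):
    """Feed a sensor reading forward and pick the higher-scoring action."""
    hidden = np.tanh(x @ W1)
    scores = hidden @ W2
    return int(np.argmax(scores))   # 0 = "brake", 1 = "don't brake"

x = np.array([0.9, -0.2, 0.4, 0.1])   # some (invented) sensor reading
decision = decide(x)

# We can inspect every weight...
print(W1.shape, W2.shape)   # prints: (4, 8) (8, 2)
# ...but nothing in those 48 numbers explains *why* this input
# produced this decision. That is the black box problem.
```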

Does this mean we cannot assure the safety of learning autonomous/AI systems at all? No it doesn't. The problem of safety assurance of systems that learn is hard but not intractable, and is the subject of current research*. The black box problem may be intractable for ANNs, but could be avoided by using approaches to AI that do not use ANNs.
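By contrast, the non-ANN approaches hinted at above can wear their reasoning on their sleeve. A hypothetical sketch of a transparent rule-based controller (the braking figures are invented, not real vehicle data): every decision can be traced to a named rule, which is exactly the auditability an opaque learned controller lacks.

```python
def braking_decision(distance_m, speed_mps):
    """A transparent rule-based controller (hypothetical sketch).

    Returns an action together with the rule that produced it, so
    every decision is traceable after the fact.
    """
    # Assumed constant braking deceleration of 6 m/s^2 (illustrative only).
    stopping_distance = speed_mps ** 2 / (2 * 6.0)
    if distance_m < stopping_distance:
        return "brake", "rule 1: obstacle inside stopping distance"
    if distance_m < 2 * stopping_distance:
        return "slow", "rule 2: obstacle inside safety margin"
    return "cruise", "rule 3: clear road"

action, reason = braking_decision(distance_m=20.0, speed_mps=15.0)
print(action, "--", reason)   # prints: slow -- rule 2: obstacle inside safety margin
```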

But - here's the rub. This involves slowing down the juggernaut of autonomous systems and AI development. It means taking a much more cautious and incremental approach, and it almost certainly involves regulation (that, for instance, makes it illegal to run a driverless car unless the car's autopilot has been certified as safe - and that would require standards that don't yet exist). Yet the commercial and political pressure is to be more permissive, not less; no country wants to be left behind in the race to cash in on these new technologies.

This is why work toward AI/Autonomous Systems standards is so vital, together with the political pressure to ensure our policymakers fully understand the public safety risks of unregulated AI.

In my next blog post I will describe one current standards initiative aimed at introducing transparency in AI and Autonomous Systems, based on the simple principle that it should always be possible to find out why an AI/AS system made a particular decision.

The next few years of swimming against the tide is going to be hard work. As Luke Muehlhauser writes in his excellent essay on transparency in safety-critical systems, "...there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent".

*some, but nowhere near enough. See for instance Verifiable Autonomy.

Related blog posts:
Ethically Aligned Design
How do we trust our Robots?
It's only a matter of time


  1. Your whole premise on ANNs is based on the false assumption that learning is applied in the field. An AI plane can be certified because the provable output of a round of learning is then frozen for the release. Data from the field can be taken and fed in as a new round of learning, which is then tested, proved and released. It can learn all the time, but the learned behaviour does not need to be modified and applied on the fly.

    You've made another assumption that AI development is being pushed forward haphazardly without any supporting evidence that that is the case at all. In fact the reality is that scientists are very cautious and seek to test and validate everything.

    This whole piece is just spreading fear of the unknown. You use 'beautiful but puzzling' moves as an example of "the problem" when actually they are the benefit. If my car brakes in an unconventional fashion that allows it to stop in half the distance and avoid hitting a pedestrian then that is a good thing.

    At the end of the day AI is still just a computer following instructions set by a human. These CAN be validated, they CAN be limited, they WON'T uprise. You cry about the dangers of unregulated AI, but just as with any program or decision that a person makes, there are already consequences and rules against recklessness and negligence.

    1. Many thanks Ben for your feedback.

      First, I agree that we could train the ANN offline - that solves the learning problem, but doesn't address the black box problem.

      I might agree with you regarding cautious scientists developing AI, but much AI development is now being done in the private sector, by large companies like Amazon and Google as well as by many startups.

      I am not spreading fear of the unknown - just exercising the proper caution of an engineer who has spent much of his professional life researching and developing safety systems. And personally I would be very worried by a car that brakes in half the distance if we don't understand why it does that.

      And I am certainly not concerned by an AI uprising - see my other blog posts on this subject. I am in fact an AI enthusiast, not a naysayer, but I worry that without standards and regulation AI and Autonomous Systems will fail to win public trust, and the benefits of this amazing technology will not follow.