Thursday, March 31, 2016

It's only a matter of time

Sooner or later there will be a fatal accident caused by a driverless car. It's not a question of if, but when. What happens immediately following that accident could have a profound effect on the nascent driverless car industry.

Picture the scene. Emergency services are called to attend the accident: a teenage girl on a bicycle, apparently riding along a cycle path, has been hit and killed by a car. The traffic police quickly establish that the car at the centre of the accident was operating autonomously at the moment of the fatal crash. They endeavour to find out what went wrong, but how? Almost certainly the car will have logged data on its behaviour leading up to the moment of the crash - data that is sure to hold vital clues about what caused the accident - but will that data be accessible to the investigating traffic police? And even if it is, will the investigators be able to interpret it?

There are two ways the story could unfold from here.

Scenario 1: unable to investigate the accident themselves, the traffic police decide to contact the manufacturer and ask for help. As it happens a team from the manufacturer arrives on scene very quickly - it later transpires that the car had 'phoned home' automatically, so the manufacturer knew of the accident within seconds of it taking place. Somewhat nonplussed, the traffic police have little choice but to grant them full access to the scene of the accident. The manufacturer undertakes its own investigation and - several weeks later - issues a press statement explaining that the AI driving the car was unable to cope with an "unexpected situation" which "regrettably" led to the fatal crash. The company explain that the AI has been upgraded so that the same fault cannot recur. They also accept liability for the accident and offer compensation to the child's family. Despite repeated requests the company declines to share the technical details of what happened with the authorities, claiming that such disclosure would compromise its intellectual property.

A public already fearful of the new technology reacts very badly. Online petitions call for a ban on driverless cars and politicians enact knee-jerk legislation which, although falling short of an outright ban, sets the industry back years.

Scenario 2: the traffic police call the newly established driverless car accident investigation branch (DCAB), who send a team of independent experts on driverless car technology, including its AI. The manufacturer's team also arrive, but - under a protocol agreed with the industry - their role is to support DCAB and provide "full assistance, including unlimited access to technical data". In fact the data logs stored by the car are in a new industry standard format, so access by DCAB is straightforward; software tools allow them to quickly interpret those logs. Well aware of public concerns, DCAB provide hourly updates on the progress of their investigation via social media and, within just a few days, call a press conference to explain their findings. They outline the fault with the AI and explain that they will require the manufacturer to recall all affected vehicles and update the AI, after submitting technical details of the update to DCAB for approval. DCAB will also issue a notice to all driverless car manufacturers asking them to check for the same fault in their own systems and to report their findings back to DCAB.
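None of this machinery exists yet - the standard log format, the DCAB and its tooling are all hypothetical - but a minimal sketch of what a machine-readable event record and parser might look like, assuming a simple JSON-lines format and invented field names, is:

```python
import json
from dataclasses import dataclass

@dataclass
class LogEvent:
    """One timestamped record from a (hypothetical) standard accident data log."""
    timestamp_ms: int        # milliseconds since start of journey
    speed_kmh: float         # vehicle speed at this instant
    steering_deg: float      # steering angle, positive = right
    ai_decision: str         # high-level action chosen by the driving AI
    obstacle_detected: bool  # did the perception system flag an obstacle?

def parse_log(raw: str) -> list[LogEvent]:
    """Parse a JSON-lines log dump into structured events, oldest first."""
    events = [LogEvent(**json.loads(line)) for line in raw.splitlines() if line.strip()]
    return sorted(events, key=lambda e: e.timestamp_ms)

# A two-event dump such as investigators might extract from the vehicle:
dump = "\n".join([
    json.dumps({"timestamp_ms": 1200, "speed_kmh": 48.0, "steering_deg": 0.0,
                "ai_decision": "maintain_lane", "obstacle_detected": False}),
    json.dumps({"timestamp_ms": 1450, "speed_kmh": 47.5, "steering_deg": -2.0,
                "ai_decision": "emergency_brake", "obstacle_detected": True}),
])

events = parse_log(dump)
# Find when the obstacle was first flagged by the perception system:
first_detection = next(e for e in events if e.obstacle_detected)
print(first_detection.timestamp_ms)  # 1450
```

The point of a shared schema like this is that any accredited investigator - not just the manufacturer - can read the record and reconstruct the timeline without proprietary tools.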

A public fearful of the new technology is reassured by the transparent and robust response of the accident investigation team. Although those fears surface in the press and social media, the umbrella Driverless Car Authority (DCA) are quick to respond with expert commentators and data to show that driverless cars are already safer than manually driven cars.

There are strong parallels between driverless cars and commercial aviation. One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. The reason commercial aircraft are so safe is largely down to the very tough safety certification processes and, when things do go wrong, the rapid and robust processes of air accident investigation. There are emerging standards for driverless cars: ISO Technical Committee TC 204 on Intelligent Transport Systems already lists 213 standards. There isn't yet a standard for fully autonomous driverless car operation, but see for instance ISO 11270:2014 on Lane keeping assistance systems (LKAS). But standards need teeth, which is why we need standards-based certification processes for driverless cars managed by regulatory authorities - a driverless car equivalent of the FAA. In short, a governance framework for driverless cars.

Postscript: several people have emailed or tweeted me to complain that I seem to be anti driverless cars - nothing could be further from the truth. I am a strong advocate of driverless cars for many reasons, first and most importantly because they will save lives, second because they should lead to a reduction in the number of vehicles on the road - thus making our cities greener, and third because they might just cure humans of our unhealthy obsession with personal car ownership. My big worry is that none of these benefits will flow if driverless cars are not trusted. But trust in technology doesn't happen by magic and, in the early days, serious setbacks and a public backlash could set the nascent driverless car industry back years (think of GM foods in the EU). One way to counter such a backlash and build trust is to put in place robust and transparent governance as I have tried (not very well it seems) to argue in this post.


  1. I know it's a sub-issue, but finding fault vs finding a fault in an AI system is worthy of its own post. The simplest aspect would be establishing what data was captured by the system, and comparing how a reference "control" AI would have reacted against the car's actual AI response.

    The "authorities" initially on scene should only be the first part of the evidence chain, with no direct role in examination. There are many examples of police "gaining access" to a person's cell phone as part of a wider investigation, and the current search and seizure laws can barely deal with that.

    We also have examples of manufacturers, such as Toyota, that try to limit access to collected data while, in the same scenario, a wide range of "outside experts" attempt to make sense of it all.

    In your example, the car is impounded with digital forensics work to be done at a later time by other persons. Not addressed is the fate of the passenger(s) of said car. Could they have done something that led to the accident? Would any electronics (tablets/laptops, etc) need to be confiscated as part of the investigation?

    And what of 3rd party ownership? Smart cars are often touted as reducing the need for private ownership, so would Uber or even a city transit department need to be part of the accountability loop? As with planes, how does the car's service and maintenance record come into play? Did the car just install a software patch (which could lead to a larger safety recall/rollback)?

    Unfortunately, the least important aspect of the accident is going to drive most of the conversation, and likely near-term decisions: public perception.

    1. Thank you for your comments. You touch upon many important questions here. In response to your first point - yes I'm working on a paper on Safety Critical AI.

      And yes, I hardly scratched the surface of the kinds of protocols and processes that would be needed to investigate a driverless car accident, let alone the complications of ownership or insurance models. There's a huge amount of work that needs to be done to develop the standards, then the processes of certification, etc. I very much hope this work is going on.

      As you say, public perception is key. That really is the point of my article. Confidence and trust in a new technology doesn't happen overnight. Robust and transparent governance, to assure the safety of the technology, is key to building that trust.

    2. Hello Alan, are you any closer with the paper on Safety Critical AI? Has it been released yet?