Sooner or later there will be a fatal accident caused by a driverless car. It's not a question of if, but when. What happens immediately following that accident could have a profound effect on the nascent driverless car industry.
Picture the scene. Emergency services are called to attend the accident. A teenage girl on a bicycle, apparently riding along a cycle path, has been hit and killed by a car. The traffic police quickly establish that the car at the centre of the accident was operating autonomously at the moment of the fatal crash. They endeavour to find out what went wrong, but how? Almost certainly the car will have logged data on its behaviour leading up to the moment of the crash - data that is sure to hold vital clues about what caused the accident. But will that data be accessible to the investigating traffic police? And even if it is, will they be able to interpret it?
There are two ways the story could unfold from here.
Scenario 1: unable to investigate the accident themselves, the traffic police decide to contact the manufacturer and ask for help. As it happens, a team from the manufacturer arrives on scene very quickly - it later transpires that the car had 'phoned home' automatically, so the manufacturer knew of the accident within seconds of it taking place. Somewhat nonplussed, the traffic police have little choice but to grant the team full access to the scene of the accident. The manufacturer undertakes its own investigation and - several weeks later - issues a press statement explaining that the AI driving the car was unable to cope with an "unexpected situation" which "regrettably" led to the fatal crash. The company explains that the AI has been upgraded so that the same failure cannot happen again. It also accepts liability for the accident and offers compensation to the child's family. But despite repeated requests, the company declines to share the technical details of what happened with the authorities, claiming that such disclosure would compromise its intellectual property.
A public already fearful of the new technology reacts very badly. Online petitions call for a ban on driverless cars and politicians enact knee-jerk legislation which, although falling short of an outright ban, sets the industry back years.
Scenario 2: the traffic police call the newly established Driverless Car Accident Investigation Branch (DCAB), who send a team of independent experts in driverless car technology, including its AI. The manufacturer's team also arrive, but - under a protocol agreed with the industry - their role is to support DCAB and provide "full assistance, including unlimited access to technical data". Because the data logs stored by the car are in a new industry-standard format, access by DCAB is straightforward, and software tools allow them to interpret those logs quickly. Well aware of public concerns, DCAB provide hourly updates on the progress of their investigation via social media and, within just a few days, call a press conference to explain their findings. They outline the fault with the AI and explain that they will require the manufacturer to recall all affected vehicles and update the AI, after first submitting technical details of the update to DCAB for approval. DCAB will also issue a bulletin to all driverless car manufacturers asking them to check for the same fault in their own systems and report their findings back to DCAB.
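As a purely illustrative aside: no such industry-standard log format exists yet, but a minimal sketch of the kind of record one might contain could look something like the Python below. Every field name and value here is invented for illustration only.

```python
# Hypothetical sketch of a standardised driverless-car event log record.
# Field names are invented for illustration; no such standard exists yet.
import json
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class EventRecord:
    timestamp_utc: str            # time of the logged event
    vehicle_id: str               # anonymised vehicle identifier
    driving_mode: str             # e.g. "autonomous" or "manual"
    speed_mps: float              # vehicle speed in metres per second
    detected_objects: List[str]   # what the perception system reported
    planned_action: str           # what the AI decided to do
    software_version: str         # AI / firmware version in use


# An investigator's tool could parse such records directly,
# without needing proprietary software from the manufacturer.
record = EventRecord(
    timestamp_utc="2016-06-01T14:03:22Z",
    vehicle_id="VEH-0042",
    driving_mode="autonomous",
    speed_mps=12.5,
    detected_objects=["cyclist", "cycle path edge"],
    planned_action="maintain speed",
    software_version="3.1.4",
)

print(json.dumps(asdict(record), indent=2))
```

The point is not these particular fields, but that an open, documented format would let investigators read the evidence without depending on the manufacturer's goodwill or proprietary tools.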
A public fearful of the new technology is reassured by the transparent and robust response of the accident investigation team. Although those fears surface in the press and social media, the umbrella Driverless Car Authority (DCA) are quick to respond with expert commentators and data to show that driverless cars are already safer than manually driven cars.
There are strong parallels between driverless cars and commercial aviation. One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. Commercial aircraft are so safe largely because of very tough safety certification processes and, when things do go wrong, rapid and robust air accident investigation. There are emerging standards for driverless cars: ISO Technical Committee TC 204 on Intelligent Transport Systems already lists 213 standards. There isn't yet a standard for fully autonomous driverless car operation, but see, for instance, ISO 11270:2014 on Lane keeping assistance systems (LKAS). But standards need teeth, which is why we need standards-based certification processes for driverless cars, managed by regulatory authorities - a driverless car equivalent of the FAA. In short, a governance framework for driverless cars.
Postscript: several people have emailed or tweeted me to complain that I seem to be against driverless cars - nothing could be further from the truth. I am a strong advocate of driverless cars for many reasons: first, and most importantly, because they will save lives; second, because they should lead to a reduction in the number of vehicles on the road, making our cities greener; and third, because they might just cure us of our unhealthy obsession with personal car ownership. My big worry is that none of these benefits will flow if driverless cars are not trusted. But trust in technology doesn't happen by magic and, in the early days, serious setbacks and a public backlash could set the nascent driverless car industry back years (think of GM foods in the EU). One way to counter such a backlash and build trust is to put in place robust and transparent governance, as I have tried (not very well, it seems) to argue in this post.