Tuesday, September 17, 2019

What's the worst that could happen? Why we need robot/AI accident investigation.

Robots. What could possibly go wrong?

Imagine that your elderly mother, or grandmother, has an assisted living robot to help her live independently at home. The robot is capable of fetching her drinks, reminding her to take her medicine and keeping in touch with family. Then one afternoon you get a call from a neighbour, who has called round and found your grandmother collapsed on the floor. When the paramedics arrive they find the robot wandering around apparently aimlessly. One of its functions is to call for help if your grandmother stops moving, but it seems the robot failed to do this.

Fortunately your grandmother recovers, but the doctors find bruising on her legs consistent with the robot running into them. Not surprisingly you want to know what happened: did the robot cause the accident? Or, if it didn't, did it make matters worse? And why did it fail to raise the alarm?

Although this is a fictional scenario, it could happen today. If it did, you would be totally reliant on the goodwill of the robot's manufacturer to discover what went wrong. Even then you might not get the answers you seek; it is entirely possible that neither the robot nor the company that made it is equipped with the tools and processes needed to undertake an investigation.

Right now there are no established processes for robot accident investigation. 

Of course accidents happen, and that is just as true for robots as for any other machinery [1].

Finding statistics is tough, but this web page lists serious accidents involving industrial robots in the US since the mid-1980s. Driverless car fatalities, of course, make the headlines: there have been five (that we know about) since 2016. But we have next to no data on accidents in human–robot interaction (HRI), that is, with robots designed to interact directly with humans. Here is one, involving a security robot, that happened to be reported.

But a Responsible Roboticist must be interested in *all* accidents, whether serious or not. We should also be very interested in near misses; these are taken *very* seriously in aviation [2], and there is good evidence that reporting near misses improves safety.

So I am very excited to introduce our five-year EPSRC-funded project RoboTIPS: Responsible Robots for the Digital Economy. Led by Professor Marina Jirotka at the University of Oxford, RoboTIPS is, we believe, the first project to systematically study how to investigate accidents involving social robots.

So what are we doing in RoboTIPS?

First we will look at the technology needed to support accident investigation.

In a paper published two years ago, Marina and I argued the case for an Ethical Black Box (EBB) [3]. Our proposition is very simple: all robots (and some AIs) should be equipped, by law, with a standard device that continuously records a time-stamped log of the system's internal state, key decisions, and sampled input or sensor data; in effect, the robot equivalent of an aircraft flight data recorder. Without such a device, finding out what the robot was doing, and why, in the moments leading up to an accident is more or less impossible. In RoboTIPS we will be developing and testing a model EBB for social robots.
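To make the proposition concrete, here is a minimal sketch, in Python, of what such a recorder might look like. The class name EthicalBlackBox, its methods, and the record fields are illustrative assumptions for this post, not the actual RoboTIPS design (which would, among other things, need tamper-evident storage and a standardised record format):

```python
import json
import time
from collections import deque

class EthicalBlackBox:
    """Illustrative sketch of an Ethical Black Box (EBB): a bounded,
    append-only log of time-stamped robot records, analogous to an
    aircraft flight data recorder. All names here are hypothetical."""

    def __init__(self, capacity=100_000):
        # A ring buffer: once capacity is reached the oldest records
        # are overwritten, so the EBB always holds the most recent
        # window of history.
        self._records = deque(maxlen=capacity)

    def record(self, kind, payload):
        # kind: e.g. "state", "decision" or "sensor", matching the
        # three categories above; payload: a JSON-serialisable dict.
        self._records.append({
            "t": time.time(),   # time stamp
            "kind": kind,
            "payload": payload,
        })

    def dump(self, path):
        # Export the log for an accident investigator, oldest first.
        with open(path, "w") as f:
            for rec in self._records:
                f.write(json.dumps(rec) + "\n")

# Example: logging the moments an investigator would need to
# reconstruct the accident in the scenario above.
ebb = EthicalBlackBox()
ebb.record("sensor", {"lidar_min_range_m": 0.21, "battery_pct": 64})
ebb.record("decision", {"action": "navigate_to_kitchen", "reason": "fetch_drink"})
ebb.record("state", {"mode": "moving", "velocity_mps": 0.4})
ebb.dump("ebb_log.jsonl")
```

The design choice mirrored here is the bounded ring buffer: like a flight data recorder, the EBB cannot grow without limit, so it always preserves the most recent stretch of history, which is exactly the window an investigator needs.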

But accident investigation is a human process of discovery and reconstruction. So in this project we will be designing and running three staged (mock) accidents, each covering a different application domain:
  • assisted living robots,
  • educational (toy) robots, and
  • driverless cars.
In these scenarios we will be using real robots, and we will be seeking human volunteers to act in three roles:
  • subject(s) of the accident,
  • witnesses to the accident, and
  • members of the accident investigation team.
Thus we aim to develop and demonstrate both technologies and processes (and ultimately policy recommendations) for robot accident investigation. And the whole project will be conducted within the framework of Responsible Research and Innovation; it will, in effect, be a case study in Responsible Robotics.

The text above is the script for a very short (10-minute) TED-style talk I gave today at the AI@Oxford conference, in the Impact of Trust in AI session, and below are the slides.



References:

[1] Dhillon BS (1991) Robot Accidents. In: Robot Reliability and Safety. Springer, New York, NY.
[2] Macrae C (2014) Close Calls: Managing Risk and Resilience in Airline Flight Safety. Palgrave Macmillan.
[3] Winfield AFT and Jirotka M (2017) The Case for an Ethical Black Box. In: Gao Y, Fallah S, Jin Y, Lekakou C (eds) Towards Autonomous Robotic Systems. TAROS 2017. Lecture Notes in Computer Science, vol 10454. Springer, Cham.