Monday, May 16, 2022

A Draft Open Standard for an Ethical Black Box

About five years ago we proposed that all robots should be fitted with the robot equivalent of an aircraft Flight Data Recorder, to continuously record sensor and relevant internal status data. We call this an ethical black box (EBB). We argued that an ethical black box will play a key role in the process of discovering why and how a robot caused an accident, and would thus be an essential part of establishing accountability and responsibility.
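
To make the flight data recorder analogy concrete, here is a minimal sketch (in Python) of a continuous recorder with a fixed-size memory, so that the newest records overwrite the oldest - just as an aircraft FDR does. Everything here is illustrative: the field names and the robot API in the usage comment are my assumptions, and a real EBB would of course need persistent, tamper-evident storage rather than an in-memory buffer.

```python
# A minimal sketch of the flight data recorder idea. Field names and the
# robot API are illustrative assumptions, not part of any standard.
import time
from collections import deque


class EthicalBlackBox:
    """Continuously records timestamped sensor and internal status
    snapshots, keeping only the most recent `capacity` records (the
    oldest are overwritten, as in an aircraft flight data recorder)."""

    def __init__(self, capacity: int = 10_000):
        self.records = deque(maxlen=capacity)  # oldest entries drop off

    def log(self, sensors: dict, status: dict) -> None:
        self.records.append({
            "timestamp": time.time(),
            "sensors": sensors,   # e.g. battery level, proximity readings
            "status": status,     # e.g. current behaviour, actuator demands
        })


# Typical use inside a robot's control loop (robot.read_sensors() and
# robot.get_status() are hypothetical names):
#   ebb = EthicalBlackBox()
#   while robot.running:
#       ebb.log(robot.read_sensors(), robot.get_status())
```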

Since then, within the RoboTIPS project, we have developed and tested several model EBBs, including one for an e-puck robot that I wrote about in this blog, and another for the MIRO robot. With some experience under our belts, we have now drafted an Open Standard for the EBB for social robots - initially as a paper submitted to the International Conference on Robot Ethics and Standards. Let me now explain first why we need a standard, and second why it should be an open standard.

Why do we need a standard specification for an EBB? As we outline in our new paper, there are four reasons:
  1. A standard approach to EBB implementation in social robots will greatly benefit accident and incident (near miss) investigations. 
  2. An EBB will provide social robot designers and operators with data on robot use that can support both debugging and functional improvements to the robot. 
  3. An EBB can be used to support robot ‘explainability’ functions, allowing the robot, for instance, to answer ‘Why did you just do that?’ questions from its user (a toy sketch of this follows the list). And,
  4. a standard allows EBB implementations to be readily shared and adapted for different robots and will, we hope, encourage manufacturers to develop and market general-purpose robot EBBs.
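
As promised in point 3, here is a toy sketch of how stored EBB records might support a simple ‘Why did you just do that?’ function. The record layout, and the ‘behaviour’ and ‘trigger’ fields, are my own illustrative assumptions, not part of the draft standard.

```python
# A toy 'explainability' function over EBB records. The record layout is
# assumed, purely for illustration.

def explain_last_action(records: list) -> str:
    """Answer 'Why did you just do that?' from the most recent record."""
    if not records:
        return "I have no recorded data to explain my behaviour."
    status = records[-1].get("status", {})
    behaviour = status.get("behaviour", "an unknown behaviour")
    trigger = status.get("trigger", "an unrecorded event")
    return f"I was carrying out '{behaviour}' because of '{trigger}'."


# Example, with hypothetical data:
records = [{"timestamp": 1652690000.0,
            "sensors": {"sonar_m": [0.6, 1.2]},
            "status": {"behaviour": "fetch drink", "trigger": "user request"}}]
print(explain_last_action(records))
# -> I was carrying out 'fetch drink' because of 'user request'.
```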

And why should it be an Open Standard? Bruce Perens, author of The Open Source Definition, outlines a number of criteria an open standard must satisfy, including:

  • Availability: Open standards are available for all to read and implement.
  • Maximize End-User Choice: Open Standards create a fair, competitive market for implementations of the standard.
  • No Royalty: Open standards are free for all to implement, with no royalty or fee.
  • No Discrimination: Open standards and the organizations that administer them do not favor one implementor over another for any reason other than the technical standards compliance of a vendor’s implementation.
  • Extension or Subset: Implementations of open standards may be extended, or offered in subset form.

These are *good* reasons.

The most famous and undoubtedly the most impactful Open Standards are those that specified Internet protocols, such as FTP and email (SMTP). They were, and still are, called Requests for Comments (RFCs) to reflect the fact that they were - especially in the early years - drafts for revision. As a mark of respect we also regard our draft 0.1 Open Standard for an EBB for Social Robots as an RFC. You can find draft 0.1 in Annex A of the paper on arXiv here.

Not only is this a first draft, it is also incomplete, covering only the specification of the data that should be saved in an EBB for social robots, and its format. Given that the EBB data specification is at the heart of the EBB standard, we feel that this is sufficient to be opened up for comments and feedback. We will continue to extend the specification, with subsequent versions also published on arXiv.
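
To give a flavour of what such a data specification covers, here is a purely illustrative EBB record, serialised as JSON. The field names below are mine, invented for this example; the authoritative field definitions are those in Annex A of the paper.

```python
# A purely illustrative EBB record; the real field definitions are those
# in Annex A of the paper, not these.
import json

record = {
    "robot_id": "pepper-01",                    # which robot made the record
    "timestamp": "2022-05-16T10:15:00Z",        # when the record was made
    "sensors": {"battery_pct": 87,              # sampled sensor values
                "sonar_m": [0.6, 1.2, 2.0]},
    "actuators": {"wheel_speeds": [0.1, 0.1]},  # commanded outputs
    "decision": {"behaviour": "fetch_drink",    # what the robot was doing
                 "trigger": "user_request"},    # and why
}
print(json.dumps(record, indent=2))
```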

Please feel free to either submit comments to this blog post (best, because everyone can see the comments), or to contact me directly via email. All constructive comments that result in revisions to the standard will be acknowledged in the standard.

Tuesday, April 12, 2022

Our first mock social robot accident and investigation

Robot accidents are inevitable. These days the likelihood of serious accidents involving industrial robots is pretty low (but not zero), because such robots are generally inside safety cages. But a newer generation of social robots - robots designed to interact directly with people, including vulnerable elderly people or children - means that accidents are now much more likely. And if we also take into account ethical harms alongside physical harms, then the potential for accidents increases still further. Psychological harms include addiction, over-trusting and deception, and societal harms include privacy violations. For more on these ethical harms see my blog post outlining an ethical risk assessment of a smart robot teddy bear.

It has puzzled me for some years that there has been almost no research on robot accident investigation. In the RoboTIPS project we are addressing this deficit by developing both the technology - which we call an Ethical Black Box (EBB) - and the processes of robot accident investigation. One of the most exciting aspects of RoboTIPS is that we're running a series of mock, i.e. staged, social robot accidents in order to road-test the EBB and investigation processes in as close to a real situation as is feasible in a research project. RoboTIPS started in March 2019, but then, just as we were ready to trial our first mock accident, the Covid pandemic hit and closed down the lab.

So it was great that last week we finally managed to run a pilot of the first of our three mock accident scenarios. The scenario, based around an assisted living robot helping an elderly person to live independently, was sketched out in late 2019, and then - during the lockdown - rehearsed in a number of online events, including a podcast radio play for Oxford Sparks and CSI Robot during the UKRAS Festival of Robotics 2021.

Here is the scenario:

Imagine that your elderly mother, or grandmother, has an assisted living robot to help her live independently at home. The robot is capable of fetching her drinks, reminding her to take her medicine and keeping in touch with family. Then one afternoon you get a call from a neighbour who has called round and found your grandmother collapsed on the floor. When the paramedics arrive they find the robot wandering around apparently aimlessly. One of its functions is to call for help if your grandmother stops moving, but it seems that the robot failed to do this.

To enact this scenario we needed a number of volunteers: one to act as Rose - the subject of the accident, a second as the neighbour who discovers the accident and raises the alarm, a third as the paramedic who attends to Rose, a fourth in the role of the cleaner and a fifth in the role of manager of the group of homes in which Rose lives. We also needed volunteers to act as members of the accident investigation team, called in to try and discover what happened, why it happened and, if possible, what changes need to be made to ensure the accident doesn't happen again.

This is the mock accident taking place in the kitchen of our assisted living studio. Left shows the neighbour, acted by Paul, discovering Ross, acted by Alex, injured on the floor. (Note the chair on its side.) Right is the paramedic, role-played by Luc, attending to Ross. Meanwhile the Pepper robot is moving around somewhat aimlessly.

Our brilliant Research Fellow Dr Anouk van Maris, who organised the whole setup, persuaded five colleagues from the Bristol Robotics Lab to take part. All were male, so Rose became Ross. Only one volunteer, Alex, who played the part of Ross, was fully briefed. The other four role-played brilliantly and, although they were briefed on their roles, they were not told what was going to happen to Ross, or the part the Pepper robot played (or maybe didn't play) in the accident. Two colleagues from Oxford, Lars and Keri, kindly volunteered to act as the accident investigators. Lars and Keri also had no prior knowledge of the circumstances of the accident, and had to rely on (i) inspecting the robot and the scene of the accident, (ii) the data from the robot's EBB, and (iii) testimonies from Ross, the neighbour, the paramedic, the cleaner and the facility manager.

Here we see Lars interviewing Mehdi, who acted as the house manager, while Ben, acting as the cleaner, waits to be interviewed. Inside the studio Keri is interviewing the neighbour and paramedic.

So, what were the findings of our accident investigators? They did very well indeed. Close examination of the EBB data, alongside consideration of the (not always reliable) witness testimony, enabled Lars and Keri to correctly deduce the role that the robot played in the accident. They were also able to make several recommendations on operational changes. But I will not reveal their findings in detail here, as we intend to run the same mock accident again soon with a different set of volunteers and - in case any of them should read this blog - I don't want to give the game away!

Acknowledgements

Very special thanks to Dr Anouk van Maris. Also Dr Pericle Salvini, who worked with Anouk in finalising the detail of the scenario and during the pilot itself. Also, huge thanks to BRL volunteers Dr Alex Smith, Dr Paul Bremner, Dr Luc Wijnen, Mehdi Sobhani and Dr Ben Ward-Cherrier. And last but not least a very big thank you to Dr Lars Kunze, Oxford Robotics Institute and Keri Grieman, Dept of Computer Science, Oxford.

From the left: Pericle, Ben, Lars, Alex, Keri, Mehdi, Paul, Anouk, Luc, Lola and me. Pepper is looking nervously at Lola.