Thursday, August 20, 2020

"Why Did You Just Do That?" Explainability and Artificial Theory of Mind for Social Robots

This week I have been attending (virtually) the excellent RoboPhilosophy conference, and this morning gave a plenary talk "Why did you just do that?" Here is the abstract:
An important aspect of transparency is enabling a user to understand what a robot might do in different circumstances. An elderly person might be very unsure about robots, so it is important that her assisted living robot is helpful, predictable – never does anything that puzzles or frightens her – and above all safe. It should be easy for her to learn what the robot does and why, in different circumstances, so that she can build a mental model of her robot. An intuitive approach would be for the robot to be able to explain itself, in natural language, in response to spoken requests such as “Robot, why did you just do that?” or “Robot, what would you do if I fell down?” In this talk I will outline current work, within project RoboTIPS, to apply recent research on artificial theory of mind to the challenge of providing social robots with the ability to explain themselves. 
And here are the slides:


Here are links to the movies:


And here are the papers referenced in the talk, with links:
  1. Jobin, A., Ienca, M. & Vayena, E. (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1, 389–399
  2. Winfield, A. Ethical standards in robotics and AI. Nature Electronics 2, 46–48 (2019).  Pre-print here.
  3. Winfield, A. F. (2018) Experiments in Artificial Theory of Mind: from safety to story telling. Front. Robot. AI 5:75.
  4. Blum, C., Winfield, A. F. and Hafner, V. V. (2018) Simulation-based internal models for safer robots. Frontiers in Robotics and AI, 4 (74). pp. 1-17.
  5. Vanderelst, D. and Winfield, A. F. (2018) An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 48. pp. 56-66.
  6. Winfield AFT (2018) When Robots Tell Each Other Stories: The Emergence of Artificial Fiction. In: Walsh R., Stepney S. (eds) Narrating Complexity. Springer, Cham. Preprint here.
  7. Winfield, AF and Jirotka, M. (2017) The case for an ethical black box. In: Gao, Y. et al, eds. (2017) Towards Autonomous Robot Systems. LNCS 10454, pp. 262-273, Springer. Preprint here.
  8. Winfield AFT, Katie Winkle, Helena Webb, Ulrik Lyngs, Marina Jirotka and Carl Macrae, Robot Accident Investigation: a case study in Responsible Robotics, chapter submitted to RoboSoft.
and mentioned in the Q&A:
  1. Winfield, AF, K. Michael, J. Pitt and V. Evers (2019) Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue], in Proceedings of the IEEE, vol. 107, no. 3, pp. 509-517.
  2. Vanderelst, D. and Winfield, A. (2018), The Dark Side of Ethical Robots, AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society Dec 2018 Pages 317–322. 

Monday, August 10, 2020

Back to robot coding part 1: hello world

One of the many things I promised myself when I retired nearly two years ago was to get back to some coding. Why? Two reasons. The first is that writing and debugging code is hugely satisfying: for those like me not smart enough to do pure maths or theoretical physics, it's the closest you can get to working with pure mind stuff. The second is that I want to prototype a number of ideas in cognitive robotics which tie together work on artificial theory of mind and the ethical black box, with old ideas on robots telling each other stories and new ideas on how social robots might be able to explain themselves in response to questions like "Robot: what would you do if I forget to take my medicine?"

But before starting to work on the robot (a NAO) I first needed to learn Python, so over the last few weeks I completed most of Codecademy's excellent Learn Python 2 course. I have to admit that I started learning Python with big misgivings over the language. I especially don't like the way Python plays fast and loose with variable types, allowing you to assign a thing of one type (integer, float, string, etc.) to a variable and then assign a different kind of thing to the same variable; very different to the strongly typed languages I have used since student days: Algol 60, Algol 68, Pascal and C. However, there are things I do like: the use of indentation as part of the syntax, for instance, and lots of nice built-in functions like range(), so x = range(0,10) puts a list ('array' in old money) of integers from 0 to 9 in x.
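A trivial snippet illustrates both points (Python 2, as in the course and as required by the NAO):

# the same variable can happily hold values of different types - no complaint from Python
x = 42            # x holds an integer...
x = "forty two"   # ...and now a string; Pascal or C would never stand for this

# range() builds a list of integers, ready to loop over or index
y = range(0, 10)
print(y)          # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] (a list, in Python 2)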

So, having got my head around Python, I finally made a start with the robot on Thursday last week. I didn't get far and it was *very* frustrating.

Act 1: setting up on my Mac

Attempting to set things up on my elderly MacBook Air was a bad mistake which sent me spiralling down a rabbit hole of problems. The first thing you have to do is download and unzip the NAO API, called naoqi, from Aldebaran. The same web page then suggests you simply try to import naoqi from within Python: if there are no errors, all is well.
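For the record, the unzipped SDK folder has to be on Python's module search path before that import can succeed. The Aldebaran instructions do this with shell export commands, but roughly the same thing can be done from inside Python (a sketch only, using the folder I unzipped the SDK into - the SDK's compiled libraries also have to be found, which is where my troubles began):

import sys

# put the unzipped SDK on the module search path (the in-Python equivalent of
# setting PYTHONPATH in the shell); this path is simply where I unzipped it
sys.path.append("/Users/alansair/Desktop/naoqi/pynaoqi-python2.7-2.1.4.13-mac64")

import naoqi   # no error here means Python can at least find the SDK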

As soon as I got the export path commands right, import naoqi resulted in the following error:

...
Reason: unsafe use of relative rpath libboost_python.dylib in /Users/alansair/Desktop/naoqi/pynaoqi-python2.7-2.1.4.13-mac64/_qi.so with restricted binary

According to Stack Overflow this problem is caused by Mac OS X System Integrity Protection (SIP).

Then (somewhat nervously) I tried turning SIP off, as instructed here.

But import naoqi still gives an error, now a different one. Perhaps it's because my Python is in the wrong place: the Aldebaran page says it must be at /usr/local/bin/python (the default on the Mac is /usr/bin). So I reinstall Python 2.7 from Python.org so that it is in /usr/local/bin/python. But now I get another error message:

>>> import naoqi
Fatal Python error: PyThreadState_Get: no current thread
Abort trap: 6

A quick search and I read: "this error shows up when a module tries to use a python library that is different than the one the interpreter uses, that is, when you mix two different pythons. I would run otool -L <dyld> on each of the dynamic libraries in the list of Binary Images, and see which ones is linked to the system Python."
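Untangling that would mean checking exactly which Python is running, which from inside the interpreter is simply:

import sys

print(sys.executable)   # which interpreter is actually running, e.g. /usr/local/bin/python
print(sys.version)      # and its version - it needs to be 2.7.x for pynaoqi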

At which point I admitted defeat.

Act 2: setting up on my Linux machine

Once I had established that the Python on my Linux machine was also the required version 2.7, I then downloaded and unzipped the NAO API, this time for Linux.

This time I was able to import naoqi with no errors, and within just a few minutes ran my first NAO program: hello world

from naoqi import ALProxy
# connect to the text-to-speech module on my NAO (the robot's IP address and the default NAOqi port)
tts = ALProxy("ALTextToSpeech", "164.168.0.17", 9559)
tts.say("Hello, world!")

whereupon my NAO robot spoke the words "Hello world". Success!

Friday, June 05, 2020

Robot Accident Investigation

Yesterday I gave a talk at the ICRA 2020 workshop Against Robot Dystopias. The workshop should have been in Paris but - like most academic meetings during the lockdown - was held online. In the Zoom chat window toward the end of the workshop many of us were wistfully imagining continued discussions in a Parisian bar over a few glasses of wine. Next year, I hope. The workshop was excellent and all of the talks should be online soon.

My talk was an extended version of last year's AI@Oxford talk What could possibly go wrong. Drawing on our new paper Robot Accident Investigation, the talk outlines a fictional investigation of a fictional robot accident. We had hoped to stage the mock accident in the lab, with human volunteers, and report a real investigation (of a mock accident), but the lockdown put paid to that too. So we have had to use our imagination and construct - plausibly, I hope - the process and findings of the accident investigation.

Here is the abstract of our paper.
Robot accidents are inevitable. Although rare, they have been happening since assembly-line robots were first introduced in the 1960s. But a new generation of social robots are now becoming commonplace. Often with sophisticated embedded artificial intelligence (AI) social robots might be deployed as care robots to assist elderly or disabled people to live independently. Smart robot toys offer a compelling interactive play experience for children and increasingly capable autonomous vehicles (AVs) the promise of hands-free personal transport and fully autonomous taxis. Unlike industrial robots which are deployed in safety cages, social robots are designed to operate in human environments and interact closely with humans; the likelihood of robot accidents is therefore much greater for social robots than industrial robots. This paper sets out a draft framework for social robot accident investigation; a framework which proposes both the technology and processes that would allow social robot accidents to be investigated with no less rigour than we expect of air or rail accident investigations. The paper also places accident investigation within the practice of responsible robotics, and makes the case that social robotics without accident investigation would be no less irresponsible than aviation without air accident investigation.
And the slides from yesterday's talk:




Special thanks to project colleagues and co-authors: Prof Marina Jirotka, Prof Carl Macrae, Dr Helena Webb, Dr Ulrik Lyngs and Katie Winkle.

Monday, April 20, 2020

Autonomous Robot Evolution: an update

It's been over a year since my last progress report from the Autonomous Robot Evolution (ARE) project, so an update on the ARE Robot Fabricator (RoboFab) is long overdue. There have been several significant advances: first, the integration of all of the elements of RoboFab; second, the design and implementation of an assembly fixture; and third, significantly improved wiring. Here is a CAD drawing of the integrated RoboFab.

The ARE RoboFab has four major subsystems: up to three 3D printers, an organ bank, an assembly fixture and a centrally positioned robot arm (multi-axis manipulator). The purpose of each of these subsystems is outlined as follows:
  • The 3D printers are used to print the evolved robot’s skeleton, which might be a single part, or several. With more than one 3D printer we can speed up the process by 3D printing skeletons for several different evolved robots in parallel, or – for robots with multi-part skeletons – each part can be printed in parallel.
  • The organ bank contains a set of pre-fabricated organs, organised so that the robot arm can pick organs ready for placing within the part-built robot. For more on the organs see previous blog post(s).
  • The assembly fixture is designed to hold (and if necessary rotate) the robot’s core skeleton while organs are placed and wired up.
  • The robot arm is the engine of RoboFab. Fitted with a special gripper, the robot arm is responsible for assembling the complete robot.
And here is the Bristol RoboFab (there is a second identical RoboFab in York):


Note that the assembly fixture is mounted upside down at the top front of the RoboFab. This has the advantage that there is a reasonable volume of clear space for assembly of the robot under the fixture, which is reachable by the robot arm.

The fabrication and assembly sequence has six stages (see the toy code sketch after the list):
  1. RoboFab receives the required coordinates of the organs and one or more mesh file(s) of the shape of the skeleton.
  2. The skeleton is 3D printed.
  3. The robot arm fetches the core ‘brain’ organ from the organ bank and clips it into the skeleton on the print bed. This is a strong locking clip.
  4. The robot arm then lifts the core organ and skeleton assemblage off the print bed, and attaches it to the assembly fixture. The core organ has metal disks on its underside which are used to secure the assemblage to the fixture with electromagnets.
  5. The robot arm then picks and places the required organs from the organ bank, clipping them into place on the skeleton.
  6. Finally the robot arm wires each organ to the core organ, to complete the robot.
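Purely to make the flow concrete, here is a toy sketch of that sequence in Python. It is not the real RoboFab control code and all the names are made up; the stub classes simply log what the real subsystems would do.

class Printer(object):
    def print_part(self, mesh):
        print("3D printing skeleton part from %s" % mesh)

class Arm(object):
    def pick(self, organ):
        print("picking %s organ from the organ bank" % organ)
    def clip(self, organ, position):
        print("clipping %s organ into the skeleton at %s" % (organ, position))
    def move_to_fixture(self):
        print("lifting the assemblage off the print bed onto the assembly fixture")
    def wire_to_core(self, organ):
        print("pulling the %s organ's cable and plugging it into the core" % organ)

def fabricate(skeleton_meshes, organ_positions):
    # stage 1: RoboFab receives the organ coordinates and skeleton mesh file(s)
    printer, arm = Printer(), Arm()
    for mesh in skeleton_meshes:                  # stage 2: 3D print the skeleton
        printer.print_part(mesh)
    arm.pick("core")                              # stage 3: clip in the core 'brain' organ
    arm.clip("core", organ_positions["core"])
    arm.move_to_fixture()                         # stage 4: attach to the assembly fixture
    for organ, pos in organ_positions.items():    # stage 5: place the remaining organs
        if organ != "core":
            arm.pick(organ)
            arm.clip(organ, pos)
    for organ in organ_positions:                 # stage 6: wire each organ to the core
        if organ != "core":
            arm.wire_to_core(organ)

fabricate(["skeleton.stl"],
          {"core": (0, 0), "left wheel": (-40, 0), "right wheel": (40, 0)})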



Here is a complete robot, fabricated, assembled and wired by the RoboFab. This evolved robot has a total of three organs: the core ‘brain’ organ, and two wheel organs.
Note especially the wires connecting the wheel organs to the core organ. My colleague Matt has come up with an ingenious design in which a coiled cable is contained within the organ. After the organs have been attached to the skeleton (stage 5), the robot arm in turn grabs each organ's jack plug and pulls the cable to plug into the core organ (stage 6). This design minimises the previously encountered problem of the robot gripper getting tangled in dangling loose wires during stage 6.

And here is a video clip of the complete process:



Credits

The work described here has been led by my brilliant colleague Matt Hale, very ably supported by York colleagues Edgar Buchanan and Mike Angus. The only credit I can take is that I came up with some of the ideas and co-wrote the bid that secured the EPSRC funding for the project.

References

For a much more detailed account of the RoboFab see this paper, which was presented at ALife 2019 last summer in Newcastle: The ARE Robot Fabricator: How to (Re)produce Robots that Can Evolve in the Real World.

Related blog posts

First automated robot assembly (February 2019)
Autonomous Robot Evolution: from cradle to grave (July 2018)
Autonomous Robot Evolution: first challenges (October 2018)