Thursday, August 20, 2020

"Why Did You Just Do That?" Explainability and Artificial Theory of Mind for Social Robots

This week I have been attending (virtually) the excellent RoboPhilosophy conference, and this morning gave a plenary talk "Why did you just do that?" Here is the abstract:
An important aspect of transparency is enabling a user to understand what a robot might do in different circumstances. An elderly person might be very unsure about robots, so it is important that her assisted living robot is helpful, predictable – never does anything that puzzles or frightens her – and above all safe. It should be easy for her to learn what the robot does and why, in different circumstances, so that she can build a mental model of her robot. An intuitive approach would be for the robot to be able to explain itself, in natural language, in response to spoken requests such as “Robot, why did you just do that?” or “Robot, what would you do if I fell down?” In this talk I will outline current work, within project RoboTIPS, to apply recent research on artificial theory of mind to the challenge of providing social robots with the ability to explain themselves. 
And here are the slides:

[slides embedded in the original post]

Here are links to the movies:

[movie links embedded in the original post]
And here are the papers referenced in the talk, with links:
  1. Jobin, A., Ienca, M. & Vayena, E. (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1, 389–399.
  2. Winfield, A. (2019) Ethical standards in robotics and AI. Nature Electronics 2, 46–48. Preprint here.
  3. Winfield, A. F. (2018) Experiments in Artificial Theory of Mind: from safety to story-telling. Front. Robot. AI 5:75.
  4. Blum, C., Winfield, A. F. and Hafner, V. V. (2018) Simulation-based internal models for safer robots. Frontiers in Robotics and AI, 4(74), pp. 1-17.
  5. Vanderelst, D. and Winfield, A. F. (2018) An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 48, pp. 56-66.
  6. Winfield, A. F. T. (2018) When Robots Tell Each Other Stories: The Emergence of Artificial Fiction. In: Walsh, R. and Stepney, S. (eds) Narrating Complexity. Springer, Cham. Preprint here.
  7. Winfield, A. F. and Jirotka, M. (2017) The case for an ethical black box. In: Gao, Y. et al. (eds) Towards Autonomous Robotic Systems. LNCS 10454, pp. 262-273, Springer. Preprint here.
  8. Winfield, A. F. T., Winkle, K., Webb, H., Lyngs, U., Jirotka, M. and Macrae, C., Robot Accident Investigation: a case study in Responsible Robotics, chapter submitted to RoboSoft.
and mentioned in the Q&A:
  1. Winfield, A. F., Michael, K., Pitt, J. and Evers, V. (2019) Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]. Proceedings of the IEEE, vol. 107, no. 3, pp. 509-517.
  2. Vanderelst, D. and Winfield, A. (2018) The Dark Side of Ethical Robots. AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, December 2018, pp. 317–322.

Monday, August 10, 2020

Back to robot coding part 1: hello world

One of the many things I promised myself when I retired nearly two years ago was to get back to some coding. Why? Two reasons. The first is that writing and debugging code is hugely satisfying: for those of us not smart enough to do pure maths or theoretical physics, it's the closest you can get to working with pure mind stuff. The second is that I want to prototype a number of ideas in cognitive robots which tie together work on artificial theory of mind and the ethical black box, with old ideas on robots telling each other stories and new ideas on how social robots might be able to explain themselves in response to questions like "Robot: what would you do if I forget to take my medicine?"

But before starting to work on the robot (a NAO) I first needed to learn Python, so over the last few weeks I completed most of Codecademy's excellent Learn Python 2 course. I have to admit that I started learning Python with big misgivings about the language. I especially don't like the way Python plays fast and loose with variable types, allowing you to arbitrarily assign a thing (integer, float, string, etc.) to a variable and then assign a different kind of thing to the same variable; very different to the strongly typed languages I have used since student days: Algol 60, Algol 68, Pascal and C. However, there are things I do like: the use of indentation as part of the syntax, for instance, and lots of nice built-in functions like range(), so x = range(0,10) puts a list ('array' in old money) of the integers from 0 to 9 in x.
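Both habits are easy to demonstrate in a few lines (a minimal sketch; the variable names are just for illustration):

```python
# Dynamic typing: the same variable can hold values of different types
# in turn, with no declarations or casts required
x = 42        # x holds an integer
x = "hello"   # now x holds a string

# range() builds a sequence of integers (a genuine list in Python 2)
squares = [n * n for n in range(0, 10)]
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```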

So, having got my head around Python I finally made a start with the robot on Thursday last week. I didn't get far and it was *very* frustrating. 

Act 1: setting up on my Mac

Attempting to set things up on my elderly MacBook Air was a bad mistake which sent me spiralling down a rabbit hole of problems. The first thing you have to do is download and unzip the NAO Python API, called naoqi, from Aldebaran. The same web page then suggests you simply try to import naoqi from within Python, and if there are no errors all's well.
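The export commands amount to putting the unzipped SDK directory onto Python's module search path (plus the dynamic library path on macOS). The same module-path effect can be had from inside Python itself; the directory below is from my setup and will differ on yours:

```python
import sys

# The unzipped pynaoqi SDK directory -- this path is from my machine,
# adjust it to wherever you unzipped the download
SDK_DIR = "/Users/alansair/Desktop/naoqi/pynaoqi-python2.7-2.1.4.13-mac64"

# Equivalent of adding the SDK directory to PYTHONPATH before launching Python
if SDK_DIR not in sys.path:
    sys.path.append(SDK_DIR)
```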

As soon as I got the export path commands right, import naoqi resulted in the following error:

...
Reason: unsafe use of relative rpath libboost_python.dylib in /Users/alansair/Desktop/naoqi/pynaoqi-python2.7-2.1.4.13-mac64/_qi.so with restricted binary

According to Stack Overflow, this problem is caused by macOS System Integrity Protection (SIP).

Then (somewhat nervously) I tried turning SIP off, as instructed here.

But import naoqi still failed, now with a different error. Perhaps it's because my Python is in the wrong place: the Aldebaran page says it must be at /usr/local/bin/python (the default on the Mac is /usr/bin). OK, so I reinstalled Python 2.7 from Python.org so that it is in /usr/local/bin/python. But now I get another error message:

>>> import naoqi
Fatal Python error: PyThreadState_Get: no current thread
Abort trap: 6

A quick search and I read: "this error shows up when a module tries to use a python library that is different than the one the interpreter uses, that is, when you mix two different pythons. I would run otool -L <dyld> on each of the dynamic libraries in the list of Binary Images, and see which ones is linked to the system Python."

At which point I admitted defeat.

Act 2: setting up on my Linux machine

Once I had established that the Python on my Linux machine was also the required version 2.7, I then downloaded and unzipped the NAO API, this time for Linux.

This time I was able to import naoqi with no errors, and within just a few minutes ran my first NAO program: hello world

from naoqi import ALProxy
# create a proxy to the robot's text-to-speech module; the IP address
# is my robot's, and 9559 is naoqi's default port
tts = ALProxy("ALTextToSpeech", "164.168.0.17", 9559)
tts.say("Hello, world!")

whereupon my NAO robot spoke the words "Hello, world!". Success!
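For anyone following along, a slightly more defensive sketch of the same program reports what went wrong rather than crashing. The IP address is my robot's (yours will differ), and catching RuntimeError is an assumption based on how naoqi behaves on my machine when it cannot reach the robot:

```python
ROBOT_IP = "164.168.0.17"   # my robot's address; yours will differ
ROBOT_PORT = 9559           # naoqi's default port

def say_hello(ip=ROBOT_IP, port=ROBOT_PORT):
    """Try to say hello on the robot; return a status string."""
    try:
        from naoqi import ALProxy
    except ImportError:
        return "naoqi not found - check PYTHONPATH points at the unzipped SDK"
    try:
        # assumption: ALProxy raises RuntimeError if the robot is unreachable
        tts = ALProxy("ALTextToSpeech", ip, port)
        tts.say("Hello, world!")
        return "ok"
    except RuntimeError:
        return "could not reach the robot - is it on, and is the IP right?"

print(say_hello())
```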