Friday, September 28, 2018

Experiments in Artificial Theory of Mind

Since setting out my initial thoughts on robots with simulation-based internal models about 5 years ago - initially in the context of ethical robots - I've had a larger ambition for these models: that they might provide us with a way of building robots with artificial theory of mind - something I first suggested when I outlined the consequence engine 4 years ago.

Since then we've been busy experimentally applying our consequence engine in the lab, in a range of contexts including ethics, safety and imitation, giving me little time to think about theory of mind. But then, in January 2017, I was contacted by Antonio Chella, inviting me to submit a paper to a special issue on Consciousness in Humanoid Robots. After some hesitation on my part, and encouragement on Antonio's, I realised that this was a perfect opportunity.

Of course theory of mind is not consciousness but it is for sure deeply implicated. And, as I discovered while researching the paper, the role of theory of mind in consciousness (or, indeed of consciousness in theory of mind) is both unclear and controversial. So, this paper, written in the autumn of 2017, submitted January 2018, and - after tough review and major revisions - accepted in June 2018, is my first (somewhat tentative) contribution to the machine consciousness literature.

Experiments in Artificial Theory of Mind: From Safety to Story-Telling, advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind.

Here is the abstract:
Theory of mind is the term given by philosophers and psychologists for the ability to form a predictive model of self and others. In this paper we focus on synthetic models of theory of mind. We contend firstly that such models—especially when tested experimentally—can provide useful insights into cognition, and secondly that artificial theory of mind can provide intelligent robots with powerful new capabilities, in particular social intelligence for human-robot interaction. This paper advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind. Proposed as a computational model of the simulation theory of mind, our simulation-based internal model equips a robot with an internal model of itself and its environment, including other dynamic actors, which can test (i.e., simulate) the robot’s next possible actions and hence anticipate the likely consequences of those actions both for itself and others. Although it falls far short of a full artificial theory of mind, our model does allow us to test several interesting scenarios: in some of these a robot equipped with the internal model interacts with other robots without an internal model, but acting as proxy humans; in others two robots each with a simulation-based internal model interact with each other. We outline a series of experiments which each demonstrate some aspect of artificial theory of mind.
For an outline of the work of the paper see the slides below, presented first at the SPANNER workshop in York in September 2018, then at a workshop on Social Learning and Cultural Evolution at ALife 2019 in July 2019.



In fact all of the experiments outlined here have been described in some detail in previous blog posts (although not in the context of artificial theory of mind):
  1. The Corridor experiment 
  2. The Pedestrian experiment
  3. The Ethical robot experiments: with e-puck robots and with NAO robots
  4. Experiments on rational imitation (the imitation of goals)
  5. Story-telling robots**
The thing that ties all of these experiments together is that they all make use of a simulation-based internal model, which we call a consequence engine. This allows our robot to model, and hence predict, the likely consequences of each of its next possible actions, both for itself and for the other dynamic actors it is interacting with. In some of the experiments those actors are robots acting as proxy humans, so those experiments (in particular the corridor and ethical robot experiments) are really concerned with human-robot interaction.
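To give a flavour of how this works, here is a minimal Python sketch of the consequence-engine loop. It is purely illustrative: the toy one-dimensional models, the evaluation function and the selection rule below are stand-ins for the much richer simulation and safety logic that runs on the real robots.

```python
def consequence_engine(next_actions, self_model, other_models, evaluate, select):
    """Predict the consequences of each candidate action, for the robot and
    for every other actor it models, then hand them to an action-selection rule."""
    consequences = {}
    for action in next_actions:
        predicted_self = self_model(action)
        predicted_others = [model(action) for model in other_models]
        consequences[action] = evaluate(predicted_self, predicted_others)
    return select(consequences)

# Toy instantiation: a 1-D world in which the robot chooses a speed, the other
# actor is assumed to stay put, and the evaluation simply rewards clearance.
if __name__ == "__main__":
    other_position = 3.0
    chosen = consequence_engine(
        next_actions=(0.0, 1.0, 2.0),                # candidate speeds
        self_model=lambda a: a * 2.0,                # own position after 2 time steps
        other_models=[lambda a: other_position],     # the other actor stays put
        evaluate=lambda me, others: min(abs(me - o) for o in others),
        select=lambda cons: max(cons, key=cons.get), # keep the largest clearance
    )
    print(chosen)  # -> 0.0 in this toy setup
```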

Theory of mind is the ability to form a predictive model of ourselves and others; it's the thing that allows us to infer the beliefs and intentions of others. Curiously there are two main theories of mind: the 'theory theory' and the 'simulation theory'. The theory theory (TT) holds that one intelligent agent’s understanding of another’s mind is based on innate or learned rules, sometimes known as folk psychology. In TT these hidden rules constitute a 'theory' because they can be used to both explain and make predictions about others’ intentions.  The simulation theory (ST) instead holds that “we use our own mental apparatus to form predictions and explanations of someone by putting ourselves in the shoes of another person and simulating them” (Michlmayr, 2002).
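The contrast is perhaps easier to see in code. The fragment below is only an illustration I have put together for this post (nothing here comes from the paper): the TT predictor applies explicit folk-psychology rules to an observation of the other agent, while the ST predictor re-uses the agent's own decision-making to stand in the other's shoes.

```python
def my_policy(goal, position):
    """The agent's own decision rule: move towards wherever the goal is."""
    return "move left" if goal < position else "move right"

def tt_predict(observation):
    """Theory theory: predict the other agent from explicit rules (folk psychology)."""
    rules = {"facing left": "move left", "facing right": "move right"}
    return rules.get(observation, "stay put")

def st_predict(other_goal, other_position):
    """Simulation theory: put myself in the other's place and run my own policy."""
    return my_policy(other_goal, other_position)

print(tt_predict("facing left"))                       # rule-based prediction
print(st_predict(other_goal=2.0, other_position=5.0))  # prediction by simulation
```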

When we hold our simulation-based internal model up against the simulation theory of mind, the two appear to mirror each other remarkably well. If a robot has a simulation of itself inside itself then it can explain and predict the actions both of itself and of others like itself, by using its simulation-based internal model to model them. Thus we have an embodied computational model of theory of mind: in short, artificial theory of mind.

So, what properties of theory of mind (ToM) are demonstrated in our five experiments?

Well, the first thing to note is that not all of the experiments implement full ST. In the corridor, pedestrian and ethical robot experiments robots predict their own actions using the simulation-based internal model, i.e. ST, but use a much simpler TT to model the other robots: a simple ballistic model, which assumes the other robot will continue to move at its current speed and heading. Thus I describe these experiments as ST (self) + TT (other), or just ST+TT for short. I argue that this hybrid form of artificial ToM is perfectly valid, since you and I clearly don't model strangers we are trying to avoid in a crowded corridor as anything other than people moving at a particular speed and in a particular direction. We don't need to intuit their state of mind, only where they are going.
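Concretely, the hybrid looks something like the sketch below. Again this is my own toy Python with invented straight-line dynamics, not the corridor experiment's actual code: the other robot is extrapolated ballistically (TT), while each of the robot's own candidate actions is rolled out in simulation (ST) and the action giving the greatest clearance is chosen.

```python
import math

def ballistic_model(pos, vel, t):
    """TT for the other robot: assume it keeps its current speed and heading."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def rollout_self(pos, action, t):
    """ST for the self: simulate the robot's own candidate action (a velocity)."""
    return (pos[0] + action[0] * t, pos[1] + action[1] * t)

def safest_action(self_pos, other_pos, other_vel, actions, horizon=10):
    """Choose the action whose rollout keeps the greatest minimum distance
    from the ballistically predicted other robot over the horizon."""
    def min_clearance(action):
        return min(
            math.dist(rollout_self(self_pos, action, t),
                      ballistic_model(other_pos, other_vel, t))
            for t in range(1, horizon + 1)
        )
    return max(actions, key=min_clearance)

# Two robots approaching head-on in a corridor: swerving is the safest option.
print(safest_action(self_pos=(0, 0), other_pos=(10, 0), other_vel=(-1, 0),
                    actions=[(1, 0), (1, 0.5), (1, -0.5)]))
```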

The rational imitation and story-telling experiments do, however, use ST for both self and other, since a simple TT will not allow an imitating robot to infer the goals of the demonstrating robot, nor is it sufficient to allow a listener robot to 'imagine' the story told by the storytelling robot.
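As a sketch of the idea behind the rational imitation case (with invented one-dimensional dynamics, not the experiment's actual setup), the imitating robot can infer a demonstrator's goal by simulating what it would itself do for each candidate goal, then adopting the goal whose simulated trajectory best matches what it observed:

```python
def simulate_towards(start, goal, steps):
    """The imitator's own forward model: step towards the goal at unit speed."""
    pos, path = start, []
    for _ in range(steps):
        pos += 1 if goal > pos else -1
        path.append(pos)
    return path

def infer_goal(observed_path, start, candidate_goals):
    """ST for the other: pick the candidate goal whose simulated trajectory
    is closest to the demonstrator's observed trajectory."""
    def mismatch(goal):
        simulated = simulate_towards(start, goal, steps=len(observed_path))
        return sum(abs(s - o) for s, o in zip(simulated, observed_path))
    return min(candidate_goals, key=mismatch)

# The demonstrator was seen moving steadily to the right from 0,
# so the imitator infers that its goal was at +5 rather than -5.
print(infer_goal(observed_path=[1, 2, 3, 4, 5], start=0, candidate_goals=[-5, 5]))
```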

The table below summarises these differences and highlights the different aspects of theory of mind demonstrated in each of the five experiments.

*Theory mode for each experiment: ST (self) + TT or ST (other), as described above.

An unexpected real-world use for the approach set out in this paper is to allow robots to explain themselves. I believe explainability will be especially important for social robots, i.e. robots designed to interact with people. Let me explain by quoting two paragraphs from the paper.

A major problem with human-robot interaction is the serious asymmetry of theory of mind. Consider an elderly person and her care robot. It is likely that a reasonably sophisticated near-future care robot will have a built-in (TT) model of an elderly human (or even of a particular human). This places the robot at an advantage because the elderly person has no theory of mind at all for the robot, whereas the robot has a (likely limited) theory of mind for her. Actually the situation may be worse than this, since our elderly person may have a completely incorrect theory of mind for the robot, perhaps based on preconceptions or misunderstandings of how the robot should behave and why. Thus, when the robot actually behaves in a way that doesn’t make sense to the elderly person, her trust in the robot will be damaged and its effectiveness diminished.

The storytelling model proposed here provides us with a powerful mechanism for the robot to be able to generate explanations for its actual or possible actions. Especially important is that the robot’s user should be able to ask (or press a button to ask) the robot to explain “why did you just do that?” Or, pre-emptively, to ask the robot questions such as “what would you do if I fell down?” Assuming that the care robot is equipped with an autobiographical memory, the first of these questions would require it to re-run and narrate the most recent action sequence to be able to explain why it acted as it did, i.e., “I turned left because I didn’t want to bump into you.” The second kind of pre-emptive query requires the robot to interpret the question in such a way it can first initialize its internal model to match the situation described, run that model, then narrate the actions it predicts it would take in that situation. In this case the robot acts first as a listener, then as the narrator (see slide 18 above). In this way the robot would actively assist its human user to build a theory-of-mind for the robot.
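To make that a little more concrete, here is a speculative sketch of the two kinds of query, with an invented autobiographical-memory format and canned narration templates; in the model proposed in the paper the explanation would come from re-running the internal simulation, not from a lookup.

```python
# Invented episode format: a real system would record the simulation runs
# behind each decision, not a ready-made reason string.
autobiographical_memory = [
    {"action": "slowed down", "reason": "you stepped into my path"},
    {"action": "turned left", "reason": "I didn't want to bump into you"},
]

def explain_last_action(memory):
    """'Why did you just do that?': narrate the most recent episode."""
    episode = memory[-1]
    return f"I {episode['action']} because {episode['reason']}."

def what_would_you_do_if(situation, internal_model):
    """'What would you do if ...?': initialise the internal model to the
    described situation, run it, and narrate the predicted action."""
    predicted_action = internal_model(situation)
    return f"If {situation}, I would {predicted_action}."

# Toy stand-in for a full simulation run of the internal model.
toy_internal_model = {"you fell down": "call for help and stay beside you"}.get

print(explain_last_action(autobiographical_memory))
print(what_would_you_do_if("you fell down", toy_internal_model))
```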


**This one remains, for the time being, a thought experiment.

See also: When Robots Tell Each Other Stories: The Emergence of Artificial Fiction

Reference:

Michlmayr, M. (2002). Simulation Theory Versus Theory Theory: Theories Concerning the Ability to Read Minds. Master’s thesis, Leopold-Franzens-Universität Innsbruck.
