When I wrote about story-telling robots nearly seven years ago, I had no idea how we could actually build robots that tell each other stories. Now I believe I do, and my paper setting out how has just been published in a new volume called Narrating Complexity. You can find a pdf online here.
The book emerged from a hugely interesting series of workshops, led by Richard Walsh and Susan Stepney, which brought together several humanities disciplines including narratology, with complexity scientists, systems biologists and a roboticist (me). It was at one of those workshops that I realised that simulation-based internal models - the focus of much of my recent work - could form the basis for story-telling.
To recap: a simulation-based internal model is a computer simulation, inside the robot, of the robot itself and its environment - including any other robots. Like animals, all robots have a set of next possible actions, but unlike animals (and especially humans) robots have only a small repertoire of actions. With an internal model a robot can predict what might happen, in its immediate future, for each of those next possible actions. I call this model a consequence engine because it gives the robot a powerful way of predicting the consequences of its actions, for both itself and other robots.
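The idea can be sketched in a few lines of Python. This is a minimal illustration only, using a toy grid world - the action names, the hazard map and the coordinates are my assumptions here, not the implementation described in the paper:

```python
# A toy simulation-based internal model ("consequence engine").
# The robot simulates each of its next possible actions and labels
# the predicted outcome.

HAZARDS = {(-1, 0)}  # an assumed wall cell immediately to the robot's left

MOVES = {                     # each next possible action, as a grid step
    "turn_left": (-1, 0),
    "move_ahead": (0, 1),
    "stand_still": (0, 0),
}

def simulate(position, action):
    """Predict the immediate outcome of one candidate action."""
    dx, dy = MOVES[action]
    nxt = (position[0] + dx, position[1] + dy)
    return nxt, ("collision" if nxt in HAZARDS else "safe")

def consequence_engine(position):
    """Run the internal simulation once per next possible action."""
    return {action: simulate(position, action) for action in MOVES}

predictions = consequence_engine((0, 0))
```

Here `predictions` maps every candidate action to its predicted position and outcome, so the robot can, for instance, rule out any action labelled `"collision"`.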
So, how can we use the consequence engine to make story-telling robots?
When the robot runs its consequence engine it is asking itself a 'what if' question: 'what if I turned left?' or 'what if I just stand here?'. Some researchers have called a simulation-based internal model a 'functional imagination', and it's not a bad metaphor. Our robot 'imagines' what might happen in different circumstances. And when the robot has imagined something it has a kind of internal narrative: 'if I turn left I will likely crash into the wall'. In a way the robot is telling itself a story about something that might happen. In Dennett's conceptual Tower-of-Generate-and-Test the robot is a Popperian creature.
Now consider the possibility that the robot converts that internal narrative into speech, and literally speaks it out loud. With current speech synthesis technology that should be relatively easy to do. There is a diagram showing this in the paper.
Another robot (B) is equipped with exactly the same cognitive machinery as robot A, and - as shown in the paper - robot B listens to robot A's 'story' (using speech recognition), interprets that story as an action and a consequence, then 'runs' it in its own consequence engine. In effect robot B 'imagines' robot A's story. It 'imagines' turning left and crashing into the wall - even though it might not be standing near a wall to its left.
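Robot B's side of the exchange can be sketched as two steps: parse the heard sentence back into an (action, consequence) pair, then replay the action in B's own internal model and compare. The sentence grammar and B's model below are assumptions for illustration:

```python
# Sketch of robot B re-imagining robot A's story.

def parse_story(story):
    """Split 'if I <action> I will likely <consequence>' into its parts."""
    body = story.removeprefix("if I ")
    action, _, consequence = body.partition(" I will likely ")
    return action, consequence

# Robot B's own (assumed) internal model: action -> imagined consequence.
B_MODEL = {
    "turn left": "crash into the wall",
    "move ahead": "be safe",
}

def reimagine(story):
    """Replay A's reported action in B's model; True if B's imagining agrees."""
    action, told_consequence = parse_story(story)
    imagined = B_MODEL.get(action, "do something I cannot imagine")
    return imagined == told_consequence

print(reimagine("if I turn left I will likely crash into the wall"))
# → True
```

Note that B runs the simulation from its own model, not from A's situation - which is exactly the point: B 'imagines' the crash without standing near a wall itself.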
You may be wondering: 'ok, but where is the meaning? Surely B cannot really understand A's simple stories..?' Here I am going to stick my neck out and suggest that the process of re-imagining is what understanding is. Of course you and I can imagine a vast range of things, including situations that no human has ever experienced (or perhaps could ever experience); Roy Batty's famous line "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion..." comes to mind.
In contrast our robots have a profoundly limited imagination; their world (both real and imagined) contains only the objects and hazards of their immediate environment and they are capable only of imagining next possible actions and the immediate consequences of those actions. And that limited imagination does have the simple physics of collisions built in (providing the robot with a kind of common sense). But I contend that - within the constraints of that very limited imagination - our robots can properly be said to 'understand' each other.
But perhaps I'm getting ahead of myself, given that we haven't actually run the experiments yet.