
Sunday, January 27, 2019

When Robots Tell Each Other Stories: The Emergence of Artificial Fiction

When I wrote about story-telling robots nearly 7 years ago, I had no idea how we could actually build robots that can tell each other stories. Now I believe I do, and my paper setting out how has just been published in a new volume called Narrating Complexity. You can find a PDF online here.

The book emerged from a hugely interesting series of workshops, led by Richard Walsh and Susan Stepney, which brought together several humanities disciplines, including narratology, with complexity scientists, systems biologists and a roboticist (me). It was at one of those workshops that I realised that simulation-based internal models - the focus of much of my recent work - could form the basis for story-telling.

To recap: a simulation-based internal model is a computer simulation, running inside the robot, of that robot and its environment, including other robots. Like animals, all robots have a set of next possible actions, but unlike animals (and especially humans) robots have only a small repertoire of actions. With an internal model a robot can predict what might happen (in its immediate future) for each of those next possible actions. I call this model a consequence engine because it gives the robot a powerful way of predicting the consequences of its actions, for both itself and other robots.
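
To make this concrete, here is a minimal Python sketch of the idea - not the implementation described in the paper. The robot loops over its small repertoire of next possible actions, runs each one in its internal simulation, and records the predicted consequence. The world-model interface (copy and run) and the action and outcome names are my own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    action: str    # e.g. "turn_left"
    outcome: str   # e.g. "collide_with_wall" or "no_collision"

class ConsequenceEngine:
    """Sketch of a consequence engine: an internal simulation used to
    predict the outcome of each next possible action."""

    def __init__(self, world_model):
        # world_model is assumed to be the robot's internal simulation of
        # itself, its environment and any other robots it can perceive,
        # offering copy() and run(action) methods (an assumed interface).
        self.world_model = world_model

    def predict(self, actions):
        """Run the internal simulation forward once for each candidate
        action and record the predicted consequence."""
        predictions = []
        for action in actions:
            simulated = self.world_model.copy()   # leave the live model untouched
            outcome = simulated.run(action)       # e.g. "collide_with_wall"
            predictions.append(Prediction(action, outcome))
        return predictions
```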

So, how can we use the consequence engine to make story-telling robots?

When the robot runs its consequence engine it is asking itself a 'what if' question: 'what if I turn left?' or 'what if I just stand here?'. Some researchers have called a simulation-based internal model a 'functional imagination', and it's not a bad metaphor. Our robot 'imagines' what might happen in different circumstances. And when the robot has imagined something it has a kind of internal narrative: 'if I turn left I will likely crash into the wall'. In a way the robot is telling itself a story about something that might happen. In Dennett's conceptual Tower-of-Generate-and-Test the robot is a Popperian creature.
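
Continuing the sketch above, each prediction can be mapped onto a one-sentence internal narrative. The phrase tables below are purely illustrative assumptions; nothing in the architecture fixes a particular wording.

```python
# Illustrative mappings from internal action/outcome labels to English phrases.
ACTION_PHRASES = {"turn_left": "turn left",
                  "turn_right": "turn right",
                  "stand_still": "just stand here"}
OUTCOME_PHRASES = {"collide_with_wall": "crash into the wall",
                   "no_collision": "be safe"}

def narrate(action, outcome):
    """Turn one (action, outcome) prediction into a one-sentence 'story',
    e.g. 'if I turn left I will likely crash into the wall'."""
    return f"if I {ACTION_PHRASES[action]} I will likely {OUTCOME_PHRASES[outcome]}"

print(narrate("turn_left", "collide_with_wall"))
```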

Now consider the possibility that the robot converts that internal narrative into speech, and literally speaks it out loud. With current speech synthesis technology that should be relatively easy to do. Here is a diagram showing this.
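
As a small example, using an off-the-shelf text-to-speech library such as pyttsx3 - just one possible choice, and not necessarily the setup we would use - robot A could speak its one-sentence narrative like this:

```python
import pyttsx3  # an off-the-shelf text-to-speech library, used here only as an example

def speak(sentence):
    """Speak the robot's internal narrative out loud."""
    engine = pyttsx3.init()
    engine.say(sentence)
    engine.runAndWait()

speak("if I turn left I will likely crash into the wall")
```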

The blue box on the left is a simplified version of the consequence engine; it's the cognitive machinery that allows the robot to predict the consequences of a particular action. For an outline of how it works there's a description in the paper.

Another robot (B) is equipped with exactly the same cognitive machinery as robot A, and - as shown below - robot B listens to robot A's 'story' (using speech recognition), interprets that story as an action and a consequence, and 'runs' it in its own consequence engine. In effect robot B 'imagines' robot A's story. It 'imagines' turning left and crashing into the wall - even though it might not be standing near a wall to its left.

The new idea here is that the listener robot (B) converts the story it has heard into a 'what if' question, then 'runs' it in its own consequence engine. In a sense A has invited B to imagine itself in A's shoes. Although A's story is trivial compared with the stories we humans tell each other, it does, I suggest, have all the key elements. And of course A and B are not limited to fictional stories: A could - just as easily - recount something that has actually happened to it, like 'I turned right to avoid crashing into the wall'.
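
Here is a sketch of that listener side, reusing the illustrative consequence-engine interface from the earlier sketch. The keyword-based parsing is a deliberate simplification: it only works because A and B are assumed to share the same tiny vocabulary of actions and consequences.

```python
def parse_story(sentence):
    """Recover an (action, consequence) pair from a sentence such as
    'if I turn left I will likely crash into the wall'.
    Keyword matching is an illustrative simplification, not a proposal
    for general language understanding."""
    if "turn left" in sentence:
        action = "turn_left"
    elif "turn right" in sentence:
        action = "turn_right"
    else:
        action = "stand_still"
    consequence = "collide_with_wall" if "crash" in sentence else "no_collision"
    return action, consequence

def imagine_story(consequence_engine, sentence):
    """Robot B 'imagines' A's story: it runs the reported action in its own
    consequence engine and checks whether the outcome it imagines matches
    the consequence A described."""
    action, reported_outcome = parse_story(sentence)
    predicted = consequence_engine.predict([action])[0]
    return predicted.outcome == reported_outcome
```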

You may be wondering 'ok, but where is the meaning? Surely B cannot really understand A's simple stories?' Here I am going to stick my neck out and suggest that the process of re-imagining is what understanding is. Of course you and I can imagine a vast range of things, including situations that no human has ever experienced (or perhaps could ever experience); Roy Batty's famous line "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion..." comes to mind.

In contrast our robots have a profoundly limited imagination; their world (both real and imagined) contains only the objects and hazards of their immediate environment, and they are capable only of imagining next possible actions and the immediate consequences of those actions. And that limited imagination does have the simple physics of collisions built in (providing the robot with a kind of common sense). But I contend that - within the constraints of that very limited imagination - our robots can properly be said to 'understand' each other.

But perhaps I'm getting ahead of myself, given that we haven't actually run the experiments yet.

