Tuesday, July 24, 2012

When robots start telling each other stories...

About 6 years ago the late, amazing Richard Gregory said to me, with a twinkle in his eye, "when your robots start telling each other stories, then you'll really be onto something". It was a remark with much deeper significance than I realised at the time.

Richard planted a seed that's been growing ever since. What I didn't fully appreciate then, but do now, is the profound importance of narrative. More than we perhaps imagine. Narrative is, I suspect, a fundamental property of both human societies and individual human beings. It may even be a universal property of all advanced societies of sentient social beings. Let me try and justify this outlandish claim. First, take human societies. We humans love to tell each other stories. Our stories may be epic poems or love songs; stories told with sound (music), with movement (dance), or with stuff (sculpture or art); stories about what we did today or on our holidays; stories made with images (photos or movies); true stories or fantasies; stories about the Universe that strive to be true (science); or very formal abstract stories told with mathematics. Stories are everywhere. Arguably, human culture is mostly stories.

Since humans started remembering stories and passing them on orally, and more recently in writing, we have had history: the more-or-less-true grand stories of human civilisation. Even the many artefacts of our civilisation are kinds of stories. They are embodied stories, which narrate the process by which they were designed and made; the plans and drawings we use to formally record those designs are literally stories which tell how to arrange and join materials in space to fashion the artefact. Project plans are narratives of a different kind: they tell the story of the future steps that must be taken to achieve a goal. Computer programs are stories too, except that they contain multiple possible narratives (bifurcated with branches and reiterated with loops), whose paths are determined by input data and which are retold over and over at blinding speed within the computer.
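To make that analogy a little more concrete, here is a minimal sketch in Python (my choice of language; the function and its toy 'events' are purely illustrative, not anything from the post) of a trivial program whose narrative, the sequence of events it relates, bifurcates at a branch, is reiterated by a loop, and is determined entirely by its input data.

```python
def tell(inputs):
    """Relate one 'narrative': the path taken through the program's
    branches and loops depends entirely on the input data."""
    story = []                      # the events related on this run
    for x in inputs:                # a loop: the narrative is reiterated
        if x > 0:                   # a branch: the narrative bifurcates
            story.append(f"step forward {x}")
        else:
            story.append("pause")
    return story

# Different inputs yield different tellings of the same program-story.
print(tell([1, 0, 3]))   # ['step forward 1', 'pause', 'step forward 3']
print(tell([0, 0]))      # ['pause', 'pause']
```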

Now consider individual humans. There is a persuasive view in psychology that each of us owes our identity, our sense of self, to our personal life stories. The physical stuff that makes us, the cells of our body, is regenerated and replaced continuously, so that there's very little of you that existed 5 years ago. (I just realised the fillings in my teeth are probably the oldest part of me!) Yet you are still you. You feel like the same you of 10, 20 or, in my case, 50 years ago - since I first became self-aware. I think that it's the lived and remembered personal narrative of our lives that provides us with the feeling, the illusion if you like, of a persistent self. This is, I think, why degenerative brain diseases are so terrifying. They appear to eat away that personal narrative so devastatingly that the person is ultimately lost, even while their physical body continues living.

So I was tremendously excited to be invited to a cross-disciplinary workshop on Narrative and Complex Systems at the York Centre for Complex Systems Analysis a couple of weeks ago, co-organised by York professors Richard Walsh (English) and Susan Stepney (Computer Science). For the first time I found myself in a forum in which I could share and debate ideas on narrative.

In preparing for the workshop I realised that perhaps the idea of robots telling each other stories isn't as far-fetched as it first appears. Think about a simple robot, like the e-puck. What does the story of its life consist of? Well, it is the complete history of all of its movements, including turns and so on, punctuated by interactions with its environment. Because the robot and its set of behaviours are simple, those interactions are pretty simple too. It occurred to me that it is perfectly possible for a robot to remember everything that has ever happened to it. Now place a number of these robots together, in a simple 'society' of robots, and provide them with the mechanism to exchange 'life stories' (or, more likely, fragments of life stories). This mechanism is something we already developed in the Artificial Culture project - it is social learning by imitation. These robots would be telling each other stories.
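Here is a minimal sketch, again in Python, of what such a 'society' might look like. The class, its methods and the toy actions are my own invention for illustration; they are not code from the Artificial Culture project, and a real e-puck controller and imitation mechanism would be far richer than this.

```python
import random

class StorytellingRobot:
    """A toy robot that remembers its entire 'life story' as a list of
    events and can share fragments of that story with another robot."""

    def __init__(self, name):
        self.name = name
        self.life_story = []          # the complete remembered history

    def act(self, action):
        # Every movement or interaction is appended to the life story.
        self.life_story.append(action)

    def tell_fragment(self, length=3):
        # Share a short, contiguous fragment of the life story.
        if not self.life_story:
            return []
        start = random.randrange(len(self.life_story))
        return self.life_story[start:start + length]

    def imitate(self, fragment):
        # Social learning by imitation: re-enact (and hence remember)
        # another robot's story fragment as part of one's own story.
        for action in fragment:
            self.act(action)

# Two robots live a little, then one tells a story and the other imitates it.
a, b = StorytellingRobot("A"), StorytellingRobot("B")
for step in ["forward", "turn left", "bump wall", "turn right", "forward"]:
    a.act(step)
b.imitate(a.tell_fragment())
print(b.life_story)
```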

But, I hear you ask, would these stories have any meaning? Well, to start with, I think we must abandon the notion that they would necessarily mean anything to us humans. After all, these are robots telling each other stories. OK, so would the stories mean anything to the robots themselves, especially robots with limited 'cognition'? Now we are in the interesting territory of semiotics or, to be more accurate, robosemiotics. What, for instance, would one robot's story signify to another? That signification would, I think, be the meaning. But I think that to go any further we would need to do the robot experiment I have outlined here.

And what would be the point of my proposed robot experiment? It is, I suggest, this:
to explore, with an abstract but embodied model, the relationship between the narrative self and shared narrative, i.e. culture.
By doing this experiment, would we be, as Richard Gregory suggested, really onto something?