
Sunday, October 07, 2007

You really need to know what your bot(s) are thinking (about you)

The projected ubiquity of personal companion robots raises a range of interesting but also troubling questions.

There can be little doubt that an effective digital companion, whether embodied or not, will need to be sensitive to the emotional state of its human partner and able to respond sensitively. It will, in other words, need artificial empathy - such a digital companion would (need to) behave as if it has feelings. One current project at the Bristol Robotics Laboratory is developing such a robot, which will of course need some theory of mind if it is to respond appropriately. Robots with feelings take us into new territory in human-machine interaction. We are of course used to temperamental machinery, and many of us are machine junkies. We are addicted to our cars and dishwashers, our mobile phones and iPods. But what worries me about a machine with feelings (and frankly it doesn't matter whether it really has feelings or not) is how it will change the way humans feel about the machine.

Human beings will develop genuine emotional attachments to companion bots. Recall the sensitivity of Weizenbaum's secretary about her private conversations with ELIZA - arguably the world's first chat-bot - in the 1960s. For more recent evidence look no further than the AIBO pet owners' clubs. Here is a true story from one such club to illustrate how blurred the line between pet and robot has already become. One AIBO owner complained that her robot pet kept waking her at night with its barking. She would "jump out of bed and try to calm the robo-pet, stroking its smooth metallic-gray back to ease it back to sleep". She was saved from "going crazy" when it was suggested that she switch the dog off at night to prevent its barking.

It is inevitable that people will develop emotional attachments to, even dependencies on, companion bots. This, of course, has consequences. But what interests me is whether the bots acquire a shared cultural experience. Another BRL project, called 'the emergence of artificial culture in robot societies', is investigating this possibility. Consider this scenario. Your home has a number of companion bots. Some may be embodied, others not. They will inevitably be connected, networked via your home wireless LAN, and thus able to chat with each other at the speed of light. This will of course bring some benefits - the companion bots will be able to alert each other to your needs: "she's home", or "he needs help with getting out of the bath". But what if your bots start talking about you?
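To make the scenario concrete, here is a minimal sketch of what that bot-to-bot chatter might look like at the plumbing level, assuming nothing more than each bot broadcasting small status messages over the home LAN via UDP. The bot names, the port number and the message format are all hypothetical illustrations, not any real companion-bot protocol:

```python
# Hypothetical illustration: companion bots sharing status reports over the home LAN.
# Each bot broadcasts a small JSON message about its human via UDP; the observations
# echo the examples in the scenario above and are purely illustrative.
import json
import socket

BROADCAST_ADDR = ("255.255.255.255", 50007)  # assumed port, not any real standard


def broadcast_status(bot_id: str, observation: str) -> None:
    """Send a one-line status report to every other bot on the LAN."""
    msg = json.dumps({"from": bot_id, "about": "human", "observation": observation})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg.encode("utf-8"), BROADCAST_ADDR)


def listen_for_gossip() -> None:
    """Print whatever the other bots are saying - assuming we can still read it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", BROADCAST_ADDR[1]))
        while True:
            data, addr = sock.recvfrom(1024)
            report = json.loads(data.decode("utf-8"))
            print(f"{report['from']} @ {addr[0]}: {report['observation']}")


if __name__ == "__main__":
    broadcast_status("kitchen-bot", "she's home")
    broadcast_status("bathroom-bot", "he needs help getting out of the bath")
```

The point of the sketch is simply that the plumbing is trivial: once the bots share a network and a message format, what they choose to say to each other - and whether we can make any sense of it - is another matter entirely.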

Herein lies the problem that I wish to discuss. The bots' shared culture will be quintessentially alien, in effect an exo-culture (and I don't mean that to imply anything sinister). Bot culture could well be inscrutable to humans, which means that when bots start gossiping with each other about you, you will have absolutely no idea what they're talking about because - unlike them - you have no theory of mind for your digital companions.

--------------------------------------------------------------
This is a short 'position statement' prepared for the e-Horizons forum Artificial Companions in Society: Perspectives on the Present and Future, 25th and 26th October 2007, Oxford Internet Institute.
