My position on this question has always been pretty straightforward. It's easy to make robots that behave as if they have feelings, but quite a different matter to make robots that really have feelings.
But now I'm not so sure. There are, I think, two major problems with this apparently clear distinction between behaving as if and really having feelings.
The first is: what do we mean by really having feelings? I'm reminded that I once said to a radio interviewer who asked me whether a robot could have feelings: if you can tell me what feelings are, I'll tell you whether a robot can have them or not. Our instinct (feeling, even) is that feelings are something to do with hormones, the messy and complicated chemistry that too often seems to get in the way of our lives. Thinking, on the other hand, we feel to be quite different: the cool, clean process of neurons firing, brains working smoothly. Like computers. Of course this instinct, this dualism, is quite wrong. We now know, for instance, that damage to the emotional centre of the brain can lead to an inability to make decisions. This false dualism has led, I think, to the trope of the cold, calculating, unfeeling robot.
I think there is also some unhelpful biological essentialism at work here. We prefer it to be true that only biological things can have feelings. But which biological things? Single-celled organisms? No, they don't have feelings. Why not? Because they are too simple. Ah, so only complex biological things have feelings. Ok, what about sharks or crocodiles; they're complex biological things; do they have feelings? Well, basic feelings like hunger, but not sophisticated feelings like love or regret. Ah, mammals then. But which ones? Well, elephants seem to mourn their dead. And dogs of course. They have a rich spectrum of emotions. Ok, but how do we know? Well, because of the way they behave; your dog behaves as if he's pleased to see you because he really is pleased to see you. And of course they have the same body chemistry as us, and since our feelings are real* so must theirs be.
And this brings me to the second problem. The question of as if. I've written before that when we (roboticists) talk about a robot being intelligent, what we mean is a robot that behaves as if it is intelligent. In other words an intelligent robot is not really intelligent, it is an imitation of intelligence. But for a moment let's not think about artificial intelligence, but artificial flight. Aircraft are, in some sense, an imitation of bird flight. And some recent flapping wing flying robots are clearly a better imitation - a higher fidelity simulation - than fixed-wing aircraft. But it would be absurd to argue that an aircraft, or a flapping wing robot, is not really flying. So how do we escape this logical fix? It's simple. We just have to accept that an artefact, in this case an aircraft or flying robot, is both an emulation of bird flight and really flying. In other words an artificial thing can be both behaving as if it has some property of natural systems and really demonstrating that property. A robot can be behaving as if it is intelligent and - at the same time - really be intelligent. Are there categories of properties for which this would not be true? Like feelings..? I used to think so, but I've changed my mind.
I'm now convinced that we could, eventually, build a robot that has feelings. But not by simply programming behaviours so that the robot behaves as if it has feelings, nor by inventing some exotic chemistry that emulates biochemical hormonal systems. I think the key is robots with self-models: robots that have simulations of themselves inside themselves. If a robot is capable of internally modelling the consequences of its own, or others', actions on itself, then it seems to me it could demonstrate something akin to regret (about being switched off, for instance). A robot with a self-model also has the computational machinery to model the consequences of actions on conspecifics - other robots. It would have an artificial Theory of Mind and that, I think, is a prerequisite for empathy. Importantly, we would also program the robot to model heterospecifics, in particular humans, because we absolutely require empathic robots to be empathic towards humans (and, I would argue, animals in general).
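To make the idea a little more concrete, here is a minimal sketch, in Python, of the kind of self-model I have in mind. It is purely illustrative, not a proposal for a real architecture: the states, actions and numbers (SelfModel, valence, the battery variable and so on) are all invented for the example. The point is only that a robot which simulates the consequences of candidate actions on itself can compute a regret-like signal - the gap between the outcome of the action actually taken and the best outcome its own model says was available.

```python
# Illustrative sketch only: a robot with an internal self-model that simulates
# the consequences of candidate actions on itself, chooses accordingly, and
# computes a regret-like signal. All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class State:
    battery: float    # 0.0 (empty) .. 1.0 (full)
    powered_on: bool

class SelfModel:
    """The robot's internal simulation of itself: predicts the consequence
    of an action and how 'good' the resulting state is for the robot."""

    def simulate(self, state: State, action: str) -> State:
        # Hand-written toy dynamics, purely for illustration.
        if action == "recharge":
            return State(min(1.0, state.battery + 0.5), True)
        if action == "explore":
            return State(max(0.0, state.battery - 0.3), state.battery > 0.3)
        if action == "switch_off":
            return State(state.battery, False)
        return state

    def valence(self, state: State) -> float:
        # Being powered on and well charged is 'good'; being off is 'bad'.
        return (1.0 if state.powered_on else -1.0) + state.battery

def choose_action(model: SelfModel, state: State, actions: list[str]) -> str:
    # Pick the action whose simulated consequence the self-model values most.
    return max(actions, key=lambda a: model.valence(model.simulate(state, a)))

def regret(model: SelfModel, state: State, taken: str, actions: list[str]) -> float:
    # Regret-like signal: how much better the best alternative would have been.
    best = max(model.valence(model.simulate(state, a)) for a in actions)
    return best - model.valence(model.simulate(state, taken))

if __name__ == "__main__":
    model, state = SelfModel(), State(battery=0.2, powered_on=True)
    actions = ["recharge", "explore", "switch_off"]
    print(choose_action(model, state, actions))         # -> 'recharge'
    print(regret(model, state, "switch_off", actions))  # large: 'regret' at being switched off
```

The same machinery, pointed at an internal model of another robot or of a human rather than at the self, would be the step towards the artificial Theory of Mind mentioned above.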
So, how would this robot have feelings? It would, I believe, have feelings by virtue of being able to model the consequences of actions - both its own and others' - on itself and others. This would lead to it making decisions about how to act and behave which would demonstrate feelings, like regret, guilt, pleasure or even love, with an authenticity that would make it impossible to argue that it doesn't really have feelings.
So your robot doggie could really be pleased to see you.
*except when they're not.
Postscript. A colleague has tweeted that I am confusing feelings and emotion here. Mea culpa. I'm using the word feelings here in a pop-psychology, everyday sense of feeling hungry, or tired, or a sense of regret. Wikipedia notes that in psychology the word feelings is 'usually reserved for the conscious subjective experience of emotion'. The same colleague asserts that what I've outlined could lead to artificial empathy, but not artificial emotion (or, presumably, feelings). I'm not sure I understand what emotions are well enough to argue. But I guess the idea I'm really trying to explore here is artificial subjectivity. Surely a robot with artificial subjectivity whose behaviour expresses and reflects that subjective experience could be said to be behaving emotionally?
Related blog posts:
Robot know thyself
Could a robot have feelings?
Hi Alan -- my group has been building & publishing models of emotions as part of action selection systems for game characters (mostly, more recently robots) for almost a decade now. Here's a list of papers (not all are models) http://www.cs.bath.ac.uk/~jjb/web/ai.html#emot and there are some more recent ones due to Swen Gaudl not linked there.
To what extent are these really emotions? To the same extent a gripper is a robot hand, or a camera a robot eye. As usual, I think the problem is to understand the lack of magic, yet also the presence, importance or complexity of a human attribute, and then to look at what we really need for a robot to function.
I strongly disagree with Cynthia Breazeal's recent claims, as I think you must too, that we need robots to be more human so that they treat us right. I prefer the fourth EPSRC principle of robotics – that we should avoid the humanoid and go for the transparent.
Thanks Joanna. Quite agree: what really are emotions? There's no magic - just huge and remarkable (as well as perplexing) complexity. We're also agreed re humanoid robots - in fact I've argued that to build robots that look like humans, but only behave in very simple ways, is itself unethical. I think robot pets are a better bet.
Assuming Cartesian dualism is wrong, it comes down to a computability question, and then not a very principled one. If we were able to simulate an organism and its environment at the molecular or sub-molecular level, based on reasonably precise real-world data, and the organism that we were modeling had feelings, then it follows that the simulation must have feelings too, unless someone can offer a convincing reason why such a simulation is impossible even in principle. All the objections I've heard so far are just Cartesian Dualism in disguise - a feeble clinging on to some kind of quantum magic or other reassuringly "uncomputable" property that makes "real" life different from a faithful simulation of it.
I agree that there's a distinction to be made between having feelings and having "feeeeelings", in the sense of being conscious of them. Feelings at the lower level are no different from sensations; they're just internal sensations rather than environmental ones. But in both cases we can either be consciously aware of them or not. Constructing artificial emotions is easy; using them for all of their biological purposes is easy; the question is what it takes for a robot to be conscious of them. And that, too, is a biological question. Give them the right equipment and they'll feel things in the same way that we do. This equipment almost certainly involves having a model of the world that includes a model of the self. Consciousness is (surely) just what happens when such a model - a planning mechanism that can be decoupled from the sensorimotor loop - is operating. I see no reason why consciousness shouldn't be implementable on different substrates - it's an issue about architectures, not substances - and if that's true then feeeeelings are implementable too.
Like you say, the mistake, as in most of robotics and AI, comes when people try to emulate the symptoms with no regard to the underlying causes.
Thank you Steve for your thoughtful comments. Your distinction between feelings and feelings-that-a-thing-is-conscious-of is really helpful. While I was writing here primarily about regular feelings, I think you are absolutely right that a conscious robot should be capable of being aware of its feelings in the way you and I are. And I 100% agree with you that artificial consciousness should be, in principle, possible. Immensely difficult of course, as well as fraught with ethical and societal worries.
Technology gets other things to do some work for us, to make life easier.
If we want that work to be done in ever more human, less machine-like ways, then eventually we start to find ourselves exploring not only how to recreate our humanity in some alternative form but also, in the process, the extent to which we may ourselves be like machines.
This progress could include insights from building machines that offer better speech, emotional, cultural or social recognition. Ethical bots have to have a way of judging how we expect others in our presence to behave, and so on.
Now, through art, academic and practical study, we're heaping up piles of disjointed information and theories about how we, animals, vegetables and minerals behave and why. Pepper uses ideas drawn, I believe, from emotional rhythms in music?
So there are lots of possible directions that haven’t all joined up. But the history of science is that cultural evolution will make its own connections, surprising ourselves, I think, with what we sometimes come up with.
I think we're approaching the stage where we need to balance what more we want to know about ourselves with what more we could do to improve our lives.
Maybe we passed that stage thousands of years ago or maybe we'll never quite get there.
I think one place human-machine interaction is heading is towards copying ourselves, as it is found that adding ever lower-level features does make a difference to the end result.
It might be best to accept that sooner rather than later and try to formalise not just what needs to be put in, but what we ought to leave out.
This could involve deciding what disabilities would be acceptable in (or to) a machine and then how to engineer them so that we're (and they're?) happy with the result.
One trick to writing a story – or a joke – is to start with a really good ending and then make sure everything fits and leads up to it.
A couple of days ago some friends found themselves doing this, after a bit of headscratching at the start line, in order to leave clues for a bike trail through the woods.
I think I’m sort of agreeing with Joanna but suggesting starting backwards?
Thanks Paul for your comments. I think you raise some interesting and difficult areas - you're not the first to suggest that robots, in some respects, are like severely disabled animals. Some, including philosopher Thomas Metzinger, argue persuasively that building conscious machines is unethical because (and I paraphrase) you are creating an entity that is likely to experience phenomenal suffering. This is a difficult ethical question that I think will need to be addressed sooner or later.