Monday, September 29, 2014

The feeling of what it is like to be a Robot

Philosopher Thomas Nagel famously characterised subjective experience as “something that it is like to be…” and suggested that for a bat, for instance, there must be something that it is like to be a bat [1]. Nagel also argued that, since we humans differ so much from bats in the way we perceive and interact with the world, it is impossible for us to know what it is like for a bat to be a bat. I am fascinated, intrigued and perplexed by Nagel’s ideas in equal measure. And, since I think about robots, I have assumed that if a robot were ever to have conscious subjective experience then there must be something that it is like to be a robot that – even though we had designed that robot – we could not know.

But I now believe it may eventually be just possible for a human to experience something approaching what it is like to be a robot. To do this would require two advances: one in immersive robot tele-operation, the other in the neuroscience of body self-image manipulation.

Consider first tele-operation. Tele-operated robots are, basically, remotely controlled robots: the unloved poor relations of intelligent autonomous robots. Neither intelligent nor autonomous, they are nevertheless successful and important first-wave robots; think of the remotely operated vehicles (ROVs) engaged in undersea exploration or oil-well repair and maintenance. Think also of off-world exploration: the Mars rovers are hugely successful, the rock stars of tele-operated robots.

Roboticists are good at appropriating technologies or devices developed for other applications and putting them to good use in robots: examples are WiFi, mobile phone cameras and the Microsoft Kinect. With the high-profile launch of the Oculus Rift headset, Oculus’s acquisition by Facebook, and competing devices from Sony and others, there are encouraging signs that immersive Virtual Reality (VR) is on the verge of becoming a practical, workable proposition. Of course VR’s big market is video games – but VR can and, I believe, will revolutionise tele-operated robotics.

Imagine a tele-operated robot with a camera linked to the remote operator’s VR headset, so that every time she moves her head to look in a new direction the robot’s camera moves in sync, and she sees and hears what the robot sees and hears in immersive high-definition stereo. Of course the reality experienced by the robot’s operator is real, not virtual, but the head-mounted VR technology is the key to making it work. Add haptic gloves for control, and the robot’s operator has an intuitive and immersive interface to the robot.
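The head-to-camera coupling described here can be sketched as a simple control loop. The classes below are hypothetical stand-ins, not a real headset SDK or robot API; they simply show the idea of slaving a pan-tilt camera to the operator’s head pose, with the camera’s physical limits respected:

```python
# Hypothetical interfaces: neither class models a real SDK; they stand in
# for an Oculus-style headset API and a remote pan-tilt camera controller.

class Headset:
    """Stub reporting the operator's head orientation in degrees."""
    def get_orientation(self):
        # yaw (left/right) and pitch (up/down); a real headset SDK would
        # also report roll and carry the stereo video/audio channels.
        return {"yaw": 30.0, "pitch": -10.0}

class RobotCamera:
    """Stub for a remote pan-tilt camera with limited travel."""
    PAN_LIMIT = 170.0   # degrees either side of centre
    TILT_LIMIT = 60.0

    def __init__(self):
        self.pan = 0.0
        self.tilt = 0.0

    def move_to(self, pan, tilt):
        # Clamp commands to the mechanism's physical limits.
        self.pan = max(-self.PAN_LIMIT, min(self.PAN_LIMIT, pan))
        self.tilt = max(-self.TILT_LIMIT, min(self.TILT_LIMIT, tilt))

def sync_step(headset, camera):
    """One iteration of the loop: slave the camera to the head pose."""
    pose = headset.get_orientation()
    camera.move_to(pose["yaw"], pose["pitch"])
    return camera.pan, camera.tilt
```

In practice this loop would run at the headset’s frame rate, and end-to-end latency between head movement and updated video is the critical factor for the operator’s sense of presence.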

Now consider body self-image modification. Using visual feedback, researchers have discovered that it is surprisingly easy to (temporarily) modify a person’s body self-image. In the famous rubber hand illusion a screen is positioned to hide the subject’s real hand, and a rubber hand is placed in full view where her hand might plausibly be; a researcher then strokes both the real and rubber hands simultaneously with a soft brush. Within a minute or so she begins to feel that the rubber hand is hers, and flinches when the researcher suddenly tries to hit it with a hammer.

Remarkably, H. H. Ehrsson and his colleagues extended the technique to the whole body, in a study called ‘If I Were You: Perceptual Illusion of Body Swapping’ [2]. Here the human subject wears a headset and looks down at what appears to be his own body; what he actually sees is a mannequin, viewed from a camera mounted on the mannequin’s head. Simultaneous tactile and visual feedback triggers the illusion that the mannequin’s body is his own. It seems to me that if this technique works for mannequins then it should also work for robots. Of course it would need to be developed to the point that elaborate set-ups involving mirrors, cameras and researchers providing tactile feedback are no longer needed.

Now imagine such a body self-image modification technology combined with fully immersive robot tele-operation based on advanced Virtual Reality technology. I think this might lead to the robot's human operator experiencing the illusion of being one with the robot, complete with a body self-image that matches the robot's possibly non-humanoid body. This experience may be so convincing that the robot's operator experiences, at least partially, something approaching what it is like to be a robot. Philosophers of mind would disagree - and rightly so; after all, this robot has no independent subjective experience of the world, so there is no something that it is like to be. The human operator could not experience what it is like to think like a robot, but she could experience what it is like to sense and act in the world like a robot.

The experience may be so compelling that humans become addicted to the feeling of being a robot fish, robot dragon or some other fantasy creature, preferring it to the quotidian experience of their own bodies.

[1] Nagel, Thomas (1979) What is it like to be a bat? In: Mortal Questions. Cambridge University Press.

[2] Petkova VI, Ehrsson HH (2008) If I Were You: Perceptual Illusion of Body Swapping. PLoS ONE 3(12): e3832. doi:10.1371/journal.pone.0003832


  1. Nice provocative title, interesting speculation, and a fascinating piece of research you quote.

    It took me a while to appreciate the point of Thomas Metzinger's and others' interest in phantom limbs, but I now sort of relate it to lip-synching in movies.

    We can accept a few frames difference between the sounds we hear and the lip movements we see (any more, though, and the actors begin to look oddly disconnected, disembodied, from what they are saying). We also seem quite relaxed that the loudspeakers in the cinema do not necessarily line up with the mouths on screen.

    Perhaps this is because we seem to have such a strong impulse to make immediate sense of our multimodal experiences.

    I was cycling to work the other day when there was a ghastly mechanical squealing noise in front of me. Without thinking, I glanced down at the wheel and found myself trying to match the rhythm of the squeals to the wheel's rotation - before it dawned on me it was coming from a car across the street.

    Why are we so anxious to marry cause and effect?

    I suppose that being able to combine information as quickly as possible - on the next threat, or on your next meal - would confer a healthy survival advantage.

    Getting back to your post (sorry!)... in particular we try to make sense of other people in terms of what it's like to be ourselves.

I'm not sure that we wouldn't have some idea of what it was actually like to be a robot in the example you give: for the illusion to work, the differences have to be within acceptable margins, so we would need a good overlap with the robot's experience.

    You could argue that we don't really manage any better with ourselves, that we assign feelings to the things that fit. I liked this recent short video from Nicholas Humphrey where he talks about how we can't experience pain directly, but only an internal representation of pain. (Must get one of his books. Quite fancy 'Soul Dust'.)

    So, just possibly, if you could make a robot think (no pressure, Alan!) it was feeling pain, i.e. recreate a close enough internal illusion for it, rather than worrying about the different surface materials, then could we also, by donning the headgear and the gloves, really know what it felt like to be a robot?

    I'm not at all sure it's a very good idea to go that far down that route (for what sort of feeling being would you then have created?) but it might be worthwhile to explore, cautiously, just a little further - if you could ever work out how you would know when you wanted to stop.

    Virtual reality synaesthesia, maybe? People might pay for that!

    "It seems to me that if this technique works for mannequins then it should also work for robots. Of course it would need to be developed to the point that elaborate illusions involving mirrors, cameras and other researchers providing tactile feedback are not needed."
    The key to these effects is information. We don't 'know' what our body is like and can do, we constantly 'perceive' these things. All we have is perception; there's no 'peeking behind the curtain' to see what's really going on. This means that if you change the information so as to specify a change to our bodies, we smoothly recalibrate to accommodate the new information.

    This is important; we are not static, we pick up tools and change our body's abilities; we get tired, or rested, or injured. All these things change our ability to act on the world and we have to be able to recalibrate or else our actions become dysfunctional very fast.

    We are not infinitely flexible, but the Ehrsson work clearly shows we can recalibrate extensively. There are follow-ups giving people child-sized virtual avatars, etc., and getting matching changes in affordance perception.

    Long story short: I like your idea a lot and I think that it would be a very cool test of a lot of things. Don't let the philosophers bring you down: anyone who shifts the goal posts as the data roll in isn't to be trusted anyway :)

  3. I think that this is a very interesting perspective but I am not sure that I agree. I think that what is proposed here is the ability to perceive the world from the vantage point of the robot. However, what Nagel originally had in mind was that even if we could use immersive technologies to make us feel as though we were positioned in exactly the same place as the bat, looking at the world from wherever it is perched at that moment, we still could not know how the bat is experiencing the world, because we do not perceive the world the same way a bat does (relying heavily on vision, with no way to experience echolocation) - even if we share its vantage point. And I think that this would apply to the robot as well.