
Wednesday, August 31, 2011

Discussing Asimov's laws of robotics and a draft revision

This is me discussing robot ethics with Dallas Campbell for BBC1's Bang Goes The Theory. I outline the five new ethical principles for roboticists proposed by the EPSRC/AHRC working group. Click here for the working group's full report, including a commentary on these draft proposals.



With thanks to Simon Mackie, senior content producer for the Bang Goes The Theory website, for the code to embed this video clip.

Saturday, August 20, 2011

Robohype and why it's bad for robotics

You are technically literate, an engineer or scientist perhaps, with a particular interest in robotics, but you've been stranded on a desert island for the past 30 years. Rescued and returned to civilisation, you are keen to find out how far robotics science and technology has advanced and - rejoicing in the marvellous inventions of the Internet and its search engines - you scour the science press for robonews. Scanning the headlines, you are thrilled to discover that robots are alive and sending messages from space; robots can think, or are "capable of human reasoning or learning"; robots have feelings, relate to humans, demonstrate love, even behave ethically. Truly, robots have achieved their promised potential.

Then, of course, you start to dig deeper and read the science behind these stories. The truth dawns. Although the robotics you are reading about is significant work, done by very good people, the fact is - you begin to realise - that now, in 2011, robots cannot properly be said to think, feel, empathise, love or be moral agents; and certainly no robot is, in any meaningful sense, alive or sentient. Of course, your disappointment is tempered by the discovery that astonishing strides have nevertheless been made.

So, robotics is subject to journalistic hype. Nothing new there, then. So why am I writing about it here (apart from the fact that it annoys the hell out of me)? I write because I think that robohype is a serious problem and an issue that the robotics community should worry about. The problem is this. Most people who read the press reports are lay readers who - perfectly reasonably - will not read much beyond the headline; certainly few will look for the source research. So every time a piece of robohype appears (pretty much every day), the level of mass delusion about what robots can do increases a little more, and the expectation gap widens. Remember that the expectation gap - the gap between what people think robots are capable of and what they are really capable of - is already wide because of the grip robots have on our cultural imagination. We are at the same time fascinated by and fearful of robots, and this fascination feeds the hype because we want (or dread) the robofiction to become true - which is, of course, one of the reasons for the hype in the first place.

But the expectation gap is a serious problem. It's a problem because it makes our work as roboticists harder, not least because many of the hard problems we are working on are problems many people believe have already been solved. It's a problem because it is, I believe, creating pressure on us to over-promise when writing grant applications, so solid, important, incremental research proposals get rejected in favour of fantasy projects. Those projects inevitably fail to deliver, and over time funding bodies will react by closing down robotics research initiatives - leading to the kind of funding winter that AI experienced in the 1990s. And it's a problem because it creates societal expectations of robotics that cannot be met - think of the unrealistic promise of military robots with an artificial conscience.

Who's to blame for the robohype? Well, we roboticists must share the blame. When we describe our robots and what they do, we use anthropocentric words, especially when trying to explain our work to people outside the robotics community. Within the robotics and AI community we all understand that when we talk about an "intelligent robot", what we mean is a robot that behaves as if it were intelligent; "intelligent robot" is a convenient shorthand. So when we talk to journalists, we should not be too surprised when "this robot behaves, in some limited sense, as if it has feelings" gets translated into "this robot has feelings". But science journalists must, I think, do better than this.

Words in robotics, as in life, are important. When we describe our robots, their capabilities and their potential, and when science reporters and bloggers bring our work to wider public attention, we need to choose our words with great care. In humanoid robotics, where, after all, the whole idea is to create robots that emulate human behaviours, capabilities and cognition, perhaps we simply cannot avoid using anthropocentric words. Maybe we need a new lexicon for describing humanoid robots; perhaps we should stop using words like think, feel, imagine, believe, love and happy altogether? Whatever the answer, I am convinced that robohype is damaging to the robotics project and that something must be done.