Wednesday, June 27, 2012

Robot know thyself

How can we measure self-awareness in artificial systems?

This was a question that came up during a meeting of the Awareness project advisory board two weeks ago at Edinburgh Napier University. Awareness is a project bringing together researchers and projects interested in self-awareness in autonomic systems. In philosophy and psychology self-awareness refers to the ability of an animal to recognise itself as an individual, separate from other individuals and the environment. Self-awareness in humans is, arguably, synonymous with sentience. A few other animals - notably elephants, dolphins and some apes - appear to demonstrate self-awareness. I think far more species may well experience self-awareness - but in ways that are impossible for us to discern.

In artificial systems it seems we need a new and broader definition of self-awareness - but what that definition is remains an open question. Defining artificial self-awareness as self-recognition assumes a very high level of cognition, equivalent to sentience perhaps. But we have no idea how to build sentient systems, which suggests we should not set the bar so high. And lower levels of self-awareness may be hugely useful* and interesting - as well as more achievable in the near-term.

Let's start by thinking about what a minimally self-aware system would be like. Think of a robot able to monitor its own battery level. One could argue that, technically, that robot has some minimal self-awareness, but I think that to qualify as 'self-aware' the robot would also need some mechanism to react appropriately when its battery level falls below a certain level. In other words, a behaviour linked to its internal self-sensing. It could be as simple as switching on a battery-low warning LED, or as complex as suspending its current activity to go and find a battery charging station.

So this suggests a definition for minimal self-awareness:
A self-aware system is one that can monitor some internal property and react, with an appropriate behaviour, when that property changes.
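As a concrete illustration, here is that definition reduced to a toy control loop - the robot class, threshold and behaviours are all invented for the sketch, not taken from any real system:

```python
import random

LOW_BATTERY = 0.2  # react when the charge fraction falls below 20%

class Robot:
    """A toy robot with a single internal property it can self-sense."""
    def __init__(self):
        self.battery = 1.0  # fully charged

    def work(self):
        self.battery -= random.uniform(0.01, 0.05)  # activity drains charge

    def recharge(self):
        print(f"battery at {self.battery:.2f} - seeking charging station")
        self.battery = 1.0

def control_loop(robot, steps=100):
    for _ in range(steps):
        # Minimal self-awareness: monitor an internal property and react,
        # with an appropriate behaviour, when it falls below a threshold.
        if robot.battery < LOW_BATTERY:
            robot.recharge()
        else:
            robot.work()

control_loop(Robot())
```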
So how would we measure this kind of self-awareness? Well, if we know the internal mechanism (because we designed it), then it's trivial to declare the system as (minimally) self-aware. But what if we don't? Then we have to observe the system's behaviour and deduce that it must be self-aware, just as it is reasonably safe to assume an animal visits the watering hole to drink because of some internal sensing of 'thirst'.

But it seems to me that we cannot invent some universal test for self-awareness that encompasses all self-aware systems, from the minimal to the sentient; a kind of universal mirror test. Of course the mirror test is itself unsatisfactory. For a start it only works for animals (or robots) with vision and - in the case of animals - with a reasonably unambiguous behavioural response that suggests "it's me!" recognition.

And it would be trivially easy to equip a robot with a camera and image processing software that compares the camera image with a (mirror) image of itself, then lights an LED, or makes a sound (or something) to indicate "that's me!" if there's a match. Put the robot in front of a mirror and the robot will signal "that's me!". Does that make the robot self-aware? This thought experiment shows why we should be sceptical about claims of robots that pass the mirror test (although some work in this direction is certainly interesting). It also demonstrates that, just as in the minimally self-aware robot case, we need to examine the internal mechanisms.
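To see just how trivial, here is roughly what such a 'mirror test pass' might look like - a sketch using a plain pixel-difference match, with stand-in images and an arbitrary threshold of my own choosing:

```python
import numpy as np

def looks_like_me(camera_image, stored_self_image, threshold=10.0):
    """Naive 'self-recognition': mean absolute pixel difference between
    the camera view and a stored (mirror) image of the robot itself."""
    diff = np.mean(np.abs(camera_image.astype(float) -
                          stored_self_image.astype(float)))
    return diff < threshold

# Stand-ins for real camera frames: in front of a mirror, the camera
# view closely matches the stored self-image...
camera = np.full((64, 64), 128, dtype=np.uint8)
stored = np.full((64, 64), 130, dtype=np.uint8)

if looks_like_me(camera, stored):
    print("that's me!")  # ...yet nothing here merits the term self-aware
```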

So where does this leave us? It seems to me that self-awareness is, like intelligence, not one thing that animals or robots have more or less of. And it follows, again like intelligence, there cannot be one test for self-awareness, either at the minimal or the sentient ends of the self-awareness spectrum.

Related posts:
Machine Intelligence: fake or real?
How Intelligent are Intelligent Robots?
Could a robot have feelings?

* In the comments below Andrey Pozhogin asks the question: What are the benefits of being a self-aware robot? Will it do its job better for selfish reasons?

A minimal level of self-awareness, illustrated by my example of a robot able to sense its own battery level and stop what it's doing to go and find a recharging station when the battery level drops below a certain level, has obvious utility. But what about higher levels of self-awareness? A robot that is able to sense that parts of itself are failing and either adapt its behaviour to compensate, or fail safely, is clearly a robot we're likely to trust more than a robot with no such internal fault detection. In short, it's a safer robot because of this self-awareness.

But these robots, able to respond appropriately to internal changes (to battery level, or faults) are still essentially reactive. A higher level of artificial self-awareness can be achieved by providing a robot with an internal model of itself. Having an internal model (which mirrors the status of the real robot as self-sensed, i.e. it's a continuously updating self-model) allows a level of predictive control. By running its self-model inside a simulation of its environment the robot can then try out different actions and test the likely outcomes of alternative actions. (As an aside, this robot would be a Popperian creature of Dennett's Tower of Generate and Test - see my blog post here.) By assessing the outcomes of each possible action for its safety the robot would be able to choose the action most likely to be the safest. A self-model represents, I think, a higher level of self-awareness with significant potential for greater safety and trustworthiness in autonomous robots.
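Sketched in code, that predict-then-act loop might look like the following - here `simulate` stands in for the continuously updated self-model and `safety_score` for the safety assessment, both assumptions of the sketch rather than any particular robot architecture:

```python
def safest_action(state, candidate_actions, simulate, safety_score):
    """Popperian generate-and-test: try each candidate action on the
    internal self-model (a simulation), score the predicted outcome
    for safety, and enact the action with the best predicted outcome."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        predicted = simulate(state, action)   # tested internally...
        score = safety_score(predicted)       # ...not out in the world
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Toy usage: the state is a distance to a hazard; actions change it.
effects = {"advance": -1.0, "stay": 0.0, "retreat": +1.0}
print(safest_action(2.0, list(effects),
                    simulate=lambda s, a: s + effects[a],
                    safety_score=lambda s: s))  # further is safer: "retreat"
```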

To answer the 2nd part of Andrey's question, the robot would do its job better, not for selfish reasons - but for self-aware reasons.
(postscript added 4 July 2012)

Tuesday, June 19, 2012

60 years of asking Can Robots Think?

Last week at the Cheltenham Science Festival we debated the question Can robots think? It's not a new question. Here, for instance, is a wonderful interview from 1961 on the very same question. So, the question hasn't changed. Has the answer?


Well, it's interesting to note that I and fellow panelists Murray Shanahan and Lilian Edwards were much more cautious last week in Cheltenham than our illustrious predecessors - both on the question of whether present-day robots can think (answer: no), and on whether robots (or computers) will be able to think any time soon (answer: again, no).

The obvious conclusion is that 50 years of Artificial Intelligence research has failed. But I think that isn't true. AI has delivered some remarkable advances, like natural speech recognition and synthesis, chess programs, conversational AI (chatbots) and lots of 'behind the scenes' AI (of the sort that figures out your preferences and annoyingly presents personalised advertising on web pages). But what is undoubtedly true is that Wiesner, Selfridge and Shannon were being very optimistic (after all, AI had only been conceived a decade earlier by Alan Turing). Whereas today, perhaps chastened and humbled, most researchers take a much more cautious approach to these kinds of claims.

But I think there are more complex reasons.

One is that we now take a much stricter view of what we mean by 'thinking'. As I explained last week in Cheltenham, it's relatively easy to make a robot that behaves as if it is thinking (and, I'm afraid, also relatively easy to figure out that the robot is not really thinking). So, it seems that a simulation of thinking is not good enough*. We're now looking for the real thing.

That leads to the second reason. It seems that we are not much closer to understanding how cognition in animals and humans works than we were 60 years ago. Actually, that's unfair. There have been tremendous advances in cognitive neuroscience but - as far as I can tell - those advances have brought us little closer to being able to engineer thinking in artificial systems. That's because it's a very very hard problem. And, to add further complication, it remains a philosophical as well as a scientific problem.

In Cheltenham Murray Shanahan brilliantly explained that there are three approaches to solving the problem. The first is what we might call a behaviourist approach: don't worry about what thinking is, just try and make a machine that behaves as if it's thinking. The second is the computational modelling approach: try and construct, from first principles, a theoretical model of how thinking should work, then implement that. And third, the emulate real brains approach: scan real brains in sufficiently fine detail and then build a high fidelity model with all the same connections, etc, in a very large computer. In principle, the second and third approaches should produce real thinking.

What I find particularly interesting is that the first of these 3 approaches is more or less the one adopted by the conversational AI programs entered for the Loebner prize competition. Running annually since 1992, the Loebner prize is based on the test for determining if machines can think, famously suggested by Alan Turing in 1950 and now known as the Turing test. To paraphrase: if a human cannot tell whether she is conversing with a machine or another human - and it's a machine - then that machine must be judged to be thinking. I strongly recommend reading Turing's beautifully argued 1950 paper.

No chatbot has yet claimed the $100,000 first prize, but I suspect that we will see a winner sooner or later (personally I think it's a shame Apple hasn't entered Siri). But the naysayers will still argue that the winner is not really thinking (despite passing the Turing test). And I think I would agree with them. My view is that a conversational AI program, however convincing, remains an example of 'narrow' AI. Like a chess program a chatbot is designed to do just one kind of thinking: textual conversation. I believe that true artificial thinking ('general' AI) requires a body.

And hence a new kind of Turing test: for an embodied AI, AKA robot.

And this brings me back to Murray's 3 approaches. My view is that the 3rd approach 'emulate real brains' is at best utterly impractical because it would mean emulating the whole organism (of course, in any event, your brain isn't just the 1300 or so grammes of meat in your head, it's the whole of your nervous system). And, ultimately, I think that the 1st (behaviourist - which is kind of approaching the problem from the outside in) and 2nd (computational modelling - which is an inside out approach) will converge.

So when, eventually, the first thinking robot passes the (as yet undefined) Turing test for robots I don't think it will matter very much whether the robot is behaving as if it's thinking - or actually is, for reasons of its internal architecture, thinking. Like Turing, I think it's the test that matters.


*Personally I think that a good enough behavioural simulation will be just fine. After all, an aeroplane is - in some sense - a simulation of avian flight but no one would doubt that it is also actually flying.

Tuesday, May 08, 2012

The Symbrion swarm-organism lifecycle

I've blogged before about the Symbrion project: an ambitious 5-year project to build a swarm of independently mobile autonomous robots that have the ability - when required - to self-assemble into 3D 'multi-cellular' artificial organisms. The organisms can then - if necessary - disassemble back into their constituent individual robots. The idea is that robots in the system can choose when to operate in swarm mode, which might be the optimal strategy for searching a wide area, or in organism mode, to - for instance - negotiate an obstacle that cannot be overcome by a single robot. We can envisage future search and rescue robots that work like this - as imagined on this ITN news clip from 2008.

Our main contribution to the project to date has been the design of algorithms for autonomous self-assembly and disassembly - that is the process of transition between swarm and organism. This video shows the latest version of the algorithm developed by my colleague Dr Wenguo Liu. It is demonstrated with 2 Active Wheel robots (developed at the University of Stuttgart - who also lead the Symbrion project) and 1 Backbone robot (developed at the Karlsruhe Institute of Technology).


Let me explain how this works. The docking faces of the robots have infra-red (IR) transmitters and receivers. When a 'seed' robot - in this case the Active Wheel robot on the left - decides to form an organism with a particular body plan, it broadcasts a 'recruitment' signal from its IR transmitters, with the 'type' of robot it needs to recruit - in this case a Backbone robot. The IR transmitters then act as a beacon which the responding robot uses to approach the seed robot, and the same IR system is then used for final alignment prior to physical docking.

Once docked, wired (ethernet) communication is established between robots, and the seed robot communicates the body-plan for the organism to the newly recruited Backbone robot. Only then does the Backbone robot know what kind of organism it is now part of, and where in the organism it is. In this case the Backbone robot determines that the partially formed organism then needs another Active Wheel and it recruits this robot using the same IR system. After the third robot has docked it too discovers the overall body plan and where in the organism it is. In this case it is the final robot to be recruited and the organism self-assembly is complete.
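The sequence can be sketched as follows - to be clear, the message names and helper functions here are my own illustrative inventions, not the actual Symbrion code:

```python
def recruit_next(robot, body_plan):
    """Runs on any docked robot, starting with the seed: if the body
    plan is incomplete, recruit the next robot of the required type."""
    needed_type = body_plan.next_needed_type()
    if needed_type is None:
        return  # organism complete
    robot.ir_broadcast("RECRUIT", robot_type=needed_type)  # IR beacon
    robot.wait_for_docking()        # responder approaches, aligns, docks
    # Only after physical docking, over the wired ethernet link, does
    # the newly recruited robot learn the body plan and its place in it.
    robot.ethernet_send("BODY_PLAN", plan=body_plan,
                        position=body_plan.next_position())

def on_body_plan(robot, plan, position):
    """Runs on a newly docked robot when the body plan arrives."""
    robot.position_in_plan = position  # now it knows where it sits
    recruit_next(robot, plan)          # and continues growing the organism
```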

Using control coordinated via the wired ethernet intranet across its three constituent robots, the organism then makes the transition from 2D planar form to 3D, which - in this case - means that the 2 Active Wheel robots activate their hinge motors to bend and lift the Backbone robot off the floor. The 3D organism is now complete and can move as a single unit. The process is completely reversible, and the complete 'lifecycle' from swarm -> organism -> swarm is shown in this video clip.

It is important to stress that the whole process is completely distributed and autonomous. These robots are not being remotely controlled, nor is there a central computer coordinating their actions. Each robot has the same controller, and determines its own actions on the basis of sensed IR signals, or data received over the wired ethernet. The only external signal sent was to tell the first robot to become the 'seed' robot to grow the whole organism. Later in the project we will extend the algorithm so that a robot will decide, itself, when to become a seed and which organism to grow.

The Symbrion system is not bio-mimetic in the sense that there are (as far as I know) no examples in nature of cells that spontaneously assemble to become functioning multi-cellular organisms and vice-versa. It is, however, bio-mimetic in a different sense. The robots, while in swarm mode, are analogous to stem cells. The process of self-assembly is analogous to morphogenesis, and - during morphogenesis - the process by which robot 'cells' discover their position, role and function within the organism is analogous to cell-differentiation.

While what I have described in this blog post is a milestone following several years of demanding engineering effort by a very talented team of roboticists, some of the ultimate goals of the project are scientific rather than technical. One is to address the question - using the Symbrion system as an embodied model - of under what environmental conditions it is better to remain as single cells, or to collaborate symbiotically as multi-celled organisms. It seems far-fetched, but perhaps we could model - in some abstract sense - the conditions that might have triggered the major transition in biological evolution of some 1000 million years ago which saw the emergence of simple multi-cellular forms.

Saturday, April 21, 2012

What's wrong with Consumer Electronics?

When I was a boy the term consumer electronics didn't exist. Then the sum total of household electronics was a wireless, a radiogram and a telephone; pretty much everyone had a wireless, fewer a radiogram, and on our (lower middle-class) street perhaps one in five houses had a telephone. (In an emergency it was normal to go round to the neighbour with the phone.) In the whole of my childhood we only ever had the same wireless set and gramophone, and both looked more like furniture than electronics, housed in handsome polished wooden cabinets. Of course it was their inner workings, with the warm yellow glow of the thermionic valves, that fascinated me and got me into trouble when I took them to pieces, and that led to my chosen career in electronics.

How things have changed. Now most middle-class households have more computing power than existed in the world 50 years ago. Multiple TVs, mobile phones, computing devices (laptops, games consoles, iPads, Kindles and the like) and the supporting infrastructure of wireless routers, printers and backup storage are now normal. And most of this stuff will be less than five years old. If you're anything like me, the Hi-Fi system will be the oldest bit of kit you own (unless you ditched it for the iPod and docking station). Of course this gear is wonderful. I often find myself shocked by the awesomeness of everyday technology. And understanding how it all works only serves to deepen my sense of awe. But I'm also profoundly worried - and offended too - by the way we consume our electronics.

What offends me is this: modern solid-state electronics is unbelievably reliable - what's wrong with consumer electronics is nothing, yet we treat this magical stuff - fashioned of glass - as stuff to be consumed then thrown away. Think about the last time you replaced a gadget because the old one had worn out or become unrepairable. Hard, isn't it? If you still possessed it, the mobile phone you had 15 years ago would - I'd wager - still work perfectly. I have a cupboard here at home with all manner of obsolete kit. A dial-up modem for instance, circa 1993. It still works fine - but there's nothing to dial into. The fact is that we are compelled to replace perfectly good nearly-new electronics with the latest model either because the old stuff is rendered obsolete (because it's no longer compatible with the current generation of o/s, applications or infrastructure - or is unsupported), or worse still because the latest kit has 'must have' features or capabilities not present on the old.

I would like to see a shift in consumer electronics back to a model in which gadgets are designed to be repaired and consumers are encouraged to replace or upgrade every ten years or more, not every year. What I'm suggesting is of course exactly the opposite of what's happening now. Current devices are becoming less repairable, with batteries you can't replace and designs that even skilled technicians find difficult to take apart without risk of damage. The latest iPad, for example, was given a very low repairability score (2/10) by iFixit.

And the business model most electronics companies operate is fixated on the assumption that profit, and growth, can only be achieved through very short product life cycles. But not all of our stuff is like this. We don't treat our houses, or gardens, or dining room tables, or central heating systems, or any number of things as consumer goods, but the companies that build and sell houses, or dining room tables, or landscape gardens, etc, still turn a profit. Why can't electronics companies find a business model that treats electronic devices more like houses and less like breakfast cereal?

I don't think consumer electronics should be consumed at all.

Wednesday, January 11, 2012

New experiments in the new lab

Last week my PhD student Mehmet started a new series of experiments in embodied behavioural evolution. The exciting new step is that we've now moved to active imitation. In our previous trials robot-robot imitation has been passive; in other words, when robot B imitates robot A, robot A receives no feedback at all - not even that its action has been imitated. With active imitation, robot A receives feedback - it receives information on which of its behaviours has been imitated, how well the behaviour has been imitated, and by whom.

The switch from passive to active imitation has required a major software rewrite, both for the robots' control code and for the infrastructure. We made the considered decision that the feedback mechanism - unlike the imitation itself - is not embodied. In other words the system infrastructure both figures out which robot has imitated which (not trivial to do) and radios the feedback to the robots themselves. The reason for this decision is that we want to see how that feedback can be used to - for instance - reinforce particular behaviours so that we can model the idea that agents are more likely to re-enact behaviours that have been imitated by other agents, over those that haven't. We are not trying to model active social learning (in which a learner watches a teacher, then the teacher watches the learner to judge how well they've learned, and so on) so we avoid the additional complexity of embodied feedback.

In the first tests with the new active imitation setup we've introduced a simple change to the behaviour selection mechanism. Every robot has a memory with all of its initialised or learned behaviours. Each one of those behaviours now has a counter that gets incremented each time that particular behaviour is imitated. A robot selects which of its stored behaviours to enact, at random, but with probabilities that are determined by the counter values so that a higher count behaviour is more likely to be selected. But, as I've discovered peering at the data generated from the initial runs, it's not at all straightforward to figure out what's going on and - most importantly - what it means. It's the hermeneutic challenge again.
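In code, that selection mechanism might look like the sketch below; the +1 smoothing, which keeps never-imitated behaviours selectable, is my own assumption rather than necessarily the scheme we implemented:

```python
import random

def select_behaviour(behaviours, imitation_counts):
    """Choose a stored behaviour at random, with probability weighted
    by how often each has been imitated: higher count, more likely."""
    weights = [1 + imitation_counts[b] for b in behaviours]
    return random.choices(behaviours, weights=weights, k=1)[0]

# Toy usage: behaviour "b", imitated three times, is enacted twice as
# often as "a" (imitated once) and four times as often as "c" (never).
counts = {"a": 1, "b": 3, "c": 0}
print(select_behaviour(list(counts), counts))
```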

So, for now here's a picture of the experimental setup in our shiny new* lab. Results to follow!

*In November 2011 the Bristol Robotics Lab moved from its old location, in the DuPont building, to T block on the extended Coldharbour Lane campus.

Thursday, January 05, 2012

Philip Larkin - a recollection

My first encounter with Philip Larkin was as a fresher in October 1974. As university librarian it was Larkin's annual duty to give an introductory lecture to the year's fresh intake. I recall seeing this tall, portly man with bottle-top glasses in a bank manager's suit. Not at all how I imagined a poet should look. (My Dad had told me about Larkin when I first announced I'd chosen to go to Hull University, otherwise I'm sure I'd have taken no notice at all.) To this audience of several hundred 18-year olds - more interested in eyeing each other for fanciableness than listening to some bloke in a suit - Larkin declared with a plummy, resonant voice and measured delivery, as if it was a line from Shakespeare, "...educated people should know three things: what words mean, where places are and when things happened".

My first encounter with his poetry was several months later. It was a vacation and I was at home with Mum and Dad, younger sister and brother. Larkin was to be featured in a TV documentary and the whole family gathered expectantly round the set at the appointed time. Then the first stanza was read "They fuck you up, your mum and dad/They may not mean to, but they do/They fill you with the faults they had/And add some extra, just for you." Cue acute, embarrassed silence. My Mum, I think, said "Well I don't think much of this" and without another word the TV was switched off. We didn't discuss this (in fact I don't think we ever discussed it). It was some years later that I got to know (and love) Larkin's poetry and to reflect on the idiot producer who chose to start that TV programme with arguably his worst poem, just for the shock value of the word fuck on the BBC (this was 1975). It's not that I'm prudish about the sentiment expressed, it's just not a good poem.

Fast forward about six years. I've accepted a junior lecturing post while finishing off my PhD, and find myself a member of the science faculty board. As librarian, Larkin is an ex-officio member and I recall him contributing his opinions to the board's debates. I've long forgotten the subject of those debates but I vividly recall the manner of Larkin's contributions. He would stand, as if addressing parliament, and speak what I can only describe as Perfect English. His articulation, diction and metre were actor-perfect. If you had written down exactly what he said, and punctuation would have been easy for he paused in commas and semi-colons, you would get perfect prose; each word exactly the right word, each phrase perfectly turned. I was, at the time, going out with a girl who worked in the library and she told me Larkin's memoranda were the same: each a miniature essay, a perfectly formed construction of letters.

I never knew Larkin. Nobody did. He was a distant, unapproachable man and, by all accounts, not at all likeable. The closest he and I came to conversation was exchanging nods across the lunchtime staff common-room bar. I find it satisfyingly ironic therefore that a man so apparently detached and unemotional should have written what is, for me, the finest love poem of the 20th Century: An Arundel Tomb (1).

The poem starts: Side by side, their faces blurred, the earl and countess lie in stone, and then in the second verse the beautiful observation: Such plainness of the pre-baroque hardly involves the eye, until it meets his left-hand gauntlet, still clasped empty in the other; and one sees, with a sharp tender shock, his hand withdrawn, holding her hand. I love the words sharp tender shock; then in the next verse: Such faithfulness in effigy... A sculptor’s sweet commissioned grace.

In the fifth verse Larkin constructs a spine tingling evocation of the long passage of time: Rigidly they persisted, linked, through lengths and breadths of time. Snow fell, undated. Light each summer thronged the glass. A bright litter of birdcalls strewed the same bone-riddled ground. And then the remarkable conclusion of the poem: The stone fidelity they hardly meant has come to be their final blazon, and to prove our almost-instinct almost true: what will survive of us is love.

Forgive me for removing the line breaks in these extracts from the poem. In doing so I want to illustrate my observation that, in Larkin's writing, there is little distance between prose and poetry. When reading his poems I've reflected often on why it is that a man with such an apparently effortless ability to produce perfect English published so little, and agonised so much over his writing. I now realise that he didn't have a problem with writing, but with life. "The object of writing," Larkin once said, "is to show life as it is, and if you don't see it like that you're in trouble, not life."


(1) from The Whitsun Weddings, Faber and Faber, 1964. And here is both the full text of An Arundel Tomb and Larkin reading the poem.

Monday, December 05, 2011

Swarm robotics at the Science Museum

Just spent an awesomely busy weekend at the Science Museum, demonstrating Swarm Robotics. We were here as part of the Robotville exhibition, and - on the wider stage - European Robotics Week. I say we because it was a team effort, led by my PhD student Paul O'Dowd who heroically manned the exhibit all four days, and supported also by postdoc Dr Wenguo Liu. Here is a gallery of pictures from Robotville on the science museum blog, and some more pictures here (photos by Patu Tifinger):

Although exhausting, it was at the same time uplifting. We had a crowd of very interested families and children the whole time - in fact the organisers tell me that Robotville had just short of 8000 visitors over the 4 days of the exhibition. What was really nice was that the whole exhibition was hands-on, and our sturdy e-puck robots - at pretty much eye-level for 5-year olds - attracted lots of small hands interacting with the swarm. A bit like putting your hand into an ants' nest (although I doubt the kids would have been so keen on that).

Let me explain what the robots were doing. Paul had programmed two different demonstrations, one with fixed behaviours and the other with learning.

For the fixed behaviour demo the e-puck robots were programmed with the following low-level behaviours:
  1. Short-range avoidance. If a robot gets too close to another robot or an obstacle then it turns away to avoid it.
  2. Longer-range attraction. If a robot can sense other robots nearby but gets too far from the flock, then it turns back toward the flock. And while in a flock, move slowly.
  3. If a robot loses the flock then it speeds up and wanders at random in an effort to regain the flock (i.e. another robot).
  4. While in a flock, each robot will communicate (via infra-red) its estimate of the position of an external light source to nearby robots in the flock. While communicating the robot flashes its green body LED.
  5. Also while in a flock, each robot will turn toward the 'consensus' direction of the external light source.
The net effect of these low-level behaviours is that the robots will both stay together as a swarm (or flock), and over time, move as a swarm toward the external light source. Both of these swarm-level behaviours are emergent because they result from the low-level robot-robot and robot-environment interactions. While the flocking behaviour is evident in just a few minutes, the overall swarm movement toward the external light source is less obvious. In reality even the flocking behaviour appears chaotic, with robots losing each other, and leaving the flock, or several mini-flocks forming. The reason for this is that all of the low-level behaviours make use of the e-puck robots' multi-purpose infra-red sensors, and the environment is noisy; in other words because we don't have carefully controlled lighting there is lots of ambient IR light constantly confusing the robots.
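For the curious, the combination of behaviours can be sketched as a single control step. The ranges, speeds and helper functions below are illustrative stand-ins of my own (assuming neighbour objects carrying distance, bearing and light_estimate fields), not Paul's actual e-puck code:

```python
import math, random

AVOID_RANGE, FLOCK_RANGE = 0.1, 0.5  # metres - illustrative values
SLOW, FAST = 0.02, 0.08              # speeds while flocking / searching

def turn_away_from_nearest(robot, neighbours):
    nearest = min(neighbours, key=lambda n: n.distance)
    return nearest.bearing + math.pi  # head the opposite way

def turn_towards_flock(robot, neighbours):
    return sum(n.bearing for n in neighbours) / len(neighbours)

def step(robot, neighbours):
    """One control step: returns a (heading, speed) pair for this robot."""
    if any(n.distance < AVOID_RANGE for n in neighbours):
        return turn_away_from_nearest(robot, neighbours), SLOW  # behaviour 1
    if neighbours and min(n.distance for n in neighbours) > FLOCK_RANGE:
        return turn_towards_flock(robot, neighbours), SLOW      # behaviour 2
    if not neighbours:
        return random.uniform(-math.pi, math.pi), FAST          # behaviour 3
    # Behaviours 4 and 5: pool the light-source estimates received over
    # IR from flock-mates and turn toward the consensus direction.
    estimates = [n.light_estimate for n in neighbours] + [robot.light_estimate]
    return sum(estimates) / len(estimates), SLOW
```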

The learning demo is a little more complex and makes use of an embedded evolutionary algorithm, actually running within the e-puck robots, so that - over time - the robots learn how to flock. This demo is based on Paul's experimental work, which I described in some detail in an earlier blog post, so I won't go into detail here. It's the robots with the yellow hats in the lower picture above. What's interesting to observe is that initially the robots are hopeless - constantly crashing into each other or the arena walls - but noticeably, over 30 minutes or so, we can see the robots learn to control themselves, using information from their sensors. The weird thing here is that, every minute or so, each robot's control software is replaced by a great-great-grandchild of itself. The robot's body is not evolving, but invisibly its controller is evolving, so that later generations of controller are more capable.
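Reduced to its bare bones, the shape of such an embedded evolutionary loop might be the following (1+1)-style sketch - a simplification of my own, not Paul's actual algorithm:

```python
import random

def mutate(genome, sigma=0.1):
    # Offspring controller: the parent's parameters, each slightly perturbed.
    return [g + random.gauss(0.0, sigma) for g in genome]

def onboard_evolution(evaluate, genome_size=8, generations=30):
    """Runs inside the robot: 'wear' a controller live for a while
    (say, a minute of flocking), score it, and keep whichever of
    parent and offspring scored better. Repeat."""
    parent = [random.uniform(-1.0, 1.0) for _ in range(genome_size)]
    parent_fitness = evaluate(parent)
    for _ in range(generations):
        child = mutate(parent)
        child_fitness = evaluate(child)  # the robot runs the child controller
        if child_fitness >= parent_fitness:
            parent, parent_fitness = child, child_fitness
    return parent

# Toy fitness: closeness of the parameters to some target behaviour.
target = [0.5] * 8
best = onboard_evolution(lambda g: -sum((a - b) ** 2 for a, b in zip(g, target)))
```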

The magical moment of the two days was when one young lad - maybe 12 years old, who very clearly understood everything straight away and seemed to intuit things I hadn't explained - stayed nearly an hour explaining and demonstrating to other children. Priceless.

Tuesday, September 20, 2011

TAROS lecture: The Ethical Roboticist

Here are the slides for the IET public lecture I gave in Sheffield on 2 September 2011 on the final day of the conference Towards Autonomous Robotic Systems (TAROS).

Wednesday, August 31, 2011

Discussing Asimov's laws of robotics and a draft revision

This is me discussing robot ethics with Dallas Campbell for BBC1's Bang Goes The Theory. I outline the five new ethical principles for roboticists proposed by the EPSRC/AHRC working group. Click here for the working group's full report, including a commentary on these draft proposals.

With thanks to Simon Mackie, senior content producer for the Bang Goes The Theory website, for the code to embed this video clip.

Saturday, August 20, 2011

Robohype and why it's bad for robotics

You are technically literate, an engineer or scientist perhaps with a particular interest in robotics, but you've been stranded on a desert island for the past 30 years. Rescued and returned to civilisation you are keen to find out how far robotics science and technology has advanced and - rejoicing in the marvellous inventions of the Internet and its search engines - you scour the science press for robonews. Scanning the headlines you are thrilled to discover that robots are alive, and sending messages from space; robots can think or are "capable of human reasoning or learning"; robots have feelings, relate to humans, or demonstrate love, even behave ethically. Truly robots have achieved their promised potential.

Then of course you start to dig deeper and read the science behind these stories. The truth dawns. Although the robotics you are reading about is significant work, done by very good people, the fact is - you begin to realise - that now, in 2011, robots cannot properly be said to think, feel, empathise, love or be moral agents; and certainly no robot is, in any meaningful sense, alive, or sentient. Of course your disappointment is tempered by the discovery that astonishing strides have nevertheless been made.

So, robotics is subject to journalistic hype. Nothing new there then. So why am I writing about it here (apart from the fact it annoys the hell out of me)? I write because I think that robohype is a serious problem and an issue that the robotics community should worry about. The problem is this. Most people who read the press reports are lay readers who - perfectly reasonably - will not read much beyond the headline; certainly few will look for the source research. So every time a piece of robohype appears (pretty much every day) the level of mass-delusion about what robots do increases a bit more, and the expectation gap widens. Remember that the expectation gap - the gap between what people think robots are capable of and what they're really capable of - is already wide because of the grip robots have on our cultural imagination. We are at the same time fascinated and fearful of robots, and this fascination feeds the hype because we want (or dread) the robofiction to become true. Which is of course one of the reasons for the hype in the first place.

But the expectation gap is a serious problem. It's a problem because it makes our work as roboticists harder, not least because many of the hard problems we are working on are problems many people think already solved. It's a problem because it is, I believe, creating pressure on us to over-promise when writing grant applications, so solid important incremental research grants get rejected in favour of fantasy projects. Those projects inevitably fail to deliver and over time funding bodies will react by closing down robotics research initiatives - leading to the kind of funding winter that AI saw in the 1990s. And it's a problem because it creates societal expectations on robotics that cannot be met - think of the unrealistic promise of military robots with an artificial conscience.

Who's to blame for the robohype? Well we roboticists must share the blame. When we describe our robots and what they do we use anthropocentric words, especially when trying to explain our work to people outside the robotics community. Within the robotics and AI community we all understand that when we talk about an intelligent robot, what we mean is a robot that behaves as if it were intelligent; intelligent robot is a convenient shorthand. So when we talk to journalists we should not be too surprised when "this robot behaves, in some limited sense, as if it has feelings" gets translated to "this robot has feelings". But science journalists must, I think, do better than this.

Words in robotics, as in life, are important. When we describe our robots, their capabilities and their potential, and when science reporters and bloggers bring our work to wider public attention, we need to choose our words with great care. In humanoid robotics where, after all, the whole idea is to create robots that emulate human behaviours, capabilities and cognition, perhaps we just cannot avoid using anthropocentric words. Maybe we need a new lexicon for describing humanoid robots; perhaps we should stop using words like think, feel, imagine, belief, love, happy altogether? Whatever the answer, I am convinced that robohype is damaging to the robotics project and something must be done.

Monday, July 25, 2011

Manifesto for a Robot Standard Interface Specification

This blog post could well turn out to be the most boring I've ever written - but I think it's important. I want to write about something that robotics desperately needs: an industry standard interface specification (see, I told you it was going to be boring).

Let me explain what I mean by talking about a fantastically successful standard called MIDI, which has without doubt played a significant role in the success of music technology. MIDI stands for musical instrument digital interface. It provides an industry standard for connecting together electronic musical instruments, i.e. synthesisers, computers and all manner of electronic music gizmos. The important thing about MIDI is that it specifies everything: the physical plug and socket, the electrical signalling, the communications protocol and the messages that can be sent or received by MIDI connected devices. With great foresight MIDI's designers provided in the protocol both standard messages that all MIDI-equipped electronic musical instruments would be expected to send, receive and recognise, and customisable messages that manufacturers could specify for particular instruments and devices. In MIDI each instrument is able to identify itself to another device connected via MIDI; it can say, for example, I'm a Roland synthesiser model ABC. If the other device, a sequencer for instance, recognises the Roland ABC it can then access that instrument's custom features (in addition to the standard functions of all MIDI devices).

Robotics needs a MIDI specification. Let's call it RSIS, for Robot Standard Interface Specification. Like MIDI, RSIS would need to specify everything from the physical plug and socket, to the structure and meaning of RSIS messages. Devising a spec for RSIS would not be trivial - my guess is that it would be rather more complex than MIDI because of the more diverse types of robot devices and peripherals. But the benefits would be immense. RSIS would allow robot builders to plug and play different complex sensors and actuators, from different manufacturers, to create new robot bodies and new functionality. Imagine, for instance, being able to take a Willow Garage PR2 robot and fit a humanoid robot hand from the Shadow Robot Company. Of course there would need to be a mechanical mounting to physically attach the new hand, but that's not what I'm talking about here; I'm referring to the control interface, which would be connected via RSIS. The PR2 would then, via the RSIS connection, sense that a new device had been connected and, using standard RSIS messages, ask the new device to identify itself. On discovering it has a handsome new Shadow hand the PR2 would then install the device driver (downloading it from the cloud if necessary) and, within a few seconds, the new hand becomes fully functional in true plug and play fashion.
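That plug-and-play handshake can be sketched in a few lines - every message name, field and helper below is invented to illustrate the idea, since no such specification exists (yet):

```python
def on_device_connected(host, port):
    """Hypothetical host-side handler, run when RSIS detects a new device."""
    # Standard RSIS message: every compliant device must answer this.
    reply = host.send(port, {"msg": "IDENTIFY"})
    # e.g. {"vendor": "Shadow Robot Company", "model": "Hand"}
    key = (reply["vendor"], reply["model"])
    driver = host.local_drivers.get(key)
    if driver is None:
        driver = host.download_driver(*key)  # fetch from the cloud
    host.install(port, driver)
    # Standard functions work on any RSIS device; the model-specific
    # driver additionally exposes the device's custom features.
    host.send(port, {"msg": "ENABLE"})
```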

Industry standards, and the people who create them, are the unsung heroes of technology. Without these standards, like UMTS, TCP/IP, HTTP or IEEE 802.11 (WiFi to you and me), we wouldn't have ubiquitous mobile phone, internet, web or wireless tech that just works. But more than that, standards are, I think, part of the essential underpinning infrastructure that kick-starts whole new industry sectors. That's why I think standards are so critical to robotics.

Maybe a Robot Standard Interface Specification (or the effort to create it) already exists? If so, I'd very much like to hear about it.

Tuesday, May 31, 2011

Machine intelligence: fake or real?

A few days ago, at the excellent HowTheLightGetsIn festival, I took part in a panel debate called Rise of the Machines. Here was the brief:
From 2001 to The Matrix, intelligent machines and robots have played a central role in our fictions. Some now claim they are about to become fact. Is artificial intelligence possible or just a science fiction fantasy? And would it be a fundamental advance for humankind or an outcome to be feared?
Invited at the last minute, I found myself debating these questions with a distinguished panel consisting of philosophers Peter Hacker and Hilary Lawson, and law academic Lilian Edwards. Henrietta Moore brilliantly chaired.

I shan't attempt to summarise the debate here. I certainly couldn't do it, or the arguments of fellow panelists, justice. In any event it was filmed and should appear soon on IAI TV. What I want to talk about here is the question - which turned out to be central to the debate - of whether machines are, or could ever be regarded as, intelligent.

The position I adopted and argued in the debate is best summed up as simulationist. For the past 10 years or so I have believed our grand project as roboticists is to build robots that aim to be progressively higher fidelity imitations of life, and intelligence. This is a convenient and pragmatic approach: robots that behave as if they are intelligent are no less interesting (as working models of intelligence for instance), or potentially useful, than robots that really are intelligent, and the ethical questions that arise no less pressing*. But, I realised in Hay-on-Wye, the simulationist approach also plays to the arguments of philosophers, including Peter Hacker, that machines cannot ever be truly intelligent in principle.

Reflecting on that debate I realised that my erstwhile position in effect accepts that robots, or AI, will never be truly intelligent, never better than a simulation; that machines can never do more than pretend to be smart. However, I'm now not at all sure that position is logically tenable. The question that keeps going around my head is this: if a thing - biological or artificial - behaves as if it is intelligent, then why shouldn't it be regarded as properly intelligent? Surely behaving intelligently is the same as being intelligent. Isn't that what intelligence is?

Let me offer two arguments in support of this proposition.

There are those who argue that real intelligence is uniquely a property of living organisms. They admit that artificial systems might eventually demonstrate a satisfactory emulation of intelligence but will argue that nothing artificial can truly think, or feel. This is the anthropocentric (or perhaps more accurately, zoocentric) position. The fundamental problem with this position, in my view, is that it fails to explain which properties of biological systems make them uniquely intelligent. Is it that intelligence depends uniquely on exotic properties of biological stuff? The problem here is there's no evidence for such properties. Perhaps intelligence is uniquely an outcome of evolution? Well robot intelligence can be evolved, not designed. Perhaps advanced intelligence requires social structures in order to emerge? I would agree, and point to social robotics as a promising equivalent substrate. Advanced intelligence uniquely requires, perhaps, nurture because really smart animals are not born smart. Again I would agree, and point to the new field of developmental robotics. In short, I argue that it is impossible to propose a property of biological systems, required for intelligence, that is unique to those biological systems and cannot exist as a property of artificial systems.

My second argument is around the question of how intelligence is measured or determined. As I've blogged before, intelligence is a difficult thing to define let alone measure. But one thing is clear - no current measure of intelligence in humans or animals requires us to look inside their brains. We determine a human or animal to be intelligent exclusively on the basis of its actions. For simple animals we observe how they react and look at the sophistication of those responses (as prey or predator for instance). In humans we look formally to examinations (to measure cognitive intelligence) or more generally to ingenuity in social discourse (Machiavellian intelligence), or creativity (artistic or technical intelligence). For advanced animal intelligence we devise ever more ingenious tests, the results from which sometimes challenge our prejudices about where those animals sit on our supposed intelligence scale. We heard from Lilian Edwards during the debate that, in common law, civil responsibility is likewise judged exclusively on actions. A judge may have to make a judgement about the intentions of a defendant, but they have to do so only on the evidence of their actions**. I argue, therefore, that it is inconsistent to demand a different test of intelligence for artificial systems. Why should we expect to determine whether a robot is truly intelligent or not on the basis of some not-yet-determined properties of its internal cognitive structures, when we do not require that test of animals or humans?

The counter-intuitive and uncomfortable conclusion: machine intelligence is not fake, it's real.


*perhaps even more so given that such robots are essentially fraudulent.
**with thanks to Lilian for correcting my wording here.

Friday, May 06, 2011

Revisiting Asimov: the Ethical Roboticist

Well, it's taken a while, but the draft revised 'laws of robotics' have now been published. The New Scientist article Roboethics for Humans, reporting on the EPSRC/AHRC initiative in roboethics, appears in this week's issue (Issue 2811, 7 May 2011). These new draft ethical principles emerged from a workshop on ethical, legal and societal issues in robotics.

The main outcome from the workshop was a draft statement aimed at initiating a debate within the robotics research and industry community, and more widely. That statement is framed by, first, a set of high-level messages for researchers and the public which encourage responsibility from the robotics community, and hence (we hope) trust in the work of that community. And second, a revised and updated version of Asimov’s three laws of robotics for designers and users of robots; not laws for robots, but guiding principles for roboticists.

The seven high-level messages are:
  1. We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
  2. Bad practice (in robotics) hurts us all.
  3. Addressing obvious public concerns (about robots) will help us all make progress.
  4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
  5. To understand the context and consequences of our research we should work with experts from other disciplines including: social sciences, law, philosophy and the arts.
  6. We should consider the ethics of transparency: are there limits to what should be openly available?
  7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.
Isaac Asimov's famous 'laws of robotics' first appeared in 1942 in his short story Runaround. They are (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law, and (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



Asimov’s laws updated: instead of 'laws for robots' our revision is a set of five draft 'ethical principles for robotics', i.e. moral precepts for researchers, designers, manufacturers, suppliers and maintainers of robots. We propose:
  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  2. Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws and fundamental rights & freedoms, including privacy.
  3. Robots are products. They should be designed using processes which assure their safety and security.
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  5. The person with legal responsibility for a robot should be attributed.
Now it's important to say, firstly, that these are the work of a group of people so the wording represents a negotiated compromise*. Secondly, they are a first draft. The draft was circulated within the UK robotics community in February, then last month presented to a workshop on Ethical, Legal and Societal issues at the European Robotics Forum in Sweden. So, we already have great feedback - which is being collected by EPSRC - but that feedback has not yet been incorporated into any revisions. Thirdly, there is detailed commentary - especially explaining the thinking and rationale for the 7 messages and 5 ethical principles above. That commentary can be found here.

Comments and criticism welcome! To feedback either,
  • post a comment in response to this blog,
  • email EPSRC at RoboticsRetreat@epsrc.ac.uk, or
  • directly contact myself or any of the workshop members listed in the commentary.

*So, while I am a passionate advocate of ethical robotics and very happy to defend the approach that we've taken here, there are some detailed aspects of these principles that I'm not 100% happy with.

Friday, April 29, 2011

Ill robots might get a temperature too

Just spent 4 days at the beautiful Schloss Dagstuhl in SW Germany attending a seminar on Artificial Immune Systems. The Dagstuhl is a remarkable concept – a place dedicated to residential retreats on advanced topics in computer science. Everything you need is there to discuss, think and learn. And learn is what I just did – to the extent that by lunchtime today when the seminar closed I felt like the small boy who asks to be excused from class because “miss, my brain is full”.

Knowing more or less nothing about artificial immune systems it was, for me, like sitting in class, except that my teachers were world experts in the subject. A real privilege. So, what are artificial immune systems? They are essentially computer systems inspired by and modelled on biological immune systems. AISs are, I learned, both engineering systems for detecting and perhaps repairing and recovering from faults in artificial systems (in effect system maintenance), and scientific systems for modelling and/or visualising natural immune systems.

I learned that real immune systems are not just one system but several complex and inter-related systems, the biology of which is not fully understood. Thus, interestingly, AISs are modelled on (and models of) our best understanding so far of real immune systems. This of course means that biologists almost certainly have something to gain from engaging with the AIS community. (There are interesting parallels here with my experience of biologists working with roboticsts in Swarm Intelligence.)

The first thing I learned was about the lines of defence to external attack on bodies. The first is physical: the skin. If something gets past this then bodies apply a brute force approach by, for instance, raising the temperature. If that doesn't work then more complex mechanisms in the innate immune system kick in: white blood cells that attempt to 'eat' the invaders. But more sophisticated pathogens require a response from the last line of defence: the adaptive immune system. Here the immune system 'learns' how to neutralise a new pathogen with a process called clonal selection. I was astonished to learn that clonal selection actually 'evolves' a response. Amazing – embodied evolution going on super-fast inside your body within the adaptive immune system, taking just a couple of days to complete. Now as a roboticist I'm very interested in embodied evolution – and by coincidence I attended a workshop on that very subject just a month ago. But I'd always assumed that embodied evolution was biologically implausible – an engineering trick if you like. But no – there it is going on inside adaptive immune systems. (As an aside, it appears that we don't understand the processes that prompted the evolution of adaptive immune systems some 400 million years ago – in jawed vertebrates.)
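For the computationally minded, the core of clonal selection can be caricatured in a few lines of code (in the spirit of CLONALG-style algorithms from the AIS literature) - every parameter below is an arbitrary illustration:

```python
import random

def clonal_selection(antigen, repertoire, generations=20, n_best=5):
    """Toy clonal selection: antibodies (bit strings) that best match
    the antigen are cloned and hypermutated - better matches get more
    clones and mutate less - and the fittest displace the weakest."""
    def affinity(ab):
        return sum(a == b for a, b in zip(ab, antigen))

    def hypermutate(ab, rate):
        return [bit ^ (random.random() < rate) for bit in ab]

    for _ in range(generations):
        repertoire.sort(key=affinity, reverse=True)
        clones = []
        for rank, ab in enumerate(repertoire[:n_best]):
            rate = 0.05 * (rank + 1)  # mutation rate inverse to affinity
            clones += [hypermutate(ab, rate) for _ in range(n_best - rank)]
        repertoire = sorted(repertoire + clones, key=affinity,
                            reverse=True)[:len(repertoire)]
    return repertoire[0]

antigen = [random.randint(0, 1) for _ in range(16)]
pool = [[random.randint(0, 1) for _ in range(16)] for _ in range(10)]
best = clonal_selection(antigen, pool)
print(sum(a == b for a, b in zip(best, antigen)), "of 16 bits matched")
```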

Of course while listening to this fascinating stuff I was all the while wondering what this might mean for robotics. For instance, what hazards would require the equivalent of an innate immune response in robots, and which would need an adaptive response? And what exactly is the robot equivalent of an 'infection'? Would a robot, for instance, get a temperature if it was fighting an infection? Quite possibly yes – the additional computation needed for the robot to figure out how to counter the hazard might indeed need more energy – so the robot would have to slow down its motors to direct its battery power instead to its computer. Sounds familiar, doesn't it: slowing down and getting a temperature!
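That energy trade-off is easy to caricature in code - the split figures below are pure invention:

```python
def allocate_power(total_watts, fighting_infection):
    """Toy power budget: when the robot is 'fighting an infection',
    i.e. running heavy diagnostic computation, divert battery power
    from the motors to the computer - so it slows down and, as a
    by-product of the extra computation, runs hotter."""
    if fighting_infection:
        return {"motors": 0.3 * total_watts, "computer": 0.7 * total_watts}
    return {"motors": 0.8 * total_watts, "computer": 0.2 * total_watts}

print(allocate_power(10.0, fighting_infection=True))
```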

Swarm robots with faults is something I've been worrying about for a while and, based on the work I blogged about here, at the Dagstuhl I presented my hunch that – while a swarm of 100 robots might work OK – a swarm of 100,000 robots definitely wouldn't, without something very much like an immune system. That led to some very interesting discussions about the feasibility of co-evolving swarm function and swarm immunity. And, given that we think we're beginning to understand how to embed and embody evolution across a swarm of robots, this is all beginning to look surprisingly feasible.

Wednesday, April 13, 2011

Why Slow Science may well be A Very Good Thing

A few weeks ago I spent a very enjoyable Saturday at the Northern Arts and Science Network annual conference Dialogues, in Leeds. The morning sessions included two outstanding keynote talks: the first from Julian Kiverstein on synthetic synaesthesia, and the second from David James on technology enhanced sports. Significant food for thought in both talks. Then Jenny Tennant Jackson and I ran an afternoon workshop on the Artificial Culture project (aided and abetted by 8 e-puck robots) which generated lots of questions and interest.

But apart from singing the praises of NASN and the conference I want to reflect here on something that emerged from the panel discussion at the end of the conference. There was quite a bit of debate around the question of open research (in both science and the arts) and public engagement. In recent years I've become a strong advocate of a unified open science + public engagement approach. In other words doing research transparently - ideally using an open notebook approach so that the whole of the process as well as the experimental outcomes are open to all - combined with proactive public engagement in (hopefully) a virtuous circle*.

So there I was pontificating about the merits of this approach in the panel discussion at NASN when someone asked rather pointedly "but isn't that all going to slow down the process of advancing science?" Without thinking I retorted "Good! If the cost of openness is slowing down science then that has to be a price worth paying." The questioner was clearly somewhat taken aback and to you sir, if you should read this blog, I offer sincere apologies for the abruptness of my reply. In fact I owe you not only apologies but thanks, for that exchange has really got me thinking about Slow Science.

So, having reflected a little, here's why I think slowing down science might not be as crazy as it sounds.

First the ethical dimension. Science or engineering research that is worth doing, i.e. is important and has value, has - by definition - an ethical dimension. The ethical and societal impact of science and engineering research needs to be acknowledged and understood by researchers themselves then widely and transparently debated, and not left to bad science journalism, science denialism or corporate interests. This takes time.

Next, unintended consequences. High impact research always has implications, and the larger the impact, the greater the potential for unintended consequences (no matter how well intentioned the work). Of course negative unintended consequences (scientific, economic, philosophical) almost always end up becoming a problem for society - so they too should be properly considered and discussed during a project's lifetime.

Finally, the open science and public engagement dimension. I would argue that the time and effort costs of building open science and public engagement into research projects will reap manifold dividends in the long run. First, the open science aspect: openness - while it can take some courage to practise - can surely only bring long-term benefits in increased trust, both in the work of the project and in science in general. Second, running an integrated open science and public engagement approach alongside the research brings direct educational benefit to the next generation. And the additional real cost (in time and effort) has to be much less than it would be for an isolated project seeking the same educational outcomes.

Critics will of course argue that Slow Science would be uncompetitive. In a limited sense they would be right, but it seems to me important not to confuse commercialisation of spin out products with the much longer time span of research, nor to allow the tail of exploitation to wag the dog of research. Big science that takes decades can still spin out lots of wealth creating stuff along the way. Another criticism of Slow Science is to do with pressing problems that desperately need solutions. This is harder to counter but - perhaps - the unintended consequences argument might hold sway.

Slow Science: a Good Thing, or not?


*science communicator and PhD student Ann Grand is researching exactly this subject and has already published several papers on it.

Thursday, March 31, 2011

Telling all on I'm a Scientist

In future if anyone wants to know what I think - about almost anything scientific and quite a lot else - all I have to do is point them to my profile and my collected answers on I'm a Scientist get me out of here. It's been a week now since IAS concluded and the winners were announced, and I've had time to collect my thoughts, catch up on the day job, and reflect on taking part in this most excellent event.


I'm a Scientist get me out of here is aptly named. By Thursday of the second week I was - on balance - more relieved than disappointed to be evicted from the virtual jungle clearing, called the Chlorine Zone, that I'd been sharing with four other scientists. (Beyond the eviction thing the analogy with I'm a Celebrity breaks down: we five were not required to undertake challenges designed to freak out the squeamish, nor rewarded with discomfort-reducing morsels.)

No. I'm a Scientist is an altogether more civilised affair: a direct, on-line meet-the-scientist engagement in which school children can ask the scientists questions on more or less anything they like. There are two types of engagement, chat and ask. The live chat sessions are booked by teachers and scheduled during school science lessons - a bit like having a panel of scientists sitting at the front of the classroom answering questions, except it's on-line. Ask allows the children to submit their questions through the web page for the scientists to answer in their own time. Both types of engagement are moderated by the good people who run I'm a Scientist.

Why then - if I'm a Scientist is so wonderful (which it is) - was I relieved to be evicted? Well, it's because after nearly 2 weeks the questions just kept coming, and trying to keep up (especially given that we all have day jobs) became, if I'm completely honest, something of a test of endurance. Not counting the live chat school sessions I answered about 175 questions altogether. Other I'm a Scientist scientists who read this will scoff and say "pah, only 175!". And they'd be right - Sarah Thomas in my zone answered over 300 questions, and the awesome David Pyle in the potassium zone around 600! But even my paltry 175 questions took, I reckon, about 30 hours to answer, at an average of 10 minutes per question (which is going fast).

But I'm not going to whinge here about my inability to keep up (although I do strongly advise future I'm a Scientists to set aside plenty of question answering time). I really want to reflect on the questions themselves. Firstly I was slightly surprised there were so few on my specialist subject of robotics. Only 22 out of the 175. But they were good ones! Here are some of my favourites:
Some of these will form the basis of future blog posts. But it was the general science questions that were the most interesting, for instance:
Brilliant - it was a kind of science soap box! I got to pontificate on life on Mars, the end of the world and human extinction, global warming, nuclear power, dreams, light years, my favourite animal, my favourite car, string theory, the Higgs boson and dark matter. But the non-science questions make you stop and think - hmm, how much do I want to reveal about what I think about antidisestablishmentarianism, my religious beliefs, resurrection or the meaning of life...?

By far the biggest category of questions was about doing science: why and how you do science, what's the best thing about being a scientist, what you think you have achieved, or will achieve and so on (and quite a few on what you will do with the prize money if you win). These are great questions because they allow you to explode some myths about science: for instance that you have to be super smart to do science, or that one scientist can change the world on their own. I was especially flattered by
If you're thinking of putting yourself forward for I'm a Scientist I would say: yes, go for it. It's hugely good fun and massively worthwhile. But (1) set aside plenty of time, (2) be prepared to answer questions on more or less anything and (3) be honest about yourself and what you really think about stuff.

Here are some great blog posts from other March 2011 I'm a Scientists:
Suzie Sheehy's Reflections on I'm a Scientist
David Pyle's I'm a Scientist: 600 questions later
I'm a Scientist and I'm out of here

Sunday, March 13, 2011

Dilemmas of an ethical consumer

I have a dilemma and it is this. I'm torn between lusting after an iPad 2 and serious worries over the ethics of its manufacture.

There's no doubt that the iPad is a remarkable device (Jobs' hyperbole about magical and revolutionary is quite unnecessary). Several academic friends have told me that the iPad and one application in particular - called iAnnotate - has changed their working lives. Having seen them demonstrate iAnnotate there's no doubt it's the academic's killer iPad app. You see, something we have to do all the time is read, review and edit papers, book chapters, grant applications and working documents. For me that normally means printing a paper out, writing all over it, then either tediously scanning the marked-up pages, uploading them to Google docs and emailing the link, or constructing a long email listing all my changes and comments. What my friends showed me was them reviewing a paper on the iPad, writing all over it with a stylus, then simply emailing back the marked-up document. Amazing - this could save me hours every week.

But here's the problem. The iPad may well be a marvel of design and technology but - like most high-tech stuff these days - it's profoundly unsustainable and its manufacture is ethically questionable. Now, to be fair to Apple, this is not a problem unique to them - and I'm prepared to believe that Apple does genuinely care about the conditions under which its products are manufactured and is doing all it can to pressure its subcontractors to provide the best working conditions for their employees. But the problem is systemic: the only reason we can buy an iPad, or laptop, or flat-screen TV, or any number of consumer electronics products for a few hundred pounds is that they're manufactured in developing countries where labour is cheap and working conditions are a million miles from what we would regard as acceptable. And I'm not even going to start here on the sustainability of those products - the true energy and environmental costs of manufacture across incredibly complex supply chains, or the environmental costs of disposal after we've finished with them.

This may sound odd given that I'm a professional electronics engineer and elder-nerd, but I'm a late adopter of new technology. Always have been. (My excuse is that I was an early adopter of the transistor.) I also keep stuff for a very long time. My Hi-Fi system is 25 years old and working just fine. My car is now 6 years old and I fully expect to run it for another 10 years - a modern, well-built and maintained car can easily last 250,000 miles. The most recent high-tech thing I bought was a new electric piano. It replaced my old one, bought in 1983, which had become unplayable because the mechanics of the keys had worn out, and I fully expect to keep my beautiful new Roland piano for 25 years. My MacBook Pro (yes, I do like Apple stuff) is now 5 years old and works just fine - not bad for something that's probably had 10,000 hours of use. In short I aim to practise what's sometimes called Bangernomics - except I try to apply the philosophy to everything, not just cars. (I'm not exactly a model consumer.)

Maybe that's part of the answer to my dilemma - get an iPad and run it for 20 years..? But even applying Bangernomics still won't salve my conscience when it comes to the ethics or sustainability of its manufacture. So, what am I to do?

Tuesday, March 01, 2011

Making sense of robots: the hermeneutic challenge

One of the challenges of the Artificial Culture project that we knew we would face from the start is that of making sense of the free-running experiments in the lab. One of the project investigators - philosopher Robin Durie - called this the hermeneutic challenge. In the project proposal Robin wrote:
what means will we be able to develop by which we can identify/recognise meaningful/cultural behaviour [in the robots]; and, then, what means might we go on to develop for interpreting or understanding this behaviour and/or its significance?
Now, more than 3 years on, we come face to face with that question. Let me clarify: we are not - or at least not yet - claiming to have identified or recognised emerging robot culture. We do, however, more modestly claim to have demonstrated new behavioural patterns (memes) that emerge and - for a while at least - are dominant. It's an open-ended evolutionary process in which the dominant 'species' of memes come and go. Maybe these clusters of closely related memes could be labelled behavioural traditions?

Leaving that speculation aside, a more pressing problem in recent months has been to try and understand how and why certain behavioural patterns emerge at all. Let me explain. We typically seed each robot with a behavioural pattern; it is literally a sequence of movements. Think of it as a dance. But we choose these initial dances arbitrarily - movements that describe a square or triangle for instance - without any regard whatsoever for whether these movement sequences are easy or hard for the robots to imitate.

Not surprisingly then, the initial dances quickly mutate into different patterns, sometimes more complex and sometimes less. But what is it about the robot's physical shape, its sensorium, and the process of estimation inherent in imitation that gives rise to these mutations? Let me explain why this is important. Our robots and you, dear reader, have one thing in common: you both have bodies. And bodies bring limitations: firstly because your body doesn't allow you to make any movement imaginable - only those that your shape, structure and muscles allow - and secondly because if you try to watch and imitate someone else's movements you have to guess some of what they're doing (because you don't have a perfect 360-degree view of them). That's why your imitated copy of someone else's behaviour is always a bit different. Exactly the same limitations give rise to variation in the robots' imitated behaviours.
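
To make that concrete, here's a toy sketch in Python of imitation-with-estimation. It is illustrative only - the move encoding and noise model are my assumptions, not the project's actual imitation algorithm - but it shows how copies of copies inevitably drift away from the seed dance.

```python
import random

# A toy model of imitation-with-estimation (illustrative only - not the
# Artificial Culture project's actual algorithm). A 'dance' is a sequence
# of (turn angle in degrees, distance in metres) moves; the watching robot
# only has a noisy estimate of each move it sees.

def imitate(dance, angle_noise=10.0, dist_noise=0.05):
    """Return an imperfect copy of a dance: each observed move is
    perturbed, modelling the watcher's limited viewpoint and sensors."""
    return [(round(a + random.gauss(0.0, angle_noise)),
             round(d + random.gauss(0.0, dist_noise), 2))
            for a, d in dance]

square = [(90, 0.5)] * 4  # seed dance: four equal turns and sides

copy = square
for _ in range(5):        # a copy of a copy of a copy...
    copy = imitate(copy)

print("seed dance:", square)
print("5th copy  :", copy)
```

Run it a few times and the fifth-generation copy is a recognisably square-ish dance that is nonetheless never quite the original: variation arises without any explicit mutation operator, just as it does in the embodied robots.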

Now it may seem a relatively trivial matter to watch the robots imitate each other and then figure out how the mutations in successive copies (and copies of copies) are determined by the robots' shape, sensors and programming. But it's not, and we find ourselves having to devise new ways of visualising the experimental data in order to make sense of what's going on. The picture below is one such visualisation; it's actually a family tree of memes, with parent memes at the top and child memes (i.e. copies) shown branching below parents.

Unlike a human family tree each child meme has only one parent. In this 'memeogram' there are two memes at the start, numbered 1 and 2. 1 is a triangle movement pattern, and 2 is a square movement pattern. In this experiment there are 4 robots, and it's easy to see here that the triangle meme dominates - it and its descendants are seen much more often.

The diagram also shows which child-memes are high quality copies of their parents - these are shown in brown with bold arrows connecting them to their parent-memes. This allows us to easily see clusters of similar memes, for instance in the bottom-left there are 7 closely related and very similar memes (numbered 36, 37, 46, 49, 50, 51 and 55). Does this cluster represent a dominant 'species' of memes?
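
For the programmatically minded, here is a minimal sketch of the memeogram as a data structure, and of how such clusters might be picked out automatically. The copy-quality scores, threshold and parentage below are illustrative assumptions - only the meme numbers are borrowed from the description above, and nothing here is project data.

```python
from collections import defaultdict

# A minimal sketch of a memeogram: each meme has exactly one parent, and
# each copy has a quality score. A cluster is a run of memes connected by
# high-quality copy links. All scores and parentage here are invented.

# meme id -> (parent id or None, quality of the copy from its parent)
memes = {
    1: (None, 1.0), 2: (None, 1.0),          # seed triangle and square
    36: (1, 0.95), 37: (36, 0.97), 46: (37, 0.94),
    49: (46, 0.96), 50: (46, 0.93), 51: (50, 0.95), 55: (51, 0.92),
    60: (2, 0.40),                            # a poor copy of the square
}

QUALITY_THRESHOLD = 0.9  # assumed cutoff for a 'high quality' copy

def cluster_root(meme_id):
    """Walk up through high-quality copy links to the cluster's founder."""
    parent, quality = memes[meme_id]
    if parent is not None and quality >= QUALITY_THRESHOLD:
        return cluster_root(parent)
    return meme_id

clusters = defaultdict(list)
for m in memes:
    clusters[cluster_root(m)].append(m)
print(dict(clusters))
```

With these assumed scores the seven similar memes all trace back, through bold high-quality links, to a single founder - exactly the kind of candidate 'species' the memeogram is designed to reveal.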


Also posted on the Artificial Culture project blog.