Monday, December 05, 2011

Swarm robotics at the Science Museum

Just spent an awesomely busy weekend at the Science Museum, demonstrating swarm robotics. We were there as part of the Robotville exhibition and - on the wider stage - European Robotics Week. I say we because it was a team effort, led by my PhD student Paul O'Dowd, who heroically manned the exhibit all four days, supported by postdoc Dr Wenguo Liu. Here is a gallery of pictures from Robotville on the Science Museum blog, and some more pictures here (photos by Patu Tifinger):




Although exhausting, it was at the same time uplifting. We had a crowd of very interested families and children the whole time - in fact the organisers tell me that Robotville had just short of 8,000 visitors over the four days of the exhibition. What was really nice was that the whole exhibition was hands-on, and our sturdy e-puck robots - at pretty much eye-level for 5-year-olds - attracted lots of small hands interacting with the swarm. A bit like putting your hand into an ants' nest (although I doubt the kids would have been so keen on that).

Let me explain what the robots were doing. Paul had programmed two different demonstrations, one with fixed behaviours and the other with learning.

For the fixed behaviour demo the e-puck robots were programmed with the following low-level behaviours:
  1. Short-range avoidance. If a robot gets too close to another robot or an obstacle then it turns away to avoid it.
  2. Longer-range attraction. If a robot can sense other robots nearby but gets too far from the flock, then it turns back toward the flock; and while in a flock, it moves slowly.
  3. If a robot loses the flock then it speeds up and wanders at random in an effort to regain the flock (i.e. another robot).
  4. While in a flock, each robot will communicate (via infra-red) its estimate of the position of an external light source to nearby robots in the flock. While communicating the robot flashes its green body LED.
  5. Also while in a flock, each robot will turn toward the 'consensus' direction of the external light source.
The net effect of these low-level behaviours is that the robots will both stay together as a swarm (or flock) and, over time, move as a swarm toward the external light source. Both of these swarm-level behaviours are emergent because they result from the low-level robot-robot and robot-environment interactions. While the flocking behaviour is evident in just a few minutes, the overall swarm movement toward the external light source is less obvious. In reality even the flocking behaviour appears chaotic, with robots losing each other and leaving the flock, or several mini-flocks forming. The reason is that all of the low-level behaviours make use of the e-puck robots' multi-purpose infra-red (IR) sensors, and the environment is noisy; in other words, because we don't have carefully controlled lighting there is lots of ambient IR light constantly confusing the robots.
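For the programmers among you, the arbitration between these behaviours is easy to sketch. Below is a minimal, illustrative Python version of one control cycle. To be clear, everything here - the thresholds, the speeds, the Sensors interface, the naive averaging of bearings (which glosses over the care needed when averaging angles) - is invented for the sketch; the real e-puck code is rather different.

```python
import random
from dataclasses import dataclass

# Illustrative constants; the real demo was tuned for the e-puck's IR sensors.
AVOID_RANGE = 0.05     # metres
FLOCK_RANGE = 0.25     # metres
SLOW, FAST = 0.2, 1.0  # wheel-speed fractions

@dataclass
class Sensors:
    obstacle_dist: float      # range to nearest robot or obstacle
    obstacle_bearing: float   # radians, relative to own heading
    neighbours: int           # robots currently visible over IR
    flock_dist: float         # estimated range to the flock centre
    flock_bearing: float      # bearing to the flock centre
    light_bearings: list      # own + received estimates of the light source

def step(s: Sensors):
    """One control cycle: returns (speed, turn, flash_green_led)."""
    if s.obstacle_dist < AVOID_RANGE:
        # 1. Short-range avoidance: highest priority, turn away.
        return SLOW, -s.obstacle_bearing, False
    if s.neighbours > 0:
        if s.flock_dist > FLOCK_RANGE:
            # 2. Longer-range attraction: turn back toward the flock.
            return SLOW, s.flock_bearing, False
        # 4 & 5. In the flock: broadcast the light estimate (flashing the
        # green LED) and steer toward the naive mean of all estimates.
        consensus = sum(s.light_bearings) / len(s.light_bearings)
        return SLOW, consensus, True
    # 3. Lost the flock: speed up and wander at random.
    return FAST, random.uniform(-1.0, 1.0), False
```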

The learning demo is a little more complex and makes use of an embedded evolutionary algorithm, actually running within the e-puck robots, so that - over time - the robots learn how to flock. This demo is based on Paul's experimental work, which I described in some detail in an earlier blog post, so I won't go into detail here. It's the robots with the yellow hats in the lower picture above. What's interesting to observe is that initially the robots are hopeless - constantly crashing into each other or the arena walls - but noticeably, over 30 minutes or so, we can see the robots learn to control themselves, using information from their sensors. The weird thing here is that, every minute or so, each robot's control software is replaced by a great-great-grandchild of itself. The robot's body is not evolving, but invisibly its controller is evolving, so that later generations of controller are more capable.

The magical moment of the two days was when one young lad - maybe 12 years old, who very clearly understood everything straight away and seemed to intuit things I hadn't explained - stayed nearly an hour explaining and demonstrating to other children. Priceless.

Tuesday, September 20, 2011

TAROS lecture: The Ethical Roboticist

Here are the slides for the IET public lecture I gave in Sheffield on 2 September 2011 on the final day of the conference Towards Autonomous Robotic Systems (TAROS).

Wednesday, August 31, 2011

Discussing Asimov's laws of robotics and a draft revision

This is me discussing robot ethics with Dallas Campbell for BBC1's Bang Goes The Theory. I outline the five new ethical principles for roboticists proposed by the EPSRC/AHRC working group. Click here for the working group's full report, including a commentary on these draft proposals.



With thanks to Simon Mackie, senior content producer for the Bang Goes The Theory website, for the code to embed this video clip.

Saturday, August 20, 2011

Robohype and why it's bad for robotics

You are technically literate, an engineer or scientist perhaps with a particular interest in robotics, but you've been stranded on a desert island for the past 30 years. Rescued and returned to civilisation you are keen to find out how far robotics science and technology has advanced and - rejoicing in the marvellous inventions of the Internet and its search engines - you scour the science press for robonews. Scanning the headlines you are thrilled to discover that robots are alive, and sending messages from space; robots can think or are "capable of human reasoning or learning"; robots have feelings, relate to humans, or demonstrate love, even behave ethically. Truly robots have achieved their promised potential.

Then of course you start to dig deeper and read the science behind these stories. The truth dawns. Although the robotics you are reading about is significant work, done by very good people, the fact is - you begin to realise - that now, in 2011, robots cannot properly be said to think, feel, empathise, love or be moral agents; and certainly no robot is, in any meaningful sense, alive, or sentient. Of course your disappointment is tempered by the discovery that astonishing strides have nevertheless been made.

So, robotics is subject to journalistic hype. Nothing new there then. So why am I writing about it here (apart from the fact it annoys the hell out of me)? I write because I think that robohype is a serious problem, and an issue the robotics community should worry about. The problem is this. Most people who read the press reports are lay readers who - perfectly reasonably - will not read much beyond the headline; certainly few will look for the source research. So every time a piece of robohype appears (pretty much every day) the level of mass-delusion about what robots do increases a bit more, and the expectation gap widens. Remember that the expectation gap - the gap between what people think robots are capable of and what they're really capable of - is already wide because of the grip robots have on our cultural imagination. We are at the same time fascinated by, and fearful of, robots, and this fascination feeds the hype because we want (or dread) the robofiction to become true. Which is of course one of the reasons for the hype in the first place.

But the expectation gap is a serious problem. It's a problem because it makes our work as roboticists harder, not least because many of the hard problems we are working on are ones many people think are already solved. It's a problem because it is, I believe, creating pressure on us to over-promise when writing grant applications, so solid, important, incremental research proposals get rejected in favour of fantasy projects. Those projects inevitably fail to deliver, and over time funding bodies will react by closing down robotics research initiatives - leading to the kind of funding winter that AI saw in the 1990s. And it's a problem because it creates societal expectations of robotics that cannot be met - think of the unrealistic promise of military robots with an artificial conscience.

Who's to blame for the robohype? Well, we roboticists must share the blame. When we describe our robots and what they do we use anthropocentric words, especially when trying to explain our work to people outside the robotics community. Within the robotics and AI community we all understand that when we talk about an intelligent robot, what we mean is a robot that behaves as if it were intelligent; intelligent robot is a convenient shorthand. So when we talk to journalists we should not be too surprised when "this robot behaves, in some limited sense, as if it has feelings" gets translated to "this robot has feelings". But science journalists must, I think, do better than this.

Words in robotics, as in life, are important. When we describe our robots, their capabilities and their potential, and when science reporters and bloggers bring our work to wider public attention, we need to choose our words with great care. In humanoid robotics where, after all, the whole idea is to create robots that emulate human behaviours, capabilities and cognition, perhaps we just cannot avoid using anthropocentric words. Maybe we need a new lexicon for describing humanoid robots; perhaps we should stop using words like think, feel, imagine, believe, love and happy altogether? Whatever the answer, I am convinced that robohype is damaging to the robotics project and something must be done.

Monday, July 25, 2011

Manifesto for a Robot Standard Interface Specification

This blog post could well turn out to be the most boring I've ever written - but I think it's important. I want to write about something that robotics desperately needs: an industry standard interface specification (see, I told you it was going to be boring).

Let me explain what I mean by talking about a fantastically successful standard called MIDI, which has without doubt played a significant role in the success of music technology. MIDI stands for Musical Instrument Digital Interface. It provides an industry standard for connecting together electronic musical instruments, i.e. synthesisers, computers and all manner of electronic music gizmos. The important thing about MIDI is that it specifies everything: the physical plug and socket, the electrical signalling, the communications protocol and the messages that can be sent or received by MIDI-connected devices. With great foresight, MIDI's designers provided in the protocol both standard messages - which all MIDI-equipped electronic musical instruments are expected to send, receive and recognise - and customisable messages that manufacturers could specify for particular instruments and devices. In MIDI each instrument is able to identify itself to another device connected via MIDI; it can say, for example, I'm a Roland synthesiser model ABC. If the other device, a sequencer for instance, recognises the Roland ABC it can then access that instrument's custom features (in addition to the standard functions of all MIDI devices).
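To make this concrete, here is roughly what that identify-yourself exchange looks like at the byte level - a sketch of MIDI's standard Identity Request and Reply (Universal Non-Realtime System Exclusive messages). The request bytes follow the MIDI spec; the parsing assumes a single-byte manufacturer ID, and any example values are mine, not from a real device.

```python
# MIDI Identity Request: 'who are you?' broadcast to all connected devices.
# F0=SysEx start, 7E=non-realtime, 7F=all devices, 06 01=Identity Request.
IDENTITY_REQUEST = bytes([0xF0, 0x7E, 0x7F, 0x06, 0x01, 0xF7])

def parse_identity_reply(msg: bytes) -> dict:
    """Decode an Identity Reply (sub-IDs 06 02), assuming a single-byte
    manufacturer ID. 14-bit values arrive as two 7-bit bytes, LSB first."""
    assert msg[0] == 0xF0 and msg[1] == 0x7E and msg[3:5] == bytes([0x06, 0x02])
    return {
        "device_id": msg[2],
        "manufacturer": msg[5],              # e.g. 0x41 identifies Roland
        "family": msg[6] | (msg[7] << 7),
        "model": msg[8] | (msg[9] << 7),
        "firmware": tuple(msg[10:14]),
    }
```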

Robotics needs a MIDI specification. Let's call it RSIS for Robot Standard Interface Specification. Like MIDI, RSIS would need to specify everything from the physical plug and socket, to the structure and meaning of RSIS messages. Devising a spec for RSIS would not be trivial - my guess is that it would be rather more complex than MIDI because of the more diverse types of robot devices and peripherals. But the benefits would be immense. RSIS would allow robot builders to plug and play different complex sensors and actuators, from different manufacturers, to create new robot bodies and new functionality. Imagine, for instance, being able to take a Willow garage PR2 robot and fit a humanoid robot hand from the Shadow Robot Company. Of course there would need to be a mechanical mounting to physically attach the new hand, but that's not what I'm talking about here; I'm referring to the control interface which would be connected via RSIS. The PR2 would then, via the RSIS connection, sense that a new device had been connected and, using standard RSIS messages, ask the new device to identify itself. On discovering it has a handsome new Shadow hand the PR2 would then install the device driver (downloading it from the cloud if necessary) and, within a few seconds, the new hand becomes fully functional in true plug and play fashion.
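And to make the RSIS daydream concrete, here is how that handshake might look in code. To be absolutely clear: everything below - the message names, the identity fields, the Shadow-hand example values - is invented for illustration; no such specification exists yet, which is rather the point of this post.

```python
from dataclasses import dataclass

@dataclass
class RsisIdentity:
    manufacturer: str    # e.g. "Shadow Robot Company" (illustrative)
    model: str           # e.g. "Dexterous Hand"
    device_class: str    # hypothetical standard class, e.g. "manipulator.hand"
    driver_uri: str      # where a host can fetch the device driver

class RsisHost:
    """Host side of the imagined plug-and-play handshake (e.g. a PR2)."""

    def on_device_connected(self, port):
        # Standard RSIS message: every device must answer this.
        ident: RsisIdentity = port.request("RSIS_IDENTIFY")
        # Install the driver, from local cache or the cloud, then bind it
        # to the port so the device's custom messages become usable.
        driver = self.load_driver(ident)
        driver.bind(port)
        print(f"New {ident.device_class}: {ident.manufacturer} {ident.model}")

    def load_driver(self, ident: RsisIdentity):
        raise NotImplementedError  # look up locally, else fetch driver_uri
```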

Industry standards, and the people who create them, are the unsung heroes of technology. Without standards like UMTS, TCP/IP, HTTP or IEEE 802.11 (WiFi to you and me) we wouldn't have ubiquitous mobile phone, internet, web or wireless tech that just works. But more than that, standards are, I think, part of the essential underpinning infrastructure that kick-starts whole new industry sectors. That's why I think standards are so critical to robotics.

Maybe a Robot Standard Interface Specification (or the effort to create it) already exists? If so, I'd very much like to hear about it.

Tuesday, May 31, 2011

Machine intelligence: fake or real?

A few days ago, at the excellent HowTheLightGetsIn festival, I took part in a panel debate called Rise of the Machines. Here was the brief:
From 2001 to The Matrix, intelligent machines and robots have played a central role in our fictions. Some now claim they are about to become fact. Is artificial intelligence possible or just a science fiction fantasy? And would it be a fundamental advance for humankind or an outcome to be feared?
Invited at the last minute, I found myself debating these questions with a distinguished panel consisting of philosophers Peter Hacker and Hilary Lawson, and law academic Lilian Edwards. Henrietta Moore brilliantly chaired.

I shan't attempt to summarise the debate here. I certainly couldn't do it, or the arguments of fellow panelists, justice. In any event it was filmed and should appear soon on IAI TV. What I want to talk about here is the question - which turned out to be central to the debate - of whether machines are, or could ever be regarded as, intelligent.

The position I adopted and argued in the debate is best summed up as simulationist. For the past 10 years or so I have believed our grand project as roboticists is to build robots that aim to be progressively higher fidelity imitations of life, and intelligence. This is a convenient and pragmatic approach: robots that behave as if they are intelligent are no less interesting (as working models of intelligence for instance), or potentially useful, than robots that really are intelligent, and the ethical questions that arise no less pressing*. But, I realised in Hay-on-Wye, the simulationist approach also plays to the arguments of philosophers, including Peter Hacker, that machines cannot ever be truly intelligent in principle.

Reflecting on that debate I realised that my erstwhile position in effect accepts that robots, or AI, will never be truly intelligent, never better than a simulation; that machines can never do more than pretend to be smart. However, I'm now not at all sure that position is logically tenable. The question that keeps going around my head is this: if a thing - biological or artificial - behaves as if it is intelligent, then why shouldn't it be regarded as properly intelligent? Surely behaving intelligently is the same as being intelligent. Isn't that what intelligence is?

Let me offer two arguments in support of this proposition.

There are those who argue that real intelligence is uniquely a property of living organisms. They admit that artificial systems might eventually demonstrate a satisfactory emulation of intelligence, but will argue that nothing artificial can truly think, or feel. This is the anthropocentric (or perhaps more accurately, zoocentric) position. The fundamental problem with this position, in my view, is that it fails to explain which properties of biological systems make them uniquely intelligent. Is it that intelligence depends uniquely on exotic properties of biological stuff? The problem here is that there's no evidence for such properties. Perhaps intelligence is uniquely an outcome of evolution? Well, robot intelligence can be evolved, rather than designed. Perhaps advanced intelligence requires social structures in order to emerge? I would agree, and point to social robotics as a promising equivalent substrate. Perhaps advanced intelligence uniquely requires nurture, because really smart animals are not born smart? Again I would agree, and point to the new field of developmental robotics. In short, I argue that it is impossible to propose a property of biological systems, required for intelligence, that is unique to those biological systems and cannot exist as a property of artificial systems.

My second argument is around the question of how intelligence is measured or determined. As I've blogged before, intelligence is a difficult thing to define, let alone measure. But one thing is clear - no current measure of intelligence in humans or animals requires us to look inside their brains. We determine a human or animal to be intelligent exclusively on the basis of its actions. For simple animals we observe how they react and look at the sophistication of those responses (as prey or predator for instance). In humans we look formally to examinations (to measure cognitive intelligence) or more generally to ingenuity in social discourse (Machiavellian intelligence), or creativity (artistic or technical intelligence). For advanced animal intelligence we devise ever more ingenious tests, the results from which sometimes challenge our prejudices about where those animals sit on our supposed intelligence scale. We heard from Lilian Edwards during the debate that, in common law, civil responsibility is likewise judged exclusively on actions. A judge may have to make a judgement about the intentions of a defendant, but they have to do so only on the evidence of the defendant's actions**. I argue, therefore, that it is inconsistent to demand a different test of intelligence for artificial systems. Why should we expect to determine whether a robot is truly intelligent or not on the basis of some not-yet-determined properties of its internal cognitive structures, when we do not require that test of animals or humans?

The counter-intuitive and uncomfortable conclusion: machine intelligence is not fake, it's real.


*perhaps even more so given that such robots are essentially fraudulent.
**with thanks to Lilian for correcting my wording here.

Friday, May 06, 2011

Revisiting Asimov: the Ethical Roboticist

Well, it's taken a while, but the draft revised 'laws of robotics' have now been published. The New Scientist article Roboethics for Humans, reporting on the EPSRC/AHRC initiative in roboethics, appears in this week's issue (issue 2811, 7 May 2011). These new draft ethical principles emerged from a workshop on ethical, legal and societal issues in robotics.

The main outcome from the workshop was a draft statement aimed at initiating a debate within the robotics research and industry community, and more widely. That statement is framed by, first, a set of high-level messages for researchers and the public which encourage responsibility from the robotics community, and hence (we hope) trust in the work of that community. And second, a revised and updated version of Asimov’s three laws of robotics for designers and users of robots; not laws for robots, but guiding principles for roboticists.

The seven high-level messages are:
  1. We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
  2. Bad practice (in robotics) hurts us all.
  3. Addressing obvious public concerns (about robots) will help us all make progress.
  4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
  5. To understand the context and consequences of our research we should work with experts from other disciplines including: social sciences, law, philosophy and the arts.
  6. We should consider the ethics of transparency: are there limits to what should be openly available?
  7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.
Isaac Asimov's famous 'laws of robotics' first appeared in 1942 in his short story Runaround. They are (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law, and (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



Asimov’s laws updated: instead of 'laws for robots' our revision is a set of five draft 'ethical principles for robotics', i.e. moral precepts for researchers, designers, manufacturers, suppliers and maintainers of robots. We propose:
  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  2. Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws and fundamental rights & freedoms, including privacy.
  3. Robots are products. They should be designed using processes which assure their safety and security.
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  5. The person with legal responsibility for a robot should be attributed.
Now it's important to say, firstly, that these are the work of a group of people so the wording represents a negotiated compromise*. Secondly, they are a first draft. The draft was circulated within the UK robotics community in February, then last month presented to a workshop on Ethical Legal and Societal issues at the European Robotics Forum in Sweden. So, we already have great feedback - which is being collected by EPSRC - but that feedback has not yet been incorporated into any revisions. Thirdly, there is detailed commentary - especially explaining the thinking and rationale for the 7 messages and 5 ethical principles above. That commentary can be found here.

Comments and criticism welcome! To give feedback, either:
  • post a comment in response to this blog,
  • email EPSRC at RoboticsRetreat@epsrc.ac.uk, or
  • directly contact me or any of the workshop members listed in the commentary.

*So, while I am a passionate advocate of ethical robotics and very happy to defend the approach that we've taken here, there are some detailed aspects of these principles that I'm not 100% happy with.

Friday, April 29, 2011

Ill robots might get a temperature too

Just spent 4 days at the beautiful Schloss Dagstuhl in SW Germany attending a seminar on Artificial Immune Systems. The Dagstuhl is a remarkable concept – a place dedicated to residential retreats on advanced topics in computer science. Everything you need is there to discuss, think and learn. And learn is what I just did – to the extent that by lunchtime today when the seminar closed I felt like the small boy who asks to be excused from class because “miss, my brain is full”.

Knowing more or less nothing about artificial immune systems, it was, for me, like sitting in class, except that my teachers were world experts in the subject. A real privilege. So, what are artificial immune systems? They are essentially computer systems inspired by, and modelled on, biological immune systems. AISs are, I learned, both engineering systems for detecting and perhaps repairing and recovering from faults in artificial systems (in effect, system maintenance), and scientific systems for modelling and/or visualising natural immune systems.

I learned that real immune systems are not just one system but several complex and inter-related systems, the biology of which is not fully understood. Thus, interestingly, AISs are modelled on (and models of) our best understanding so far of real immune systems. This of course means that biologists almost certainly have something to gain from engaging with the AIS community. (There are interesting parallels here with my experience of biologists working with roboticists in Swarm Intelligence.)

The first thing I learned was about the lines of defence against external attack on bodies. The first is physical: the skin. If something gets past this then bodies apply a brute-force approach by, for instance, raising the temperature. If that doesn't work then more complex mechanisms in the innate immune system kick in: white blood cells that attempt to 'eat' the invaders. But more sophisticated pathogens require a response from the last line of defence: the adaptive immune system. Here the immune system 'learns' how to neutralise a new pathogen with a process called clonal selection. I was astonished to learn that clonal selection actually 'evolves' a response. Amazing - embodied evolution going on super-fast inside your body within the adaptive immune system, taking just a couple of days to complete. Now as a roboticist I'm very interested in embodied evolution - and by coincidence I attended a workshop on that very subject just a month ago. But I'd always assumed that embodied evolution was biologically implausible - an engineering trick if you like. But no - there it is, going on inside adaptive immune systems. (As an aside, it appears that we don't understand the processes that prompted the evolution of adaptive immune systems some 400 million years ago - in jawed vertebrates.)
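Clonal selection is surprisingly easy to caricature in code. The toy sketch below is loosely in the spirit of AIS algorithms such as CLONALG, treating detectors and pathogens as bit strings: clone the best-matching detectors, mutate the clones - mutating weak matches more heavily - and keep the improvements. A real immune system, and a real AIS, are of course far more sophisticated.

```python
import random

def affinity(detector, pathogen):
    """Fraction of matching bits: 1.0 is a perfect match."""
    return sum(d == p for d, p in zip(detector, pathogen)) / len(pathogen)

def mutate(detector, rate):
    return [bit ^ (random.random() < rate) for bit in detector]

def clonal_selection(repertoire, pathogen, generations=50):
    """Evolve a repertoire of bit-string detectors against a pathogen."""
    for _ in range(generations):
        repertoire.sort(key=lambda d: affinity(d, pathogen), reverse=True)
        clones = []
        for rank, det in enumerate(repertoire[:5]):  # select the best few
            for _ in range(5 - rank):                # more clones for better matches
                # Hypermutation: weaker matches are mutated more heavily.
                clones.append(mutate(det, rate=1.0 - affinity(det, pathogen)))
        # Clones compete with the existing repertoire; the worst are replaced.
        repertoire = sorted(repertoire + clones,
                            key=lambda d: affinity(d, pathogen),
                            reverse=True)[:len(repertoire)]
    return repertoire

# Toy run: a random repertoire 'learns' to recognise a 16-bit pathogen.
pathogen = [random.randint(0, 1) for _ in range(16)]
repertoire = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
best = clonal_selection(repertoire, pathogen)[0]
print("best affinity:", affinity(best, pathogen))
```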

Of course while listening to this fascinating stuff I was all the while wondering what it might mean for robotics. For instance, what hazards would require the equivalent of an innate immune response in robots, and which would need an adaptive response? And what exactly is the robot equivalent of an 'infection'? Would a robot, for instance, get a temperature if it was fighting an infection? Quite possibly yes - the additional computation needed for the robot to figure out how to counter the hazard might indeed need more energy - so the robot would have to slow down its motors to direct its battery power instead to its computer. Sounds familiar, doesn't it: slowing down and getting a temperature!
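That fever analogy can even be reduced to a one-rule power budget. A toy sketch, with invented numbers:

```python
def allocate_power(battery_watts, immune_load):
    """Split a fixed power budget between motors and computation.
    immune_load runs from 0.0 (healthy) to 1.0 (fighting a serious
    'infection'); all the numbers here are invented for illustration."""
    compute = (0.25 + 0.5 * immune_load) * battery_watts  # 'fever': more thinking
    motors = battery_watts - compute                      # ...so less moving
    return motors, compute

print(allocate_power(10.0, 0.0))  # healthy: (7.5, 2.5) - most power to the wheels
print(allocate_power(10.0, 1.0))  # ill: (2.5, 7.5) - slow, running a 'temperature'
```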

Swarm robots with faults is something I've been worrying about for a while and, based on the work I blogged about here, at the Dagstuhl I presented my hunch that - while a swarm of 100 robots might work ok - a swarm of 100,000 robots definitely wouldn't, without something very much like an immune system. That led to some very interesting discussions about the feasibility of co-evolving swarm function and swarm immunity. And, given that we think we're beginning to understand how to embed and embody evolution across a swarm of robots, this is all beginning to look surprisingly feasible.

Wednesday, April 13, 2011

Why Slow Science may well be A Very Good Thing

A few weeks ago I spent a very enjoyable Saturday at the Northern Arts and Science Network annual conference Dialogues, in Leeds. The morning sessions included two outstanding keynote talks: the first from Julian Kiverstein, on synthetic synaesthesia, and the second from David James, on technology-enhanced sports. Significant food for thought in both. Then Jenny Tennant Jackson and I ran an afternoon workshop on the Artificial Culture project (aided and abetted by 8 e-puck robots) which generated lots of questions and interest.

But apart from singing the praises of NASN and the conference, I want to reflect here on something that emerged from the panel discussion at the end of the conference. There was quite a bit of debate around the question of open research (in both science and the arts) and public engagement. In recent years I've become a strong advocate of a unified open science + public engagement approach: in other words, doing research transparently - ideally using an open notebook approach, so that the whole of the process as well as the experimental outcomes are open to all - combined with proactive public engagement, in (hopefully) a virtuous circle*.

So there I was pontificating about the merits of this approach in the panel discussion at NASN when someone asked rather pointedly "but isn't that all going to slow down the process of advancing science?" Without thinking I retorted "Good! If the cost of openness is slowing down science then that has to be a price worth paying." The questioner was clearly somewhat taken aback and to you sir, if you should read this blog, I offer sincere apologies for the abruptness of my reply. In fact I owe you not only apologies but thanks, for that exchange has really got me thinking about Slow Science.

So, having reflected a little, here's why I think slowing down science might not be as crazy as it sounds.

First the ethical dimension. Science or engineering research that is worth doing, i.e. is important and has value, has - by definition - an ethical dimension. The ethical and societal impact of science and engineering research needs to be acknowledged and understood by researchers themselves then widely and transparently debated, and not left to bad science journalism, science denialism or corporate interests. This takes time.

Next, unintended consequences. High impact research always has implications, and the larger the impact, the greater the potential for unintended consequences (no matter how well intentioned the work). Of course negative unintended consequences (scientific, economic, philosophical) almost always end up becoming a problem for society - so they too should be properly considered and discussed during a project's lifetime.

Finally, the open science and public engagement dimension. I would argue that the time and effort costs of building open science and public engagement into research projects will reap manifold dividends in the long run. First, take the open science aspect: openness - while it can take some courage to actually do - can surely only bring long-term benefits in increased trust (in the work of the project, and in science in general). Second, running an integrated open science and public engagement approach alongside the research brings direct educational benefit to the next generation. And the additional real cost (in time and effort) has to be much less than it would be for an isolated project seeking the same educational outcomes.

Critics will of course argue that Slow Science would be uncompetitive. In a limited sense they would be right, but it seems to me important not to confuse commercialisation of spin out products with the much longer time span of research, nor to allow the tail of exploitation to wag the dog of research. Big science that takes decades can still spin out lots of wealth creating stuff along the way. Another criticism of Slow Science is to do with pressing problems that desperately need solutions. This is harder to counter but - perhaps - the unintended consequences argument might hold sway.

Slow Science: a Good Thing, or not?


*science communicator and PhD student Ann Grand is researching exactly this subject and has already published several papers on it.

Thursday, March 31, 2011

Telling all on I'm a Scientist

In future if anyone wants to know what I think - about almost anything scientific and quite a lot else - all I have to do is point them to my profile and my collected answers on I'm a Scientist get me out of here. It's been a week now since IAS concluded and the winners were announced, and I've had time to collect my thoughts, catch up on the day job, and reflect on taking part in this most excellent event.


I'm a Scientist get me out of here is aptly named. By Thursday of the second week I was - on balance - more relieved than disappointed to be evicted from the virtual jungle clearing, called the Chlorine Zone, that I'd been sharing with four other scientists. (Beyond the eviction thing the analogy with I'm a Celebrity breaks down. We five were not required to undertake challenges designed to freak out the squeamish, nor were we rewarded with discomfort-reducing morsels.)

No. I'm a Scientist is an altogether more civilised affair. It's a direct engagement with school children: meet-the-scientist online, in which school children can ask the scientists questions on more or less anything they like. There are two types of engagement, chat and ask. The live chat sessions are booked by teachers and scheduled during school science lessons - a bit like having a panel of scientists sitting at the front of the classroom answering questions, except it's online. Ask allows the children to submit their questions through the web page for the scientists to answer in their own time. Both types of engagement are moderated by the good people who run I'm a Scientist.

Why then - if I'm a Scientist is so wonderful (which it is) - was I relieved to be evicted? Well, it's because after nearly 2 weeks the questions just keep coming, and trying to keep up (especially given that we all have day jobs) became, if I'm completely honest, something of a test of endurance. Not counting the live chat school sessions I answered about 175 questions altogether. Other I'm a Scientist scientists who read this will scoff and say "pah, only 175!". And they'd be right - Sarah Thomas in my zone answered over 300 questions, and the awesome David Pyle in the potassium zone around 600! But even my paltry 175 questions took, I reckon, about 30 hours to answer, at an average of 10 minutes per question (which is going fast).

But I'm not going to whinge here about my inability to keep up (although I do strongly advise future I'm a Scientists to set aside plenty of question answering time). I really want to reflect on the questions themselves. Firstly I was slightly surprised there were so few on my specialist subject of robotics. Only 22 out of the 175. But they were good ones! Here are some of my favourites:
Some of these will form the basis of future blog posts. But it was the general science questions that were the most interesting, for instance:
Brilliant - it was a kind of science soap box! I got to pontificate on life on Mars, the end of the world and human extinction, global warming, nuclear power, dreams, light years, my favourite animal, my favourite car, string theory, the Higgs boson and dark matter. But the non-science questions make you stop and think - hmm, how much do I want to reveal about what I think about antidisestablishmentarianism, my religious beliefs, resurrection or the meaning of life..?

By far the biggest category of questions was about doing science: why and how you do science, what's the best thing about being a scientist, what you think you have achieved, or will achieve and so on (and quite a few on what you will do with the prize money if you win). These are great questions because they allow you to explode some myths about science: for instance that you have to be super smart to do science, or that one scientist can change the world on their own. I was especially flattered by
If you're thinking of putting yourself forward for I'm a Scientist I would say yes go for it. It's hugely good fun and massively worthwhile. But (1) set aside plenty of time, (2) be prepared to answer questions on more or less anything and (3) be honest about yourself and what you really think about stuff.

Here are some great blog posts from other March 2011 I'm a Scientists:
Suzie Sheehy's Reflections on I'm a Scientist
David Pyle's I'm a Scientist: 600 questions later
I'm a Scientist and I'm out of here

Sunday, March 13, 2011

Dilemmas of an ethical consumer

I have a dilemma and it is this. I'm torn between lusting after an iPad 2 and serious worries over the ethics of its manufacture.

There's no doubt that the iPad is a remarkable device (Jobs' hyperbole about magical and revolutionary is quite unnecessary). Several academic friends have told me that the iPad, and one application in particular - called iAnnotate - has changed their working lives. Having seen them demonstrate iAnnotate there's no doubt it's the academic's killer iPad app. You see, something we have to do all the time is read, review and edit papers, book chapters, grant applications and working documents. For me that normally means printing a paper out, writing all over it, then either tediously scanning the marked-up pages, uploading them to Google Docs and emailing the link, or constructing a large email with a list of all my changes and comments. What my friends showed me was them reviewing a paper on the iPad, writing all over it with a stylus, then just emailing back the marked-up document. Amazing - this could save me hours every week.

But here's the problem. The iPad may well be a marvel of design and technology but - like most high tech stuff these days - it's profoundly unsustainable and its manufacture is ethically questionable. Now to be fair to Apple, this is not a problem that's unique to them - and I'm prepared to believe that Apple does genuinely care about the conditions under which its products are manufactured and is doing all it can to pressure its subcontractors to provide the best working conditions for their employees. But the problem is systemic - the only reason that we can buy an iPad, or laptop, or flat screen TV, or any number of consumer electronics products for a few hundred pounds is that they're manufactured in developing countries where labour is cheap and working conditions are a million miles from what we would regard as acceptable. And I'm not even going to start here on the sustainability of those products - in terms of the true energy costs, and costs to the environment, of their manufacture across incredibly complex supply chains, or the environmental costs of their disposal after we've finished with them.

This may sound odd given that I'm a professional electronics engineer and elder-nerd, but I'm a late adopter of new technology. Always have been. (My excuse is that I was an early adopter of the transistor.) I also keep stuff for a very long time. My hi-fi system is 25 years old and is working just fine. My car is now 6 years old and I fully expect to run it for another 10 years - a modern, well-built and maintained car can easily last for 250,000 miles. The most recent high tech thing I bought was a new electric piano. It replaced my old one, bought in 1983, which had become unplayable because the mechanics of the keys had worn out, and I fully expect to keep my beautiful new Roland piano for 25 years. My MacBook Pro (yes, I do like Apple stuff) is now 5 years old and works just fine - not bad for something that's probably had 10,000 hours of use. In short I aim to practise what's sometimes called Bangernomics - except I try to apply the philosophy to everything, not just cars. (I'm not exactly a model consumer.)

Maybe that's part of the answer to my dilemma - get an iPad and run it for 20 years..? But even applying Bangernomics still won't salve my conscience when it comes to the ethics or sustainability of its manufacture. So, what am I to do?

Tuesday, March 01, 2011

Making sense of robots: the hermeneutic challenge

One of the challenges of the Artificial Culture project that we knew we would face from the start is that of making sense of the free-running experiments in the lab. One of the project investigators - philosopher Robin Durie - called this the hermeneutic challenge. In the project proposal Robin wrote:
what means will we be able to develop by which we can identify/recognise meaningful/cultural behaviour [in the robots]; and, then, what means might we go on to develop for interpreting or understanding this behaviour and/or its significance?
Now, more than 3 years on, we come face to face with that question. Let me clarify: we are not - or at least not yet - claiming to have identified or recognised emerging robot culture. We do, however, more modestly claim to have demonstrated new behavioural patterns (memes) that emerge and - for a while at least - are dominant. It's an open-ended evolutionary process in which the dominant 'species' of memes come and go. Maybe these clusters of closely related memes could be labelled behavioural traditions?

Leaving that speculation aside, a more pressing problem in recent months has been to try and understand how and why certain behavioural patterns emerge at all. Let me explain. We typically seed each robot with a behavioural pattern; it is literally a sequence of movements. Think of it as a dance. But we choose these initial dances arbitrarily - movements that describe a square or triangle for instance - without any regard whatsoever for whether these movement sequences are easy or hard for the robots to imitate.

Not surprisingly then, the initial dances quickly mutate into different patterns, sometimes more complex and sometimes less. But what is it about the robot's physical shape, its sensorium, and the process of estimation inherent in imitation that gives rise to these mutations? Let me explain why this is important. Our robots and you, dear reader, have one thing in common: you both have bodies. And bodies bring limitations: firstly because your body doesn't allow you to make any movement imaginable - only ones that your shape, structure and muscles allow - and secondly because if you try to watch and imitate someone else's movements you have to guess some of what they're doing (because you don't have a perfect 360-degree view of them). That's why your imitated copy of someone else's behaviour is always a bit different. Exactly the same limitations give rise to variation in imitated behaviours in the robots.
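Here is a toy illustration of that source of variation, treating a 'dance' as a list of (turn, distance) moves. The observer estimates each move with noise, and the imitator's body quantises the estimate to movements it can actually execute. All the noise levels and step sizes are invented; the real experiments are of course done with cameras and wheels, not floats.

```python
import random

def observe(dance, noise=0.1):
    """Watching another robot: each move is estimated with sensor noise."""
    return [(turn + random.gauss(0, noise * 3.14),
             dist + random.gauss(0, noise * 0.1)) for turn, dist in dance]

def embody(dance, turn_step=0.2, dist_step=0.02):
    """The imitator's body can only execute certain movements, so each
    estimated move is quantised to the nearest achievable one."""
    q = lambda x, step: round(x / step) * step
    return [(q(turn, turn_step), q(dist, dist_step)) for turn, dist in dance]

def imitate(dance):
    return embody(observe(dance))

# Watch a 'square' dance mutate over successive generations of imitation.
dance = [(1.57, 0.1)] * 4    # four 90-degree turns with 10cm sides
for generation in range(5):
    dance = imitate(dance)
    print(generation, [(round(t, 2), round(d, 3)) for t, d in dance])
```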

Now it may seem a relatively trivial matter to watch the robots imitate each other and then figure out how the mutations in successive copies (and copies of copies) are determined by the robots' shape, sensors and programming. But it's not, and we find ourselves having to devise new ways of visualising the experimental data in order to make sense of what's going on. The picture below is one such visualisation; it's actually a family tree of memes, with parent memes at the top and child memes (i.e. copies) shown branching below parents.

Unlike a human family tree each child meme has only one parent. In this 'memeogram' there are two memes at the start, numbered 1 and 2. 1 is a triangle movement pattern, and 2 is a square movement pattern. In this experiment there are 4 robots, and it's easy to see here that the triangle meme dominates - it and its descendants are seen much more often.

The diagram also shows which child-memes are high quality copies of their parents - these are shown in brown with bold arrows connecting them to their parent-memes. This allows us to easily see clusters of similar memes, for instance in the bottom-left there are 7 closely related and very similar memes (numbered 36, 37, 46, 49, 50, 51 and 55). Does this cluster represent a dominant 'species' of memes?


Also posted on the Artificial Culture project blog.

Sunday, February 27, 2011

A sick robot dog called Max

A friend has asked me to check out her Aibo robot dog, called Max. Here he is:

Cute eh?

He's an early ERS-210 model. He charges up ok, but there's no response on switching on. Hmm. I suspect the programme memory stick might have become corrupted. This will need a deeper examination...

Monday, February 21, 2011

FIRA 2012 Robot World Cup to be hosted by the Bristol Robotics Lab

We're all very excited because FIRA (the Federation of International Robot soccer Association), which runs an annual competition for robot soccer (and other robot sports), has awarded the 2012 event to the Bristol Robotics Lab. The 2010 event was held in Bangalore, India: check here for the web pages with 2010 results and some terrific videos. This year FIRA 2011 will be in Kaohsiung, Taiwan.

FIRA 2012 will run from 20 - 25 August 2012, just a week or so after London 2012. Alongside FIRA 2012 will be two robotics conferences: the FIRA Congress and TAROS 2012 (Towards Autonomous Robotic Systems). Here is the (under development) FIRA-TAROS 2012 web site. Here is the joint University of Bristol, UWE press release announcing the event.

The FIRA robot world cup games currently fall into 7 categories, each defined by the type of robot and, typically, with its own set of rules. The first six categories are for real physical robots; the 7th - SimuroSot - is played entirely in simulation. Here's a brief summary of the 6 real-robot categories, with links to the full descriptions and rules on the FIRA web pages.
  • HuroSot is the main category for bipedal (walking and running) humanoid robots. It is also the most comprehensive category - in addition to soccer the category includes competitions for basketball, wall climbing, weight lifting and marathon running. HuroSot robots can be up to 130cm in height, and weigh up to 30kg. We will be entering a Bristol team for HuroSot. Here are some nice videos of HuroSot competitions in 2009.
  • Amiresot is a simple one-a-side soccer game for the small Amire wheeled robot, which must be fully autonomous with its own vision system. AmireSot robots play with a yellow tennis ball.
  • MiroSot is the Micro Robot soccer game for wheeled robots. It's a three-a-side game (one player can be a goalkeeper), in which an external vision system tracks the position of robots - and the ball - and an external computer system computes and relays moves to the robots. Robots cannot be larger than 7.5cm x 7.5cm x 7.5cm and they play with an orange golf ball. Here is a page with a video of a 2009 MiroSot game.
  • NaroSot is similar to MiroSot but with smaller wheeled robots (4cm x 4cm x 5.5cm) and is a five-a-side game. NaroSot robots play with an orange ping-pong ball.
  • AndroSot is a three-a-side game for fully autonomous 'android' robots between 30 and 60cm in height. Here is a video of a 2009 AndroSot game.
  • RoboSot is a game for larger wheeled robots (20cm x 20cm x any height). It's a three-a-side game and the robots must use on-board vision, although computation may be off-board. RoboSot robots play with a yellow/green tennis ball.
Here are some of the robots entered in past competitions (from the FIRA web pages):
[Photos from the FIRA web pages: HuroSot, MiroSot, NaroSot and RoboSot robots.]

Wednesday, February 16, 2011

On Twitter and Machiavellian Intelligence

Four short months and 135 tweets ago I wrote about joining Twitter: slightly reluctant, confused by how it worked and - if I'm completely honest - a bit sniffy about whether I could be bothered with it at all.

But just a week ago I stunned myself by realising that Twitter is now the first thing I check in the morning. Not email. After the best part of 15 years of ritually checking my email first, Twitter has knocked email off the top spot(1). So what happened? What is it about Twitter that is so compelling, so addictive? Why do I love Twitter?

Actually that was just the first surprise. The second was to realise I was so pleased when my number of followers reached 50, then 60 - and last week 70.

But the thing that shocked me rigid a week ago was this. I found myself wondering how I might contrive to get person X (whom I admired) to notice me and become my follower. What the hell was I thinking! Who's in control here - me or Twitter? Do I have obsessive compulsive Twitter syndrome? Do I need help - maybe go cold turkey for a while?

But then I started thinking about it and realised that there is a very ancient instinct at work here, and Twitter is just tickling that instinct in me. Perfectly. I'm talking about Machiavellian intelligence: the kind of social intelligence that is present to some degree in all primates, and well developed in chimpanzees and monkeys such as rhesus macaques. So what is this kind of social intelligence? Well, if you find yourself thinking: I'm going to make friends with him and pretend I like him, but not because I want to be his friend. Oh no: he has a friend that I really want to be friends with and - through this deception - I might achieve that goal. Then you are engaged in the social politics of Machiavellian intelligence. For anyone interested in intelligence, the evolution of human intelligence, or indeed AI, Machiavellian intelligence is very interesting because it requires Theory of Mind. It was probably already well developed in the most recent common ancestor of humans and chimpanzees, around 6 million years ago.

I don't know whether it was intentional, but the very smart people who created Twitter have somehow built an ecosystem perfectly suited to this kind of game. The basic ingredients are these: firstly, everyone has followers and people they follow (following). The fact that you can easily see the number of followers and following for those you follow, or who follow you, means that very quickly you establish exactly where you are in the Twitter social pecking order. These numbers mean a lot to us. The alpha-tweeters are those with huge numbers of followers. They are, in the terminology of memetics, meme-founts - leaders of fashion. But even for those of us with modest circles of followers and following, the balance of numbers is significant. Our Machiavellian instinct tells us that those with a greater number of followers than following are, on balance, leaders, whereas those whose following outnumbers their followers are, on balance, followers and therefore of lower Twitter status. Please understand I'm absolutely not saying they are less worthy individuals, only that this is what our Machiavellian instinct tells us in the game of Twitter.

The second ingredient that is, I think, significant is the fact that you can easily see which followers or following you have in common with someone. So it is not just a matter of numbers, it's personal. Among those you follow, and those who follow you, you really can work out very quickly who is connected to who - and the connections have social structure. If I and someone else follow each other, then we are - in a sense - equal. If, on the other hand, I see that I'm following someone else, but they don't follow me, then my Machiavellian instinct places them higher up the Twitter social scale than me. Again this may not correlate at all to real-life standing. The point I'm making is that we can't help making these Machiavellian inferences - and Twitter makes it so easy.

This brings me to the third and most brilliant ingredient: re-tweeting. The politics of re-tweeting are fascinating and complex. Having one of your tweets re-tweeted is the equivalent of being stroked, and we love being stroked. I certainly experience a quantum of happiness(2) when one of my tweets is re-tweeted, and I'm even happier if it's re-tweeted several times. Conversely, I'm disappointed if a tweet that I thought was especially witty, insightful or apposite to current events fails to be re-tweeted. Indeed it appears to be good manners to thank those who have RT'd a tweet - which says a lot about how much we value RTs. And of course to be re-tweeted by a Twitter celebrity is a precious honour, the equivalent of a favour from one of the princesses of the Twitter court.

So Twitter is powerful stuff. It's not just a micro-blogging site, it is a quite remarkable place in which we can play out to the full our ancient instinct for Machiavellian social politics.

And of course Twitter has proven itself to be a marvellous vehicle for grass-roots political activism. Is that something to do with Machiavellian intelligence too?

So now I don't feel quite so bad about my new-found Twitter addiction.


(1) Apart from a short spell of Guardian Soulmates 3 years ago:))
(2) I propose a new unit for a quantum of happiness: the RT (re-tweet).

Wednesday, February 02, 2011

How Intelligent are Intelligent Robots?

When giving talks about intelligent robots I've often been faced with the question "how intelligent is your robot?", asked with a tone of voice that suggests "...and should we be alarmed?" It's a good question but one that is extremely difficult - if not impossible - to answer properly. I usually end up giving the rather feeble answer "not very", and I might well add "perhaps about as intelligent as a lobster" (or some other species that my audience will regard as reassuringly not very smart). I'm always left with an uneasy sense that I (and robotics in general) ought to be able to give an answer to this perfectly reasonable question. (Sooner or later I'm going to get caught out when someone follows up with "and exactly how intelligent is a lobster?")

Given that the study of Artificial Intelligence is over 60 years old, and that of embodied AI (i.e. intelligent robotics) not much younger, the fact that roboticists can't properly answer the question "how intelligent are intelligent robots?" is, to say the least, embarrassing. It is, I believe, a problem that needs some serious attention.

Let's look at the question again. There is an implied abbreviation here: what my interlocutor means is, how intelligent are intelligent robots when compared with animals and humans? What's more, we all assume a kind of 'scale' of intelligence - with humans (decidedly) at the top - and, furthermore, a sense that a crocodile is smarter than a lobster, and a cat smarter than a crocodile. Where, then, would we place a robot vacuum cleaner, for instance, on this scale of animal intelligence?

Ok. To answer the question we clearly need to find a single measure, or test, of intelligence that is general enough to be applied to robots, animals or humans, with a single scale broad enough to accommodate everything from human intelligence to that of simple animals. This metric - let's call it GIQ, for General (non-species-specific) Intelligence Quotient - would need to be extensible downwards, to accommodate single-celled organisms (or plants for that matter) and of course robots, because they're not very smart. Thinking ahead, it should also be extensible upwards, for super-human AI (which we keep being told is only a few decades away). Does such a measure exist already? No, I don't think it does, but I did come across this news posting on physorg.com a few days ago with the promising opening line How do you use a scientific method to measure the intelligence of a human being, an animal, a machine or an extra-terrestrial? It refers to a paper titled Measuring Universal Intelligence: Towards an Anytime Intelligence Test. I haven't been able to read the paper (it is behind a paywall) but - even from the abstract - it's pretty clear the problem isn't solved. In any event I'm doubtful, because the news write-up talks of "interactive exercises in settings with a difficulty level estimated by calculating the so-called Kolmogorov complexity", which suggests a test that the agent being tested has to actively engage in. Well, that's not going to work if you're testing the intelligence of a spider, is it?

So let's set aside the problem of comparing the intelligence of robots with animals (or ET) for a moment. Are there existing non-species specific intelligence measures? This interesting essay by Jonathan Ball: The question of animal intelligence outlines several existing measures based on neural physiology. In summary they include:
  • Encephalization Quotient (EQ): which measures whether the brain of a given species is bigger or smaller than would be expected, compared with that of other animals of its size (winner: Humans) - see the worked example after this list
  • Cortical Folding: a measure based on the degree of cortical folding (winner: Dolphins)
  • Connectivity: a measure based on comparing the average number of connections per neuron (winner: Humans)
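Of these, the EQ at least is simple enough to compute for yourself. The sketch below uses Jerison's classic formula - expected brain mass grows roughly as the two-thirds power of body mass - with rough, textbook-ish masses, so treat the exact numbers as illustrative only.

```python
def eq(brain_g, body_g):
    """Encephalization Quotient: actual brain mass over the mass expected
    for a typical mammal of that body size (Jerison's formula, in grams)."""
    expected = 0.12 * body_g ** (2 / 3)
    return brain_g / expected

# Rough, illustrative masses in grams.
for species, brain, body in [("human", 1350, 65_000),
                             ("dolphin", 1600, 150_000),
                             ("cat", 30, 4_000)]:
    print(f"{species}: EQ = {eq(brain, body):.1f}")
# Prints roughly: human 7.0, dolphin 4.7, cat 1.0 - the familiar ranking.
```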
Interestingly, if we take the connectivity measure - which Jonathan Ball suggests offers the greatest degree of correlation with intelligence - then if our robot is controlled by an artificial neural network we might actually have a common basis for comparison of human and robot intelligence.

So, even if none of them are entirely satisfactory it's clear that there has been a great deal of work on measures of animal intelligence. What about the field of robotics - are there intelligence metrics for comparing one robot with another (say a vacuum cleaning robot with a toy robot dog)? As far as I'm aware the answer is a resounding no. (Of course the same is not true in the field of AI where passing the Turing Test has become the iconic - if controversial - holy grail.)

But all of this presupposes, firstly, that we can agree on what we mean by 'intelligence' - which we do not. And secondly, that intelligence is a single thing that any one animal, or robot, can have more or less of* - which is also very doubtful.


*An observation made by an anonymous reviewer of one of my papers, for which I am very grateful.

Monday, January 24, 2011

New experiments in embodied evolutionary swarm robotics

My PhD student Paul has started a new series of experiments in embodied evolution in the swarm robotics lab. Here's a picture showing his experiment with 3 Linux e-puck robots in a small circular arena together with an infra-red beacon (at about 2 o'clock).


The task the robots are engaged in is collective foraging for food. Actually there's nothing much to see here because the food items are virtual (i.e. invisible) blobs in the arena that the robots have to 'find', then 'pick up' and 'transport' to the nest (again virtually). The nest region is marked by the infra-red beacon - the robots 'deposit' the food items in the pool of IR light in the arena just in front of the beacon. The reason we don't bother making physical food items and grippers, etc., is that this would entail engineering work that's really not important here. You see, we are not so interested in collective foraging itself - it's just a test problem for investigating the thing we're really interested in, which is embodied evolution.

The point of the experiment is this: at the start the robots don't know how to forage for food; during the experiment they must collectively 'evolve' the ability to forage. Paul is here researching the process of collective evolution. Before explaining what's going on 'under the hood' of these robots, let me give some background. Evolutionary robotics has been around for nearly 20 years. The idea is that instead of hand-designing the robot's control system we use an artificial process inspired by Darwinian evolution, called a genetic algorithm. It's really a way of automating the design. Evolutionary algorithms have been shown to be a very efficient way of searching the so-called design space and, in theory, will come up with (literally evolve) better solutions than we can invent by hand. Much more recent is the study of evolutionary swarm robotics (which is why there's no Wikipedia entry yet), which tackles the harder problem of evolving the controllers for individual robots in a swarm such that, collectively, the swarm will self-organise to solve the overall task.

Still with me? Good. Now let me explain what's going on in the robots of Paul's experiment. Each robot has inside it a simulation of itself and its environment (food, other robots and the nest). That simulation is not run once, but many times over within a genetic algorithm inside the robot. Thus each robot is running a simulation of the process of evolution, of itself, in itself. When that process completes (about once a minute), the best-performing evolved controller is transferred into the real robot's controller. Since the embodied evolutionary process runs through several (simulated) generations of robot controller, the final winner of each evolutionary competition is, in effect, a great-great-...-grandchild of the robot controller at the start of each cycle. While the real robots are driving around in the arena foraging (virtual) food and returning it to the nest, simulated evolution is running - in parallel - as a background process. Every minute or so the real robots' controllers are updated with the latest generation of (hopefully improved) evolved controllers, so what we observe is the robots gradually getting better and better at collective foraging. If you think this sounds complicated - it is. The software architecture that Paul has built to accomplish this is ferociously complex, and all the more remarkable because it fits within a robot about the size of a salt shaker. But in essence it is like this: what's going on inside the robots is a bit like you imagining lots of different ways of riding a bike, over and over, inside your head, while actually riding a bike.
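Stripped of all the engineering, the cycle each robot runs might be caricatured like this. Everything below - the genome encoding, the dummy fitness function, the population sizes - is a placeholder of my own; Paul's real architecture is vastly more sophisticated (and the internal simulation models robot, arena and food, not a one-line formula).

```python
import random

POP, GENS = 20, 10   # illustrative population size and generations per cycle

def simulate_foraging(genome):
    """Stand-in for the robot's internal simulation of itself: score how
    well this candidate controller forages. Here: a dummy fitness landscape."""
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(seed_genome):
    """One embedded evolutionary cycle, run as a background process."""
    pop = [[g + random.gauss(0, 0.1) for g in seed_genome] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=simulate_foraging, reverse=True)
        parents = pop[:POP // 2]                # keep the better half
        children = [[g + random.gauss(0, 0.05) for g in random.choice(parents)]
                    for _ in range(POP - len(parents))]
        pop = parents + children
    # The winner is, in effect, a great-great-...-grandchild of the seed.
    return max(pop, key=simulate_foraging)

controller = [random.random() for _ in range(8)]   # e.g. neural-net weights
for cycle in range(5):        # in reality: repeated over the robot's lifetime
    controller = evolve(controller)              # ~once a minute, in parallel
    print(cycle, round(simulate_foraging(controller), 4))  # should creep upward
```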

Putting a simulation inside a robot is something roboticists refer to as 'robots with internal models', and if we are to build real-world robots that are more autonomous, more adaptable - in short, smarter - this kind of complexity is something we will have to master.

If you've made it this far, you might well ask, "what if the simulation inside the robot is an inaccurate representation of the real world - won't that mean the evolved controller will be rubbish?" You would be right to worry. One of the problems that has dogged evolutionary robotics is known as the 'reality gap': the gap between the real world and the simulated world, which means that a controller evolved (and therefore optimised) in simulation typically doesn't work very well - or sometimes not at all - when transferred to the real robot and run in the real world. Paul is addressing this hard problem by also evolving the embedded simulators at the same time as evolving the robots' controllers; a process called co-evolution. This is where having a swarm of robots is a real advantage: just as we have a population of simulated controllers evolving inside each robot, we have a population of simulators - one per robot - evolving collectively across the swarm.



Related blog posts:
Environment-driven distributed evolutionary adaptation
Walterian creatures