Wednesday, January 28, 2009

WWR @ Disneyland in January

Just gave a Walking with Robots talk for about 450 school children, at Disneyland, Paris. The Gaumont cinema to be precise, on the Disneyland complex. This was the first time I've given a talk in a cinema - with my slides projected onto the giant-sized cinema screen behind me!

My audience, who I discovered had been bussed from various schools across the UK, were attending a Royal Institution Study Experience - a kind of science winter camp. Even allowing for the cold and grey January weather - what a great way to spend a few days of intensive hands-on science.

Wednesday, January 14, 2009

Robots for Risky Interventions

Returning on the Eurostar from a really interesting workshop in Brussels, on Robots for Risky Interventions and Environmental Surveillance (RISE 09). The focus of the workshop was a number of EU funded projects aimed at developing multi-robot systems in safety-critical applications. One project called GUARDIANS, led by Jacques Penders at Sheffield Hallam, is aimed at providing firefighters with robot outriders, supplying sensing and navigation that - in effect - give the firefighter extended super-senses. I learned that one of the most dangerous situations they have to deal with is large warehouse fires, which quickly fill with smoke, making it very easy for firefighters to become lost and disoriented in the labyrinth of aisles between storage racks. But the flat smooth warehouse floor and grid-like layout is of course ideal for mobile robots, making this a really good application for robots to prove themselves useful in a serious and worthwhile real-world task.

I gave a talk setting out the potential of using a swarm robotics approach to safety-critical applications. The swarm approach differs from the conventional multi-robot systems approach in its control paradigm. A multi-robot system will typically use a centralised command and control system to both direct the actions of individual robots and coordinate the whole group. In contrast a swarm uses a completely decentralised, distributed approach, in which each robot decides how to act autonomously - using local sensing and communication with neighbouring robots - so that the swarm self-organises to achieve the overall task or mission. Although the robots may look the same in both cases, the swarm approach is radically different from a systems control point of view. But the swarm approach offers the potential of much higher resilience to failure (of individual robots, for instance).
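To make the control-paradigm difference concrete, here is a minimal sketch of the kind of loop every robot in a swarm runs. It's written in Python, and every range and gain is invented for illustration - this is not the controller of any real swarm, just the shape of the decentralised approach: no central node, each robot acting only on what it senses locally.

```python
import math
import random

SENSE_RANGE = 0.5   # local sensing radius (assumed, in metres)
AVOID_RANGE = 0.1   # minimum comfortable separation from the local centroid
STEP = 0.02         # distance moved per control cycle

def swarm_step(positions):
    """One decentralised control cycle: every robot decides its own move
    using only the positions of neighbours it can sense locally."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Each robot senses only neighbours within SENSE_RANGE.
        neighbours = [(nx, ny) for j, (nx, ny) in enumerate(positions)
                      if j != i and math.hypot(nx - x, ny - y) < SENSE_RANGE]
        if not neighbours:
            # No neighbours sensed: random walk to search for the swarm.
            angle = random.uniform(0, 2 * math.pi)
            new_positions.append((x + STEP * math.cos(angle),
                                  y + STEP * math.sin(angle)))
            continue
        # Move toward the centroid of the sensed neighbours (aggregation),
        # unless already comfortably close.
        cx = sum(nx for nx, _ in neighbours) / len(neighbours)
        cy = sum(ny for _, ny in neighbours) / len(neighbours)
        d = math.hypot(cx - x, cy - y)
        if d > AVOID_RANGE:
            new_positions.append((x + STEP * (cx - x) / d,
                                  y + STEP * (cy - y) / d))
        else:
            new_positions.append((x, y))
    return new_positions
```

The point is not the aggregation behaviour itself but the structure: there is no robot anywhere holding global state, so you can delete any robot (or add ten more) and nothing else needs to change - which is where the resilience comes from.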

Thursday, November 20, 2008

Swarm Tolerance to Failure

Jan Dyre Bjerknes has made a terrific breakthrough with his PhD research that I'm very excited about: see the YouTube video (courtesy of Jan Dyre) below.

Let me explain what's going on here, and why I'm so excited about it. The swarm of 10 e-puck robots, starting on the left of the arena, are attracted to the beacon (the black box) on the right of the arena. Crucially the swarm's movement toward the beacon is not directly programmed into the robots, it is what we call an emergent property of the swarm. I won't explain how it works here, except to say that the robots need to - in a sense - cooperate. One robot can't make it to the beacon on its own, nor two, nor three or four. Five is about the fewest number that can get to the beacon.



If you watch the movie clip carefully you will see that a few seconds into the experiment Jan Dyre has arranged that two of the robots fail: you can see them stop moving. In fact they fail in a really bad way. Their electronics and software still work; only the motors have failed. But because the swarm works cooperatively, the failed robots have the effect of anchoring the swarm and impeding its movement toward the beacon. However, what the clip also shows is that the 'force' of the swarm movement (of the 8 robots still working) is, after a while, enough to overcome the 'anchoring force' of the two failed robots. Bearing in mind that partial failures are the worst kind, 20% is a massive failure rate, so this experiment demonstrates the very high level of fault tolerance in a robot swarm.
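The real mechanism is emergent and subtler than anything I can show in a few lines, but a toy model captures the anchoring effect. In this Python sketch (the beacon position, step size and cohesion weight are all invented, and this is not the e-pucks' actual controller) each working robot blends attraction to the beacon with cohesion toward the swarm centroid, while failed robots simply never move:

```python
import math

BEACON = (2.0, 0.0)  # assumed beacon position
STEP = 0.05          # distance moved per control cycle
COHESION = 0.5       # weight pulling each robot toward the swarm centroid

def taxis_step(positions, failed):
    """Toy model of swarm taxis with partial failures. Failed robots
    (motors dead, electronics alive) never move, so the cohesion term
    drags the rest of the swarm back toward them: the anchoring effect."""
    out = []
    for i, (x, y) in enumerate(positions):
        if i in failed:
            out.append((x, y))   # motor failure: this robot cannot move
            continue
        cx = sum(p[0] for p in positions) / len(positions)
        cy = sum(p[1] for p in positions) / len(positions)
        # Blend beacon-seeking with cohesion toward the swarm centroid.
        dx = (1 - COHESION) * (BEACON[0] - x) + COHESION * (cx - x)
        dy = (1 - COHESION) * (BEACON[1] - y) + COHESION * (cy - y)
        d = math.hypot(dx, dy) or 1.0
        out.append((x + STEP * dx / d, y + STEP * dy / d))
    return out
```

Run it with 2 of 10 robots failed and the anchors visibly hold the others back, yet the eight working robots still make most of the journey toward the beacon - a crude echo of what the video shows.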

Tuesday, September 02, 2008

E-pucks with spiky hats

Here are some pictures of e-pucks sporting their new spiky hats (click to enlarge). The purpose of these hats is to allow us to mark each e-puck with 3 reflective spheres, as shown on the left e-puck in the pictures. The reflective spheres allow the e-pucks to be tracked by our Vicon tracking system, and the grid of spikes means that we can provide each e-puck with its own unique pattern of 3 reflective spheres. Jan Dyre (who took these photos) tells me that there are 92 ways of uniquely arranging 3 spheres on this 6x4 grid. The Vicon system will, I'm advised, be able to track each robot in the swarm by recognising its unique pattern of 3 spheres. The Vicon system is due to be set up by their engineers this coming Thursday: it will be great to see it working.

Monday, August 04, 2008

Richard Vaughan and Marco Dorigo visit the lab

Terrific to have visits today from both Richard Vaughan and his team from Simon Fraser University in Vancouver, Canada, and Marco Dorigo from the Université Libre de Bruxelles. Both Richard and Marco are luminaries in the field of Swarm Robotics: Richard for his part in developing the Player/Stage simulation tools and Marco for more or less pioneering the field of Swarm Intelligence and subsequently leading swarm robotics projects such as SWARM-BOT.

Friday, August 01, 2008

Heart Robot and BSc Robotics

It has been brilliant to see the amazing coverage of the Heart Robot project during the last two days. Check out this piece on BBC news online, or Google Heart Robot. Heart Robot was jointly conceived by my colleagues Matthew Studley (lecturer in Robotics), Claire Rocks (research fellow) and BSc robotics student David McGoran. Matt and Claire wrote the bid for funding from the EPSRC partnerships for public engagement scheme, with David as a named researcher. I don't need to describe Heart Robot because you can see the whole story on the excellent project web pages here: http://www.heartrobot.org.uk/.

The reason for this post is to say Way to go Team, and to anybody out there who might be thinking of studying robotics at university: this is what can happen when you come and do robotics at UWE!

Tuesday, July 15, 2008

How to make a fool of yourself on national radio

Being interviewed live on national radio is an interesting experience.

It's not so bad when you're in a studio face to face with the interviewer. Then there's a proper sense of occasion, of being there for a purpose, something to rise to.

But being interviewed by telephone is an altogether different and more risky proposition. Why risky? Let me set the scene. You've agreed to be interviewed by a national radio station that has, hitherto, never blipped onto your cultural radar. The producer called and asked if you would be able to comment, in the science slot of the breakfast show, about a recent newspaper article listing the top 10 reasons that mankind could be wiped out this century. In particular the one that predicts mankind will, within 40 years, build super-intelligent robots who promptly (and ungratefully) enslave their creators. Quickly passing over your observation that said producer seems surprisingly laid back, you say to yourself - can't be so bad - they have a science slot. And of course you would be grateful for the opportunity to explain why this particular prediction is laughably absurd.

You rise early the following morning, after checking the news piece and giving some thought to how you can counter this particular piece of futurology. (Which turns out to be based on the mistaken assumption that because processing power is doubling roughly every 2 years, then robot intelligence is doing the same.)

With 20 minutes to spare you find the radio station on the Interweb and click the listen now button. The presenter starts to talk about robots-taking-over-the-world and invites a phone in. He wants listeners to phone with mad robot inventions and introduce them with a robot voice. Hmmm. At this point you begin to realise that the science slot doesn't have quite the level of gravitas that you might have hoped for.

Then the phone rings. Butterflies. Ok, normal. It's the laid-back producer again. After a few minutes listening to the radio on the phone you hear yourself being introduced and you're on. This bit is always weird. You're on the phone with a few hundred thousand people on the other end. Just focus. It's only a conversation with some guy. Never mind that he's called Xane. Or the fact that he just egregiously misquoted the article by inserting the words 'taking-over-the-world' between 'probability of super-intelligent robots' and 'high'.

The first couple of questions are kind of ok. More or less what you expected. You carefully explain that no, in your opinion it's extremely unlikely that we will build robots with super-human intelligence in the next 40 years and, even if we did, why should they be evil and take over the world (or more to the point why would we make them evil). Then some relatively innocuous questions: What is the most powerful robot in the world - is it Asimo? Er no, Asimo is actually remotely controlled by a team of 6. What about that freaky monkey robot with the robot arm? Well, that's not so much a robot as work to improve neural electronic interfaces to help people with smart prostheses.

Then just when you think it's all over you get the inevitable mad-question-at-the-end.

Q. But if robots did take over the world, what would we call them?

A. I really don't think robots are going to take over the world. 

Q. (More insistently this time) Yes, but if they did. What would we call them?

A. No, they really aren't going to take over the world.

Q. (Even more insistently) But what if they did? What would we call them?

Then you make a fool of yourself on the radio by wearily saying 'evil robot master' or somesuch nonsense, thus eliciting the triumphal response from Xane and his co-presenter: Aha! See, the professor says so. Robots really are going to take over the world.

Monday, June 30, 2008

Linux e-puck


I'm very excited because my colleague Wenguo Liu has completed testing of a Linux plug-in board for the e-puck robot. This board very significantly enhances the e-puck - making it into a fully grown-up Linux robot. Here it is with its USB Wifi stick.

Tuesday, April 01, 2008

e-Puck hearing experiment

Here's a video clip, by Davide Laneri, of one e-puck sounding a tone on its loudspeaker, and the other hearing the tone and turning toward it. In order to make this work we've had to (1) turn the loudspeaker of the e-puck on the right so that it's facing directly forward (like a mouth) and (2) add 'ears' to the e-puck so that it has directional hearing. We've not done extensive tests because we've now decided to focus our effort, here in Bristol, on imitation of movement instead of sound.
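For anyone curious how directional hearing with two 'ears' can work, here is a hedged sketch of one standard approach - not necessarily what Davide implemented. The idea: estimate the inter-ear time delay by cross-correlation, then convert it to a bearing with the far-field approximation delay = (d/c) * sin(bearing). The ear separation and sample rate below are invented for illustration.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
EAR_SEPARATION = 0.06    # assumed distance between the two 'ears', in metres

def delay_by_cross_correlation(left, right, sample_rate):
    """Brute-force cross-correlation: find the lag (in seconds) at which
    the right-ear signal best lines up with the left-ear signal.
    A positive result means the sound reached the right ear later."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n // 2, n // 2 + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(max(0, -lag), min(n, n - lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / sample_rate

def bearing_from_delay(delay_s):
    """Convert delay to bearing via delay = (d/c) * sin(bearing).
    With the sign convention above, a positive bearing means the source
    is off to the robot's left, so the robot should turn left."""
    s = delay_s * SPEED_OF_SOUND / EAR_SEPARATION
    s = max(-1.0, min(1.0, s))   # clamp: noise can push |s| past 1
    return math.asin(s)
```

A real robot would do this on short audio frames in a loop, turning a little toward the estimated bearing each cycle until the tone is dead ahead.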

Wednesday, March 19, 2008

Just returned from the Symbrion Kick-off meeting

I just returned from an amazing meeting in Stuttgart with an amazing group of people: the project kick-off meeting for the Symbrion project. So what is Symbrion and what are we trying to achieve? Well, the idea is to build a swarm of mobile robots that can autonomously self-assemble into an artificial organism in which each individual robot becomes - in effect - a cell in a kind of artificial multi-cellular organism. The idea of self-assembling robots is not new, but in Symbrion the robots will be able to function as a swarm but then, if the situation demands, self-assemble into a 3 dimensional organism; then if required disassemble and form into a different kind of 3D organism. Imagine the swarm coming to a barrier too high to cross then autonomously forming an 'organism' to climb over the wall, then disassembling and reassembling into a different morphology to, for instance, collectively transport an object too large for a single robot to carry. In this way the Symbrion organism will be able to morph between different 3D forms as required by the situation.

Not surprisingly, Symbrion is an extremely challenging project with very tough technical milestones. Our first task is to design and build the Symbrion robots - each robot will need to operate autonomously and have its own power, computation, sensing and motors for mobility but - in addition - have the ability to physically dock with other Symbrion robots on several sides. Furthermore the docking mechanism will need to be motorised so that, once attached, several robots will be able to bend in 3D. Thus the swarm will be able to self-assemble into a 2D planar structure but then, once assembled, lift itself into a 3D shape - for instance from an X shape into a 4-legged walker.

-------------
Postscript: following the London press launch ITN posted the TV interview onto YouTube, see Robots with a mind of their own.

Thursday, February 28, 2008

How and Why do we have Culture?

In the run up to Science Week (7-16 March) the BA have been asking both the public and scientists for their big questions (see my previous blog What do Aliens look like). When I was asked for my Big Question I didn't have to think too hard, because I'm part of just about the most exciting research project of my life. That project is called The Emergence of Artificial Culture in Robot Societies, and sets out to answer the question "how can culture emerge as a novel property of social animals?" or to put it another way "how and why do we (humans) have culture?".

Of course you may be wondering what business a robot guy (me) has to do with a question of - essentially - evolutionary anthropology, which on the face of it has nothing to do with robotics. Well, firstly I've spent the last ten years working on Swarm Robotics - basically building robot swarms to try and understand how swarm intelligence works, and a robot swarm is a kind of primitive society of robots. Secondly, that work has opened my eyes to the extraordinary power of emergence, or self-organisation*. And thirdly, I'm passionate about trying to work on research problems that completely cross discipline boundaries, ideally across the arts/humanities, social- and natural-science boundaries. The question "how and why do we have culture" is just such a question.

I won't explain now how we intend to address this research question in detail. Suffice it to say that we are going to use a radical approach - which is to build a society of real robots, program them with (what we believe to be) a necessary and sufficient set of social behaviours, then observe them free running. Of course the big question then is will anything happen at all that is capable of being robustly interpreted as evidence of emerging proto-cultural behaviours and - if it does - would we even recognise it (since this will be an emerging robot- not human- culture; an exo-culture if you will).

I'm privileged to be part of a team that includes a computer scientist, theoretical biologist, philosopher, social scientist and art historian/cultural theorist. For more detail here is the announcement on EPSRC grants on the web. Not least in order to mitigate the risk that we fail to recognise anything interesting that might emerge, but also because we strongly believe in Open Science, the whole project will be on the web - live robots, data and all - as soon as we're up and running.

And here's a picture of 2 of the robots we plan to use (called e-pucks). We've added some 'ears' so that they can chirp at each other; the artificial culture lab will have around 50-60 of these robots.

----------------------------------------------------------
*Emergence is - in my view - both ubiquitous (everywhere from physics, to life, intelligence and culture, to galaxies) and far more important than I think we realise. I would go so far as to say that I believe natural selection (although beautiful and powerful) is on its own insufficient to explain the astonishing complexity of many biological and societal systems. I think you need natural selection + emergence.

Saturday, February 23, 2008

What do Aliens look like?

Amazing meeting yesterday afternoon at the Science Museum. Here's the story: the British Association for the Advancement of Science (BA) has been asking for science questions via their web pages for a while, in advance of Science Week, which is 7-16 March. A question that keeps coming up is "what do aliens look like?" so, to address that question, the BA pulled together a small panel which met yesterday. Our brief was to come up with some plausible alien life forms that can be visually presented during Science Week. The keyword here is plausible. It would be easy to pluck super-exotic aliens from the rich fauna of SF but then very difficult to explain the science. Of course, the fictional evolutionary science, or biochemistry, or ecosystems of even plausible aliens is going to have gaping holes, but we were tasked with trying to minimise those.

So, what did we come up with..? I can't say now but all will, I hope, be revealed during Science Week (and in this blog, then). ...and here is the press release describing our life forms.


Visualisation by Julian Hume, Research Fellow at the Natural History Museum/University of Portsmouth.

Wednesday, February 20, 2008

Blogging the Robot Bloggers

Here at @Bristol Walking with Robots is running a 3-day workshop for students and researchers in robotics, artificial intelligence and animatronics. It's a kind of masterclass whose objective is to train those students and researchers in science communication, with the hope that they'll be motivated to get involved in public engagement. I'm not going to talk about the training workshop here because it's described elsewhere in both a recent UWE press release Making Science Fun for Everyone and on EPSRC grants on the web here.

What I want to blog about here are the robot bloggers on wwrobots.wordpress.com.

The workshop has four streams with about a quarter of the students signing up for each. One of those streams is New Media, which is training its group in online reporting. Here is a picture of the online newsroom.

These students have been tasked with publishing stories from the other three groups. Remarkably, the blog was up and running by mid-morning Monday and has provided a more-or-less real-time record of the workshop since then. The newsroom has been busy the whole time, but never more so than right now. This afternoon the whole workshop has had an amazing opportunity to put their activities to the test on the floor of @Bristol, and since this is the half-term holidays it's pretty busy out there (to put it mildly). Our robot bloggers have been out on the floor too, recording with still photos, video and text, the extraordinary excitement of the interaction between our students, their robotics, AI and animatronics activities, and the children and families. Scroll down wwrobots and you'll quickly get a sense of that excitement.

Of course I'm focussing on just one aspect of the workshop but it was, for me, perhaps the most surprising. Firstly, by opening my eyes to the potential for Web 2.0 media to provide - in effect - real time interactive reporting; a kind of worldwide outside broadcast. It seems to me that, with this approach, there is remarkable potential to enhance and extend the reach of many kinds of public engagement or science communication.

But secondly and even more significantly here, our robot bloggers bound together the whole workshop in an altogether unexpected and enriching way.

Saturday, January 26, 2008

Light-speed, Gravitation and Quantum Instantaneity

Or, Pope's Dangerous Idea.

One of the most important principles of science is attributed to the 14th century philosopher William of Ockham and known as Ockham’s Razor. It is the methodological principle of ontological parsimony: when presented with alternative explanations always opt for the simplest, the one with the fewest possible causes, assumptions or variables.

This book is an appeal to Ockham’s Razor, for it offers a much more economical account of special and general relativity than is present in the traditional development of relativity theory. In fact, the extent of the simplification of relativity offered by Anthony Osborne and N. Vivian Pope is such that a theory which could hitherto be fully appreciated only by those with advanced university-level mathematical training, can now be understood with little more than high school maths. But, in this book, simplification is not the same as dumbing down. Osborne and Pope present a rigorous, scholarly and philosophically coherent re-appraisal of the fundamental tenets of both Newtonian and Einsteinian physics and some of the consequences of that physics to both quantum physics and cosmology.

That re-appraisal has consequences that go far beyond an incremental re-working or adjustment of existing results. If the theory presented in this book turns out to be correct it would presage a paradigm shift in physics that would not only challenge the academic establishment but also change the way that ordinary people think about the material world. Let me give an example. Newton's first law of motion states that every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. In other words, the 'natural' state of motion is in a straight line. Osborne and Pope propose an alternative first law of motion: the natural (force-less) state of motion is orbital, i.e. bodies continue to orbit unless an external force is applied. Now the Universe is full of orbital motion. From the micro-scale - electrons in orbit around nuclei - to the macro-scale - moons around planets, planets around stars, rotating galaxies etc. If this alternative first law is true, it would mean that we don't need to propose a force of gravitational attraction to account for orbital motion. This is compelling not least because it leads to a simpler and more elegant explanation. It would also explain why - despite vast effort and millions of dollars worth of research - no empirical evidence (gravity waves or gravity particles) has yet been found for how gravity propagates or acts at-a-distance. A common-sense objection to this idea is "well if there's no such thing as gravity what is it that sticks us to the surface of the earth - why don't we just float off?". The answer, according to Osborne and Pope, is that the natural (force-less) orbital radius for you (given the mass of your body) is quite a long way towards the centre of the earth from where you now sit. So there is a force that means that you weigh something; it's just not a mysterious force of gravity but the real force exerted by the thing that restrains you from orbiting freely, i.e. the ground under your feet.

The world is, of course, full of alternative theories. One reason that this one deserves to be taken seriously is that it is experimentally testable. That experiment is conceptually simple: take a spinning mass; spinning one way it should weigh a little more than when at rest, spinning in the opposite direction it should weigh a little less. The problem is not conceptual but practical: the weight change predicted by Osborne and Pope is small (about one hundredth of a milligram for a 175 gram disk spinning at 18,000 revolutions per minute), thus the experimental apparatus would need to be both very strong to cope with the energy of the spinning disk and very sensitive to be able to accurately measure the weight change. The experiment is certainly difficult, but not impossible with today’s technology. What is important here is that Osborne and Pope are not asking for the ideas presented in this book to be taken on faith. The Pope Osborne Angular Momentum Synthesis can be proven, one way or the other.
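For a sense of scale, the figures quoted above imply a balance sensitive to a few parts in a hundred million. This is just arithmetic on the numbers in the paragraph, nothing more:

```python
# Figures as quoted: a 175 gram disk, with a predicted weight change
# of one hundredth of a milligram.
disk_mass_g = 175.0
predicted_change_g = 0.01 / 1000.0   # 0.01 mg expressed in grams

# The fractional sensitivity the apparatus would need to resolve.
fractional_change = predicted_change_g / disk_mass_g
print(f"required relative sensitivity: {fractional_change:.1e}")
```

That works out at roughly 6 parts in 10^8, which makes concrete why the experiment is difficult but - with modern precision balances - not obviously impossible.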

This book deserves to be read and it deserves to provoke controversy. This work is not popular science. It is the culmination of a 20 year collaboration between Osborne and Pope, and a 50 year intellectual journey by N Vivian Pope (quite literally, a life’s work). To anyone who cares about the truth in physics, I commend this book.

---------------------------------------------------------
This is my foreword to this book, launched today at Borders bookshop, Swansea. The ideas in this book are significant and heretical; to paraphrase Daniel Dennett, these are Dangerous Ideas.

Thursday, January 24, 2008

Ecobots at the Dana

Great evening at the Dana Centre Tuesday. A public debate/discussion called Techno Bodies - Hybrid Life? "What counts as a hybrid life form and how it affects you."

My contribution was to lead a discussion about the lab's Ecobots - starting with the Slugbot, then Ecobots I and II (famously fly-eating robots powered by Microbial Fuel Cells), a little about what's next - Ecobot III - and to speculate about possible future applications for these hybrid-life robots. Hybrid life..? I'd never thought of these robots as hybrid life before but I guess the marrying of bio-chemical processes with plastic, metal and silicon does blur the boundaries.

Here's a picture of the Ecobot II, built by colleagues Ioannis Ieropoulos, Ian Horsfield, Chris Melhuish and John Greenman. Remarkably this robot runs for about 12 days on just 8 dead flies.

I was going to blog here about the various interesting questions that came up in discussion, but I don't need to because SciencePunk has done it already, so go here for a great summary.

Friday, January 04, 2008

What I've changed my mind about and Why

Every New Year the Edge poses a question and publishes the responses from among the world's intelligentsia. It's fun to read the responses but even more fun to write your own.

This year's question is What have you changed your mind about? Why? With the rather important sub-text: science is based on evidence. What happens when the data change? How have scientific arguments or findings changed your mind?

The sub-text makes all the difference. Without that we can all think of things we've changed our minds about. But how often are we faced with major paradigm shifts from one set of views to a completely different set, based on scientific evidence, and quite possibly in the face of our own prior beliefs or prejudices? Rarely, I'll bet. Most scientists tend to be educated within a particular tradition and then remain within its established dogma throughout their working lives.

Without doubt the most profound scientific change of mind I have experienced (which took many years) is in relation to Newton's first law of motion, which I now believe to be wrong. However, I discussed this two years ago, in response to the Edge 2006 New Year question, so I won't repeat it here.

So, what else have I changed my mind about?

Well, in November 2007 I was lucky enough to be invited to a University of Bristol workshop on Mathematical Models of Cognitive Behaviour. The assembled speakers covered a broad spectrum of topics, but one in particular really caught my imagination. Dennis Bray's brilliant and illuminating talk introduced the computational processes going on inside the single-celled E. coli. I learned, to my astonishment, that single-celled micro-organisms have a complex repertoire of behaviours including memory and adaptation (in addition to finding food and reproduction). Behaviours that one normally associates with far more complex multi-celled organisms with nervous systems. Yet these are creatures with no nervous system. But, as I learned, an exquisitely complex set of molecular computational processes are present within the dense chemical soup (cytoplasm) encapsulated within the cell. Dennis explained that for E. coli, for instance, we now have a kind of circuit diagram of the complete set of computational processes. But it's far from simple!

Before Dennis' talk I was fond of saying that the robots in our lab are simpler than the simplest animals. As a result of what I learned from his talk, I've now seriously downgraded my assessment:

Our robots are simpler than the simplest single-celled micro-organisms.

Wednesday, December 26, 2007

Human consciousness could be immortal

Our subjective experience of the 'continuity' of consciousness is surely an illusion. But what makes that illusion and why is it so compelling? That's a deep question but here are, I think, two fundamental reasons:

1. Embodiment. You are an embodied intelligence. It is a mistake to think of mind and body as somehow separate. Our conscious experience and its awakening as a developing child is surely deeply rooted in our physical experience of the world as mediated through our senses.

2. Environmental continuity. Our experience of the world changes 'relatively' slowly. The word relative is important here since I mean relative to the rate at which our conscious experience updates itself. Of course we do experience the discontinuity of going to sleep then waking to the changed world of a new sunrise, but this is both deeply familiar and predictable.

A word that encompasses both of these is situatedness. Our intelligence, and hence also conscious experience is inextricably situated in our bodies and in the world. Let me illustrate what this might mean with a thought experiment.

Imagine a brain transplant. Your brain complete with its memories and life's experience, together with as much of your central nervous system as might be needed for it to function properly were to be transplanted into a different body. You would wake from the procedure into this new body. I strongly suspect that you would experience a profound and traumatic discontinuity of consciousness and, well, go mad. Indeed it's entirely possible that you simply couldn't (and perhaps mercifully) regain or experience any sort of consciousness at all. Why? Because the conditions for the emergence of consciousness and the illusion of its continuity have been irreversibly broken.

However, if what I have said above is true, there's a flip side to the story that could have extraordinary consequences.

If the continuity of consciousness is an illusion then, in principle at least, it might be possible to artificially perpetuate that illusion.

Imagine that at some future time we have a sufficiently deep understanding of the human brain that we can scan its internal structures for memories, acquired skills, and all of those (at present dimly understood) attributes that make you you. It's surely safe to assume that if we're able to decode a brain in this way, then we would also be able to scan the body structures (dynamics, musculature and deep wiring of the nervous system). It would then be a simple matter to scan, at or just before the point of death, and transfer those structures into a virtual avatar within a virtual world. The simulated brain structures would be 'wired' to the avatar's virtual body in exactly the same way as the real brain was wired to its real body, thus satisfying the requirement for embodiment. If the virtual world is also a high fidelity replica of the real world then we would also satisfy the second requirement, environmental continuity.

I would argue that, under these circumstances, the illusion of the continuity of consciousness could be maintained. Thus, on dying you would awake (in e-heaven), almost as if nothing had happened. Except, of course, that you could be greeted by the avatars of your dead relatives. Even better, because e-heaven is just a virtual environment in the real-world, then you could just as easily be visited by your living friends and relatives. Could this be the retirement home of the far future?

In this way human consciousness could, I believe, be immortal.

Monday, December 24, 2007

My love hate relationship with an Xbox 360

Ok, I admit it. I have an Xbox 360.

Of course I'm hopeless at playing video games, which is perhaps just as well because it means that I don't spend long playing (or wasting time, depending on your point of view). Just as long as it takes for me to get frustrated by how useless I am and give up. (But it's ok because my eldest son levels me up when he comes to stay;-).

However, as a piece of hardware it's awesome. Liquid cooled, with three processor cores plus a graphics processing unit. As someone who worked with the very first 8-bit microprocessors over 30 years ago (see my post By, you were lucky...) and with pretty much every generation since, I ought to be inured to the now expected doubling of performance roughly every two years (Moore's Law). But I'm not. I still easily get awestruck by next generation hardware and its capability. I'm equally impressed by the Xbox 360's operating system: its so-called 'blade' interface is intuitive and a pleasure to use.

Of course this blog is really just an excuse for me to post a screen capture from the stunning Project Gotham Racing 4 of me driving (in my dreams) an Enzo Ferrari down The Mall, in the rain.

Saturday, December 08, 2007

The future doesn't just happen, we must own it

Had a remarkable couple of days last week. Walking with Robots (WWR), the Royal Academy of Engineering (RAE) and the London Engineering Project jointly ran a pilot Young Peoples' Visions Conference. The aim of the 2 day event was to give an opportunity for around 20 young adults (age 16-18) to "explore visions of their future and the part robots will play" in that future.

For me the event was important because it takes Walking with Robots into a new form of dialogue. At WWR events and activities over the past year I have been privileged to engage with many children, teenagers and adults, and I increasingly have a sense that our children, in particular, believe that the future is nothing to do with them. That technology is something that happens, or is imposed from outside, and that they are merely passive consumers or targets for that technology. In this conference we really tried to change that view. It was wonderful to see the group change from - at the beginning - seeing this as 2 days out of college to - in the end - all being deeply engaged in the dialogue. I got a real sense of them feeling empowered and that their considered opinions are important - not least because those opinions will be published by the RAE and sent to policymakers. In other words that they do have a say in their own technological futures.

Friday, October 26, 2007

A Mac with new spots: installing Leopard

A day off sick with a head cold and painful sinuses had one consolation. I had a little time this afternoon to install Leopard - the new version of Mac OS X (which interestingly arrived this morning before it had been officially launched at 6.00pm this evening).

How did the installation go? Well I'm happy to report that it was remarkable for being unremarkable. Just two minor comments: firstly, there was a very long pause (5 minutes perhaps) at the start of the install process proper, when the time remaining said 'calculating' and there was no apparent hard disk or DVD activity - I was beginning to have doubts about whether all was ok before the process sprang into life again (note to Apple: any kind of long and worrying pause like this really should still have some sort of progress indicator no matter how simple). Secondly, the time remaining calculation appeared to have difficulty making its mind up. Initially it said something over three and a half hours and then revised its estimate downwards over the next 30 minutes or so. In the end it took about an hour and a half from start to finish, and that included a long time for install disk verification.

First impression? Well it's fine. It's an operating system, which means - in my view - it should not be the main event but just get on and do its thing in the background while letting me get on with my work. It looks very nice of course, but so did Tiger. Cosmetically not such a big difference, especially for me since I place the dock on the left rather than at the bottom. (Ergonomically it makes more sense there because a left mouse movement to reach the dock is far easier than a down hand movement.)

The main new feature that I am immediately and gratefully using is called 'spaces'. It is basically the same thing that Linux window managers have had for years - which I have missed since switching (back) to Mac - that means I can open applications across four virtual screens and then quickly switch between them. This is great for me because when writing I like to have Firefox open for web searches, OpenOffice for drawing diagrams, Preview to read pdf papers, BibDesk and TeXShop for the actual writing. A single screen gets pretty crowded. (Of course what I'd really like is a bank of LCD displays so I can see everything at once but - for now - spaces will have to do.)

What else? Well the ability to instantly search and then - again with almost no delay - view the search results with 'cover flow' and then use 'quick look' to review what you find in more detail is terrific. The way that quick look opens everything from powerpoint presentations to movies and allows you to skip through the files with the left and right arrow keys but also scroll up and down individual files is just great. For the first time in 33 years of using computers I really think I don't need to remember filenames anymore. Given that this is still a good old fashioned traditional Unix file system underneath, Leopard is probably as close as you can get to feeling like an associative 'contents addressable' file system.

----------------------------------------------------------
*Footnote: I returned to Mac earlier this year after a 20 year separation. The first computers at APD (that we didn't design and build ourselves) were 128K Macs in 1985. Lovely machines with a proper windowing OS (while the PC was still running DOS) that were used for everything from word processing and accounts to technical drawing.

Sunday, October 07, 2007

You really need to know what your bot(s) are thinking (about you)

The projected ubiquity of personal companion robots raises a range of interesting but also troubling questions.

There can be little doubt that an effective digital companion, whether embodied or not, will need to be both sensitive to the emotional state of its human partner and able to respond sensitively. It will, in other words, need artificial empathy - such a digital companion would (need to) behave as if it has feelings. One current project at the Bristol Robotics Laboratory is developing such a robot, which will of course need some theory of mind if it is to respond appropriately. Robots with feelings take us into new territory in human-machine interaction. We are of course used to temperamental machinery and many of us are machine junkies. We are addicted to our cars and dishwashers, our mobile phones and iPods. But what worries me about a machine with feelings (and frankly it doesn’t matter whether it really has feelings or not) is how it will change the way humans feel about the machine.

Human beings will develop genuine emotional attachments to companion bots. Recall Weizenbaum’s secretary’s sensitivity about her private conversation with ELIZA - arguably the world's first chat-bot - in the 1960s. For more recent evidence look no further than the AIBO pet owners' clubs. Here is a true story from one such club to illustrate how blurred the line between pet and robot has already become. One AIBO owner complained that her robot pet kept waking her at night with its barking. She would “jump out of bed and try to calm the robo-pet, stroking its smooth metallic-gray back to ease it back to sleep”. She was saved from “going crazy” when it was suggested that she switch the dog off at night to prevent its barking.

It is inevitable that people will develop emotional attachments to, even dependencies on, companion bots. This, of course, has consequences. But what interests me is what happens if the bots acquire a shared cultural experience. Another BRL project called ‘the emergence of artificial culture in robot societies’ is investigating this possibility. Consider this scenario. Your home has a number of companion bots. Some may be embodied, others not. It is inevitable that they will be connected, networked via your home wireless LAN, and thus able to chat with each other at the speed of light. This will of course bring some benefits - the companion bots will be able to alert each other to your needs: “she’s home”, or “he needs help with getting out of the bath”. But what if your bots start talking about you?

Herein lies the problem that I wish to discuss. The bots' shared culture will be quintessentially alien - in effect an exo-culture (and I don’t mean that to sound sinister). Bot culture could well be inscrutable to humans, which means that when bots start gossiping with each other about you, you will have absolutely no idea what they’re talking about because - unlike them - you have no theory of mind for your digital companions.

--------------------------------------------------------------
This is a short 'position statement' prepared for e-horizons forum Artificial Companions in Society: Perspectives on the Present and Future, 25th and 26th October, Oxford Internet Institute.

Sunday, September 16, 2007

A night train to Lisbon*

Fed up with airports and - perhaps rather pathetically trying to do my bit for climate change - I'm taking the train from Bristol to Lisbon. In planning the trip I quickly discovered that taking the planet friendly option is neither quick, cheap, nor particularly easy to organise. Of course I didn’t expect it to be quick and anyway part of the attraction was to actually see some countryside en route. Nor did I especially expect it to be cheap - around 29 hours of train travel across four countries including a sleeper is never going to be able to compete with a point to point budget (or even regular) airline. As for organisation, amazingly I found a great web-page dedicated to the business of getting from the UK to Lisbon by train.

There are a couple of options but the one I chose (and the shortest in travelling time) was Eurostar to Paris, TGV to Irun, then the sleeper ‘Sud Express’ from Irun to Lisbon. So, an early start from Bristol to get the 06.56 Bristol Parkway to Paddington, to be in good time for the 10.10 Eurostar to Paris Nord. Not my first time on Eurostar but, I’m ashamed to say, my first time in Paris, so I got a cab from Gare du Nord to Gare Montparnasse, to get at least a glimpse of the city. The taxi route crossed the Seine and passed the Pyramide du Louvre and the Arc de Triomphe. Wonderful. Mental note to self: must come here for a proper visit. Then had less than an hour to wait for the TGV. No time to do anything but wait in an especially dismal café on the station concourse - with amazingly aggressive sparrows trying to fight me for my tarte aux pommes. Good coffee though.

The 15.50 TGV from Paris to Irun was packed and - I must confess - slightly disappointing. Not in speed or timeliness - which couldn’t be faulted - but I was somewhat nonplussed to find there was no restaurant car. For some reason (probably a romantic impression gleaned from too many European train movies - Poirot and the like), I expected that on a five and a half hour journey I would be able to get dinner. Oh well, at least the buffet was a significant improvement on English trains, and the French seem to prefer to stay in the buffet car to consume their sandwich and coffee, which I rather liked. The loos were also pretty inadequate for the number of people on the train and the length of the journey. There’s nothing quite like the dismal experience of putting a generous gloop of soap on your hands to then discover that the water has run out and there are no hand towels. Still - pretty impressive to find myself looking out of the window on the Atlantic at Biarritz a little over 5 hours after leaving Paris.

I did notice that the French have not banned smoking from stations - only trains. Thus, on every approach to a station a handful of smokers would gather by the carriage doors ready to leap onto the platform for a tobacco fix in the few brief minutes of the halt. I don’t think I ever saw anyone drag so deeply or gratefully on his kingsize Gitanes as one fellow.

Resumed 11.30pm. The romance of long distance trains fully restored on discovering a restaurant car on the Sud Express. The slick air-con and Starbucks-alike chic of the TGV buffet is replaced by 1960s wood-effect plastic and a no-nonsense bar of the sort you can prop yourself up against. And I did. No namby-pamby air conditioning here. The only way to reduce the temperature to anything bearable is to open all the windows which means the noise level in the restaurant car is deafening. Which is fine really because it helps to mask the fact that I don’t speak a word of either Spanish or Portuguese, and the waiter doesn’t speak more than a word or two of English. There were only three of us eating, me and a Spanish looking couple, his back to me and her huge beautiful brown all-seeing-all-knowing-female-wise-eyes glancing at me from time to time. After a while I realised that the reason there were so few of us in the restaurant was that the doors at the other end of the carriage required superhuman strength to open - eventually a few did manage the necessary feat of strength and the bar achieved something like a friendly buzz. I think there must be something very strange about me that I could so much enjoy this solitary repast while clattering through Spain in the middle of the night.

The following morning. The night seemed too hot for anything like proper sleep, on top of which the sleeping car was right behind the engine, which sounded to me like the oldest diesel locomotive they could possibly muster. However, the rhythm of the train did get the better of the noise and I awoke surprisingly refreshed. A leisurely breakfast in the restaurant car and in what seemed no time at all we were pulling into Lisbon's Santa Apolónia station exactly on time at 11.03am. Great. No queuing at passport control or the baggage claim. Just step off the train into the city. A civilised way to arrive.

-----------------------------------------
*With apologies to Emily Grayson

Friday, August 17, 2007

A truly Grand Challenge

Between 13th and 16th August thirteen teams descended on Monte Ceneri in Ticino, Switzerland, for C-ELROB - the Civilian European Land Robotics Trials. Simply arriving was, in fact, something of a trial for the teams; given the physical size of many of the robots and the amount of supporting equipment and tools, air travel was clearly impossible. Teams therefore drove from the four corners of continental Europe, from as far apart as Portugal, Poland and Finland. For the Finnish lads this meant an epic 3,500km road trip.

The event was staged on and around the Monte Ceneri Swiss army base in the beautiful forested mountains above Lake Maggiore. The base provided a perfect environment for the four outdoor robotics challenges; all challenges were based upon the same basic task: as-far-as-possible autonomous search and location of number-plate sized orange markers including reading the letters and numbers on the marker plates (an abstracted search and rescue task). The winner of each challenge would be the robot which found and read the most marker plates - in the shortest time - with the least human intervention. (The rules allowed for human tele-operation of robots, but this carried a points penalty over fully autonomous operation.)

I was privileged to be able to closely observe the competition, as one of the small team of judges. Although closely involved in robotics research for 15 years this was my first up-close-and-personal experience of outdoor robotics, and my first impression was the sheer difficulty of the physical terrain: robots facing the non-urban challenges in particular had to be able to move and navigate through hilly forest and undergrowth, across rutted and muddy trial paths and, on day two, cope with heavy rain as well. The ‘urban’ challenge of day three was perhaps marginally easier for the robots, but still very tough with steps, ramps and indoor as well as outdoor aspects to negotiate.

Some teams were clearly well financed with, for instance, impressive 4x4 vehicles fitted with state-of-the-art high-performance sensors, whereas other teams brought robots that had clearly been hand-built by the students themselves, on a limited budget. My observation was that the well financed teams did not have a significant advantage. The nature of the challenges was such that - provided the robot could cope with the physical demands of the trials - it was the quality and ingenuity of the autonomous control strategies that was being tested.

All four challenges were demanding, but the ultimate trial on day four was undoubtedly the most testing of all. Robots were required to drive around a 3km forest road while spotting, reading and recording the location of the orange markers, with the optional addition of a 2km off-road section. Although tele-operation was allowed within the rules (albeit with a penalty), the hilly terrain meant that maintaining a radio signal between base and robot was more or less impossible. This challenge required no less than full autonomy which - given the difficulty of the mountain roads including no road markings, no clear road edges, deep shadows caused by overhanging trees and hazards like steep drops and hairpin bends - amounted to a significantly harder test than the much-feted DARPA Grand Challenge. Remarkably, the winner of this event - the unassuming University of Hanover robot shown here, not much bigger than a wheelchair - completed the course in a little over 40 minutes.

Quite apart from the demonstration of the state-of-the-art in outdoor robotics, this event was an extraordinarily valuable experience for the teams who entered. Most of the student teams worked around the clock in their makeshift mini-workshops of clustered laptops to test and refine their robots, right up to the last minute (one lead professor remarked to me that all he had to do was keep his team supplied with pizza). Modern robots are very complex machines which must closely integrate mechanical and propulsion systems; power and energy management; sensors, vision and navigation; wireless communications and command and control software. The key to success here is systems integration and that integration was reflected not only in the engineering but in the very effective teamwork that was much in evidence at C-ELROB. That teamwork, together with the sense of healthy rivalry between teams, contributed to a remarkably successful event.

The full list of teams, and pictures of their robots are here.

If you can read German (or if not check out the pictures anyway), here is a detailed account of each of the 4 days of the event by science journalist Hans-Arthur Marsiske:
Day 1: Combined air and land trial
Day 2: Off-road (woodland) trial
Day 3: Urban marketplace search trial
Day 4: Autonomous reconnaissance

Sunday, July 08, 2007

Could a robot have feelings?

One of the great pleasures of giving public lectures is the questions that come from the audience, and my talk last week to the Bath U3A group was no different. A great question (and one I've been challenged with before) was "could a robot have feelings - emotions like fear, sadness or love?"

My questioner (and I guess most people) would be hoping for a comforting answer: No, only people and maybe some animals (your dog) can have feelings. In one sense that answer would be correct, because if we define feelings simply and subjectively as 'the feelings we experience when we are afraid, sad, or in love', such a definition would make those emotions uniquely human and something robots could not have by definition.

For the sake of argument let's get over that by asking a slightly different question: could a robot have artificial emotions that - in some way - are an analogue of human feelings? At this point I would normally say to my questioner, well, yes, I can conceive of a robot that behaves as if it is experiencing emotions. A humanoid robot, in other words, that acts as if it were afraid, or sad. The better the technology, the more convincing and subtle the (artificial) body language would be. Actually we don't have to try very hard to imagine such a robot - colleagues in the lab are working on robot (heads) with artificial empathy. Here's a picture.

At this point my questioner says "ah, but that robot doesn't really have feelings. It's just pretending to have feelings".

True! But don't human beings do that all the time? Isn't it the case, in fact, that we value the ability to pretend to express feelings very highly, providing it's the honest deception of the actor's trade? We also value people who can make us feel in different ways: musicians, artists, poets or storytellers. Ok, I admit I'm being a little tricky here, but the point I'm trying to make is that humans can be very good at pretending to have feelings or at deceiving others into having feelings, and some of us (me included) have a hard time telling the difference.

Back to robots though. Yes, a robot that is programmed to behave as if it has feelings cannot, I grant, be said to actually have feelings. At best that robot is a good actor. This might seem to be the end of the issue but it's not, because the possibility of a robot that is very good at behaving as if it has feelings raises some pretty interesting issues. Why? Because in human robot interaction such a robot could have quite some power over the human. What interests me here is how such a robot would make humans feel towards it. Could a human, for instance, fall in love with a deeply (albeit artificially) empathic robot? I suspect the answer is yes.

This is a subject I'll return to in future blogs: not only the social implications of robots with artificial feelings, but also the question of whether robots could be designed that really do have feelings.

Friday, May 11, 2007

Anthropomorphism and robotics

Spent the last two days at a really interesting workshop: Practices of Anthropomorphism: from Automata to Robotics. The best thing about the meeting was the mix of disciplines, including anthropology, psychology, art and robotics. Terrific. The key to creativity, IMHO, is working with people outside your own discipline. It can be tough, mind you, outside the comfort zone of familiar ideas and frameworks. But that's precisely the point. Recall Koestler's brilliant "The Act of Creation" - creativity happens when two previously disconnected ideas intersect, meaningfully. (Or, humour, if the intersection is absurd.)

So, what were the creative outcomes of the workshop? Well, there were plenty of "ah ha!" moments (as well as lots of "ha ha"). For me the big insight was the realisation (obvious I guess to the anthropologists) that human beings are so utterly pre-disposed, hard-wired even, to anthropomorphise. Whether we like it or not we are, it seems, anthropomorphiliacs with a compulsion to attribute anthropic qualities to, and develop emotional attachments with, animals or artefacts - almost anything from the everyday to the exotic, the banal to the sublime. This is very interesting to a roboticist on a number of levels. For example, even simple robots are imbued with characteristics they don't have (cute, inquisitive, happy, ill) and at the other end of the robot spectrum humans will anthropomorphise and thereby compensate for the shortcomings of humanoid or android robots. In other words robot builders don't need to worry about making android robots into perfect artificial humans - cartoon robots will do - like NEC's PaPeRo shown here (thus also neatly avoiding the Uncanny Valley). You may think this is just another fraud, like de Vaucanson's defecating duck. And, in a way, it is. But as long as we, the roboticists, are completely honest and transparent about the real capabilities of our robots - what they can but especially what they cannot do - then it is a fraudulent contract that humans and robots can willingly and beneficially submit to.

This brings me to another question. Why are we so fascinated by robots? I think the answer lies in another surprising innate ability of humans. That is our ability to tell the difference between animate and inanimate. I think we love robots because they behave as if they are animate, yet we know them to be inanimate artefacts. We are, I believe, delighted by this deception.

A final thought: if, as Gabriella Airenti brilliantly argued at the workshop, anthropomorphism is an (inevitable) consequence of imitation and theory of mind, then it's surely not inconceivable that future intelligent robots might also develop this tendency. Except that, for them, it would be robomorphism.

Sunday, April 15, 2007

Walterian Creatures

In Daniel Dennett's remarkable book "Darwin's Dangerous Idea" he describes the Tower of Generate-and-Test; a brilliant conceptual model for the evolution of intelligence that has become known as Dennett's Tower. I propose here another storey to the Tower, for what I want to call Walterian Creatures, after the pioneering neurophysiologist W. Grey Walter, inventor of the world's first electro-mechanical autonomous mobile robot.

In a nutshell Dennett's tower is a set of conceptual creatures, each one of which is successively more capable of reacting to (and hence surviving in) the world through having more sophisticated strategies for 'generating and testing' hypotheses about how to react. Read chapter 13 of Darwin's Dangerous Idea for the full account, but there are some reasonable précis to be found on the web; here's one thorough description. But for now here's my very brief outline of the storeys of Dennett's tower, starting on the ground floor:
  • Darwinian creatures have only natural selection as the generate and test mechanism, so mutation and selection is the only way that Darwinian creatures can adapt - individuals cannot.
  • Skinnerian creatures can learn but only by literally generating and testing all different possible actions then reinforcing the successful behaviour (which is ok providing you don't get eaten while testing a bad course of action).
  • Popperian creatures have the additional ability to internalise the possible actions so that some (the bad ones) are discarded before they are tried out for real.
  • Gregorian creatures are tool makers including - importantly - mind tools like language, which means that individuals no longer have to generate and test all possible hypotheses since others have done so already and can pass on that knowledge.
  • Scientific creatures. Here Dennett proposes that a particular way of rigorously, collectively and publicly testing hypotheses - namely the scientific method - is sufficiently powerful and distinct to merit a further floor of the tower. (I'm not sure that I agree, however that isn't important to the point I'm trying to make in this blog.)

Like the Tower of Hanoi, each successive storey is a smaller sub-set of the storey below: all Skinnerian creatures are Darwinian, but only a sub-set of Darwinian creatures are Skinnerian, and so on.
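The difference between the lower storeys can be made concrete with a toy simulation. In this sketch (the actions, payoffs and learning rule are entirely my own invention, purely for illustration) a Skinnerian creature must risk testing every action in the real world and reinforce what worked, while a Popperian creature tests its hypotheses against an internal model first, so bad ideas die instead of the creature:

```python
import random

ACTIONS = ["approach", "flee", "freeze", "forage"]

# A hypothetical world: each action has a hidden payoff; negative means danger.
PAYOFF = {"approach": -5, "flee": 1, "freeze": 0, "forage": 3}

def skinnerian(trials=100):
    """Learn by trying actions for real and reinforcing successful behaviour."""
    weights = {a: 1.0 for a in ACTIONS}
    score = 0
    for _ in range(trials):
        action = random.choices(ACTIONS, weights=[weights[a] for a in ACTIONS])[0]
        reward = PAYOFF[action]  # tested in the real world - risky!
        score += reward
        weights[action] = max(0.1, weights[action] + reward)  # reinforcement
    return score

def popperian(trials=100):
    """Internalise the test: discard bad actions in an internal model first."""
    model = dict(PAYOFF)  # an (idealised) internal model of the world
    best = max(ACTIONS, key=model.get)
    return PAYOFF[best] * trials
```

The Skinnerian learner eventually favours 'forage' but pays for every dangerous trial along the way; the Popperian, with a good enough model, never pays at all. That gap is the whole point of moving up a storey.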

Gregorian creatures (after Richard Gregory) are tool makers, of both physical tools (like scissors) and mind-tools (like language and mathematics), and Dennett suggests that these tools are 'intelligence amplifiers'. Certainly they give Gregorian creatures a significant advantage over merely Popperian creatures, because they have the benefit of the shared experience of others, expressed either through using the tools they have made or refined or, more directly, through their knowledge or instructions as spoken or written. Arguably the most powerful intelligence amplifier so far created by one particular species of Gregorian-Scientific creature - man - is the computer, for with it we are able to simulate almost any reality we can imagine. Simulation is potent stuff: gedanken experiments are no longer doomed to remain flights of fancy and mathematical models need no longer remain dry abstractions. And one of the most remarkable kinds of computer simulation is of intelligence itself: Artificial Intelligence.

What if the tools made by Gregorian creatures take on a life of their own and become, in a sense, independent of the tool-makers? Embodied AI (= Artificial Life) has this potential. Walterian creatures are, I propose, smart tools that have learned to think, grown up and left the toolbox. Think of future intelligent robots (far more capable than the crude prototypes we can currently build) that might co-exist with humans in an extraordinary and fulfilling symbiosis.

The defining characteristic of Walterian creatures is that they are artificial. They've not only left the toolbox but crawled out of the gene pool. No longer bound by the common biochemistry of Earth's biota, yet sharing both the inheritance and evolutionary (albeit artificial) processes of their Darwinian ancestors. So what does this mean for Walterian creatures? Well, all of a Walterian's ancestors share the fact that, however simple or sophisticated their strategies for hypothesising about possible actions, those actions have to be undertaken by the self-same physical creatures that do the hypothesising. Ok, Gregorian-Scientific creatures can augment themselves with magnificent tools for compensating for their own sensory or physical limitations, like electron microscopes, submarines or manned spacecraft, or remotely operated robot space probes that act as sense extenders, but one thing Gregorian individuals cannot do is evolve themselves as part of the generate and test process. Consider this scenario. A future intelligent autonomous robot is exploring a planet about which very little is known. As part of its generate and test strategy this Walterian can, in simulation, fast-forward artificial genetic algorithms to evolve its own physical capabilities and then re-build parts of itself on-the-fly to best deal with the situation it has encountered. It could, for instance, artificially evolve and re-engineer itself the means to make best use of whatever energy sources are to hand. (It would be like you or I falling into a river and being able to artificially evolve and grow gills in less time than it takes to drown.)
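The 'fast-forward in simulation' step in this scenario is simply the generate-and-test loop of a genetic algorithm, run against an internal model rather than the world. A minimal sketch (the single 'gene', the fitness function and all the numbers here are invented purely for illustration) of evolving one design parameter towards a target value discovered in the environment:

```python
import random

def fitness(gene):
    # Hypothetical simulated payoff: how well a design parameter (say, the
    # size of an energy collector) matches a target found on the planet.
    return -abs(gene - 0.73)

def evolve(pop_size=30, generations=60, mutation=0.05):
    """Evolve a population of candidate designs entirely in simulation."""
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Test: rank candidate designs by simulated fitness, keep the best half.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Generate: offspring are mutated copies of the survivors.
        children = [g + random.gauss(0, mutation) for g in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Only after this simulated evolution has converged would the robot commit to physically re-building itself - the Popperian trick of letting hypotheses die in its stead, applied to its own body plan.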

Walterian creatures are, like Gregorians, able to share tools, knowledge and experience. They will be fully interconnected, so that any individual - subject only to the physical delays of the networking technology - can instantly seek information or resources from the shared Walterian artificial culture. However, unlike Gregorians, these individuals are capable of Lamarckian learning. Need a skill fast? If you’re a Walterian creature then, providing at least one other individual has already learned the skill and is either online or has previously uploaded that skill, you simply download it. Walterian creatures would surely be profoundly different - and perhaps unimaginable by our merely Gregorian kinds of minds.

Sunday, March 25, 2007

The Mozart meme

Heard some great lectures yesterday evening at the Bath Royal Literary and Scientific Institution. John Sloboda gave a wonderful talk about the psychology of music, and in particular what makes the difference between a musical genius and the rest of us. He showed, with some pretty compelling evidence, that firstly you need to start learning an instrument very young, and secondly you need around 10 years of averaging 2 hours per day of practice - but not just any old practice, it has to be strongly supported and guided - before then starting composition. Are musical geniuses born or made? John offered the view that it's much more down to nurture than nature. He suggested that genetics may account for musical deficits (such as the very small number of people who are genuinely tone-deaf) but probably not musical genius.

John told a story so fascinating that I want to recount it here. A US research team ran IQ tests on two groups of adults: one immediately after listening to Mozart, the other (the control group) without. The Mozart group showed higher IQ scores than the control group. Now that's interesting enough, but it's what happened subsequently, outside the research lab, that is really quite shocking. John recounted that a journalist reported this finding as "The Mozart Effect", and parents anxious to improve their youngsters' IQs started playing them Mozart, schools introduced the same practice, and in some US districts this became a requirement of the education authority. Pop-psych books were published and money was made. Google the Mozart effect and you'll see what I mean.

But does it work? No. John explained that the original study was done on adults, and subsequent work has shown that the same effect isn't apparent in children; even in adults the IQ-raising effect wears off after 10 minutes or so. But that's the power of a great meme. The idea is so attractive that as soon as it catches hold, the truth behind the idea becomes irrelevant. And of course Mozart already has almost mythical super-genius status, so the Mozart effect meme was already riding on a winner. Someone asked John if the same would have happened if the original study had used another composer's music. "Almost certainly not", he replied, "the Couperin effect doesn't have anything like the same ring to it!"

But the world is full of such memes. Some emerge from flaky science, others from a flawed interpretation of otherwise good science. A particular hobby horse of mine is "The Big Bang". Popular culture regards the Big Bang as an established fact. But it isn't. There are two competing theories for the origin of the Universe: one is the Big Bang theory, the other is the steady-state theory. The problem with the steady-state theory is that it's just dull. Where's the excitement in the idea that the Universe has always existed? Like the Mozart effect, the Big Bang theory feeds a need. Finite creatures that we are, we like the idea that the Universe has a birth and a death. And if you believe in God, even better. The steady-state theory is not good news for theists.

Memes really are powerful magic.

Wednesday, March 14, 2007

Homo dinosauroid

Last night's Horizon was promising: what might have happened if the asteroid that is generally agreed to have triggered the extinction event at the end of the Cretaceous period, 65 million years ago, had missed? This should be good, I thought. Interesting to speculate about how dinosaurs might have continued to evolve. What forms might they have evolved into by now..?

But the programme was spoiled by an unnecessary and scientifically dubious focus on the question "what would have happened if humans had co-evolved along with dinosaurs?".

Given the extraordinary success of the dinosaurs in exploiting ecological niches (as the programme pointed out), the likelihood that mammals would have evolved very much beyond the rodent-like animals (like Repenomamus) that managed to just about co-exist with dinosaurs must be vanishingly small. Clutching at straws, perhaps, the programme suggested that the tree-tops might have provided a dinosaur-free niche in which primates might have evolved, but failed to address the question of why dinosaurs would not have also moved into the same eco-space, especially with fresh mammalian meat to tempt them.

But for me the programme makers lost it completely with the suggestion that intelligent humanoid dinosaurs might have co-evolved with humans. Now I love thought experiments, but the idea that homo dinosauroid would now be peacefully sharing our 21st C. cafe culture is, frankly, insulting to dinosaurs. We were shown a rather meek and frightened-looking specimen (well, you would be too with no clothes on) - clearly 21st C. homo d. needs to get down to the gym.

Now I have no problem at all with the idea that dinosaur evolution, if it had not been rudely interrupted by the Chicxulub asteroid, might have resulted in highly intelligent dinosaurs, language, culture and so on (especially given emerging evidence for gregarious behaviour in dinosaur groups). If the asteroid had missed, and (against the odds) primates and hominids had evolved alongside intelligent dinosaurs, the suggestion that the two lineages would have somehow co-evolved into a peaceful vision of Dinotopia is, well, just unbelievable*. Much more likely is that the dinosaurs would have been subject to another and equally lethal extinction event. Man.

--------------------------------------------------------------------
*I say this with the greatest respect for the wonderful books of James Gurney.


Thursday, March 01, 2007

"By, you were lucky..."

My friend, erstwhile mentor and visiting professor colleague Rod Goodman and I were reminiscing a few days ago about our first experiences (~1977) with the Intel 8080, which arrived on a circuit board with 1K bytes of RAM, a 1K byte EPROM and absolutely no software. We were having one of those conversations inspired by Monty Python's Four Yorkshiremen sketch (and thanks to Dave Snowden for this link from his excellent blog):

"When I were a lad, we only had 4K bytes of RAM and a hex keypad"

"Hex keypad! By, you were lucky. We only 'ad 1K of memory and had to key in t'boot loader by 'and in noughts and ones before we could even start work".

"Well you were lucky. We were so poor we could only afford noughts..." and so on.

But the truth is (and I realise how perilously close I am to becoming a grumpy old man parody here) that my fellow graduate students and I really did have to start from scratch and make all of our own development tools. I recall that we first had to write a cross-assembler, in Algol-68, on the university mainframe: an ICL 1904S. We took advantage of the fact that the mainframe was accessed by electro-mechanical 'teletypes' which were fitted, as standard, with paper-tape punches. We got hold of a paper-tape reader and interfaced it to the Intel 8080 development board (designing by hand the necessary interface electronics and device-driver code - remember this is long before 'plug and play'). Then we were able to write symbolic 8080 assembler on the mainframe, generate 8080 machine code on paper tape, and load that directly into the 8080 development board to test it. Of course the edit-test cycle was pretty long, and not helped by the fact that our lab was two floors from the mainframe terminals, so to speed things up we invested in a special device that allowed us to directly 'edit' the paper tape. The device allowed us to make extra holes and cover over - with a special kind of sticky tape - unwanted holes. Here's a picture of this marvellous device.
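For anyone who hasn't written one, the core of a cross-assembler is a two-pass algorithm: pass one walks the source recording the address of each label, pass two emits opcodes with the label addresses filled in. Here's a minimal sketch in Python (a luxury we certainly didn't have). The three opcodes are the genuine 8080 encodings as far as I recall, but everything else a real assembler needs - expressions, directives, the other 250-odd opcodes, error reporting - is omitted:

```python
# Toy two-pass assembler: pass 1 collects label addresses, pass 2 emits code.
# Tiny three-instruction subset for illustration only.
OPCODES = {"NOP": 0x00, "LDA": 0x3A, "JMP": 0xC3}
SIZES   = {"NOP": 1,    "LDA": 3,    "JMP": 3}

def assemble(lines):
    symbols, addr, parsed = {}, 0, []
    for line in lines:                        # pass 1: record label addresses
        line = line.split(";")[0].strip()     # strip comments and whitespace
        if not line:
            continue
        if line.endswith(":"):
            symbols[line[:-1]] = addr
            continue
        parts = line.split()
        parsed.append(parts)
        addr += SIZES[parts[0]]
    code = []
    for parts in parsed:                      # pass 2: emit machine code
        op = parts[0]
        code.append(OPCODES[op])
        if SIZES[op] == 3:                    # 16-bit operand, little-endian
            value = symbols.get(parts[1])
            if value is None:
                value = int(parts[1], 16)
            code += [value & 0xFF, value >> 8]
    return code

program = ["start:",
           "  LDA 0x2000  ; load accumulator from address 0x2000",
           "  NOP",
           "  JMP start   ; loop forever"]
print(assemble(program))
```

On the mainframe the output went to paper tape rather than a print statement, of course - but the label-resolving logic was essentially this.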

So, to anyone out there who grumbles about their software development tools I have only one thing to say. "You're lucky you are. When I were a lad..."

Friday, February 23, 2007

An e-puck outing

At a little over 5 cm tall, the e-pucks are remarkable little robots. Here is a picture from the web pages of the supplier, and all-round good people, at Cyberbotics. Our e-pucks got their first outing at the Brighton Science Festival's Big Science Sunday, on February 18th (and let me pay tribute to festival organiser Richard). A small pack of 4 or 5 e-pucks in a table-top arena proved to be a compelling attraction for kids of all ages. A great talking point that allowed us to pontificate about everything from ants and swarm intelligence to the future of robots in society. Here is a picture with my colleague Claire Rocks in mid-demonstration, showing part of the arena with two of the e-pucks contrasting with the old Linuxbot on the left. It's amazing to think that the Linuxbot was state-of-the-art technology just 10 years ago. The e-pucks, with sound (microphones and speaker), vision (camera and LEDs), Bluetooth radio, proximity sensors and an accelerometer, are astonishingly sensor-rich compared with the venerable Linuxbot and its generation.

Now the small size of the e-puck can be deceptive. A week or so before the Brighton gig I thought I would try and code up some new swarm behaviours for the robots. "Little robot - how hard can it be?", I thought to myself as I sat down to an evening's light coding. Boy, was I mistaken. Within the e-puck's densely packed motherboard is a complex system which belies its small size. The Microchip dsPIC microcontroller at the heart of the e-puck has come a long way from the reduced-instruction-set and reduced-everything-else 8-bit PIC I programmed with a few dozen lines of assembler for our early Bismark robot 10 years ago. And in the e-puck the microcontroller is surrounded by some pretty complex sub-systems, such as the sound i/o codec, the camera and the Bluetooth wireless. It's a complex system of systems. So, suitably humbled, I shall have to take some time to learn to program the e-puck*.
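To give a flavour of the kind of behaviour I was trying to code up, here is a minimal Braitenberg-style obstacle-avoidance controller, sketched in Python rather than dsPIC C. The sensor bearings are only roughly right, and the `avoid` function and its interface are invented stand-ins, not the real e-puck library:

```python
# Minimal Braitenberg-style avoidance: obstacles ahead-right push the robot
# left, and vice versa. Sensor geometry is approximate; the whole interface
# is an invented sketch, not the e-puck's actual API.

# Approximate bearings of the e-puck's 8 IR proximity sensors, in degrees
# (0 = dead ahead, positive = to the right of centre).
SENSOR_ANGLES = [15, 45, 90, 150, -150, -90, -45, -15]

def avoid(proximity, cruise=500):
    """Map 8 proximity readings (0 = clear, larger = closer) to wheel speeds."""
    left = right = cruise
    for angle, reading in zip(SENSOR_ANGLES, proximity):
        if abs(angle) >= 90:
            continue                  # ignore side- and rear-facing sensors
        if angle > 0:                 # obstacle ahead-right: veer left
            left, right = left - reading // 4, right + reading // 4
        else:                         # obstacle ahead-left: veer right
            left, right = left + reading // 4, right - reading // 4
    return left, right

print(avoid([0] * 8))                        # clear path: cruise straight ahead
print(avoid([1000, 0, 0, 0, 0, 0, 0, 0]))    # wall to the front-right: turn left
```

The logic is trivially simple - which was rather the point of my humbling evening: on the real robot the work is not the behaviour but configuring and servicing all those peripheral sub-systems before a single sensor reading arrives.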

Just goes to show that with robots too, appearances can be deceptive.

----------------------------------------------------------------------
*Fortunately, and with remarkable generosity, the e-puck's designers have released the whole of the e-puck design - hardware and software - under an open source licence. So there are lots of function libraries and example programs to be explored.
And I should have mentioned that, in addition to public engagement, we're also evaluating the e-pucks as possible robots for our new Artificial Culture project. More blogs about this in due course.

Tuesday, February 13, 2007

The Rights of Robot

Almost exactly a year ago I wrote about wild predictions of human-level AI. Another prediction that has caught the attention of the general press is about robot rights. See for instance this piece in the otherwise sensible Financial Times: UK report says robots will have rights, or the BBC technology news here, and elsewhere.

The prediction that provoked these responses is worth a look: Robo-rights: Utopian dream or rise of the machines? 

The report, by Outsights - Ipsos MORI, was part of the UK government's strategic horizon-scanning exercise and is pretty brief at a little over 700 words. In a nutshell, the report says that if robots gain artificial intelligence then calls may be made for them to be granted human rights. The report doesn't make it clear whether such calls would be made by humans on robots' behalf, or by the robots themselves (although the only link given is to the American Society for the Prevention of Cruelty to Robots, which seems to imply the former). The likelihood of this is rated 1 out of 3 stars (33%..?), and the timescale 21-50+ years. The report, which is clearly written from a legal perspective (nothing wrong with that), goes on to make some frankly surreal speculations about robots voting, becoming tax payers or enjoying social benefits like housing or health-care.

Hang on, is this really a UK government commissioned report, or a script from Futurama..? I'm surprised it didn't go on to warn of loutish robots subject to ASBOs. 

Ok, let's get real. 

Do I think robots will have (human) rights within 20-50 years? No, I do not. Or to put it another way, I think the likelihood is so small as to be negligible. Why? Because the technical challenges of moving from insect-level robot intelligence, which is more or less where we are now, to human-level intelligence are so great. 

Do I think robots will ever have rights? Well, perhaps. In principle I don't see why not. Imagine sentient robots, able to fully engage in discourse with humans, on art, philosophy, mathematics; robots able to empathise or express opinions; robots with hopes, or dreams. Think of Data from Star Trek. It is possible to imagine robots smart, eloquent and persuasive enough to be able to argue their case - like Bicentennial Man - but, even so, there is absolutely no reason to suppose that robot emancipation would be rapid, or straightforward. After all, even though the rights of man* as now generally understood were established over 200 years ago, human rights are still by no means universally respected or upheld. Why should it be any easier for robots?

*or, to be accurate, 'men'.

Thursday, August 31, 2006

In praise of Ubuntu

Back in March I described problems compiling and installing new applications onto Linux and suggested that until this problem is solved, Linux will not take over the desktop; see On Linux. But in the last few weeks a colleague has introduced me to Ubuntu. It's a lovely distribution which feels clean, stable and nicely integrated. However, the real revelation of Ubuntu is the online package system, which means that finding and installing new applications is unbelievably straightforward. Of course this system doesn't cover every Linux application - just the (actually many) applications that have been placed on the Ubuntu package servers - but it's clearly the way to go.

I strongly recommend Ubuntu, and commend the good people supporting this distribution who really do appear to live up to the ideals implied by the word "Ubuntu".