Monday, July 13, 2009

Installing Player/Stage on OS X with MacPorts

Back in 2006 I wrote about installing the excellent Player/Stage robot simulator under Linux, and the problems caused by dependencies (i.e. other packages that need to be installed first, before you can then install Player/Stage). Wouldn't it be great, I wrote, if there were a universal installer program that would sort out all of these dependencies?

I should explain that since that post I've switched from Linux to OS X, running on a MacBook Pro, and have only just got round to installing Player/Stage. I was very pleased to discover that my plea for a universal installer has been answered by the (almost) excellent MacPorts.

I say almost excellent because installation wasn't completely glitch free.

Here's what I had to do to install on Mac OS X 10.5.7 (Leopard):

1. Download and install XCode (MacPorts depends on it)

2. Download and install MacPorts, install details here

3. In a terminal window run MacPorts with
$ sudo port install playerstage-stage playerstage-player

And wait an hour or so - it takes a while. However, compilation of playerstage-player fails with 'library not found for -ljpeg'. To fix this, as detailed here:

4. sudo port install python_select && sudo python_select python25

then re-run step 3.

5. But before you can run Player/Stage there's another fix needed, as detailed here.

sudo ln -s /usr/X11/share/X11/rgb.txt /usr/X11R6/lib/X11/rgb.txt

And that's it. Player/Stage installed.

Thursday, July 09, 2009

Letter to the Times Higher

Here is the full text of my letter in this week's THE:

Sir,

I am dismayed by the poor quality of journalism shown in the article Sandpits bring out worst in infantilised researchers. Of the two academics quoted, the first, Professor Docherty, hasn't been to a sandpit; a second, unnamed, researcher apparently hadn't either, instead reporting what some bloke had said to him at a conference. Come on THE, you can do better than this. It can't be that hard to find one or two participants prepared to offer opinions on the record, from the 25 sandpits so far. The piece is depressing also in its use of the pejorative trope 'reality-TV' without justifying it. I recall nothing even vaguely reality-TV-like about the sandpit I attended. And micromanaged? Yes, the week was skilfully managed – but how else can you go from 30 more or less complete strangers to coherent project teams and amazing proposals in 5 days? In fact there was a significant level of self-organisation going on within the sandpit framework. And what on earth is wrong with the word sandpit? The key to creativity is working with people outside your own discipline, outside your intellectual comfort zone; the analogy with play and exploration is apt. To be brought together with 30 very, very smart people and asked to think about big research questions is exhilarating, not infantilising.

Yours faithfully

Wednesday, July 08, 2009

Robotic Visions at At-Bristol





The first Robotic Visions conference has started here in the Science Learning Centre of At-Bristol! We have around 25 students from 4 schools. As the day progresses I'll be posting updates, and some pictures, here:

2.00pm The five groups have come up with their big issues:
  1. Robots looking after us
  2. Robots Venturing into Space
  3. Robot Family and Friends (companions)
  4. Robot Teachers and Trainers
  5. Equal access to Robot Technology for rich and poor
What a great set of issues. Especially the last one.

5.00pm We just finished the celebration session in which the five groups presented their findings and recommendations to the invited VIPs. Four of the groups elected to have show and tell presentations with posters and written material - all of which were brilliant. The 'equal access' group instead read out a powerful and moving statement that was critical of technology for technology's sake, when set against real issues such as poverty, while at the same time calling for a strong ethical approach to robotics. Hopefully I can get hold of a copy of that statement and post it here.

Friday, July 03, 2009

Scratchbot in the news

Check out this YouTube video of the amazing Scratchbot built by my colleagues at the Bristol Robotics Lab:



This robot not only has artificial whiskers that 'whisk' just like real rodents' whiskers, but even more amazingly it processes the sense data from the whiskers with a high-fidelity electronic model of the barrel cortex - the small part of the rat's brain that processes sensory input from its whiskers. If you look carefully you can see the micro-vibrissae - the small extra-sensitive non-whisking whiskers at the robot's snout.

Here's the full story on the EU Cordis news service.

Thursday, June 25, 2009

Chimpanzee culture on Material World

There was a great piece on this afternoon's Material World - an interview with Andrew Whiten about cultural traditions in chimpanzees. Andrew Whiten makes the very interesting observation that while many animals appear to have 'traditions' (e.g. separate groups of the same bird species with different birdsong), chimpanzees have dozens of traditions. Does this mean that chimps have culture? I think so, yes.

Chimp culture appears, however, to have remained relatively static - Whiten observes that archaeological investigation has shown traditions to have persisted for hundreds if not thousands of years. Longer, I would suspect, given that anatomically modern chimps have been around for over six million years. In other words, the big bang of human cultural evolution has never happened for chimps. What cognitive deficit in chimps might account for this..?

Friday, June 19, 2009

Artificial Culture web pages now up

Check out our new Artificial Culture project web pages:



These have been built using Google Sites - a remarkably straightforward way to create both the structure and content for a set of web pages, without HTML coding (actually I did have to tweak the code a couple of times). Integration with other Google applications means, for instance, that creating a slide show of images requires only that you upload the images to a Picasa album, then insert the slideshow gadget and point it at the Picasa URL. Add another image to the album and it automatically appears in your web site slide show.

There is one limitation: while invited collaborators can sign in and add comments - in blog fashion - to existing posts (as well as create and edit new pages), ordinary visitors to the web site cannot. Given that blog functionality is clearly built into the Sites technology, it ought to be straightforward to provide an option to allow comments to be submitted, on selected pages, by non-signed-in visitors. Or a blog gadget. Google..?

Tuesday, June 16, 2009

Autonomous robots with guns are a bad idea

Check out Noel Sharkey's excellent piece describing the depressingly relentless 'advance' of offensive robots, in today's Daily Telegraph: March of the Killer Robots.

Like Noel I am profoundly worried by the weaponisation of robots.

Friday, May 29, 2009

Robotics a key future industry

Great to see an independent report listing robotics as one of the key future industry sectors for the UK in today's Guardian: UK industry set to put its best robotic foot forward. (Notice also the brilliant photograph of the amazing hand on the BRL/Elumotion robot BERTI.)

There's clearly something in the air because just a couple of days ago blogs were discussing the launch of a US National Robotics Technology roadmap. Here's an interesting quote from the briefing paper "robotech represents one of the few technologies capable in the near term of building new companies and creating new jobs and in the long run of addressing issues of critical national importance".

Thursday, April 23, 2009

Artificial Culture in Prague

I'm here at the brilliant European Union conference Science beyond Fiction, and yesterday gave my talk in the session on Collective Robotics: adaptivity, co-evolution, robot societies. I was pretty nervous because (a) this is my first talk on the Artificial Culture project to an international audience of senior researchers and (b) the project is still in its early development stages so we don't yet have any results. However, I'm pleased to say the talk went down well and I had some great questions - followed by conversations late into the evening.

Here is a movie of my presentation slides:


One of the questions was about robot imitation: are the robots learning to imitate, or have we pre-programmed them with imitation? My answer was that we have hand-coded imitation, in other words, our robots are endowed with an imitation instinct. You have to start somewhere, I argued, and this seems a good place to start and will initially allow us to study meme-evolution in our robot society in isolation from robot adaptation. While my questioners agreed, they suggested that the evolution of imitation would also be really interesting, and encouraged us to - in effect - turn the evolutionary clock a little further back in our robot model of the emergence of culture.

Here are all of my blog posts on this project so far.

Wednesday, March 25, 2009

Robotic Visions goes nationwide

Really great news. We learned today that our EPSRC bid to take Robotic Visions nationwide has been granted.

Let me explain what Robotic Visions is. About a year and a half ago we (that is Walking with Robots) ran an event in London called the Young Person's Visions conference, co-organised with the excellent London Engineering Project and the Royal Academy of Engineering (RAE). We brought about 20 young people - aged between 16 and 18 - to the grand setting of the RAE for 2 days and asked them to think and talk about what kind of robotics technology they would like in their future. They met with and took evidence from roboticists, in much the same way that a parliamentary select committee does, and at the end formed and agreed a set of recommendations. Those recommendations have now been published by the RAE to inform senior members of the academy and other policy-makers: click here to see that report.

See my blog post on that event here: the future doesn't just happen - we must own it. The new grant will now allow us to run the same kind of event in other venues across the UK: Bristol, Newcastle, Aberystwyth, Glasgow and Oxford.

Tuesday, March 24, 2009

Emergence in Glasgow

Just returned from the excellent 2nd EmergeNet meeting, in Glasgow. EmergeNet is an EPSRC-funded network of projects and people linked by an interest in emergence. As the Wikipedia article states, the phenomenon of emergence has been known about for a long time, but it still defies a proper scientific definition. In other words a definition that allows you to look at some complex phenomenon and say yes, this is true emergence, but that isn't, and to measure the strength of the emergence (if indeed that is possible).

The reason a rigorous definition of emergence is important is that we can now contemplate designing complex systems that exploit emergence. A swarm robotics system is, for instance, a designed system which relies on emergence but - within the framework of complexity science - many other systems, from molecular to economic, would benefit from a deep understanding of emergence.

There were some truly excellent talks at EmergeNet2 - I'll add a link here when the presentations are online. But from one of those talks here is a link to an astonishing YouTube video from EmergeNet leader Lee Cronin and his team, showing (if I understand it correctly) controlled inorganic crystalline growth of molecular tubes - which looks remarkably organic.

Friday, March 13, 2009

Symbrion debates @Stuttgart

In Stuttgart, at the University, for a Symbrion project meeting. It's been a really tough meeting - which is hardly surprising given that we're one year in and - next month - have the big end-of-first-year review meeting in Prague. So a major part of the meeting has been a dress rehearsal for the review.

However, spending a day and a half with a group of very smart people is always a pleasure, and there were some really interesting issues to debate. One concerns the fundamental question of how much of the Symbrion system should be designed and how much evolved (using evolutionary computing techniques). One could take a purist view and aim to evolve every aspect. My own view is more pragmatic. I think that achieving the aims of the Symbrion project is going to be so difficult that we should resort to artificial evolution only for the parts of the system that we can't design, because we don't know how.

Also, I think there's a 'biological plausibility' argument for taking the pragmatic view. The Symbrion system will be both a swarm of individual robots, behaving like a swarm, and - following self-assembly - a multi-cellular organism, behaving as a single organism. Swarm and organism have, I think, radically different control paradigms; the former fully decentralised and dependent on mechanisms of emergence and self-organisation, the latter centralised and coordinated (by a central nervous system). Of course ant genes must encode both the instructions to build multi-cellular animals (ants with CNSs and coordinated control, e.g. for walking) and the behaviours which give rise to the colony's collective swarm intelligence. However, Symbrion goes beyond anything seen in nature. We want the Symbrion robots to sometimes behave like complicated ant-like creatures, and sometimes behave like complicated cells in a complex body (that can perform useful coordinated functions). I think if such a thing could have evolved it would have been (except for the fascinating but much-simpler-than-Symbrion case of the social amoeba Dictyostelium discoideum sometimes self-assembling into multicellular structures).

This is why I think engineering a single evolutionary process that can evolve both swarm intelligent control and centralised coordinated control is asking too much.

Sunday, February 01, 2009

E-puck imitation

Last week we made a significant breakthrough in the Artificial Culture project. My student Mehmet Erbas demonstrated robot-robot imitation for the first time. To be more precise: one e-puck robot first watching another e-puck perform a sequence of movements, then (attempting to) imitate the same sequence of movements. This sounds much easier than it is. It's difficult for two reasons. Firstly, because the e-puck can't see very well. It only has one eye - so no stereo vision and no depth information. Thus we make it easier for the robots to see each other by fitting them with coloured skirts in primary colours. Secondly, the robot has to translate what it has seen (which amounts to a coloured blob moving left to right and/or getting larger or smaller within its field of vision) into a set of motor commands so it can copy those movements. This transformation is what researchers in imitation in humans and animals call the correspondence problem.
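To make the correspondence problem a little more concrete, here's a toy sketch of my own (not Mehmet's actual code - the function, gains and sign conventions are all hypothetical): the observer sees only a coloured blob's horizontal position and apparent size in each frame, and must translate changes in those into turn and drive commands.

```python
def observations_to_motor_commands(blob_track, k_turn=1.0, k_drive=1.0):
    """blob_track: list of (x, size) pairs - x is the blob's horizontal
    position in the image (-1 = far left, +1 = far right), size its apparent
    width. Returns a list of (turn, drive) motor commands. The gains k_turn
    and k_drive are hypothetical and would need calibrating on a real e-puck."""
    commands = []
    for (x0, s0), (x1, s1) in zip(blob_track, blob_track[1:]):
        turn = k_turn * (x1 - x0)      # blob slid sideways -> turn to copy it
        drive = k_drive * (s1 - s0)    # blob grew -> the demonstrator advanced
        commands.append((turn, drive))
    return commands

# Demonstrator drives toward the observer (blob grows), then turns
# (blob slides across the image).
track = [(0.0, 0.10), (0.0, 0.14), (0.0, 0.18), (0.3, 0.18), (0.6, 0.18)]
cmds = observations_to_motor_commands(track)
```

Even this caricature shows why the problem is hard: a blob sliding across the image is ambiguous between the demonstrator turning and it moving sideways, and the one-eyed observer has no depth information to disambiguate.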

Mehmet has solved these problems with some very neat coding, and the demonstration shows that the imitated dance is - on most runs - a remarkably good copy of the original. We're now figuring out how to measure the quality of imitation Qi, so we can get some results and understand the average Qi and its variance.
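One way to define such a measure - and this is purely my own back-of-envelope sketch, not the metric we will actually use - is to compare the demonstrated and imitated movement sequences step by step, and squash the accumulated error into a score between 0 and 1:

```python
import math

def quality_of_imitation(original, imitated):
    """A hypothetical Qi in (0, 1]: 1.0 is a perfect copy, values near 0 a
    poor one. Each step in a sequence is (distance moved, turn in radians)."""
    if not original or not imitated:
        return 0.0
    total_error = 0.0
    for (d1, t1), (d2, t2) in zip(original, imitated):
        total_error += math.hypot(d1 - d2, t1 - t2)
    # Penalise sequences of different lengths too (zip truncates the longer).
    length_penalty = abs(len(original) - len(imitated))
    return 1.0 / (1.0 + total_error + length_penalty)

dance   = [(0.10, 0.0), (0.00, 1.57), (0.10, 0.0)]
copy_ok = [(0.09, 0.0), (0.00, 1.48), (0.10, 0.0)]
print(quality_of_imitation(dance, dance))    # a perfect copy scores 1.0
print(quality_of_imitation(dance, copy_ok))  # a close copy scores a little less
```

Whatever form the real Qi takes, having it as a single number is what lets us talk about an average and a variance across many runs.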

Wednesday, January 28, 2009

WWR @ Disneyland in January

Just gave a Walking with Robots talk for about 450 school children, at Disneyland, Paris. The Gaumont cinema to be precise, on the Disneyland complex. This was the first time I've given a talk in a cinema - with my slides projected onto the giant sized cinema screen behind me!

My audience, who I discovered had been bussed from various schools across the UK, were attending a Royal Institution Study Experience - a kind of science winter camp. Even allowing for the cold and grey January weather - what a great way to spend a few days of intensive hands-on science.

Wednesday, January 14, 2009

Robots for Risky Interventions

Returning on the Eurostar from a really interesting workshop in Brussels, on Robots for Risky Interventions and Environmental Surveillance (RISE 09). The focus of the workshop was a number of EU funded projects aimed at developing multi-robot systems in safety-critical applications. One project called GUARDIANS, led by Jacques Penders at Sheffield Hallam, is aimed at providing firefighters with robot outriders, providing sensing and navigation that - in effect - give the firefighter extended super-senses. I learned that one of the most dangerous situations they have to deal with is large warehouse fires which quickly fill with smoke, making it very easy for firefighters to become lost and disoriented in the labyrinth of aisles between storage racks. But the flat smooth warehouse floor and grid-like layout is of course ideal for mobile robots, making this a really good application for robots to prove themselves useful in a serious and worthwhile real-world task.

I gave a talk setting out the potential of using a swarm robotics approach to safety-critical applications. The swarm approach differs from the conventional multi-robot systems approach in its control paradigm. A multi-robot system will typically use a centralised command and control system to both direct the actions of individual robots and coordinate the whole group. In contrast a swarm uses a completely decentralised, distributed approach, in which each robot decides how to act autonomously - using local sensing and communication with neighbouring robots - so that the swarm self-organises to achieve the overall task or mission. Although the robots may look the same in both cases, the swarm approach is radically different from a systems control point of view. But the swarm approach offers the potential of much higher resilience to failure (of individual robots, for instance).
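The flavour of the decentralised paradigm can be caught in a toy sketch (mine, for illustration only - not the actual GUARDIANS or swarm controllers, and the sensing range, step size and cohesion rule are all made up): each robot steers toward the centroid of the neighbours it can sense locally, and nothing else. There is no central controller anywhere in the loop, yet the swarm as a whole coheres.

```python
import math

SENSE_RANGE = 2.0   # local sensing radius (arbitrary units)
STEP = 0.1          # distance moved per update

def step(positions):
    """One synchronous update: each robot moves toward its local centroid,
    using only the positions of neighbours within SENSE_RANGE."""
    updated = []
    for i, (x, y) in enumerate(positions):
        neighbours = [(nx, ny) for j, (nx, ny) in enumerate(positions)
                      if j != i and math.hypot(nx - x, ny - y) <= SENSE_RANGE]
        if not neighbours:
            updated.append((x, y))          # nobody in range: stay put
            continue
        cx = sum(p[0] for p in neighbours) / len(neighbours)
        cy = sum(p[1] for p in neighbours) / len(neighbours)
        d = math.hypot(cx - x, cy - y)
        if d < STEP:                        # close enough: jump to the centroid
            updated.append((cx, cy))
        else:
            updated.append((x + STEP * (cx - x) / d, y + STEP * (cy - y) / d))
    return updated

def spread(positions):
    """Largest pairwise distance - a crude measure of swarm dispersion."""
    return max(math.hypot(ax - bx, ay - by)
               for ax, ay in positions for bx, by in positions)

start = [(0.0, 0.0), (1.5, 0.2), (3.0, -0.1), (4.5, 0.3)]
spread_before = spread(start)
swarm = start
for _ in range(30):
    swarm = step(swarm)
spread_after = spread(swarm)  # the loose line of robots has clumped together
```

Note what isn't in the code: no robot ever sees the whole swarm, and no process directs the group - cohesion emerges from the repeated local rule, which is exactly why the loss of any individual robot does not bring the system down.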

Thursday, November 20, 2008

Swarm Tolerance to Failure

Jan Dyre Bjerknes has made a terrific breakthrough with his PhD research that I'm very excited about: see the YouTube video (courtesy of Jan Dyre) below.

Let me explain what's going on here, and why I'm so excited about it. The swarm of 10 e-puck robots, starting on the left of the arena, are attracted to the beacon (the black box) on the right of the arena. Crucially the swarm's movement toward the beacon is not directly programmed into the robots, it is what we call an emergent property of the swarm. I won't explain how it works here, except to say that the robots need to - in a sense - cooperate. One robot can't make it to the beacon on its own, nor two, nor three or four. Five is about the fewest that can get to the beacon.



If you watch the movie clip carefully you will see that a few seconds into the experiment Jan Dyre has arranged for two of the robots to fail: you can see them stop moving. In fact they fail in a really bad way: their electronics and software still work, only the motors have failed. But because the swarm works cooperatively, the failed robots have the effect of anchoring the swarm and impeding its movement toward the beacon. However, what the clip also shows is that the 'force' of the swarm movement (of the 8 robots still working) is, after a while, enough to overcome the 'anchoring force' of the two failed robots. Bearing in mind that partial failures are the worst kind, 20% is a massive failure rate, so this experiment demonstrates the very high level of fault tolerance in a robot swarm.

Tuesday, September 02, 2008

E-pucks with spiky hats

Here are some pictures of e-pucks sporting their new spiky hats (click to enlarge). The purpose of these hats is to allow us to mark each e-puck with 3 reflective spheres, as shown on the left e-puck in the pictures. The reflective spheres allow the e-pucks to be tracked by our Vicon tracking system, and the grid of spikes means that we can provide each e-puck with its own unique pattern of 3 reflective spheres. Jan Dyre (who took these photos) tells me that there are 92 ways of uniquely arranging 3 spheres on this 6x4 grid. The Vicon system will, I'm advised, be able to track each robot in the swarm by recognising its unique pattern of 3 spheres. The Vicon system is due to be set up by their engineers this coming Thursday: it will be great to see it working.

Monday, August 04, 2008

Richard Vaughan and Marco Dorigo visit the lab

Terrific to have visits today from both Richard Vaughan and his team from Simon Fraser University in Vancouver, Canada, and Marco Dorigo from the Université Libre de Bruxelles. Both Richard and Marco are luminaries in the field of Swarm Robotics: Richard for his part in developing the Player/Stage simulation tools and Marco for more or less pioneering the field of Swarm Intelligence and subsequently leading swarm robotics projects such as SWARM-BOT.

Friday, August 01, 2008

Heart Robot and BSc Robotics

It has been brilliant to see the amazing coverage of the Heart Robot project during the last two days. Check out this piece on BBC news online, or Google Heart Robot. Heart robot was jointly conceived by my colleagues Matthew Studley (lecturer in Robotics), Claire Rocks (research fellow) and BSc robotics student David McGoran. Matt and Claire wrote the bid for funding from the EPSRC partnerships for public engagement scheme, with David as a named researcher. I don't need to describe Heart Robot because you can see the whole story on the excellent project web pages here http://www.heartrobot.org.uk/.

The reason for this post is to say Way to go Team, and to anybody out there who might be thinking of studying robotics at university: this is what can happen when you come and do robotics at UWE!

Tuesday, July 15, 2008

How to make a fool of yourself on national radio

Being interviewed live on national radio is an interesting experience.

It's not so bad when you're in a studio face to face with the interviewer. Then there's a proper sense of occasion, of being there for a purpose, something to rise to.

But being interviewed by telephone is an altogether different and more risky proposition. Why risky? Let me set the scene. You've agreed to be interviewed by a national radio station that has, hitherto, never blipped onto your cultural radar. The producer called and asked if you would be able to comment, in the science slot of the breakfast show, about a recent newspaper article listing the top 10 reasons that mankind could be wiped out this century. In particular the one that predicts mankind will, within 40 years, build super-intelligent robots who promptly (and ungratefully) enslave their creators. Quickly passing over your observation that said producer seems surprisingly laid back, you say to yourself - can't be so bad - they have a science slot. And of course you would be grateful for the opportunity to explain why this particular prediction is laughably absurd.

You rise early the following morning, after checking the news piece and giving some thought to how you can counter this particular piece of futurology. (Which turns out to be based on the mistaken assumption that because processing power is doubling roughly every 2 years, then robot intelligence is doing the same.)

With 20 minutes to spare you find the radio station on the Interweb and click the listen now button. The presenter starts to talk about robots-taking-over-the-world and invites a phone in. He wants listeners to phone with mad robot inventions and introduce them with a robot voice. Hmmm. At this point you begin to realise that the science slot doesn't have quite the level of gravitas that you might have hoped for.

Then the phone rings. Butterflies. Ok, normal. It's the laid-back producer again. After a few minutes listening to the radio on the phone you hear yourself being introduced and you're on. This bit is always weird. You're on the phone with a few hundred thousand people on the other end. Just focus. It's only a conversation with some guy. Never mind that he's called Xane. Or the fact that he just egregiously misquoted the article by inserting the words 'taking-over-the-world' between 'probability of super-intelligent robots' and 'high'.

The first couple of questions are kind of ok. More or less what you expected. You carefully explain that no, in your opinion it's extremely unlikely that we will build robots with super-human intelligence in the next 40 years and, even if we did, why should they be evil and take over the world (or more to the point why would we make them evil). Then some relatively innocuous questions: What is the most powerful robot in the world - is it Asimo? Er no, Asimo is actually remotely controlled by a team of 6. What about that freaky monkey robot with the robot arm? Well, that's not so much a robot as work to improve neural electronic interfaces to help people with smart prostheses.

Then just when you think it's all over you get the inevitable mad-question-at-the-end.

Q. But if robots did take over the world, what would we call them?

A. I really don't think robots are going to take over the world. 

Q. (More insistently this time) Yes, but if they did. What would we call them?

A. No, they really aren't going to take over the world.

Q. (Even more insistently) But what if they did? What would we call them?

Then you make a fool of yourself on the radio by wearily saying 'evil robot master' or somesuch nonsense, thus eliciting the triumphal response from Xane and his co-presenter: Aha! See, the professor says so. Robots really are going to take over the world.