Monday, December 22, 2014

Robot Bodies and how to Evolve them

Evolutionary robotics has been around for about 20 years: it's about 15 years since Stefano Nolfi and Dario Floreano published their seminal book on the subject. Yet, surprisingly, the number of real, physical robots whose bodies have been evolved can be counted on the fingers of one hand. The vast majority of ER research papers are concerned with the evolution of robot brains - the robot's control system. Or, when robot bodies are evolved, the robot is often never physically realised. This seems to me very odd, given that robots are real physical artefacts whose body shape - morphology - is deeply linked to their role and function.

The question of how to evolve real robot bodies - and why we don't appear to have made much progress in the last 15 years - was the subject of my keynote at the IEEE International Conference on Evolvable Systems (ICES 2014) in Orlando a week ago. Here are my slides:



The talk was in three parts.

In part one I outlined the basic approach to evolving robots using the genetic algorithm, referring to figure 18: The four-stage process of Evolutionary Robotics, from chapter 5 of my book.
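
To make that loop concrete, here is a minimal Python sketch of the four-stage process - evaluate, select, recombine, mutate - applied to an abstract genome. The genome encoding, fitness function and parameter values are illustrative assumptions of mine, not the specific setup described in the book or the talk.

```python
import random

# Minimal sketch of the four-stage evolutionary loop (illustrative only).
# A genome here is just a list of floats standing in for body/controller
# parameters; a real ER setup would decode it into a morphology and a
# controller and test the resulting robot in simulation or on hardware.

GENOME_LENGTH = 10
POP_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.1


def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]


def evaluate(genome):
    # Stage 1: fitness testing. Placeholder score: in ER this would be a
    # behavioural measure, e.g. distance travelled by the decoded robot.
    return sum(genome)


def select(population, fitnesses):
    # Stage 2: selection. Simple tournament selection of two parents.
    def tournament():
        a, b = random.sample(range(len(population)), 2)
        return population[a] if fitnesses[a] > fitnesses[b] else population[b]
    return tournament(), tournament()


def crossover_and_mutate(parent_a, parent_b):
    # Stages 3 and 4: recombination and mutation produce an offspring genome.
    cut = random.randint(1, GENOME_LENGTH - 1)
    child = parent_a[:cut] + parent_b[cut:]
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in child]


population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    fitnesses = [evaluate(g) for g in population]
    population = [crossover_and_mutate(*select(population, fitnesses))
                  for _ in range(POP_SIZE)]

best = max(population, key=evaluate)
print("best genome:", [round(g, 2) for g in best])
```

Note that only the evaluate step decides whether this is simulate-then-transfer-to-real or embodied evolution: run it in simulation and you have the former, run it on physical robots and you have the latter.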

I then reviewed the state-of-the-art in evolving real robot bodies, starting with the landmark Golem project of Hod Lipson and Jordan Pollack, referencing both Henrik Lund's and Josh Bongard's work on evolving Lego robots, then concluding with the excellent RoboGen project of Josh Auerbach, Dario Floreano and colleagues at EPFL. Although conceptually RoboGen has not moved far from Golem, it makes the co-evolution of robot hardware and controllers accessible for the first time, through the use of 3D-printable body parts which are compatible with servo-motors, and a very nice open-source toolset which integrates all stages of the simulated evolutionary process.

RoboGen, Golem and, as far as I'm aware, all work to date on evolving real physical robot bodies has used the simulate-then-transfer-to-real approach, in which the whole evolutionary process - including fitness testing - takes place in simulation and only the final 'fittest' robot is physically constructed. Andrew Nelson and colleagues, in their excellent review paper, point out the important distinction between simulate-then-transfer-to-real and embodied evolution, in which the whole process takes place in the real world - in real-time and real-space.

In part two of the talk I outlined two approaches to embodied evolution. The first I call an engineering approach, in which the process is completely embodied but takes place in a kind of evolution factory; this approach needs a significant automated infrastructure: instead of a manufactory we need an evofactory. The second approach I characterise as an artificial life approach. Here there is no infrastructure. Instead 'smart matter' somehow mates then replicates offspring over multiple generations, in a process much more analogous to biological evolution. This was one of the ambitious aims of the Symbrion project which, sadly, met with only limited success. Trying to make mechanical robots behave like evolving smart matter is really tough.

Part three concluded by outlining a number of significant challenges to evolving real robot bodies. First I reflected on the huge challenge of evolving complexity. To date we've only evolved very simple robots with very simple behaviours, or co-evolved simple brain/body combinations. I'm convinced that evolving robots of greater (and useful) complexity requires a new approach. We will, I think, need to understand how to co-evolve robots and their ecosystems*. Second I touched upon a related challenge: genotype-phenotype mapping. Here I referred to Pfeifer and Bongard's scalable complexity principle - the powerful idea that we shouldn't evolve robots directly, but instead the developmental process that will lead to the robot, i.e. artificial evo-devo. Finally I raised the often overlooked challenge of the energy cost of artificial evolution.
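
To give a flavour of what an indirect genotype-phenotype mapping might look like, here is a toy Python sketch in which a small genome of rewrite rules is 'grown' into a larger body plan over several developmental steps. The symbols, rules and growth procedure are my own illustrative assumptions, in the spirit of evolving the developmental process rather than the robot directly; they are not Pfeifer and Bongard's formulation, nor anything presented in the talk.

```python
# Toy sketch of an indirect (developmental) genotype-phenotype mapping.
# The genome encodes rewrite rules (an L-system-like grammar), and the
# phenotype - a string of body-part symbols - is 'grown' by applying those
# rules repeatedly to a seed. Evolution would act on the compact rule set,
# not on the body plan directly. Symbols and rules are illustrative only.

genome = {
    "B": "B J L",   # a body segment grows a joint and a limb
    "L": "L S",     # a limb grows a sensor
}


def develop(seed="B", steps=3):
    """Grow a phenotype by repeatedly rewriting symbols using the genome's rules."""
    body_plan = seed.split()
    for _ in range(steps):
        grown = []
        for part in body_plan:
            # Rewrite the part if a rule exists, otherwise keep it unchanged.
            grown.extend(genome.get(part, part).split())
        body_plan = grown
    return body_plan


if __name__ == "__main__":
    # A genome of two rules unfolds into a ten-part body plan after three steps.
    print(develop())
```

The point of the sketch is simply that the phenotype's complexity emerges from repeated development of a compact genome, which is the property that makes this kind of encoding attractive for scaling up evolved robots.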

But the biggest challenge remains essentially what it was 20 years ago: to fully realise the artificial evolution of real robots.


Some of the work in this talk is set out in a forthcoming paper: AFT Winfield and J Timmis, Evolvable Robot Hardware, in Evolvable Hardware, eds M Trefzer and A Tyrrell, Springer, in press.

*I touch upon this in the final para of my paper on the energy cost of evolution here.

Thursday, December 18, 2014

Philae: A proof of concept for cometary landing

The question Robotics by Invitation asked its panel in November 2014 was:

What does the first successful landing on a comet mean for the future of (robotic) space mining and exploration? What are the challenges? What are the opportunities?

Here is my answer:

The successful landing of Philae on comet 67P/Churyumov-Gerasimenko is an extraordinary achievement and of course demonstrates - despite the immense challenges - that such a landing is possible. The Philae mission was, in a sense, a proof of concept for cometary landing and this, for me, answers the question 'what does it mean'.

Of course there is a very large distance between proof of concept and commercial application, so it would be quite wrong to assume that Philae means that space mining (of planets, asteroids or comets) is just around the corner. Undoubtedly the opportunities are immense and - as pressure on Earth's limited and diminishing resources mounts - there is an inevitability about humankind's eventual exploitation of off-world resources. But the costs of space mining are literally astronomical, and therefore unthinkable for all but the wealthiest companies or, indeed, nations.

Perhaps multi-national collaborative ventures are a more realistic proposition and - for me - more desirable; the exploitation of the solar system is something I believe should benefit all of humankind, not just a wealthy elite. But politics aside, there are profoundly difficult technical challenges. You cannot teleoperate this kind of operation from Earth, so a very high level of autonomy is required and, as Philae dramatically demonstrated, we need autonomous systems able to deal with unknown and unpredictable situations, then re-plan and, if necessary, adapt - in real-time - to these exigencies. The development of highly adaptive, resilient, self-repairing - even self-evolving - autonomous systems is still in its infancy. These remain fundamental challenges for robotics and AI research. But even if and when they are solved there will be huge engineering challenges, not least of which is how to return the mined materials to Earth.

Bearing in mind that, to date, only a few hundred kg of moon rock have been successfully returned* and that Mars sample-return missions are still at the planning stage, we have a very long way to go before we can contemplate returning off-world materials in sufficient quantities to justify the costs of mining them.

*and possibly a few grains of dust from the Japanese asteroid probe Hayabusa.