Monday, September 30, 2013

Don't build robots, build robot systems

Why aren't there more intelligent mobile robots in real world applications? It's a good question, and one I'm often asked. The answer I give most often is that it's because we're still looking for that game changing killer app - the robotics equivalent of the spreadsheet for PCs. Sometimes I place the blame on a not-quite-yet-solved technical deficit - like poor sensing, or sensor fusion, or embedded AI; in other words, our intelligent robots are not yet smart enough. Or I might cite a not-fully-developed capability: robots are not yet able to cope with unpredictable (i.e. human) environments, and we can't yet assure that our robots are safe and dependable.

Last week at euRathlon 2013 I realised that these answers are all wrong. Actually, that would be giving myself credit where none is due. The answer to the question - why aren't there more intelligent mobile robots in real world applications? - was pointed out by several of the presenters at the euRathlon 2013 workshop, but most notably by our keynote speaker Shinji Kawatsuma, from the Japan Atomic Energy Agency (JAEA). In an outstanding talk Kawatsuma explained, with disarming frankness, that although his team had robots, they were poorly prepared to use those robots at the Fukushima Daiichi nuclear power plant, because the systems for deployment were not in place. The robots are not enough. Just as important are procedures and protocols for robot deployment in an emergency; mobile infrastructure, including vehicles to bring the robots to the emergency - vehicles capable, as he vividly explained, of negotiating a road system choked with debris from the tsunami and strained with other traffic (rescue workers and evacuees); integration with other emergency services; and, above all, robot operators trained, practised and confident enough to guide the robots through whatever hazards they face at the epicentre of the disaster.

In summing up the lessons learned from robots at Fukushima, Shinji Kawatsuma offered this advice - actually it was more of a heartfelt plea: don't build robots, build robot systems. And, he stressed, those systems must include operator training programmes. It was a powerful message for all of us at the workshop. Intelligent robots are endlessly fascinating machines, with all kinds of difficult design challenges, so it's not surprising that our attention is focussed on the robots themselves. But we need to understand that real world robots are like movie stars - who (despite what they might think) wouldn't be movie stars at all without the supporting cast, camera and sound crews, writers, composers, special effects people and countless other departments that make up the film industry. Take the Mars rover Curiosity - an A-list movie star of robotics. Curiosity could not do her job without an extraordinary supporting infrastructure that, firstly, delivered her safely to the surface of Mars and, secondly, allows her operators to direct her planetary science exploration.

Curiosity: an A-list movie star robot (NASA/JPL-Caltech/MSSS), with a huge supporting cast of science and technology.

So, to return to my question: why aren't there more intelligent mobile robots in real world applications? The answer is plain. It's because without supporting systems - infrastructure and skilled operators, integrated and designed to meet the real world need - a robot, regardless of how innovative and intelligent it is, will never make the transition from the lab to the real world. Without those systems that robot will remain no more than a talented but undiscovered actor.


Hans-Arthur Marsiske, The use of robots in Fukushima: Shinji Kawatsuma Interview, Heise online, 25 September 2013 (in German).
K Nagatani, S Kiribayashi, Y Okada, K Otake, K Yoshida, S Tadokoro, T Nishimura, T Yoshida, E Koyanagi, M Fukushima and S Kawatsuma, Emergency response to the nuclear accident at the Fukushima Daiichi Nuclear Power Plants using mobile rescue robots, Journal of Field Robotics, 30 (1), 44-63, 2013.

Friday, September 20, 2013

The Triangle of Life: Evolving Robots in Real-time and Real-space

At the excellent European Conference on Artificial Life (ECAL) a couple of weeks ago we presented a paper called The Triangle of Life: Evolving Robots in Real-time and Real-space (this links to the paper in the online proceedings).

As the presenting co-author I gave a one-slide one-minute pitch for the work, and here is that slide.

The paper proposes a new conceptual framework for evolving robots, which we call the Triangle of Life. Let me outline what this means. But first, a quick intro to evolutionary robotics. In my book Robotics: A Very Short Introduction I wrote:
One of the most fascinating developments in robotics research in the last 20 years is evolutionary robotics. Evolutionary robotics is a new way of designing robots. It uses an automated process based on Darwinian artificial selection to create new robot designs. Selective breeding, as practised in human agriculture to create new improved varieties of crops, or farm animals, is (at least for now) impossible for real robots. Instead, evolutionary robotics makes use of an abstract version of artificial selection in which most of the process occurs within a computer. This abstract process is called a genetic algorithm. In evolutionary robotics we represent the robot that we want to evolve, with an artificial genome. Rather like DNA, our artificial genome contains a sequence of symbols but, unlike DNA, each symbol represents (or ‘codes for’) some part of the robot. In evolutionary robotics we rarely evolve every single part of a robot.
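The genetic algorithm described above can be sketched in a few lines of code. Everything here is illustrative - the genome length, the population parameters and especially the fitness function are my own placeholder choices: in real evolutionary robotics the fitness of a genome would be measured by decoding it into a controller and running that controller on a simulated or physical robot.

```python
import random

GENOME_LENGTH = 16      # number of symbols in the artificial genome
POPULATION_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.05

def random_genome():
    # Each symbol 'codes for' some part of the robot,
    # e.g. a weight in the robot's controller.
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder fitness: in practice, decode the genome into a
    # controller, run the robot, and score its behaviour. Here we
    # simply reward genomes whose symbols approach 0.5.
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(a, b):
    # Single-point crossover: splice two parent genomes together.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def mutate(genome):
    # Randomly perturb a small fraction of the genome's symbols.
    return [random.uniform(-1.0, 1.0) if random.random() < MUTATION_RATE
            else g for g in genome]

def evolve():
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Darwinian artificial selection: the fitter half survive
        # and breed; the rest are replaced by their offspring.
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]
        offspring = [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

best = evolve()
```

After a few dozen generations the best genome in the population is, with high probability, far fitter than any random genome - which is the whole point of the method: good designs are discovered, not hand-crafted.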

A robot consists of a physical body with an embedded control system - normally a microprocessor running control software. Without that control software the robot just wouldn't do anything - it would be the robot equivalent of a physical body without a mind. In biological evolution bodies and minds co-evolved (although the dynamics of that co-evolutionary process are complex and interesting). But in 20 years or so of evolutionary robotics the vast majority of work has focussed only on evolving the robot's controller. In other words we take a pre-designed robot body, then use the genetic algorithm to discover a good controller for that particular body. There has been little work on body-brain co-evolution, and even less work on evolving real robot bodies. In fact, we can count the number of projects that have evolved new physical robot bodies on the fingers of one hand*. Here is one of those very rare projects: the remarkable Golem project of Hod Lipson and Jordan Pollack.

This is surprising. When we think of biological evolution and the origin of species, our first thoughts are of the evolution and diversity of body shapes and structures. In the same way, the thing about a robot that immediately captures our attention is its physical body. And bodies are not just vessels for minds. As Rolf Pfeifer and Josh Bongard explain in their terrific book How the Body Shapes the Way We Think, minds depend crucially on bodies. The old dogma of Artificial Intelligence, that we can simply design an artificial brain without any regard to its embodiment, is wrong. True artificial intelligence will only be achieved by co-evolving physical bodies with their artificial minds.

In this paper we are arguing for a radical new approach in which the whole process of co-evolving robot bodies and their controllers takes place in real space and real time. And, as the title makes clear, we are also advocating an open-ended cycle of artificial life, in which every part of the robots' artificial life cycle takes place in real space and real time, from artificial conception, through artificial birth, artificial infancy and development, to artificial maturity and mating. Of course these words are metaphors: the artificial processes are at best a crude analogue. And let me stress that no-one has yet demonstrated this complete cycle. The examples that we give in the paper, from the EU Symbrion project, are just fragments of the process - not joined up in reality. And the Symbrion example is very constrained because of the modular robotics approach, which means that the building blocks of these 'multi-cellular' robot organisms - the 'cells' - are themselves quite chunky robots; we have only three cell types, and only a handful of cells for evolution to work with. Evolving robots in real space and real time is ferociously hard but, as the paper concludes: Our proposed artificial life system could be used to investigate novel evolutionary processes, not so much to model biological evolution – life as it is, but instead to study life as it could be.

Full reference:

Eiben AE, Bredeche N, Hoogendoorn M, Stradner J, Timmis J, Tyrrell A and Winfield A (2013), The Triangle of Life: Evolving Robots in Real-time and Real-space, pp 1056-1063 in Advances in Artificial Life, ECAL 2013, proc. Twelfth European Conference on the Synthesis and Simulation of Living Systems, eds. Liò P, Miglino O, Nicosia G, Nolfi S and Pavone M, MIT Press.


*I was surprised to discover this when searching the literature for a new book chapter I'm co-authoring with Jon Timmis on Evolvable Robot Hardware.

Related blog posts:
New experiments in embodied evolutionary swarm robotics
New video of 20 evolving e-pucks