Wednesday, November 03, 2010

Why large robot swarms (and maybe also multi-cellular life) need immune systems.

Just gave our talk at DARS 2010, basically challenging the common assumption that swarm robot systems are highly scalable by default. In other words, the assumption that if the system works with 10 robots, it will work just as well with 10,000. As I said this morning: "sorry guys, that assumption is seriously incorrect. Swarms with as few as 100 robots will almost certainly not work unless we invent an active immune system for the swarm". The problem is that the likelihood that some robots partially fail - in other words, fail in such a way as to actually hinder the overall swarm behaviour - quickly increases with swarm size. The only way to deal with this - and hence build large swarms - will be to invent a mechanism that enables good robots to both identify and disable partially failed robots. In other words, an immune system.

Actually - and this is the thing I really want to write about here - I think this work hints toward an answer to the question "why do animals need immune systems?". I think it's hugely interesting that evolution had to invent immune systems very early in the history of multi-cellular life. I think the basic reason for this might be the very same reason - outlined above - that we can't scale up from small to huge (or even moderately large) robot swarms without something that looks very much like an immune system. Just like robots, cells can experience partial failures: not enough failure to die, but enough to behave badly - badly enough perhaps to be dangerous to neighbouring cells and the whole organism. If the likelihood of one cell failing in this bad way is constant, then it's self-evident that it's much more likely that some will fail in this way in an organism with 10,000 cells than 10 cells. And with 10 million cells (still a small number for animals) it becomes a certainty.
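The scaling argument is easy to make concrete with a back-of-envelope calculation: if each robot (or cell) independently has some small probability p of partially failing during a mission, then the chance that at least one member of a swarm of N is partially failed is 1 - (1-p)^N. A minimal sketch - the per-robot failure probability p = 0.001 here is purely illustrative, not a measured value:

```python
# Probability that at least one of N robots is partially failed,
# assuming each robot fails independently with probability p.
# The value p = 0.001 below is an illustrative assumption only.
def p_any_partial_failure(n_robots: int, p: float) -> float:
    return 1.0 - (1.0 - p) ** n_robots

for n in (10, 100, 10_000, 10_000_000):
    prob = p_any_partial_failure(n, 0.001)
    print(f"N = {n:>10,}: P(at least one partial failure) = {prob:.4f}")
```

Even with a per-robot failure probability of only 1 in 1,000, the probability of at least one partial failure climbs from about 1% at N = 10 to near-certainty well before N reaches 10 million - which is exactly the pattern described above for cells.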

Here is the poster version of our paper.



  2. Interesting question indeed! One related issue is that multicellular organisms often have the capacity to regenerate cells and tissues (to a certain and limited extent, of course, in many cases).
    So it is not an adaptive immune system capability in this case, but just the amazing possibility of regenerating some parts that are aging, failing or dying. Here the role of stem cells is important. For example, they are found in hair and skin (including in humans), two organs known to be highly capable of regenerating themselves. I would suggest a "stem cell approach" to collective robotics... Yet Another Project (YAP) :-)

  3. I can see how even ordinary (non-malicious) mechanical failure can potentially cause real problems for swarms: for example, when robots are programmed to follow one another.

  4. The degradation in swarm reliability your poster indicates is pretty severe. I suppose one solution would be to have malfunctioning robots signal their distress in such a way that other robots in the swarm modify their behavior to avoid the problem.

In some ant species, when an ant dies, it releases oleic acid, a chemical signal that induces other ants to remove the corpse from the nest. Details vary. The explanation I've seen is that this has probably evolved to reduce pathogenic infestation of the colony. I wonder also whether having ant corpses near a colony detrimentally affects the chemical messaging systems that ants rely upon. At the very least, dead ants might block tunnels in the nest.

  5. Thank you for your comments.

José - great idea for a stem cell approach to collective robotics - you should raise this as a possible new direction for the Symbrion project.

    Jacob - you make some very good points. Finding a mechanism by which working robots can sense (and therefore take action) when a fellow robot has partially failed is really difficult. One problem is that the means by which a robot signals that it has failed could itself fail. I think the only robust way to solve this problem would be for robots to be able to observe each others' behaviours, learn what are 'normal' behaviours, and hence be able to detect when a robot's behaviours become 'abnormal'.
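To sketch what that last idea might look like: each robot could record a simple behavioural feature of its neighbours (say, average speed), learn the normal range of that feature, and flag a neighbour whose behaviour drifts too far from the learned norm. Everything in this sketch - the choice of feature, the 3-sigma threshold, the class and method names - is an illustrative assumption, not the mechanism from our paper:

```python
import statistics

# Illustrative sketch: a robot learns 'normal' neighbour behaviour from
# observations, then flags neighbours whose behaviour looks abnormal.
class BehaviourMonitor:
    def __init__(self, threshold_sigmas: float = 3.0):
        self.observations = []              # features seen while learning
        self.threshold_sigmas = threshold_sigmas

    def observe_normal(self, feature: float) -> None:
        """Record one observation of a (presumed healthy) neighbour."""
        self.observations.append(feature)

    def is_abnormal(self, feature: float) -> bool:
        """Flag a feature more than threshold_sigmas from the learned mean."""
        mean = statistics.mean(self.observations)
        sigma = statistics.stdev(self.observations)
        return abs(feature - mean) > self.threshold_sigmas * sigma

monitor = BehaviourMonitor()
for speed in (0.9, 1.0, 1.1, 1.0, 0.95, 1.05):   # healthy neighbours
    monitor.observe_normal(speed)

print(monitor.is_abnormal(1.02))   # typical speed -> False
print(monitor.is_abnormal(0.1))    # a crawling, partially failed robot -> True
```

The key property, as noted above, is that this scheme doesn't depend on the failed robot signalling anything: the judgement is made entirely from behaviour observed by its healthy neighbours, so it still works when the failure mode includes the signalling machinery itself.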