I've just spent four days at the beautiful Schloss Dagstuhl in south-west Germany, attending a seminar on Artificial Immune Systems. The Dagstuhl is a remarkable concept – a place dedicated to residential retreats on advanced topics in computer science. Everything you need is there to discuss, think and learn. And learn is what I just did – to the extent that by lunchtime today, when the seminar closed, I felt like the small boy who asks to be excused from class because “miss, my brain is full”.
Knowing more or less nothing about artificial immune systems, it was, for me, like sitting in class, except that my teachers were world experts in the subject. A real privilege. So, what are artificial immune systems? They are essentially computer systems inspired by and modelled on biological immune systems. AISs are, I learned, both engineering systems for detecting, and perhaps repairing and recovering from, faults in artificial systems (in effect, system maintenance), and scientific systems for modelling and/or visualising natural immune systems.
I learned that real immune systems are not just one system but several complex and inter-related systems, the biology of which is not fully understood. Thus, interestingly, AISs are modelled on (and models of) our best understanding so far of real immune systems. This of course means that biologists almost certainly have something to gain from engaging with the AIS community. (There are interesting parallels here with my experience of biologists working with roboticists in Swarm Intelligence.)
The first thing I learned was about the lines of defence against external attack on bodies. The first is physical: the skin. If something gets past this then bodies apply a brute force approach by, for instance, raising the temperature. If that doesn’t work then more complex mechanisms in the innate immune system kick in: white blood cells that attempt to ‘eat’ the invaders. But more sophisticated pathogens require a response from the last line of defence: the adaptive immune system. Here the immune system ‘learns’ how to neutralise a new pathogen through a process called clonal selection. I was astonished to learn that clonal selection actually ‘evolves’ a response. Amazing – embodied evolution going on super-fast inside your body within the adaptive immune system, taking just a couple of days to complete. Now as a roboticist I’m very interested in embodied evolution – and by coincidence I attended a workshop on that very subject just a month ago. But I’d always assumed that embodied evolution was biologically implausible – an engineering trick, if you like. But no – there it is, going on inside adaptive immune systems. (As an aside, it appears that we don’t understand the processes that prompted the evolution of adaptive immune systems, some 400 million years ago, in jawed vertebrates.)
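For the programmers among you, here is a toy Python sketch of a clonal-selection-style algorithm (in the spirit of CLONALG). Nothing in it comes from the seminar: the ‘antigen’ is just a target bit pattern, ‘affinity’ is the number of matching bits, and all the parameters are invented purely for illustration.

```python
# A toy clonal selection sketch (illustrative only, not from the seminar).
# The 'antigen' is a target bit pattern; 'affinity' counts matching bits.
import random

ANTIGEN = [random.randint(0, 1) for _ in range(32)]   # the pathogen's signature

def affinity(antibody):
    """Higher = better match to the antigen."""
    return sum(a == b for a, b in zip(antibody, ANTIGEN))

def hypermutate(antibody, rate):
    """Flip each bit with a probability that falls as affinity rises."""
    return [1 - bit if random.random() < rate else bit for bit in antibody]

# Start with a random repertoire of antibodies.
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]

for generation in range(50):
    population.sort(key=affinity, reverse=True)
    clones = []
    for rank, antibody in enumerate(population[:5]):
        n_clones = 10 // (rank + 1)                # fitter antibodies clone more...
        rate = 0.5 / (affinity(antibody) + 1)      # ...and mutate less
        clones += [hypermutate(antibody, rate) for _ in range(n_clones)]
    # Keep the best of old plus new, topped up with fresh random antibodies.
    population = sorted(population + clones, key=affinity, reverse=True)[:15]
    population += [[random.randint(0, 1) for _ in range(32)] for _ in range(5)]

print("best affinity found:", max(affinity(ab) for ab in population))
```

Even this crude version usually homes in on a near-perfect match within a few dozen generations – a hint of why the real process can produce a tailored response within days rather than generations.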
Of course, while listening to this fascinating stuff I was all the while wondering what this might mean for robotics. For instance, what hazards would require the equivalent of an innate immune response in robots, and which would need an adaptive response? And what exactly is the robot equivalent of an ‘infection’? Would a robot, for instance, get a temperature if it was fighting an infection? Quite possibly yes – the additional computation needed for the robot to figure out how to counter the hazard might indeed need more energy, so the robot would have to slow down its motors to direct its battery power instead to its computer. Sounds familiar, doesn’t it: slowing down and getting a temperature!
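As a back-of-the-envelope illustration of that ‘robot fever’ idea – and it really is no more than that, with every number invented – here is how a simple power-budget rule might trade motor speed against the extra computation of diagnosing a hazard:

```python
# A toy power-budget rule for the 'robot fever' idea above.
# All figures are invented for illustration.

BATTERY_BUDGET_W = 10.0    # total power the battery can deliver
BASE_COMPUTE_W = 2.0       # normal housekeeping computation

def motor_allowance(diagnostic_load_w):
    """Power left for the motors once computation has taken its share."""
    return max(0.0, BATTERY_BUDGET_W - BASE_COMPUTE_W - diagnostic_load_w)

for load in (0.0, 2.5, 5.0, 7.5):        # escalating 'immune' computation
    motors = motor_allowance(load)
    speed = motors / (BATTERY_BUDGET_W - BASE_COMPUTE_W)
    print(f"diagnostic load {load:4.1f} W -> motors {motors:4.1f} W "
          f"({speed:.0%} of normal speed)")
```

The more effort the robot spends ‘fighting the infection’, the slower it moves: slowing down and getting a temperature, just as I said.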
Swarm robots with faults is something I’ve been worrying about for a while and, based on the work I blogged about here, at the Dagstuhl I presented my hunch that – while a swarm of 100 robots might work ok – swarms of 100,000 robots definitely wouldn’t without something very much like an immune system. That led to some very interesting discussions about the feasibility of co-evolving swarm function and swarm immunity. And, given that we think we’re beginning to understand how to embed and embody evolution across a swarm of robots, this is all beginning to look surprisingly feasible.
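To give a flavour of what ‘something very much like an immune system’ might look like in a swarm, here is a sketch of negative-selection-style anomaly detection, one of the standard AIS techniques: each robot keeps a set of detectors that deliberately do not match normal (‘self’) behaviour, and flags a neighbour whose observed behaviour one of those detectors does match. Everything here – the behaviour features, thresholds and data – is invented for illustration.

```python
# A toy negative-selection detector for spotting a misbehaving swarm member.
# Features, thresholds and data are all invented for illustration.
import random

def behaviour_sample(faulty=False):
    """(speed, turn rate) observed for a neighbour; faults drift out of range."""
    if faulty:
        return (random.uniform(0.0, 0.1), random.uniform(2.0, 3.0))
    return (random.uniform(0.4, 0.6), random.uniform(-0.5, 0.5))

def matches(detector, sample, radius=0.3):
    return all(abs(d - s) < radius for d, s in zip(detector, sample))

# 'Self': a record of normal behaviour gathered while the swarm is healthy.
self_samples = [behaviour_sample() for _ in range(200)]

# Negative selection: keep only candidate detectors that do NOT match 'self'.
detectors = []
while len(detectors) < 100:
    candidate = (random.uniform(0.0, 1.0), random.uniform(-3.0, 3.0))
    if not any(matches(candidate, s) for s in self_samples):
        detectors.append(candidate)

def looks_faulty(sample):
    return any(matches(d, sample) for d in detectors)

print("healthy neighbour flagged:", looks_faulty(behaviour_sample()))
print("faulty neighbour flagged: ", looks_faulty(behaviour_sample(faulty=True)))
```

With enough detectors, a robot whose behaviour drifts well outside the normal envelope gets flagged with high probability, without anyone having to enumerate the possible faults in advance – which seems to be exactly the property that matters once the swarm has 100,000 members rather than 100.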
Wednesday, April 13, 2011
Why Slow Science may well be A Very Good Thing
A few weeks ago I spent a very enjoyable Saturday at the Northern Arts and Science Network annual conference, Dialogues, in Leeds. The morning sessions included two outstanding keynote talks: the first from Julian Kiverstein on synthetic synaesthesia, and the second from David James on technology-enhanced sports. Significant food for thought in both talks. Then Jenny Tennant Jackson and I ran an afternoon workshop on the Artificial Culture project (aided and abetted by 8 e-puck robots), which generated lots of questions and interest.
But apart from singing the praises of NASN and the conference, I want to reflect here on something that emerged from the panel discussion at the end of the day. There was quite a bit of debate around the question of open research (in both science and the arts) and public engagement. In recent years I've become a strong advocate of a unified open science + public engagement approach. In other words, doing research transparently - ideally using an open notebook approach, so that the whole of the process as well as the experimental outcomes are open to all - combined with proactive public engagement in (hopefully) a virtuous circle*.
So there I was, pontificating about the merits of this approach in the panel discussion at NASN, when someone asked rather pointedly "but isn't that all going to slow down the process of advancing science?" Without thinking I retorted "Good! If the cost of openness is slowing down science then that has to be a price worth paying." The questioner was clearly somewhat taken aback, and to you, sir, if you should read this blog, I offer sincere apologies for the abruptness of my reply. In fact I owe you not only apologies but thanks, for that exchange has really got me thinking about Slow Science.
So, having reflected a little, here's why I think slowing down science might not be as crazy as it sounds.
First, the ethical dimension. Science or engineering research that is worth doing, i.e. is important and has value, has - by definition - an ethical dimension. The ethical and societal impact of science and engineering research needs to be acknowledged and understood by researchers themselves, then widely and transparently debated, and not left to bad science journalism, science denialism or corporate interests. This takes time.
Next, unintended consequences. High-impact research always has implications, and the larger the impact, the greater the potential for unintended consequences (no matter how well intentioned the work). Of course, negative unintended consequences (scientific, economic, philosophical) almost always end up becoming a problem for society - so they too should be properly considered and discussed during a project's lifetime.
Finally, the open science and public engagement dimension. I would argue that the time and effort costs of building open science and public engagement into research projects will reap manifold dividends in the long run. First, take the open science aspect: openness - while it can take some courage to actually do - can surely only bring long-term benefits in increased trust (in the work of the project, and in science in general). Second, running an integrated open science and public engagement approach alongside the research brings direct educational benefit to the next generation. And the additional real cost (in time and effort) has to be much less than it would be for an isolated project seeking the same educational outcomes.
Critics will of course argue that Slow Science would be uncompetitive. In a limited sense they would be right, but it seems to me important not to confuse the commercialisation of spin-out products with the much longer time span of research, nor to allow the tail of exploitation to wag the dog of research. Big science that takes decades can still spin out lots of wealth-creating stuff along the way. Another criticism of Slow Science concerns pressing problems that desperately need solutions. This is harder to counter but - perhaps - the unintended consequences argument might hold sway.
Slow Science: a Good Thing, or not?
*science communicator and PhD student Ann Grand is researching exactly this subject and has already published several papers on it.