Tuesday, May 28, 2013

New Robotics and new opportunities

Here are the slides of my talk at the BARA Academic Forum for Robotics meeting Robotics: from innovation to service, on Monday 20 May 2013:



The key messages from my talk were:
  • The new wave of robotics represents a kind of Cambrian explosion: an exciting but also bewildering exploration of new forms, functions and materials. This explosion of diversity means that the New Robotics is not one kind of robot. Thus any kind of prediction about which of these will successfully evolve to become mainstream is more or less impossible.
  • There are two common myths: first, the waiting-for-AI myth: the idea that robotics is waiting for some breakthrough innovation in Artificial Intelligence, without which robotics is stuck. And second, the need-full-autonomy myth: the idea that fully autonomous robots represent some ideal end-state of the development of robotics; this is not true - instead we need robots and human-robot interfaces that will transition smoothly between tele-operation and semi-autonomy. We call this dynamic autonomy.
  • There are significant opportunities for innovation right now - underpinned by a significant head-of-steam of fundamental technologies from university R&D. I offer some examples for discussion, including companion robots, wearable robots and tele-operated robots with immersive tele-presence, perhaps making use of remote tele-haptics (although I claim no special insights). 
  • We need new and agile approaches to innovation: new kinds of research-industry partnerships and flexible, responsive pathways to commercialisation - especially campus start-ups and incubators, nurturing post-docs as next-generation entrepreneurs - and innovative modes of funding. We also need responsible and sustainable innovation.

Here are links to further information, and video clips, on the projects and robots highlighted in the talk:

Slide 10: The Cooperative Human Robot Interaction Systems (CHRIS) project
Slide 11: MOBISERV - An Integrated Intelligent Home Environment for the Provision of Health, Nutrition and Well-Being Services to Older Adults
Slide 12: Hand exoskeleton for post stroke recovery

Slide 13: Tactile Sensing - tele-haptics

Slide 14: Surgical Haptics
Slide 15: Search and Rescue - Disaster Response
Slide 16: Towards energy sustainability

Sunday, May 26, 2013

What is the single biggest obstacle preventing robotics going mainstream?

The question Robotics by Invitation asked its panel in May 2013 was:

What is the single biggest obstacle preventing robotics from going mainstream? It has been said that we are on the edge of a ‘robotic tipping point’ … but where, exactly, is this edge? And what’s holding us back?

Here is my answer:

It depends on what you mean by mainstream. For a number of major industry sectors robotics is already mainstream. In assembly-line automation, for instance; or undersea oil well maintenance and inspection. You could argue that robotics is well established as the technology of choice for planetary exploration. And in human culture too, robots are already decidedly mainstream. Make-believe robots are everywhere, from toys and children’s cartoons, to TV ads and big budget Hollywood movies. Robots are so rooted in our cultural landscape that public attitudes are, I believe, informed – or rather misinformed – primarily by fictional rather than real-world robots.

So I think robotics is already mainstream. But I understand the sentiment behind the question. In robotics we have a shared sense of a technology that has yet to reach its true potential; of a dream unfulfilled.

The question asks what is the single biggest obstacle. In my view some of the biggest immediate obstacles are not technical but human. Let me explain with an example. We already have some very capable tele-operated robots for disaster response. They are rugged, reliable and some are well field-tested. Yet why is it that robots like these are not standard equipment for fire brigades? I see no technical reason that fire tenders shouldn’t have, as standard, a compartment with a tele-operated robot – charged and ready for use when it’s needed. There are, in my view, no real technical obstacles. The problem, I think, is that such robots need to become accepted by fire departments and the fire fighters themselves, with all that this entails for training, in-use experience and revised operational procedures.

In the longer term we need to ask what it would mean for robotics to go mainstream. Would it mean everyone having a personal robot, in the same way we all now have personal computing devices? Or when all cars are driverless, perhaps? Or when everyone whose lives would be improved with a robot assistant could reasonably expect to be able to afford one? Some versions of mainstream are maybe not a good idea: I’m not sure I want to contemplate a world in which there are as many personal mobile robots as there are mobile phones now (~4.5 billion). Would this create robot smog, as Illah Nourbakhsh calls it in his brilliant new book Robot Futures?

Right now I don’t have a clear idea of what it would mean for robots to go mainstream, but one thing’s for sure: we should be thinking about what kind of sustainable, humanity-benefitting and life-enhancing mainstream robot futures we really want.


Thursday, March 28, 2013

A Crisis of Expectations

At the first UK Robot Ethics workshop on 25th March 2013, I offered - for discussion - the proposition that robotics is facing a Crisis of Expectations. And not for the first time. I argue that one possible consequence is (another) AI winter.

Here is a hypertext linked version of my paper.

Introduction

In this talk I set out the proposition that robotics is facing a crisis of expectations. As a community we face a number of expectation gaps - significant differences between what people think robots are and do, and what robots really are and really do, and (more seriously) might reasonably be expected to do in the near future. I will argue that there are three expectation gaps at work here: public expectations, press and media expectations and funder or stakeholder expectations, and that the combined effect of these amounts to a crisis of expectations. A crisis we roboticists need to be worried about.

Public Expectations

Here's a simple experiment. Ask a non-roboticist to give you an example of a robot - the first that comes into their mind. The odds are that it will be a robot from a Science Fiction movie: perhaps Terminator, R2-D2 or C-3PO or Data from Star Trek. Then ask them to name a real-world robot. Unlike your first question, which they will have answered quickly, this one generally needs a little longer. You might get an answer like "the robot in the advert that spray-paints cars" or, if you're lucky, they might know someone with a robot vacuum cleaner. So, although most people have a general idea that there are robots in factories, or helping soldiers to defuse bombs, the robots that they are most familiar with - the ones they can name and describe - are fictional.

None of this is surprising. The modern idea of a robot was, after all, conceived in Science Fiction. Czech playwright Karel Čapek first used the word Robot to describe a humanoid automaton in his play Rossum’s Universal Robots (RUR) and Isaac Asimov coined the word Robotics, in his famous short stories of the 1940s. The idea of a robot as an artificial mechanical person has become a ubiquitous fictional trope, and robots have, for half a century, been firmly rooted in our cultural landscape. We even talk about people acting robotically and, curiously, we don't mean like servants, we mean in a fashion that mimics the archetypal robot: stiff-jointed and emotionally expressionless.

Furthermore, people like robots, as anyone who has had the pleasure of giving public talks will know. People like robots because robots are, to paraphrase W. Grey Walter, An Imitation of Life. Which probably accounts for the observation that we are all, it seems, both fascinated and disturbed by robots in equal measure. I have often been asked the question "how intelligent are intelligent robots?", but there's always an unspoken rider "...and should we be worried?". Robot dystopias, from Terminator, to The Matrix or out-of-control AI like HAL in Kubrick's 2001, make compelling entertainment but undoubtedly feed the dark side of our cultural love affair with robots.

It is not surprising then, that most people's expectations about robots are wrong. Their beliefs about what real-world robots do now are hazy, and their expectations about what robots might be like in the near future are often spectacularly over-optimistic. Some think that real-world robots are just like movie robots. Others are disappointed and feel that robotics has failed to deliver the promises of the 1960s. This expectation gap - the gap between what people think robots are capable of and what they're really capable of - is not one-dimensional and is, I argue, a problem for the robotics community. It is a problem that can manifest itself directly when, for instance, public attitudes towards robots are surveyed and the results used to inform policy [1]. It makes our work as roboticists harder, because the hard problems we are working on are problems many people think are already solved, and because it creates societal expectations of robotics that cannot be met. And it is a problem because it underpins the next expectation gap I will describe.

Press and Media Expectations

You are technically literate, an engineer or scientist perhaps with a particular interest in robotics, but you've been stranded on a desert island for the past 30 years. Rescued and returned to civilisation you are keen to find out how far robotics science and technology has advanced and - rejoicing in the marvellous inventions of the Internet and its search engines - you scour the science press for robot news. Scanning the headlines you are thrilled to discover that robots are alive, and sending messages from space; robots can think or are "capable of human reasoning or learning"; robots have feelings, relate to humans, or demonstrate love, even behave ethically. Truly robots have achieved their promised potential. 

Then of course you start to dig deeper and read the science behind these stories. The truth dawns. Although the robotics you are reading about is significant work, done by very good people, the fact is - you begin to realise - that now, in 2013, robots cannot properly be said to think, feel, empathise, love or be moral agents; and certainly no robot is, in any meaningful sense, alive or sentient. Of course your disappointment is tempered by the discovery that astonishing strides have nevertheless been made.

So, robotics is subject to journalistic hype. In this respect robotics is not unique. Ben Goldacre has done much to expose bad science reporting, especially in medicine. But a robot is different to, say, a new strain of MRSA because - as I outlined above - most people think they know what a robot is. Goldacre has characterised bad science stories as falling into three categories: wacky stories, scare stories and breakthrough stories [2]. My observation is that robots in the press most often headline as either wacky or scary, even when the development is highly innovative.

I believe that robohype is a serious problem and an issue that the robotics community should worry about. The problem is this. Most people who read the press reports are lay readers who - perfectly reasonably - will not read much beyond the headline; certainly few will look for the source research. So every time a piece of robohype appears (pretty much every day) the level of mass-delusion about what robots do increases a bit more, and the expectation gap ratchets a little wider. Remember that the expectation gap is already wide. We are at the same time fascinated and fearful of robots, and this fascination feeds the hype because we want (or dread) the robofiction to become true. Which is of course one of the reasons for the hype in the first place.

Who's to blame for the robohype? Well we roboticists must share the blame. When we describe our robots and what they do we use anthropocentric words, especially when trying to explain our work to people outside the robotics community. Within the robotics and AI community we all understand that when we talk about an intelligent robot, what we mean is a robot that behaves as if it were intelligent; 'intelligent robot' is just a convenient shorthand. So when we talk to journalists we should not be too surprised when "this robot behaves, in some limited sense, as if it has feelings" gets reported as "this robot has feelings". But science journalists must, I think, also do better than this.

Funder and Stakeholder Expectations

Many of us rely on research grants to fund our work and - whether we like it or not - we have to become expert in the discipline of grantology. We pore over the small print of funding calls and craft our proposals with infinite care in an effort to persuade reviewers (also skilled grantologists) to award the coveted 'outstanding' scores. We are competing for a share of a limited resource, and the most persuasive proposals - the most adventurous, which also promise the greatest impact while matching themes defined to be of national importance - tend to succeed. Of course all of this is more or less equally true whether you are bidding for a grant in history, microbiology or robotics. But the crisis of expectations makes robotics different.

There are, I think, three factors at work. The first is the societal and cultural context - the expectation gaps I have outlined above. The second is the arguably disgraceful lack of useful and widely accepted benchmarks in robotics, which means that it is perfectly possible to spend 3 years developing a new robot that cannot be quantifiably demonstrated to be superior to comparable robots, including those that already existed when that project started. And the third is the fact that policymakers, funders and stakeholders are themselves under pressure to deliver solutions to very serious societal or economic challenges and are therefore perhaps too eager to buy into the promise of robotics. Whether naively or wittingly, we roboticists are, I believe, guilty of exploiting these three factors when we write our grant applications.

I believe we now find ourselves in an environment in which it is almost de rigueur to over-promise when writing grant applications. Only the bravest proposal writer will be brutally honest about the extreme difficulty of making significant progress in, for instance, robot cognition and admit that even a successful project, which incrementally extends the state of the art, may have only modest impact. Of course I am not suggesting that all grants over-promise and under-deliver, but I contend that many do and - because of the factors I have set out - they are rarely called to account. Clearly the danger is that sooner or later funding bodies will react by closing down robotics research initiatives and we will enter a new cycle of AI Winter.

AI has experienced "several cycles of hype, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later" [3]. The most serious AI Winter in the UK was triggered by the Lighthill Report [4] which led to a more or less complete cancellation of AI research in 1974. Are we heading for a robotics winter? Perhaps not. One positive sign is the identification of Robotics and Autonomous Systems as one of eight technologies of strategic importance to the UK [5]. Another is the apparent health of robotics funding in the EU and, in particular, Horizon 2020. But a funding winter is only the most extreme consequence of the culture of over-promising I have outlined here.

Discussion

I want to conclude this talk with some thoughts on how we, as a community, should respond to the crisis of expectations. And respond we must. We have, I believe, an ethical duty to the society we serve, as well as to ourselves, to take steps to counter the expectation gaps that I have outlined. Those steps might include:
  • At every opportunity, individually and collectively, we engage the public in honest explanation and open dialogue to raise awareness of the reality of robotics. We need to be truthful about the limitations of robots and robot intelligence, and measured with our predictions. We can show that real robots are both very different from, and much more surprising than, their fictional counterparts.
  • When we come across particularly egregious robot reporting in the press and media we make the effort to contact the reporting journalist, to explain simply and plainly the true significance of the work behind the story. 
  • Individually and collectively we endeavour to resist the pressure to over-promise in our bids and proposals, and when we review proposals or find ourselves advising on funding directions or priorities, we seek to influence towards a more measured and ultimately sustainable approach to the long term Robotics Project.
References

[1] Public Attitudes towards Robots. Special Eurobarometer 382, European Commission, 2012.

[2] Ben Goldacre. Don't dumb me down. The Guardian, 8 September 2005.
  
[3] AI Winter. Wikipedia, accessed 14 March 2013.
  
[4] James Lighthill. Artificial Intelligence: A General Survey. In Artificial Intelligence: a paper symposium, Science Research Council, 1973. Here is a BBC televised debate which followed publication of the Lighthill report, in which Donald Michie, Richard Gregory and John McCarthy challenge the report and its recommendations (1973).

Sunday, March 24, 2013

Robotics has a new kind of Cartesian Dualism, and it's just as unhelpful

I believe robotics has re-invented mind-body dualism.

At the excellent European Robotics Forum last week I attended a workshop called AI meets Robotics. The thinking behind the workshop was:
The fields of Artificial Intelligence (AI) and Robotics were strongly connected in the early days of AI, but became mostly disconnected later on. While there are several attempts at tackling them together, these attempts remain isolated points in a landscape whose overall structure and extent is not clear. Recently, it was suggested that even the otherwise successful EC program "Cognitive systems and robotics" was not entirely effective in putting together the two sides of cognitive systems and of robotics.
I couldn't agree more. Actually I would go further and suggest that robotics has a much bigger problem than we think. It's a new kind of dualism which parallels Cartesian brain-mind dualism, except in robotics, it's hardware-software dualism. And like Cartesian dualism it could prove just as unhelpful, both conceptually, and practically - in our quest to build intelligent robots.

While sitting in the workshop last week I realised rather sheepishly that I'm guilty of the same kind of dualistic thinking. In my Introduction to Robotics one of the (three) ways I define a robot is: an embodied Artificial Intelligence. And I go on to explain:
...a robot is an Artificial Intelligence (AI) with a physical body. The AI is the thing that provides the robot with its purposefulness of action, its cognition; without the AI the robot would just be a useless mechanical shell. A robot’s body is made of mechanical and electronic parts, including a microcomputer, and the AI is made by the software running in the microcomputer. The robot analogue of mind/body is software/hardware. A robot’s software – its programming – is the thing that determines how intelligently it behaves, or whether it behaves at all.
But, as I said in the workshop, we must stop thinking of cognitive robots as either "a robot body with added AI", or "an AI with added motors and sensors". Instead we need a new kind of holistic approach that explicitly seeks to avoid this lazy with added thinking.


Thursday, March 07, 2013

Extreme debugging - a tale of microcode and an oven

It's been quite a while since I debugged a computer program. Too long. Although I miss coding, the thing I miss more is the process of finding and fixing bugs in the code. Especially the really hard-to-track-down bugs that have you tearing your hair out - convinced your code cannot possibly be wrong - that something else must be the problem. But then when you track down that impossible bug, it becomes so obvious.

I wanted to write here about the most fun I've ever had debugging code. And also the most bizarre, since fixing the bugs required the use of an oven. Yes, an oven. It turned out the bugs were temperature dependent.

But first some background. The year is 1986. I'm the co-founder of a university spin-out company in Hull, England, called Metaforth Ltd. The company was set up to commercialise a stack-based computer architecture that runs the language Forth natively. In other words Forth is the equivalent of the CPU's assembly language. Our first product was a 16-bit industrial processor which we called the MF1600. It was a 2-card module, designed to plug into the (then) industry standard VME bus. One of the cards was the Central Processing Unit (CPU) - not using a microprocessor, but a set of discrete components using fast Transistor-Transistor Logic (TTL) devices. The other card provided memory, input-output interfaces, and the logic needed to interface with the VME bus.

The MF1600 was fast. It ran Forth at 6.6 Million Forth Instructions Per Second (MIPS). Sluggish of course by today's standards, but in 1986 6.6 MIPS was faster than any microprocessor. Then PCs were powered by the state-of-the-art Intel 286 with a clock frequency of 6MHz, managing around 0.9 Assembler MIPS. And because Forth instructions are higher level than assembler, the speed differential was greater still when doing real work.

Ok, now to the epic debugging...

One of our customers reported that during extended tests in an industrial rack the MF1600 was mysteriously crashing. And crashing in a way we'd not experienced before when running tried and tested code. One of their engineers noted that their test rack was running very hot, almost certainly exceeding the MF1600's upper temperature limit of 55°C. Out of spec maybe, but still not good.

So we knew the problem was temperature related. Now any experienced electronics engineer will know that electrical signals take time to get from one place to another. It's called propagation delay, and these delays are normally measured in billionths of a second (nanoseconds). And propagation delays tend to increase with temperature. Like any CPU our MF1600 relies on signals getting to the right place at the right time. And if several signals have to reach the same place at the same time then even a small extra delay in one of them can cause major problems.

On most CPUs when each basic instruction is executed, a tiny program inside the CPU actually does the work of that instruction. Those tiny programs are called microcode. Here is a blog post from several years ago where I explain what microcode is. Microcode is magic stuff - it's the place where software and hardware meet. Just like any program microcode has to be written and debugged, but uniquely - when you write microcode - you have to take account of how long it takes to process and route signals and data across the CPU: 100ns from A to B; 120ns from C to D, and so on. So if the timing in any microcode is tight (i.e. only just allows for the normal delay and leaves no margin of error), it could result in that microcode program crashing at elevated temperatures.

So, we reckoned we had one, or possibly several, microcode programs in the MF1600 CPU with 'tight' timing. The question was, how to find them.

The MF1600 CPU had around 86 (Forth) instructions, and the timing bugs could be in any of them. Now testing microcode is very difficult, and the nature of the problem made the testing problem even worse. A timing problem at elevated temperatures means that testing the microcode by single-stepping the CPU clock and tracing the signals through the CPU with a logic analyser wouldn't help at all. We needed a way to efficiently identify the buggy instructions. Then we could worry about debugging them later. What we wanted was a way to test - that is, exercise single instructions, one by one - on a running system at high temperature.

Then we remembered that we don't need all 86 instructions to run the computer. Most of them can be emulated by putting together a set of simpler instructions. So a strategy formed: (1) write a set of tiny Forth programs that replace as many of the CPU instructions as possible, (2) recompile the operating system, then (3) hope that the CPU runs ok at high temperature. If it does then (4) run the CPU in an oven and one by one test the replaced instructions.

Actually it didn't take long to do steps (1) and (2), because the Forth programs already existed to express more complex instructions as sets of simpler ones. Many Forth systems on conventional microprocessor systems were built like that. In the end we had a minimal set of about 24 instructions. So, with the operating system recompiled and installed we put the CPU into the oven and switched on the heat. The system ran perfectly (but a little slower than usual), and continued to run well above the temperature at which it had previously crashed. A real stroke of luck.

Here's an example of a simple Forth instruction to replace two values on the stack with the smaller of those values, expressed as a Forth program we call MIN:
: MIN  OVER OVER > IF SWAP THEN DROP ;
(From my 1983 book The Complete Forth).

From then on it was relatively easy to run small test programs to exercise the other 62 instructions (which were of course still there in the CPU - just not used by the operating system). A couple of days' work and we found the two rogue instructions that were crashing at temperature. They were - as you might have expected - rather complex instructions. One was (LOOP), an instruction used to implement DO loops.

Then debugging those instructions simply required studying the microcode and the big chart with all the CPU delay times, over several pots of coffee, knowing (or strongly suspecting) that what we were looking for were timing problems, called race hazards, where the data from one part of the CPU just doesn't have time to get to another part in time to be used for the next step of the microcode program. Having identified the suspect timing I then re-wrote the microcode for those instructions to leave a bit more time - by adding one clock cycle to each instruction (50ns).

Then reverting to the old non-patched operating system, it was the moment of truth. Back in the oven, cranking up the temperature, while the CPU was running test programs specifically designed to stress those particular instructions. Yes! The system didn't crash at all, over several days of running at temperature. I recall pushing the temperature above 100°C. Components on the CPU circuit board were melting, but still it didn't crash.

So that's how we debugged code with an oven.

Wednesday, February 20, 2013

Could we experience the workings of our own brains?

One of the oft quoted paradoxes of consciousness is that we are unable to observe or experience our own conscious minds at work; that we cannot be conscious of the workings of consciousness. I've always been puzzled about why this is a puzzle. After all, we don't think it odd that word processors have no insight into their inner workings (although that's a bad example because we might conceivably code a future self-aware WP and arrange for it to access its inner machinery).

Perhaps a better example is this. The act of picking up a cup of hot coffee and bringing it to your lips appears, on the face of it, to be perfectly observable. No mystery at all. We can see the joints and muscles at work, 'feel' the tactile sensing of the coffee cup, and its weight as we begin to lift it. We can even build mathematical models of the kinetics and dynamics, and (with somewhat more difficulty) make robot arms to pick up cups of coffee. But - I contend - we are kidding ourselves if we think we know what's going on in the complex sensory and neurological processes that make this act appear so effortless. The fact we can observe and even feel ourselves lifting a coffee cup gives very little real insight. And the mathematical models - and robots - are not really models of the human neurological and physiological processes at all, they are models of idealised abstractions of limbs, joints and hands.

I would argue that we have no greater insight into the workings of this (apparently straightforward) physical act, than we do of thinking itself. But again this is not surprising. The additional cognitive machinery to be able to access or experience the inner workings of any process, whether mental or physical, would be huge and (biologically) expensive. And with no apparent survival value (except perhaps for philosophers of mind), it's not surprising that such mechanisms have not evolved. They would of course require not just extra grey matter, but sensing too. It's interesting that there are no pain receptors within our brains - that's why it's perfectly possible to have brain surgery while wide awake.

But this got me thinking. Imagine that at some future time we have nanoscale sensors capable of positioning themselves throughout our brains in order to provide a very large sensor network. If each sensor is monitoring the activity of key neurons, or axons, and able to transmit its readings in real-time to an external device, then we would have the data to provide ourselves with a real-time activity image of our own brains. It could be presented visually, or perhaps sonically (or via multi-media). It might be fun for a while, but this personal brain imaging technology (let's call it iBrain) probably wouldn't provide us with much more insight or experience of our own thought processes.

But let's assume that by the time we have the nanotechnology for harmlessly inserting millions of brain nanosensors we will have also figured out the major architectural structures of the brain - crucially linking the neural scale to the macro scale. Actually, if we believe that the recently announced European and US human brain Grand Challenges will achieve what they are promising in terms of modelling and mapping human brain activity, then such an understanding should only be a few decades away. So now build those maps and structures into the personal iBrain, and we will be presented not with a vast and bewildering cloud of colours, as in the beautiful image above, but a simpler image with major highways and structures highlighted. Still complex of course, but then so are street maps of cities or countries. So the iBrain would allow you to zoom into certain regions and really see what's going on while you (say) listen to Bach (the very thing I'm doing right now).

Then we really would be able to observe our own brains at work and, just perhaps, experience the connection between brain and thought.

Thursday, February 07, 2013

euRathlon is go!


I'm very excited to be leading a new project called euRathlon - which is short for European Robotics Athlon. Up until now the project has been under wraps, but now - finally - we can go public. I'll explain a bit more about the process that led to here later in this blog post, but first - about euRathlon.

It is an EU-funded project to set up and run a series of outdoor robotics competitions. The focus is robots for search and rescue, or - more broadly - disaster response. Right now robots are not part of the standard equipment of emergency services, like fire brigades. But actually, robotics technology is coming close to the point where they could be and, in my view, should be. It seems to me that first responders should have robots as a standard part of their equipment, so that when there is a disaster robots are used as a matter of routine. euRathlon will, I hope, speed up the development and adoption of smarter robots for first responders.

The big vision of euRathlon is a competition scenario in which no single type of robot is, on its own, sufficient. Inspired by the Fukushima accident of 2011, our Grand Challenge will require teams of land, sea and flying robots to investigate the scene. Here is the project abstract:
euRathlon is a new outdoor robotics competition, which will invite teams to test the intelligence and autonomy of their robots in realistic mock emergency-response scenarios. Inspired by the 2011 Fukushima accident the euRathlon competition will require a team of land, underwater and flying robots to work together to survey the scene, collect environmental data, and identify critical hazards. Leading up to this ‘grand challenge’ in 2015 will be directly related land and underwater robot competitions in 2013 and 2014, respectively. The euRathlon competitions will be supported by annual workshops for competitors. In parallel there will be an open process of developing benchmarks to allow comparison of different robots in the euRathlon competitions. Linked public engagement activities will connect euRathlon with robotics research, industry and emergency services, as well as the general public. Attendance of spectators will be welcomed, and we hope that euRathlon events will attract considerable press and media attention. By targeting a specific and urgent need - intelligent robots for disaster-response - euRathlon will provide European robotics with a platform for challenging, extending and showcasing European cognitive robotics technologies.
Followers of this blog will know that I've been involved in the European Land Robotics Challenge (ELROB) for some years. I blogged about it in 2010: Real-world robotics reality check, and in 2007: A truly Grand Challenge. So, when the EU Framework Programme (FP7) issued a call for competition proposals late in 2011 an opportunity arose for those of us involved in ELROB to think about bidding for a new competition, building on that experience and extending our ambition. We were very fortunate to link up with the organisers of the Student Autonomous Underwater Challenge - Europe (SAUC-E), a very well regarded underwater robot competition. We then had land and sea robots covered. The final piece of the jigsaw fell into place when we were joined by our final partner, organisers of the workshop on Research, Development and Education on Unmanned Aerial Systems (RED-UAS 2011), with huge experience of aerial robots.

The euRathlon consortium was complete, and together we submitted our bid in April 2012. Following evaluation the bid was successful and then, from September to December 2012, we went into a phase of project negotiation, in which we worked out and agreed the details of the project with the EC. That process concluded successfully, and the project started on 1 January 2013.

So now, euRathlon is go!

Saturday, September 15, 2012

How to make an artificial companion you can really trust

Here are the slides of my Bristol TEDx talk:



My key take home messages from this talk are:
  • Big transition happening now from industrial robots to personal robots.
  • Current generation android robots are disappointing: they look amazing but their AI - and hence their behaviour - falls far short of matching their appearance. I call this the brain-body mismatch problem. That's the bad news.
  • The good news is that current research in personal human robot interaction (pHRI) is working on a whole range of problems that will contribute to future artificial companion robots.
  • Taking as a model the robot butler Andrew from the SF movie Bicentennial Man, I outline the capabilities that an artificial companion will need to have, and the current research addressing them - some of it at the Bristol Robotics Laboratory.
  • Some of these capabilities are blindingly obvious, like safety; others less so, like gestural communication using body language. Most shocking perhaps is that an artificial companion will need to be self-aware, in order to be safe and trustworthy.
  • A very big challenge will be putting all of these capabilities together, blending them seamlessly into one robot. This is one of the current Grand Challenges of robotics.

And here are my speaker notes, for each slide, together with links to the YouTube videos for the movie clips in the presentation. Many of these are longer clips, with commentary by our lab or project colleagues.

Slide 2
There are currently estimated to be between 8 and 10 million robots in the world, but virtually none of these are ‘personal’ robots: robots that provide us with companionship, or assistance if we’re frail or infirm, or that act as helpmates in our workplace. This is odd, when we consider that the word 'robot' was first coined 90 years ago to refer to an artificial humanoid person.

But there is a significant transition happening right now in robotics from robots like these - working by and large out of sight and out of mind in factories, or warehouses, servicing underwater oil wells, or exploring the surface of Mars, to robots working with us in the home or workplace.

What I want to do in this talk is outline the current state of play in artificial companions, and the problems that need to be solved before artificial companions become commonplace.

Slide 3
But first I want to ask what kind of robot companion you would like - taking as a cue robots from SF movies...? I would choose WALL-E!

Before I begin to outline the challenges of building an artificial companion you could trust, let me first turn to the question of robot intelligence.

Slide 4
How intelligent are intelligent robots - not SF robots, but real-world robots?

Our intuition tells us that a cat is smarter than a crocodile, which is in turn smarter than a cockroach. So we have a kind of animal intelligence scale. Where would robots fit?

Of course this scale assumes 'intelligence' is one thing that animals, humans or robots have more or less of, which is quite wrong - but let's go with it to at least make an attempt to see where robots fit.

Slide 5
Humans are, of course, at the 'top' of this scale. We are, for the time being, the most 'intelligent' entities we know.

Slide 6
Here is a robot vacuum cleaner. Some of you may have one. Where would it fit?

I would say right at the bottom. It's perhaps about as smart as a single celled organism - like an amoeba.

Slide 7
This android, an Actroid robot from Osaka University, looks as if it should be pretty smart. Perhaps toward the right of this scale...?

But looks can be deceptive. This robot is - I would estimate - little smarter than your washing machine.

Slide 8
Actroid is an illustration of what I call the brain-body mismatch problem: we can build humanoid - android - robots that look amazing, beautiful even, but their behaviours fall far, far short of what we would expect from their appearance. We can build the bodies but not the brains - and it's the problem of bridging this gap that I will focus on now.

But we should note - looking at the amazing Paro robot baby seal - that the brain-body mismatch problem is much less serious for zoomorphic robots. Robot pets. This is why robot pets are, right now, much more successful artificial companions than humanoid robots.

Slide 9
In order to give us a mental model of the kind of artificial companion robot we might be thinking about, let's choose Andrew, the butler robot from the movie Bicentennial Man. The model, perhaps, of an ideal companion robot.

Although I prefer the robotic 'early' Andrew in the movie, rather than the android Andrew that the robot becomes. I think that robots should look like robots, and not like people.

So what are the capabilities that Andrew, or a robot like Andrew, would need?

Slide 10
The first, I would strongly suggest, is that our robot needs to be mobile and very, very safe. Safety, and how to guarantee it, is a major current research problem in human robot interaction, and here we see a current generation human assistive robot used in this research.

At this point I want to point out that almost all of the robots, and projects, I'm showing you now are right here in Bristol, at the Bristol Robotics Laboratory: a joint research lab of UWE Bristol and the University of Bristol, and the largest robotics research lab in the UK.

Slide 11
Humans use body language, especially gesture, as a very important part of human-human communication, and so an effective robot companion needs to be able to understand and use human body language, or gestural communication. This is the Bristol Elumotion Robot Torso BERT, used for research in gestural communication.

Another important part of human-human communication is gaze. We unconsciously look to see where each others' eyes are looking, and look there too. On the right here we see the Bristol digital robot head used for research in human robot shared attention through gaze tracking.

This, and the next few slides, all represent research done as part of the EU funded CHRIS project, which stands for Cooperative Human Robot Interaction Systems. The BRL led the CHRIS project.

Video clip BERT explains its purpose.

Slide 12
A robot companion needs to be able to learn and recognise everyday objects, even reading the labels on those objects, as we see in the movie clips here.

Video clip BERT learns object names.

Slide 13
And of course our robot companion needs to be able to interact directly with humans, able to give objects, safely and naturally, to a human, and take objects from a human. This remains a difficult challenge - especially to assure the safety of these interactions - but here we see the iCub robot used in the CHRIS project for work on this problem.

Also important, not just to grasping objects but for any possible direct interaction with humans, is touch sensitive hands and fingertips - and here we see parallel research in the BRL on touch sensitive fingertips.

I think a very good initial test of trust for a robot companion would be to hold out your hand for a handshake. If the robot is able to recognise what you mean by the gesture, and respond with its hand, then safely and gently shake your hand - then it would have taken the first step in earning your trust in its capabilities.

Video clip of iCub robot passing objects is part of the wonderful CHRIS project video: Cooperative Human Robot Interactive System - CHRIS Project - FP7 215805 - project web http://www.chrisfp7.eu

Slide 14
Let's now turn to emotional states. An artificial companion robot needs to be able to tell when you are angry, or upset, for instance, because it may well need to moderate its behaviour accordingly and ask - what's wrong? Humans use facial expression to express emotional states, and in this movie clip we see the android expressive robot head Jules pulling faces with researcher Peter Jaeckel.

If our robot companion had an android face (even though I'm not at all sure it's necessary or a good idea) then it too would be able to express 'artificial' emotions, through facial expression.

Video clip Jules, an expressive android robot head is part of a YouTube video: Chris Melhuish, Director of the Bristol Robotics Laboratory discusses work in the area of human/robot interaction.

Slide 15
Finally I want to turn to perhaps the oddest looking robot in this roundup. Cronos, conceived and built in a project led by my old friend Owen Holland. Owen was a co-founder of the robotics lab at UWE, together with its current director Chris Melhuish, and myself.

Cronos is quite unlike the other humanoid robots we've seen because it was designed to be humanoid from the inside, not the outside. Owen Holland calls this approach 'anthropomimetic'.

Cronos is made from hand-sculpted 'bones' made from thermo-softening plastic, held together with elastic tendons and motors that act as muscles. As we can see in this movie, Cronos 'bounces' whenever it moves even just a part of its body. Cronos is light, soft and compliant. This makes Cronos very hard to control, but this is in fact the whole idea. Cronos was designed to test ideas on robots with internal models. Cronos has, inside itself, a computer simulation of itself. This means that Cronos can in a sense 'imagine' different moves and find ones that work best. It can learn to control its own body.

Cronos therefore has a degree of self-awareness that most other humanoid robots don't have.

I think this is important because a robot with an internal model and therefore able to try out different moves in its computer simulation before enacting them for real, will be safer as a result. Paradoxically therefore I think that a level of self-awareness is needed for safer, and therefore more trustworthy robots.

Video clip ECCE Humanoid Robot presented by Hugo Gravato Marques

Slide 16
The various skills and capabilities I've outlined here are almost certainly not enough for our ideal artificial companion. But suppose we could build a robot that combines all of these technologies in a single body - I think we would have moved significantly closer to an artificial companion like Andrew.

Slide 17
Thank you for listening, and thank you for affording me the opportunity to talk about the work of the many amazing roboticists I have represented - I hope accurately - in this talk.

All of the images and movies in this presentation are believed to be copyright free. If any are not then please let me know and I will remove them.


Related blog posts:
On self-aware robots: Robot know thyself
60 years of asking Can Robots Think?
On Robot ethics: The Ethical Roboticist (lecture); Discussing Asimov's laws of robotics and a draft revision; Revising Asimov: the Ethical Roboticist
Could a robot have feelings?

Tuesday, July 24, 2012

When robots start telling each other stories...

About 6 years ago the late, amazing Richard Gregory said to me, with a twinkle in his eye, "when your robots start telling each other stories, then you'll really be onto something". It was a remark with much deeper significance than I realised at the time.

Richard planted a seed that's been growing since. What I didn't fully appreciate then, but do now, is the profound importance of narrative. More than we perhaps imagine. Narrative is, I suspect, a fundamental property of both human societies and individual human beings. It may even be a universal property of all advanced societies of sentient social beings. Let me try and justify this outlandish claim. First, take human societies. We humans love to tell each other stories. Whether our stories are epic poems or love songs; stories told with sound (music), or movement (dance), or with stuff (sculpture or art); stories about what we did today, or on our holidays; stories made with images (photos, or movies); true stories or fantasies, or stories about the Universe that strive to be true (science), or very formal abstract stories told with mathematics - stories are everywhere. Arguably human culture is mostly stories.

Since humans started remembering stories and passing them on orally, and more recently with writing, we have had history: the more-or-less-true grand stories of human civilisation. Even the many artefacts of our civilisation are kinds of stories. They are embodied stories, which narrate the process by which they were designed and made; the plans and drawings which we use to formally record those designs are literally stories which tell how to arrange and join materials in space to fashion the artefact. Project plans are narratives of a different kind: they tell the story of the future steps that must be taken to achieve a goal. Computer programs are stories too. Except that they contain multiple narratives (bifurcated with branches and reiterated with loops), whose paths are determined by input data, which are related over and over at blinding speed within the computer. 

Now consider individual humans. There is a persuasive view in psychology that each of us owes our identity, our sense of self, to our personal life stories. The physical stuff that makes us, the cells of our body, are regenerated and replaced continuously, so that there's very little of you that existed 5 years ago. (I just realised the fillings in my teeth are probably the oldest part of me!) Yet you are still you. You feel like the same you 10, 20 or in my case 50 years ago - since I first became self-aware. I think that it's the lived and remembered personal narrative of our lives that provides us with the feeling, the illusion if you like, of a persistent self. This is I think why degenerative brain diseases are so terrifying. They appear to eat away that personal narrative so devastatingly that the person is ultimately lost, even while their physical body continues living.

So I was tremendously excited to be invited to a cross-disciplinary workshop on Narrative and Complex Systems at the York Centre for Complex Systems Analysis a couple of weeks ago, co-organised by York professors Richard Walsh (English) and Susan Stepney (Computer Science). For the first time I found myself in a forum in which I could share and debate ideas on narrative.

In preparing for the workshop I realised that perhaps the idea of robots telling each other stories isn't as far-fetched as it first appears. Think about a simple robot, like the e-puck. What does the story of its life consist of? Well, it is the complete history of all of its movements, including turns etc., punctuated by interactions with its environment. Because the robot and its set of behaviours is simple, then those interactions are pretty simple too. It occurred to me that it is perfectly possible for a robot to remember everything that has ever happened to it. Now place a number of these robots together, in a simple 'society' of robots, and provide them with the mechanism to exchange 'life stories' (or more likely, fragments of life stories). This mechanism is something we already developed in the Artificial Culture project - it is social learning by imitation. These robots would be telling each other stories.
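To make this a little more concrete, here is a minimal sketch of the kind of mechanism I have in mind - all of the names and behaviours are invented purely for illustration, and this is emphatically not code from the Artificial Culture project - in which each robot logs its complete life story and can pass a fragment of it to another robot, which then re-enacts it:

import random

class StorytellingRobot:
    def __init__(self, name):
        self.name = name
        self.life_story = []          # complete history of everything this robot has done

    def act(self, step):
        # Perform one randomly chosen action and remember it
        action = random.choice(["forward", "turn_left", "turn_right", "bumped_wall"])
        self.life_story.append((step, action))

    def tell_fragment(self, length=5):
        # Return a recent fragment of this robot's life story
        return [action for _, action in self.life_story[-length:]]

    def imitate(self, fragment):
        # Social learning by imitation: re-enact another robot's story fragment
        for action in fragment:
            self.life_story.append(("imitated", action))

robots = [StorytellingRobot("e-puck-1"), StorytellingRobot("e-puck-2")]
for step in range(20):                # each robot lives its own simple life...
    for robot in robots:
        robot.act(step)

fragment = robots[0].tell_fragment()  # ...then one tells a fragment of its story
robots[1].imitate(fragment)           # and the other imitates it
print(robots[0].name, "told:", fragment)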

But, I hear you ask, would these stories have any meaning? Well, to start with I think we must abandon the notion that they would necessarily mean anything to us humans. After all, these are robots telling each other stories. Ok, so would the stories mean anything to the robots themselves, especially robots with limited 'cognition'? Now we are in the interesting territory of semiotics, or - to be more accurate - robosemiotics. What, for instance, would one robot's story signify to another? That signification would I think be the meaning. But I think to go any further we would need to do the robot experiment I have outlined here.

And what would be the point of my proposed robot experiment? It is, I suggest, this:
to explore, with an abstract but embodied model, the relationship between the narrative self and shared narrative, i.e. culture.
By doing this experiment would we be, as Richard Gregory suggested, really onto something?

Wednesday, June 27, 2012

Robot know thyself

How can we measure self-awareness in artificial systems?

This was a question that came up during a meeting of the Awareness project advisory board two weeks ago at Edinburgh Napier University. Awareness is a project bringing together researchers and projects interested in self-awareness in autonomic systems. In philosophy and psychology self-awareness refers to the ability of an animal to recognise itself as an individual, separate from other individuals and the environment. Self-awareness in humans is, arguably, synonymous with sentience. A few other animals, notably elephants, dolphins and some apes appear to demonstrate self-awareness. I think far more species may well experience self-awareness - but in ways that are impossible for us to discern.

In artificial systems it seems we need a new and broader definition of self-awareness - but what that definition is remains an open question. Defining artificial self-awareness as self-recognition assumes a very high level of cognition, equivalent to sentience perhaps. But we have no idea how to build sentient systems, which suggests we should not set the bar so high. And lower levels of self-awareness may be hugely useful* and interesting - as well as more achievable in the near-term.

Let's start by thinking about what a minimally self-aware system would be like. Think of a robot able to monitor its own battery level. One could argue that, technically, that robot has some minimal self-awareness, but I think that to qualify as 'self-aware' the robot would also need some mechanism to react appropriately when its battery level falls below a certain level. In other words, a behaviour linked to its internal self-sensing. It could be as simple as switching on a battery-low warning LED, or as complex as suspending its current activity to go and find a battery charging station.
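As a concrete illustration of that battery example, here is a minimal sketch (the class, threshold and behaviours are all made up for illustration, not taken from any real robot controller): the robot monitors one internal property - its battery level - and switches behaviour when it falls too low.

class MinimallySelfAwareRobot:
    LOW_BATTERY = 0.2                 # illustrative threshold: 20% charge

    def __init__(self):
        self.battery_level = 1.0
        self.warning_led = False

    def sense_self(self):
        # Internal self-sensing: each step of activity drains the battery a little
        self.battery_level -= 0.05
        return self.battery_level

    def step(self):
        if self.sense_self() < self.LOW_BATTERY:
            self.warning_led = True   # the simple reaction: a battery-low warning
            return "suspend task and seek charging station"   # or the more complex one
        return "carry on with current task"

robot = MinimallySelfAwareRobot()
for _ in range(20):
    print(round(robot.battery_level, 2), robot.step())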

So this suggests a definition for minimal self-awareness:
A self-aware system is one that can monitor some internal property and react, with an appropriate behaviour, when that property changes.
So how would we measure this kind of self-awareness? Well if we know the internal mechanism (because we designed it), then it's trivial to declare the system as (minimally) self-aware. But what if we don't? Then we have to observe the system's behaviour and deduce that it must be self-aware; it must be reasonably safe to assume an animal visits the watering hole to drink because of some internal sensing of 'thirst'.



But it seems to me that we cannot invent some universal test for self-awareness that encompasses all self-aware systems, from the minimal to the sentient; a kind of universal mirror test. Of course the mirror test is itself unsatisfactory. For a start it only works for animals (or robots) with vision and - in the case of animals - with a reasonably unambiguous behavioural response that suggests "it's me!" recognition.

And it would be trivially easy to equip a robot with a camera and image processing software that compares the camera image with a (mirror) image of itself, then lights an LED, or makes a sound (or something) to indicate "that's me!" if there's a match. Put the robot in front of a mirror and the robot will signal "that's me!". Does that make the robot self-aware? This thought experiment shows why we should be sceptical about claims of robots that pass the mirror test (although some work in this direction is certainly interesting). It also demonstrates that, just as in the minimally self-aware robot case, we need to examine the internal mechanisms.
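For what it's worth, here is a sketch of just how trivial such a robot would be - the toy 'images' and helper function are invented purely for illustration, standing in for a camera and real image processing:

STORED_SELF_IMAGE = [120, 118, 200, 35]   # a toy stand-in for a stored image of the robot

def images_match(camera_image, stored_image, tolerance=10):
    # Crude pixel-by-pixel comparison, standing in for real image processing
    return max(abs(a - b) for a, b in zip(camera_image, stored_image)) <= tolerance

def mirror_test_response(camera_image):
    if images_match(camera_image, STORED_SELF_IMAGE):
        return "that's me!"                # light an LED, make a sound...
    return "something else"

print(mirror_test_response([121, 117, 202, 33]))   # robot looking in a mirror
print(mirror_test_response([10, 240, 90, 180]))    # robot looking at something else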

So where does this leave us? It seems to me that self-awareness is, like intelligence, not one thing that animals or robots have more or less of. And it follows, again like intelligence, there cannot be one test for self-awareness, either at the minimal or the sentient ends of the self-awareness spectrum.



Related posts:
Machine Intelligence: fake or real?
How Intelligent are Intelligent Robots?
Could a robot have feelings?

* In the comments below Andrey Pozhogin asks the question: What are the benefits of being a self-aware robot? Will it do its job better for selfish reasons?

A minimal level of self-awareness, illustrated by my example of a robot able to sense its own battery level and stop what it's doing to go and find a recharging station when the battery level drops below a certain level, has obvious utility. But what about higher levels of self-awareness? A robot that is able to sense that parts of itself are failing and either adapt its behaviour to compensate, or fail safely, is clearly a robot we're likely to trust more than a robot with no such internal fault detection. In short, it's a safer robot because of this self-awareness.

But these robots, able to respond appropriately to internal changes (to battery level, or faults) are still essentially reactive. A higher level of artificial self-awareness can be achieved by providing a robot with an internal model of itself. Having an internal model (which mirrors the status of the real robot as self-sensed, i.e. it's a continuously updating self-model) allows a level of predictive control. By running its self-model inside a simulation of its environment the robot can then try out different actions and test the likely outcomes of alternative actions. (As an aside, this robot would be a Popperian creature of Dennett's Tower of Generate and Test - see my blog post here.) By assessing the outcomes of each possible action for its safety the robot would be able to choose the action most likely to be the safest. A self-model represents, I think, a higher level of self-awareness with significant potential for greater safety and trustworthiness in autonomous robots.
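Here, purely for illustration, is a sketch of that generate-and-test loop - the self-model, candidate actions and safety score are all invented, and a real internal model would of course be a physics-based simulation of the robot in its environment:

def internal_model(state, action):
    # A very crude self-model: predict the robot's clearance from the nearest obstacle
    predicted = dict(state)
    if action == "forward":
        predicted["obstacle_distance"] -= 0.5
    elif action == "turn_left":
        predicted["obstacle_distance"] += 0.2
    # "stop" leaves the predicted state unchanged
    return predicted

def safety_score(predicted_state):
    return predicted_state["obstacle_distance"]   # more clearance = safer

def choose_safest_action(state, candidate_actions):
    # Generate and test: 'imagine' each action in the self-model, keep the safest outcome
    return max(candidate_actions,
               key=lambda action: safety_score(internal_model(state, action)))

state = {"obstacle_distance": 0.6}                # self-sensed: an obstacle 0.6 m ahead
print(choose_safest_action(state, ["forward", "turn_left", "stop"]))   # -> turn_left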

To answer the second part of Andrey's question: the robot would do its job better not for selfish reasons, but for self-aware reasons.
(postscript added 4 July 2012)

Tuesday, June 19, 2012

60 years of asking Can Robots Think?

Last week at the Cheltenham Science Festival we debated the question Can robots think? It's not a new question. Here, for instance, is a wonderful interview from 1961 on the very same question. So, the question hasn't changed. Has the answer?


Well, it's interesting to note that I and my fellow panelists, Murray Shanahan and Lilian Edwards, were much more cautious last week in Cheltenham than our illustrious predecessors - both on the question of whether present-day robots can think (answer: no), and on whether robots (or computers) will be able to think any time soon (answer: again, no).

The obvious conclusion is that 50 years of Artificial Intelligence research has failed. But I think that isn't true. AI has delivered some remarkable advances, like natural speech recognition and synthesis, chess programs, conversational AI (chatbots) and lots of 'behind the scenes' AI (of the sort that figures out your preferences and annoyingly presents personalised advertising on web pages). But what is undoubtedly true is that Weisner, Selfridge and Shannon were being very optimistic (after all, AI had only been conceived a decade earlier by Alan Turing). Today, perhaps chastened and humbled, most researchers take a much more cautious approach to these kinds of claims.

But I think there are more complex reasons.

One is that we now take a much stricter view of what we mean by 'thinking'. As I explained last week in Cheltenham, it's relatively easy to make a robot that behaves as if it is thinking (and, I'm afraid, also relatively easy to figure out that the robot is not really thinking). So, it seems that a simulation of thinking is not good enough*. We're now looking for the real thing.

That leads to the second reason. It seems that we are not much closer to understanding how cognition in animals and humans works than we were 60 years ago. Actually, that's unfair. There have been tremendous advances in cognitive neuroscience but - as far as I can tell - those advances have brought us little closer to being able to engineer thinking in artificial systems. That's because it's a very very hard problem. And, to add further complication, it remains a philosophical as well as a scientific problem.

In Cheltenham Murray Shanahan brilliantly explained that there are three approaches to solving the problem. The first is what we might call a behaviourist approach: don't worry about what thinking is, just try and make a machine that behaves as if it's thinking. The second is the computational modelling approach: try and construct, from first principles, a theoretical model of how thinking should work, then implement that. And third, the emulate real brains approach: scan real brains in sufficiently fine detail and then build a high fidelity model with all the same connections, etc, in a very large computer. In principle, the second and third approaches should produce real thinking.

What I find particularly interesting is that the first of these 3 approaches is more or less the one adopted by the conversational AI programs entered for the Loebner prize competition. Running annually since 1992, the Loebner prize is based on the test for determining if machines can think, famously suggested by Alan Turing in 1950 and now known as the Turing test. To paraphrase: if a human cannot tell whether she is conversing with a machine or another human - and it's a machine - then that machine must be judged to be thinking. I strongly recommend reading Turing's beautifully argued 1950 paper.

No chatbot has yet claimed the $100,000 first prize, but I suspect that we will see a winner sooner or later (personally I think it's a shame Apple hasn't entered Siri). But the naysayers will still argue that the winner is not really thinking (despite passing the Turing test). And I think I would agree with them. My view is that a conversational AI program, however convincing, remains an example of 'narrow' AI. Like a chess program a chatbot is designed to do just one kind of thinking: textual conversation. I believe that true artificial thinking ('general' AI) requires a body.

And hence a new kind of Turing test: for an embodied AI, AKA robot.

And this brings me back to Murray's 3 approaches. My view is that the 3rd approach 'emulate real brains' is at best utterly impractical because it would mean emulating the whole organism (of course, in any event, your brain isn't just the 1300 or so grammes of meat in your head, it's the whole of your nervous system). And, ultimately, I think that the 1st (behaviourist - which is kind of approaching the problem from the outside in) and 2nd (computational modelling - which is an inside out approach) will converge.

So when, eventually, the first thinking robot passes the (as yet undefined) Turing test for robots I don't think it will matter very much whether the robot is behaving as if it's thinking - or actually is, for reasons of its internal architecture, thinking. Like Turing, I think it's the test that matters.


*Personally I think that a good enough behavioural simulation will be just fine. After all, an aeroplane is - in some sense - a simulation of avian flight but no one would doubt that it is also actually flying.

Tuesday, May 08, 2012

The Symbrion swarm-organism lifecycle

I've blogged before about the Symbrion project: an ambitious 5-year project to build a swarm of independently mobile autonomous robots that have the ability - when required - to self-assemble into 3D 'multi-cellular' artificial organisms. The organisms can then - if necessary - disassemble back into their constituent individual robots. The idea is that robots in the system can choose when to operate in swarm mode, which might be the optimal strategy for searching a wide area, or in organism mode, to - for instance - negotiate an obstacle that cannot be overcome by a single robot. We can envisage future search and rescue robots that work like this - as imagined on this ITN news clip from 2008.

Our main contribution to the project to date has been the design of algorithms for autonomous self-assembly and disassembly - that is the process of transition between swarm and organism. This video shows the latest version of the algorithm developed by my colleague Dr Wenguo Liu. It is demonstrated with 2 Active Wheel robots (developed at the University of Stuttgart - who also lead the Symbrion project) and 1 Backbone robot (developed at the Karlsruhe Institute of Technology).


Let me explain how this works. The docking faces of the robots have infra-red (IR) transmitters and receivers. When a 'seed' robot - in this case the Active Wheel robot on the left - decides to form an organism with a particular body plan, it broadcasts a 'recruitment' signal from its IR transmitters, with the 'type' of robot it needs to recruit - in this case a Backbone robot. The IR transmitters then act as a beacon which the responding robot uses to approach the seed robot, and the same IR system is then used for final alignment prior to physical docking.
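Purely to illustrate the homing step, here is a toy sketch of beacon following (the differential-drive details and all the method names are invented; this is not a description of the real robots' controllers):

```python
def home_on_ir_beacon(robot):
    """Toy IR beacon homing: steer towards the stronger signal, and stop when close
    enough to begin final docking alignment. All methods are hypothetical stand-ins."""
    while not robot.close_enough_to_dock():
        left = robot.read_ir_intensity("left")
        right = robot.read_ir_intensity("right")
        turn = 0.5 * (right - left)      # positive means the beacon lies to the right
        forward = min(left, right)       # advance faster when both receivers see the beacon
        robot.set_wheel_speeds(forward + turn, forward - turn)   # (left wheel, right wheel)
    robot.begin_docking_alignment()
```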

Once docked, wired (ethernet) communication is established between the robots, and the seed robot communicates the body-plan for the organism to the newly recruited Backbone robot. Only then does the Backbone robot know what kind of organism it is now part of, and where in the organism it is. In this case the Backbone robot determines that the partially formed organism needs another Active Wheel, and it recruits this robot using the same IR system. After the third robot has docked it too discovers the overall body plan and where in the organism it is. In this case it is the final robot to be recruited and the organism self-assembly is complete.
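The overall recruitment sequence can be summarised in a much-simplified sketch (for illustration only - the names are invented, and the real Symbrion controllers are considerably more involved):

```python
def grow_organism(seed_robot, body_plan):
    """Grow an organism from a seed robot, one recruit at a time.

    body_plan -- the robot 'type' required at each position in the organism;
                 position 0 is the seed itself.
    """
    organism = [seed_robot]
    recruiter = seed_robot
    for position, needed_type in enumerate(body_plan[1:], start=1):
        recruiter.broadcast_ir_recruitment(needed_type)      # IR beacon: "I need a robot of this type"
        newcomer = recruiter.wait_for_docking()               # responder homes in and physically docks
        newcomer.receive_over_ethernet(body_plan, position)   # only now does it learn the plan and its place in it
        organism.append(newcomer)
        recruiter = newcomer                                   # the newest member recruits the next one
    return organism
```

In the example in the video the body plan is, in effect, [Active Wheel, Backbone, Active Wheel], and after the third robot docks the assembly is complete.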

Using control coordinated via the wired ethernet intranet across its three constituent robots, the organism then makes the transition from 2D planar form to 3D, which - in this case - means that the 2 Active Wheel robots activate their hinge motors to bend and lift the Backbone robot off the floor. The 3D organism is now complete and can move as a single unit. The process is completely reversible, and the complete 'lifecycle' from swarm -> organism -> swarm is shown in this video clip.

It is important to stress that the whole process is completely distributed and autonomous. These robots are not being remotely controlled, nor is there a central computer coordinating their actions. Each robot has the same controller, and determines its own actions on the basis of sensed IR signals, or data received over the wired ethernet. The only external signal sent was to tell the first robot to become the 'seed' robot to grow the whole organism. Later in the project we will extend the algorithm so that a robot will decide, itself, when to become a seed and which organism to grow.

The Symbrion system is not bio-mimetic in the sense that there are (as far as I know) no examples in nature of cells that spontaneously assemble to become functioning multi-cellular organisms and vice-versa. It is, however, bio-mimetic in a different sense. The robots, while in swarm mode, are analogous to stem cells. The process of self-assembly is analogous to morphogenesis, and - during morphogenesis - the process by which robot 'cells' discover their position, role and function within the organism is analogous to cell-differentiation.

While what I have described in this blog post is a milestone following several years of demanding engineering effort by a very talented team of roboticists, some of the ultimate goals of the project are scientific rather than technical. One is to address the question - using the Symbrion system as an embodied model - of under what environmental conditions it is better to remain as single cells, and under what conditions to collaborate symbiotically as multi-celled organisms. It seems far-fetched, but perhaps we could model - in some abstract sense - the conditions that might have triggered the major transition in biological evolution, some 1000 million years ago, which saw the emergence of simple multi-cellular forms.

Saturday, April 21, 2012

What's wrong with Consumer Electronics?

When I was a boy the term consumer electronics didn't exist. Then the sum total of household electronics was a wireless, a radiogram and a telephone; pretty much everyone had a wireless, fewer a radiogram, and on our (lower middle-class) street perhaps one in five houses had a telephone. (In an emergency it was normal to go round to the neighbour with the phone.) In the whole of my childhood we only ever had the same wireless set and gramophone, and both looked more like furniture than electronics, housed in handsome polished wooden cabinets. Of course it was their inner workings, with the warm yellow glow of the thermionic valves, that fascinated me, got me into trouble when I took them to pieces, and led to my chosen career in electronics.

How things have changed. Now most middle-class households have more computing power than existed in the world 50 years ago. Multiple TVs, mobile phones, computing devices (laptops, games consoles, iPads, Kindles and the like) and the supporting infrastructure of wireless routers, printer, and backup storage, are now normal. And most of this stuff will be less than five years old. If you're anything like me the Hi-Fi system will be the oldest bit of kit you own (unless you ditched it for the iPod and docking station). Of course this gear is wonderful. I often find myself shocked by the awesomeness of everyday technology. And understanding how it all works only serves to deepen my sense of awe. But, I'm also profoundly worried - and offended too - by the way we consume our electronics.

What offends me is this: modern solid-state electronics is unbelievably reliable - what's wrong with consumer electronics is nothing - yet we treat this magical stuff, fashioned of glass, as stuff to be consumed then thrown away. Think about the last time you replaced a gadget because the old one had worn out or become unrepairable. Hard, isn't it? If you still possessed it, the mobile phone you had 15 years ago would - I'd wager - still work perfectly. I have a cupboard here at home with all manner of obsolete kit. A dial-up modem, for instance, circa 1993. It still works fine - but there's nothing to dial into. The fact is that we are compelled to replace perfectly good nearly-new electronics with the latest model either because the old stuff is rendered obsolete (because it's no longer compatible with the current generation o/s, applications or infrastructure - or is simply unsupported), or, worse still, because the latest kit has 'must have' features or capabilities not present on the old.

I would like to see a shift in consumer electronics back to a model in which gadgets are designed to be repaired, and consumers are encouraged to replace or upgrade every ten years or more, not every year. What I'm suggesting is, of course, exactly the opposite of what's happening now. Current devices are becoming less repairable, with batteries you can't replace and designs that even skilled technicians find difficult to take apart without risk of damage. The latest iPad, for example, was given a very low repairability score (2/10) by iFixit.

And the business model most electronics companies operate is fixated on the assumption that profit, and growth, can only be achieved through very short product life cycles. But not all of our stuff is like this. We don't treat our houses, or gardens, or dining room tables, or central heating systems, or any number of things as consumer goods, yet the companies that build and sell houses, or dining room tables, or landscape gardens, etc, still turn a profit. Why can't electronics companies find a business model that treats electronic devices more like houses and less like breakfast cereal?

I don't think consumer electronics should be consumed at all.

Wednesday, January 11, 2012

New experiments in the new lab

Last week my PhD student Mehmet started a new series of experiments in embodied behavioural evolution. The exciting new step is that we've now moved to active imitation. In our previous trials robot-robot imitation has been passive; in other words, when robot B imitates robot A, robot A receives no feedback at all - not even that its action has been imitated. With active imitation, robot A receives feedback - it receives information on which of its behaviours has been imitated, how well the behaviour has been imitated, and by whom.

The switch from passive to active imitation has required a major software rewrite, both for the robots' control code and for the infrastructure. We made the considered decision that the feedback mechanism - unlike the imitation itself - is not embodied. In other words the system infrastructure both figures out which robot has imitated which (not trivial to do) and radios the feedback to the robots themselves. The reason for this decision is that we want to see how that feedback can be used to - for instance - reinforce particular behaviours so that we can model the idea that agents are more likely to re-enact behaviours that have been imitated by other agents, over those that haven't. We are not trying to model active social learning (in which a learner watches a teacher, then the teacher watches the learner to judge how well they've learned, and so on) so we avoid the additional complexity of embodied feedback.

In the first tests with the new active imitation setup we've introduced a simple change to the behaviour selection mechanism. Every robot has a memory with all of its initialised or learned behaviours. Each one of those behaviours now has a counter that gets incremented each time that particular behaviour is imitated. A robot selects which of its stored behaviours to enact, at random, but with probabilities that are determined by the counter values so that a higher count behaviour is more likely to be selected. But, as I've discovered peering at the data generated from the initial runs, it's not at all straightforward to figure out what's going on and - most importantly - what it means. It's the hermeneutic challenge again.
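For the curious, the selection mechanism amounts to something like the following sketch (illustrative only; in particular the add-one smoothing, which gives never-imitated behaviours some chance of being selected, is my simplification rather than a description of the actual implementation):

```python
import random

def select_behaviour(behaviours, imitation_counts):
    """Pick a stored behaviour at random, weighted by how often each has been imitated."""
    # Add 1 to each count so that never-imitated behaviours still stand some chance
    # (an assumption of this sketch, not necessarily what the real controller does).
    weights = [count + 1 for count in imitation_counts]
    return random.choices(behaviours, weights=weights, k=1)[0]

def on_imitation_feedback(imitation_counts, behaviour_index):
    # Radioed feedback from the infrastructure: this behaviour was imitated by another robot.
    imitation_counts[behaviour_index] += 1
```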

So, for now here's a picture of the experimental setup in our shiny new* lab. Results to follow!

*In November 2011 the Bristol Robotics Lab moved from its old location, in the DuPont building, to T block on the extended Coldharbour Lane campus.

Thursday, January 05, 2012

Philip Larkin - a recollection

My first encounter with Philip Larkin was as a fresher in October 1974. As university librarian it was Larkin's annual duty to give an introductory lecture to the year's fresh intake. I recall seeing this tall, portly man with bottle-top glasses in a bank manager's suit. Not at all how I imagined a poet should look. (My Dad had told me about Larkin when I first announced I'd chosen to go to Hull University, otherwise I'm sure I'd have taken no notice at all.) To this audience of several hundred 18-year olds - more interested in eyeing each other for fanciableness than listening to some bloke in a suit - Larkin declared with a plummy, resonant voice and measured delivery, as if it was a line from Shakespeare, "...educated people should know three things: what words mean, where places are and when things happened".

My first encounter with his poetry was several months later. It was a vacation and I was at home with Mum and Dad, younger sister and brother. Larkin was to be featured in a TV documentary and the whole family gathered expectantly round the set at the appointed time. Then the first stanza was read "They fuck you up, your mum and dad/They may not mean to, but they do/They fill you with the faults they had/And add some extra, just for you." Cue acute, embarrassed silence. My Mum, I think, said "Well I don't think much of this" and without another word the TV was switched off. We didn't discuss this (in fact I don't think we ever discussed it). It was some years later that I got to know (and love) Larkin's poetry and to reflect on the idiot producer who chose to start that TV programme with arguably his worst poem, just for the shock value of the word fuck on the BBC (this was 1975). It's not that I'm prudish about the sentiment expressed, it's just not a good poem.

Fast forward about six years. I've accepted a junior lecturing post while finishing off my PhD, and find myself a member of the science faculty board. As librarian, Larkin is an ex-officio member and I recall him contributing his opinions to the board's debates. I've long forgotten the subject of those debates but I vividly recall the manner of Larkin's contributions. He would stand, as if addressing parliament, and speak what I can only describe as Perfect English. His articulation, diction and metre were actor-perfect. If you had written down exactly what he said (and punctuation would have been easy, for he paused in commas and semi-colons) you would get perfect prose; each word exactly the right word, each phrase perfectly turned. I was, at the time, going out with a girl who worked in the library and she told me Larkin's memoranda were the same: each a miniature essay, a perfectly formed construction of letters.

I never knew Larkin. Nobody did. He was a distant, unapproachable man and, by all accounts, not at all likeable. The closest he and I came to conversation was exchanging nods across the lunchtime staff common-room bar. I find it satisfyingly ironic therefore that a man so apparently detached and unemotional should have written what is, for me, the finest love poem of the 20th Century: An Arundel Tomb (1).

The poem starts: Side by side, their faces blurred, the earl and countess lie in stone, and then in the second verse the beautiful observation: Such plainness of the pre-baroque hardly involves the eye, until it meets his left-hand gauntlet, still clasped empty in the other; and one sees, with a sharp tender shock, his hand withdrawn, holding her hand. I love the words sharp tender shock; then in the next verse: Such faithfulness in effigy... A sculptor’s sweet commissioned grace.

In the fifth verse Larkin constructs a spine tingling evocation of the long passage of time: Rigidly they persisted, linked, through lengths and breadths of time. Snow fell, undated. Light each summer thronged the glass. A bright litter of birdcalls strewed the same bone-riddled ground. And then the remarkable conclusion of the poem: The stone fidelity they hardly meant has come to be their final blazon, and to prove our almost-instinct almost true: what will survive of us is love.

Forgive me for removing the line breaks in these extracts from the poem. In doing so I want to illustrate my observation that, in Larkin's writing, there is little distance between prose and poetry. When reading his poems I've reflected often on why it is that a man with such an apparently effortless ability to produce perfect English published so little, and agonised so much over his writing. I now realise that he didn't have a problem with writing, but with life. "The object of writing," Larkin once said, "is to show life as it is, and if you don't see it like that you're in trouble, not life."


(1) from The Whitsun Weddings, Faber and Faber, 1964. And here is both the full text of An Arundel Tomb and Larkin reading the poem.