Saturday, December 07, 2013

Soft Robotics in Space

Space robotics is understandably conservative. When the cost of putting a robot on a planet, moon or asteroid runs into billions, we need to be sure the technology will work. And with very long project lifetimes - spanning decades from engineering design to on-planet robot exploration - it's a long, hard road from the research lab to real off-world use for new advances in robotics.

This context was very much in mind when I gave a talk on Advanced Robotics for Space at the Appleton Space Conference last week. I used this great opportunity to outline a few examples of new research directions in robotics for the European space community, and suggest how these could benefit future planetary robots. I had just 20 minutes, so I couldn't do much more than show a few video clips. The four new directions I highlighted are:
  1. Soft Robotics: soft actuation and soft sensing
  2. Robots with Internal Models, for self-repair
  3. Self-assembling swarm robots, for adaptive/evolvable morphology
  4. Autonomous 3D collective robot construction
In this post I want to talk about just the first of these: soft robotics, and why I think we should seriously consider soft robotics in space. Soft robotics - as the name implies - is concerned with making robots soft and compliant. It's a new discipline which already has its own journal, but not yet a Wikipedia page. Soft robots would be soft on the inside as well as the outside - so even the fur-covered Paro robot is not a soft robot. Soft robotics research is about developing new soft, smart materials for both actuation and sensing (ideally within the same material). Soft robots would have the huge advantage, over conventional stiff metal and plastic robots, of being light and, well, soft. For robots designed to interact with humans that's obviously a big advantage because it makes the robot intrinsically much safer. 

Soft robotics research is still at the exploratory stage, so there are not yet preferred materials and approaches. In our lab we are exploring several avenues: one is electroactive polymers (EAPs) for artificial muscles; another is the bio-mimetic 3D printed flexible artificial whisker. Another approach makes use of shape memory alloys to actuate octopus-like limbs: here is a very nice YouTube movie from the EU OCTOPUS project. And perhaps one of the most unlikely but very promising approaches: exploiting fluid-solid phase changes in ground coffee to make a soft gripper: the Jaeger-Lipson coffee balloon gripper.

Let me elaborate a little more on the coffee balloon gripper. It is based on the simple observation that when you buy vacuum-packed ground coffee the pack is completely solid, yet as soon as you cut open the pack and release the vacuum the ground coffee returns to its flowing, fluid state. Heinrich Jaeger, Hod Lipson and co-workers put ground coffee into a latex balloon and then, by controlling the vacuum via a pump, demonstrated a gripper able to safely pick up and hold more or less any object. Here is a YouTube video showing this remarkable ability.

Almost any planetary exploration robot is likely to need a gripper to pick up rock samples, either for analysis or for return to Earth. Conventional robot grippers are complex mechanical devices that need very precise control in order to reliably pick up irregularly shaped and sized objects. That control is mechanically and computationally expensive, and problematic because of time delays if it has to be performed remotely from Earth. Something like the Jaeger-Lipson coffee balloon gripper would - I think - provide a much better solution. This soft gripper avoids the hard control and computation because the soft material adapts itself to the thing it is gripping; it's a great example of what we call morphological computation.
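
To give a feel for just how much the material simplifies the control problem, here is a minimal, purely illustrative Python sketch of a jamming-gripper grasp routine. The pump driver and its vent()/evacuate() methods are hypothetical placeholders, not any real API: the point is that the only control variable is the vacuum, and the granular material does the rest.

```python
import time

class JammingGripper:
    """Granular-jamming gripper: compliant when vented, rigid under vacuum."""

    def __init__(self, pump):
        self.pump = pump            # hypothetical vacuum-pump driver with vent()/evacuate()

    def grasp(self, settle_time=1.0):
        self.pump.vent()            # balloon is soft; it conforms to whatever it is pressed onto
        time.sleep(settle_time)     # allow the membrane to mould around the object
        self.pump.evacuate()        # grains jam together; the gripper becomes rigid and holds

    def release(self):
        self.pump.vent()            # back to the fluid state; the object is freed
```

Compare this with the joint-level sensing and force control a conventional multi-fingered gripper would need: here the 'computation' of how to conform to the rock is done by the material itself.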

The second example I suggested is inspired by work in our lab on bio-inspired touch sensing. Colleagues have developed a device called TACTIP - a soft, flexible touch sensor which provides robots (or robot fingers) with very sensitive touch sensing, capable of sensing both shape and texture. Importantly the sensing is done inside TACTIP, so the outside surface of the sensor can sustain damage without loss of sensing. Here is a very nice YouTube report on the TACTIP project.

It's easy to see that giving planetary robots touch sensing could be useful, but there's another possibility I outlined: the potential to allow Earth scientists to feel what the robot's sensor is feeling. PhD student Callum Roke and his co-workers developed a system based on TACTIP for what we call remote tele-haptics. Here is a video clip demonstrating the idea:



Imagine being able to run your fingers across the surface of Mars, or directly feel the texture of a piece of asteroid rock without actually being there.

Tuesday, November 26, 2013

Noisy imitation speeds up group learning

Broadly speaking there are two kinds of learning: individual learning and social learning. Individual learning means learning something entirely on your own, without reference to anyone else who might have learned the same thing before. The flip side of individual learning is social learning, which means learning from someone else. We humans are pretty good at both individual and social learning, although we very rarely have to truly work something out from first principles. Most of what we learn, we learn from teachers, parents, grandparents and countless others. We learn everything, from making chicken soup to flying an aeroplane, by watching others who already know the recipe (or wrote it down), or have mastered the skill. For modern humans I reckon it’s pretty hard to think of anything we have truly learned on our own; maybe learning to control our own bodies as babies, leading to crawling and walking, is a candidate for individual learning (although as babies we are surrounded by others who already know how to walk – would we walk at all if everyone else got around on all fours?). Learning to ride a bicycle is perhaps also one of those things no-one can really teach you – although it would be interesting to compare someone who has never seen a bicycle, or anyone riding one, in their lives with those (most of us) who see others riding bicycles long before climbing on one ourselves.

In robotics we are very interested in both kinds of learning, and methods for programming robots that can learn are well known. A method for individual learning is called reinforcement learning (RL). It’s a laborious process in which the robot tries out lots and lots of actions and gets feedback on whether each action helps or hinders the robot in getting closer to its goal – actions that help are reinforced and actions that hinder are weakened, so the robot is more or less likely to try them again; it’s a bit like shouting “warm, hot, cold, colder…” in a hide-and-seek game. It’s fair to say that RL in robotics is pretty slow; robots are not good individual learners, but that's because, in general, they have no prior knowledge. As a fair comparison, think of how long it would take you to learn how to make fire from first principles if you had no idea that getting something hot may, if you have the right materials and are persistent, create fire, or that rubbing things together can make them hot. Roboticists are also very interested in developing robots that can learn socially, especially by imitation. Robots that you can program by showing them what to do (called programming by demonstration) clearly have a big advantage over robots that have to be explicitly programmed for each new skill.
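
For readers who would like the flavour of reinforcement learning in code, here is a minimal tabular Q-learning sketch. It is not the algorithm from the paper, and the env object (with reset(), step() and actions) is a placeholder environment interface; but it shows the try-an-action, get-feedback, adjust-the-estimate loop described above.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Minimal tabular Q-learning over a placeholder environment interface."""
    Q = defaultdict(float)                         # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # explore occasionally, otherwise exploit the best action found so far
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # reinforce (or weaken) the action according to the feedback received
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```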

Within the Artificial Culture project, PhD student (now Dr) Mehmet Erbas developed a new way of combining social learning by imitation with individual reinforcement learning, and the paper setting out the method, together with results from simulation and real robots, has been published in the journal Adaptive Behavior. Let me explain the experiments with real robots, and what we have learned from them.

Here's our experiment. We have two robots - called e-pucks. The inset shows a closeup. Each robot has its own compartment and must - using individual (reinforcement) learning - learn how to navigate from the top right-hand corner to the bottom left-hand corner of its compartment. Learning this way is slow, taking hours. But in this experiment the robots also have the ability to learn socially, by watching each other. Every so often one of the robots will stop its individual learning and drive itself out of its own compartment, to the small opening at the bottom left of the other compartment. There it will stop and simply watch the other robot while it is learning, for a few minutes. Using a movement imitation algorithm the watching robot will (socially) learn a fragment of what the other robot is doing, then combine this knowledge into what it is individually learning. The robot then runs back to its own compartment and resumes its individual learning. We call the combination of social and individual learning 'imitation enhanced learning'.
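
The paper sets out the actual algorithm; purely as a rough illustration of the idea, the watching robot might fold an observed fragment of behaviour into its own value table along these lines (the fragment format, bonus and learning rate here are invented for illustration, not the method we used).

```python
def incorporate_observed_fragment(Q, fragment, bonus=1.0, alpha=0.1):
    """Blend a socially observed fragment of behaviour into the learner's own Q-table.

    `fragment` is a short list of (state, action) pairs inferred by watching the
    other robot; each observed pair is nudged upwards as if it had been rewarded.
    """
    for state, action in fragment:
        # treat 'the other robot did this' as weak evidence that the action is good
        Q[(state, action)] = Q.get((state, action), 0.0) + alpha * bonus
    return Q
```

Individual learning then continues as before, so copied moves that turn out to be unhelpful are gradually unlearned, while helpful ones are reinforced further.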

In order to test the effectiveness of our new imitation enhanced learning algorithm we first run the experiment with the imitation turned off, so the robots learn only individually. This gives us a baseline for comparison. We then run two experiments with imitation enhanced learning. In the first we wait until one robot has completed its individual learning, so it is an 'expert'; the other robot then learns - using its combination of individual learning and social learning from the expert. Not surprisingly learning this way is faster.

This graph shows individual learning only as the solid black line, and imitation-enhanced learning from an expert as the dashed line. In both cases learning is more or less complete when the graphs transition from vertical to horizontal. We see that individual learning takes around 360 minutes (6 hours). With the benefit of an expert to watch, learning time drops to around 60 minutes.




The second experiment is even more interesting. Here we start the two robots at the same time, so that both are equally inexpert. Now you might think it wouldn't help at all, but remarkably each robot learns faster when it can observe, from time to time, the other inexpert robot, than when learning entirely on its own. As the graph below shows, the speedup isn't as dramatic - but imitation enhanced learning is still faster.

Think of it this way. It's like two novice cooks, neither of whom knows how to make chicken soup. Each is trying to figure it out by trial and error but, from time to time, they can watch each other. Even though it's pretty likely that each will copy some things that lead to worse chicken soup, on average and over time, each hapless cook will learn how to make chicken soup a bit faster than if they were learning entirely alone.



In the paper we analyse what's going on when one robot imitates part of the semi-learned sequence of moves by the other. And here we see something completely unexpected. Because the robots imitate each other imperfectly - when one robot watches another and then tries to copy what it saw, the copy will not be perfect - from time to time, one inexpert robot will miscopy the other inexpert robot and the miscopy, by chance, helps it to learn. To use the chicken soup analogy: it's as if you are spying on the other cook - you try to copy what they're doing but get it wrong and, by accident, end up with better chicken soup.
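
In code terms, imperfect imitation can be caricatured as corrupting the observed fragment before it is folded in; occasionally the corrupted copy happens to be better than anything either robot had found so far. Again, this is a toy illustration rather than the paper's model.

```python
import random

def noisy_copy(fragment, actions, error_rate=0.2):
    """Return an imperfect copy of an observed (state, action) fragment:
    with probability `error_rate` the copied action is replaced by a random one."""
    return [(s, random.choice(actions) if random.random() < error_rate else a)
            for s, a in fragment]
```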

This is deeply interesting because it suggests that when we learn in groups, making mistakes - noisy social learning - can actually speed up learning for each individual and for the group as a whole.

Full reference:
Mehmet D Erbas, Alan FT Winfield, and Larry Bull (2013), Embodied imitation-enhanced reinforcement learning in multi-agent systems, Adaptive Behavior. Published online 29 August 2013. Download pdf (final draft)

Wednesday, October 30, 2013

Ethical Robots: some technical and ethical challenges

Here are the slides of my keynote at last week's excellent EUCog meeting: Social and Ethical Aspects of Cognitive Systems. And the talk itself is here, on YouTube.

I've been talking about robot ethics for several years now, but that's mostly been about how we roboticists must be responsible and mindful of the societal impact of our creations. Two years ago I wrote - in my Very Short Introduction to Robotics - that robots cannot be ethical. Since then I've completely changed my mind*. I now think there is a way of making a robot that is at least minimally ethical. It's a huge technical challenge which, in turn, raises new ethical questions. For instance: if we can build ethical robots, should we? Must we..? Would we have an ethical duty to do so? After all, the alternative would be to build amoral robots. Or, would building ethical robots create a new set of ethical problems? An ethical Pandora's box.




The talk was in three parts.

Part 1: here I outline why and how roboticists must be ethical. This is essentially a recap of previous talks. I start with the societal context: the frustrating reality that even when we meet to discuss robot ethics this can be misinterpreted as scientists fearing a revolt of killer robots. This kind of media reaction is just one part of three linked expectation gaps, in what I characterise as a crisis of expectations. I then outline a few ethical problems in robotics - just as examples. Here I argue it's important to link safe and ethical behaviour - something that I return to later. Then I recap the five draft principles of robotics.

Part 2: here I ask the question: what if we could make ethical robots? I outline new thinking which brings together the idea of robots with internal models, with Dennett's Tower of Generate and Test, as a way of making robots that can predict the consequences of their own actions. I then outline a generic control architecture for robot safety, even in unpredictable environments. The important thing about this approach is that the robot can generate next possible actions, test them in its internal model, and evaluate the safety consequences of each possible action. The unsafe actions are then inhibited - and the robot controller determines which of the remaining safe actions is chosen, using its usual action-selection mechanism. I then argue that it is surprisingly easy to extend this architecture for ethical behaviour, to allow the robot to predict which of its actions would minimise harm to a human in its environment. This appears to represent an implementation of Asimov's 1st and 3rd laws. I outline the significant technical challenges that would need to be overcome to make this work.
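
As a very rough sketch of the shape of that generate-test-inhibit loop (all of the function and attribute names below are hypothetical placeholders, not the actual implementation):

```python
def choose_action(robot, internal_model, humans_nearby):
    """Generate candidate actions, simulate each in the internal model, inhibit the
    unsafe ones, then let the normal action-selection mechanism pick from the rest."""
    candidates = robot.generate_next_actions()
    permitted = []
    for action in candidates:
        outcome = internal_model.simulate(robot.state, action)   # predict consequences
        if outcome.robot_harmed:                                  # protect itself (3rd law)
            continue
        if humans_nearby and outcome.human_harmed:                # protect the human (1st law)
            continue
        permitted.append(action)
    # the robot's usual controller chooses among the remaining safe/ethical actions
    return robot.action_selection(permitted)
```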

But, assuming such a robot could be built, how ethical would it be? I suggest that, with only a subset of Asimovian ethics, it probably wouldn't satisfy an ethicist or moral philosopher. But nevertheless, I argue there's a good chance that such a minimally ethical robot could help to increase users' trust in the robot.

Part 3: in the final part of the talk I conclude with some ethical questions. The first is: if we could build an ethical robot, are we ethically compelled to do so? Some argue that we have an ethical duty to try and build moral machines. I agree. But the counter argument, my second ethical question, is: are there ethical hazards? Are we opening a kind of ethical Pandora's box, by building robots that might have an implicit claim to rights, or responsibilities? I don't mean that such a robot would ask for rights, but instead that, because it has some moral agency, we might think it should be accorded rights. I conclude that we should try and build ethical robots. The benefits, I think, far outweigh any ethical hazards, which in any event can, I think, be minimised.


*It was not so much an epiphany, as a slow conversion from sceptic to believer. I have my long-term collaborator Michael Fisher to thank for doggedly arguing with me that it was worth thinking deeply about how to build ethical robots.

Sunday, October 20, 2013

A Close(ish) Encounter with Voyager 2

It is summer 1985. I'm visiting Caltech with colleague and PhD supervisor Rod Goodman. Rod has just been appointed in the Electrical Engineering Department at Caltech, and I'm still on a high from finishing my PhD in Information Theory. Exciting times.

Rod and I are invited to visit the Jet Propulsion Laboratory (JPL). It's my second visit to JPL, but it would turn into probably the most inspirational afternoon of my life. Let me explain.

After the tour one of the good folks who were showing us round asked if I would like to meet some of the post-docs in the lab. As he put it: the fancy control room with the big wall screens is really for the senators and congressmen - this is where the real work gets done. So, while Rod went off to discuss stuff with his new Faculty colleagues I spent a couple of hours in a back room lab, with a Caltech post-doc working on - as he put it - a summer project. I'm ashamed to say I don't recall his name so I'll call him Josh. Very nice guy, a real Southern Californian dude.

Now, at this point, I should explain that there was a real buzz at JPL. Voyager 2, which had already more than met its mission objectives, was now on course to Uranus and due to arrive in January 1986. It was clear that there was a significant amount of work in planning for that event: the first ever opportunity to take a close look at the seventh planet.

So, Josh is sitting at a bench and in front of him is a well-used Apple II computer. And behind the Apple II is a small display screen so old that the phosphor is burned. This used to happen with CRT computer screens - it's the reason screen savers were invented. Beside the computer are notebooks and manuals, including, prominently, a piece of graph paper with a half-completed plot. Josh then starts to explain: one of the cameras on Voyager 2 has (they think) a tiny piece of grit* in the camera turntable - the mechanism that allows the camera to be panned. This space grit means that the turntable is not moving as freely as it should. It's obviously extremely important that when Voyager gets to Uranus they are able to point the cameras accurately, so Josh's project is to figure out how much torque is (now) needed to move the camera turntable to any desired position. In other words, to re-calibrate the camera's controller.

At this point I stop Josh. Let me get this straight: there's a spacecraft further from Earth, and flying faster, than any man-made object ever, and your summer project is to do experiments with one of its cameras, using your Apple II computer. Josh: yeah, that's right.

Josh then explains the process. He constructs a data packet on his Apple II, containing the control commands to address the camera's turntable motor and to instruct the motor to drive the turntable. As soon as he's happy that the data packet is correct, he then sends it - via the RS232 connection at the back of his Apple II - to a JPL computer (which, I guess, would be a mainframe). That computer then, in turn, puts Josh's data packet together with others, from other engineers and scientists also working on Voyager 2, after - I assume - carefully validating the correctness of those commands. Then the composite data packet is sent to the Deep Space Network (DSN) to be transmitted, via one of the DSN's big radio telescopes, to Voyager 2.

Then, some time later, the same data packet is received by Voyager 2, decoded and de-constructed, and said camera turntable moves a little bit. The camera then sends some feedback back to Earth - again via a composite data packet - the number of degrees the turntable moved. So a day or two later, via a mind-bogglingly complex process involving several radio telescopes and some very heavy duty error-correcting codes, the camera-turntable feedback arrives back at Josh's desktop Apple II with the burned-phosphor screen. This is where the graph paper comes in. Josh picks up his pencil and plots another point on his camera-turntable calibration graph. He then repeats the process until the graph is complete. It clearly worked because six months later Voyager 2 produced remarkable images of Uranus and its moons.

This was, without doubt, the most fantastic lab experiment I'd ever seen. From his humble Apple II in Pasadena Josh was doing tests on a camera rig, on a spacecraft, about 1.7 billion miles away. For a Thunderbirds kid, I really was living in the future. And being a space-nerd I already had some idea of the engineering involved in NASA's deep space missions, but that afternoon in 1985 really brought home to me the extraordinary systems engineering that made these missions possible. Given the very long project lifetimes - Voyager 2 was designed in the early 1970s, launched in 1977, and is still returning valuable science today - its engineers had to design for the long haul: missions that would extend over several generations. Systems design like this requires genius, farsightedness and technical risk taking. Engineering that still inspires me today.

*it later transpired that the problem was depleted lubricant, not space grit.

Monday, September 30, 2013

Don't build robots, build robot systems

Why aren't there more intelligent mobile robots in real world applications? It's a good question, and one I'm often asked. The answer I give most often is that it's because we're still looking for that game changing killer app - the robotics equivalent of the spreadsheet for PCs. Sometimes I place the blame on a not-quite-yet-solved technical deficit - like poor sensing, or sensor fusion, or embedded AI; in other words, our intelligent robots are not yet smart enough. Or I might cite a not-fully-developed capability, like robots not being able to cope with unpredictable (i.e. human) environments, or the fact that we can't yet assure that our robots are safe and dependable.

Last week at euRathlon 2013 I realised that these answers are all wrong. Actually that would be giving myself credit where none is due. The answer to the question - why aren't there more intelligent mobile robots in real world applications? - was pointed out by several of the presenters at the euRathlon 2013 workshop, but most notably by our keynote speaker Shinji Kawatsuma, from the Japan Atomic Energy Agency (JAEA). In an outstanding talk Kawatsuma explained, with disarming frankness, that although his team had robots, they were poorly prepared to use those robots at the Fukushima Daiichi NPP, because the systems for deployment were not in place. The robots are not enough. Just as important are procedures and protocols for robot deployment in an emergency; mobile infrastructure, including vehicles to bring the robots to the emergency, capable - as he vividly explained - of negotiating a road system choked with debris (from the tsunami) and strained with other traffic (rescue workers and evacuees); integration with other emergency services; and, above all, robot operators trained, practised and confident enough to guide the robots through whatever hazards they will face in the epicentre of the disaster.

In summing up the lessons learned from robots at Fukushima, Shinji Kawatsuma offered this advice - actually it was more of a heartfelt plea: don't build robots, build robot systems. And, he stressed, those systems must include operator training programmes. It was a powerful message for all of us at the workshop. Intelligent robots are endlessly fascinating machines, with all kinds of difficult design challenges, so it's not surprising that our attention is focussed on the robots themselves. But we need to understand that real world robots are like movie stars - who (despite what they might think) wouldn't be movie stars at all without the supporting cast, camera and sound crews, writers, composers, special effects people and countless other departments that make up the film industry. Take the Mars rover Curiosity - an A-list movie star of robotics. Curiosity could not do its job without an extraordinary supporting infrastructure that, firstly, delivered her safely to the surface of Mars and, secondly, allows Curiosity's operators to direct her planetary science exploration.

Curiosity: an A-list movie star robot (NASA/JPL-Caltech/MSSS), with a huge supporting cast of science and technology.

So, to return to my question: why aren't there more intelligent mobile robots in real world applications? The answer is plain. It's because without supporting systems - infrastructure and skilled operators, integrated and designed to meet the real world need - a robot, regardless of how innovative and intelligent it is, will never make the transition from the lab to the real world. Without those systems that robot will remain no more than a talented but undiscovered actor.


Hans-Arthur Marsiske, The use of robots in Fukushima: Shinji Kawatsuma Interview, Heise online, 25 September 2013 (in German).
K Nagatani, S Kiribayashi, Y Okada, K Otake, K Yoshida, S Tadokoro, T Nishimura, T Yoshida, E Koyanagi, M Fukushima and S Kawatsuma, Emergency response to the nuclear accident at the Fukushima Daiichi Nuclear Power Plants using mobile rescue robots, Journal of Field Robotics, 30 (1), 44-63, 2013.

Friday, September 20, 2013

The Triangle of Life: Evolving Robots in Real-time and Real-space

At the excellent European Conference on Artificial Life (ECAL) a couple of weeks ago we presented a paper called The Triangle of Life: Evolving Robots in Real-time and Real-space (this links to the paper in the online proceedings).

As the presenting co-author I gave a one-slide one-minute pitch for the work, and here is that slide.



The paper proposes a new conceptual framework for evolving robots, which we call the Triangle of Life. Let me outline what this means. But first a quick intro to evolutionary robotics. In my Very Short Introduction to Robotics I wrote:
One of the most fascinating developments in robotics research in the last 20 years is evolutionary robotics. Evolutionary robotics is a new way of designing robots. It uses an automated process based on Darwinian artificial selection to create new robot designs. Selective breeding, as practised in human agriculture to create new improved varieties of crops, or farm animals, is (at least for now) impossible for real robots. Instead, evolutionary robotics makes use of an abstract version of artificial selection in which most of the process occurs within a computer. This abstract process is called a genetic algorithm. In evolutionary robotics we represent the robot that we want to evolve, with an artificial genome. Rather like DNA, our artificial genome contains a sequence of symbols but, unlike DNA, each symbol represents (or ‘codes for’) some part of the robot. In evolutionary robotics we rarely evolve every single part of a robot.

A robot consists of a physical body with an embedded control system - normally a microprocessor running control software. Without that control software the robot just wouldn't do anything - it would be the robot equivalent of a physical body without a mind. In biological evolution bodies and minds co-evolved (although the dynamics of that co-evolutionary process are complex and interesting). But in 20 years or so of evolutionary robotics the vast majority of work has focussed only on evolving the robot's controller. In other words we take a pre-designed robot body, then use the genetic algorithm to discover a good controller for that particular body. There has been little work on body-brain co-evolution, and even less work on evolving real robot bodies. In fact, we can count the number of projects that have evolved new physical robot bodies on the fingers of one hand*. Here is one of those very rare projects: the remarkable Golem project of Hod Lipson and Jordan Pollack.
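
As a concrete, if much simplified, illustration of what 'evolving only the controller' looks like in practice, here is a bare-bones genetic algorithm over a fixed-length genome of controller parameters. The evaluate function is a placeholder for running the controller encoded by a genome on the pre-designed body - in simulation or on the real robot - and returning a fitness score.

```python
import random

def evolve_controller(evaluate, genome_length=20, pop_size=30, generations=100,
                      mutation_rate=0.05):
    """Minimal genetic algorithm over real-valued controller genomes.

    `evaluate(genome)` is a placeholder: it runs the controller encoded by the
    genome on the (fixed, pre-designed) robot body and returns a fitness score.
    """
    population = [[random.uniform(-1, 1) for _ in range(genome_length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[:pop_size // 2]                     # artificial selection
        children = []
        while len(children) < pop_size - len(parents):
            mum, dad = random.sample(parents, 2)
            cut = random.randrange(1, genome_length)         # one-point crossover
            child = mum[:cut] + dad[cut:]
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                     for g in child]                         # mutation
            children.append(child)
        population = parents + children
    return max(population, key=evaluate)
```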

This is surprising. When we think of biological evolution and the origin of species, our first thoughts are of the evolution and diversity of body shapes and structures. In the same way, the thing about a robot that immediately captures our attention is its physical body. And bodies are not just vessels for minds. As Rolf Pfeifer and Josh Bongard explain in their terrific book How the Body Shapes the Way We Think, minds depend crucially on bodies. The old dogma of Artificial Intelligence, that we can simply design an artificial brain without any regard to its embodiment, is wrong. True artificial intelligence will only be achieved by co-evolving physical bodies with their artificial minds.

In this paper we are arguing for a radical new approach in which the whole process of co-evolving robot bodies and their controllers takes place in real space and real time. And, as the title makes clear, we are also advocating an open-ended cycle of artificial life, in which every part of the robots' artificial life cycle takes place in real space and real time, from artificial conception, through to artificial birth, artificial infancy and development, then artificial maturity and mating. Of course these words are metaphors: the artificial processes are at best a crude analogue. But let me stress that no-one has demonstrated this. The examples that we give in the paper, from the EU Symbrion project, are just fragments of the process - not joined up in reality. And the Symbrion example is very constrained because of the modular robotics approach, which means that the building blocks of these 'multi-cellular' robot organisms - the 'cells' - are themselves quite chunky robots; we have only 3 cell types and only a handful of cells for evolution to work with. Evolving robots in real space and real time is ferociously hard but, as the paper concludes: Our proposed artificial life system could be used to investigate novel evolutionary processes, not so much to model biological evolution – life as it is, but instead to study life as it could be.
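
Schematically, and only as a caricature, the kind of life cycle the paper argues for might look like the loop below; every callable passed in stands for a physical, real-time process (fabrication, infant learning, mate selection and so on) rather than anything that happens inside a computer.

```python
def triangle_of_life(seed_genomes, birth, develop, is_mature, select_mates, reproduce,
                     cycles=10):
    """Schematic caricature of an open-ended, embodied artificial life cycle.
    All arguments except `seed_genomes` and `cycles` are placeholder callables."""
    population = [birth(g) for g in seed_genomes]         # morphogenesis: genome -> physical robot
    for _ in range(cycles):
        for robot in population:
            develop(robot)                                # artificial infancy: learn to use its body
        adults = [r for r in population if is_mature(r)]  # only viable robots reach maturity
        for mum, dad in select_mates(adults):             # artificial mating
            child_genome = reproduce(mum, dad)            # artificial conception
            population.append(birth(child_genome))        # birth of the next generation
    return population
```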

Full reference:

Eiben AE, Bredeche N, Hoogendoorn M, Stradner J, Timmis J, Tyrrell A, and Winfield A (2013), The Triangle of Life: Evolving Robots in Real-time and Real-space, pp 1056-1063 in Advances in Artificial Life, ECAL 2013, proc. Twelfth European Conference on the Synthesis and Simulation of Living Systems, eds. Liò P, Miglino O, Nicosia G, Nolfi S and Pavone M, MIT Press.


*I was surprised to discover this when searching the literature for a new book chapter I'm co-authoring with Jon Timmis on Evolvable Robot Hardware.

Related blog posts:
New experiments in embodied evolutionary swarm robotics
New video of 20 evolving e-pucks

Monday, August 26, 2013

Memories of Skyrim

From time to time I like to visit Skyrim. I've been going there for about 2 years - I completed the main quest a year or so ago, and since then go back to undertake a side quest or, more often than not, just wander around admiring the scenery. Of course my character is now reasonably levelled-up so wandering around is not quite as perilous as it used to be; interesting rather than terrifying.

But the thing I've noticed in recent months is that I have memories of being in places in Skyrim that, subjectively, feel completely indistinguishable from memories of being in real places in the real world. In other words the quality and character of the memory - the sense of having really been there - is no different when I recall, say, standing on the porch of Dragonsreach Hall looking south toward the Throat of the World mountain, as when I remember looking down toward the Cumberland basin from the south footpath of the Clifton Suspension Bridge.

For me this is a new experience. I've been playing (and sometimes coding) video games since that meant batting a pixelated ball from one side of a low-res monochrome screen to the other on home-built 8-bit micros in the 1980s. I have fond memories of playing the first-generation Alone in the Dark (~1993) on a 386 PC with my 5-year-old son - but my memories are of the experience of playing the game with Tom, not of actually being in that haunted house. More recent games on the Xbox 360, with graphics I would have had difficulty imagining 20 years ago, have not had the same effect of creating such compellingly real memories for me.

So what is it about Skyrim that is making these memories feel so real? I think there are several factors. The first is that the scenery is so breathtakingly beautiful, which means you really do want to just stop and stare for a while. Second, and equally important I think, is that this is not some imagined alien landscape. It is decidedly Earth - a cold northerly Earth certainly, but the fells and mountains, the lakes and forests, the grasses and especially the trees, are realised so accurately you can identify whether it's a birch or an oak. Third, the landscape is in constant motion - the grasses sway in the wind, the brooks gurgle and splash and insects flutter. Wait a little longer and you realise the day is passing from afternoon to dusk, the sky turns golden in the sunset, then to night. A star-spangled and moon-crossed night sky then rewards the patient (and the brave - this is a wild and dangerous place), followed by a glorious sunrise. And there is weather too: rain, which splashes delightfully on the lake, occasional thunderstorms (learn the power and you can call them up!), and snow blizzarding in the mountains. In case you can't go there yourself watch this YouTube movie: Skyrim - landscapes and scenery.

All of these are, I think, important cues in making the experience, and hence the memory, feel so real. But I think there is another factor, which is that my journeying through Skyrim is part of a narrative which connects places with events, quests and discoveries. So the memorable places are those I have arrived at following some perilous and occasionally epic trail, with multiple trials on the journey. Or they are places I have discovered offer safety and refuge; places to return to after days questing in the wilderness. Perhaps the depth and intensity of the experienced narrative somehow makes up for the limitations of the sensory experience? With only 2D vision on a flat TV screen and stereo audio, and nothing at all to stimulate the rest of the considerable human sensorium, it seems incredible that such a weakly immersive experience - compared to being in real places - can create subjectively comparable memories.

Since thinking about these memories of Skyrim, I've wondered if these are technically false memories. Am I experiencing so-called False Memory Syndrome? Although FMS is controversial, what is beyond doubt is the extraordinary suggestibility and unreliability of human memory. I was astonished by the unreliability of memory I witnessed at first hand two years ago while on jury service. But false memories are memories of events that never happened, yet are strongly believed. On reflection I think my memories of Skyrim do not fall into this category. The events and places occurred in the virtual rather than the real, but they, and my experience of them, really happened.

So if immersive video games technology has reached the point that it can create, for the gamer, memories of places and events which feel no different to memories of places and events in the real world, is this a bad thing? And, as the technology improves to make the experience more immersive, involving more senses, will we find ourselves unable to distinguish between the virtual and the real - confusing memories of one with the other? I think the answer is no. After all we each create a personal narrative - the remembered story of our lives - and I don't think it matters whether the events that make up that story happen in the virtual or the real. I think we're just as able to recall the difference between a trip to Rhyl and Rome, as between Skegness and Skyrim. And if, with advancing years or just because that's the way we are, we start to confuse these memories, I don't think we're any more likely to confuse the virtual and the real than we are the real and the real. Nor do I think the degree of immersion in the virtual matters as far as memories of being there are concerned*. After all, being in the real world is a fully immersive experience. Even if, and when, we can climb into a full-body immersive gaming rig, like those of Ernest Cline's brilliant Ready Player One, we will still only have an experience equal to that of the real world. So why should those experiences be remembered any differently to those in the real?

Ok. I've persuaded myself there isn't a problem. Time for another trip to Skyrim.


*There is of course another quite different concern - to do with how much more addictive the experience will become. Will we neglect the real - and ourselves - like Larry Niven's wireheads?

Thursday, August 22, 2013

The scourge of the RoboTroll is already upon us

When a robot ethics working group met nearly three years ago one of the things we fretted about was privacy. We were concerned especially about personal companion robots. Whatever their purpose, be it healthcare for the elderly or disabled, or childcare, or simply Robot and Frank style companionship, we debated the inevitable privacy issues of this kind of robot. Two aspects directly impact privacy, since personal companion robots are likely to (1) collect and store data and (2) be networked. We attempted to cover both privacy and security when we drafted our Principles of Robotics.

The second of our principles states: Robots should be designed and operated to comply with existing law, including privacy. Yes, sometimes the obvious does have to be stated.

We also worried about hacking. Our third principle is: Robots are products: as with other products, they should be designed to be safe and secure. And the commentary for this principle has this to say about hacking:
We are aware that the public knows that software and computers can be “hacked” by outsiders, and processes need to be developed to show that robots are secure as far as possible from such attacks.
It seems our concerns were well founded. Last week a report appeared of a networked baby monitor that was apparently hacked. It was pretty distressing. The hacker was shouting abuse at the baby, chillingly using her name - it seems that he (let's assume it was a he) was able to gain access to the baby monitor's video feed and read the baby's name displayed above her bed. Even more chilling (adding to the parents' horror), the execrable hacker then turned the camera to look at them when they entered their child's room to find out what was going on.

A WiFi IP camera, aka baby monitor, is, I contend, a teleoperated robot. The thing that makes it a robot is the motorised pan and tilt mechanism for steering the camera. So, despite password protection, Mr Gilbert's networked robot was hacked. The hack was a clear violation of both privacy and security. This particular robot, and I hazard hundreds of thousands like it, is absolutely not secure from attack. It fails our 2nd and 3rd principles.

The consequences of this particular attack were, fortunately, not much more serious than giving the Gilberts a fright they surely won't forget for some time. But for me one particularly egregious aspect of this robot hack - something that the robot ethics working group did not anticipate - was the verbal abuse hurled at baby Allyson and her parents. It is with profound dismay that I ask the question: is this the first case of RoboTrolling?

Friday, July 19, 2013

euRathlon and the DARPA Robot Challenge: a difference of approach

A week ago the DARPA Robotics Challenge unveiled the ATLAS humanoid robot, which will be used by seven competing teams. Developed by Boston Dynamics, ATLAS is an imposing 1.8 m, 150 kg bipedal humanoid robot, powered via a tethered cable. Another six teams have designed their own robots; interestingly, five of these are humanoid and one is a four-limbed, simian-inspired robot.

In the euRathlon project we are taking a different approach, in that we don't expect, or require, the competing robots to be humanoid or zoomorphic. None of the euRathlon competition scenarios demand a humanoid robot able to, for example, step inside a vehicle and drive it. However, for the land robots at least, there is nothing stopping euRathlon teams from bringing humanoid robots to the competition.

As I wrote when we launched euRathlon early this year, the big vision of euRathlon is a competition scenario in which no single type of robot is, on its own, sufficient. Inspired by the Fukushima accident of March 2011, the 2015 euRathlon competition will require teams of land, sea and flying robots to autonomously cooperate to survey the scene, identify critical hazards and undertake tasks to make the plant safe. Leading up to this grand challenge in 2015 will be related and preparatory land and underwater robot competitions, in 2013 and 2014 respectively.

The difference of our approach is not the result of an in-principle decision. Rather, it flows naturally from several factors. First, we are specifically creating competition scenarios that require cooperating teams across the three domains of land, sea and air. Second, we are looking for very high levels of autonomy, so the robot teams will, ideally, complete their mission with hands-off human monitoring only. Any human interventions will be penalised in the euRathlon scoring schema. And third, we are not looking to push innovation in the robot platforms themselves, but rather in their cognition, autonomy and system level team working. Thus, euRathlon teams who make use of existing and proven robot hardware will gain a big advantage in that they can focus all of their efforts on the software, communications and systems engineering; the AI and the autonomy. And by autonomy we mean both control and energy autonomy. The euRathlon competition scenarios preclude the use of tethered power connections, so robots must carry their own energy supplies sufficient to last the whole mission.

For these reasons the euRathlon robots are likely to look rather conventional: wheeled or tracked land robots; fixed or rotary wing (e.g. quadcopter) flying robots, and ROV-type underwater robots. Not as dramatic as the DARPA robot challenge humanoid or animal-like robots perhaps, but looks can be deceptive: the real innovation in the euRathlon robots will be in the autonomous cooperation across the three domains - something that has not been demonstrated in realistic outdoor disaster response scenarios.

Of course there is nothing to stop euRathlon teams from using a bio-mimetic approach, so that fish-like underwater robots cooperate with bird-like flying robots and legged animal-like land robots. That really would be something!


Related blog posts:
euRathlon is go! (Feb 2013)
Real-world robotics reality check (May 2010)
A truly Grand Challenge (August 2007)

Tuesday, May 28, 2013

New Robotics and new opportunities

Here are the slides of my talk at the BARA Academic Forum for Robotics meeting Robotics: from innovation to service, on Monday 20 May 2013:



The key messages from my talk were:
  • The new wave of robotics represents a kind of Cambrian explosion: an exciting but also bewildering exploration of new forms, functions and materials. This explosion of diversity means that the New Robotics is not one kind of robot. Thus any kind of prediction about which of these will successfully evolve to become mainstream is more or less impossible.
  • There are two common myths: first, the waiting-for-AI myth: the idea that robotics is waiting for some breakthrough innovation in Artificial Intelligence, without which robotics is stuck. And second, the need-full-autonomy myth: the idea that fully autonomous robots represent some ideal end-state of the development of robotics. This is not true - instead we need robots and human-robot interfaces that will transition smoothly between tele-operation and semi-autonomy. We call this dynamic autonomy (there is a rough sketch of the idea after this list).
  • There are significant opportunities for innovation right now - underpinned by a significant head-of-steam of fundamental technologies from university R&D. I offer some examples for discussion, including companion robots, wearable robots and tele-operated robots with immersive tele-presence, perhaps making use of remote tele-haptics (although I claim no special insights). 
  • We need new and agile approaches to innovation. New kinds of research-industry partnerships and flexible, responsive pathways to commercialisation. Especially campus start-ups and incubators, nurturing post-docs as next generation entrepreneurs; and innovative modes of funding. We also need responsible and sustainable innovation. 
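
As a very loose illustration of what dynamic autonomy might mean at the command-arbitration level (every name and threshold below is invented for illustration, not a description of any particular system):

```python
from enum import Enum, auto

class Mode(Enum):
    TELEOPERATION = auto()     # operator drives directly
    SEMI_AUTONOMOUS = auto()   # robot executes sub-tasks under supervision
    AUTONOMOUS = auto()        # robot plans and acts; operator monitors

def select_command(mode, operator_cmd, planner_cmd, link_ok, confidence, threshold=0.7):
    """Arbitrate between operator and planner commands for the current autonomy mode."""
    if mode is Mode.TELEOPERATION and link_ok:
        return operator_cmd
    if mode is Mode.SEMI_AUTONOMOUS and link_ok and operator_cmd is not None:
        return operator_cmd        # defer to the operator whenever they intervene
    if confidence >= threshold:
        return planner_cmd         # carry on autonomously (also covers a dropped link)
    return None                    # low confidence: pause and request operator input
```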

Here are links to further information, and video clips, on the projects and robots highlighted in the talk:

Slide 10: The Cooperative Human Robot Interaction Systems (CHRIS) project
Slide 11: MOBISERV - An Integrated Intelligent Home Environment for the Provision of Health, Nutrition and Well-Being Services to Older Adults
Slide 12: Hand exoskeleton for post stroke recovery

Slide 13: Tactile Sensing - tele-haptics

Slide 14: Surgical Haptics
Slide 15: Search and Rescue - Disaster Response
Slide 16: Towards energy sustainability

Sunday, May 26, 2013

What is the single biggest obstacle preventing robotics going mainstream?

The question Robotics by Invitation asked its panel in May 2013, was:

What is the single biggest obstacle preventing robotics from going mainstream? It has been said that we are on the edge of a ‘robotic tipping point’ … but where, exactly, is this edge? And what’s holding us back?

Here is my answer:

It depends on what you mean by mainstream. For a number of major industry sectors robotics is already mainstream. In assembly-line automation, for instance; or undersea oil well maintenance and inspection. You could argue that robotics is well established as the technology of choice for planetary exploration. And in human culture too, robots are already decidedly mainstream. Make-believe robots are everywhere, from toys and children’s cartoons, to TV ads and big budget Hollywood movies. Robots are so rooted in our cultural landscape that public attitudes are, I believe, informed – or rather misinformed – primarily by fictional rather than real-world robots.

So I think robotics is already mainstream. But I understand the sentiment behind the question. In robotics we have a shared sense of a technology that has yet to reach its true potential; of a dream unfulfilled.

The question asks what is the single biggest obstacle. In my view some of the biggest immediate obstacles are not technical but human. Let me explain with an example. We already have some very capable tele-operated robots for disaster response. They are rugged, reliable and some are well field-tested. Yet why is it that robots like these are not standard equipment with fire brigades? I see no technical reason that fire tenders shouldn’t have, as standard, a compartment with a tele-operated robot – charged and ready for use when it’s needed. There are, in my view, no real technical obstacles. The problem I think is that such robots need to become accepted by fire departments and the fire fighters themselves, with all that this entails for training, in-use experience and revised operational procedures.

In the longer term we need to ask what it would mean for robotics to go mainstream. Would it mean everyone having a personal robot, in the same way we all now have personal computing devices? Or, when all cars are driverless perhaps? Or, when everyone whose lives would be improved with a robot assistant could reasonably expect to be able to afford one? Some versions of mainstream are maybe not a good idea: I’m not sure I want to contemplate a world in which there are as many personal mobile robots as there are mobile phones now (~4.5 billion). Would this create robot smog, as Illah Nourbakhsh calls it in his brilliant new book Robot Futures?

Right now I don’t have a clear idea of what it would mean for robots to go mainstream, but one thing’s for sure: we should be thinking about what kind of sustainable, humanity-benefitting and life-enhancing mainstream robot futures we really want.


Thursday, March 28, 2013

A Crisis of Expectations

At the first UK Robot Ethics workshop on 25th March 2013, I offered - for discussion - the proposition that robotics is facing a Crisis of Expectations. And not for the first time. I argue that one possible consequence is (another) AI winter.

Here is a hypertext linked version of my paper.

Introduction

In this talk I set out the proposition that robotics is facing a crisis of expectations. As a community we face a number of expectation gaps - significant differences between what people think robots are and do, and what robots really are and really do, and (more seriously) might reasonably be expected to do in the near future. I will argue that there are three expectation gaps at work here: public expectations, press and media expectations and funder or stakeholder expectations, and that the combined effect of these amounts to a crisis of expectations. A crisis we roboticists need to be worried about.

Public Expectations

Here's a simple experiment. Ask a non-roboticist to give you an example of a robot - the first that comes into their mind. The odds are that it will be a robot from a Science Fiction movie: perhaps Terminator, R2-D2 or C-3PO or Data from Star Trek. Then ask them to name a real-world robot. Unlike your first question, which they will have answered quickly, this one generally needs a little longer. You might get an answer like "the robot in the advert that spray-paints cars" or, if you're lucky, they might know someone with a robot vacuum cleaner. So, although most people have a general idea that there are robots in factories, or helping soldiers to defuse bombs, the robots that they are most familiar with - the ones they can name and describe - are fictional.

None of this is surprising. The modern idea of a robot was, after all, conceived in Science Fiction. Czech playwright Karel Čapek first used the word Robot to describe a humanoid automaton in his play Rossum’s Universal Robots (RUR), and Isaac Asimov was the first to coin the word Robotics, in his famous short stories of the 1940s. The idea of a robot as an artificial mechanical person has become a ubiquitous fictional trope, and robots have, for half a century, been firmly rooted in our cultural landscape. We even talk about people acting robotically and, curiously, we don't mean like servants, we mean in a fashion that mimics the archetypal robot: stiff jointed and emotionally expressionless.

Furthermore, people like robots, as anyone who has had the pleasure of giving public talks will know. People like robots because robots are, to paraphrase W. Grey Walter, An Imitation of Life. Which probably accounts for the observation that we are all, it seems, both fascinated and disturbed by robots in equal measure. I have often been asked the question "how intelligent are intelligent robots?", but there's always an unspoken rider "...and should we be worried?". Robot dystopias, from Terminator, to The Matrix or out-of-control AI like HAL in Kubrick's 2001, make compelling entertainment but undoubtedly feed the dark side of our cultural love affair with robots.

It is not surprising then, that most people's expectations about robots are wrong. Their beliefs about what real-world robots do now are hazy, and their expectations about what robots might be like in the near future are often spectacularly over-optimistic. Some think that real-world robots are just like movie robots. Others are disappointed and feel that robotics has failed to deliver the promises of the 1960s. This expectation gap - the gap between what people think robots are capable of and what they're really capable of - is not one-dimensional and is, I argue, a problem for the robotics community. It is a problem that can manifest itself directly when, for instance, public attitudes towards robots are surveyed and the results used to inform policy [1]. It makes our work as roboticists harder, because the hard problems we are working on are problems many people think already solved, and because it creates societal expectations of robotics that cannot be met. And it is a problem because it underpins the next expectation gap I will describe.

Press and Media Expectations

You are technically literate, an engineer or scientist perhaps with a particular interest in robotics, but you've been stranded on a desert island for the past 30 years. Rescued and returned to civilisation you are keen to find out how far robotics science and technology has advanced and - rejoicing in the marvellous inventions of the Internet and its search engines - you scour the science press for robot news. Scanning the headlines you are thrilled to discover that robots are alive, and sending messages from space; robots can think or are "capable of human reasoning or learning"; robots have feelings, relate to humans, or demonstrate love, even behave ethically. Truly robots have achieved their promised potential. 

Then of course you start to dig deeper and read the science behind these stories. The truth dawns. Although the robotics you are reading about is significant work, done by very good people, the fact is - you begin to realise - that now, in 2013, robots cannot properly be said to think, feel, empathise, love or be moral agents; and certainly no robot is, in any meaningful sense, alive or sentient. Of course your disappointment is tempered by the discovery that astonishing strides have nevertheless been made.

So, robotics is subject to journalistic hype. In this respect robotics is not unique. Ben Goldacre has done much to expose bad science reporting, especially in medicine. But a robot is different to, say, a new strain of MRSA because - as I outlined above - most people think they know what a robot is. Goldacre has characterised bad science stories as falling into three categories: wacky stories, scare stories and breakthrough stories [2]. My observation is that robots in the press most often headline as either wacky or scary, even when the development is highly innovative.

I believe that robohype is a serious problem and an issue that the robotics community should worry about. The problem is this. Most people who read the press reports are lay readers who - perfectly reasonably - will not read much beyond the headline; certainly few will look for the source research. So every time a piece of robohype appears (pretty much every day) the level of mass-delusion about what robots do increases a bit more, and the expectation gap ratchets a little wider. Remember that the expectation gap is already wide. We are at the same time fascinated and fearful of robots, and this fascination feeds the hype because we want (or dread) the robofiction to become true. Which is of course one of the reasons for the hype in the first place.

Who's to blame for the robohype? Well we roboticists must share the blame. When we describe our robots and what they do we use anthropocentric words, especially when trying to explain our work to people outside the robotics community. Within the robotics and AI community we all understand that when we talk about an intelligent robot, what we mean is a robot that behaves as if it were intelligent; 'intelligent robot' is just a convenient shorthand. So when we talk to journalists we should not be too surprised when "this robot behaves, in some limited sense, as if it has feelings" gets reported as "this robot has feelings". But science journalists must, I think, also do better than this.

Funder and Stakeholder Expectations

Many of us rely on research grants to fund our work and - whether we like it or not - we have to become expert in the discipline of grantology. We pore over the small print of funding calls and craft our proposals with infinite care in an effort to persuade reviewers (also skilled grantologists) to award the coveted 'outstanding' scores. We are competing for a share of a limited resource, and the most persuasive proposals - the most adventurous, which also promise the greatest impact while matching themes defined to be of national importance - tend to succeed. Of course all of this is more or less equally true whether you are bidding for a grant in history, microbiology or robotics. But the crisis of expectations makes robotics different.

There are, I think, three factors at work. The first is the societal and cultural context - the expectation gaps I have outlined above. The second is the arguably disgraceful lack of useful and widely accepted benchmarks in robotics, which means that it is perfectly possible to spend 3 years developing a new robot which is impossible to quantifiably demonstrate as superior to comparable robots, including those that already existed when that project started. And the third is the fact that policymakers, funders and stakeholders are themselves under pressure to deliver solutions to very serious societal or economic challenges and therefore perhaps too eager to buy into the promise of robotics. Whether naively or wittingly, we roboticists are I believe guilty of exploiting these three factors when we write our grant applications.

I believe we now find ourselves in an environment in which it is almost de rigueur to over-promise when writing grant applications. Only the bravest proposal writer will be brutally honest about the extreme difficulty of making significant progress in, for instance, robot cognition, and admit that even a successful project, which incrementally extends the state of the art, may have only modest impact. Of course I am not suggesting that all grants over-promise and under-deliver, but I contend that many do and - because of the factors I have set out - they are rarely called to account. Clearly the danger is that sooner or later funding bodies will react by closing down robotics research initiatives and we will enter a new cycle of AI Winter.

AI has experienced "several cycles of hype, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later" [3]. The most serious AI Winter in the UK was triggered by the Lighthill Report [4], which led to a more or less complete cancellation of AI research in 1974. Are we heading for a robotics winter? Perhaps not. One positive sign is the identification of Robotics and Autonomous Systems as one of eight technologies of strategic importance to the UK [5]. Another is the apparent health of robotics funding in the EU and, in particular, Horizon 2020. But a funding winter is only the most extreme consequence of the culture of over-promising I have outlined here.

Discussion

I want to conclude this talk with some thoughts on how we, as a community, should respond to the crisis of expectations. And respond we must. We have, I believe, an ethical duty to the society we serve, as well as to ourselves, to take steps to counter the expectation gaps that I have outlined. Those steps might include:
  • At every opportunity, individually and collectively, we engage the public in honest explanation and open dialogue to raise awareness of the reality of robotics. We need to be truthful about the limitations of robots and robot intelligence, and measured with our predictions. We can show that real robots are both very different from, and much more surprising than, their fictional counterparts.
  • When we come across particularly egregious robot reporting in the press and media we make the effort to contact the reporting journalist, to explain simply and plainly the true significance of the work behind the story. 
  • Individually and collectively we endeavour to resist the pressure to over-promise in our bids and proposals, and when we review proposals or find ourselves advising on funding directions or priorities, we seek to steer funders towards a more measured and ultimately sustainable approach to the long-term Robotics Project.

References

[1] Public Attitudes towards Robots. Special Eurobarometer 382, European Commission, 2012.

[2] Ben Goldacre. Don't dumb me down. The Guardian, 8 September 2005.
  
[3] AI Winter. Wikipedia, accessed 14 March 2013.
  
[4] James Lighthill. Artificial Intelligence: A General Survey. In Artificial Intelligence: a paper symposium, Science Research Council, 1973. Here is a BBC televised debate which followed publication of the Lighthill report, in which Donald Michie, Richard Gregory and John McCarthy challenge the report and its recommendations (1973).

Sunday, March 24, 2013

Robotics has a new kind of Cartesian Dualism, and it's just as unhelpful

I believe robotics has re-invented mind-body dualism.

At the excellent European Robotics Forum last week I attended a workshop called AI meets Robotics. The thinking behind the workshop was:
The fields of Artificial Intelligence (AI) and Robotics were strongly connected in the early days of AI, but became mostly disconnected later on. While there are several attempts at tackling them together, these attempts remain isolated points in a landscape whose overall structure and extent is not clear. Recently, it was suggested that even the otherwise successful EC program "Cognitive systems and robotics" was not entirely effective in putting together the two sides of cognitive systems and of robotics.
I couldn't agree more. Actually I would go further and suggest that robotics has a much bigger problem than we think. It's a new kind of dualism which parallels Cartesian mind-body dualism, except that in robotics it's hardware-software dualism. And like Cartesian dualism it could prove just as unhelpful, both conceptually and practically, in our quest to build intelligent robots.

While sitting in the workshop last week I realised rather sheepishly that I'm guilty of the same kind of dualistic thinking. In my Introduction to Robotics one of the (three) ways I define a robot is: an embodied Artificial Intelligence. And I go on to explain:
...a robot is an Artificial Intelligence (AI) with a physical body. The AI is the thing that provides the robot with its purposefulness of action, its cognition; without the AI the robot would just be a useless mechanical shell. A robot’s body is made of mechanical and electronic parts, including a microcomputer, and the AI is made by the software running in the microcomputer. The robot analogue of mind/body is software/hardware. A robot’s software – its programming – is the thing that determines how intelligently it behaves, or whether it behaves at all.
But, as I said in the workshop, we must stop thinking of cognitive robots as either "a robot body with added AI", or "an AI with added motors and sensors". Instead we need a new kind of holistic approach that explicitly seeks to avoid this lazy "with added" thinking.


Thursday, March 07, 2013

Extreme debugging - a tale of microcode and an oven

It's been quite a while since I debugged a computer program. Too long. Although I miss coding, the thing I miss more is the process of finding and fixing bugs in the code. Especially the really hard-to-track-down bugs that have you tearing your hair out - convinced your code cannot possibly be wrong, that something else must be the problem. But then when you track down that impossible bug, it becomes so obvious.

I wanted to write here about the most fun I've ever had debugging code. And also the most bizarre, since fixing the bugs required the use of an oven. Yes, an oven. It turned out the bugs were temperature dependent.

But first some background. The year is 1986. I'm the co-founder of a university spin-out company in Hull, England, called Metaforth Ltd. The company was set up to commercialise a stack-based computer architecture that runs the language Forth natively. In other words, Forth is the equivalent of the CPU's assembly language. Our first product was a 16-bit industrial processor which we called the MF1600. It was a 2-card module, designed to plug into the (then) industry standard VME bus. One of the cards was the Central Processing Unit (CPU) - built not around a microprocessor, but from discrete components using fast Transistor-Transistor Logic (TTL) devices. The other card provided memory, input-output interfaces, and the logic needed to interface with the VME bus.

The MF1600 was fast. It ran Forth at 6.6 Million Forth Instructions Per Second (MIPS). Sluggish of course by today's standards, but in 1986 6.6 MIPS was faster than any microprocessor. PCs of the day were powered by the state-of-the-art Intel 286, with a clock frequency of 6MHz, managing around 0.9 assembler MIPS. And because Forth instructions are higher level than assembler instructions, the speed differential was greater still when doing real work.

Ok, now to the epic debugging...

One of our customers reported that during extended tests in an industrial rack the MF1600 was mysteriously crashing. And crashing in a way we'd not experienced before when running tried and tested code. One of their engineers noted that their test rack was running very hot, almost certainly exceeding the MF1600's upper temperature limit of 55°C. Out of spec maybe, but still not good.

So we knew the problem was temperature related. Now any experienced electronics engineer will know that electrical signals take time to get from one place to another. It's called propagation delay, and these delays are normally measured in billionths of a second (nanoseconds). And propagation delays tend to increase with temperature. Like any CPU our MF1600 relies on signals getting to the right place at the right time. And if several signals have to reach the same place at the same time then even a small extra delay in one of them can cause major problems.

On most CPUs when each basic instruction is executed, a tiny program inside the CPU actually does the work of that instruction. Those tiny programs are called microcode. Here is a blog post from several years ago where I explain what microcode is. Microcode is magic stuff - it's the place where software and hardware meet. Just like any program, microcode has to be written and debugged, but uniquely - when you write microcode - you have to take account of how long it takes to process and route signals and data across the CPU: 100nS from A to B; 120nS from C to D, and so on. So if the timing in any microcode is tight (i.e. only just allows for the normal delay and leaves no margin of error), it could result in that microcode program crashing at elevated temperatures.

So, we reckoned we had one, or possibly several, microcode programs in the MF1600 CPU with 'tight' timing. The question was, how to find them.

The MF1600 CPU had around 86 (Forth) instructions, and the timing bugs could be in any of them. Now testing microcode is very difficult, and the nature of this problem made it even harder. A timing problem at elevated temperatures means that testing the microcode by single-stepping the CPU clock and tracing the signals through the CPU with a logic analyser wouldn't help at all. We needed a way to efficiently identify the buggy instructions; we could worry about debugging them later. What we wanted was a way to test - that is, exercise - single instructions, one by one, on a running system at high temperature.

Then we remembered that we didn't need all 86 instructions to run the computer. Most of them could be emulated by putting together a set of simpler instructions. So a strategy formed: (1) write a set of tiny Forth programs to replace as many of the CPU instructions as possible, (2) recompile the operating system, then (3) hope that the CPU runs OK at high temperature. If it does, then (4) run the CPU in an oven and test the replaced instructions one by one.

Actually it didn't take long to do steps (1) and (2), because the Forth programs already existed to express the more complex instructions as sets of simpler ones. Many Forth systems on conventional microprocessors were built like that. In the end we had a minimal set of about 24 instructions. So, with the operating system recompiled and installed, we put the CPU into the oven and switched on the heat. The system ran perfectly (but a little slower than usual), and continued to run well above the temperature at which it had previously crashed. A real stroke of luck.

Here's an example: the simple instruction MIN, which replaces the top two values on the stack with the smaller of the two, expressed as a Forth program:
: MIN  OVER OVER > IF SWAP THEN DROP ;
(From my 1983 book The Complete Forth).
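For readers who don't know Forth, here is a quick usage example (purely illustrative, not from the book) showing the stack-based calling convention:

3 7 MIN .   \ prints 3: MIN consumes the top two stack values and leaves the smaller

Colon definitions like this are exactly the mechanism that let us substitute high-level Forth for the CPU's native instructions without touching the rest of the system.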

From then on it was relatively easy to run small test programs to exercise the other 62 instructions (which were of course still there in the CPU - just not used by the operating system). A couple of days' work and we had found the 2 rogue instructions that were crashing at temperature. They were - as you might have expected - rather complex instructions. One was (LOOP), the instruction that implements do loops.
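I no longer have those test programs, but a minimal sketch of the kind of thing we ran - hypothetical word names and loop counts, written in modern ANS-style Forth rather than our 1986 dialect - might look like this:

\ HOT-LOOP hammers the CPU's native do-loop machinery; SOAK repeats it
\ forever, so any marginal timing shows up as a wrong count or a crash.
: HOT-LOOP  ( -- n )  0  10000 0 DO  1+  LOOP ;
: SOAK  ( -- )  BEGIN  HOT-LOOP 10000 <> IF ." LOOP TIMING FAILURE " THEN  AGAIN ;

Leave something like SOAK running while the oven climbs, and a suspect instruction either passes cleanly or betrays itself long before you need a logic analyser.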

Then debugging those instructions simply required studying the microcode and the big chart with all the CPU delay times, over several pots of coffee. We knew (or strongly suspected) that we were looking for timing problems - race hazards - where the data from one part of the CPU just doesn't have time to get to another part in time to be used by the next step of the microcode program. Having identified the suspect timings, I re-wrote the microcode for those instructions to leave a bit more time, by adding one clock cycle (50nS) to each instruction.
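To make the arithmetic concrete - with illustrative numbers, not the actual figures from our timing chart - suppose a data path needs 100nS and the microcode allows it exactly two 50nS clock cycles. That's zero margin: a few extra nanoseconds of propagation delay in a hot rack and the destination register latches stale data. Allow a third cycle and you buy roughly 50nS of headroom, at the cost of making that one instruction slightly slower.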

Then came the moment of truth: reverting to the old, non-patched operating system, putting the CPU back in the oven and cranking up the temperature while it ran test programs specifically designed to stress those particular instructions. Yes! The system didn't crash at all, over several days of running at temperature. I recall pushing the temperature above 100°C. Components on the CPU circuit board were melting, but still it didn't crash.

So that's how we debugged code with an oven.