Saturday, September 15, 2012

How to make an artificial companion you can really trust

Here are the slides of my Bristol TEDx talk:



My key take home messages from this talk are:
  • Big transition happening now from industrial robots to personal robots.
  • Current generation android robots are disappointing: they look amazing but their AI - and hence their behaviour - falls far short of matching their appearance. I call this the brain-body mismatch problem. That's the bad news.
  • The good news is that current research in personal human robot interaction (pHRI) is working on a whole range of problems that will contribute to future artificial companion robots.
  • Taking as a model from SF the robot butler Andrew, from the movie Bicentennial Man, I outline the capabilities that an artificial companion will need to have, and the current research - some of it at the Bristol Robotics Lab - working towards them.
  • Some of these capabilities are blindingly obvious, like safety; others less so, like gestural communication using body language. Most shocking, perhaps, is that an artificial companion will need to be self-aware in order to be safe and trustworthy.
  • A very big challenge will be putting all of these capabilities together, blending them seamlessly into one robot. This is one of the current Grand Challenges of robotics.

And here are my speaker notes, for each slide, together with links to the YouTube videos for the movie clips in the presentation. Many of these are longer clips, with commentary by our lab or project colleagues.

Slide 2
There are currently an estimated 8 to 10 million robots in the world, but virtually none of these are ‘personal’ robots: robots that provide us with companionship, or assistance if we’re frail or infirm, or that act as helpmates in our workplace. This is odd, when we consider that the word 'robot' was first coined 90 years ago to refer to an artificial humanoid person.

But there is a significant transition happening right now in robotics, from robots like these - working by and large out of sight and out of mind in factories or warehouses, servicing underwater oil wells, or exploring the surface of Mars - to robots working with us in the home or workplace.

What I want to do in this talk is outline the current state of play in artificial companions, and the problems that need to be solved before artificial companions become commonplace.

Slide 3
But first I want to ask what kind of robot companion you would like - taking as a cue robots from SF movies...? I would choose WALL-E!

Before I begin to outline the challenges of building an artificial companion you could trust, let me first turn to the question of robot intelligence.

Slide 4
How intelligent are intelligent robots - not SF robots, but real world robots?

Our intuition tells us that a cat is smarter than a crocodile, which is in turn smarter than a cockroach. So we have a kind of animal intelligence scale. Where would robots fit?

Of course this scale assumes 'intelligence' is one thing that animals, humans or robots have more or less of, which is quite wrong - but let's go with it to at least make an attempt to see where robots fit.

Slide 5
Humans are, of course, at the 'top' of this scale. We are, for the time being, the most 'intelligent' entities we know.

Slide 6
Here is a robot vacuum cleaner. Some of you may have one. Where would it fit?

I would say right at the bottom. It's perhaps about as smart as a single celled organism - like an amoeba.

Slide 7
This android, called an Actroid, from the University of Osaka, looks as if it should be pretty smart. Perhaps toward the right of this scale...?

But looks can be deceptive. This robot is - I would estimate - little smarter than your washing machine.

Slide 8
Actroid is an illustration of what I call the brain-body mismatch problem: we can build humanoid - android - robots that look amazing, beautiful even, but their behaviours fall far, far short of what we would expect from their appearance. We can build the bodies but not the brains - and it's the problem of bridging this gap that I will focus on now.

But we should note - looking at the amazing Paro robot baby seal - that the brain-body mismatch problem is much less serious for zoomorphic robots. Robot pets. This is why robot pets are, right now, much more successful artificial companions than humanoid robots.

Slide 9
In order to give us a mental model of the kind of artificial companion robot we might be thinking about, let's choose Andrew, the butler robot from the movie Bicentennial Man. The model, perhaps, of an ideal companion robot.

Although I prefer the robotic 'early' Andrew in the movie, rather than the android Andrew that the robot becomes. I think that robots should look like robots, and not like people.

So what are the capabilities that Andrew, or a robot like Andrew, would need?

Slide 10
The first, I would strongly suggest, is that our robot needs to be mobile and very, very safe. Safety, and how to guarantee it, is a major current research problem in human robot interaction, and here we see a current generation human assistive robot used in this research.

At this point I want to point out that almost all of the robots, and projects, I'm showing you now are right here in Bristol, at the Bristol Robotics Laboratory: a joint research lab of UWE Bristol and the University of Bristol, and the largest robotics research lab in the UK.

Slide 11
Humans use body language, especially gesture, as a very important part of human-human communication, and so an effective robot companion needs to be able to understand and use human body language, or gestural communication. This is the Bristol Elumotion Robot Torso BERT, used for research in gestural communication.

Another important part of human-human communication is gaze. We unconsciously look to see where each others' eyes are looking, and look there too. On the right here we see the Bristol digital robot head used for research in human robot shared attention through gaze tracking.

This, and the next few slides, all represent research done as part of the EU funded CHRIS project, which stands for Cooperative Human Robot Interaction Systems. The BRL led the CHRIS project.

Video clip: BERT explains its purpose.

Slide 12
A robot companion needs to be able to learn and recognise everyday objects, even reading the labels on those objects, as we see in the movie clips here.

Video clip: BERT learns object names.

Slide 13
And of course our robot companion needs to be able to interact directly with humans: able to give objects, safely and naturally, to a human, and take objects from a human. This remains a difficult challenge - especially assuring the safety of these interactions - but here we see the iCub robot used in the CHRIS project for work on this problem.

Also important, not just to grasping objects but for any possible direct interaction with humans, is touch sensitive hands and fingertips - and here we see parallel research in the BRL on touch sensitive fingertips.

I think a very good initial test of trust for a robot companion would be to hold out your hand for a handshake. If the robot is able to recognise what you mean by the gesture, and respond with its hand, then safely and gently shake your hand - then it would have taken the first step in earning your trust in its capabilities.

Video clip of iCub robot passing objects is part of the wonderful CHRIS project video: Cooperative Human Robot Interactive System - CHRIS Project - FP7 215805 - project web http://www.chrisfp7.eu

Slide 14
Let's now turn to emotional states. An artificial companion robot needs to be able to tell when you are angry, or upset, for instance, so that it can moderate its behaviour accordingly and ask: what's wrong? Humans use facial expression to express emotional states, and in this movie clip we see the android expressive robot head Jules pulling faces with researcher Peter Jaeckel.

If our robot companion had an android face (even though I'm not at all sure it's necessary, or a good idea) then it too would be able to express 'artificial' emotions through facial expression.

Video clip: Jules, an expressive android robot head, is part of a YouTube video in which Chris Melhuish, Director of the Bristol Robotics Laboratory, discusses work in the area of human/robot interaction.

Slide 15
Finally I want to turn to perhaps the oddest looking robot in this roundup. Cronos, conceived and built in a project led by my old friend Owen Holland. Owen was a co-founder of the robotics lab at UWE, together with its current director Chris Melhuish, and myself.

Cronos is quite unlike the other humanoid robots we've seen because it was designed to be humanoid from the inside, not the outside. Owen Holland calls this approach 'anthropomimetic'.

Cronos is made from hand-sculpted 'bones' of thermo-softening plastic, held together with elastic tendons and motors that act as muscles. As we can see in this movie, Cronos 'bounces' whenever it moves even just a part of its body. Cronos is light, soft and compliant. This makes Cronos very hard to control, but that is in fact the whole idea. Cronos was designed to test ideas on robots with internal models. Cronos has, inside itself, a computer simulation of itself. This means that Cronos can, in a sense, 'imagine' different moves and find the ones that work best. It can learn to control its own body.

Cronos therefore has a degree of self-awareness that most other humanoid robots don't have.

I think this is important because a robot with an internal model, able to try out different moves in its computer simulation before enacting them for real, will be safer as a result. Paradoxically, therefore, I think that a level of self-awareness is needed for safer, and therefore more trustworthy, robots.

Video clip: ECCE Humanoid Robot, presented by Hugo Gravato Marques.

Slide 16
The various skills and capabilities I've outlined here are almost certainly not enough for our ideal artificial companion. But suppose we could build a robot that combines all of these technologies in a single body - I think we would have moved significantly closer to an artificial companion like Andrew.

Slide 17
Thank you for listening, and thank you for affording me the opportunity to talk about the work of the many amazing roboticists I have represented - I hope accurately - in this talk.

All of the images and movies in this presentation are believed to be copyright free. If any are not then please let me know and I will remove them.


Related blog posts:
On self-aware robots: Robot know thyself
60 years of asking Can Robots Think?
On Robot ethics: The Ethical Roboticist (lecture); Discussing Asimov's laws of robotics and a draft revision; Revising Asimov: the Ethical Roboticist
Could a robot have feelings?

Tuesday, July 24, 2012

When robots start telling each other stories...

About 6 years ago the late amazing Richard Gregory said to me, with a twinkle in his eye, "when your robots start telling each other stories, then you'll really be onto something". It was a remark with much deeper significance than I realised at the time.

Richard planted a seed that's been growing since. What I didn't fully appreciate then, but do now, is the profound importance of narrative. More than we perhaps imagine. Narrative is, I suspect, a fundamental property of both human societies and individual human beings. It may even be a universal property of all advanced societies of sentient social beings. Let me try and justify this outlandish claim. First, take human societies. We humans love to tell each other stories. Whether our stories are epic poems or love songs; stories told with sound (music), or movement (dance), or with stuff (sculpture or art); stories about what we did today, or on our holidays; stories made with images (photos, or movies); true stories or fantasies; stories about the Universe that strive to be true (science), or very formal abstract stories told with mathematics - stories are everywhere. Arguably human culture is mostly stories.

Since humans started remembering stories and passing them on orally, and more recently with writing, we have had history: the more-or-less-true grand stories of human civilisation. Even the many artefacts of our civilisation are kinds of stories. They are embodied stories, which narrate the process by which they were designed and made; the plans and drawings which we use to formally record those designs are literally stories which tell how to arrange and join materials in space to fashion the artefact. Project plans are narratives of a different kind: they tell the story of the future steps that must be taken to achieve a goal. Computer programs are stories too. Except that they contain multiple narratives (bifurcated with branches and reiterated with loops), whose paths are determined by input data, which are related over and over at blinding speed within the computer. 

Now consider individual humans. There is a persuasive view in psychology that each of us owes our identity, our sense of self, to our personal life stories. The physical stuff that makes us, the cells of our body, are regenerated and replaced continuously, so that there's very little of you that existed 5 years ago. (I just realised the fillings in my teeth are probably the oldest part of me!) Yet you are still you. You feel like the same you 10, 20 or in my case 50 years ago - since I first became self-aware. I think that it's the lived and remembered personal narrative of our lives that provides us with the feeling, the illusion if you like, of a persistent self. This is I think why degenerative brain diseases are so terrifying. They appear to eat away that personal narrative so devastatingly that the person is ultimately lost, even while their physical body continues living.

So I was tremendously excited to be invited to a cross-disciplinary workshop on Narrative and Complex Systems at the York Centre for Complex Systems Analysis a couple of weeks ago, co-organised by York professors Richard Walsh (English) and Susan Stepney (Computer Science). For the first time I found myself in a forum in which I could share and debate ideas on narrative.

In preparing for the workshop I realised that perhaps the idea of robots telling each other stories isn't as far fetched as it first appears. Think about a simple robot, like the e-puck. What does the story of its life consist of? Well, it is the complete history of all of its movements, including turns, etc., punctuated by interactions with its environment. Because the robot and its set of behaviours is simple, those interactions are pretty simple too. It occurred to me that it is perfectly possible for a robot to remember everything that has ever happened to it. Now place a number of these robots together, in a simple 'society' of robots, and provide them with a mechanism to exchange 'life stories' (or, more likely, fragments of life stories). This mechanism is something we already developed in the Artificial Culture project - it is social learning by imitation. These robots would be telling each other stories.
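To make this a little more concrete, here is a minimal sketch - my own illustration, not the Artificial Culture project code - of a robot that logs its entire life story as an ordered sequence of events and can tell a short fragment of it to a neighbour. All of the class, method and event names are invented.

```python
# A minimal sketch, under my own assumptions, of a robot that remembers its
# whole 'life story' as an ordered list of events and can tell a short
# fragment of it to another robot. All names and event types are invented.

import random

class StorytellingRobot:
    def __init__(self, name):
        self.name = name
        self.life_story = []      # complete, ordered history of events
        self.heard_stories = []   # fragments told to it by other robots

    def record(self, event):
        """Append an event ('forward', 'turn left', 'bumped wall', ...) to the story."""
        self.life_story.append(event)

    def tell_fragment(self, length=3):
        """Return a short contiguous fragment of this robot's life story."""
        if len(self.life_story) <= length:
            return list(self.life_story)
        start = random.randrange(len(self.life_story) - length + 1)
        return self.life_story[start:start + length]

    def listen(self, teller, fragment):
        """Store (and perhaps later re-enact, i.e. imitate) a heard fragment."""
        self.heard_stories.append((teller, fragment))


# Robot A narrates a fragment of its history to robot B.
a, b = StorytellingRobot("A"), StorytellingRobot("B")
for event in ["forward", "turn left", "bumped wall", "reverse", "turn right"]:
    a.record(event)
b.listen(a.name, a.tell_fragment())
```

A robot receiving such a fragment could then attempt to re-enact it, which is where the imitation mechanism we developed comes in.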

But, I hear you ask, would these stories have any meaning? Well, to start with I think we must abandon the notion that they would necessarily mean anything to us humans. After all, these are robots telling each other stories. Ok, so would the stories mean anything to the robots themselves, especially robots with limited 'cognition'? Now we are in the interesting territory of semiotics, or - to be more accurate - robosemiotics. What, for instance, would one robot's story signify to another? That signification would I think be the meaning. But I think to go any further we would need to do the robot experiment I have outlined here.

And what would be the point of my proposed robot experiment? It is, I suggest, this:
to explore, with an abstract but embodied model, the relationship between the narrative self and shared narrative, i.e. culture.
By doing this experiment would we be, as Richard Gregory suggested, really onto something?

Wednesday, June 27, 2012

Robot know thyself

How can we measure self-awareness in artificial systems?

This was a question that came up during a meeting of the Awareness project advisory board two weeks ago at Edinburgh Napier University. Awareness is a project bringing together researchers and projects interested in self-awareness in autonomic systems. In philosophy and psychology self-awareness refers to the ability of an animal to recognise itself as an individual, separate from other individuals and the environment. Self-awareness in humans is, arguably, synonymous with sentience. A few other animals, notably elephants, dolphins and some apes appear to demonstrate self-awareness. I think far more species may well experience self-awareness - but in ways that are impossible for us to discern.

In artificial systems it seems we need a new and broader definition of self-awareness - but what that definition is remains an open question. Defining artificial self-awareness as self-recognition assumes a very high level of cognition, equivalent to sentience perhaps. But we have no idea how to build sentient systems, which suggests we should not set the bar so high. And lower levels of self-awareness may be hugely useful* and interesting - as well as more achievable in the near-term.

Let's start by thinking about what a minimally self-aware system would be like. Think of a robot able to monitor its own battery level. One could argue that, technically, that robot has some minimal self-awareness, but I think that to qualify as 'self-aware' the robot would also need some mechanism to react appropriately when its battery level falls below a certain level. In other words, a behaviour linked to its internal self-sensing. It could be as simple as switching on a battery-low warning LED, or as complex as suspending its current activity to go and find a battery charging station.
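For illustration, here is a minimal sketch of that battery-monitoring loop, assuming a hypothetical robot API. The threshold and behaviour names are invented; the point is simply that an internally sensed property is linked to an appropriate behaviour.

```python
# A minimal sketch of the battery-monitoring example above. The robot API,
# threshold and behaviour names are invented for illustration.

LOW_BATTERY_THRESHOLD = 0.2   # 20% charge; an arbitrary illustrative value

def control_step(robot):
    level = robot.read_battery_level()        # internal self-sensing, 0.0 to 1.0
    if level < LOW_BATTERY_THRESHOLD:
        # The behaviour linked to the self-sensed property: either the simple
        # response (a warning LED) or the more complex one (go and recharge).
        robot.set_warning_led(on=True)
        robot.suspend_current_activity()
        robot.navigate_to_charging_station()
    else:
        robot.continue_current_activity()
```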

So this suggests a definition for minimal self-awareness:
A self-aware system is one that can monitor some internal property and react, with an appropriate behaviour, when that property changes.
So how would we measure this kind of self-awareness? Well, if we know the internal mechanism (because we designed it), then it's trivial to declare the system as (minimally) self-aware. But what if we don't? Then we have to observe the system's behaviour and deduce that it must be self-aware; it must be reasonably safe to assume an animal visits the watering hole to drink because of some internal sensing of 'thirst'.



But it seems to me that we cannot invent some universal test for self-awareness that encompasses all self-aware systems, from the minimal to the sentient; a kind of universal mirror test. Of course the mirror test is itself unsatisfactory. For a start it only works for animals (or robots) with vision and - in the case of animals - with a reasonably unambiguous behavioural response that suggests "it's me!" recognition.

And it would be trivially easy to equip a robot with a camera and image processing software that compares the camera image with a (mirror) image of itself, then lights an LED, or makes a sound (or something) to indicate "that's me!" if there's a match. Put the robot in front of a mirror and the robot will signal "that's me!". Does that make the robot self-aware? This thought experiment shows why we should be sceptical about claims of robots that pass the mirror test (although some work in this direction is certainly interesting). It also demonstrates that, just as in the minimally self-aware robot case, we need to examine the internal mechanisms.
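Sketched in a few lines, and assuming a hypothetical robot API, such a 'trivial mirror test' robot would amount to little more than this: compare the live camera frame with a stored image of itself and signal on a strong match. Nothing in it resembles genuine self-recognition - which is exactly the point.

```python
# A sketch of the 'trivial mirror test' thought experiment: compare the live
# camera frame with a stored image of the robot itself and signal "that's me!"
# on a strong match. The robot API is hypothetical and the matching is naive.

import numpy as np

def looks_like_me(camera_frame, stored_self_image, threshold=0.9):
    """True if the camera frame correlates strongly with the stored self-image."""
    a = camera_frame.astype(float).ravel()
    b = stored_self_image.astype(float).ravel()
    return np.corrcoef(a, b)[0, 1] > threshold

def mirror_test_step(robot, stored_self_image):
    frame = robot.capture_camera_frame()      # assumed same size as the stored image
    if looks_like_me(frame, stored_self_image):
        robot.set_led(on=True)                # signal "that's me!"
```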

So where does this leave us? It seems to me that self-awareness is, like intelligence, not one thing that animals or robots have more or less of. And it follows, again like intelligence, there cannot be one test for self-awareness, either at the minimal or the sentient ends of the self-awareness spectrum.



Related posts:
Machine Intelligence: fake or real?
How Intelligent are Intelligent Robots?
Could a robot have feelings?

* In the comments below Andrey Pozhogin asks the question: What are the benefits of being a self-aware robot? Will it do its job better for selfish reasons?

A minimal level of self-awareness, illustrated by my example of a robot able to sense its own battery level and stop what it's doing to go and find a recharging station when the battery level drops below a certain level, has obvious utility. But what about higher levels of self-awareness? A robot that is able to sense that parts of itself are failing, and either adapt its behaviour to compensate or fail safely, is clearly a robot we're likely to trust more than a robot with no such internal fault detection. In short, it's a safer robot because of this self-awareness.

But these robots, able to respond appropriately to internal changes (to battery level, or faults) are still essentially reactive. A higher level of artificial self-awareness can be achieved by providing a robot with an internal model of itself. Having an internal model (which mirrors the status of the real robot as self-sensed, i.e. it's a continuously updating self-model) allows a level of predictive control. By running its self-model inside a simulation of its environment the robot can then try out different actions and test the likely outcomes of alternative actions. (As an aside, this robot would be a Popperian creature of Dennett's Tower of Generate and Test - see my blog post here.) By assessing the outcomes of each possible action for its safety the robot would be able to choose the action most likely to be the safest. A self-model represents, I think, a higher level of self-awareness with significant potential for greater safety and trustworthiness in autonomous robots.
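As a sketch of that idea - not any particular robot's implementation - the predictive loop might look like this: for each candidate action, run it through the self-model and a model of the environment, score the predicted outcome for safety, and enact the action predicted to be safest. The simulator, action set and safety score are all placeholders.

```python
# A sketch of generate-and-test over an internal self-model: 'imagine' each
# candidate action in simulation, score the predicted outcome for safety, and
# enact the safest. The simulate and safety_score functions are placeholders.

def choose_safest_action(robot_state, world_state, candidate_actions,
                         simulate, safety_score):
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        # Run the action in the continuously updated self-model plus a model
        # of the environment, rather than in the real world.
        predicted_robot, predicted_world = simulate(robot_state, world_state, action)
        score = safety_score(predicted_robot, predicted_world)
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```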

To answer the 2nd part of Andrey's question, the robot would do its job better, not for selfish reasons - but for self-aware reasons.
(postscript added 4 July 2012)

Tuesday, June 19, 2012

60 years of asking Can Robots Think?

Last week at the Cheltenham Science Festival we debated the question Can robots think? It's not a new question. Here, for instance, is a wonderful interview from 1961 on the very same question. So, the question hasn't changed. Has the answer?


Well, it's interesting to note that I, and fellow panelists Murray Shanahan and Lilian Edwards, were much more cautious last week in Cheltenham than our illustrious predecessors: both on the question of whether present day robots can think (answer: no), and on whether robots (or computers) will be able to think any time soon (answer: again, no).

The obvious conclusion is that 50 years of Artificial Intelligence research has failed. But I think that isn't true. AI has delivered some remarkable advances, like natural speech recognition and synthesis, chess programs, conversational AI (chatbots) and lots of 'behind the scenes' AI (of the sort that figures out your preferences and annoyingly presents personalised advertising on web pages). But what is undoubtedly true is that Wiesner, Selfridge and Shannon were being very optimistic (after all, AI had only been conceived a decade earlier by Alan Turing). Whereas today, perhaps chastened and humbled, most researchers take a much more cautious approach to these kinds of claims.

But I think there are more complex reasons.

One is that we now take a much stricter view of what we mean by 'thinking'. As I explained last week in Cheltenham, it's relatively easy to make a robot that behaves as if it is thinking (and, I'm afraid, also relatively easy to figure out that the robot is not really thinking). So, it seems that a simulation of thinking is not good enough*. We're now looking for the real thing.

That leads to the second reason. It seems that we are not much closer to understanding how cognition in animals and humans works than we were 60 years ago. Actually, that's unfair. There have been tremendous advances in cognitive neuroscience but - as far as I can tell - those advances have brought us little closer to being able to engineer thinking in artificial systems. That's because it's a very very hard problem. And, to add further complication, it remains a philosophical as well as a scientific problem.

In Cheltenham Murray Shanahan brilliantly explained that there are three approaches to solving the problem. The first is what we might call a behaviourist approach: don't worry about what thinking is, just try and make a machine that behaves as if it's thinking. The second is the computational modelling approach: try and construct, from first principles, a theoretical model of how thinking should work, then implement that. And third, the emulate real brains approach: scan real brains in sufficiently fine detail and then build a high fidelity model with all the same connections, etc, in a very large computer. In principle, the second and third approaches should produce real thinking.

What I find particularly interesting is that the first of these 3 approaches is more or less the one adopted by the conversational AI programs entered for the Loebner prize competition. Running annually since 1992, the Loebner prize is based on the test for determining if machines can think, famously suggested by Alan Turing in 1950 and now known as the Turing test. To paraphrase: if a human cannot tell whether she is conversing with a machine or another human - and it's a machine - then that machine must be judged to be thinking. I strongly recommend reading Turing's beautifully argued 1950 paper.

No chatbot has yet claimed the $100,000 first prize, but I suspect that we will see a winner sooner or later (personally I think it's a shame Apple hasn't entered Siri). But the naysayers will still argue that the winner is not really thinking (despite passing the Turing test). And I think I would agree with them. My view is that a conversational AI program, however convincing, remains an example of 'narrow' AI. Like a chess program a chatbot is designed to do just one kind of thinking: textual conversation. I believe that true artificial thinking ('general' AI) requires a body.

And hence a new kind of Turing test: for an embodied AI, AKA robot.

And this brings me back to Murray's 3 approaches. My view is that the 3rd approach 'emulate real brains' is at best utterly impractical because it would mean emulating the whole organism (of course, in any event, your brain isn't just the 1300 or so grammes of meat in your head, it's the whole of your nervous system). And, ultimately, I think that the 1st (behaviourist - which is kind of approaching the problem from the outside in) and 2nd (computational modelling - which is an inside out approach) will converge.

So when, eventually, the first thinking robot passes the (as yet undefined) Turing test for robots I don't think it will matter very much whether the robot is behaving as if it's thinking - or actually is, for reasons of its internal architecture, thinking. Like Turing, I think it's the test that matters.


*Personally I think that a good enough behavioural simulation will be just fine. After all, an aeroplane is - in some sense - a simulation of avian flight but no one would doubt that it is also actually flying.

Tuesday, May 08, 2012

The Symbrion swarm-organism lifecycle

I've blogged before about the Symbrion project: an ambitious 5-year project to build a swarm of independently mobile autonomous robots that have the ability - when required - to self-assemble into 3D 'multi-cellular' artificial organisms. The organisms can then - if necessary - disassemble back into their constituent individual robots. The idea is that robots in the system can choose when to operate in swarm mode, which might be the optimal strategy for searching a wide area, or in organism mode, to - for instance - negotiate an obstacle that cannot be overcome by a single robot. We can envisage future search and rescue robots that work like this - as imagined on this ITN news clip from 2008.

Our main contribution to the project to date has been the design of algorithms for autonomous self-assembly and disassembly - that is the process of transition between swarm and organism. This video shows the latest version of the algorithm developed by my colleague Dr Wenguo Liu. It is demonstrated with 2 Active Wheel robots (developed at the University of Stuttgart - who also lead the Symbrion project) and 1 Backbone robot (developed at the Karlsruhe Institute of Technology).


Let me explain how this works. The docking faces of the robots have infra-red (IR) transmitters and receivers. When a 'seed' robot - in this case the Active Wheel robot on the left - decides to form an organism with a particular body plan, it broadcasts a 'recruitment' signal from its IR transmitters, with the 'type' of robot it needs to recruit - in this case a Backbone robot. The IR transmitters then act as a beacon which the responding robot uses to approach the seed robot, and the same IR system is then used for final alignment prior to physical docking.

Once docked, wired (ethernet) communication is established between the robots, and the seed robot communicates the body-plan for the organism to the newly recruited Backbone robot. Only then does the Backbone robot know what kind of organism it is now part of, and where in that organism it is. In this case the Backbone robot determines that the partially formed organism needs another Active Wheel, and it recruits this robot using the same IR system. After the third robot has docked it too discovers the overall body plan and where in the organism it is. In this case it is the final robot to be recruited, and the organism's self-assembly is complete.
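Here is a rough sketch of that recruitment logic - my own simplified illustration, not the Symbrion code. The body plan is represented as an ordered list of required robot types; each newly docked robot receives the plan and its own position in it over the wired link, and from that decides whether another robot still needs to be recruited.

```python
# A simplified illustration (not the Symbrion code) of the recruitment logic.
# The body plan is an ordered list of required robot types; a newly docked
# robot receives the plan and its position in it over the wired connection.

BODY_PLAN = ["ActiveWheel", "Backbone", "ActiveWheel"]   # the 3-robot organism above

def on_plan_received(plan, my_index, broadcast_ir_recruitment):
    """Called on a robot once it has docked and the body plan has arrived."""
    next_index = my_index + 1
    if next_index < len(plan):
        # The organism is only partially formed: recruit the next robot type,
        # acting as an IR beacon for it to approach, align and dock.
        broadcast_ir_recruitment(robot_type=plan[next_index])
        return "recruiting " + plan[next_index]
    # This robot was the last one needed: self-assembly is complete.
    return "organism complete"
```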

Using control coordinated via the wired ethernet intranet across its three constituent robots, the organism then makes the transition from 2D planar form to 3D, which - in this case - means that the 2 Active Wheel robots activate their hinge motors to bend and lift the Backbone robot off the floor. The 3D organism is now complete and can move as a single unit. The process is completely reversible, and the complete 'lifecycle' from swarm -> organism -> swarm is shown in this video clip.

It is important to stress that the whole process is completely distributed and autonomous. These robots are not being remotely controlled, nor is there a central computer coordinating their actions. Each robot has the same controller, and determines its own actions on the basis of sensed IR signals, or data received over the wired ethernet. The only external signal sent was to tell the first robot to become the 'seed' robot to grow the whole organism. Later in the project we will extend the algorithm so that a robot will decide, itself, when to become a seed and which organism to grow.

The Symbrion system is not bio-mimetic in the sense that there are (as far as I know) no examples in nature of cells that spontaneously assemble to become functioning multi-cellular organisms and vice-versa. It is, however, bio-mimetic in a different sense. The robots, while in swarm mode, are analogous to stem cells. The process of self-assembly is analogous to morphogenesis, and - during morphogenesis - the process by which robot 'cells' discover their position, role and function within the organism is analogous to cell-differentiation.

While what I have described in this blog post is a milestone following several years of demanding engineering effort by a very talented team of roboticists, some of the ultimate goals of the project are scientific rather than technical. One is to address the question - using the Symbrion system as an embodied model - of under what environmental conditions it is better to remain as single cells, or to collaborate symbiotically as multi-celled organisms. It seems far fetched, but perhaps we could model - in some abstract sense - the conditions that might have triggered the major transition in biological evolution, some 1000 million years ago, which saw the emergence of simple multi-cellular forms.

Saturday, April 21, 2012

What's wrong with Consumer Electronics?

When I was a boy the term consumer electronics didn't exist. Then the sum total of household electronics was a wireless, a radiogram and a telephone; pretty much everyone had a wireless, fewer a radiogram, and on our (lower middle-class) street perhaps one in five houses had a telephone. (In an emergency it was normal to go round to the neighbour with the phone.) In the whole of my childhood we only ever had the same wireless set and gramophone, and both looked more like furniture than electronics, housed in handsome polished wooden cabinets. Of course it was their inner workings, with the warm yellow glow of the thermionic valves - which fascinated me, and got me into trouble when I took them to pieces - that led to my chosen career in electronics.

How things have changed. Now most middle-class households have more computing power than existed in the world 50 years ago. Multiple TVs, mobile phones, computing devices (laptops, games consoles, iPads, Kindles and the like) and the supporting infrastructure of wireless routers, printer, and backup storage, are now normal. And most of this stuff will be less than five years old. If you're anything like me the Hi-Fi system will be the oldest bit of kit you own (unless you ditched it for the iPod and docking station). Of course this gear is wonderful. I often find myself shocked by the awesomeness of everyday technology. And understanding how it all works only serves to deepen my sense of awe. But, I'm also profoundly worried - and offended too - by the way we consume our electronics.

What offends me is this: modern solid-state electronics is unbelievably reliable - what's wrong with consumer electronics is nothing - yet we treat this magical stuff, fashioned of glass, as stuff to be consumed and then thrown away. Think about the last time you replaced a gadget because the old one had worn out or become unrepairable. Hard, isn't it? If you still possessed it, the mobile phone you had 15 years ago would - I'd wager - still work perfectly. I have a cupboard here at home with all manner of obsolete kit: a dial-up modem, for instance, circa 1993. It still works fine - but there's nothing to dial into. The fact is that we are compelled to replace perfectly good, nearly-new electronics with the latest model, either because the old stuff is rendered obsolete (no longer compatible with the current generation of operating systems, applications or infrastructure - or simply unsupported), or, worse still, because the latest kit has 'must have' features or capabilities not present on the old.

I would like to see a shift in consumer electronics back to a model in which gadgets are designed to be repaired, and consumers are encouraged to replace or upgrade every ten years or more, not every year. What I'm suggesting is, of course, exactly the opposite of what's happening now. Current devices are becoming less repairable, with batteries you can't replace and designs that even skilled technicians find difficult to take apart without risk of damage. The latest iPad, for example, was given a very low repairability score (2/10) by iFixit.

And the business model most electronics companies operate is fixated on the assumption that profit, and growth, can only be achieved through very short product life cycles. But all of our stuff is not like this. We don't treat our houses, or gardens, or dining room tables, or central heating systems, or any number of things as consumer goods, but the companies that build and sell houses, or dining room tables, or landscape gardens, etc, still turn a profit. Why can't electronics companies find a business model that treats electronic devices more like houses and less like breakfast cereal?

I don't think consumer electronics should be consumed at all.

Wednesday, January 11, 2012

New experiments in the new lab

Last week my PhD student Mehmet started a new series of experiments in embodied behavioural evolution. The exciting new step is that we've now moved to active imitation. In our previous trials robot-robot imitation has been passive; in other words, when robot B imitates robot A, robot A receives no feedback at all - not even that its action has been imitated. With active imitation, robot A receives feedback: information on which of its behaviours has been imitated, how well the behaviour has been imitated, and by whom.

The switch from passive to active imitation has required a major software rewrite, both for the robots' control code and for the infrastructure. We made the considered decision that the feedback mechanism - unlike the imitation itself - is not embodied. In other words the system infrastructure both figures out which robot has imitated which (not trivial to do) and radios the feedback to the robots themselves. The reason for this decision is that we want to see how that feedback can be used to - for instance - reinforce particular behaviours so that we can model the idea that agents are more likely to re-enact behaviours that have been imitated by other agents, over those that haven't. We are not trying to model active social learning (in which a learner watches a teacher, then the teacher watches the learner to judge how well they've learned, and so on) so we avoid the additional complexity of embodied feedback.

In the first tests with the new active imitation setup we've introduced a simple change to the behaviour selection mechanism. Every robot has a memory with all of its initialised or learned behaviours. Each one of those behaviours now has a counter that gets incremented each time that particular behaviour is imitated. A robot selects which of its stored behaviours to enact at random, but with probabilities determined by the counter values, so that a higher-count behaviour is more likely to be selected. But, as I've discovered peering at the data generated from the initial runs, it's not at all straightforward to figure out what's going on and - most importantly - what it means. It's the hermeneutic challenge again.
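For illustration, here is a minimal sketch of that counter-weighted selection. Adding one to each count is my own assumption - so that never-imitated behaviours still stand some chance of being enacted - and the real selection mechanism in our experiments may differ in detail.

```python
# A minimal sketch of counter-weighted behaviour selection. Each stored
# behaviour has a count of how many times it has been imitated; selection is
# random but weighted by (count + 1), so that behaviours which have never
# been imitated still have some chance of being enacted.

import random

def select_behaviour(behaviours, imitation_counts):
    """behaviours and imitation_counts are parallel lists."""
    weights = [count + 1 for count in imitation_counts]
    return random.choices(behaviours, weights=weights, k=1)[0]

def on_imitation_feedback(imitation_counts, behaviour_index):
    """Radioed feedback from the infrastructure: this behaviour was imitated."""
    imitation_counts[behaviour_index] += 1
```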

So, for now here's a picture of the experimental setup in our shiny new* lab. Results to follow!

*In November 2011 the Bristol Robotics Lab moved from its old location, in the DuPont building, to T block on the extended Coldharbour Lane campus.

Thursday, January 05, 2012

Philip Larkin - a recollection

My first encounter with Philip Larkin was as a fresher in October 1974. As university librarian it was Larkin's annual duty to give an introductory lecture to the year's fresh intake. I recall seeing this tall, portly man with bottle-top glasses in a bank manager's suit. Not at all how I imagined a poet should look. (My Dad had told me about Larkin when I first announced I'd chosen to go to Hull University, otherwise I'm sure I'd have taken no notice at all.) To this audience of several hundred 18-year-olds - more interested in eyeing each other for fanciableness than listening to some bloke in a suit - Larkin declared with a plummy, resonant voice and measured delivery, as if it was a line from Shakespeare, "...educated people should know three things: what words mean, where places are and when things happened".

My first encounter with his poetry was several months later. It was a vacation and I was at home with Mum and Dad, younger sister and brother. Larkin was to be featured in a TV documentary and the whole family gathered expectantly round the set at the appointed time. Then the first stanza was read "They fuck you up, your mum and dad/They may not mean to, but they do/They fill you with the faults they had/And add some extra, just for you." Cue acute, embarrassed silence. My Mum, I think, said "Well I don't think much of this" and without another word the TV was switched off. We didn't discuss this (in fact I don't think we ever discussed it). It was some years later that I got to know (and love) Larkin's poetry and to reflect on the idiot producer who chose to start that TV programme with arguably his worst poem, just for the shock value of the word fuck on the BBC (this was 1975). It's not that I'm prudish about the sentiment expressed, it's just not a good poem.

Fast forward about six years. I've accepted a junior lecturing post while finishing off my PhD, and find myself a member of the science faculty board. As librarian, Larkin is an ex-officio member and I recall him contributing his opinions to the board's debates. I've long forgotten the subject of those debates but I vividly recall the manner of Larkin's contributions. He would stand, as if addressing parliament, and speak what I can only describe as Perfect English. His articulation, diction and metre were actor-perfect. If you had written down exactly what he said - and punctuation would have been easy, for he paused in commas and semi-colons - you would get perfect prose; each word exactly the right word, each phrase perfectly turned. I was, at the time, going out with a girl who worked in the library and she told me Larkin's memoranda were the same: each a miniature essay, a perfectly formed construction of letters.

I never knew Larkin. Nobody did. He was a distant, unapproachable man and, by all accounts, not at all likeable. The closest he and I came to conversation was exchanging nods across the lunchtime staff common-room bar. I find it satisfyingly ironic therefore that a man so apparently detached and unemotional should have written what is, for me, the finest love poem of the 20th Century: An Arundel Tomb (1).

The poem starts: Side by side, their faces blurred, the earl and countess lie in stone, and then in the second verse the beautiful observation: Such plainness of the pre-baroque hardly involves the eye, until it meets his left-hand gauntlet, still clasped empty in the other; and one sees, with a sharp tender shock, his hand withdrawn, holding her hand. I love the words sharp tender shock; then in the next verse: Such faithfulness in effigy... A sculptor’s sweet commissioned grace.

In the fifth verse Larkin constructs a spine tingling evocation of the long passage of time: Rigidly they persisted, linked, through lengths and breadths of time. Snow fell, undated. Light each summer thronged the glass. A bright litter of birdcalls strewed the same bone-riddled ground. And then the remarkable conclusion of the poem: The stone fidelity they hardly meant has come to be their final blazon, and to prove our almost-instinct almost true: what will survive of us is love.

Forgive me for removing the line breaks in these extracts from the poem. In doing so I want to illustrate my observation that, in Larkin's writing, there is little distance between prose and poetry. When reading his poems I've reflected often on why it is that a man with such an apparently effortless ability to produce perfect English published so little, and agonised so much over his writing. I now realise that he didn't have a problem with writing, but with life. "The object of writing," Larkin once said, "is to show life as it is, and if you don't see it like that you're in trouble, not life."


(1) from The Whitsun Weddings, Faber and Faber, 1964. And here is both the full text of An Arundel Tomb and Larkin reading the poem.