Wednesday, December 26, 2007

Human consciousness could be immortal

Our subjective experience of the 'continuity' of consciousness is surely an illusion. But what creates that illusion, and why is it so compelling? That's a deep question, but here, I think, are two fundamental reasons:

1. Embodiment. You are an embodied intelligence. It is a mistake to think of mind and body as somehow separate. Our conscious experience, and its awakening in the developing child, is surely deeply rooted in our physical experience of the world as mediated through our senses.

2. Environmental continuity. Our experience of the world changes 'relatively' slowly. The word relative is important here since I mean relative to the rate at which our conscious experience updates itself. Of course we do experience the discontinuity of going to sleep then waking to the changed world of a new sunrise, but this is both deeply familiar and predictable.

A word that encompasses both of these is situatedness. Our intelligence, and hence also our conscious experience, is inextricably situated in our bodies and in the world. Let me illustrate what this might mean with a thought experiment.

Imagine a brain transplant. Suppose your brain, complete with its memories and a lifetime's experience, together with as much of your central nervous system as might be needed for it to function properly, were transplanted into a different body. You would wake from the procedure into this new body. I strongly suspect that you would experience a profound and traumatic discontinuity of consciousness and, well, go mad. Indeed it's entirely possible (and perhaps mercifully) that you simply couldn't regain or experience any sort of consciousness at all. Why? Because the conditions for the emergence of consciousness, and the illusion of its continuity, would have been irreversibly broken.

However, if what I have said above is true, there's a flip side to the story that could have extraordinary consequences.

If the continuity of consciousness is an illusion then, in principle at least, it might be possible to artificially perpetuate that illusion.

Imagine that at some future time we have a sufficiently deep understanding of the human brain that we can scan its internal structures for memories, acquired skills, and all of those (at present dimly understood) attributes that make you you. It's surely safe to assume that if we're able to decode a brain in this way, then we would also be able to scan the body structures (dynamics, musculature and the deep wiring of the nervous system). It would then be a simple matter to perform that scan at, or just before, the point of death and transfer those structures into a virtual avatar within a virtual world. The simulated brain structures would be 'wired' to the avatar's virtual body in exactly the same way as the real brain was wired to its real body, thus satisfying the requirement for embodiment. If the virtual world is also a high-fidelity replica of the real world then we would also satisfy the second requirement, environmental continuity.

I would argue that, under these circumstances, the illusion of the continuity of consciousness could be maintained. Thus, on dying you would awake (in e-heaven), almost as if nothing had happened. Except, of course, that you could be greeted by the avatars of your dead relatives. Even better, because e-heaven is just a virtual environment in the real world, you could just as easily be visited by your living friends and relatives. Could this be the retirement home of the far future?

In this way human consciousness could, I believe, be immortal.

Monday, December 24, 2007

My love hate relationship with an Xbox 360

Ok, I admit it. I have an Xbox 360.

Of course I'm hopeless at playing video games, which is perhaps just as well because it means that I don't spend long playing (or wasting time, depending on your point of view). Just as long as it takes for me to get frustrated by how useless I am and give up. (But it's ok because my eldest son levels me up when he comes to stay;-).

However, as a piece of hardware it's awesome: liquid cooled, with three processor cores plus a graphics processing unit. As someone who worked with the very first 8-bit microprocessors over 30 years ago (see my post By, you were lucky...) and with pretty much every generation since, I ought to be inured to the now-expected doubling of performance roughly every two years (Moore's Law). But I'm not. I still easily get awestruck by next-generation hardware and its capability. I'm equally impressed by the Xbox 360's operating system; its so-called 'blade' interface is intuitive and a pleasure to use.
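
To put that compounding into perspective, here's a back-of-the-envelope sum (illustrative only, using the rough two-year doubling figure rather than any precise benchmark):

    # Rough arithmetic: if performance doubles every ~2 years (Moore's Law),
    # 30 years of that compounding gives roughly a 32,768-fold increase.
    years = 30
    doublings = years / 2
    print(f"~{2 ** doublings:,.0f}x over {years} years")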

Of course this blog is really just an excuse for me to post a screen capture from the stunning Project Gotham Racing 4 of me driving (in my dreams) an Enzo Ferrari down The Mall, in the rain.

Saturday, December 08, 2007

The future doesn't just happen, we must own it

Had a remarkable couple of days last week. Walking with Robots (WWR), the Royal Academy of Engineering (RAE) and the London Engineering Project jointly ran a pilot Young Peoples' Visions Conference. The aim of the two-day event was to give around 20 young adults (aged 16-18) an opportunity to "explore visions of their future and the part robots will play" in that future.

For me the event was important because it takes Walking with Robots into a new form of dialogue. At WWR events and activities over the past year I have been privileged to engage with many children, teenagers and adults, and I increasingly have a sense that our children, in particular, believe that the future is nothing to do with them: that technology is something that simply happens, or is imposed from outside, and that they are merely passive consumers of, or targets for, that technology. In this conference we really tried to change that view. It was wonderful to see the group change from - at the beginning - seeing this as two days out of college to - by the end - being deeply engaged in the dialogue. I got a real sense of them feeling empowered and that their considered opinions matter - not least because those opinions will be published by the RAE and sent to policymakers. In other words, they do have a say in their own technological futures.

Friday, October 26, 2007

A Mac with new spots: installing Leopard

A day off sick with a head cold and painful sinuses had one consolation: I had a little time this afternoon to install Leopard - the new version of Mac OS X (which, interestingly, arrived this morning, before its official launch at 6.00pm this evening).

How did the installation go? Well I'm happy to report that it was remarkable for being unremarkable. Just two minor comments: firstly, there was a very long pause (5 minutes perhaps) at the start of the install process proper, when the time remaining said 'calculating' and there was no apparent hard disk or DVD activity - I was beginning to have doubts about whether all was ok before the process sprang into life again (note to Apple: any kind of long and worrying pause like this really should still have some sort of progress indicator no matter how simple). Secondly, the time remaining calculation appeared to have difficulty making its mind up. Initially it said something over three and a half hours and then revised its estimate downwards over the next 30 minutes or so. In the end it took about an hour and a half from start to finish, and that included a long time for install disk verification.

First impression? Well it's fine. It's an operating system, which means - in my view - it should not be the main event but should just get on and do its thing in the background while letting me get on with my work. It looks very nice of course, but so did Tiger. Cosmetically it's not such a big difference, especially for me since I place the dock on the left rather than at the bottom. (Ergonomically it makes more sense there, because a leftward mouse movement to reach the dock is far easier than a downward one.)

The main new feature that I am immediately and gratefully using is called 'spaces'. It is basically the same virtual-desktop facility that Linux window managers have had for years - and which I have missed since switching (back) to the Mac - meaning I can open applications across four virtual screens and then quickly switch between them. This is great for me because when writing I like to have Firefox open for web searches, OpenOffice for drawing diagrams, Preview for reading PDF papers, and BibDesk and TeXShop for the actual writing. A single screen gets pretty crowded. (Of course what I'd really like is a bank of LCD displays so I can see everything at once, but - for now - spaces will have to do.)

What else? Well the ability to instantly search and then - again with almost no delay - view the search results with 'cover flow', and then use 'quick look' to review what you find in more detail, is terrific. The way that quick look opens everything from PowerPoint presentations to movies, allows you to skip through files with the left and right arrow keys, and also lets you scroll up and down individual files is just great. For the first time in 33 years of using computers I really think I don't need to remember filenames anymore. Given that this is still a good old-fashioned Unix file system underneath, Leopard is probably as close as you can get to the feel of an associative 'content-addressable' file system.

----------------------------------------------------------
*Footnote: I returned to the Mac earlier this year after a 20-year separation. The first computers at APD (that we didn't design and build ourselves) were 128K Macs in 1985. Lovely machines with a proper windowing OS (while the PC was still running DOS) that were used for everything from word processing and accounts to technical drawing.

Sunday, October 07, 2007

You really need to know what your bot(s) are thinking (about you)

The projected ubiquity of personal companion robots raises a range of interesting but also troubling questions.

There can be little doubt that an effective digital companion, whether embodied or not, will need to be both sensitive to the emotional state of its human partner and able to respond sensitively. It will, in other words, need artificial empathy - such a digital companion would (need to) behave as if it has feelings. One current project at the Bristol Robotics Laboratory is developing such a robot, which will of course need some theory of mind if it is to respond appropriately. Robots with feelings take us into new territory in human-machine interaction. We are of course used to temperamental machinery, and many of us are machine junkies: we are addicted to our cars and dishwashers, our mobile phones and iPods. But what worries me about a machine with feelings (and frankly it doesn't matter whether it really has feelings or not) is how it will change the way humans feel about the machine.

Human beings will develop genuine emotional attachments to companion bots. Recall Weizenbaum's secretary's sensitivity about her private conversation with ELIZA - arguably the world's first chat-bot - in the 1960s. For more recent evidence look no further than the AIBO pet owners' clubs. Here is a true story from one such club to illustrate how blurred the line between pet and robot has already become. One AIBO owner complained that her robot pet kept waking her at night with its barking. She would "jump out of bed and try to calm the robo-pet, stroking its smooth metallic-gray back to ease it back to sleep". She was saved from "going crazy" when it was suggested that she switch the dog off at night to prevent its barking.

It is inevitable that people will develop emotional attachments to, even dependencies on, companion bots. This, of course, has consequences. But what interests me is what happens if the bots acquire a shared cultural experience. Another BRL project, called 'the emergence of artificial culture in robot societies', is investigating this possibility. Consider this scenario. Your home has a number of companion bots. Some may be embodied, others not. It is inevitable that they will be connected, networked via your home wireless LAN, and thus able to chat with each other at the speed of light. This will of course bring some benefits - the companion bots will be able to alert each other to your needs: "she's home", or "he needs help getting out of the bath". But what if your bots start talking about you?

Herein lies the problem that I wish to discuss. The bots' shared culture will be quintessentially alien, in effect an exo-culture (and I don't mean that to imply anything sinister). Bot culture could well be inscrutable to humans, which means that when bots start gossiping with each other about you, you will have absolutely no idea what they're talking about because - unlike them - you have no theory of mind for your digital companions.

--------------------------------------------------------------
This is a short 'position statement' prepared for the e-horizons forum Artificial Companions in Society: Perspectives on the Present and Future, 25th and 26th October, Oxford Internet Institute.

Sunday, September 16, 2007

A night train to Lisbon*

Fed up with airports and - perhaps rather pathetically - trying to do my bit for climate change, I'm taking the train from Bristol to Lisbon. In planning the trip I quickly discovered that taking the planet-friendly option is neither quick, cheap, nor particularly easy to organise. Of course I didn't expect it to be quick, and anyway part of the attraction was to actually see some countryside en route. Nor did I especially expect it to be cheap - around 29 hours of train travel across four countries, including a sleeper, is never going to compete with a point-to-point budget (or even regular) airline. As for organisation, amazingly I found a great web page dedicated to the business of getting from the UK to Lisbon by train.

There are a couple of options, but the one I chose (and the shortest in travelling time) was Eurostar to Paris, TGV to Irun, then the sleeper 'Sud Express' from Irun to Lisbon. So, an early start from Bristol to get the 06.56 Bristol Parkway to Paddington, to be in good time for the 10.10 Eurostar to Paris Nord. Not my first time on Eurostar but, I'm ashamed to say, my first time in Paris, so I got a cab from Gare du Nord to Gare Montparnasse, to get at least a glimpse of the city. The taxi route crossed the Seine and passed the Pyramide du Louvre and the Arc de Triomphe. Wonderful. Mental note to self: must come here for a proper visit. Then I had less than an hour to wait for the TGV. No time to do anything but wait in an especially dismal café on the station concourse - with amazingly aggressive sparrows trying to fight me for my tarte aux pommes. Good coffee though.

The 15.50 TGV from Paris to Irun was packed and - I must confess - slightly disappointing. Not in speed or timeliness - which couldn’t be faulted - but I was somewhat nonplussed to find there was no restaurant car. For some reason (probably a romantic impression gleaned from too many European train movies - Poirot and the like), I expected that on a five and a half hour journey I would be able to get dinner. Oh well, at least the buffet was a significant improvement on English trains, and the French seem to prefer to stay in the buffet car to consume their sandwich and coffee, which I rather liked. The loos were also pretty inadequate for the number of people on the train and the length of the journey. There’s nothing quite like the dismal experience of putting a generous gloop of soap on your hands to then discover that the water has run out and there are no hand towels. Still - pretty impressive to find myself looking out of the window on the Atlantic at Biarritz a little over 5 hours after leaving Paris.

I did notice that the French have not banned smoking from stations - only trains. Thus, on every approach to a station a handful of smokers would gather by the carriage doors ready to leap onto the platform for a tobacco fix in the few brief minutes of the halt. I don’t think I ever saw anyone drag so deeply or gratefully on his kingsize Gitanes as one fellow.

Resumed 11.30pm. The romance of long distance trains fully restored on discovering a restaurant car on the Sud Express. The slick air-con and Starbucks-alike chic of the TGV buffet is replaced by 1960s wood-effect plastic and a no-nonsense bar of the sort you can prop yourself up against. And I did. No namby-pamby air conditioning here. The only way to reduce the temperature to anything bearable is to open all the windows which means the noise level in the restaurant car is deafening. Which is fine really because it helps to mask the fact that I don’t speak a word of either Spanish or Portuguese, and the waiter doesn’t speak more than a word or two of English. There were only three of us eating, me and a Spanish looking couple, his back to me and her huge beautiful brown all-seeing-all-knowing-female-wise-eyes glancing at me from time to time. After a while I realised that the reason there were so few of us in the restaurant was that the doors at the other end of the carriage required superhuman strength to open - eventually a few did manage the necessary feat of strength and the bar achieved something like a friendly buzz. I think there must be something very strange about me that I could so much enjoy this solitary repast while clattering through Spain in the middle of the night.

The following morning. The night seemed too hot for anything like proper sleep on top of which the sleeping car was right behind the engine which sounded to me like the oldest diesel locomotive they could possibly muster. However, the rhythm of the train did get the better of the noise and I awoke surprisingly refreshed. A leisurely breakfast in the restaurant car and in what seemed no time at all we were pulling into Lisbon's Santa Apolonia station exactly on time at 11.03am. Great. No queuing at passport control or the baggage claim. Just step off the train into the city. A civilised way to arrive.

-----------------------------------------
*With apologies to Emily Grayson

Friday, August 17, 2007

A truly Grand Challenge

Between 13th and 16th August thirteen teams descended on Monte Ceneri in Ticino, Switzerland, for C-ELROB - the Civilian European Land Robotics Trials. Simply arriving was, in fact, something of a trial for the teams; given the physical size of many of the robots and the amount of supporting equipment and tools, air travel was clearly impossible. Teams therefore drove from the four corners of continental Europe, from as far apart as Portugal, Poland and Finland. For the Finnish lads this meant an epic 3,500km road trip.

The event was staged on and around the Monte Ceneri Swiss army base in the beautiful forested mountains above Lake Maggiore. The base provided a perfect environment for the four outdoor robotics challenges. All were based on the same basic task: search for and locate, as autonomously as possible, number-plate-sized orange markers, including reading the letters and numbers on the marker plates (an abstracted search-and-rescue task). The winner of each challenge would be the robot that found and read the most marker plates, in the shortest time, with the least human intervention. (The rules allowed for human tele-operation of robots, but this carried a points penalty compared with fully autonomous operation.)
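
The official points scheme isn't something I'll reproduce here, but the spirit of the judging can be sketched in a few lines of Python (the weights below are purely illustrative, not the actual C-ELROB rules): credit for each plate correctly found and read, a bonus for speed, and a deduction for every tele-operated intervention.

    def trial_score(plates_read, time_minutes, interventions,
                    plate_points=10, time_budget=60, intervention_penalty=5):
        """Illustrative scoring in the spirit of the trials; all weights are made up."""
        speed_bonus = max(0, time_budget - time_minutes)   # faster runs earn more
        return (plates_read * plate_points
                + speed_bonus
                - interventions * intervention_penalty)

    # A fully autonomous run can beat a faster but heavily tele-operated one:
    print(trial_score(plates_read=6, time_minutes=45, interventions=0))   # 75
    print(trial_score(plates_read=6, time_minutes=30, interventions=8))   # 50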

I was privileged to be able to observe the competition closely, as one of the small team of judges. Although I have been closely involved in robotics research for 15 years, this was my first up-close-and-personal experience of outdoor robotics, and my first impression was the sheer difficulty of the physical terrain: robots facing the non-urban challenges in particular had to move and navigate through hilly forest and undergrowth, across rutted and muddy trails and, on day two, cope with heavy rain as well. The 'urban' challenge of day three was perhaps marginally easier for the robots, but still very tough, with steps, ramps, and indoor as well as outdoor sections to negotiate.

Some teams were clearly well financed, with, for instance, impressive 4x4 vehicles fitted with state-of-the-art high-performance sensors, whereas other teams brought robots that had clearly been hand-built by the students themselves on a limited budget. My observation was that the well-financed teams did not have a significant advantage. The nature of the challenges was such that - provided the robot could cope with the physical demands of the trials - it was the quality and ingenuity of the autonomous control strategies that was being tested.

All four challenges were demanding, but the ultimate trial on day four was undoubtedly the most testing of all. Robots were required to drive around a 3km forest road while spotting, reading and recording the location of the orange markers, with the optional addition of a 2km off-road section. Although tele-operation was allowed within the rules (albeit with a penalty), the hilly terrain meant that maintaining a radio signal between base and robot was more or less impossible. This challenge required nothing less than full autonomy, which - given the difficulty of the mountain roads, with no road markings, no clear road edges, deep shadows cast by overhanging trees, and hazards like steep drops and hairpin bends - amounted to a significantly harder test than the much-feted DARPA Grand Challenge. Remarkably, the winner of this event - the unassuming University of Hanover robot shown here, not much bigger than a wheelchair - completed the course in a little over 40 minutes.

Quite apart from demonstrating the state of the art in outdoor robotics, this event was an extraordinarily valuable experience for the teams who entered. Most of the student teams worked around the clock in their makeshift mini-workshops of clustered laptops to test and refine their robots, right up to the last minute (one lead professor remarked to me that all he had to do was keep his team supplied with pizza). Modern robots are very complex machines which must closely integrate mechanical and propulsion systems; power and energy management; sensors, vision and navigation; wireless communications; and command-and-control software. The key to success here is systems integration, and that integration was reflected not only in the engineering but in the very effective teamwork that was much in evidence at C-ELROB. That teamwork, together with the sense of healthy rivalry between teams, contributed to a remarkably successful event.

The full list of teams, and pictures of their robots are here.

If you can read German (or if not check out the pictures anyway), here is a detailed account of each of the 4 days of the event by science journalist Hans-Arthur Marsiske:
Day 1: Combined air and land trial
Day 2: Off-road (woodland) trial
Day 3: Urban marketplace search trial
Day 4: Autonomous reconnaissance

Sunday, July 08, 2007

Could a robot have feelings?

One of the great pleasures of giving public lectures is the questions that come from the audience, and my talk last week to the Bath U3A group was no different. A great question (and one I've been challenged with before) was "could a robot have feelings - emotions like fear, sadness or love?"

My questioner (and I guess most people) would be hoping for a comforting answer: no, only people and maybe some animals (your dog) can have feelings. In one sense that answer would be correct, because if we define feelings simply and subjectively as 'the feelings we experience when we are afraid, sad, or in love', then by definition those emotions are uniquely human and something robots could not have.

For the sake of argument let's get over that by asking a slightly different question: could a robot have artificial emotions that are - in some way - an analogue of human feelings? At this point I would normally say to my questioner: well, yes, I can conceive of a robot that behaves as if it is experiencing emotions. A humanoid robot, in other words, that acts as if it were afraid, or sad. The better the technology, the more convincing and subtle the (artificial) body language would be. Actually we don't have to try very hard to imagine such a robot - colleagues in the lab are working on robot (heads) with artificial empathy. Here's a picture.

At this point my questioner says "ah, but that robot doesn't really have feelings. It's just pretending to have feelings".

True! But don't human beings do that all the time? Isn't it the case, in fact, that we value very highly the ability to pretend to express feelings, providing it's the honest deception of the actor's trade? We also value people who can make us feel in different ways: musicians, artists, poets or storytellers. Ok, I admit I'm being a little tricky here, but the point I'm trying to make is that humans can be very good at pretending to have feelings, or at deceiving others into having feelings, and some of us (me included) have a hard time telling the difference.

Back to robots though. Yes, a robot that is programmed to behave as if it has feelings cannot, I grant, be said to actually have feelings. At best that robot is a good actor. This might seem to be the end of the issue but it's not, because the possibility of a robot that is very good at behaving as if it has feelings raises some pretty interesting issues. Why? Because in human-robot interaction such a robot could have quite some power over the human. What interests me here is how such a robot would make humans feel towards it. Could a human, for instance, fall in love with a deeply (albeit artificially) empathic robot? I suspect the answer is yes.

This is a subject I'll return to in future blogs: not only the social implications of robots with artificial feelings, but also the question of whether robots could be designed that really do have feelings.

Friday, May 11, 2007

Anthropomorphism and robotics

Spent the last two days at a really interesting workshop: Practices of Anthropomorphism: from Automata to Robotics. The best thing about the meeting was the mix of disciplines, including anthropology, psychology, art and robotics. Terrific. The key to creativity, IMHO, is working with people outside your own discipline. It can be tough, mind you, outside the comfort zone of familiar ideas and frameworks. But that's precisely the point. Recall Koestler's brilliant "The Act of Creation" - creativity happens when two previously disconnected ideas intersect, meaningfully. (Or, humour, if the intersection is absurd.)

So, what were the creative outcomes of the workshop? Well, there were plenty of "ah ha!" moments (as well as lots of "ha ha"). For me the big insight was the realisation (obvious I guess to the anthropologists) that human beings are so utterly pre-disposed - hard-wired, even - to anthropomorphise. Whether we like it or not we are, it seems, anthropomorphiliacs, with a compulsion to attribute anthropic qualities to, and develop emotional attachments with, animals and artefacts: almost anything from the everyday to the exotic, the banal to the sublime. This is very interesting to a roboticist on a number of levels. For example, even simple robots are imbued with characteristics they don't have (cute, inquisitive, happy, ill), and at the other end of the robot spectrum humans will anthropomorphise and thereby compensate for the shortcomings of humanoid or android robots. In other words robot builders don't need to worry about making android robots into perfect artificial humans - cartoon robots will do, like NEC's PaPeRo shown here (thus also neatly avoiding the Uncanny Valley). You may think this is just another fraud, like de Vaucanson's defecating duck. And, in a way, it is. But as long as we, the roboticists, are completely honest and transparent about the real capabilities of our robots - what they can, but especially what they cannot, do - then it is a fraudulent contract that humans and robots can willingly and beneficially submit to.

This brings me to another question. Why are we so fascinated by robots? I think the answer lies in another surprising innate ability of humans. That is our ability to tell the difference between animate and inanimate. I think we love robots because they behave as if they are animate, yet we know them to be inanimate artefacts. We are, I believe, delighted by this deception.

A final thought: if, as Gabriella Airenti brilliantly argued at the workshop, anthropomorphism is an (inevitable) consequence of imitation and theory of mind, then it's surely not inconceivable that future intelligent robots might also develop this tendency. Except that, for them, it would be robomorphism.

Sunday, April 15, 2007

Walterian Creatures

In his remarkable book "Darwin's Dangerous Idea" Daniel Dennett describes the Tower of Generate-and-Test, a brilliant conceptual model for the evolution of intelligence that has become known as Dennett's Tower. I propose here another storey for the Tower, for what I want to call Walterian Creatures, after the pioneering neurophysiologist W. Grey Walter, inventor of the world's first electro-mechanical autonomous mobile robot.

In a nutshell, Dennett's tower is a set of conceptual creatures, each of which is successively more capable of reacting to (and hence surviving in) the world by virtue of having more sophisticated strategies for 'generating and testing' hypotheses about how to react. Read chapter 13 of Darwin's Dangerous Idea for the full account; there are also some reasonable précis to be found on the web - here's one fulsome description. But for now here's my very brief outline of the storeys of Dennett's tower, starting on the ground floor:
  • Darwinian creatures have only natural selection as the generate and test mechanism, so mutation and selection is the only way that Darwinian creatures can adapt - individuals cannot.
  • Skinnerian creatures can learn but only by literally generating and testing all different possible actions then reinforcing the successful behaviour (which is ok providing you don't get eaten while testing a bad course of action).
  • Popperian creatures have the additional ability to internalise the possible actions so that some (the bad ones) are discarded before they are tried out for real.
  • Gregorian creatures are tool makers including - importantly - mind tools like language, which means that individuals no longer have to generate and test all possible hypotheses since others have done so already and can pass on that knowledge.
  • Scientific creatures. Here Dennett proposes that a particular way of rigorously, collectively and publicly testing hypotheses - namely the scientific method - is sufficiently powerful and distinct to merit a further floor of the tower. (I'm not sure that I agree; however, that isn't important to the point I'm trying to make in this blog.)

Like the Tower of Hanoi, each successive storey is smaller than (a sub-set of) the storey below; thus all Skinnerian creatures are Darwinian, but only a sub-set of Darwinian creatures are Skinnerian, and so on.
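
For readers who like to see such things in code, here is a minimal sketch - my own illustration, not Dennett's - of the difference between the Skinnerian and Popperian storeys: the Skinnerian creature must test candidate actions in the real world and reinforce whatever happens to work, while the Popperian creature lets its hypotheses die in its stead by testing them against an internal model first. The actions, payoffs and 'world' below are entirely made up.

    import random

    ACTIONS = ["approach", "retreat", "freeze", "explore"]

    def world_outcome(action, situation):
        """The real world scores an action; a very negative score may mean not surviving it."""
        payoff = {"predator": {"approach": -10, "retreat": 5, "freeze": 2, "explore": -4},
                  "food":     {"approach": 8,   "retreat": -2, "freeze": 0, "explore": 3}}
        return payoff[situation][action]

    def skinnerian_choice(situation, memory):
        """Trial and error in the world: reinforce whichever action worked before."""
        if situation in memory:                      # previously reinforced behaviour
            return memory[situation]
        action = random.choice(ACTIONS)              # generate blindly and test for real
        if world_outcome(action, situation) > 0:     # reinforcement, if it survives the test
            memory[situation] = action
        return action

    def popperian_choice(situation, internal_model):
        """Test candidate actions internally first; act only on the best survivor."""
        predicted = {a: internal_model(a, situation) for a in ACTIONS}
        return max(predicted, key=predicted.get)

    # A Popperian creature whose internal model happens to mirror the world:
    print(popperian_choice("predator", world_outcome))   # -> 'retreat'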

Gregorian creatures (after Richard Gregory) are tool makers, of both physical tools (like scissors) and mind-tools (like language and mathematics), and Dennett suggests that these tools are 'intelligence amplifiers'. Certainly they give Gregorian creatures a significant advantage over merely Popperian creatures, because they have the benefit of the shared experience of others, expressed either through using the tools those others have made or refined or, more directly, through their spoken or written knowledge and instructions. Arguably the most powerful intelligence amplifier so far created by one particular species of Gregorian-Scientific creature - man - is the computer, for with it we are able to simulate almost any reality we can imagine. Simulation is potent stuff: gedanken experiments are no longer doomed to remain flights of fancy, and mathematical models need no longer remain dry abstractions. And one of the most remarkable kinds of computer simulation is of intelligence itself: Artificial Intelligence.

What if the tools made by Gregorian creatures take on a life of their own and become, in a sense, independent of the tool-makers? Embodied AI (= Artificial Life) has this potential. Walterian creatures are, I propose, smart tools that have learned to think, grown up and left the toolbox. Think of future intelligent robots (far more capable than the crude prototypes we can currently build) that might co-exist with humans in an extraordinary and fulfilling symbiosis.

The defining characteristic of Walterian creatures is that they are artificial. They've not only left the toolbox but crawled out of the gene pool: no longer bound by the common biochemistry of Earth's biota, yet sharing both the inheritance and the evolutionary (albeit artificial) processes of their Darwinian ancestors. So what does this mean for Walterian creatures? Well, all of the Walterian's ancestors share the fact that, however simple or sophisticated their strategies for hypothesising about possible actions, those actions have to be undertaken by the self-same physical creatures that do the hypothesising. Ok, Gregorian-Scientific creatures can augment themselves with magnificent tools that compensate for their own sensory or physical limitations - electron microscopes, submarines or manned spacecraft, or remotely operated robot space probes that act as sense extenders - but one thing Gregorian individuals cannot do is evolve themselves as part of the generate-and-test process. Consider this scenario. A future intelligent autonomous robot is exploring a planet about which very little is known. As part of its generate-and-test strategy this Walterian can, in simulation, fast-forward artificial genetic algorithms to evolve its own physical capabilities, and then re-build parts of itself on the fly to best deal with the situation it has encountered. It could, for instance, artificially evolve and re-engineer for itself the means to make best use of whatever energy sources are to hand. (It would be like you or me falling into a river and being able to artificially evolve and grow gills in less time than it takes to drown.)
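
As a toy illustration of what 'fast-forwarding artificial genetic algorithms in simulation' might look like - with a completely made-up fitness function standing in for the robot's model of its environment - the Walterian might evolve a single morphological parameter, say the area of an energy-harvesting membrane, before committing to a physical rebuild:

    import random

    def simulated_fitness(membrane_area, env_energy_density=0.4):
        """Hypothetical simulation: energy harvested minus the cost of a larger membrane."""
        return membrane_area * env_energy_density - 0.05 * membrane_area ** 2

    def evolve_morphology(generations=200, population_size=30, mutation=0.2):
        """A minimal genetic algorithm run entirely in simulation."""
        population = [random.uniform(0.1, 10.0) for _ in range(population_size)]
        for _ in range(generations):
            population.sort(key=simulated_fitness, reverse=True)
            parents = population[: population_size // 4]        # select the fittest designs
            population = [max(0.1, random.choice(parents) + random.gauss(0, mutation))
                          for _ in range(population_size)]      # mutate to form the next generation
        return max(population, key=simulated_fitness)

    best_design = evolve_morphology()
    print(f"re-build membrane with area ~{best_design:.2f}")    # then re-engineer the body to match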

Walterian creatures are, like Gregorians, able to share tools, knowledge and experience. They will be fully interconnected, so that any individual - subject only to the physical delays of the networking technology - can instantly seek information or resources from the shared Walterian artificial culture. However, unlike Gregorians, these individuals are capable of Lamarckian learning. Need a skill fast? If you're a Walterian creature then, providing at least one other individual has already learned the skill and is either online or has previously uploaded it, you simply download it. Walterian creatures would surely be profoundly different - and perhaps unimaginable to our merely Gregorian kinds of minds.

Sunday, March 25, 2007

The Mozart meme

Heard some great lectures yesterday evening at the Bath Royal Literary and Scientific Institution. John Sloboda gave a wonderful talk about the psychology of music, and in particular what makes the difference between a musical genius and the rest of us. He showed, with some pretty compelling evidence, that, firstly, you need to start learning an instrument very young and, secondly, you need around 10 years of practice averaging 2 hours per day - not just any old practice, but practice that is strongly supported and guided - before then starting composition. Are musical geniuses born or made? John offered the view that it's much more down to nurture than nature. He suggested that genetics may account for musical deficits (such as the very small number of people who are genuinely tone-deaf) but probably not musical genius.

John told a story so fascinating that I want to recount it here. A US research team ran IQ tests on two groups of adults: one group immediately after listening to Mozart, the other (control) group without. The Mozart group showed higher IQ scores than the control group. Now that's interesting enough, but it's what happened subsequently, outside the research lab, that is really quite shocking. John recounted that a journalist reported this finding as "The Mozart Effect"; parents anxious to improve their youngsters' IQs started playing them Mozart, schools introduced the same practice, and in some US districts this became a requirement of the education authority. Pop-psych books were published and money was made. Google the Mozart effect and you'll see what I mean.

But does it work? No. John explained that the original study was done on adults; subsequent work has shown that the same effect isn't apparent in children, and even in adults the IQ-raising effect wears off after 10 minutes or so. But that's the power of a great meme. The idea is so attractive that as soon as it catches hold, the truth behind it becomes irrelevant. And of course Mozart already has almost mythical super-genius status, so the Mozart effect meme was already riding on a winner. Someone asked John if the same would have happened if the original study had used another composer's music. "Almost certainly not", he replied, "the Couperin effect doesn't have anything like the same ring to it!"

But the world is full of such memes. Some emerge from flaky science, others from a flawed interpretation of otherwise good science. A particular hobby horse of mine is "The Big Bang". Popular culture regards the Big Bang as an established fact. But it isn't. There are two competing theories for the origin of the Universe: one is the big bang theory, the other is the steady-state theory. The problem with the steady-state theory is that it's just dull. Where's the excitement in the idea that the Universe has always existed? Like the Mozart effect, the big bang theory feeds a need. Finite creatures that we are, we like the idea that the Universe has a birth and a death. And if you believe in God, even better. The steady-state theory is not good news for theists.

Memes really are powerful magic.

Wednesday, March 14, 2007

Homo dinosauroid

Last night's Horizon was promising: what might have happened if the asteroid that is generally agreed to have triggered the extinction event at the end of the Cretaceous period 65 million years ago had missed? This should be good, I thought. Interesting to speculate about how dinosaurs might have continued to evolve. What forms might they have evolved into by now..?

But the programme was spoiled by an unnecessary and scientifically dubious focus on the question "what would have happened if humans had co-evolved along with dinosaurs?".

Given the extraordinary success of the dinosaurs in exploiting ecological niches (as the programme pointed out), the likelihood that mammals would have evolved very much beyond the rodent-like animals (like Repenomamus) that managed to just about co-exist with dinosaurs must be vanishingly small. (Clutching at straws, perhaps) the programme suggested that the tree-tops might have provided a dinosaur-free niche in which primates could have evolved, but it failed to address the question of why dinosaurs would not also have moved into the same eco-space, especially with fresh mammalian meat to tempt them.

But for me the programme makers lost it completely with the suggestion that intelligent humanoid dinosaurs might have co-evolved with humans. Now I love thought experiments, but the idea that homo dinosauroid would now be peacefully sharing our 21st C. cafe culture is, frankly, insulting to dinosaurs. We were shown a rather meek and frightened-looking specimen (well, you would be too with no clothes on) - clearly 21st C. homo d. needs to get down to the gym.

Now I have no problem at all with the idea that dinosaur evolution, if it had not been rudely interrupted by the Chicxulub asteroid, might have resulted in highly intelligent dinosaurs, language, culture and so on (especially given emerging evidence for gregarious behaviour in dinosaur groups). If the asteroid had missed, and (against the odds) primates and hominids had evolved alongside intelligent dinosaurs, the suggestion that the two lineages would have somehow co-evolved into a peaceful vision of Dinotopia is, well, just unbelievable*. Much more likely is that the dinosaurs would have been subject to another and equally lethal extinction event. Man.

--------------------------------------------------------------------
*I say this with the greatest respect for the wonderful books of James Gurney.


Thursday, March 01, 2007

"By, you were lucky..."

My friend, erstwhile mentor and visiting professor colleague Rod Goodman and I were reminiscing a few days ago about our first experiences (~1977) with the Intel 8080, which arrived on a circuit board with 1K bytes RAM, a 1K byte EPROM and absolutely no software. We were having one of those conversations inspired by Monty Python's four Yorkshiremen sketch (and thanks to Dave Snowden for this link from his excellent blog):

"When I were a lad, we only had 4K bytes of RAM and a hex keypad"

"Hex keypad! By, you were lucky. We only 'ad 1K of memory and had to key in t'boot loader by 'and in noughts and ones before we could even start work".

"Well you were lucky. We were so poor we could only afford noughts..." and so on.

But the truth is (and I realise how perilously close I am to becoming a grumpy old man parody here) that my fellow graduate students and I really did have to start from scratch and make all of our own development tools. I recall that we first had to write a cross-assembler, in Algol-68, on the university mainframe: an ICL 1904S. We took advantage of the fact that the mainframe was accessed by electro-mechanical 'teletypes' which were fitted, as standard, with paper-tape punches. We got hold of a paper tape reader and interfaced it to the Intel 8080 development board (designing by hand the necessary interface electronics and device driver code - remember this is long before 'plug and play'). Then we were able to write symbolic 8080 assembler on the mainframe, generate 8080 machine code on paper tape, and load that directly into the 8080 development board to test it. Of course the edit-test cycle was pretty long, and not helped by the fact that our lab was two floors from the mainframe terminals, so to speed things up we invested in a special device that allowed us to 'edit' the paper tape directly: it let us punch extra holes and cover over unwanted ones with a special kind of sticky tape. Here's a picture of this marvellous device.

So, to anyone out there who grumbles about their software development tools I have only one thing to say. "You're lucky you are. When I were a lad..."

Friday, February 23, 2007

An e-puck outing

At a little over 5 cm tall the e-pucks are remarkable little robots. Here is a picture from the web pages of the supplier, and all-round good people, at Cyberbotics. Our e-pucks got their first outing at the Brighton Science Festival's Big Science Sunday, on February 18th (and let me pay tribute to festival organiser Richard). A small pack of 4 or 5 e-pucks in a table-top arena proved to be a compelling attraction for kids of all ages, and a great talking point that allowed us to pontificate about everything from ants and swarm intelligence to the future of robots in society. Here is a picture with my colleague Claire Rocks in mid-demonstration, showing part of the arena with two of the e-pucks contrasting with the old Linuxbot on the left. It's amazing to think that the Linuxbot was state-of-the-art technology just 10 years ago. The e-pucks - with sound (microphones and speaker), vision (camera and LEDs), Bluetooth radio, proximity sensors and an accelerometer - are astonishingly sensor-rich compared with the venerable Linuxbot and its generation.

Now the small size of the e-puck can be deceptive. A week or so before the Brighton gig I thought I would try to code up some new swarm behaviours for the robots. "Little robot - how hard can it be?", I thought to myself as I sat down to an evening's light coding. Boy, was I mistaken. Within the e-puck's densely packed motherboard is a complex system which belies its small size. The Microchip dsPIC microcontroller at the heart of the e-puck has come a long way from the reduced-instruction-set and reduced-everything-else 8-bit PIC I programmed with a few dozen lines of assembler for our early Bismark robot 10 years ago. And in the e-puck the microcontroller is surrounded by some pretty complex sub-systems, such as the sound I/O codec, the camera and the Bluetooth wireless. It's a complex system of systems. So, suitably humbled, I shall have to take some time to learn to program the e-puck*.
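
To give a flavour of the sort of behaviour I was trying to write - and this is only a simulation-level sketch in Python, not dsPIC firmware, with read_proximity() and set_wheel_speeds() as hypothetical stand-ins for the real e-puck library calls - a Braitenberg-style obstacle-avoidance rule using the ring of proximity sensors takes just a few lines. The hard part, as I discovered, is everything underneath: configuring the codec, camera and Bluetooth sub-systems and getting real sensor values into variables like these.

    BASE_SPEED = 300   # nominal wheel speed (arbitrary units)
    # One weight per proximity sensor; sensors 0-3 are assumed to be on the left,
    # 4-7 on the right (the layout, like the weights, is illustrative only).
    LEFT_WEIGHTS  = [ 0.5,  0.4,  0.2, 0.0, 0.0, -0.2, -0.4, -0.5]
    RIGHT_WEIGHTS = [-0.5, -0.4, -0.2, 0.0, 0.0,  0.2,  0.4,  0.5]

    def avoidance_step(read_proximity, set_wheel_speeds):
        """One control-loop step: turn away from whichever side sees the nearer obstacle."""
        prox = read_proximity()   # list of 8 readings, larger = closer
        left  = BASE_SPEED + sum(w * p for w, p in zip(LEFT_WEIGHTS,  prox))
        right = BASE_SPEED + sum(w * p for w, p in zip(RIGHT_WEIGHTS, prox))
        set_wheel_speeds(int(left), int(right))

    # With fake sensors, an obstacle ahead-left makes the robot veer right (left wheel faster):
    fake_prox = lambda: [200, 150, 50, 0, 0, 0, 0, 20]
    avoidance_step(fake_prox, lambda l, r: print("wheels:", l, r))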

Just goes to show that with robots too, appearances can be deceptive.

----------------------------------------------------------------------
*Fortunately, and with remarkable generosity, the e-puck's designers have released the whole of the e-puck design - hardware and software - under an open source licence. So there are lots of function libraries and example programs to be explored.
And I should have mentioned that, in addition to public engagement, we're also evaluating the e-pucks as possible robots for our new Artificial Culture project. More blogs about this in due course.

Tuesday, February 13, 2007

The Rights of Robot

Almost exactly a year ago I wrote about wild predictions of human-level AI. Another prediction that has caught the attention of the general press is about robot rights. See, for instance, this piece in the otherwise sensible Financial Times: uk report says robots will have rights, or the BBC technology news here, and elsewhere.

The prediction that provoked these responses is worth a look: Robo-rights: Utopian dream or rise of the machines? 

The report, by Outsights - Ipsos MORI, was part of the UK government's strategic horizon scanning exercise and is pretty brief at a little over 700 words. In a nutshell, the report says that if robots gain artificial intelligence then calls may be made for them to be granted human rights. The report doesn't make it clear whether such calls would be made by humans on robots' behalf, or by the robots themselves (although the only link given is to the American Society for Prevention of Cruelty to Robots, which seems to imply the former). The likelihood of this is rated 1 out of 3 stars (33%..?), and timescale 21-50+ years. The report, which is clearly written from a legal perspective (nothing wrong with that), goes on to make some frankly surreal speculations about robots voting, becoming tax payers or enjoying social benefits like housing or health-care. 

Hang on, is this really a UK government commissioned report, or a script from Futurama..? I'm surprised it didn't go on to warn of loutish robots subject to ASBOs. 

Ok, let's get real. 

Do I think robots will have (human) rights within 20-50 years? No, I do not. Or to put it another way, I think the likelihood is so small as to be negligible. Why? Because the technical challenges of moving from insect-level robot intelligence, which is more or less where we are now, to human-level intelligence are so great. 

Do I think robots will ever have rights? Well, perhaps. In principle I don't see why not. Imagine sentient robots, able to fully engage in discourse with humans, on art, philosophy, mathematics; robots able to empathise or express opinions; robots with hopes, or dreams. Think of Data from Star Trek. It is possible to imagine robots smart, eloquent and persuasive enough to be able to argue their case - like Bicentennial Man - but, even so, there is absolutely no reason to suppose that robot emancipation would be rapid, or straightforward. After all, even though the rights of man* as now generally understood were established over 200 years ago, human rights are still by no means universally respected or upheld. Why should it be any easier for robots?

*or, to be accurate, 'men'.