Sunday, March 25, 2007
The Mozart meme
John told a story so fascinating that I want to recount it here. A US research team ran IQ tests on two groups of adults: one group immediately after listening to Mozart, the other (the control group) without. The Mozart group showed higher IQ scores than the control group. Now that's interesting enough, but it's what happened subsequently, outside the research lab, that is really quite shocking. John recounted that a journalist reported this finding as "The Mozart Effect"; parents anxious to improve their youngsters' IQs started playing them Mozart, schools introduced the same practice, and in some US districts it became a requirement of the education authority. Pop-psych books were published and money was made. Google "the Mozart effect" and you'll see what I mean.
But does it work? No. John explained that the original study was done on adults, and subsequent work has shown that the same effect isn't apparent in children; even in adults the IQ-raising effect wears off after 10 minutes or so. But that's the power of a great meme. The idea is so attractive that as soon as it catches hold, the truth behind it becomes irrelevant. And of course Mozart already has almost mythical super-genius status, so the Mozart effect meme was riding on a winner from the start. Someone asked John whether the same would have happened if the original study had used another composer's music. "Almost certainly not", he replied, "the Couperin effect doesn't have anything like the same ring to it!"
But the world is full of such memes. Some emerge from flaky science, others from a flawed interpretation of otherwise good science. A particular hobby horse of mine is "The Big Bang". Popular culture regards the Big Bang as an established fact. But it isn't. There are two competing theories for the origin of the Universe: one is the Big Bang theory, the other is the steady-state theory. The problem with the steady-state theory is that it's just dull. Where's the excitement in the idea that the Universe has always existed? Like the Mozart effect, the Big Bang theory feeds a need. Finite creatures that we are, we like the idea that the Universe has a birth and a death. And if you believe in God, even better. The steady-state theory is not good news for theists.
Memes really are powerful magic.
Wednesday, March 14, 2007
Homo dinosauroid
But the programme was spoiled by an unnecessary and scientifically dubious focus on the question "what would have happened if humans had co-evolved along with dinosaurs?".
Given the extraordinary success of the dinosaurs in exploiting ecological niches (as the programme pointed out), the likelihood that mammals would have evolved much beyond the rodent-like animals (like Repenomamus) that just about managed to co-exist with dinosaurs must be vanishingly small. Clutching at straws perhaps, the programme suggested that the tree-tops might have provided a dinosaur-free niche in which primates could have evolved, but it failed to address the question of why dinosaurs would not also have moved into the same eco-space, especially with fresh mammalian meat to tempt them.
But for me the programme makers lost it completely with the suggestion that intelligent humanoid dinosaurs might have co-evolved with humans. Now I love thought experiments, but the idea that homo dinosauroid would now be peacefully sharing our 21st C. cafe culture is, frankly, insulting to dinosaurs. We were shown a rather meek and frightened looking specimen (well you would be too with no clothes on) - clearly 21st C. homo d. needs to get down to the gym.
Now I have no problem at all with the idea that dinosaur evolution, if it had not been rudely interrupted by the Chicxulub asteroid, might have resulted in highly intelligent dinosaurs, language, culture and so on (especially given emerging evidence for gregarious behaviour in dinosaur groups). If the asteroid had missed, and (against the odds) primates and hominids had evolved alongside intelligent dinosaurs, the suggestion that the two lineages would have somehow co-evolved into a peaceful vision of Dinotopia is, well, just unbelievable*. Much more likely is that the dinosaurs would have been subject to another and equally lethal extinction event. Man.
--------------------------------------------------------------------
*I say this with the greatest respect for the wonderful books of James Gurney.
Thursday, March 01, 2007
"By, you were lucky..."
My friend, erstwhile mentor and visiting professor colleague Rod Goodman and I were reminiscing a few days ago about our first experiences (~1977) with the Intel 8080, which arrived on a circuit board with 1K bytes of RAM, a 1K byte EPROM and absolutely no software. We were having one of those conversations inspired by Monty Python's Four Yorkshiremen sketch (and thanks to Dave Snowden for this link from his excellent blog): "When I were a lad, we only had 4K bytes of RAM and a hex keypad"
"Hex keypad! By, you were lucky. We only 'ad 1K of memory and had to key in t'boot loader by 'and in noughts and ones before we could even start work".
"Well you were lucky. We were so poor we could only afford noughts..." and so on.
But the truth is (and I realise how perilously close I am to becoming a grumpy old man parody here) that my fellow graduate students and I really did have to start from scratch and make all of our own development tools. I recall that we first had to write a cross-assembler, in Algol-68, on the university mainframe: an ICL 1904S. We took advantage of the fact that the mainframe was accessed by electro-mechanical 'teletypes' which were fitted, as standard, with paper-tape punches. We got hold of a paper tape reader and interfaced it to the Intel 8080 development board (designing by hand the necessary interface electronics and device driver code - remember this is long before 'plug and play'). Then we were able to write symbolic 8080 assembler on the mainframe, generate 8080 machine code on paper tape, and load that directly into the 8080 development board to test it. Of course the edit-test cycle was pretty long, and it wasn't helped by the fact that our lab was two floors from the mainframe terminals, so to speed things up we invested in a special device that allowed us to 'edit' the paper tape directly. The device allowed us to make extra holes and to cover over - with a special kind of sticky tape - unwanted holes. Here's a picture of this marvellous device.
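For anyone who never had to bootstrap their own toolchain, here is a toy flavour of what that cross-assembler did, re-imagined in Python rather than Algol-68. The two supported opcodes (MVI A and HLT) are genuine 8080 encodings, but everything else is stripped down purely for illustration:

```python
# A toy 8080 cross-assembler sketch: translate symbolic mnemonics into the
# raw machine-code bytes we would have punched onto paper tape.
OPCODES = {
    "NOP": [0x00],   # no operation
    "HLT": [0x76],   # halt the processor
}

def assemble(lines):
    """Translate symbolic 8080 source lines into machine-code bytes."""
    code = []
    for line in lines:
        line = line.split(";")[0].strip()      # strip comments and whitespace
        if not line:
            continue
        mnemonic, _, operand = line.partition(" ")
        if mnemonic == "MVI":                  # MVI A,d8 (opcode 0x3E); only register A here
            reg, value = operand.split(",")
            assert reg.strip() == "A"
            code += [0x3E, int(value) & 0xFF]
        else:
            code += OPCODES[mnemonic]
    return bytes(code)

program = assemble([
    "MVI A,10   ; load 10 into the A register",
    "HLT        ; stop",
])
print(program.hex())   # the bytes we would have punched to tape: 3e0a76
```

The real thing, of course, also handled labels, a full instruction set and paper-tape output formatting.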

So, to anyone out there who grumbles about their software development tools I have only one thing to say. "You're lucky you are. When I were a lad..."
Friday, February 23, 2007
An e-puck outing
At a little over 5 cm tall the e-pucks are remarkable little robots. Here is a picture from the web pages of the supplier, and all round good people, at Cyberbotics. Our e-pucks got their first outing at the Brighton Science Festival's Big Science Sunday, on February 18th (and let me pay tribute to festival organiser Richard). A small pack of 4 or 5 e-pucks in a table-top arena proved to be a compelling attraction for kids of all ages, and a great talking point that allowed us to pontificate about everything from ants and swarm intelligence to the future of robots in society. Here is a picture with my colleague Claire Rocks in mid-demonstration, showing part of the arena with two of the e-pucks contrasting with the old Linuxbot on the left. It's amazing to think that the Linuxbot was state-of-the-art technology just 10 years ago. The e-pucks, with sound (microphones and speaker), vision (camera and LEDs), bluetooth radio, proximity sensors and accelerometer, are astonishingly sensor-rich compared with the venerable Linuxbot and its generation.

Now the small size of the e-puck can be deceptive. A week or so before the Brighton gig I thought I would try to code up some new swarm behaviours for the robots. "Little robot - how hard can it be", I thought to myself as I sat down to an evening's light coding. Boy, was I mistaken. Within the e-puck's densely packed motherboard is a complex system which belies its small size. The Microchip dsPIC microcontroller at the heart of the e-puck has come a long way from the reduced-instruction-set and reduced-everything-else 8 bit PIC I programmed with a few dozen lines of assembler for our early Bismark robot 10 years ago. And in the e-puck the microcontroller is surrounded by some pretty complex sub-systems, such as the sound i/o codec, the camera and the bluetooth wireless. It's a complex system of systems. So, suitably humbled, I shall have to take some time to learn to program the e-puck*.
Just goes to show that with robots too, appearances can be deceptive.
----------------------------------------------------------------------
Tuesday, February 13, 2007
The Rights of Robot
Almost exactly a year ago I wrote about wild predictions of human level AI. Another prediction that has caught the attention of the general press is about robot rights. See for instance this piece in the otherwise sensible Financial Times: uk report says robots will have rights, or the BBC technology news here, and elsewhere.
The prediction that provoked these responses is worth a look: Robo-rights: Utopian dream or rise of the machines?
The report, by Outsights - Ipsos MORI, was part of the UK government's strategic horizon scanning exercise and is pretty brief at a little over 700 words. In a nutshell, the report says that if robots gain artificial intelligence then calls may be made for them to be granted human rights. The report doesn't make it clear whether such calls would be made by humans on robots' behalf, or by the robots themselves (although the only link given is to the American Society for Prevention of Cruelty to Robots, which seems to imply the former). The likelihood of this is rated 1 out of 3 stars (33%..?), and timescale 21-50+ years. The report, which is clearly written from a legal perspective (nothing wrong with that), goes on to make some frankly surreal speculations about robots voting, becoming tax payers or enjoying social benefits like housing or health-care.
Hang on, is this really a UK government commissioned report, or a script from Futurama..? I'm surprised it didn't go on to warn of loutish robots subject to ASBOs.
Ok, let's get real.
Do I think robots will have (human) rights within 20-50 years? No, I do not. Or to put it another way, I think the likelihood is so small as to be negligible. Why? Because the technical challenges of moving from insect-level robot intelligence, which is more or less where we are now, to human-level intelligence are so great.
Do I think robots will ever have rights? Well, perhaps. In principle I don't see why not. Imagine sentient robots, able to fully engage in discourse with humans, on art, philosophy, mathematics; robots able to empathise or express opinions; robots with hopes, or dreams. Think of Data from Star Trek. It is possible to imagine robots smart, eloquent and persuasive enough to be able to argue their case - like Bicentennial Man - but, even so, there is absolutely no reason to suppose that robot emancipation would be rapid, or straightforward. After all, even though the rights of man* as now generally understood were established over 200 years ago, human rights are still by no means universally respected or upheld. Why should it be any easier for robots?
*or, to be accurate, 'men'.
Thursday, August 31, 2006
In praise of Ubuntu
I strongly recommend Ubuntu, and commend the good people supporting this distribution who really do appear to live up to the ideals implied by the word "Ubuntu".
Thursday, June 15, 2006
Life was tough for early memes
And surely many of those early world-changing memes are things that can't be half-invented or discovered bit by bit. Like how to make fire, for example.
From a meme-perspective, it's hard for us to imagine what it must have been like for early memes. At that time our ancestors were animal-smart, instinctive creatures, probably living much as we see modern higher primates: social groups of chimpanzees or gorillas. Back then, development was slow, driven by gene- rather than meme-evolution. But there must have been a cusp, a point in evolutionary time when memes started to take hold and gene-meme coevolution started up. How long was that cusp? Thousands... tens of thousands of years, perhaps.
Think of that period. There must have been countless instances when one smart individual in a group hit upon something useful, but for any number of reasons that discovery died with its creator. Take fire-making as an example. Perhaps none of the other individuals in the group are smart enough to recognise the value, or utility, of fire-making. Or, worse still, they are so terrified of the fire-maker's magic that they banish or kill the unfortunate innovator. Alternatively, there may be one or two individuals who do see that this is not something to be feared, but valued. But what if they're just not smart enough to be able to mimic the actions of the fire-maker? To propagate, memes need meme-copiers just as much as meme-originators, and so the fire-making recipe is lost because no-one can copy it. Now consider the larger context. Imagine that one tribe has learned fire-making, and is able to refine and pass the technique from one generation to the next. But then another tribe, larger and stronger, wipes out the fire-maker tribe because of fear, or envy. Or they get wiped out anyway by famine, or any number of other natural disasters.
Life was precarious then, and so it was for memes too. My point is that many memes were probably thought-of, discovered or created, only to be lost again. Then a few hundred or few thousand years passes before they are thought-of all over again. How many times over did those early memes have to be re-invented before they finally found a foothold and became so widespread that only a major catastrophe affecting the whole population would threaten the meme?
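Just to make the point concrete, here's a back-of-envelope simulation, in Python, of how quickly a fragile early meme gets lost. The probabilities are entirely invented for illustration:

```python
import random

def meme_lifetime(p_survive=0.98, p_copy=0.9, rng=None):
    """Generations a meme persists before its tribe dies or copying fails."""
    rng = rng or random
    generations = 0
    # each generation the tribe must survive AND the meme must be copied on
    while rng.random() < p_survive and rng.random() < p_copy:
        generations += 1
    return generations

# run many independent 'inventions' of the same meme
trials = [meme_lifetime(rng=random.Random(seed)) for seed in range(1000)]
print(sum(trials) / len(trials))   # mean generations before the meme is lost
```

Even with a 98% chance of tribal survival and a 90% chance of faithful copying each generation, the typical meme lasts only a handful of generations before it must be re-invented from scratch.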
One reason, I think, that this is hard for us to imagine (and construct models of), is that we are used to living in a time when life is easy for memes. Too easy perhaps. We are all surrounded by unbelievably expert meme-copiers. Indeed human beings have become so good at it that meme-copying is surely something that now characterises us as a species. Modern society, from a meme-perspective, is a rich and fertile substrate in which even the most inconsequential memes can thrive (like mobile phone ring-tones).
But it wasn't always so.
Life was tough for early memes.
-------------------------------
*For internet definitions of 'meme' see Google define: meme
And for Susan Blackmore's longer description click here.
Wednesday, May 31, 2006
What is a robot?
Of course he got a different answer from everyone who offered a definition. No surprises there, then. But that got me thinking: what is a robot? Of course the word has a well known dictionary definition, from the Czech word robota (meaning 'compulsory service'): the 'mechanical men and women' in Capek's play Rossum's Universal Robots.
In fact the OED gives a second definition: 'A machine devised to function in place of a living agent; one which acts automatically or with a minimum of external impulse'. For me, this definition is also somehow archaic, since it does not admit the possibility that a robot could have a function other than as a subservient machine. We can now contemplate robots that are not designed as servile machines, but are perhaps designed or evolved to exist for no reason other than, well, to exist.
Wikipedia gives a fuller definition for 'robot', which starts with the (for me) deeply flawed statement 'A robot is a mechanical device that can perform preprogrammed physical tasks'. The parts I have a problem with here are 'preprogrammed' and 'tasks'. There are many research robots that are not preprogrammed - their behaviours are learned, evolved or emergent (or some combination of those). My objection to 'tasks' is that some robots may not have tasks in any meaningful sense.
I believe we need a new definition for 21st century robots, one that shakes off 20th C notions of subservient machines performing menial tasks for lazy humans; a definition that instead encompasses the possibility of future robots as a form of artificial life, neither preprogrammed nor task-oriented.
Ok, so here's my attempt at a definition.
A robot is a self-contained artificial machine that is able to sense its environment and purposefully act within or upon that environment.
An important characteristic of a robot is, therefore, that its sense-action loop is closed through the environment in which it operates (the act of moving changes a robot's perception of its environment, thus giving the robot a fresh set of sense inputs). Thus, even simple robots may behave in a complex or unpredictable way, when placed in real-world environments. This is why designing robots for unsupervised operation in real-world environments is difficult.
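A minimal sketch of that closed loop, in Python. The one-dimensional corridor world, the sensor and the numbers are invented purely for illustration:

```python
# A minimal sense-act loop, closed through the environment: the robot's
# action changes its position, which changes what it senses next time round.
# The world is a 1-D corridor with a wall at x = 10; the robot's only sensor
# reports distance to the wall, and its only action is to drive forward.

def proximity_sensor(x, wall=10.0):
    return wall - x                           # sensing depends on the environment

def controller(distance):
    return 1.0 if distance > 1.0 else 0.0     # drive forward until close to the wall

x = 0.0
for step in range(20):
    distance = proximity_sensor(x)   # sense
    velocity = controller(distance)  # decide
    x += velocity * 0.5              # act - and so change the next sensing
print(round(x, 2))                   # the robot stops about 1 unit from the wall
```

Even in this toy example the robot's behaviour is a joint product of controller and environment: move the wall and the same controller produces a different trajectory.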
A second characteristic of a robot is its degree of autonomy.
In respect of 'control' autonomy there is a spectrum from none (i.e. a tele-operated robot) to full autonomy (no human supervision). A robot with a high level of control autonomy will require an embedded and embodied artificial intelligence so that it can choose the right actions and, perhaps, also adapt to or learn from changes in its environment or the effects of its own actions.
Finally, and often overlooked, is 'energy' autonomy. A robot typically requires its own self-contained power source, but if sustained operation over extended periods is required then the robot will also need the ability to replenish that power source autonomously.
Wednesday, March 08, 2006
On Linux
I just spent a gruelling weekend installing the excellent open source Player/Stage/Gazebo robot simulation suite of programs. I know... sad or what. But Player/Stage/Gazebo really is an essential toolkit for hard-core roboticists.
Now I'm no Linux virgin. I first installed Linux on some real lab robots early in 1998. It was quite a challenge to shoehorn Linux into a 25 MHz 386 processor with 4MB of RAM and a first generation 80MB solid-state disk drive. We had first generation wireless LAN cards (well before the IEEE 802.11, aka WiFi, specification was established), and the Linux drivers were somewhat experimental and needed a good deal of tender loving care to compile, install and coax into reliable operation. More by luck than judgement I used the excellent and highly respected Slackware distribution of Linux. Slackware's organisation into (floppy) disk sets made it very easy to install just the parts I needed. For example, the robots have no keyboard or display. Access is wireless, via telnet/ftp/http, so there is no need for X-windows or any of the usual GUI superstructure that desktop installations need. So Slackware lent itself to a lean, mean, stripped-down embedded installation.
The other thing I liked (and still do) about Slackware is that it is not at the bleeding edge of Linux, but takes a very cautious and conservative approach to keeping up with new versions of Linux kernel, libraries and so on. For this reason and deservedly so Slackware has a reputation for reliability. It's an operating system you can install and forget. It was a good decision because the LinuxBots, as they then became known, have been used since in many multi-robot projects with very high reliability.
Having said all of that you may be surprised that it was only about two years ago that I switched from MS Windows to Linux on my trusty workhorse laptop. I tried with Windows, I really did. On my previous Toshiba Libretto Windows 95 was fine and reliable, but this HP laptop came with Windows 98 pre-installed. Hopeless. I migrated fairly quickly to Windows ME (even more hopeless) then to XP. It crashed inexplicably on average about once a week. I got used to that. I got used to having to worry about up-to-date virus checkers, and then windows security updates, and then spyware checkers. In retrospect it was amazing - I was nurse-maiding my computer's operating system! (Mostly because of one killer application: MS Outlook.) Finally after one crash that proved unrecoverable (FAT table corrupted) I gave up and installed Slackware.
Bliss. It boots in a quarter of the time. Gone are the inexplicable flurries of disk or network activity that happen when you've done nothing. Gone is the paranoia of worrying about viruses or spyware or security updates. Running Linux my laptop is sweeter, cooler, more responsive and, best of all, in two years it has never, yes never, crashed.
So why am I complaining?
Well the Achilles heel of Linux is that installing new software is not as straightforward as it should be. I should first explain that in Linux it's quite normal to download source code, then compile and install; actually that's the easy part, since there are very simple command line scripts to automate the process. The problem is deeper. In fact there are two problems: pre-requisites and version dependencies.
Now Player, Stage and Gazebo are complex packages. Not surprisingly, therefore, they require other software (toolkits, libraries, and so on) to be installed first. These are the pre-requisites. Gazebo, for instance (which provides a 3-dimensional world, with physics modelling, in which the simulated robots run), required me to first install no fewer than five packages: the Geospatial Data Abstraction Library (GDAL), the Open Dynamics Engine (ODE), the Simplified Wrapper and Interface Generator (SWIG), the Python GUI wxPython and the OpenGL utilities library GLUT. Phew! But wxPython is itself a complex package, with its own pre-requisites. The pre-requisites have pre-requisites!
And as if that isn't enough to contend with, when I tried to install Stage I discovered that it needs the GIMP toolkit GTK+ at version 2.4 or later. My GTK+ is only version 2.2. That's a version dependency.
These are the reasons Linux (marvellous as it is) isn't about to take over the world just yet.
What GNU/Linux needs is a distribution-independent universal installer that will analyse your existing system, figure out the pre-requisites and version dependencies for the new package you want to install (and do so recursively), then get on and do it while you take the weekend off. Maybe there's already a SourceForge project to do just that, in which case I say 'huzzah!'.
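The dependency-chasing part of such an installer is, at heart, just a recursive graph walk. Here's a sketch in Python, using a simplified and partly hypothetical version of the Gazebo dependency graph from above:

```python
# Given each package's pre-requisites (which themselves have pre-requisites),
# produce an install order via a depth-first topological sort.
DEPS = {
    "gazebo": ["gdal", "ode", "swig", "wxpython", "glut"],
    "wxpython": ["gtk+"],       # the pre-requisites have pre-requisites!
    "gdal": [], "ode": [], "swig": [], "glut": [], "gtk+": [],
}

def install_order(package, deps, done=None, order=None):
    """Recursively visit pre-requisites first, then the package itself."""
    if done is None:
        done, order = set(), []
    if package in done:
        return order                # already scheduled, don't install twice
    done.add(package)
    for dep in deps.get(package, []):
        install_order(dep, deps, done, order)
    order.append(package)
    return order

print(install_order("gazebo", DEPS))
# every pre-requisite appears before anything that needs it
```

A real installer would also have to check installed versions against required ones (the GTK+ 2.2 vs 2.4 problem) and detect circular dependencies, but the recursive skeleton is the same.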
But was it all worth the effort? As Keanu Reeves would say "hell yeah!". Player/Stage/Gazebo is a robot simulator of truly awesome power and versatility.
Friday, February 24, 2006
Quite dangerous ideas
To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.
It appears that the Edge asks an Annual Question, "What is the answer to life, the universe, and everything", that sort of thing, and then publishes the answers by the contributing illuminati.
The 2006 question is "What is your dangerous idea?".
So it was with some excitement that I started to read the assembled responses of the great and the good. Very interesting and well worth reading but, I have to say, the ideas expressed are, er, not very dangerous. Quite dangerous, one might say, but by and large not the sort of ideas that had me rushing to hide behind the sofa.
So, I hear you say, "what's your dangerous idea?".
Ok then, here goes.
I think that Newton's interpretation of his first law of motion was wrong and that there is no such thing as a force of gravity. Let me say right away that this is not my idea: it is the result of a lifetime's work by my friend, the science philosopher Viv Pope. But I have played a part in the development of this work, so I feel justified in evangelising about it.
Recall your school physics. Newton's first law of motion states that every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. In other words, that the 'natural' state of motion is in a straight line. Of course in an abstract sort of way this feels as if it is right. Perhaps that is why it has not been seriously challenged for the best part of 400 years (or it could be because Newton's first law has become so embedded in the way we think about the world that we simply accept it unquestioningly).
Consider an alternative first law of motion: the natural (force-less) state of motion is orbital, i.e. bodies continue to orbit unless an external force is applied. Now the Universe is full of orbital motion, from the micro-scale (electrons in orbit around nuclei) to the macro-scale (moons around planets, planets around stars, rotating galaxies and so on). If this alternative first law is true, it would mean that we don't need to invent gravity to account for orbital motion. This appeals to me, not least because it leads to a simpler and more elegant explanation (and I like Occam's Razor). It would also explain why - despite vast effort and millions of dollars worth of research - no empirical evidence (gravity waves or gravity particles) has yet been found for how gravity propagates or acts at-a-distance. A common-sense objection to this idea is "well, if there's no such thing as gravity, what is it that sticks us to the surface of the earth - why don't we just float off?". The answer (and you can show this with some pretty simple maths) is that the natural (force-less) orbital radius for you, given the mass of your body, is quite a long way towards the centre of the earth from where you now sit. So there is a force that means you weigh something; it's just not a mysterious force of gravity but the real force exerted by the thing that restrains you from orbiting freely, i.e. the ground under your feet.
This has all been worked out in a good deal of detail by Viv Pope and the mathematician Anthony Osborne, and it's called the Pope Osborne Angular Momentum Synthesis, or POAMS.
Now that's what I call a dangerous idea.
Thursday, February 23, 2006
On microcode: the place where hardware and software meet
But there is an even more remarkable place that I want to talk about here, and that is the place where hardware and software meet. That place is called microcode.
Let me first describe what microcode is.
Most serious computer programming is (quite sensibly) done with high-level languages (C++, Java, etc), but those languages don't run directly on the computer. They have to be translated into machine-code, the binary 0s and 1s that actually run on the processor itself. (The symbolic version of machine-code is called 'assembler', and hard-core programmers who want extreme performance out of their computers program in assembler.) The translation from the high-level language into machine-code is done by a program called a compiler and if, like me, you work within a Linux environment then your compiler will most likely be the highly respected GCC (the GNU C Compiler).
However, there is an even lower level form of code than machine-code, and that is microcode.
Even though a machine-code instruction is a pretty low-level thing, like 'load the number 10 into the A register', which would be written in symbolic assembler as LD A,10 and in machine-code as an unreadable binary number, it still can't be executed directly on the processor. To explain why, I first need to give a short tutorial on what's going on inside the processor. Basically a microprocessor is a bit like a city where all of the specialist buildings (bank, garage, warehouse, etc) are connected together by the city streets. In a microprocessor the buildings are pieces of hardware that each do some particular job. One is a set of registers which provide low-level working storage; another is the arithmetic logic unit (or ALU) that performs simple arithmetic and logic (add, subtract, AND, OR etc); yet another is an input-output port for transferring data to the outside world. In the microprocessor the city streets are called data busses. And, like a real city, data has to be moved around - between, say, the ALU and the registers - by being routed. Also like a real city, data on the busses can collide, so the microprocessor designer has to carefully avoid collisions, otherwise data will be corrupted.
Ok, now I can get back to the microcode. Basically, each assembler instruction like LD A,10 has to be converted into a set of electrical signals (literally signals on individual wires) that will both route the data around the data busses, in the right sequence, and select which functions are to be performed by the ALU, ports, etc. These electrical signals are called microorders. Because the data takes time to get around on the data busses the sequence of microorders has to carefully take account of the time delays (which are called propagation delays) for data to get between any two places in the microprocessor. Thus, each assembler instruction has a little program of its own, a sequence of microorders (which may well have loops and branches, just like ordinary high level programs), and programming in microcode is exquisitely challenging.
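To make the idea concrete, here is a cartoon of microcode in Python. The instruction names and micro-orders are invented for illustration - real microcode is far more intricate - but the structure, with each machine instruction expanding into its own little program, is the point:

```python
# Each machine-code instruction expands into a sequence of micro-orders,
# each one routing data over a bus or firing a functional unit.
MICROCODE = {
    # LD A,n : fetch the operand and route it into the A register
    "LD_A_n": [
        "pc_to_address_bus",     # put the program counter on the address bus
        "memory_read",           # fetch the operand byte
        "data_bus_to_reg_A",     # route it into the A register
        "increment_pc",
    ],
    # ADD A,B : route both registers through the ALU and back
    "ADD_A_B": [
        "reg_A_to_alu_left",
        "reg_B_to_alu_right",
        "alu_add",               # the ALU performs the addition
        "alu_out_to_reg_A",      # result routed back (after the propagation delay)
    ],
}

def run(instructions):
    """Expand a machine-code program into its flat stream of micro-orders."""
    stream = []
    for instr in instructions:
        stream.extend(MICROCODE[instr])
    return stream

orders = run(["LD_A_n", "ADD_A_B"])
print(len(orders))   # 8 micro-orders for just two instructions
```

What this toy leaves out is precisely what makes real microcode so exquisitely challenging: the sequencing must respect propagation delays and avoid bus collisions at every step.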
Microcode really is the place where hardware and software meet.
----------------------------------------------------------------------------
*in Algol 60, on a deck of punched cards, to run on an ICL 1904 mainframe.
**which I am very sorry to see has now been discontinued by Borland.
Wednesday, February 15, 2006
On wild predictions of human-level AI
For instance the article futurology facts (now there's an oxymoron if ever there was one) on the BBC world home page quoted the British Telecom 'technology timeline' including:
2020: artificial intelligence elected to parliament
2040: robots become mentally and physically superior to humans
A BT futurologist is clearly having a joke at the expense of members of parliament. Robots won't exceed humans intellectually until 2040 but it's presumably ok for a sub-human machine intelligence to be 'elected' to parliament in 2020. Hmmm.
Setting aside the patent absurdity of the 2020 prediction, let's consider the 2040 robots becoming intellectually superior to humans thing.
First let me declare that I think machine intelligence equivalent or superior to human intelligence is possible (I won't go into why I think it's possible here - I'll leave that to a future blog). However, I think the idea that this will be achieved within 35 years or so is wildly optimistic. The movie I, Robot is set in 2035; my own view is that this level of robot intelligence is unlikely until at least 2135.
So why such optimistic predictions (apart perhaps from wishful thinking)? Part of the problem I think is a common assumption that human level machine intelligence just needs an equivalent level of computational power to the human brain, and then you've cracked it. And since, as everyone knows, computers keep doubling in power roughly every two years (thanks to that nice man Gordon Moore), it doesn't take much effort to figure out that we will have computers with an equivalent level of computational power to the human brain in the near future.
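For what it's worth, here is that back-of-envelope extrapolation in Python. Both figures - 10^9 operations per second for a current computer and 10^16 for a brain - are rough assumptions chosen purely to show the shape of the argument:

```python
import math

# The optimists' calculation: if compute doubles every two years, how long
# until a machine matches some assumed figure for the brain's raw power?
current_ops, brain_ops, years_per_doubling = 1e9, 1e16, 2
doublings = math.log2(brain_ops / current_ops)   # doublings needed
years = doublings * years_per_doubling
print(round(years))   # about 47 years, on these (questionable) numbers
```

Plug in almost any plausible pair of figures and the answer comes out at a few decades, which is exactly why such predictions keep appearing.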
That assumption is fallacious for all sorts of reasons, but I'll focus on just one.
It is this. Just having an abundance of computational power is not enough to give you human level artificial intelligence. Imagine a would-be medieval cathedral builder with a stockpile of the finest Italian marble, sturdy oak timbers, dedicated artisans and so on. Having the material and human resources to hand clearly does not make him into a cathedral builder - he also needs the design.
The problem is that we don't have the design for human-equivalent AI. Not even close. In my view we have only just started to scratch the surface of this most challenging of problems. Of course there are plenty of very smart people working on the problem, and from lots of different angles. The cognitive neuroscientists are by-and-large taking a top-down approach by studying real brains; the computer scientists build first-principles computational models of intelligence; and the roboticists take a bottom-up approach by building at-first simple robots with simple brains. But it's an immensely hard problem because human brains (and bodies) are immensely complex.
Surely the really interesting question is not when we will have that design, but how. In other words, will it be by painstaking incremental development, or by a single monumental breakthrough? Will there (need to) be an Einstein of artificial intelligence? If the former, then we will surely have to wait a lot longer than 34 years. If the latter, it could be tomorrow.
Perhaps a genius kid somewhere has already figured it out. Now there's a thought.
Monday, February 06, 2006
On free will and noisy brains
I said that the thermostat's AI 'decides' whether to switch the boiler on or off, which implies that it has free will. Of course it doesn't, because its artificial intelligence is no more than a simple rule: 'if temperature < 60 then switch the boiler on, or if temperature > 60 then switch the boiler off', for example. So, depending on the temperature, what the thermostat decides is completely determined. With this simple deterministic rule the thermostat can't simply decide to switch the boiler off regardless of the temperature, just for the hell of it.
Well all of that is true for 99.99..% of the time. But consider the situation when the temperature is poised at almost exactly the value at which the thermostat switches. The temperature is neither going up nor down but is balanced precariously, just a tiny fraction of a degree away from the switching value. Now what determines whether the thermostat will switch? The answer is noise. All electrical systems (actually all physical systems above absolute zero) are noisy. So, at any instant in time the noise will have the effect of adding or subtracting a tiny amount to the temperature value, either pushing it over the switching threshold, or not.
For 99.99..% of the time the thermostat is deterministic, but for the remaining 0.00..1% of the time it is stochastic: it 'decides' whether to switch the boiler on or off at random, i.e. 'just for the hell of it'.
But, I hear you say, that's not free will. It's just like tossing a coin. Well, maybe it is. But maybe that's what free will is.
Consider now that oldest of choices. Fight or flee. Most of the time, for most animals, there is no choice. The decision is easy: the other animal is bigger, so run away; or smaller, so let's fight; or it's bigger but we're trapped in a corner, so fight anyway. Just like the thermostat, most of the time the outcome is determined by the rules and the situation, or the environment.
But occasionally (and probably rather more often than in the thermostat case) the choices that present themselves are perfectly evenly balanced. The animal still has to make a choice, and quickly, for the consequences of dithering are clear: dither and most likely be killed. So, how does an animal make a snap decision whether to fight or flee, with perfectly balanced choices? The answer, surely, is that the animal needs to, metaphorically speaking, toss a coin. On these rare occasions its fate is decided stochastically, and brains, like thermostats, are noisy. Thus it is, I contend, neural noise that will tip the brain into making a snap decision when all else is equal - the neural equivalent of tossing a coin.
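The coin toss can be sketched the same way (the sizes and noise level here are, again, arbitrary illustrations of mine, not a model of any real animal):

```python
import random

def fight_or_flee(my_size, their_size, cornered=False, noise_sd=0.05):
    """Rule-based choice; 'neural' noise breaks the tie when sizes are equal."""
    if cornered:
        return "fight"  # trapped in a corner, so fight anyway
    # Size estimates arrive through noisy senses and noisy neurons:
    advantage = (my_size - their_size) + random.gauss(0.0, noise_sd)
    return "fight" if advantage > 0 else "flee"

# Clear-cut cases are, to all intents, deterministic:
assert fight_or_flee(2.0, 1.0) == "fight"
assert fight_or_flee(1.0, 2.0) == "flee"

# Evenly matched: the snap decision is the neural coin toss.
choices = [fight_or_flee(1.0, 1.0) for _ in range(1000)]
```

No dithering: a decision comes out immediately, every time, even when the inputs give no grounds for one.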
This is why I think brains evolved to be noisy.
The long extinct ditherers probably had less noisy brains.
Monday, January 23, 2006
On what brains are for
Surely not.
Brains are control systems for bodies. Pretty amazing control systems of course, but control systems all the same. Bodies have sensors (senses) and actuators (muscles). In very simple animals the outputs from the senses are almost directly connected to the muscles, so that the animal always reacts reflexively to stimuli. More complex animals have more brain in between senses and muscles and so may deliberate before reacting. Of course even very complex animals still have reflexes - think of the classic reflex test on your knee.
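That sense-act view can be caricatured in a few lines of code (everything here is my own illustration, not a model of any real nervous system): a fast reflex path wired almost directly from sensor to actuator, with a slower deliberative layer in between.

```python
def reflex(stimulus):
    """Sense wired almost directly to muscle: the knee-jerk path."""
    return "withdraw" if stimulus == "pain" else None

def deliberate(stimulus, memory):
    """More brain between senses and muscles: consult encoded knowledge."""
    return memory.get(stimulus, "explore")

def act(stimulus, memory):
    # Reflexes pre-empt deliberation; otherwise the slower layer decides.
    return reflex(stimulus) or deliberate(stimulus, memory)

memory = {"food": "approach", "predator": "flee"}
print(act("pain", memory))      # reflex path: withdraw
print(act("predator", memory))  # deliberative path: flee
```

The point of the caricature: the knowledge in `memory` exists to serve the control loop, not the other way around.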
Of course I accept entirely that complex brains do encode knowledge, probably in a multiplicity of ways some of which we can discern - like the apparent spatial mapping of images into the visual cortex - but many ways that are not (yet) at all understandable.
But that is not what they're for.
Sunday, January 01, 2006
On blogging
So, why blogging, and why now? A number of reasons, I guess.
First, a bit of vanity, I suppose. You have to be just a little bit vain to suppose that anyone else might read, or indeed be the slightest bit interested in, your musings.
Second, as an academic and professional communicator, and someone who believes very strongly that ideas should be freely exchanged and communicated, I am interested in blogging as a medium for just that.
Thirdly, because I think the internet is changing human culture in some deep and surprising ways. The internet is becoming a new kind of dynamic collective memory, allowing us to offload (or upload, to be more accurate) stuff that we used to either have to remember or carry around with us. A small thing, perhaps, but have you noticed that business cards have become pretty much obsolete (at least in academia)? People say "just google me". Wired telephones too: the one in my office doesn't get used more than once a week now, made more or less obsolete by email (and therein lies another blog entry!). Dictionaries, encyclopedias, libraries are all going the same way (but not books, interestingly). Even CDs. As I write this I am listening to internet radio (bach-radio.com), being sucked wirelessly from my broadband connection and played on my HiFi by an amazing Philips Streamium. It's weeks since I played a CD! But it's more than these things: the way that we work, play and interact has changed profoundly. For the better? Well, maybe - that's a moot point. So, this is a rather long-winded way of saying that blogging is, I think, a very interesting part of that change.
A final thought. Writing this feels strangely different to publishing web pages (which I've been doing since the web was invented*). It feels much more personal, a kind of message in a virtual bottle. So, here it is, my first message cast out onto the ocean of the internet. Who knows where it will wash ashore.
*I recall sending emails to Caltech in c.1994, when you had to explicitly address the gateway between JANET and the ARPANET. It was quite a feat! I also built my research lab's first web server and hand-coded the lab's first web pages sometime in 1996. The amazing Wayback Machine doesn't quite go way back enough - but here is its earliest recorded IAS lab web page from April 1997.