Wednesday, February 02, 2011

How Intelligent are Intelligent Robots?

When giving talks about intelligent robots I've often been faced with the question "how intelligent is your robot?", asked in a tone of voice that suggests "...and should we be alarmed?". It's a good question, but one that is extremely difficult - if not impossible - to answer properly. I usually end up giving the rather feeble answer "not very", and I might well add "perhaps about as intelligent as a lobster" (or some other species that my audience will regard as reassuringly not very smart). I'm always left with an uneasy sense that I (and robotics in general) ought to be able to give an answer to this perfectly reasonable question. (Sooner or later I'm going to get caught out when someone follows up with "and exactly how intelligent is a lobster?")

Given that the study of Artificial Intelligence is over 60 years old, and that of embodied AI (i.e. intelligent robotics) not much younger, the fact that roboticists can't properly answer the question "how intelligent are intelligent robots?" is, to say the least, embarrassing. It is, I believe, a problem that needs some serious attention.

Let's look at the question again. There is an implied abbreviation here: what my interlocutor means is: how intelligent are intelligent robots when compared with animals and humans? What's more, we all assume a kind of 'scale' of intelligence - with humans (decidedly) at the top - and, furthermore, a sense that a crocodile is smarter than a lobster, and a cat smarter than a crocodile. Where, then, would we place a robot vacuum cleaner, for instance, on this scale of animal intelligence?

Ok. To answer the question we clearly need to find a single measure, or test, of intelligence that is general enough to be applied to robots, animals or humans, on a single scale broad enough to accommodate both human intelligence and that of simple animals. This metric - let's call it GIQ, for General (non-species-specific) Intelligence Quotient - would need to be extensible downwards, to accommodate single-celled organisms (or plants, for that matter) and of course robots, because they're not very smart. Thinking ahead, it should also be extensible upwards, for super-human AI (which we keep being told is only a few decades away). Does such a measure already exist? No, I don't think it does, but I did come across this news posting on physorg.com a few days ago with the promising opening line "How do you use a scientific method to measure the intelligence of a human being, an animal, a machine or an extra-terrestrial?". It refers to a paper titled Measuring Universal Intelligence: Towards an Anytime Intelligence Test. I haven't been able to read the paper (it is behind a paywall) but, even from the abstract, it's pretty clear the problem isn't solved. In any event I'm doubtful, because the news write-up talks of "interactive exercises in settings with a difficulty level estimated by calculating the so-called Kolmogorov complexity", which suggests a test that the agent being tested has to actively engage in. Well, that's not going to work if you're testing the intelligence of a spider, is it?
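
As an aside, Kolmogorov complexity is uncomputable, so any practical test has to approximate it. I don't know how the paper's authors estimate it, but a standard stand-in is compressed length; here is a minimal sketch of that idea (my illustration, not the paper's method):

```python
import os
import zlib

def complexity_proxy(description: bytes) -> int:
    """Upper-bound proxy for Kolmogorov complexity: the length of a
    description after compression. K(x) itself is uncomputable, so
    compressed length is a common practical stand-in."""
    return len(zlib.compress(description, 9))

# A repetitive (low-complexity) environment description compresses well...
print(complexity_proxy(b"left,right," * 100))   # a small number
# ...while a random-looking one hardly compresses at all.
print(complexity_proxy(os.urandom(1000)))       # ~1000: no real compression
```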

So let's set aside the problem of comparing the intelligence of robots with animals (or ET) for a moment. Are there existing non-species-specific measures of intelligence? This interesting essay by Jonathan Ball, The question of animal intelligence, outlines several existing measures based on neural physiology. In summary, they include:
  • Encephalization Quotient (EQ): a measure of whether the brain of a given species is bigger or smaller than would be expected, compared with that of other animals its size (winner: Humans)
  • Cortical Folding: a measure based on the degree of cortical folding (winner: Dolphins)
  • Connectivity: a measure based on comparing the average number of connections per neuron (winner: Humans)
Interestingly, if we take the connectivity measure - which Jonathan Ball suggests offers the greatest degree of correlation with intelligence - then, if our robot is controlled by an artificial neural network, we might actually have a common basis for comparing human and robot intelligence.
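
To make the EQ measure concrete: Jerison's classic formula for mammals takes the expected brain mass to be 0.12 × (body mass)^(2/3), with masses in grams. A minimal sketch, using rough textbook masses (the figures below are approximate):

```python
def eq(brain_g: float, body_g: float) -> float:
    """Encephalization Quotient via Jerison's formula for mammals:
    expected brain mass = 0.12 * body_mass**(2/3), masses in grams.
    EQ > 1 means a bigger brain than expected for that body size."""
    return brain_g / (0.12 * body_g ** (2 / 3))

# Rough, approximate masses in grams:
for species, brain, body in [("human", 1350, 65_000),
                             ("bottlenose dolphin", 1600, 200_000),
                             ("cat", 30, 3_900)]:
    print(f"{species}: EQ = {eq(brain, body):.1f}")
```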

So, even if none of them is entirely satisfactory, it's clear that there has been a great deal of work on measures of animal intelligence. What about the field of robotics - are there intelligence metrics for comparing one robot with another (say, a vacuum cleaning robot with a toy robot dog)? As far as I'm aware the answer is a resounding no. (Of course the same is not true in the field of AI, where passing the Turing Test has become the iconic - if controversial - holy grail.)

But all of this presupposes, firstly, that we can agree on what we mean by 'intelligence' - which we do not. And secondly, that intelligence is a single thing that any one animal, or robot, can have more or less of* - which is also very doubtful.


*An observation made by an anonymous reviewer of one of my papers, for which I am very grateful.

12 comments:

  1. It seems unlikely that intelligence is one dimensional. When most people talk about intelligence what they probably mean is something like "what capabilities does this creature have, and how many capabilities does it share with humans?".

    More intelligent is usually taken to mean more similar to a human.

  2. I think 60 years is not enough to answer one of the fundamental questions of a discipline. Much older sciences like physics still struggle to find an answer to "is there one unified set of laws that govern the universe?". So we roboticists (and "artificial intelligents"?!) should not be worried by not having an answer to "how intelligent is your robot?"

    Probably a more pertinent question is "how skilled is your robot?". Then we have a huge variety of answers to provide.

    Regarding neural networks, I think they are more a mathematical model than a biological one, which makes a comparison not straightforward.

  3. Thank you, Bob and Lorenzo, for your great comments.

    Bob: I agree completely that intelligence is not one dimensional. It's a concept that wraps up a complex set of behaviours and attributes, many of which have just as much to do with an agent's environment as its innate intelligence. However, I still think there is value in trying to find a simple proxy for the complex thing we call intelligence. You're quite right also that we find it impossible not to be anthropocentric when thinking about intelligence.

    Lorenzo: yes good point - maybe it's not unreasonable that we don't have an answer yet!

    Although I agree that skills-based assessments would work with robots, we would still have the problem of making comparisons, because most robots have only one (or a very few) specialised skills. For example, is a robot that plays the piano (but nothing else) more or less skilled than a robot that picks tomatoes (but nothing else)?

    I also agree that comparing neural connections is problematic, not least because biological neurons are far more complex and sophisticated than artificial neurons.

  4. Even if the question of “how intelligent robots are” is difficult to tackle and will remain an ongoing research process, it is interesting and important to undertake it.
    This is because the question is really about what it means to be human: how do we compare ourselves with animals and with machines? It goes back to the beginning of modern science and was first formulated by the French philosopher and scientist René Descartes. His answer still has influence today, in questions of body and mind, and in the model that animals are machines because they lack “mind”, i.e. a soul. Even if we know that Descartes was wrong, the question remains: how do we compare humans, animals and machines?

    As far as Jonathan Ball's proposition is concerned, it makes sense from a biological point of view if we compare vertebrate animals that possess a central nervous system. Put that way, the method is limited in scope to those animals only (and of course we are among them) and leaves out the question of all other living beings. It makes sense because those animals are evolutionarily related and thus share many anatomical features, including brain structures, and because it has roots in comparative anatomy on a subset of animals.

    However, I don't see how it can be useful for understanding machines. They do not have such a thing as a central nervous system, and they are not really related by a true natural evolutionary process. As mentioned in the previous comment, it is very difficult to compare artificial neural networks with real neurons and their complicated anatomy. It reminds me of Ray Kurzweil's claim about the singularity: it is not enough to have a lot of computing power, even assuming it is higher than the human brain's (whatever that means!); one still needs the “program” to be run, and that is far, far from being written.
    The same goes for ANNs: they are far, far away from biological brains (or neural networks), simply because we still do not understand how natural brains work.

    Which brings us back to biology, because to answer the question of how intelligent robots are, we need to understand how intelligent living beings are. Certainly robotics and AI can be part of that research.

  5. Justice Stewart famously said of pornography, “I know it when I see it”.
    In lots of ways, that’s pretty much what we can say about intelligence.

    Like the difference between a Rubens and an X-rated, context is everything. Intelligence can’t operate in a void: it needs a few pricks to kick against. Sometimes intelligence is about the ability to adapt; sometimes it’s about the ability to predict; sometimes it’s about the ability to decide whether to use that stick to fish for termites or poke your cousin in the eye. In lobster contexts, lobsters are perfectly intelligent; they probably look down on langoustines. In human contexts, lobsters are all at sea. Robots (and I know I’m hugely over-simplifying here) tend to live in fairly uncomplicated milieus; maybe they need increasingly dense environments in order to become increasingly less … dense. But then, they’ve only been around for sixty-ish years; it’s taken lobsters two billion or so to evolve to where they are today.

    An unrelated point on testing intelligence: I suspect we have to be careful of the Flynn Effect. As soon as we start testing, we get better at the test but with little evidence that intelligence itself has actually increased.

  6. Ann: If Ashby's law holds then the complexity of the environment will act as a lower bound on the complexity of the robot's interactions with it (and its cognitive process as a regulator). In simple kinds of environment simple sensing and minimalist cognition may suffice, but a robot evolved in such a domain will be overwhelmed and perhaps fail catastrophically in a more complex environment due to its lack of embodied and/or cognitive variety.
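
    (In its simplest counting form the law even gives a hard bound. A toy sketch, with made-up numbers:)

    ```python
    import math

    def min_outcome_variety(n_disturbances: int, n_responses: int) -> int:
        """Ashby's Law of Requisite Variety, counting form: a regulator
        with R distinct responses facing D distinct disturbances cannot
        reduce the set of outcomes below ceil(D / R), assuming each
        response maps distinct disturbances to distinct outcomes.
        'Only variety can absorb variety.'"""
        return math.ceil(n_disturbances / n_responses)

    # A robot with 3 behaviours in an environment posing 12 distinct
    # disturbances must let at least 4 distinct outcomes through,
    # however cleverly its responses are chosen.
    print(min_outcome_variety(12, 3))  # -> 4
    ```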

  7. On the Flynn effect and testing, the performance of any self-governing system is context and environment dependent, which makes any "objective" measurement of intelligence difficult. It might be possible to define intelligence less partially as a multi-dimensional variable which indicates the rate of adaptation within different skill domains.

    My guess is that although the content of IQ tests may vary, the rate at which individuals are able to solve novel problems with which they are presented is likely to remain constant - assuming that they're not assisted by technology (such as Google, or a pocket calculator).
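
    One toy way to put a number on 'rate of adaptation' (a sketch only - the error curves and domain names are invented, borrowing the piano/tomato example from earlier):

    ```python
    import numpy as np

    def adaptation_rate(errors):
        """Least-squares slope of log-error against trial number: a crude
        proxy for how quickly performance improves (more negative = faster)."""
        trials = np.arange(len(errors))
        slope, _intercept = np.polyfit(trials, np.log(errors), 1)
        return slope

    # Hypothetical error curves for one agent in two skill domains:
    piano    = [1.0, 0.70, 0.50, 0.36, 0.26]    # rapid improvement
    tomatoes = [1.0, 0.95, 0.90, 0.86, 0.82]    # much slower improvement
    print({"piano": adaptation_rate(piano),
           "tomatoes": adaptation_rate(tomatoes)})  # a multi-dimensional profile
    ```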

  8. Thank you, José and Ann, for your excellent and insightful comments.

    José: I agree completely. The fact that animals and robots have profoundly different bodies and mechanisms for acting in and reacting to the world means that we almost certainly cannot compare animal and robot intelligence with reference to internal structures, like brains or computational machinery (even though such comparisons might be illuminating in other ways). It seems to me we must develop a common set of tests that treats the animal/robot as an agent and measures its action or reaction to environmental stimuli (which sounds very Skinnerian).
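
    A very rough sketch of the kind of black-box test harness I have in mind (the interface and names are entirely hypothetical):

    ```python
    from typing import List, Protocol, Tuple

    class Agent(Protocol):
        """Anything - animal or robot - that maps a stimulus to an action."""
        def act(self, stimulus: str) -> str: ...

    def behavioural_score(agent: Agent, trials: List[Tuple[str, str]]) -> float:
        """Fraction of trials on which the agent produced the desired
        response. Deliberately Skinnerian: the test never looks inside
        the agent at brains, neurons or circuits."""
        hits = sum(agent.act(stimulus) == desired for stimulus, desired in trials)
        return hits / len(trials)
    ```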

    Ann: yes - if we take a pragmatic but very reasonable definition of intelligence as 'doing the right thing at the right time', then lobsters are perfectly intelligent: no more or less intelligent than they need to be. You're quite right too that environmental complexity has much to do with intelligence. Thanks also for your very useful cautionary note about the Flynn effect.

  9. Many thanks indeed Bob for your latest comments, especially for pointing out Ashby's Law of Requisite Variety which certainly has relevance to the question of intelligence.

    I'm not sure I fully agree with your final point about the rate at which agents solve novel problems remaining constant. Unless I misunderstand you, if the agent were to learn, might that rate not change as it improves?

  10. Learning certainly does complicate things, so rate of adaptation would probably have to be defined as a relative measure taking into account the existing state of the system - its prior knowledge and maybe also its physiological state. For a novel problem I'm assuming that prior knowledge provides only minimal assistance, rather like learning to ride a bike for the first time.

  11. I tend to think of human culture in abstract terms as a sort of high-dimensional geometry; unless you're a rebel, your learning is to some extent about aligning your beliefs with the ambient geometry, partly by experience (data points) and partly by interpolation (filling in the blanks).

    What I think is happening with the Flynn effect is that over time the shape of the ambient culture (its hills and valleys) is slowly changing - assimilating new ideas and forgetting older ones. Some of the concepts assimilated and deemed desirable may overlap with the kinds of features found in IQ tests, which is why subsequent generations appear to be smarter, when what's really happening, I think, is more a case of all boats rising with the tide.
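
    A toy rendering of that 'rising tide' (purely illustrative - random vectors standing in for the culture and the test):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dims = 50
    culture = rng.normal(size=dims)       # the ambient cultural 'geometry'
    test_axis = rng.normal(size=dims)     # the direction an IQ test measures
    test_axis /= np.linalg.norm(test_axis)

    for generation in range(5):
        culture += 0.2 * test_axis        # culture drifts toward test-like concepts
        individual = culture + rng.normal(scale=0.5, size=dims)  # learner aligns with culture
        print(f"generation {generation}: score {individual @ test_axis:.2f}")
    # Scores climb each generation even though the learning mechanism
    # never changed - all boats rising with the tide.
    ```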

    Different subcultures may also have slightly different geometries in accordance with their own unique evolution, which I think helps to explain the controversial differences in IQ scores between groups.

    So for creatures (or robots) which are social beings, intelligence isn't only an individual thing based upon direct experience or insight, and this further complicates the ability to make relative measurements of intelligence between agents which may reside in different cultures.

  12. It was very interesting to read. I want to quote your post in my blog - may I? And do you have an account on Twitter?
