Tuesday, May 31, 2011

Machine intelligence: fake or real?

A few days ago, at the excellent HowTheLightGetsIn festival, I took part in a panel debate called Rise of the Machines. Here was the brief:
From 2001 to The Matrix, intelligent machines and robots have played a central role in our fictions. Some now claim they are about to become fact. Is artificial intelligence possible or just a science fiction fantasy? And would it be a fundamental advance for humankind or an outcome to be feared?
Invited at the last minute, I found myself debating these questions with a distinguished panel consisting of philosophers Peter Hacker and Hilary Lawson, and law academic Lilian Edwards. Henrietta Moore brilliantly chaired.

I shan't attempt to summarise the debate here. I certainly couldn't do it, or the arguments of my fellow panellists, justice. In any event it was filmed and should appear soon on IAI TV. What I want to talk about here is the question - which turned out to be central to the debate - of whether machines are, or could ever be regarded as, intelligent.

The position I adopted and argued in the debate is best summed up as simulationist. For the past ten years or so I have believed our grand project as roboticists is to build robots that aim to be progressively higher-fidelity imitations of life and intelligence. This is a convenient and pragmatic approach: robots that behave as if they are intelligent are no less interesting (as working models of intelligence, for instance), or potentially useful, than robots that really are intelligent, and the ethical questions that arise are no less pressing*. But, I realised in Hay-on-Wye, the simulationist approach also plays to the arguments of philosophers, including Peter Hacker, that machines cannot ever be truly intelligent in principle.

Reflecting on that debate I realised that my erstwhile position in effect accepts that robots, or AI, will never be truly intelligent, never better than a simulation; that machines can never do more than pretend to be smart. However, I'm now not at all sure that position is logically tenable. The question that keeps going around my head is this: if a thing - biological or artificial - behaves as if it is intelligent, then why shouldn't it be regarded as properly intelligent? Surely behaving intelligently is the same as being intelligent. Isn't that what intelligence is?

Let me offer two arguments in support of this proposition.

There are those who argue that real intelligence is uniquely a property of living organisms. They admit that artificial systems might eventually demonstrate a satisfactory emulation of intelligence, but argue that nothing artificial can truly think, or feel. This is the anthropocentric (or perhaps more accurately, zoocentric) position. The fundamental problem with this position, in my view, is that it fails to explain which properties of biological systems make them uniquely intelligent. Is it that intelligence depends uniquely on exotic properties of biological stuff? The problem here is that there's no evidence for such properties. Perhaps intelligence is uniquely an outcome of evolution? Well, robot intelligence can be evolved rather than designed (see the sketch below). Perhaaps advanced intelligence requires social structures in order to emerge? I would agree, and point to social robotics as a promising equivalent substrate. Perhaps advanced intelligence requires nurture, because really smart animals are not born smart? Again I would agree, and point to the new field of developmental robotics. In short, I argue that it is impossible to propose a property of biological systems, required for intelligence, that is unique to those biological systems and cannot exist as a property of artificial systems.
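To make the evolution point concrete, here is a minimal sketch of the idea behind evolutionary robotics, in which controllers are selected and mutated against a measure of behaviour rather than designed by hand. Everything in it - the genome encoding, the placeholder fitness function, the parameters - is invented for illustration; a real system would evaluate each controller in simulation or on a physical robot.

```python
import random

# A toy "genome": the weights of a trivial reactive controller.
GENOME_LEN = 8
POP_SIZE = 20
GENERATIONS = 50

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder for the real thing: running the controller in a
    # simulator (or on a robot) and scoring its behaviour. Here we
    # simply reward genomes whose weights sum towards 2.0.
    return -abs(sum(genome) - 2.0)

def mutate(genome, rate=0.1):
    return [w + random.gauss(0.0, 0.2) if random.random() < rate else w
            for w in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(random.choice(parents))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", max(fitness(g) for g in population))
```

The point of the sketch is only that nothing in the loop hand-designs the behaviour: whatever competence emerges is an outcome of selection.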

My second argument concerns how intelligence is measured or determined. As I've blogged before, intelligence is a difficult thing to define, let alone measure. But one thing is clear - no current measure of intelligence in humans or animals requires us to look inside their brains. We determine a human or animal to be intelligent exclusively on the basis of its actions. For simple animals we observe how they react and look at the sophistication of those responses (as prey or predator, for instance). In humans we look formally to examinations (to measure cognitive intelligence) or more generally to ingenuity in social discourse (Machiavellian intelligence), or creativity (artistic or technical intelligence). For advanced animal intelligence we devise ever more ingenious tests, the results of which sometimes challenge our prejudices about where those animals sit on our supposed intelligence scale. We heard from Lilian Edwards during the debate that, in common law, civil responsibility is likewise judged exclusively on actions. A judge may have to make a judgement about the intentions of a defendant, but they have to do so only on the evidence of their actions**. I argue, therefore, that it is inconsistent to demand a different test of intelligence for artificial systems. Why should we expect to determine whether a robot is truly intelligent on the basis of some not-yet-determined properties of its internal cognitive structures, when we do not require that test of animals or humans?
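This argument can be put almost trivially in code. Below is a toy sketch of a black-box behavioural test: the evaluator is handed nothing but a function from stimulus to response, and scores the agent on its actions alone - it cannot, even in principle, look inside. The test items and the toy agent are of course invented for the example.

```python
def behavioural_score(act, test_items):
    # The evaluator sees only stimulus/response pairs; whether 'act' is
    # backed by neurons, silicon or a lookup table is invisible to it.
    correct = sum(1 for stimulus, expected in test_items
                  if act(stimulus) == expected)
    return correct / len(test_items)

# Invented test items, standing in for any behavioural test.
test_items = [("2 + 2", "4"), ("capital of France", "Paris")]

def toy_agent(stimulus):
    # Could equally be a person answering, an animal pressing levers,
    # or a robot; only its responses are scored.
    return {"2 + 2": "4", "capital of France": "Paris"}.get(stimulus, "?")

print(behavioural_score(toy_agent, test_items))  # prints 1.0
```

Exactly the same harness applies, unchanged, to a human, an animal or a machine - which is precisely the consistency the argument demands.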

The counter-intuitive and uncomfortable conclusion: machine intelligence is not fake, it's real.


*perhaps even more so given that such robots are essentially fraudulent.
**with thanks to Lilian for correcting my wording here.

10 comments:

  1. What if we find a way to define and measure consciousness? Then an imitating robot could no longer be regarded as an intelligent entity.

  2. Hi Saman - thank you for your comment and great question.

    Consciousness is perhaps the most puzzling property of intelligence - especially reflective self-consciousness, the thing that perhaps only humans do, which is to think about ourselves thinking. Have you come across the idea of the philosopher's zombie? It's a thought experiment in which there's another you that talks, behaves and acts exactly like you, but is not conscious. A robot that imitates consciousness, i.e. behaves as if it were conscious, would be the machine equivalent of a philosopher's zombie. However, I disagree with the assumption behind the philosopher's zombie thought experiment, which is that there's another you inside your head that accounts for your consciousness.

    Thus, using the same arguments as in this blog post, I think that if a robot behaves as if it is conscious then it is logically incoherent to regard it as anything other than conscious (but remember such a robot would be incredibly hard to build).

    Even if, as your question suggests, we discover that consciousness requires some property X, it is hard to imagine why we would not be able to provide a machine with X.

  3. When I first started to write my PhD I had planned to consider 'emotional' robots and consciousness in some way: what did these ideas mean, could a robot really be emotional or conscious, etc.? (It was all pretty vague to begin with!) However, after a while I decided, as you do here, that while it sounds a little defeatist, appearances really are all you can go on (I blame my phenomenology reading in part :) ). What I'm more interested in now is exploring how non-humanoid robots in particular are read in these ways, as emotional and as intelligent, and how this enables their communication with people, or at least people's understanding of them as somewhat 'lively' machines.

    Does any of your research monitor or record people's reactions to your robot swarms?

  4. Hi Eleanor, thank you for your comment.

    Interesting that your research brought you to similar conclusions.

    Re your question about how non-humanoid robots are read as emotional by people: in fact Sajida, one of the PhD students in the Artificial Culture project, has been working on exactly this question, specifically with children. There are several posts by Sajida about this on http://artcultprojectblog.blogspot.com/.

  5. It never occurred to me that a computer's simulation of intelligence said anything about whether real computer intelligence is possible. The only thing it says to me is that simulated intelligence is easier to pull off than real intelligence. Programming is hard enough. Of course we'll go for the low-hanging fruit.

    If a human passes a test using some trick, but isn't really competent, you might say it's a failure of the test. If a computer passes an intelligence test without doing the work, it's still a failure of the test.

    Watson didn't have pre-canned answers when playing Jeopardy. That suggests that Watson had some skill in playing the game. Deep Blue certainly had a big book of pre-computed moves. But so did Kasparov. These are skills. Build up enough of them, and a computer is intelligent. At the moment, we have no idea what such a list of skills would look like. But once we're arguing about whether certain people are intelligent by whatever the standard is, then we'll have a good handle on it.

  6. Recently, I've read some papers from http://journalofcosmology.com/Contents14.html . It seems that consciousness may have some property X (especially when you look at it from a quantum-mechanical point of view). I agree with you that we may be able to duplicate that too, but before then, I believe a zombie has no "rights" of its own (thus, we're not obliged to interact with it ethically, just as we're not obliged to interact with a camera in an ethical way). But surely if we build a conscious machine (even a dummy one), we are forced to act towards it in an ethical way. For me, "qualia" is the red line of ethics.

  7. Hi Alan,
    Just a small comment about the last part. When we try to "measure" intelligence we do both: we look at what organisms are capable of doing, but also at what kind of "brain" structure they have to do it.
    What is in the box also matters, even if organisms without neuronal brains do present forms of intelligence.
    Cheers,
    José

  8. Hi Alan,

    I think the "big thing" that is missing from AI and Robotics is a deep understanding of the question of value. Ethics in humans is fundamentally about prosocial ways to get what humans need and, to a lesser degree, want. Intelligence is being clever and prosocial (or perhaps cunningly antisocial and not getting caught). Robots as they currently exist have no "innate" value circuits. In particular they have no hedonic circuits (pleasure/pain) and no emotion circuits (guilt, shame, pride, anger). The emotions/affect (EQ) are as much a part of practical intelligence as the logical and rational (cognitive) manipulations of IQ. Presently AI is lots of IQ and almost no EQ. (There is a lot of ongoing research on robot emotions but it's decades behind the logical/mathematical research.) Biology has implemented intelligence with phenomenal consciousness and hedonic circuits and (in humans and some apes) moral (or proto-moral) emotions. As yet, AI and Robotics has very little of p-consciousness, hedonic circuits or any kind of emotion. It may be that to get joy working you do actually need oxytocin and a blood-brain "soggy computation" interface... and there is no alternative.

    Cheers,

    Sean

    Replies
    1. Thank you Sean for your great comment. Lots to think about in your question. Firstly, I think it important to tease apart ethical behaviour from emotions/feelings. You're quite right that the latter are key to practical decision making in humans. But ethics is not innate (apart perhaps from altruism); it is learned. Small children can behave very 'unethically'; we chastise (and hence reinforce good behaviour) but rightly don't hold them responsible. I think we can make practical ethical robots without understanding or implementing artificial emotions.

  9. The video Prof. Winfield is blogging about is now here: http://iai.tv/video/rise-of-the-machines
