From 2001 to The Matrix, intelligent machines and robots have played a central role in our fictions. Some now claim they are about to become fact. Is artificial intelligence possible, or just a science fiction fantasy? And would it be a fundamental advance for humankind, or an outcome to be feared?

Invited at the last minute, I found myself debating these questions with a distinguished panel consisting of philosophers Peter Hacker and Hilary Lawson, and law academic Lilian Edwards; Henrietta Moore brilliantly chaired.
I shan't attempt to summarise the debate here; I certainly couldn't do it, or the arguments of my fellow panellists, justice. In any event it was filmed and should appear soon on IAI TV. What I want to talk about here is the question - which turned out to be central to the debate - of whether machines are, or could ever be regarded as, intelligent.
The position I adopted and argued in the debate is best summed up as simulationist. For the past ten years or so I have believed that our grand project as roboticists is to build robots that are progressively higher-fidelity imitations of life and intelligence. This is a convenient and pragmatic approach: robots that behave as if they are intelligent are no less interesting (as working models of intelligence, for instance), or potentially useful, than robots that really are intelligent, and the ethical questions they raise are no less pressing*. But, I realised in Hay-on-Wye, the simulationist approach also plays to the arguments of philosophers, including Peter Hacker, who hold that machines cannot, even in principle, be truly intelligent.
Reflecting on that debate, I realised that my erstwhile position in effect accepts that robots, or AI, will never be truly intelligent - never better than a simulation; that machines can never do more than pretend to be smart. However, I'm now not at all sure that position is logically tenable. The question that keeps going around my head is this: if a thing - biological or artificial - behaves as if it is intelligent, why shouldn't it be regarded as properly intelligent? Surely behaving intelligently is the same as being intelligent. Isn't that what intelligence is?
Let me offer two arguments in support of this proposition.
There are those who argue that real intelligence is uniquely a property of living organisms. They admit that artificial systems might eventually demonstrate a satisfactory emulation of intelligence, but argue that nothing artificial can truly think or feel. This is the anthropocentric (or perhaps more accurately, zoocentric) position. The fundamental problem with this position, in my view, is that it fails to explain which properties of biological systems make them uniquely intelligent. Is it that intelligence depends on some exotic property of biological stuff? The problem here is that there is no evidence for any such property. Perhaps intelligence is uniquely an outcome of evolution? Well, robot intelligence can be evolved rather than designed. Perhaps advanced intelligence requires social structures in order to emerge? I would agree, and point to social robotics as a promising equivalent substrate. Perhaps advanced intelligence requires nurture, because really smart animals are not born smart? Again I would agree, and point to the new field of developmental robotics. In short, I argue that it is impossible to name a property of biological systems that is required for intelligence, unique to those systems, and incapable of existing as a property of artificial systems.
My second argument concerns how intelligence is measured or determined. As I've blogged before, intelligence is a difficult thing to define, let alone measure. But one thing is clear: no current measure of intelligence in humans or animals requires us to look inside their brains. We determine a human or animal to be intelligent exclusively on the basis of its actions. For simple animals we observe how they react and judge the sophistication of those responses (as prey or predator, for instance). In humans we look formally to examinations (to measure cognitive intelligence) or, more generally, to ingenuity in social discourse (Machiavellian intelligence) or creativity (artistic or technical intelligence). For advanced animal intelligence we devise ever more ingenious tests, the results of which sometimes challenge our prejudices about where those animals sit on our supposed intelligence scale. We heard from Lilian Edwards during the debate that, in common law, civil responsibility is likewise judged exclusively on actions: a judge may have to make a judgement about the intentions of a defendant, but must do so only on the evidence of the defendant's actions**. I argue, therefore, that it is inconsistent to demand a different test of intelligence for artificial systems. Why should we expect to determine whether a robot is truly intelligent on the basis of some not-yet-determined properties of its internal cognitive structures, when we require no such test of animals or humans?
The counter-intuitive and uncomfortable conclusion: machine intelligence is not fake; it's real.
*perhaps even more so, given that such robots are essentially fraudulent.
**with thanks to Lilian for correcting my wording here.