Tuesday, June 19, 2012

60 years of asking Can Robots Think?

Last week at the Cheltenham Science Festival we debated the question Can robots think? It's not a new question. Here, for instance, is a wonderful interview from 1961 on the very same question. So, the question hasn't changed. Has the answer?


Well, it's interesting to note that I, and fellow panellists Murray Shanahan and Lilian Edwards, were much more cautious last week in Cheltenham than our illustrious predecessors, both on the question of whether present-day robots can think (answer: no) and on whether robots (or computers) will be able to think any time soon (answer: again, no).

The obvious conclusion is that 50 years of Artificial Intelligence research has failed. But I think that isn't true. AI has delivered some remarkable advances, like natural speech recognition and synthesis, chess programs, conversational AI (chatbots) and lots of 'behind the scenes' AI (of the sort that figures out your preferences and annoyingly presents personalised advertising on web pages). But what is undoubtedly true is that Wiesner, Selfridge and Shannon were being very optimistic (after all, AI had only been conceived a decade earlier by Alan Turing), whereas today, perhaps chastened and humbled, most researchers take a much more cautious approach to these kinds of claims.

But I think there are more complex reasons.

One is that we now take a much stricter view of what we mean by 'thinking'. As I explained last week in Cheltenham, it's relatively easy to make a robot that behaves as if it is thinking (and, I'm afraid, also relatively easy to figure out that the robot is not really thinking). So, it seems that a simulation of thinking is not good enough*. We're now looking for the real thing.

That leads to the second reason. It seems that we are not much closer to understanding how cognition in animals and humans works than we were 60 years ago. Actually, that's unfair. There have been tremendous advances in cognitive neuroscience but - as far as I can tell - those advances have brought us little closer to being able to engineer thinking in artificial systems. That's because it's a very very hard problem. And, to add further complication, it remains a philosophical as well as a scientific problem.

In Cheltenham Murray Shanahan brilliantly explained that there are three approaches to solving the problem. The first is what we might call a behaviourist approach: don't worry about what thinking is, just try and make a machine that behaves as if it's thinking. The second is the computational modelling approach: try and construct, from first principles, a theoretical model of how thinking should work, then implement that. And third, the emulate real brains approach: scan real brains in sufficiently fine detail and then build a high fidelity model with all the same connections, etc, in a very large computer. In principle, the second and third approaches should produce real thinking.

What I find particularly interesting is that the first of these 3 approaches is more or less the one adopted by the conversational AI programs entered for the Loebner prize competition. Running annually since 1992, the Loebner prize is based on the test for determining if machines can think, famously suggested by Alan Turing in 1950 and now known as the Turing test. To paraphrase: if a human cannot tell whether she is conversing with a machine or another human - and it's a machine - then that machine must be judged to be thinking. I strongly recommend reading Turing's beautifully argued 1950 paper.
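
Just to make the shape of the test concrete, here is a minimal sketch, in Python, of one round of Turing's imitation game. It is my own illustration rather than the Loebner prize's actual protocol: the judge, machine and human are hypothetical objects assumed to expose simple ask/reply methods.

import random

def imitation_game(judge, machine, human, n_exchanges=10):
    # One round of the imitation game: the judge chats with two hidden
    # interlocutors, one human and one machine, then names the machine.
    contestants = {'A': machine, 'B': human}
    if random.random() < 0.5:                   # hide the identities at random
        contestants = {'A': human, 'B': machine}

    transcripts = {'A': [], 'B': []}
    for _ in range(n_exchanges):
        for label, contestant in contestants.items():
            question = judge.ask(label, transcripts[label])
            answer = contestant.reply(question)
            transcripts[label].append((question, answer))

    verdict = judge.identify_machine(transcripts)   # judge answers 'A' or 'B'
    return contestants[verdict] is not machine      # True if the judge was fooled

The test is statistical rather than all-or-nothing: only if, over many such rounds, judges do little better than chance at naming the machine can the machine be said to have passed.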

No chatbot has yet claimed the $100,000 first prize, but I suspect that we will see a winner sooner or later (personally I think it's a shame Apple hasn't entered Siri). But the naysayers will still argue that the winner is not really thinking (despite passing the Turing test). And I think I would agree with them. My view is that a conversational AI program, however convincing, remains an example of 'narrow' AI. Like a chess program, a chatbot is designed to do just one kind of thinking: textual conversation. I believe that true artificial thinking ('general' AI) requires a body.

And hence a new kind of Turing test: for an embodied AI, AKA robot.

And this brings me back to Murray's 3 approaches. My view is that the 3rd approach 'emulate real brains' is at best utterly impractical because it would mean emulating the whole organism (of course, in any event, your brain isn't just the 1300 or so grammes of meat in your head, it's the whole of your nervous system). And, ultimately, I think that the 1st (behaviourist - which is kind of approaching the problem from the outside in) and 2nd (computational modelling - which is an inside out approach) will converge.

So when, eventually, the first thinking robot passes the (as yet undefined) Turing test for robots I don't think it will matter very much whether the robot is behaving as if it's thinking - or actually is, for reasons of its internal architecture, thinking. Like Turing, I think it's the test that matters.


*Personally I think that a good enough behavioural simulation will be just fine. After all, an aeroplane is - in some sense - a simulation of avian flight but no one would doubt that it is also actually flying.

5 comments:

  1. OK, I've waited a couple of days before commenting on this, as it is, I think, the number one question when it comes to robotics and what people expect from a robot.
    I think your assessment is good. I agree that we take for granted the inroads that have been made.
    When I see projects like the self-driving car and walkers evolving their own locomotion, I see great progress.
    But when it comes to robotic companions who think like us, it appears we have made little progress. Every now and then you see a "performance" by a Nao or a uni research robot, like, say, Kismet, and you think wow! until you realise it really is just that... a performance.
    I think we can be a little tough on the definition of thinking though.
    We take for granted the lowly calculator, but to people born a hundred years ago it would be magical. And isn't it, in its design, taking on calculations that originated in human thought? So I don't think it's too much of a stretch to say that it has been given the power of thought. Albeit very specialized thinking.
    Look at Google's search engine. I remember when searching for information was tedious and boring, having to follow link after link and try multiple search engines to find the best results. I never use anything but Google today because what I'm after is usually in the top 5 results.
    I'm just an amateur philosopher and wannabe scientist/engineer but this topic I am very passionate about.
    I have read a lot and thought a lot about it.
    As others much greater have said, part of the problem is the definition. I can see this myself, as my own thoughts are continually moving the criteria of what I consider AI.
    I think most people appreciate the complexity of the human mind, and, allowing for this, we have made much progress in AI. While I am not a singularitarian, I do think the next 10-20 years in this field will be interesting indeed.

    ReplyDelete
  2. Thank you Gullygunyah for your thoughtful response.

    While I agree with you that we should accept that calculators and search engines do a certain kind of thinking - we also have to admit that they are not thinking in the everyday sense of the word, i.e. attending to a stimulus, or reflecting on an idea.

    I too think singularitarianism is deluded - but even without the big bang of AI that its proponents predict, I agree we shall see some remarkable developments.

    ReplyDelete
  3. I believe that Actual Intelligence in robots cannot be made, but rather must be evolved.

    I could never imagine just hitting a 'GO' button and having an intelligent robot talking to me.

    I am unsure of how the robot should evolve, but let's say we start with some fundamental instruction set for learning and allow the robot to react to all of the input around it. If you look at how a baby learns: it has no idea what is going on around it at birth, but, after 'learning' from the input around it, it can make relational connections between objects.

    I guess it could be plausible that a robot armed with sundry input methods, the ability to create relational databases, and a die-hard information parser backed by a standard set of boolean operations, could begin to learn, and potentially evolve its learning.

    And/or a robot that contains some sort of 'genetic' trait set, where this trait set is a moldable instruction set, devised in a code, that is the baseline of how the robot interacts with its environment. Have this trait set mold with another closely similar set (certain attributes either subtract, add, or choose from the first code value or the second). With the input around the robot, have it ask questions about what it needs or how to obtain some goal, and find a way to mold the code so that the goal can be met. If the robot cannot find an answer but knows the goal, mutate the code until progress can be seen, and target the attributes that had the largest impact on that progress.
    The problem is that this code will need a way to be interpreted by the robot such that attributes can be added or removed from the set (maybe a set of pointers to the robot's RAM as the last attribute in the base code). You can start off with a robot that is dumb, not knowing how to accomplish anything, but over time the code should change to meet the expectations of the surroundings. But coming up with this coded trait set would prove to be an incredible task.
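
    (A minimal sketch of the kind of crossover-and-mutation loop described here, in Python; the trait vector, population size and the environment-supplied progress score are hypothetical placeholders standing in for the real thing, not a worked-out design:)

    import random

    def crossover(a, b):
        # For each attribute, choose the value from the first trait set or the second.
        return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

    def mutate(traits, rate=0.1):
        # Randomly perturb some attributes, for when no direct answer is known.
        return [t + random.gauss(0, 0.5) if random.random() < rate else t
                for t in traits]

    def evolve(progress, n_traits=8, pop_size=20, generations=100):
        # 'progress' is assumed to be supplied by the robot's environment:
        # progress(traits) -> float, higher meaning closer to the goal.
        population = [[random.uniform(-1, 1) for _ in range(n_traits)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=progress, reverse=True)
            parents = population[:pop_size // 2]   # keep the trait sets that helped most
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=progress)

    (For instance, evolve(lambda t: -sum(x * x for x in t)) would slowly mold the trait set towards all-zero values, with no designer ever specifying them directly.)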

    Mostly just speculating, but it is an idea.


    -Ryan Griffin

    ReplyDelete
    Replies
    1. Thanks Ryan for your comment. I think you're right. Like you, it seems to me unlikely that we can simply design advanced robot AI so that it works 'out of the box'. I think it most likely that advanced robots will need a developmental phase - just like humans - and the emerging research field of developmental robotics is looking at exactly this. And it's also possible that elements of the robot's baseline AI (i.e. prior to development) will need to be evolved. Evolutionary robotics is another (relatively) new research field.

      And the controversial idea of neural Darwinism (in neuroscience) raises the possibility that evolutionary mechanisms might be needed during development.

      Delete
  4. @Alan,

    I would love to see huge progress in the field of Evolutionary robotics. To me it seems to be one of the only ways to create this intelligence, without us, as a race, knowing too much about how we become intelligent.

    As a dreamer this sounds plausible, but the realist in me keeps asking the question of how (from a scientist's point of view). I've imagined a learning manager device that can rewrite logic modules in the code backend of the robot, but this seems dangerous and likely to fail. Another way I've imagined is using this same idea of a learning manager to load new logic modules into a relational database (where it can relate the logic modules to real data tables).

    This still leaves the question of how: in the sense of how will this learning manager be able to create and modify code, and know how to relate it to certain data?

    Or, even better, what if the way the database relates the data isn't through simple one-to-one or one-to-many pointers, but the relations themselves incorporate the logic? So, let's say the robot wants to learn how to turn on the lights in a room. It could relate Data A -> Data B (as an action operator, where A is the light switch position and B is the current light state). When it flips the switch and sees an immediate change in the light state (turning on), it can relate this switch to that light as having a boolean 'equal to' property, with the relation storing the analyzed properties. So, if the robot later wants to turn on the lights in a room, it will look for a switch and test it based on current knowledge. If the outcome was not as expected (the switch was on, but the light was off), then it re-analyzes and possibly rewrites the relationship between light switches and lights. So the original logic, A state equals B state, might become A(current) == B(state) and !A == !B.

    This relation can be saved into a logic pool and indexed by type. As more patterns are created, the robot could even combine certain types of relations to create greater understanding.
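
    (A minimal sketch of the relation-pool idea with the light switch example, in Python; the class names, the stored rule and its rewrite are hypothetical placeholders rather than a real design:)

    class Relation:
        # A stored, revisable belief relating one observed quantity to another.
        def __init__(self, kind, rule):
            self.kind = kind    # e.g. 'action operator: switch controls light'
            self.rule = rule    # rule(a) -> predicted b

        def predict(self, a):
            return self.rule(a)

    # Initial guess: light state simply equals switch state (A == B).
    relation_pool = {('switch', 'light'): Relation('switch controls light', lambda a: a)}

    def test_switch(switch_state, observed_light):
        # Test the stored relation and rewrite it if the outcome was not expected,
        # e.g. the wiring turned out to be inverted (!A == B).
        rel = relation_pool[('switch', 'light')]
        if rel.predict(switch_state) != observed_light:
            relation_pool[('switch', 'light')] = Relation('switch controls light',
                                                          lambda a: not a)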

    Again, just wild speculations on my part. And these speculations can keep going, as I question more of the 'how' the further I move down the top-down model.

    Also, I hope you don't mind me posting ideas rather than straightforward comments. I want to post a lot of my ideas, because I would love it if someone in the field could read this and have an Ah-Hah! moment, or give some more input about how people feel about these ideas. Knowledge and ideas should be shared, and never hoarded, in my opinion. Although it can allow an individual to be taken advantage of easily, it can prove to work towards the greater success of humans.

    -Ryan Griffin
    (sorry for the long-windedness, and digressions!)

    ReplyDelete