Monday, October 11, 2010

Google robot car: Great but proving the AI is safe is the real challenge

Great to read that Google are putting some funding into driverless car technology, with the very laudable aims of reducing road traffic fatalities and reducing carbon emissions. Google have clearly assembled a seriously talented group, led by Stanford's Sebastian Thrun. (One can only imagine the boardroom discussions at the car manufacturers this week on Google's entry into their space.)

While this is all very good, I think it's important to keep the news in perspective. Driverless cars have been in development for a long time, and what Sebastian announced this weekend is not a game-changing leap forward. To be fair, his blog post's main claim is the record for distance driven. But Joe Wuensche's group at the University of the Bundeswehr Munich has a remarkable record of driverless car research: fifteen years ago their Mercedes 500 drove from Munich to Denmark on regular roads, at up to 180 km/h, with surprisingly little manual driver intervention (about 5%). I've seen MuCAR-3, the latest autonomous car from Joe's group, in action in the European Land Robotics Challenge (ELROB), and it is deeply impressive, navigating its way through forest tracks with no white lines or roadside kerbs to help the car's AI figure out where the road's edges are.

So the technology is pretty much there. Or is it?

The problem is that what Thrun's team at Google, and Wuensche's team at UniBW, have compellingly demonstrated is proof of principle: trials under controlled conditions with a safety driver present (somewhat controversially at ELROB, because the rules didn't allow a safety driver). That's a long way from your granny getting into her car, which then autonomously drives her to the shops without her having to pay attention in case she needs to hit the brakes when the car decides to take a short cut across the vicar's lawn. The fundamental unsolved problem is how to prove the safety and dependability of the Artificial Intelligence (AI) driving the car. This is a serious problem not just for driverless cars, but for all next-generation autonomous robots. Proving the safety of a system, i.e. proving that it will both always do the right thing and never do the wrong thing, is very hard right now even for conventional systems that have no learning in them (i.e. no AI). But with AI the problem gets a whole lot worse: the AI in the Google car, to quote the announcement, "becomes familiar with the environment and its characteristics", i.e. it learns. And we don't yet know how to prove the correctness of systems that learn.
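One way researchers try to make this tractable is to separate the learned part from a small, fixed "safety envelope" that never learns, so that only the simple guard, not the learned policy, needs to be formally verified. Here is a minimal sketch of that idea; all names, limits, and the toy controller are hypothetical, purely for illustration:

```python
# Sketch of a "safety envelope" around a learned controller (hypothetical
# names and limits). The learner proposes an action; a small, fixed,
# non-learning guard vetoes anything outside hard safety constraints.
# Only the guard need be proven correct, not the learned black box.

MAX_SPEED = 13.0   # m/s: assumed hard speed limit for this illustration
MIN_GAP = 10.0     # metres: assumed minimum safe gap to the vehicle ahead

def learned_controller(sensor_state):
    """Stand-in for the learned driving policy (treated as a black box)."""
    return {"speed": sensor_state["speed"] + 2.0, "brake": False}

def safety_guard(sensor_state, action):
    """Fixed rules, simple enough to verify exhaustively."""
    if action["speed"] > MAX_SPEED:
        action = {"speed": MAX_SPEED, "brake": False}  # cap the speed
    if sensor_state["gap"] < MIN_GAP:
        action = {"speed": 0.0, "brake": True}         # emergency stop
    return action

def step(sensor_state):
    proposed = learned_controller(sensor_state)
    return safety_guard(sensor_state, proposed)

print(step({"speed": 12.5, "gap": 50.0}))  # guard caps the proposed speed
print(step({"speed": 8.0, "gap": 4.0}))    # guard forces a stop
```

Of course, this only shifts the difficulty: the guard is verifiable precisely because it is simple, and a guard simple enough to verify may be too conservative to let the learned AI do anything useful. That tension is part of the challenge.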

In my view that is the real challenge.

4 comments:

  1. Dear Alan,

    splendid comment on the Google "announcement"!

    There is an interesting overview under:
    http://www.pcmag.com/article2/0,2817,2370598,00.asp

    regards
    Frank

  2. Hi Frank

    Thank you for your kind comment, and for the interesting link.

    Best
    Alan

  3. Dear Alan,
    thanks for pointing out the safety problems to be solved and the research challenges. There are also many legal, ethical and social issues that have to be considered and solved before autonomous cars hit the market. But I'm sure Google can be an important driving force to change the rules.

    Some driverless car links at
    http://www.infonaut.se/GRB2010/#-20280

    Best regards
    Wolfgang H.
    Infonaut Sweden
    http://robotland.blogspot.com

  4. This is fascinating, and I did not realize how far (and how early) the German work had gone.

    It strikes me that there are two possible models here, and that the AI difficulties are much more daunting in one model than the other. The first model is Thrun's (and many others'), in which we have a completely driverless car. I wonder whether that model won't encounter a lot of public backlash. In many countries, the car is the symbol of personal freedom, and handing your freedom over to a robot may be scary. On the other hand, we can engineer robot-assisted cars that preserve (the illusion of) user control while insuring the driver against potentially damaging mistakes. Many cars already have such elements.

    Incidentally, I think we may find those two models in many other areas of robotics. I watched a da Vinci robotic surgery recently--a long, complicated 4-hour operation. I expected to be impressed by the robotic system, and I was. What surprised me, however, was that I was more impressed by the surgeon. The robotic system enhanced the surgeon, but didn't come remotely close to replacing him.
