Great to read that
Google are putting some funding into driverless car technology with the very laudable aims of reducing road traffic fatalities and reducing carbon emissions. Google have clearly assembled a seriously talented group led by Stanford's
Sebastian Thrun. (One can only imagine the boardroom discussions at the car manufacturers this week on Google's entry into their space.)
While this is all very good, I think it's important to keep the news in perspective. Driverless cars have been in development for a long time, and what Sebastian has announced this weekend is not a game-changing leap forward. To be fair, his blog post's main claim is the record for distance driven, but
Joe Wuensche's group at the University BW Munich (UBM) has a remarkable record of driverless car research;
fifteen years ago their Mercedes 500 drove from Munich to Denmark on regular roads, at up to 180 km/h, with surprisingly little manual driver intervention (about 5%). I've seen MuCAR-3, the latest autonomous car from Joe's group,
in action at the European Land Robotics Challenge (ELROB), and it is deeply impressive: navigating its way through forest tracks with no white lines or roadside kerbs to help the car's AI figure out where the road's edges are.
So the technology is pretty much there. Or is it?
The problem is that what Thrun's team at Google, and Wuensche's team at UBM, have compellingly demonstrated is proof of principle: trials under controlled conditions with a safety driver present (or, somewhat controversially at
ELROB, without one, because the rules didn't allow a safety driver). That's a long way from your granny getting into her car, which then autonomously drives her to the shops
without her having to pay attention in case she needs to hit the brakes when the car decides to take a short cut across the vicar's lawn. The fundamental unsolved problem is how to prove the safety and dependability of the Artificial Intelligence (AI) driving the car. This is a serious problem not just for driverless cars, but for all next-generation autonomous robots. Proving the safety of a system, i.e. proving that it will
both always do the right thing and never do the wrong thing, is very hard right now for conventional systems that have no learning in them (i.e. no AI). But with AI the problem gets a whole lot worse: the AI in the Google car, to quote, "becomes familiar with the environment and its characteristics", i.e. it
learns. And we don't yet know how to prove the correctness of systems that learn.
In my view that is the real challenge.
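To make the point concrete, here is a toy sketch, entirely my own invention and nothing to do with Google's or UBM's actual systems: a hand-written braking rule can be checked against a safety property over a fixed set of test scenarios once and for all, whereas a controller that adapts its threshold from experience can pass the very same check today and fail it after further "familiarisation".

```python
# Toy illustration of why learning complicates verification.
# The controllers, thresholds and 'brakes in time' property are all hypothetical.

def fixed_controller(obstacle_distance_m: float, speed_mps: float) -> str:
    """Hand-written rule: behaviour is fully determined by its inputs."""
    return "brake" if obstacle_distance_m / max(speed_mps, 0.1) < 2.0 else "cruise"

class LearningController:
    """Adapts its braking threshold from experience, so its future behaviour
    depends on what it has seen so far."""
    def __init__(self) -> None:
        self.threshold_s = 2.0

    def update(self, observed_margin_s: float) -> None:
        # Online adaptation: nudge the threshold towards recent experience.
        self.threshold_s = 0.9 * self.threshold_s + 0.1 * observed_margin_s

    def act(self, obstacle_distance_m: float, speed_mps: float) -> str:
        return "brake" if obstacle_distance_m / max(speed_mps, 0.1) < self.threshold_s else "cruise"

def always_brakes_in_time(act, scenarios) -> bool:
    """Check a safety property over every scenario in a finite test set:
    whenever time-to-obstacle is under 1.5 s, the controller must brake."""
    return all(act(d, v) == "brake" for d, v in scenarios if d / max(v, 0.1) < 1.5)

critical = [(d, v) for d in range(5, 60, 5) for v in range(5, 35, 5)]

# The fixed controller can be checked once; the result holds for ever.
print(always_brakes_in_time(fixed_controller, critical))  # True

# The learning controller passes the same check today...
learner = LearningController()
print(always_brakes_in_time(learner.act, critical))       # True

# ...then drifts after benign-looking experience, so yesterday's test
# evidence no longer says anything about today's behaviour.
for _ in range(200):
    learner.update(observed_margin_s=0.5)
print(always_brakes_in_time(learner.act, critical))       # False
```

The sketch is deliberately simplistic, but the underlying difficulty is exactly the one above: once the system's behaviour is a function of its experience as well as its inputs, a certificate of safety issued before deployment does not cover everything the system might become.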