Friday, October 15, 2010

New video of 20 evolving e-pucks

In June I blogged about Nicolas Bredeche and Jean-Marc Montanier working with us in the lab to transfer their environment-driven distributed evolutionary adaptation algorithms to real robots, using our Linux-extended e-pucks. Nicolas and Jean-Marc made another visit in August to extend the experiments to a larger swarm of 20 robots; they made a YouTube movie and here it is:

[YouTube video: artificial evolution in a swarm of 20 e-puck robots]

In the narrative on YouTube Nicolas writes:
This video shows a fully autonomous artificial evolution within a population of ~20 completely autonomous real (e-puck) robots. Each robot is driven by its "genome" and genomes are spread whenever robots are close enough (range: 25cm). The most "efficient" genomes end up being those that successfully drive robots to meet with each other while avoiding getting stuck in a corner.

There is no human-defined pressure on robot behavior. There is no human-defined objective to perform.

The environment alone puts pressure upon which genomes will survive (i.e. the better the spread, the higher the survival rate). Then again, the ability for a genome to encode an efficient behavioral strategy first results from pure chance, then from environmental pressure.

In this video, you can observe how going towards the sun naturally emerges as a good strategy to meet/mate with others (it is used as a convenient "compass") and how changing the sun's location affects the robots' behavior.

Note: the 'sun' is the static e-puck with a white band around it.
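
To make the mechanism concrete, here is a minimal simulation sketch of this kind of environment-driven genome spreading, written in Python. It is emphatically not Nicolas and Jean-Marc's actual code: the arena size, controller, lifetime, genome length and mutation scheme are all illustrative assumptions; only the swarm size and the 25 cm broadcast range come from the description above.

```python
import random

N_ROBOTS = 20      # swarm size, as in the video
RANGE = 0.25       # genome broadcast range in metres (the 25 cm above)
ARENA = 2.0        # arena side in metres (assumed)
LIFETIME = 50      # control steps per robot "lifetime" (assumed)
SIGMA = 0.1        # Gaussian mutation strength (assumed)
GENES = 4          # genome length (assumed)

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENES)]

def mutate(genome):
    return [g + random.gauss(0, SIGMA) for g in genome]

class Robot:
    def __init__(self):
        self.x = random.uniform(0, ARENA)
        self.y = random.uniform(0, ARENA)
        self.genome = random_genome()
        self.inbox = []  # genomes received from robots within RANGE

    def step(self):
        # Toy controller: the genome biases a noisy walk. On the real
        # e-pucks the genome instead sets the weights of a neural network
        # mapping sensor readings to wheel speeds.
        self.x = min(max(self.x + 0.01 * self.genome[0] + random.gauss(0, 0.005), 0.0), ARENA)
        self.y = min(max(self.y + 0.01 * self.genome[1] + random.gauss(0, 0.005), 0.0), ARENA)

swarm = [Robot() for _ in range(N_ROBOTS)]

for generation in range(100):
    for _ in range(LIFETIME):
        for r in swarm:
            r.step()
        for a in swarm:        # local broadcast: each robot receives the
            for b in swarm:    # genome of every robot within RANGE
                if a is not b and (a.x - b.x) ** 2 + (a.y - b.y) ** 2 < RANGE ** 2:
                    a.inbox.append(list(b.genome))
    for r in swarm:
        # No fitness function anywhere: each robot simply adopts a mutated
        # copy of one genome it happened to receive, so genomes that drive
        # robots to meet other robots get copied more often and "survive".
        if r.inbox:
            r.genome = mutate(random.choice(r.inbox))
        else:
            # Simplification: restart from a random genome. In the real
            # algorithm a robot that received nothing sits inactive until
            # a genome reaches it.
            r.genome = random_genome()
        r.inbox = []

print("ran", generation + 1, "generations with", N_ROBOTS, "robots")
```

The point to notice is that selection is implicit: the only way a genome persists is by placing its robot within 25 cm of other robots, which is exactly the environmental pressure Nicolas describes.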

Wednesday, October 13, 2010

Twitter

Well, I can't believe I'm on Twitter: https://twitter.com/alan_winfield

Not at all sure I understand what I'm doing yet. There's some puzzling terminology to learn - what's retweeting, for instance? (It sounds like a word from The Meaning of Liff.)

The reason I joined is that I wanted to respond to the questions on @scienceexchange. The first question is:
Given the rate of discovery of exo-planets - is there still any doubt that we are not alone in the universe?
And my twittered answer:
Depends: life may be a little more probable; intelligent life still highly improbable, see Drake's equation
I like the challenge of trying to construct a useful answer in 140 characters.
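
For anyone who wants the long-form version of that 140-character answer, Drake's equation estimates the number N of detectable civilisations in our galaxy as a product of factors:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

where R_* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per star with planets, f_l the fraction of those on which life appears, f_i the fraction of those that go on to develop intelligence, f_c the fraction of those that produce detectable signals, and L the lifetime of a communicating civilisation. The point of my answer is that exoplanet discoveries only tighten f_p and n_e; every term after those remains guesswork.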

Monday, October 11, 2010

Google robot car: Great, but proving the AI is safe is the real challenge

Great to read that Google are putting some funding into driverless car technology, with the very laudable aims of reducing road traffic fatalities and reducing carbon emissions. Google have clearly assembled a seriously talented group, led by Stanford's Sebastian Thrun. (One can only imagine the boardroom discussions at the car manufacturers this week on Google's entry into their space.)

While this is all very good, I think it's important to keep the news in perspective. Driverless cars have been in development for a long time, and what Sebastian announced this weekend is not a game-changing leap forward. To be fair, his blog post's main claim is a record for autonomous distance driven. But Joe Wuensche's group at the University of the Bundeswehr Munich (UBM) has a remarkable record of driverless car research: fifteen years ago their Mercedes 500 drove from Munich to Denmark on regular roads, at up to 180 km/h, with surprisingly little manual driver intervention (about 5%). I've seen MuCAR-3, the latest autonomous car from Joe's group, in action in the European Land Robotics Challenge (ELROB), and it is deeply impressive - navigating its way through forest tracks with no white lines or roadside kerbs to help the car's AI figure out where the road's edges are.

So the technology is pretty much there. Or is it?

The problem is that what Thrun's team at Google, and Wuensche's team at UBM, have compellingly demonstrated is proof of principle: trials under controlled conditions with a safety driver present (somewhat controversially at ELROB, because the rules didn't allow a safety driver). That's a long way from your granny getting into her car, which then autonomously drives her to the shops without her having to pay attention in case she needs to hit the brakes when the car decides to take a short cut across the vicar's lawn.

The fundamental unsolved problem is how to prove the safety and dependability of the Artificial Intelligence (AI) driving the car. This is a serious problem not just for driverless cars, but for all next-generation autonomous robots. Proving the safety of a system, i.e. proving that it will both always do the right thing and never do the wrong thing, is very hard right now even for conventional systems with no learning in them (i.e. no AI). With AI the problem gets a whole lot worse: the AI in the Google car, to quote Google, "becomes familiar with the environment and its characteristics", i.e. it learns. And we don't yet know how to prove the correctness of systems that learn.
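
To see why learning changes the game, here's a toy sketch in Python; the model is entirely invented for illustration and has nothing to do with the Google or UBM software. A one-dimensional car approaches an obstacle under a fixed braking policy, and because the policy never changes we can check the safety property ("the car never reaches the obstacle while still moving") over every start state in a bounded model:

```python
def fixed_policy(distance, speed):
    """Accelerate only while the braking distance still fits in the gap;
    otherwise brake. Fixed policy: same inputs always give the same output."""
    if distance > (speed + 1) * (speed + 2) // 2:
        return 1     # safe to speed up
    return -1        # brake

def verify(policy, max_distance=200):
    """Bounded check of the safety property: starting at rest at any
    distance from the obstacle, the car never reaches it while moving."""
    for d0 in range(1, max_distance + 1):
        distance, speed = d0, 0
        for _ in range(10 * max_distance):
            accel = policy(distance, speed)
            if speed == 0 and accel < 0:
                break                 # parked safely short of the obstacle
            speed = max(speed + accel, 0)
            distance -= speed
            if distance <= 0 and speed > 0:
                return False          # collision: safety property violated
    return True

print(verify(fixed_policy))  # True: provable, because the policy is frozen
```

Replace fixed_policy with a controller that learns and this argument collapses: the policy is now a function of everything the car has experienced, so the state space to be checked includes the learned parameters themselves, and a proof completed today says nothing about the system's behaviour after tomorrow's learning.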

In my view that is the real challenge.