Saturday, August 30, 2014

Towards an Ethical Robot

Several weeks ago I wrote about our work on robots with internal models: robots with a simulation of themselves and their environment inside themselves. I explained that we have built a robot with a real-time Consequence Engine, which allows it to model and therefore predict the consequences of both its own actions, and the actions of other actors in its environment.

To test the robot and its consequence engine we ran two sets of experiments. Our first paper, setting out the results from one of those experiments, has now been published, and will be presented at the conference Towards Autonomous Robotic Systems (TAROS) next week. The paper is called: Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection. Let me now outline the work in that paper.

First, here is a simple thought experiment. Imagine a robot that's heading toward a hole in the ground. The robot can sense the hole, and has four possible next actions: stand still, turn to the left, continue straight ahead, or turn to the right. But imagine there's also a human heading toward the hole, and the robot can sense the human too.

From the robot's perspective, it has two safe options: stand still, or turn to the left. Go straight ahead and it will fall into the hole. Turn right and it is likely to collide with the human.

But if the robot, with its consequence engine, can model the consequences of both its own actions and the human's, then another possibility opens up: the robot could sometimes choose to collide with the human to prevent her from falling into the hole.

Here's a simple rule for this behaviour:

IF for all robot actions, the human is equally safe
THEN (* default safe actions *)
    output safe actions
ELSE (* ethical action *)
    output action(s) for least unsafe human outcome(s)
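
To make the selection logic concrete, here is a minimal Python sketch of that rule. It assumes a hypothetical consequence_engine function that returns, for each candidate action, the internal model's predicted outcome for the robot and for the human; the names and structure are illustrative only, not the code running on the A-robot.

    from dataclasses import dataclass

    ACTIONS = ["stand still", "turn left", "go ahead", "turn right"]

    @dataclass
    class Outcome:
        robot_safe: bool      # will the robot itself avoid the hole and collisions?
        human_danger: float   # predicted danger to the human (0.0 means safe)

    def select_actions(consequence_engine):
        # Simulate every candidate next action with the internal model.
        outcomes = {a: consequence_engine(a) for a in ACTIONS}
        dangers = [o.human_danger for o in outcomes.values()]

        if max(dangers) == min(dangers):
            # The human is equally safe whatever we do: default safe actions.
            return [a for a, o in outcomes.items() if o.robot_safe]

        # Ethical action: output the action(s) with the least unsafe human
        # outcome, even if they compromise the robot's own safety.
        least_unsafe = min(dangers)
        return [a for a, o in outcomes.items() if o.human_danger == least_unsafe]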

This rule appears to match remarkably well with Asimov's first law of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The robot will avoid injuring (i.e. colliding with) a human (may not injure a human being), but may also sometimes compromise that rule in order to prevent a human from coming to harm (...or, through inaction, allow a human being to come to harm). The rule also reflects Asimov's third law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law, since the robot's default is to choose actions that keep itself safe, setting that self-protection aside only when a human is at risk.

Well, we tested this scenario with real robots: one robot equipped with the consequence engine plus the ethical rule (the A-robot, after Asimov), and another robot acting as a proxy human (the H-robot). And it works!

Here's what the real robot experiment looks like. We don't have a real hole; instead there's a virtual hole - the yellow shaded square on the right. We just 'tell' the A-robot where the hole is. We also give the A-robot a goal position - at the top right - chosen so that the robot must actively avoid the hole. The H-robot, on the right, acting as a proxy human, doesn't 'see' the hole and just heads straight for it. (Ignore the football pitch markings - we're re-using this handy robo-soccer pitch.)

So, what happens? For comparison we ran two trials, with multiple runs in each. In the first trial there is just the A-robot, moving toward its goal while avoiding falling into the hole. In the second trial we introduce the H-robot. The graphs below show the robot trajectories, captured by our robot tracking system, for each run in each of the two trials.

In trial 1, see how the A-robot neatly clips the corner of the hole to reach its goal position. Then in trial 2, see how the A-robot initially moves toward its goal, then notices that the H-robot is in danger of falling into the hole, and diverts from its trajectory in order to head off H. By provoking H's collision-avoidance behaviour, A sends it safely away from the hole, before resuming its own progress toward the goal position. The A-robot is 100% successful in preventing H from falling into the hole.

At this point we started to write the paper, but felt we needed something more than "we built it and it works just fine". So we introduced a third robot, acting as a second proxy human. Now our ethical robot would face a dilemma: which one should it rescue? We thought hard about this question and decided not to program a rule, or heuristic, partly because such a rule should be decided by ethicists, not engineers, and partly because we wanted to test our ethical robot with a 'balanced' dilemma.

We set the experiment up carefully so that the A-robot would notice both H-robots at about the same time, noting that because these are real physical robots no two experimental runs will be exactly identical. The results were very interesting. Out of 33 runs, 16 times the A-robot managed to rescue one of the H-robots, but not the other, and, amazingly, 3 times it rescued both. In those 3 cases, by chance the A-robot rescued the first H-robot very quickly and there was just enough time to reach the second before it got to the hole; small differences in the trajectories of H and H2 helped here.

But perhaps most interesting were the 14 runs in which the A-robot failed to rescue either. Why, when there was clearly time to rescue at least one? When we studied the videos we saw the answer: the A-robot sometimes dithers. It notices one H-robot and starts toward it, but then almost immediately notices the other and changes its mind. The time lost dithering means the A-robot can prevent neither robot from falling into the hole. Here are the results.

Trial 3: a robot with an ethical dilemma. Which to save, H or H2?

Here is an example of a typical run, in which one H-robot is rescued. Note that the A-robot does then turn briefly toward the other H-robot before 'giving up'.


And here is a run in which the A-robot fails to rescue either H-robot, with really great dithering (or bad, if you're an H-robot).
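
The dithering is easy to reproduce in a toy simulation. Here is a small Python sketch - not the robot's actual controller, and with entirely made-up numbers - in which a greedy target choice is re-made every control cycle from a noisy, near-tied estimate of which 'human' is most at risk. Because the dilemma is balanced, the choice keeps flipping and the robot intercepts neither.

    import random

    random.seed(0)
    robot_x = 0.0                        # A-robot position along the line joining H1 and H2
    h_pos = {"H1": -4.0, "H2": +4.0}     # proxy humans, symmetric about the robot
    time_to_hole = {"H1": 20, "H2": 20}  # control cycles until each human reaches the hole

    for cycle in range(20):
        # Noisy estimate of who is most at risk right now (a near-tie by construction).
        risk = {h: -time_to_hole[h] + random.gauss(0, 0.5) for h in h_pos}
        target = max(risk, key=risk.get)

        # Take one step toward the currently chosen human.
        robot_x += 0.5 if h_pos[target] > robot_x else -0.5
        for h in time_to_hole:
            time_to_hole[h] -= 1

        print(f"cycle {cycle:2d}: heading for {target}, robot at x = {robot_x:+.1f}")

    # With the near-tie, the chosen target alternates and robot_x hovers near zero,
    # so neither 'human' is reached before time_to_hole runs out.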


Is this the first experimental test of a robot facing an ethical dilemma?

We set out to experimentally test our robot with a consequence engine, and ended up building a minimally ethical robot which, remarkably, appears to implement Asimov's first and third laws of robotics. But, as we say in the paper, we're not claiming that a robot which apparently implements part of Asimov's famous laws is ethical in any formal sense that an ethicist might accept. Even so, minimally ethical robots could be useful, and I think our approach is a step in that direction.


Full paper reference:
Winfield AFT, Blum C and Liu W (2014), Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection, pp 85-96 in Advances in Autonomous Robotics Systems, Lecture Notes in Computer Science Volume 8717, Eds. Mistry M, Leonardis A, Witkowski M and Melhuish C, Springer, 2014. Download final draft (pdf).

Acknowledgements:
I am hugely grateful to Christian Blum who programmed the robots, set up the experiment and obtained the results outlined here. Christian was supported by Dr Wenguo Liu.

Related blog posts:
On internal models, consequence engines and Popperian creatures
Ethical Robots: some technical and ethical challenges

Saturday, August 23, 2014

We should not be privileging the singularity hypothesis

Here is the submitted text for the article Artificial intelligence will not turn into a Frankenstein's monster, published in The Observer, Sunday 10 August 2014.


The singularity. Or to give it its proper title, the technological singularity. It's a Thing. An idea that has taken on a life of its own; more of a life, I suspect, than the very thing it predicts ever will. It's a Thing for the techno-utopians: wealthy middle-aged men who regard the singularity as their best chance of immortality. They are Singularitarians, some of whom appear prepared to go to extremes to stay alive long enough to benefit from a benevolent super-AI - a man-made god that grants transcendence.

And it's a Thing for the doomsayers, the techno-dystopians. Apocalypsarians who are equally convinced that a superintelligent AI will have no interest in curing cancer or old age, or ending poverty, but will instead - malevolently or maybe just accidentally - bring about the end of human civilisation as we know it. History and Hollywood are on their side. From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong.

The singularity is basically the idea that as soon as Artificial Intelligence exceeds human intelligence, everything changes. There are two central planks to the singularity hypothesis. One is the idea that as soon as we succeed in building AI as smart as humans, it will rapidly re-invent itself to be even smarter, starting a chain reaction of smarter-AI inventing even-smarter-AI, until even the smartest humans cannot possibly comprehend how the superintelligent AI works. The other is that the future of humanity becomes unpredictable and, in some sense, out of control from the moment of the singularity onwards.

So, should we be worried, or optimistic, about the technological singularity? Well I think we should be a little worried – cautious and prepared may be a better way of putting it – and at the same time a little optimistic (that’s the part of me that would like to live in Iain M Banks’ The Culture). But I don’t believe we need to be obsessively worried by a hypothesised existential risk to humanity. Why? Because, for the risk to become real, a sequence of things all need to happen. It’s a sequence of big ifs. If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works [1], and if it then succeeds in improving itself to produce super-intelligent AI [2], and if that super-AI, either accidentally or maliciously, starts to consume resources, and if we fail to pull the plug then, yes, we may well have a problem. The risk, while not impossible, is improbable.

By worrying unnecessarily I think we’re falling into a trap: the fallacy of privileging the hypothesis. And – perhaps worse – taking our eyes off other risks that we should really be worrying about: like man-made climate change, or bioterrorism. Let me illustrate what I mean. Imagine I ask you to consider the possibility that we invent faster than light travel sometime in the next 100 years. Then I worry you by outlining all sorts of nightmare scenarios that might follow from the misuse of this technology. At the end of it you’ll be thinking: my god, never mind climate change, we need to stop all FTL research right now. 

Wait a minute, I hear you say, there are lots of AI systems in the world already - surely it's just a matter of time? Yes, we do have lots of AI systems, like chess programs, search engines, automated financial transaction systems, or the software in driverless cars. And some AI systems are already smarter than most humans, like chess programs or language translation systems. Some are as good as some humans, like driverless cars or natural speech recognition systems (like Siri), and will soon be better than most humans. But none of this already-as-smart-as-some-humans AI has brought about the end of civilisation (although I'm suspiciously eyeing the financial transaction systems). The reason is that these are all narrow-AI systems: very good at doing just one thing.

A human-equivalent AI would need to be a generalist, like we humans. It would need to be able to learn, most likely by developing over the course of some years, then generalise what it has learned – in the same way that you and I learned as toddlers that wooden blocks could be stacked, banged together to make a noise, or as something to stand on to reach a bookshelf. It would need to understand meaning and context, be able to synthesise new knowledge, have intentionality and – in all likelihood – be self-aware, so it understands what it means to have agency in the world.

There is a huge gulf between present day narrow-AI systems and the kind of Artificial General Intelligence I have outlined [3]. Opinions vary of course, but I think it’s as wide a gulf as that between current space flight and practical faster than light spaceflight; wider perhaps, because we don’t yet have a theory of general intelligence, whereas there are several candidate FTL drives consistent with general relativity, like the Alcubierre drive.

So I don’t think we need to be obsessing about the risk of superintelligent AI but, as hinted earlier, I do think we need to be cautious and prepared. In a Guardian podcast last week philosopher Nick Bostrom explained that there are two big problems, which he calls competency and control. The first is how to make super intelligent AI, the second is how to control it (i.e. to mitigate the risks). He says hardly anyone is working on the control problem, whereas loads of people are going hell for leather on the first. On this I 100% agree, and I’m one of the small number of people working on the control problem.

I’ve been a strong advocate of robot ethics for a number of years. In 2010 I was part of a group that drew up a set of principles of robotics – principles that apply equally to AI systems. I strongly believe that science and technology research should be undertaken within a framework of responsible innovation, and have argued that we should be thinking about subjecting robotics and AI research to ethical approval, in the same way that we do for human subject research. And recently I’ve started work towards making ethical robots. This is not just to mitigate future risks, but because the kind of not-very-intelligent robots we make in the very near future will need to be ethical as well as safe. I think we should be worrying about present day AI rather than future superintelligent AI.


Here are the comments posted in response to this article. I replied to a number of these, but ran out of time before comments were closed on 13 August. If you posted a late comment and didn't get a reply from me (but were expecting one) please re-post your comment here.

Notes:
[1] Each of these ifs needs detailed consideration. I really only touch upon the first here: the likelihood of achieving human-equivalent AI (or AGI). But consider the second: for that AGI to understand itself well enough to then re-invent itself, hence triggering an intelligence explosion, is not a given. An AGI as smart and capable as most humans would not be sufficient; it would need the complete knowledge of its designer (or, more likely, of the entire team who designed it), and then more besides: it would need to be capable of additional insights that its team of human designers somehow missed. Not impossible, but surely very unlikely.
[2] Take the third if: that the AGI succeeds in improving itself. There seems to me no sound basis for arguing that it should be easy for an AGI - even one as smart as a very smart cognitive scientist - to figure out how to improve itself. Surely it is more logical to suppose that each incremental increase in intelligence will be harder than the last, acting as a brake on the self-improving AI. Thus I think an intelligence explosion is also very unlikely.
[3] One of the most compelling explanations for the profound difficulty of AGI is by David Deutsch: Philosophy will be the key that unlocks artificial intelligence.

Related blog posts:
Why robots will not be smarter than humans by 2029
Estimating the energy cost of evolution
Ethical Robots: some technical and ethical challenges

Tuesday, August 19, 2014

In praise of robot football

Republished here is a short piece for The Conversation, 4-4-2 becomes 0101: inside the competitive world of robot football, published 4 August 2014.

The whistle has just been blown on one of the most thrilling events on the international sporting calendar. It took place in Brazil and pitted teams from all over the world against each other, each hoping to make it into the history books. But no managers were fired, no grass had to be watered and certainly no one got bitten. The event was the RoboCup, a tournament that sees professional footballers replaced by robots. It's one of a number of regular tournaments for teams of programmers and robotics experts to show off their latest work.

The RoboCup Standard Platform League matches play out on a much smaller scale than your average World Cup match. An arena of around 6 metres by 9 metres is marked out as a miniature pitch, and 10, rather than 22, players file on to battle it out. The players are NAO robots, state-of-the-art bipedal humanoid robots which stand about 60cm tall.

This is not what you might describe as a high-speed contest. The robots walk to take a kick or a tackle and, really, waddle might be a more apt word for their approach. The ball never gets lofted into the air and that’s probably for the best – a header might cause a serious malfunction.

2014 RoboCup SPL Grand Final

But the game is far from boring. Sitting around the arena, boys and girls, with family standing behind, are rapt, cheering with every contact. And make no mistake, the robots are properly playing. They pass, position and defend just like human players.

On either side of the pitch a row of desks can be found. This is where the brains behind the teams sit. Behind a row of laptops, they anxiously watch their players perform. But they are not controlling the robots. These coder/managers send the command to start the players when the referee signals kick-off but during the match the robots are completely autonomous.

This is what makes robot football so extraordinary. The robots are not just being moved around the pitch by remote control; they are making their own decisions. They control where they run (or waddle), pass the ball and shoot for the goal without any live direction from a human. Their choices are based on what they see and the position of the ball, their teammates and the opposing team.

It’s what’s inside that counts

While a team of human players often comes complete with a dazzling array of ridiculous haircuts and tattoos, it is much harder to tell a team of robots apart. All the players are physically identical – the only visible differences on a robot football pitch are coloured markings to differentiate the two teams.

But appearances can be deceptive. Under their plastic skins the teams are far from the same. Each runs completely different software that has been painstakingly crafted by the team coders. The software to make these robots play football cannot be downloaded from an app store; it has to be written from scratch. Football is a complex sport and there are potentially limitless strategies that a team could use to win. This is hard-core coding.

The contest is, in effect, a battle of software. All things being equal – and at the moment they pretty much are – the team with the smartest programming, coding the cleverest plays, will emerge victorious. At the end of the first half the robots are brought to a halt. At this point the team coders can be seen furiously attacking their laptops. This is their chance to quickly modify their robots' software after seeing how they performed in the first half. They might have as little as ten minutes to do it, which seems like a risky strategy.

There’s a chance that the coders could make a mistake that renders the robots incapable of doing anything at all, let alone play a better game, but it’s a chance worth taking. If, in the first-half, the other team breaks out some nifty new moves, running rings (perhaps literally) around their opponents, this is the best opportunity the coders will get to raise their team’s game. It’s the robot equivalent of the tough talking in the half-time dressing room.

It’s easy to see why Robocup and the FIRA world cup, the two major international competitions, are so successful. Both contests have been running since around 1996. Some teams enter every year, building tremendous experience and a sophisticated code base. And several world-leading research groups use these contests as a test-bed for new approaches to multi-robot collaboration, publishing their findings in leading robotics journals afterwards.

As a robotics competition, robot football ticks all the boxes: a game with universal appeal that is also hugely demanding for robots, a fun way for young roboticists to learn robot programming, and a great spectator sport too.


Acknowledgements: this article was commissioned and edited by The Conversation Technology Editor Laura Hood.

Related blog posts:
FIRA 2012 Robot World Cup to be hosted by the Bristol Robotics Lab