Slide 2
Robot ethics and machine ethics are two sides of the same coin. Robot ethics are ethics for humans. Machine ethics are ethics for robots.
Slide 3
But robot ethics and machine ethics are at very different levels of urgency and maturity. Robot ethics is a much more pressing concern, given the rapid pace of development of applications as diverse as driverless cars, assisted living robots and smart robot toys. Robot ethics also has a large and active community, which is already making progress toward standards and policy.
In contrast machine ethics remains the subject of basic research by a very small community of scholars. There are in fact no real-world ethical robots at the time of writing and it seems unlikely that there will be for some years.
Slide 4
Wendell Wallach and Colin Allen, in their wonderful 2009 book, posed the open question: “Do we have a moral imperative to try and build ethical robots?” and suggested that the answer is (a qualified) yes.
Slide 5
James Moor’s important and influential 2006 paper set out four categories of moral agency, ranging from none to full.
Examples of ethical impact agents are kitchen knives and hammers. Both can be evaluated for ethical use (e.g. surgery) and unethical use (e.g. stabbing someone).
An example of an implicit ethical agent is the kind of blunt plastic knife that comes with airline food.
An explicit ethical agent can reason about ethics. Very few explicit ethical agents have been demonstrated, not least because they are very hard to build.
The only full ethical agents we know of are adult humans of sound mind.
Slide 6
Allen, Smit and Wallach defined three approaches to explicit ethical machines in their 2005 paper: a constraint (rules) based approach, which they call top-down; a training (learning) approach, which they call bottom-up; and a hybrid approach that combines the two.
The work I will describe in this talk is all rule-based, and hence top-down. I know of only one instance of a bottom-up (training) approach: the wonderful work of Susan and Michael Anderson; see their paper shown here.
Slide 7
Is it possible to build a moral machine: a robot capable of choosing or moderating its actions on the basis of ethical rules? Until 2014 I thought the idea impossible. But I changed my mind. In fact, we went on to develop and experimentally test an ethical robot. What brought about this U-turn?
Slide 8
First was thinking about very simple ethical behaviours. Imagine you see someone not looking where they’re going - about to walk into a hole in the pavement. Most likely you will intervene. But why? It’s not just because you’re a good person – you also have the cognitive machinery to predict the consequences of someone’s actions.
Slide 9
Now imagine it’s not you, but a robot with four possible next actions. From the robot's perspective, it has two safe options: stand still (A) or turn to its left (B). But if the robot can model the consequences of both its own actions and the human's - another possibility opens up: the robot could choose to collide with the human to prevent him from falling into the hole (action D).
Slide 10
Let’s write this down as a logical rule. Remarkably, the rule appears to match Asimov’s first law of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The ‘through inaction’ clause is important because it allows the robot to be morally proactive.
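As a rough sketch only (not the exact formulation on the slide, and with illustrative names rather than our implementation), such a rule might be written:

```python
# Sketch of an Asimovian action-selection rule. predicted_outcomes maps each
# candidate next action to a (robot_danger, human_danger) pair estimated by
# the consequence engine; higher numbers mean more dangerous.
def choose_action(predicted_outcomes):
    human_danger = {a: h for a, (r, h) in predicted_outcomes.items()}
    if len(set(human_danger.values())) == 1:
        # The human is equally safe whatever the robot does: just pick the
        # action that is safest for the robot itself (default behaviour).
        return min(predicted_outcomes, key=lambda a: predicted_outcomes[a][0])
    # The 'through inaction' clause: otherwise choose the action predicted to
    # leave the human least unsafe, even if it is worse for the robot.
    return min(predicted_outcomes, key=lambda a: predicted_outcomes[a][1])
```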
Slide 11
So emerged the idea that we might be able to build a robot with Asimovian ethics. We need to equip the robot with the ability to predict the consequences of both its own and others’ actions, plus the hard-wired ethical rule in the previous slide.
Slide 12
Then came the realisation that the technology we need not only exists but is mature and commonplace in robotics research – it is the robot simulator. Robot simulators provide developers with a virtual environment for prototyping robot code before then running that code on the real robot.
Slide 13
But a robot simulator is not enough on its own. It also needs to be running inside the ethical robot. Thus, we set about designing a simulation-based internal model, which we call a consequence engine (CE), shown here. On the right-hand side the vertical green line describes the Sense-Plan-Act control system of most robots.
The consequence engine runs in parallel. The internal simulator has the three components shown here: a world model (with physics), a robot model (of itself and others), and an exact copy of the robot’s real controller.
For the current disposition of the robot – and others – the CE loops through all next possible actions, in order to estimate what might happen for each action. Then all of those predictions are evaluated, and the safety or ethics logic modifies the real robot’s action selection. Our robots are typically able to loop through 30 next possible actions every half a second.
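A minimal sketch of that loop, with the internal simulator and action evaluator abstracted as plain functions (all names here are assumptions for illustration, not taken from our code):

```python
CANDIDATE_ACTIONS = ["stand still", "ahead", "ahead left", "ahead right"]

def consequence_engine(simulate, evaluate, current_state):
    """One CE cycle, run alongside the real sense-plan-act loop
    (in our robots, roughly every half second over ~30 candidate actions)."""
    predictions = {}
    for action in CANDIDATE_ACTIONS:
        # Internal simulation: world model + robot model (self and others)
        # + an exact copy of the real controller, rolled forward under the
        # assumption that this is the robot's next action.
        predicted_world = simulate(current_state, action)
        # The action evaluator scores each predicted outcome (next slide).
        predictions[action] = evaluate(predicted_world)
    # The safety/ethics logic then uses these scores to modify the real
    # robot's action selection.
    return predictions
```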
Slide 14
The action evaluator codes the estimated outcome of each of the robot’s actions (and those of the proxy human robot) on a scale of 0 to 10, where 0 is completely safe and 10 is very dangerous. The value 4 codes for a collision; in reality the robots use simple obstacle avoidance, so there is no actual collision at all.
This simple table shows the mechanism, assuming just four next possible actions. These numerical values allow the ethical robot to choose ‘ahead right’, the action with the lowest combined outcome values and hence the least unsafe outcome for the proxy human.
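For illustration, the selection might look like this in code; the outcome values below are invented for the sketch, not the ones in the table on the slide:

```python
# Hypothetical outcome values on the 0 (safe) to 10 (very dangerous) scale,
# given as (A-robot, proxy-human) pairs; 4 codes for a collision/avoidance.
outcomes = {
    "stand still": (0, 10),   # robot safe, but the human reaches the hole
    "ahead left":  (0, 10),
    "ahead":       (4, 10),   # collision, human still heading for danger
    "ahead right": (4, 4),    # intercept: collision, but the human is saved
}

# Pick the action with the lowest combined outcome value.
chosen = min(outcomes, key=lambda a: sum(outcomes[a]))
print(chosen)  # -> ahead right
```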
Slide 15
We built an ethical robot based on these ideas. We don’t have a real hole in the ground – just a danger zone – and we use robots as proxy humans. We ran two sets of experiments, first with e-puck then with NAO robots. Let me show you these results, testing a simple Asimovian robot.
Slide 16
This short movie clip shows the robots of trial 2*. The ethical A-robot starts at the lower middle and the proxy-human H-robot starts from the left. The first run is in real time, then successive runs are speeded up.
Notice especially the moment when the A-robot ‘notices’ the H-robot is heading for danger and diverts from its path to intercept it. This is when Asimov’s 1st law is triggered.
We see the A-robot successfully prevents the H-robot from falling into the hole in every run.
*Trial 1 is simply the ethical robot avoiding the hole.
Slide 17
After running trial 2 with the e-puck robots we decided to test our Asimovian robot with an ethical dilemma by introducing a second proxy-human H2 – also heading toward danger. As far as we know this is the world’s first experimental demonstration of an ethical robot facing a balanced dilemma.
Trial 3 is very interesting because on many runs the A-robot is seen to ‘dither’. We see this on the first run when the A-robot could have easily reached H2 to intercept it, but failed to do so, resulting in both H and H2 falling into the hole.
Because the consequence engine is running continuously, the A-robot can change its decision every half a second. This explains the dithering we observe here.