Wednesday, October 30, 2013

Ethical Robots: some technical and ethical challenges

Here are the slides of my keynote at last week's excellent EUCog meeting: Social and Ethical Aspects of Cognitive Systems. And the talk itself is here, on YouTube.

I've been talking about robot ethics for several years now, but that's mostly been about how we roboticists must be responsible and mindful of the societal impact of our creations. Two years ago I wrote - in my Very Short Introduction to Robotics - that robots cannot be ethical. Since then I've completely changed my mind*. I now think there is a way of making a robot that is at least minimally ethical. It's a huge technical challenge which, in turn, raises new ethical questions. For instance: if we can build ethical robots, should we? Must we? Would we have an ethical duty to do so? After all, the alternative would be to build amoral robots. Or would building ethical robots create a new set of ethical problems - an ethical Pandora's box?

The talk was in three parts.

Part 1: here I outline why and how roboticists must be ethical. This is essentially a recap of previous talks. I start with the societal context: the frustrating reality that even when we meet to discuss robot ethics, this can be misinterpreted as scientists fearing a revolt of killer robots. This kind of media reaction is just one part of three linked expectation gaps, in what I characterise as a crisis of expectations. I then outline a few ethical problems in robotics - just as examples. Here I argue it's important to link safe and ethical behaviour - something that I return to later. Then I recap the five draft principles of robotics.

Part 2: here I ask the question: what if we could make ethical robots? I outline new thinking which brings together the idea of robots with internal models and Dennett's Tower of Generate and Test, as a way of making robots that can predict the consequences of their own actions. I then outline a generic control architecture for robot safety, even in unpredictable environments. The important thing about this approach is that the robot can generate its next possible actions, test them in its internal model, and evaluate the safety consequences of each one. The unsafe actions are then inhibited - and the robot controller determines which of the remaining safe actions is chosen, using its usual action-selection mechanism. I then argue that it is surprisingly easy to extend this architecture to ethical behaviour, allowing the robot to predict which of its actions would minimise harm to a human in its environment. This appears to represent an implementation of Asimov's 1st and 3rd laws. I outline the significant technical challenges that would need to be overcome to make this work.
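To make that generate-test-inhibit cycle concrete, here is a minimal sketch in Python. To be clear, this is not the architecture's actual code: the thresholds, the Outcome fields, and the simulate and action_selection functions are hypothetical stand-ins for the robot's internal model and its normal controller.

```python
from dataclasses import dataclass

# Hypothetical thresholds: tolerable levels of predicted harm.
SAFETY_THRESHOLD = 0.5   # harm to the robot itself
ETHICAL_THRESHOLD = 0.2  # harm to the human in the internal model

@dataclass
class Outcome:
    robot_harm: float  # predicted harm to the robot (0 = none, 1 = destroyed)
    human_harm: float  # predicted harm to the human (0 = none, 1 = severe)

def choose_action(state, candidate_actions, simulate, action_selection):
    """Generate next possible actions, test each in the internal model
    (simulate), inhibit those predicted to be unsafe or harmful, then let
    the usual action-selection mechanism choose among the rest."""
    permitted = []
    for action in candidate_actions:
        outcome = simulate(state, action)           # generate and test
        if outcome.robot_harm > SAFETY_THRESHOLD:
            continue                                # inhibit: unsafe for the robot
        if outcome.human_harm > ETHICAL_THRESHOLD:
            continue                                # inhibit: predicted to harm the human
        permitted.append(action)
    if not permitted:
        return "stop"                               # fall back to a default safe action
    return action_selection(permitted)              # normal selection over the safe actions
```

The key design point is that the safety/ethics layer only vetoes actions; everything that survives the internal-model test is still chosen by whatever action-selection mechanism the robot already has.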

But, assuming such a robot could be built, how ethical would it be? I suggest that, with only a subset of Asimovian ethics, it probably wouldn't satisfy an ethicist or moral philosopher. But nevertheless, I argue there's a good chance that such a minimally ethical robot could help to increase its users' trust in it.

Part 3: in the final part of the talk I conclude with some ethical questions. The first is: if we could build an ethical robot, are we ethically compelled to do so? Some argue that we have an ethical duty to try and build moral machines. I agree. But the counter argument, my second ethical question, is: are there ethical hazards? Are we opening a kind of ethical Pandora's box by building robots that might have an implicit claim to rights, or responsibilities? I don't mean that such a robot would ask for rights, but rather that, because it has some moral agency, we might think it should be accorded rights. I conclude that we should try and build ethical robots. The benefits, I think, far outweigh the ethical hazards, which in any event can be minimised.


*It was not so much an epiphany as a slow conversion from sceptic to believer. I have my long-term collaborator Michael Fisher to thank for doggedly arguing with me that it was worth thinking deeply about how to build ethical robots.

Sunday, October 20, 2013

A Close(ish) Encounter with Voyager 2

It is summer 1985. I'm visiting Caltech with colleague and PhD supervisor Rod Goodman. Rod has just been appointed in the Electrical Engineering Department at Caltech, and I'm still on a high from finishing my PhD in Information Theory. Exciting times.

Rod and I are invited to visit the Jet Propulsion Laboratory (JPL). It's my second visit to JPL, but this one turned into probably the most inspirational afternoon of my life. Let me explain.

After the tour, one of the good folks who were showing us round asked if I would like to meet some of the post-docs in the lab. As he put it: the fancy control room with the big wall screens is really for the senators and congressmen - this is where the real work gets done. So, while Rod went off to discuss stuff with his new faculty colleagues, I spent a couple of hours in a back-room lab with a Caltech post-doc working on - as he put it - a summer project. I'm ashamed to say I don't recall his name, so I'll call him Josh. Very nice guy, a real Southern Californian dude.

Now, at this point, I should explain that there was a real buzz at JPL. Voyager 2, which had already more than met its mission objectives, was now on course for Uranus and due to arrive in January 1986. Clearly a significant amount of work was going into planning for that event: the first ever opportunity to take a close look at the seventh planet.

So, Josh is sitting at a bench and in front of him is a well-used Apple II computer. And behind the Apple II is a small display screen so old that the phosphor is burned. This used to happen with CRT computer screens - it's the reason screen savers were invented. Beside the computer are notebooks and manuals, including, prominently, a piece of graph paper with a half-completed plot. Josh then starts to explain: one of the cameras on Voyager 2 has (they think) a tiny piece of grit* in the camera turntable - the mechanism that allows the camera to be panned. This space grit means that the turntable is not moving as freely as it should. It's obviously extremely important that the cameras can be pointed accurately when Voyager gets to Uranus, so Josh's project is to figure out how much torque is (now) needed to move the camera turntable to any desired position. In other words, to re-calibrate the camera's controller.

At this point I stop Josh. Let me get this straight: there's a spacecraft further from Earth, and flying faster, than any man-made object ever, and your summer project is to do experiments with one of its cameras, using your Apple II computer. Josh: yeah, that's right.

Josh then explains the process. He constructs a data packet on his Apple II, containing the control commands to address the camera's turntable motor and instruct it to drive the turntable. As soon as he's happy that the data packet is correct, he sends it - via the RS232 connection at the back of his Apple II - to a JPL computer (which, I guess, would be a mainframe). That computer then, in turn, puts Josh's data packet together with others from other engineers and scientists also working on Voyager 2, after - I assume - carefully validating the correctness of these commands. The composite data packet is then sent to the Deep Space Network (DSN) to be transmitted, via one of the DSN's big radio telescopes, to Voyager 2.

Then, some time later, the same data packet is received by Voyager 2, decoded and de-constructed, and said camera turntable moves a little bit. The camera then sends some feedback back to Earth, again via a composite data packet: the number of degrees the turntable moved. So a day or two later, via a mind-bogglingly complex process involving several radio telescopes and some very heavy-duty error-correcting codes, the turntable feedback arrives back at Josh's desktop Apple II with the burned-phosphor screen. This is where the graph paper comes in. Josh picks up his pencil and plots another point on his camera-turntable calibration graph. He then repeats the process until the graph is complete. It clearly worked, because six months later Voyager 2 produced remarkable images of Uranus and its moons.
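In essence, the experiment Josh was running boils down to a very simple loop once the whole uplink/downlink chain is hidden behind a single call. The toy sketch below is just to show the shape of it: every name and number is invented, and the real round trip of course involved the JPL mainframe, the DSN and hours of light time rather than one function call.

```python
def send_turntable_command(drive_level):
    # Stand-in for the real round trip: command packet up to Voyager 2,
    # feedback (degrees actually moved) back down. Here, a toy model of a
    # sticky turntable that loses some drive to friction.
    friction = 2.0                                   # hypothetical drive lost to the sticky bearing
    return max(0.0, drive_level - friction) * 1.5    # degrees moved (invented scaling)

def calibrate(drive_levels):
    """Step through a series of drive commands and record (drive, degrees
    moved) pairs - the points Josh was plotting on his graph paper."""
    return [(level, send_turntable_command(level)) for level in drive_levels]

print(calibrate([0, 2, 4, 6, 8, 10]))
```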

This was, without doubt, the most fantastic lab experiment I'd ever seen. From his humble Apple II in Pasadena, Josh was doing tests on a camera rig on a spacecraft about 1.7 billion miles away. For a Thunderbirds kid, I really was living in the future. And being a space nerd I already had some idea of the engineering involved in NASA's deep space missions, but that afternoon in 1985 really brought home to me the extraordinary systems engineering that made these missions possible. Given the very long project lifetimes - Voyager 2 was designed in the early 1970s, launched in 1977, and is still returning valuable science today - its engineers had to design for the long haul: missions that would extend over several generations. Systems design like this requires genius, far-sightedness and technical risk-taking. Engineering that still inspires me today.

*it later transpired that the problem was depleted lubricant, not space grit.