I was interviewed by John Arlidge on Saturday, who was researching a piece on the American Association for Artificial Intelligence meeting, earlier reported in the New Scientist under the title "Smart machines: what's the worst that could happen?".
John's article in the Sunday Times was, in my view, a more-or-less reasonable account of what's actually a rather dull story: a group of senior researchers in AI getting together to discuss setting up ethical and design guidelines for future AI-based systems. Well, good. That's exactly what we should expect to happen and, indeed, the AAAI group are probably a bit late off the mark. An EU initiative in Roboethics has been underway since 2004/05: here is a recent draft of the EURON Roboethics Roadmap. The South Korean government has been reported to be working on a robot ethics charter, and the venerable International Organization for Standardization (ISO) has had a group working for a couple of years now on a new ISO standard for intelligent robots.
Unfortunately a sub-editor (I guess) chose to give the piece the lurid title "Scientists fear a revolt by killer robots". Sorry, guys. I know it doesn't make for good headlines, but we scientists do not fear a revolt by killer robots.
Yes, autonomous robots will demand some new - possibly radical - approaches to safety, reliability and ethics, and, yes, a good deal of effort needs to go into this. But these efforts are underway not because of some secret fear of killer robots taking over. It's just good engineering practice.