I'm with about 25 people in a hotel in the New Forest to talk about the ethical, legal and societal issues around robotics. We are a diverse crew: a core of robotics and AI folk, richly complemented by academics in psychology, law, ethics, philosophy, culture, performance and art history. This joint EPSRC/AHRC workshop was an outcome of a discussion on robot ethics at the EPSRC Societal Issues Panel in November 2009. (See also my post The Ethical Roboticist.)
Of course, in any discussion about robot ethics it is inevitable that Asimov's Three Laws of Robotics will come up and, I must admit, I've always insisted that they have no value whatsoever. They were, after all, a fictional device for creating stories with dramatic moral ambiguities - not a serious attempt to draw up a moral code for robots. Today I've been forced to revise that opinion. Remarkably, we have succeeded in drafting a new set of five 'laws' - not for robots themselves but for the designers and operators of robots. (You can't have laws for robots because they are not persons - or at least not for the foreseeable future.)
I can't post them here just yet - a joint statement needs to be drafted and agreed first. But to answer the question in the title of this post: no, robots can't be Three Laws Safe, but they could quite possibly be Five Laws Compliant.
Postscript: here is a much better description of the workshop on Lilian Edwards' excellent blog.