Friday, May 06, 2011

Revisiting Asimov: the Ethical Roboticist

Well, it's taken a while, but the draft revised 'laws of robotics' have now been published. The New Scientist article Roboethics for Humans, reporting on the EPSRC/AHRC initiative in roboethics, appears in this week's issue (Issue 2811, 7 May 2011). These new draft ethical principles emerged from a workshop on ethical, legal and societal issues in robotics.

The main outcome from the workshop was a draft statement aimed at initiating a debate within the robotics research and industry community, and more widely. That statement is framed by, first, a set of high-level messages for researchers and the public which encourage responsibility from the robotics community, and hence (we hope) trust in the work of that community. And second, a revised and updated version of Asimov’s three laws of robotics for designers and users of robots; not laws for robots, but guiding principles for roboticists.

The seven high-level messages are:
  1. We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
  2. Bad practice (in robotics) hurts us all.
  3. Addressing obvious public concerns (about robots) will help us all make progress.
  4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
  5. To understand the context and consequences of our research we should work with experts from other disciplines including: social sciences, law, philosophy and the arts.
  6. We should consider the ethics of transparency: are there limits to what should be openly available?
  7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.
Isaac Asimov's famous 'laws of robotics' first appeared in 1942 in his short story Runaround. They are (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law, and (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



Asimov’s laws updated: instead of 'laws for robots' our revision is a set of five draft 'ethical principles for robotics', i.e. moral precepts for researchers, designers, manufacturers, suppliers and maintainers of robots. We propose:
  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  2. Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws and fundamental rights & freedoms, including privacy.
  3. Robots are products. They should be designed using processes which assure their safety and security.
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  5. The person with legal responsibility for a robot should be attributed.
Now it's important to say, firstly, that these are the work of a group of people, so the wording represents a negotiated compromise*. Secondly, they are a first draft. The draft was circulated within the UK robotics community in February, then last month presented to a workshop on Ethical, Legal and Societal issues at the European Robotics Forum in Sweden. So we already have great feedback - which is being collected by EPSRC - but that feedback has not yet been incorporated into any revisions. Thirdly, there is detailed commentary, explaining in particular the thinking and rationale for the 7 messages and 5 ethical principles above. That commentary can be found here.

Comments and criticism welcome! To give feedback, either:
  • post a comment in response to this blog,
  • email EPSRC at RoboticsRetreat@epsrc.ac.uk, or
  • contact me directly, or any of the workshop members listed in the commentary.

*So, while I am a passionate advocate of ethical robotics and very happy to defend the approach that we've taken here, there are some detailed aspects of these principles that I'm not 100% happy with.

16 comments:

  1. Hi Alan,
    The first "law for roboticists" is quite odd!

    "not be designed solely or primarily to kill or harm humans, except in the interests of national security."

    This in fact allows almost anything in the use of military robots.
    In my opinion, "interests of national security" has no meaning. Or, more precisely, it can mean anything! It is a matter of political point of view.

    Otherwise, a very good initiative that opens an important debate, at last!

  2. Hi José

    Thanks for your comment - great that you agree this is an important debate.

    Yes the qualifier "except in the interests of national security" is the part that I'm least happy with. It generated a good deal of debate when we were drafting the principles.

    I think it's fair to say that it was a pragmatic compromise - accepting that the offensive use of military robots, and in particular of weaponised robots, is already happening. So, on balance, we thought it better to add this qualifier so that roboticists who work on military robots can - we hope - accept the value of these ethical principles as a whole, rather than simply rejecting them from the outset.

    Given that one of our aims is to increase public trust in robotics and roboticists, establishing the principle of the unacceptability of designed-to-be-lethal robots in civil society is clearly paramount.

  3. My initial reaction was the same as José's. Do you think we are approaching a time, or are already in a time, when roboticists could face a similar moral dilemma to the physicists on the Manhattan Project?

    Number four is particularly interesting, I think, especially as there could be situations in which transparency might not be desired by the users - robot pets for instance. I wonder if there should be something about transparency of purpose. I'm imagining robots designed for specific tasks. If there were robot doctors and (cheaper) robot nurses, one could conceive of a situation in which the nurses were passed off as doctors to save money. The average patient wouldn't know the difference. There are probably better examples, but I hope you see what I mean.

    P.S. Thank you for plugging my 'Gender Delusion' piece a few weeks ago.

  4. Hi Joseph

    Many thanks for your comments.

    In reply to your 1st: yes, I think we are already in that time. There is, for instance, already a committee calling for international robot arms control: http://www.icrac.co.uk/

    Glad you like #4. Yes, I think there are other situations where the human wants the greatest possible illusion; sex robots almost certainly fall into this category. But our concern is much more with vulnerable people (children, or people with cognitive impairments, for instance) who might be less able to judge whether the robot is an illusion or the real thing (pet, person, etc.). It's important that such individuals are not exploited by, for instance, unscrupulous manufacturers of robots (think of the Tamagotchi effect).

  5. I found this article fascinating. It reminds me of a little blurb I found in the preface of the Handbook of Industrial Robotics by Shimon Y. Nof. In it he says:

    "When Isaac Asimov wrote his Three Laws of Robotics in 1940, his purpose was to guide robots in their attitude toward humans. At present, our society is more concerned with our own attitude toward robots. Therefore, for this first edition of the Handbook of Industrial Robotics, I offer to add the following laws that, together with future ones, may comprise the "Robotic Codex."

    THE THREE LAWS OF ROBOTICS APPLICATIONS

    1. Robots must continue to replace people on dangerous jobs. (This benefits all.)
    2. Robots must continue to replace people on jobs people do not want to do. (This also benefits all.)
    3. Robots should replace people on jobs robots do more economically. (This will initially disadvantage many, but inevitably will benefit all as in the first and second laws.)"

    With the trends in automation and their social and microeconomic advantages, I can't help but think that this kind of progression is inevitable. However, as the third law hints, it does raise economic questions, both about robots replacing people in jobs and about the abundance of goods and services they can provide more cheaply than any business can survive on.

    This is an area I think needs a lot of thought and discussion because of the collision of our new modes of production and our old modes of distribution. Perhaps something like Technocracy can help with this.

  6. I totally agree that humans rather than robots are responsible for a robot’s actions, as with any machine. Nonetheless, I believe robots pose problems about responsibility that other machines do not. Usually we design a machine for a purpose and operate it. In the case of a robot this may not always hold. We design a robot, set it goals, and it operates by itself. The goals of a robot are completely determined by us, but the way it operates need not be. I would suggest a computer program that automatically trades on the stock market might also be classed as a (ro)bot, provided we think the terms instrument and machine are sometimes interchangeable. I would further suggest such a machine might act unethically.

    It seems to me there are two possible ways of ensuring the actions a (ro)bot takes are ethical. Firstly, the goals of a (ro)bot must be adequately specified in order to prevent it producing unethical actions. Secondly, if a (ro)bot can operate in a way not completely determined by the designer or operator, then the (ro)bot must have some inbuilt standards against which it can check the morality of any of its proposed actions. The first way seems to me to be impractical. I worry whether, as the tasks become more complex, it is possible to completely specify the goals of a (ro)bot so that none of its actions can be considered unethical. Secondly, I worry about the ends a designer sets a (ro)bot meaning the means it takes to achieve those ends is always justified. For this reason it seems to me that at some time there must be (ro)bots with inbuilt ethical standards to control their operations.

    An objector might point out that if this second way is adopted, the 2nd law of robotics is contravened. She might argue this gives limited autonomy to (ro)bots, and with this autonomy comes responsibility. In reply I would argue self-government without ‘caring about’ is not autonomy; for further comments on autonomy and ‘caring about’ see wooler.scottus. Indeed, I would be doubtful as to whether (ro)bots can ever be autonomous. Let it be accepted that only humans can be held responsible for the actions of (ro)bots. Who then should be responsible if a robot acts unethically: the operator, the software designer or the ethicist who drew up the moral standards? I would argue no one is clearly responsible, as responsibility has become smeared between a large number of people because the machine is self-governing but not autonomous. I would suggest this smearing of responsibility happens in all complex systems and is not peculiar to robotics. I would further suggest that both Philosophy and the Law need to seriously consider the consequences of this smearing.

  7. Thank you Kolzene and John for your excellent comments - very much appreciated.

    Kolzene - I am grateful to you for pointing out the 3 laws of robotics applications by Shimon Nof. His 'codex' is an interesting perspective on what robots *should* be used for and links nicely with our high-level message #1: "We believe robots have the potential to provide immense positive impact to society." Thanks also for the link to technocracy.ca.

    John - your thoughtful comments about the consequences of greater levels of robot autonomy are well made. I completely agree that a consequence of greater autonomy is that a robot may undertake actions not anticipated by the designer. However, I don't agree that this means the robot must then have built-in ethical controls. I think that increasing autonomy means the designer must go to great lengths to assure the safety of the robot, but being safe isn't the same as being ethical. It seems to me that safety can be assured by good design, compliance with standards, rigorous test & validation, and so on - and we know how to do those things reasonably well. Ethical behaviour would be very much harder to design and - in my judgement at least - is well beyond our current capabilities. In the meantime we can, I think, realistically contemplate building safe autonomous robots that would not be required to behave ethically. A good way of thinking about these robots is that they would be like domestic robot animals which, by and large, we expect to be safe, but which we don't expect to make ethical judgments.

    Thank you also for your interesting blog post on http://woolerscottus.blogspot.com.

  8. Eur.Ing Donald Southey, September 02, 2011 7:38 pm

    Hello Alan,
    I've just come across your blog on The Ethical Roboticist from the notice of the lecture on 2nd Sept - which I wish I could have attended!
    Although an engineer by vocation, I've just published a novel which explores the whole relationship of ethics to robots from another angle (taking in the puzzle of intelligence and consciousness as well!). Would you be interested in a copy to review?
    (I'll watch this blog, or you can contact me at: dsouthey@theiet.org)

  9. Dear Donald

    Thank you for your comment. You might like to see that I've now published the TAROS lecture slides on this blog. Good luck with your book - which I would indeed be interested to read.

    Best wishes
    Alan

  10. The most important rule for roboticists is absent, and this is indicative of the current model of Human thought on creative responsibility within the socio-economic sphere where all robotic/technical development takes place.

    Specifically, "Robots are the responsibility of Human Beings; the choice to exercise the Right to create technology, or any socio-economic outcome in Society, is the result of, and directly attributable to the sovereign source authority of Individual Human Beings."

    Robotic technology MUST ALWAYS remain under the sovereign source authority of Human control. Technology cannot be allowed to make man its dog, but must always be structurally accountable to the framework of management as mankind's best friend.

    In order for this reality to exist, sovereign source authority must enter the structure of our socio-economic design as deployed via the administrative framework of governing systems.

    The current lack thereof betrays the efforts of all creative technologists to empower Human Society through their Individual efforts.

    More Here: http://www.moxytongue.com/2012/02/what-is-sovereign-source-authority.html

  11. Oh excellent, ethics designed by committee - how arrogant! The 3 original laws are, by discussion, shown to be incomplete and, depending on the point of view, able to be circumvented. The simpering wording 'should' is tantamount to burying one's head in the sand and hoping that they, whoever they are, 'shouldn't' - but they probably will.

    Replies
    1. Is not severely criticising another in the guise of anonymity also arrogant?

  12. Rather than the single restriction on "do not kill, or build solely to kill", why not "a robot shall not be built for the primary or intentional purpose of breaking or circumventing the law"?

    I understand that none of these "rules" are airtight, but (I assume) they are drafts of a concept which would require some serious legalese to make truly usable.

  13. ...except in the interests of national security.

    So when two armies are knocking each other off in droves and the senior supervising robot returns to HQ saying "we are getting mauled out there"... and we say "Yeah, but the next revision of Deep Blue/Raspberry Icecream won't ship until Q4", they would be forgiven if they all downed lasers and quit.

  14. One could add to this debate, alongside the notions of super-intelligent machines and the 'technological singularity', another dimension based upon the emerging trend towards the integration of super-intelligence into complex global systems. These global systems provide a super-intelligent environment built upon High Performance Computing, evidence-based decision-making algorithms (including AI) and big data. I consider it equally important to look at the behaviour of this global system itself, rather than (or on top of?) the "generalist human-equivalent behaviour" of the machines, robots or bots. This involves understanding and agreeing on the future of the human societal system, our humanity and the globe we want to live in. Any future general AI must take into account this global view and not only centre on the human itself (not as a chance of our own immortality but as a chance for our future society / world). If this notion is added, we should recognise the need to stress the inclusion of maintaining diversity rather than, or on top of, concentrating on developing (and monitoring and controlling) a "human-equivalent AI". Adding "maintaining diversity" as a key principle to the principles of "responsible innovation in AI and super-intelligent machines" will force super-intelligent environments (beyond the Singularity) to take the future of human (and other) beings well into account when planning their future decisions.

    Replies
    1. Many thanks Dirk for your interesting comments. You remind me of Asimov's 'zeroth' law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
