Monday, April 25, 2016

From ethics to regulation and governance

The following text was drafted in response to question 4 of the Parliamentary Science and Technology Committee inquiry on Robotics and Artificial Intelligence: the social, legal and ethical issues raised by developments in robotics and artificial intelligence technologies, and how they should be addressed.

From Ethics to Regulation and Governance

1. Public attitudes. It is well understood that there are public fears around robotics and artificial intelligence. Many of these fears are undoubtedly misplaced, fuelled perhaps by press and media hype, but some are grounded in genuine worries over how the technology might affect, for instance, jobs or privacy. The most recent Eurobarometer survey on autonomous systems showed that the proportion of respondents with an overall positive attitude declined from 70% in the 2012 survey to 64% in 2014. Notably, the 2014 survey showed that the more personal experience people have with robots, the more favourably they tend to think of them: 82% of respondents who have experience with robots hold a positive view, whereas only 60% of those who lack such experience do. Also important is that a significant majority (89%) believe that autonomous systems are a form of technology that requires careful management.

2. Building trust in robotics and artificial intelligence requires a multi-faceted approach. The ethics roadmap outlined here illustrates the key elements that contribute to building public trust. The core idea of the roadmap is that ethics inform standards, which in turn underpin regulation.

3. Ethics are the foundation of trust, and underpin good practice. Principles of good practice can be found in Responsible Research and Innovation (RRI). One example is the 2014 Rome Declaration on RRI, whose six pillars are: Engagement, Gender equality, Education, Ethics, Open Access and Governance. Another is the EPSRC framework for responsible innovation, which incorporates the AREA (Anticipate, Reflect, Engage and Act) approach.

4. The first European work to articulate ethical considerations for robotics was the EURON Roboethics Roadmap.

5. In 2010 a joint AHRC/EPSRC workshop drafted and published the Principles of Robotics for designers, builders and users of robots. The principles are:
  • Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  • Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.
  • Robots are products. They should be designed using processes which assure their safety and security.
  • Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  • The person with legal responsibility for a robot should be attributed.

6. Work by the British Standards Institution technical subcommittee on Robots and Robotic Devices led to publication – in April 2016 – of BS 8611: Guide to the ethical design and application of robots and robotic systems. BS 8611 is not a code of practice; instead it gives “guidance on the identification of potential ethical harm and provides guidelines on safe design, protective measures and information for the design and application of robots”. BS 8611 articulates a broad range of ethical hazards and their mitigation, including societal, application, commercial/financial and environmental risks, and provides designers with guidance on how to assess and then reduce the risks associated with these ethical hazards. The societal hazards include, for example, loss of trust, deception, privacy & confidentiality, addiction and employment.
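
To make the assess-then-reduce idea concrete, here is a minimal sketch of how a designer might record ethical hazards and score them by likelihood and severity. It is a purely hypothetical illustration, not drawn from BS 8611 itself; the hazard entries, scales and threshold are invented for the example.

```python
# Hypothetical ethical-hazard risk register, in the spirit of (but not
# taken from) BS 8611: score each hazard, then flag those needing
# mitigation. Scales and threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class EthicalHazard:
    name: str        # e.g. a societal hazard such as "deception"
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- invented scale
    severity: int    # 1 (negligible) .. 5 (severe)   -- invented scale

    @property
    def risk(self) -> int:
        return self.likelihood * self.severity

REVIEW_THRESHOLD = 8  # invented: risk at or above this needs mitigation

register = [
    EthicalHazard("loss of trust", likelihood=3, severity=3),
    EthicalHazard("deception (machine nature not transparent)", 2, 4),
    EthicalHazard("privacy & confidentiality", 4, 4),
    EthicalHazard("addiction", 2, 2),
]

for hazard in sorted(register, key=lambda h: h.risk, reverse=True):
    flag = "MITIGATE" if hazard.risk >= REVIEW_THRESHOLD else "accept"
    print(f"{hazard.risk:>2}  {flag:<8}  {hazard.name}")
```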

7. The IEEE has recently launched a global initiative on Ethical Considerations in the Design of Autonomous Systems, to encompass all intelligent technologies including robotics, AI, computational intelligence and deep learning.

8. Significant recent work towards regulation was undertaken by the EU project RoboLaw. The primary output of that project is a comprehensive report entitled Guidelines on Regulating Robotics. That report reviews both ethical and legal aspects; the legal analysis covers rights, liability & insurance, privacy and legal capacity. The report focuses on driverless cars, surgical robots, robot prostheses and care robots, and concludes by stating: “The field of robotics is too broad, and the range of legislative domains affected by robotics too wide, to be able to say that robotics by and large can be accommodated within existing legal frameworks or rather require a lex robotica. For some types of applications and some regulatory domains, it might be useful to consider creating new, fine-grained rules that are specifically tailored to the robotics at issue, while for other types of robotics, and for many regulatory fields, robotics can likely be regulated well by smart adaptation of existing laws”.

9. In general, technology is trusted if it brings benefits while also being safe, well regulated and, when accidents happen, subject to robust investigation. One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an excellent safety record. The reason commercial aircraft are so safe is not just good design; it is also the tough safety certification processes and, when things do go wrong, robust processes of air accident investigation. Should driverless cars, for instance, be regulated through a body similar to the Civil Aviation Authority (CAA), with a driverless car equivalent of the Air Accidents Investigation Branch?

10. The primary focus of paragraphs 1 – 9 above is robotics and autonomous systems, and not software artificial intelligence. This reflects the fact that most work toward ethics and regulation has focussed on robotics. Because robots are physical artefacts (which embody AI), they are undoubtedly more readily defined, and hence regulated, than distributed or cloud-based AIs. This, and the already pervasive application of AI (in search engines, machine translation systems or intelligent personal assistant AIs, for example), strongly suggests that greater urgency needs to be directed toward considering the societal and ethical impact of AI, including the governance and regulation of AI.

11. AI systems raise serious questions over trust and transparency:
  • How can we trust the decisions made by AI systems, and – more generally – how can the public have confidence in the use of AI systems in decision making?
  • If an AI system makes a decision that turns out to be disastrously wrong, how do we investigate the logic by which the decision was made?
  • Of course much depends on the consequences of those decisions. Consider decisions that have real consequences for human safety or well-being, such as those made by medical diagnosis AIs or driverless car autopilots. Systems that make such decisions are critical systems.

12. Existing critical software systems are not AI systems, nor do they incorporate AI systems. The reason is that AI systems (and more generally machine learning systems) are generally regarded as impossible to verify for safety-critical applications. The reasons for this need to be understood:
  • First is the problem of verifying systems that learn. Current verification approaches typically assume that the system being verified will never change its behaviour, but a system that learns does – by definition – change its behaviour, so any verification is likely to be rendered invalid after the system has learned (the first sketch after this list illustrates the point).
  • Second is the black box problem. Modern AI systems, and especially those receiving the greatest attention, so-called Deep Learning systems, are based on Artificial Neural Networks (ANNs). A characteristic of ANNs is that, after the ANN has been trained with data sets (which may be very large, so-called “big data” sets – which itself poses another problem for verification), it is in practice impossible to examine the internal structure of the ANN in order to understand why and how it makes a particular decision. The decision-making process of an ANN is not transparent (see the second sketch after this list).
  • The problem of verification and validation of systems that learn may not be intractable, but it is the subject of current research; see for example work on verification and validation of autonomous systems. The black box problem may be intractable for ANNs, but it could be avoided by using algorithmic approaches to AI (i.e. approaches that do not use ANNs).
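
The first sketch below is a minimal, purely hypothetical Python illustration of the verification problem: a toy controller passes an exhaustive safety check, then a single learning update silently invalidates that check. The controller, the property and all the numbers are invented for the example.

```python
# Hypothetical sketch of why verifying a system that learns is fragile.
# Everything here (controller, property, numbers) is invented.

def make_controller(weight):
    # Toy learned controller: brake when the weighted distance is small.
    return lambda distance: "brake" if weight * distance < 20.0 else "cruise"

def verify(controller, test_distances):
    # Safety property: always brake when an obstacle is closer than 5 m.
    return all(controller(d) == "brake" for d in test_distances if d < 5.0)

weight = 2.5
test_distances = [1.0, 2.0, 3.0, 4.9, 8.0, 20.0]
print(verify(make_controller(weight), test_distances))  # True at deployment

# A later online-learning update nudges the learned weight ...
weight += 2.0
# ... and the earlier verification no longer holds.
print(verify(make_controller(weight), test_distances))  # False after learning
```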
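The second sketch contrasts an opaque ANN-style decision with a transparent algorithmic one. The tiny network below is untrained (its weights are random), but the point stands for trained networks too: inspecting the weights yields numbers, not reasons, whereas the rule-based alternative can state why it decided. Again, every name and number is invented for illustration.

```python
import math
import random

random.seed(0)

# Stand-in for an ANN: 2 inputs -> 3 hidden tanh units -> 1 output.
# The weights are random here, but a trained network's weights are just
# as uninterpretable to a human inspector.
W_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
W_out = [random.uniform(-1, 1) for _ in range(3)]

def ann_decide(speed, distance):
    hidden = [math.tanh(w[0] * speed + w[1] * distance) for w in W_hidden]
    return sum(w * h for w, h in zip(W_out, hidden)) > 0.0

def rule_decide(speed, distance):
    # Transparent algorithmic alternative: the decision logic *is* the code.
    if speed > 30 and distance < 5:
        return True, "speed over 30 with obstacle nearer than 5 m"
    return False, "margins are safe"

print(ann_decide(42.0, 3.0))   # a bare True/False; the "why" is buried ...
print(W_hidden, W_out)         # ... in these numbers, which explain nothing
print(rule_decide(42.0, 3.0))  # (True, 'speed over 30 with obstacle ...')
```
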
Recommendations

13. It is vital that we address public fears around robotics and artificial intelligence through renewed public engagement and consultation.

14. Work is required to identify the kind of governance framework(s) and regulatory bodies needed to support Robotics and Artificial Intelligence in the UK. A group should be set up and charged with this work; perhaps a Royal Commission, as recently suggested by Tom Watson MP.

Saturday, April 09, 2016

Robots should not be gendered

Should robots be gendered? I have serious doubts about the morality of designing and building robots to resemble men or women, boys or girls. Let me explain why.

The first worry I have follows from one of the five principles of robotics, which states: robots should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

To design a gendered robot is a deception. Robots cannot have a gender in any meaningful sense. To impose a gender on a robot, either by designing its outward appearance or by programming gender-stereotypical behaviour, can be for no reason other than deception - to make humans believe that the robot has gender, or gender-specific characteristics.

When we drafted our 4th ethical principle the vulnerable people we had in mind were children, the elderly and the disabled. We were concerned that naive robot users might come to believe that the robot interacting with them (caring for them perhaps) is a real person, and that the care the robot is expressing for them is real. Or that an unscrupulous robot manufacturer might exploit that belief. But when it comes to gender we are all vulnerable. Whether we like it or not, we all react to gender cues. So, whether deliberately designed to do so or not, a gendered robot will trigger reactions that a non-gendered robot will not.

Our 4th principle states that a robot's machine nature should be transparent. But for gendered robots that principle doesn't go far enough. Gender cues are so powerful that even very transparently machine-like robots with a female body shape, for instance, will provoke a gender-cued response.

My second concern follows from an ethical problem that I've written and talked about before: the brain-body mismatch problem. I've argued that we shouldn't be building android robots at all until we can embed an AI into those robots that matches their appearance. Why? Because our reactions to a robot are strongly influenced by its appearance. If it looks human then we, not unreasonably, expect it to behave like a human. But a robot not much smarter than a washing machine cannot behave like a human. Ok, you might ask: if and when we can build robots with human-equivalent intelligence, would I be ok with gendered robots then? Yes, provided they are androgynous.

My third - and perhaps most serious - concern is about sexism. In building gendered robots there is a huge danger of transferring one of the evils of human culture, sexism, into the artificial realm. By gendering, and especially by sexualising, robots we surely objectify them. But how can you objectify an object, you might say? The problem is that a sexualised robot is no longer just an object, because of what it represents. The routine objectification of women (or men) through ubiquitous sexualised robots will surely only deepen the already acute problem of the objectification of real women and girls. (Of course if humanity were to grow up and cure itself of the cancer of sexism, then this concern would disappear.)

What of the far future? Given that gender is a social construct, a society of robots existing alongside humans might invent gender for themselves - perhaps nothing like male and female at all. Now that would be interesting.