From Ethics to Regulation and Governance
1. Public attitudes. It is well understood that there are public fears around robotics and artificial intelligence. Many of these fears are undoubtedly misplaced, fuelled perhaps by press and media hype, but some are grounded in genuine worries over how the technology might impact, for instance, jobs or privacy. The most recent Eurobarometer survey on autonomous systems showed that the proportion of respondents with an overall positive attitude declined from 70% in the 2012 survey to 64% in 2014. Notably, the 2014 survey showed that the more personal experience people have with robots, the more favourably they tend to view them: 82% of respondents with experience of robots hold a positive view, compared with only 60% of those without such experience. Also important is that a significant majority (89%) believe that autonomous systems are a form of technology that requires careful management.
2. Building trust in robotics and artificial intelligence requires a multi-faceted approach. The ethics roadmap here illustrates the key elements that contribute to building public trust. The core idea of the roadmap is that ethics inform standards, which in turn underpin regulation.
3. Ethics are the foundation of trust and underpin good practice. Principles of good practice can be found in Responsible Research and Innovation (RRI). Examples include the 2014 Rome Declaration on RRI, whose six pillars are Engagement, Gender Equality, Education, Ethics, Open Access and Governance, and the EPSRC framework for responsible innovation, which incorporates the AREA (Anticipate, Reflect, Engage and Act) approach.
4. The first European work to articulate ethical considerations for robotics was the EURON Roboethics Roadmap.
5. In 2010 a joint AHRC/EPSRC workshop drafted the Principles of Robotics for designers, builders and users of robots, subsequently published by the EPSRC. The principles are:
- Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
- Humans, not robots, are responsible agents. Robots should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy.
- Robots are products. They should be designed using processes which assure their safety and security.
- Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
- The person with legal responsibility for a robot should be attributed.
7. The IEEE has recently launched a global initiative on Ethical Considerations in the Design of Autonomous Systems, to encompass all intelligent technologies including robotics, AI, computational intelligence and deep learning.
8. Significant recent work towards regulation was undertaken by the EU project RoboLaw. The primary output of that project is a comprehensive report entitled Guidelines on Regulating Robotics. That report reviews both ethical and legal aspects; the legal analysis covers rights, liability and insurance, privacy and legal capacity. The report focuses on driverless cars, surgical robots, robot prostheses and care robots and concludes by stating: “The field of robotics is too broad, and the range of legislative domains affected by robotics too wide, to be able to say that robotics by and large can be accommodated within existing legal frameworks or rather requires a lex robotica. For some types of applications and some regulatory domains, it might be useful to consider creating new, fine-grained rules that are specifically tailored to the robotics at issue, while for other types of robotics, and for many regulatory fields, robotics can likely be regulated well by smart adaptation of existing laws”.
9. In general, technology is trusted if it brings benefits while also being safe, well regulated and, when accidents happen, subject to robust investigation. One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an excellent safety record. The reason commercial aircraft are so safe is not just good design; it is also the tough safety certification processes and, when things do go wrong, the robust processes of air accident investigation. Should driverless cars, for instance, be regulated through a body similar to the Civil Aviation Authority (CAA), with a driverless car equivalent of the Air Accidents Investigation Branch?
10. The primary focus of paragraphs 1 – 9 above is robotics and autonomous systems, rather than software artificial intelligence. This reflects the fact that most work toward ethics and regulation has focussed on robotics. Because robots are physical artefacts (which embody AI), they are undoubtedly more readily defined, and hence regulated, than distributed or cloud-based AIs. This, together with the already pervasive application of AI (in search engines, machine translation systems or intelligent personal assistants, for example), strongly suggests that greater urgency needs to be directed toward considering the societal and ethical impact of AI, including its governance and regulation.
11. AI systems raise serious questions over trust and transparency:
- How can we trust the decisions made by AI systems, and – more generally – how can the public have confidence in the use of AI systems in decision making?
- If an AI system makes a decision that turns out to be disastrously wrong, how do we investigate the logic by which the decision was made?
- Of course much depends on the consequences of those decisions. Consider decisions that have real consequences for human safety or well-being, such as those made by medical diagnosis AIs or driverless car autopilots. Systems that make such decisions are critical systems, and for critical systems two technical problems compound the questions of trust and transparency.
- First is the problem of verification of systems that learn. Current verification approaches typically assume that the system being verified will never change its behaviour, but a system that learns does, by definition, change its behaviour, so any verification is likely to be rendered invalid after the system has learned (a minimal sketch of this problem follows this list).
- Second is the black box problem. Modern AI systems, especially those currently receiving the greatest attention, so-called Deep Learning systems, are based on Artificial Neural Networks (ANNs). A characteristic of ANNs is that, after the network has been trained with data sets (which may be very large, so-called “big data” sets, which itself poses another problem for verification), it is in practice impossible to examine the internal structure of the ANN in order to understand why and how it makes a particular decision. The decision-making process of an ANN is not transparent (the second sketch following this list contrasts opaque and transparent decision making).
- The problem of verification and validation of systems that learn may not be intractable, but it is the subject of current research; see, for example, work on verification and validation of autonomous systems. The black box problem may be intractable for ANNs, but it could be avoided by using algorithmic approaches to AI, i.e. approaches that do not use ANNs.
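To make the first of these problems concrete, here is a minimal Python sketch, purely illustrative: the controller, the safety property and the numbers are all invented for this example. A property that is verified before a learning update no longer holds after it, so the pre-learning verification evidence cannot be relied upon:

```python
# Illustrative sketch only: a toy "verification" of a toy learning controller.
# Every name and number here is invented for the example.

def verify_output_bounds(controller, test_inputs, lo=-1.0, hi=1.0):
    """Toy verification: check the safety property 'output stays within
    [lo, hi]' over a finite set of test inputs."""
    return all(lo <= controller.act(x) <= hi for x in test_inputs)

class LearningController:
    """A trivially simple learner: output = gain * input, where the gain
    is adapted online. Learning changes behaviour by definition."""
    def __init__(self, gain=0.5):
        self.gain = gain

    def act(self, x):
        return self.gain * x

    def learn(self, error):
        # Any call to learn() alters future behaviour, so verification
        # performed before this call no longer applies afterwards.
        self.gain += 0.3 * error

test_inputs = [i / 10 for i in range(-10, 11)]   # inputs spanning [-1, 1]
ctrl = LearningController()

print(verify_output_bounds(ctrl, test_inputs))   # True: property holds before learning
ctrl.learn(error=2.0)                            # learning changes the behaviour...
print(verify_output_bounds(ctrl, test_inputs))   # False: prior verification is now invalid
```

Real verification would of course use formal methods rather than finite testing, but the underlying difficulty is the same: the verified artefact is a moving target.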
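The black box problem can be illustrated in the same toy style (again, nothing here is a real trained network; the weights below are hand-picked for the example). Even with complete access to an ANN's parameters, the numbers offer no human-readable reason for a decision, whereas an explicitly algorithmic decision maker can report exactly which rule fired:

```python
import math

# Opaque decision maker: a tiny neural-network-like function. Its "learned"
# parameters are fully visible, yet they carry no human-readable rationale.
weights = [[0.8, -1.2], [0.5, 2.1]]              # hand-picked, not trained

def opaque_decision(features):
    hidden = [max(0.0, sum(w * f for w, f in zip(row, features)))
              for row in weights]                 # ReLU hidden layer
    score = 1.0 / (1.0 + math.exp(-sum(hidden)))  # sigmoid output
    return score > 0.5                            # why? the weights do not say

# Transparent decision maker: an explicit rule set that can justify itself.
def transparent_decision(features):
    speed, distance = features
    if distance < 5.0:
        return True, "brake: obstacle closer than 5 m"
    if speed > 30.0:
        return True, "brake: speed above the 30 m/s limit"
    return False, "no rule fired"

print(opaque_decision([20.0, 3.0]))       # True, but with no traceable reason
print(transparent_decision([20.0, 3.0]))  # (True, 'brake: obstacle closer than 5 m')
```

This is the avoidance strategy suggested above: where decision traceability matters, algorithmic approaches allow the reasoning behind each decision to be logged and audited.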
Recommendations
13. It is vital that we address public fears around robotics and artificial intelligence, through renewed public engagement and consultation.