Saturday, February 03, 2018

Why ethical robots might not be such a good idea after all

This week my colleague Dieter Vanderelst presented our paper: The Dark Side of Ethical Robots at AIES 2018 in New Orleans.

I blogged about Dieter's very elegant experiment here, but let me summarise. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot's logic it is transformed into a distinctly unethical robot - behaving either competitively or aggressively toward the proxy human.

Here are our paper's key conclusions:

The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviours require the same cognitive machinery with – in our implementation – only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.
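The sign flip described above can be sketched in a few lines (hypothetical Python, not the actual NAO implementation; the function and field names are my own):

```python
# Hypothetical sketch of a consequence engine: each candidate action is
# scored by the predicted desirability of its outcome for the human.

def desirability_for_human(outcome):
    """Toy desirability score: higher is better for the human."""
    return outcome["human_safety"]

def choose_action(candidate_actions, predict_outcome, ethical=True):
    # The only difference between the ethical robot and the aggressive
    # robot is the sign applied to this single value.
    sign = 1 if ethical else -1
    return max(candidate_actions,
               key=lambda a: sign * desirability_for_human(predict_outcome(a)))
```

With ethical=True the robot selects the action predicted to be best for the human; negating that one value makes it select the worst.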

On the face of it, given that we can (at least in principle) build explicitly ethical machines*, it would seem that we have a moral imperative to do so; it would appear to be unethical not to build ethical machines when we have that option. But the findings of our paper call this assumption into serious doubt. Let us examine the risks associated with ethical robots and if, and how, they might be mitigated. There are three.
  1. First there is the risk that an unscrupulous manufacturer might insert some unethical behaviours into their robots in order to exploit naive or vulnerable users for financial gain, or perhaps to gain some market advantage (here the VW diesel emissions scandal of 2015 comes to mind). There are no technical steps that would mitigate this risk, but the reputational damage from being found out is undoubtedly a significant disincentive. Compliance with ethical standards such as BS 8611, the guide to the ethical design and application of robots and robotic systems, or the emerging IEEE P700X ‘human’ standards, would also support manufacturers in the ethical application of ethical robots. 
  2. Perhaps more serious is the risk arising from robots that have user adjustable ethics settings. Here the danger arises from the possibility that either the user or a technical support engineer mistakenly, or deliberately, chooses settings that move the robot’s behaviours outside an ‘ethical envelope’. Much depends of course on how the robot’s ethics are coded, but one can imagine the robot’s ethical rules expressed in a user-accessible format, for example, an XML-like script. No doubt the best way to guard against this risk is for robots to have no user adjustable ethics settings, so that the robot’s ethics are hard-coded and not accessible to either users or support engineers. 
  3. But even hard-coded ethics would not guard against undoubtedly the most serious risk of all, which arises when those ethical rules are vulnerable to malicious hacking. Given that cases of white-hat hacking of cars have already been reported, it's not difficult to envisage a nightmare scenario in which the ethics settings for an entire fleet of driverless cars are hacked, transforming those vehicles into lethal weapons. Of course, driverless cars (or robots in general) without explicit ethics are also vulnerable to hacking, but weaponising such robots is far more challenging for the attacker. Explicitly ethical robots focus the robot’s behaviours on a small number of rules, which makes them, we think, uniquely vulnerable to cyber-attack.
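To make the second risk concrete, a user-accessible ethics script might look something like this (an invented illustration; neither the element names nor the values come from any real robot):

```xml
<!-- Hypothetical ethics settings; every name here is invented -->
<ethics version="1.0" user_adjustable="false">
  <rule id="1" action="warn_human" weight="0.9"/>
  <rule id="2" action="block_hazard" weight="0.8"/>
  <!-- flipping the sign of a single weight would be enough to move the
       robot outside its 'ethical envelope' -->
</ethics>
```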
Taking the most serious of these risks, hacking, we can envisage several technical approaches to mitigating the risk of malicious hacking of a robot’s ethical rules. One would be to place those ethical rules behind strong encryption. Another would require a robot to authenticate its ethical rules by first connecting to a secure server. An authentication failure would disable those ethics, so that the robot defaults to operating without explicit ethical behaviours. Although feasible, these approaches would be unlikely to deter the most determined hackers, especially those who are prepared to resort to stealing encryption or authentication keys.
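The authentication approach could be sketched as follows (an illustrative fragment under my own assumptions, not a proposal from the paper; the rule format and key handling are invented):

```python
import hashlib
import hmac

def verify_rules(rules_bytes, signature, key):
    """Return True only if the ethical rules carry a valid HMAC signature."""
    expected = hmac.new(key, rules_bytes, hashlib.sha256).hexdigest()
    # compare_digest resists timing attacks on the comparison itself
    return hmac.compare_digest(expected, signature)

def load_ethics(rules_bytes, signature, key):
    if verify_rules(rules_bytes, signature, key):
        return rules_bytes.decode()  # load the authenticated rules
    return None  # authentication failed: default to no explicit ethics
```

As the text notes, this only raises the bar: an attacker who steals the signing key defeats the scheme entirely.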

It is very clear that guaranteeing the security of ethical robots is beyond the scope of engineering and will need regulatory and legislative efforts. Considering the ethical, legal and societal implications of robots, it becomes obvious that robots themselves are not where responsibility lies. Robots are simply smart machines of various kinds and the responsibility to ensure they behave well must always lie with human beings. In other words, we require ethical governance, and this is equally true for robots with or without explicit ethical behaviours.

Two years ago I thought the benefits of ethical robots outweighed the risks. Now I'm not so sure. I now believe that - even with strong ethical governance - the risks that a robot’s ethics might be compromised by unscrupulous actors are so great as to raise very serious doubts over the wisdom of embedding ethical decision making in real-world safety critical robots, such as driverless cars. Ethical robots might not be such a good idea after all.

*As a footnote let me explain what I mean by explicitly ethical robots: these are robots that select behaviours on the basis of ethical rules - in a sense they can be said to reason about ethics (in our case by evaluating the ethical consequences of several possible actions). Here I'm using the terminology of James Moor, who proposed four kinds of ethical agents, as I explain here. Moor shows in his classification that all robots (and AIs) are ethical agents in the sense that they can all have an ethical impact.

Thus, even though we're calling into question the wisdom of explicitly ethical robots, that doesn't change the fact that we absolutely must design all robots to minimise the likelihood of ethical harms, in other words we should be designing implicitly ethical robots within Moor's schema.

Here is the full reference to our paper:

Vanderelst D and Winfield AFT (2018), The Dark Side of Ethical Robots, AAAI/ACM Conf. on AI Ethics and Society (AIES 2018), New Orleans.

Related blog posts:
The Dark side of Ethical Robots
Could we make a moral machine?
How ethical is your ethical robot?
Towards ethical robots: an update
Towards an Ethical Robot

Thursday, February 01, 2018

Ethical Governance: what is it and who's doing it?

These days I often find myself talking about ethical governance. Not just talking about it, but advocating it: for instance, in written evidence to the 2016 parliamentary select committee on robots and AI I made the link between ethical governance and trust. I believe that without transparent ethical governance robotics and AI will not win public trust, and without trust we will not see the societal benefits of robots and AI that we all hope for.

But what exactly is ethical governance and who is doing it, and perhaps more importantly, who in robotics and AI is doing it well?

In a draft paper on the subject I define ethical governance as
a set of processes, procedures, cultures and values designed to ensure the highest standards of behaviour. Ethical governance thus goes beyond simply good (i.e. effective) governance, in that it inculcates ethical behaviours. Normative ethical governance is seen as an important pillar of responsible research and innovation (RRI), which “entails an approach, rather than a mechanism, so it seeks to deal with ethical issues as or before they arise in a principled manner rather than waiting until a problem surfaces and dealing with it in an ad hoc way [1]” 
The link I make here between ethical governance and responsible research and innovation is, I think, really important. Ethical governance is a key part of RRI. They are not the same thing, but it would be hard to imagine good ethical governance without RRI, and vice versa.

So what would I expect of companies or organisations who claim to be ethical? As a starting point for discussion here are five things that ethical companies should do:
  • Have an ethical code of conduct, so that everyone in the company understands what is expected of them. This should sit alongside a mechanism which allows employees to raise ethical concerns, if necessary in confidence, without fear of displeasing a manager.
  • Provide ethics training for everyone, without exception. Ethics, like quality, is not something you can do as an add-on; simply appointing an ethics manager, while not a bad idea, is not enough. Ethical governance needs to become part of a company's culture and DNA, not just in product development but in management, finance, HR and marketing too.
  • Undertake ethical risk assessments of all new products, and act upon the findings of those assessments. A toolkit, or method, for ethical risk assessment of robots and robotic systems exists in British Standard BS 8611, which - alongside much else - sets out 20 ethical risks and hazards together with recommendations on how to mitigate these and verify that they have been addressed.
  • Be transparent about your ethical governance. Of course your robots and AIs must be transparent too, but here I mean transparency of process, not product. It's not enough to claim to be ethical, you need to show how you are ethical. That means publishing your ethical code of conduct, membership of your ethics board if you have one (and its terms of reference), and ideally case studies showing how you have conducted ethical risk assessments.
  • Really value ethical governance. Even if you have the four processes above in place, you also need to be sincere about ethical governance; it must be one of your core values, not just a smokescreen for what you really value, like maximising shareholder returns.
My final point about really valuing ethical governance is of course hard to evidence. But, like trust, confidence in a company's claim to be ethical has to be earned and - as we've seen - can easily be damaged.

This brings me to my second question: who is doing ethical governance? And are there any examples of best practice? A week or so ago I asked Twitter this question. I've had quite a few nominations but haven't yet looked into them all. When I have, I will complete this blog post.

[1] Rainey, S., and Goujon, P. (2011). Toward a Normative Ethical Governance of Technology. Contextual Pragmatism and Ethical Governance. In René von Schomberg (ed.) Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields, Report of the European Commission-DG Research and Innovation.

Saturday, December 23, 2017

A Round Up of Robotics and AI ethics: part 1 Principles

This blogpost is a round up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I've missed please let me know.

Asimov's three laws of Robotics (1950)
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 
I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent principles have been drafted as a direct response. The three laws first appeared in Asimov's short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.

Murphy and Woods' three laws of Responsible Robotics (2009)
  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 
  2. A robot must respond to humans as appropriate for their roles. 
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. 
These were proposed in Robin Murphy and David Woods' paper Beyond Asimov: The Three Laws of Responsible Robotics [2].

EPSRC Principles of Robotics (2010)
  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 
  3. Robots are products. They should be designed using processes which assure their safety and security. 
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 
  5. The person with legal responsibility for a robot should be attributed. 
These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.

Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)

I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
An account of the development of the Asilomar principles can be found here.

The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. 
See the ACM announcement of these principles here. The principles form part of the ACM's updated code of ethics.

Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
  1. Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. 
  2. Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
  3. Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
  4. Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. 
  5. Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. 
  6. Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society. 
  7. Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. 
  8. Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI.
  9. Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.
An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).

Draft principles of The Future Society's Science, Law and Society Initiative (Oct 2017)
  1. AI should advance the well-being of humanity, its societies, and its natural environment. 
  2. AI should be transparent
  3. Manufacturers and operators of AI should be accountable
  4. AI’s effectiveness should be measurable in the real-world applications for which it is intended. 
  5. Operators of AI systems should have appropriate competencies
  6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.
This article by Nicolas Economou explains the 6 principles with a full commentary on each one.

MontrĂ©al Declaration for Responsible AI draft principles (Nov 2017)
  1. Well-being The development of AI should ultimately promote the well-being of all sentient creatures.
  2. Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
  3. Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental / physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
  4. Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
  5. Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation.
  6. Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate.
  7. Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.
The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).

IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)
  1. How can we ensure that A/IS do not infringe human rights?
  2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
  3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
  4. How can we ensure that A/IS are transparent?
  5. How can we extend the benefits and minimize the risks of AI/AS technology being misused?
These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.

A short article co-authored with IEEE general principles co-chair Mark Halverson Why Principles Matter explains the link between principles and standards, together with further commentary and references.

UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
  1. Demand That AI Systems Are Transparent
  2. Equip AI Systems With an “Ethical Black Box”
  3. Make AI Serve People and Planet 
  4. Adopt a Human-In-Command Approach
  5. Ensure a Genderless, Unbiased AI
  6. Share the Benefits of AI Systems
  7. Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights
  8. Establish Global Governance Mechanisms
  9. Ban the Attribution of Responsibility to Robots
  10. Ban AI Arms Race
Drafted by UNI Global Union's Future World of Work these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.

[1] Asimov, Isaac (1950): Runaround,  in I, Robot, (The Isaac Asimov Collection ed.) Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent systems. 24 (4): 14–20.
[3] Margaret Boden et al (2017): Principles of robotics: regulating robots in the real world. Connection Science. 29 (2): 124-129.
[4] Tony Prescott and Michael Szollosy (eds.) (2017): Ethical Principles of Robotics, Connection Science. 29 (2) and 29 (3).

Wednesday, October 11, 2017

Some reflections on ERL Emergency 2017

It has now been a couple of weeks since the ERL Emergency robotics competition in Piombino, Italy, so I've had time to wind down and reflect a little on the event. The competition and associated events were a great success. A total of 16 teams from 9 countries were organised into 8 multi-domain (air, land and sea) groups for the competition - see the ERL Emergency programme here for details.

From a technical point of view what interested me the most was to see how those teams improved their performances since euRathlon 2015. Of course a precise comparison is not possible, for several reasons: first, not all teams participated in both the 2015 and 2017 competitions - and of those that did, both personnel and robots had been refreshed - and second, since this is an outdoor competition, conditions (weather, wind and especially sea) were inevitably different.

However, the fact that euRathlon 2015 and ERL Emergency 2017 were held at the same location, with updated but broadly similar competition scenarios means that general scenario (task) level comparisons are possible. In fact we also carried forward some of the functional benchmarks from the 2015 competition, which will allow detailed analysis across both competitions (but not in this blog post).

Instead I will here give a few general (and rather subjective) comments comparing 2015 and 2017 performance.

Communications continued to be a problem for teams, with all but one team in 2017 (as in 2015) choosing to communicate with their land robots via WiFi. Now any communications engineer (as I used to be) will tell you that WiFi is a hopelessly bad choice for outdoor communications. The image above shows the waypoint positions for ground robots from the start position W1 to W6 in front of the building. The control tent was located near W1 close to the trees above the beach - about 112m from the front of the building - and with no line of sight to W6 because of the uneven terrain. And to make matters worse the land robots needed to enter the building and locate the machine room - about 10m from the entrance and again with no line of sight. Despite the obvious drawbacks of WiFi, those teams that did use it came up with workarounds, including using robots as mobile repeaters and ingenious systems in which robots deployed a succession of fixed repeaters. There was clearly progress in communications from 2015 to 2017 because, in 2015 - when it became clear that no team could communicate successfully with the machine room (room #3) - we relocated the machine room from the rear of the building to the front (room #1), whereas in 2017 no such relocation was necessary; those teams that reached the machine room at the rear of the building were able to communicate with their robots. See the floorplan here.

Human-robot interfaces were critical to success. In 2015 we saw some interfaces that made it extremely difficult for teams to remotely tele-operate their robots, with operators struggling with postage-stamp sized windows showing the live video feed from the robot's cameras (especially difficult because bright sunshine most days meant that light levels were very high inside the control tent). In 2017 we saw not only much improved HRIs but integration between autonomous and tele-operated functions so that, for instance, operators were able to drag and drop the next waypoint, monitor the robot's autonomous progress to that waypoint, then - when at the waypoint - make use of smart machine vision to identify objects of potential interest (OPIs).

Effective human-human communication was also a critical success factor, underlining the fact that ERL Emergency tests not just robots but human robot teams or - to be more accurate - human-human-robot-robot teams. Given that typically a team's aerial, underwater and land robot operators were in separate control tents, establishing exactly how and when these operators would communicate with each other was very important. In this regard we (the judges) didn't mind how intra-team communication was organised - teams could use WiFi, mobile phones, or even a runner. In 2015 the weaker teams had clearly not thought about this at all, and suffered as a result. Again in 2017 we saw a big improvement, with very effective intra-team communication in the most successful teams.

The full results listings are shown here for euRathlon 2015, and here for ERL Emergency 2017.

Here are a few images from the 2017 competition:

Tuesday, August 15, 2017

The case for an Ethical Black Box

Last month we presented our paper The Case for an Ethical Black Box at Towards Autonomous Robotic Systems (TAROS 2017), University of Surrey. The paper makes a very simple proposition: all robots should be fitted, as standard, with the equivalent of an aircraft Flight Data Recorder. We argue that without such a device - which we call an ethical black box - it will be impossible to properly investigate robot accidents. Ian Sample covered our paper in the Guardian here.

Here is the paper abstract:
This paper proposes that robots and autonomous systems should be equipped with the equivalent of a Flight Data Recorder to continuously record sensor and relevant internal status data. We call this an ethical black box. We argue that an ethical black box will be critical to the process of discovering why and how a robot caused an accident, and thus an essential part of establishing accountability and responsibility. We also argue that without the transparency afforded by an ethical black box, robots and autonomous systems are unlikely to win public trust.
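A minimal sketch of such a recorder (illustrative Python under my own assumptions; the record fields are not specified in the paper):

```python
from collections import deque
from time import time

class EthicalBlackBox:
    """Fixed-capacity recorder: keeps the most recent records, as a
    flight data recorder does, overwriting the oldest when full."""

    def __init__(self, capacity=10000):
        self._log = deque(maxlen=capacity)

    def record(self, sensors, internal_state, timestamp=None):
        self._log.append({
            "t": timestamp if timestamp is not None else time(),
            "sensors": sensors,
            "state": internal_state,
        })

    def dump(self):
        """Return retained records, oldest first, for accident investigation."""
        return list(self._log)
```

In practice such a device would also need tamper-evident storage, which a simple in-memory buffer does not provide.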
And here are the presentation slides from TAROS:

The full paper can be downloaded from here. Comments and feedback welcome.

The full paper reference:

Winfield A.F.T., Jirotka M. (2017) The Case for an Ethical Black Box. In: Gao Y., Fallah S., Jin Y., Lekakou C. (eds) Towards Autonomous Robotic Systems. TAROS 2017. Lecture Notes in Computer Science, vol 10454. Springer, Cham.

Related blog posts:
The infrastructure of life 2 - Transparency

Friday, July 14, 2017

Three stories about Robot Stories

Here are the slides I gave yesterday morning as a member of the panel Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology, at the Centre for the Future of Intelligence 2017 conference. I presented three short stories about robot stories.

Slide 2:
The FP7 TRUCE Project invited a number of scientists - mostly within the field of Artificial Life - to suggest ideas for short stories. Those ideas were then sent to a panel of writers, each of whom chose one. I submitted an idea called The feeling of what it is like to be a robot and was delighted when Lucy Caldwell contacted me. Following a visit to the lab Lucy drafted a beautiful story called The Familiar which - following some iteration - appeared in the collected volume Beta Life.

Slide 3:
More recently the EU Human Brain Project Foresight Lab brought three Sci Fi writers - Allen Ashley, Jule Owen and Stephen Oram - to visit the lab. Inspired by what they saw they then wrote three wonderful short stories, which were read at the 2016 Bristol Literature Festival. The readings were followed by a panel discussion which included myself and BRL colleagues Antonia Tzemanaki and Marta Palau Franco. The three stories are published in the volume Versions of the Future. Stephen Oram went on to publish a collection called Eating Robots.

Slide 4:
My first two stories were about people telling stories about robots. Now I turn to the possibility of robots themselves telling stories. Some years ago I speculated on the idea of robots telling each other stories (directly inspired by a conversation with Richard Gregory). That idea has now turned into a current project, with the aim of building an embodied computational model of storytelling. For a full description see this paper, currently in press.

Wednesday, June 21, 2017

CogX: Emerging ethical principles, toolkits and standards for AI

Here are the slides I presented at the CogX session on Precision Ethics this afternoon. My intention with these slides was to give a 10 minute helicopter overview of emerging ethical principles, toolkits and ethical standards for AI, including Responsible Research and Innovation.

A commentary will follow in a few days.

Wednesday, March 08, 2017

Does AI pose a threat to society?

Last week I had the pleasure of debating the question "does AI pose a threat to society?" with friends and colleagues Christian List, Maja Pantic and Samantha Payne. The event was organised by the British Academy and brilliantly chaired by the Royal Society's director of science policy Claire Craig.

Here is my opening statement:

One Friday afternoon in 2009 I was called by a science journalist at, I recall, the Sunday Times. He asked me if I knew that there was to be a meeting of the AAAI to discuss robot ethics. I said no, I didn't know of the meeting. He then asked "are you surprised they are meeting to discuss robot ethics?" and my answer was no. We talked some more and agreed it was actually a rather dull story: a case of scientists behaving responsibly. I really didn't expect the story to appear but checked the Sunday paper anyway, and there in the science section was the headline Scientists fear revolt of killer robots. (I then spent the next couple of days on the radio explaining that no, scientists do not fear a revolt of killer robots.)

So, fears of future super intelligence - robots taking over the world - are greatly exaggerated: the threat of an out-of-control super intelligence is a fantasy - interesting for a pub conversation perhaps. It's true we should be careful and innovate responsibly, but that's equally true for any new area of science and technology. The benefits of robotics and AI are so significant, the potential so great, that we should be optimistic rather than fearful. Of course robots and intelligent systems must be engineered to very high standards of safety for exactly the same reasons that we need our washing machines, cars and airplanes to be safe. If robots are not safe people will not trust them. To reach its full potential, what robotics and AI need is a dose of good old fashioned (and rather dull) safety engineering.

In 2011 I was invited to join a British Standards Institute working group on robot ethics, which drafted a new standard BS 8611 Guide to the ethical design of robots and robotic systems, published in April 2016. I believe this to be the world’s first standard on ethical robots.

Also in 2016 the very well regarded IEEE Standards Association - the same organization that gave us WiFi - launched a Global initiative on Ethical Considerations in AI and Autonomous Systems. The purpose of this Initiative is to ensure every technologist is educated and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems; in a nutshell, to ensure ethics are baked in. In December we published Ethically Aligned Design: A Vision for Prioritizing Human Well Being with AI and Autonomous Systems. Within that initiative I'm also leading a new standard on transparency in autonomous systems, based on the simple principle that it should always be possible to find out why an AI or robot made a particular decision.
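To make that transparency principle concrete, here is a minimal illustrative sketch (not taken from the standard itself; all names are my own invention) of the kind of decision logging that would let us ask, after the fact, why a robot chose a particular action:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any


@dataclass
class DecisionRecord:
    """One logged decision: what the robot sensed, what it chose, and why."""
    timestamp: str
    inputs: dict
    decision: Any
    rationale: str


class DecisionLog:
    """Append-only record of decisions so that 'why did the robot do X?'
    can be answered after the fact."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, inputs: dict, decision: Any, rationale: str) -> None:
        # Store the decision together with the inputs and the stated reason.
        self._records.append(DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            inputs=inputs,
            decision=decision,
            rationale=rationale,
        ))

    def explain_last(self) -> str:
        # Reconstruct a human-readable explanation of the most recent decision.
        r = self._records[-1]
        return (f"At {r.timestamp} chose {r.decision!r} "
                f"because {r.rationale} (inputs: {r.inputs})")


# Example: a trivial obstacle-avoidance decision
log = DecisionLog()
log.record(
    inputs={"obstacle_distance_m": 0.4},
    decision="stop",
    rationale="obstacle closer than 0.5 m safety threshold",
)
print(log.explain_last())
```

The point of the sketch is only that explainability is an engineering choice: if every decision is recorded alongside its inputs and rationale, an accident investigator (or a user) can replay the reasoning, much as a flight data recorder supports air accident investigation.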

We need to agree ethical principles, because they are needed to underpin standards - ways of assessing and mitigating the ethical risks of robotics and AI. But standards need teeth, and in turn underpin regulation. Why do we need regulation? Think of passenger airplanes; the reason we trust them is that the industry is highly regulated, with an amazing safety record and robust, transparent processes of air accident investigation when things do go wrong. Take one example of a robot that we read a lot about in the news - the driverless car. I think there's a strong case for a driverless car equivalent of the CAA, with a driverless car accident investigation branch. Without this it's hard to see how driverless car technology will win public trust.

Does AI pose a threat to society? No. But we do need to worry about the down-to-earth questions of present day rather unintelligent AIs; the ones that are deciding our loan applications, piloting our driverless cars or controlling our central heating. Are those AIs respecting our rights, freedoms and privacy? Are they safe? When AIs make bad decisions, can we find out why? And I worry too about the wider societal and economic impacts of AI. I worry about jobs of course, but actually I think there is a bigger question: how can we ensure that the wealth created by robotics and AI is shared by all in society?

Thank you.

This image was used to advertise the BA's series of events on the theme Robotics, AI and Society. The reason I reproduce it here is that one of the many interesting questions to the panel was about the way that AI tends to be visualised in the media. This kind of human face coalescing (or perhaps emerging) from the atomic parts of the AI seems to have become a trope for AI. Is it a helpful visualisation of the human face of AI, or does it mislead by giving the impression that AI has human characteristics?

Wednesday, February 15, 2017

Thoughts on the EU's draft report on robotics

A few weeks ago I was asked to write a short op-ed on the European Parliament Law Committee's recommendations on civil law rules for robotics.

In the end the piece didn't get published, so I am posting it here.

It is a great shame that most reports of the European Parliament’s Committee for Legal Affairs’ vote last week on its Draft Report on Civil Law Rules on Robotics headlined on ‘personhood’ for robots, because the report has much else to commend it. Most important among its several recommendations is a proposed code of ethical conduct for roboticists, which explicitly asks designers to research and innovate responsibly. Some may wonder why such an invitation even needs to be made but, given that engineering and computer science education rarely includes classes on ethics (it should), it is really important that robotics engineers reflect on their ethical responsibilities to society – especially given how disruptive robot technologies are. This is not new – great frameworks for responsible research and innovation already exist. One such is the 2014 Rome Declaration on RRI, and in 2015 the Foundation for Responsible Robotics was launched.

Within the report’s draft Code of Conduct is a call for robotics funding proposals to include a risk assessment. This too is a very good idea and guidance already exists in British Standard BS 8611, published in April 2016. BS 8611 sets out a comprehensive set of ethical risks and offers guidance on how to mitigate them. It is very good also to see that the Code stresses that humans, not robots, are the responsible agents; this is something we regarded as fundamental when we drafted the Principles of Robotics in 2010.

For me transparency (or the lack of it) is an increasing worry in both robots and AI systems. Labour’s industry spokesperson Chi Onwurah is right to say, “Algorithms are part of our world, so they are subject to regulation, but because they are not transparent, it’s difficult to regulate them effectively” (and don’t forget that it is algorithms that make intelligent robots intelligent). So it is very good to see the draft Code call for robotics engineers to “guarantee transparency … and right of access to information by all stakeholders”, and then in the draft ‘Licence for Designers’: you should ensure “maximal transparency” and even more welcome “you should develop tracing tools that … facilitate accounting and explanation of robotic behaviour… for experts, operators and users”.  Within the IEEE Standards Association Global Initiative on Ethics in AI and Autonomous Systems, launched in 2016, we are working on a new standard on Transparency in Autonomous Systems.

This brings me to standards and regulation. I am absolutely convinced that regulation, together with transparency and public engagement, builds public trust. Why is it that we trust our tech? Not just because it's cool and convenient, but also because it's safe (and we assume that the disgracefully maligned experts will take care of assuring that safety). One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. The reason commercial aircraft are so safe is not just good design, it is also the tough safety certification processes and, when things do go wrong, robust processes of air accident investigation. So the Report's call for a European Agency for Robotics and AI, to recommend a standards and regulatory framework, is, as far as I'm concerned, not a moment too soon. We urgently need standards for safety certification of a wide range of robots, from drones and driverless cars to robots for care and assisted living.

Like many of my robotics colleagues I am deeply worried by the potential for robotics and AI to increase levels of economic inequality in the world. Winnie Byanyima, executive director of Oxfam, writes for the WEF, "We need fundamental change to our economic model. Governments must stop hiding behind ideas of market forces and technological change. They … need to steer the direction of technological development". I think she is right - we need a serious public conversation about technological unemployment and how we ensure that the wealth created by AI and Autonomous Systems is shared by all. A Universal Basic Income may or may not be the best way to do this - but it is very encouraging to see this question raised in the draft Report.

I cannot close the piece without at least mentioning artificial personhood. My own view is that personhood is a solution to a problem that doesn't exist. I can understand why, in the context of liability, the Report raises this question for discussion, but - as the report itself later asserts in the Code of Conduct - humans, not robots, are the responsible agents. Robots are, and should remain, artefacts.