Saturday, February 03, 2018

Why ethical robots might not be such a good idea after all

This week my colleague Dieter Vanderelst presented our paper: The Dark Side of Ethical Robots at AIES 2018 in New Orleans.

I blogged about Dieter's very elegant experiment here, but let me summarise. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that, with a very simple alteration of the ethical robot's logic, it is transformed into a distinctly unethical robot - behaving either competitively or aggressively toward the proxy human.

Here are our paper's key conclusions:

The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviours require the same cognitive machinery with – in our implementation – only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.
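
To make that concrete, here is a minimal sketch, in the spirit of (but not copied from) our implementation; the names candidate_actions, predict_outcome and human_desirability are hypothetical stand-ins for the consequence-engine machinery described in the paper. It shows how a single sign flip turns ethical action selection into aggressive action selection.

# Minimal sketch only: the helper functions passed in are hypothetical.
def select_action(candidate_actions, predict_outcome, human_desirability, ethical=True):
    sign = 1 if ethical else -1  # the single negation discussed above
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        outcome = predict_outcome(action)            # simulate the action's consequences
        score = sign * human_desirability(outcome)   # evaluate that outcome for the human
        if score > best_score:
            best_action, best_score = action, score
    return best_action

Calling such a routine with ethical=False is, in effect, the whole of the transformation from ethical to aggressive behaviour.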

On the face of it, given that we can (at least in principle) build explicitly ethical machines*, it would seem that we have a moral imperative to do so; it would appear to be unethical not to build ethical machines when we have that option. But the findings of our paper call this assumption into serious doubt. Let us examine the risks associated with ethical robots and whether, and how, they might be mitigated. There are three.
  1. First there is the risk that an unscrupulous manufacturer might insert some unethical behaviours into their robots in order to exploit naive or vulnerable users for financial gain, or perhaps to gain some market advantage (here the VW diesel emissions scandal of 2015 comes to mind). There are no technical steps that would mitigate this risk, but the reputational damage from being found out is undoubtedly a significant disincentive. Compliance with ethical standards such as BS 8611, Guide to the ethical design and application of robots and robotic systems, or the emerging IEEE P700X ‘human’ standards, would also support manufacturers in the ethical application of ethical robots. 
  2. Perhaps more serious is the risk arising from robots that have user adjustable ethics settings. Here the danger arises from the possibility that either the user or a technical support engineer mistakenly, or deliberately, chooses settings that move the robot’s behaviours outside an ‘ethical envelope’. Much depends of course on how the robot’s ethics are coded, but one can imagine the robot’s ethical rules expressed in a user-accessible format, for example an XML-like script (see the sketch after this list). No doubt the best way to guard against this risk is for robots to have no user adjustable ethics settings, so that the robot’s ethics are hard-coded and not accessible to either users or support engineers. 
  3. But even hard-coded ethics would not guard against undoubtedly the most serious risk of all, which arises when those ethical rules are vulnerable to malicious hacking. Given that cases of white-hat hacking of cars have already been reported, it's not difficult to envisage a nightmare scenario in which the ethics settings for an entire fleet of driverless cars are hacked, transforming those vehicles into lethal weapons. Of course, driverless cars (or robots in general) without explicit ethics are also vulnerable to hacking, but weaponising such robots is far more challenging for the attacker. Explicitly ethical robots concentrate the robot’s behaviours into a small number of rules, which makes them, we think, uniquely vulnerable to cyber-attack.
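To illustrate the second risk, here is a hypothetical sketch of user-adjustable ethics settings read from an XML-like script. The file format, setting names and ‘envelope’ bounds below are invented purely for illustration; the point is that everything rests on the clamping check, which a careless engineer could omit or a support tool could bypass.

import xml.etree.ElementTree as ET

# Invented hard bounds defining the 'ethical envelope' for each setting.
ENVELOPE = {
    "human_safety_weight": (0.8, 1.0),
    "self_interest_weight": (0.0, 0.2),
}

def load_ethics_settings(path):
    settings = {}
    for rule in ET.parse(path).getroot().findall("rule"):
        name = rule.get("name")
        value = float(rule.get("value"))
        lo, hi = ENVELOPE[name]
        # Clamp to the envelope; without this line, risk 2 is realised.
        settings[name] = min(max(value, lo), hi)
    return settings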
Taking the most serious of these risks, hacking, we can envisage several technical approaches to mitigating malicious attacks on a robot’s ethical rules. One would be to place those ethical rules behind strong encryption. Another would require a robot to authenticate its ethical rules by first connecting to a secure server; an authentication failure would disable those ethics, so that the robot defaults to operating without explicit ethical behaviours (sketched below). Although feasible, these approaches would be unlikely to deter the most determined hackers, especially those who are prepared to resort to stealing encryption or authentication keys.
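
As a sketch of the authentication idea (an assumed design, not a recommendation), the robot might verify a keyed hash of its ethical-rules file against a key fetched from the secure server, and disable explicit ethics if verification fails:

import hmac, hashlib

def ethics_authenticated(rules_bytes, expected_mac_hex, server_key):
    # Verify an HMAC-SHA256 over the rules file; compare_digest gives a
    # constant-time comparison so timing leaks nothing useful.
    mac = hmac.new(server_key, rules_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected_mac_hex)

If ethics_authenticated returns False the robot would fall back to its default, implicitly ethical behaviour with explicit rules disabled. As noted above, an attacker who steals the server key, or who simply forces that fallback by blocking the connection, defeats this kind of protection.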

It is very clear that guaranteeing the security of ethical robots is beyond the scope of engineering and will need regulatory and legislative efforts. Considering the ethical, legal and societal implications of robots, it becomes obvious that robots themselves are not where responsibility lies. Robots are simply smart machines of various kinds and the responsibility to ensure they behave well must always lie with human beings. In other words, we require ethical governance, and this is equally true for robots with or without explicit ethical behaviours.

Two years ago I thought the benefits of ethical robots outweighed the risks. Now I'm not so sure. I now believe that - even with strong ethical governance - the risks that a robot’s ethics might be compromised by unscrupulous actors are so great as to raise very serious doubts over the wisdom of embedding ethical decision making in real-world safety critical robots, such as driverless cars. Ethical robots might not be such a good idea after all.

*As a footnote let me explain what I mean by explicitly ethical robots: these are robots that select behaviours on the basis of ethical rules - in a sense they can be said to reason about ethics (in our case by evaluating the ethical consequences of several possible actions). Here I'm using the terminology of James Moor, who proposed four kinds of ethical agents, as I explain here. Moor shows in his classification that all robots (and AIs) are ethical agents in the sense that they can all have an ethical impact.

Thus, even though we're calling into question the wisdom of explicitly ethical robots, that doesn't change the fact that we absolutely must design all robots to minimise the likelihood of ethical harms; in other words, we should be designing implicitly ethical robots, within Moor's schema.

Here is the full reference to our paper:

Vanderelst D and Winfield AFT (2018), The Dark Side of Ethical Robots, AAAI/ACM Conf. on AI Ethics and Society (AIES 2018), New Orleans.

Related blog posts:
The Dark side of Ethical Robots
Could we make a moral machine?
How ethical is your ethical robot?
Towards ethical robots: an update
Towards an Ethical Robot

Thursday, February 01, 2018

Ethical Governance: what is it and who's doing it?

These days I often find myself talking about ethical governance. Not just talking about it, but advocating for it: for instance, in written evidence to the 2016 parliamentary select committee on robots and AI, I made the link between ethical governance and trust. I believe that without transparent ethical governance robotics and AI will not win public trust, and without trust we will not see the societal benefits of robots and AI that we all hope for.

But what exactly is ethical governance and who is doing it, and perhaps more importantly, who in robotics and AI is doing it well?

In a draft paper on the subject I define ethical governance as
a set of processes, procedures, cultures and values designed to ensure the highest standards of behaviour. Ethical governance thus goes beyond simply good (i.e. effective) governance, in that it inculcates ethical behaviours. Normative ethical governance is seen as an important pillar of responsible research and innovation (RRI), which “entails an approach, rather than a mechanism, so it seeks to deal with ethical issues as or before they arise in a principled manner rather than waiting until a problem surfaces and dealing with it in an ad hoc way [1]” 
The link I make here between ethical governance and responsible research and innovation is, I think, really important. Ethical governance is a key part of RRI. They are not the same thing, but it would be hard to imagine good ethical governance without RRI, and vice versa.

So what would I expect of companies or organisations who claim to be ethical? As a starting point for discussion here are five things that ethical companies should do:
  • Have an ethical code of conduct, so that everyone in the company understands what is expected of them. This should sit alongside a mechanism which allows employees to raise ethical concerns, if necessary in confidence, without fear of displeasing a manager.
  • Provide ethics training for everyone, without exception. Ethics, like quality, is not something you can do as an add-on; simply appointing an ethics manager, while not a bad idea, is not enough. Ethical governance needs to become part of a company's culture and DNA, not just in product development but in management, finance, HR and marketing too.
  • Undertake ethical risk assessments of all new products, and act upon the findings of those assessments. A toolkit, or method, for ethical risk assessment of robots and robotic systems exists in British Standard BS 8611, which - alongside much else - sets out 20 ethical risks and hazards together with recommendations on how to mitigate these and verify that they have been addressed.
  • Be transparent about your ethical governance. Of course your robots and AIs must be transparent too, but here I mean transparency of process, not product. It's not enough to claim to be ethical, you need to show how you are ethical. That means publishing your ethical code of conduct, membership of your ethics board if you have one (and its terms of reference), and ideally case studies showing how you have conducted ethical risk assessments.
  • Really value ethical governance. Even if you have the four processes above in place, you also need to be sincere about ethical governance: it must be one of your core values, and not just a smokescreen for what you really value, like maximising shareholder returns.
My final point about really valuing ethical governance is of course hard to evidence. But, like trust, confidence in a company's claim to be ethical has to be earned and - as we've seen - can easily be damaged.

This brings me to my second question: who is doing ethical governance? And are there any examples of best practice? A week or so ago I asked Twitter this question. I've had quite a few nominations but haven't yet looked into them all. When I have, I will complete this blog post.


[1] Rainey, S. and Goujon, P. (2011). Toward a Normative Ethical Governance of Technology: Contextual Pragmatism and Ethical Governance. In René von Schomberg (ed.) Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields, Report of the European Commission, DG Research and Innovation.