I strongly believe that researchers in intelligent robotics, autonomous systems and AI can no longer undertake their research in a moral vacuum, regard their work as somehow ethically neutral, or treat it as someone else's ethical problem.
We researchers need to be much more concerned both about how our work affects society and about how interactions with this technology affect individuals.
Right now researchers in intelligent robotics or AI do not need to seek ethical approval for their projects (unless, of course, they involve clinical or human-subject trials), so most robotics/AI projects in engineering and computer science fall outside any kind of ethical scrutiny. While I'm not advocating that this should change now, I do believe – especially if some of the more adventurous current projects come anywhere close to achieving their goals – that ethical approval for intelligent robotics/AI research might be a wise course of action within five years.
Let me now try to explain why, by defining four ethical problems.
1. The ethical problem of artificial emotions, or robots that are designed to elicit an emotional response from humans
Right now, in our lab in Bristol, is a robot that can look you in the eye and, when you smile, the robot smiles back. Of course there's nothing 'behind' this smile; it's just a set of motors pulling and pushing the artificial skin of the robot's face. But does the inauthenticity of the robot's artificial emotions absolve the designer of any responsibility for a human's response to that robot? I believe it does not, especially if those humans are children or unsophisticated users.
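To make concrete just how little lies 'behind' such a smile, here is a minimal illustrative sketch of a perceive-and-mimic loop. The sensing and motor functions are hypothetical stubs for the purpose of illustration, not the actual software of our Bristol robot.

```python
import random
import time

def smile_detected() -> bool:
    """Hypothetical stand-in for a vision system that classifies
    the human's facial expression (e.g. a trained smile detector)."""
    return random.random() > 0.5

def set_face_motors(expression: str) -> None:
    """Hypothetical stand-in for the motor commands that pull and
    push the robot's artificial skin into a given expression."""
    print(f"face motors -> {expression}")

# The entire 'emotional' behaviour is just this loop: detect, then mimic.
if __name__ == "__main__":
    for _ in range(5):
        set_face_motors("smile" if smile_detected() else "neutral")
        time.sleep(1.0)
```

There is nothing in a loop like this that feels anything, yet the human on the receiving end may well respond as if there were.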
Young people at a recent Robotic Visions conference concluded that “robots shouldn't have emotions but they should recognise them”.
A question I'm frequently asked when giving talks is “could robots have feelings?”. My answer is “no, but we can make robots that behave as if they have feelings”. I'm now increasingly of the view that it won't matter whether a future robot really has feelings or not.
On the horizon are robots with artificial theory of mind, a development that will only deepen this ethical problem.
2. The problem of engineering ethical machines
Clearly, for all sorts of applications, intelligent robots will need to be programmed with rules of safe/acceptable behaviour (cf. Asimov's 'laws' of robotics). This is not so far-fetched: Ron Arkin, a roboticist at Georgia Tech, has proposed the development of an artificial conscience for military robots.
Such systems are no longer just an engineering problem. In short, it is no longer good enough to build an intelligent robot; we need to be able to build an ethical robot. And, I would strongly argue, if it is a robot with artificial emotions, or one designed to provoke human emotional responses, that robot must also have artificial ethics.
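To illustrate the kind of mechanism this implies – purely as a toy sketch, not Arkin's actual architecture – an 'artificial conscience' can be thought of as an explicit rule check that sits between the robot's planner and its actuators. Every name, field and threshold below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk_of_harm_to_human: float  # estimated by the robot's own models

# Toy rule, loosely in the spirit of Asimov's first law: never execute an
# action whose estimated risk of harming a human exceeds a threshold.
MAX_ACCEPTABLE_RISK = 0.0

def ethically_permitted(action: Action) -> bool:
    """A crude stand-in for an 'artificial conscience': the planner
    proposes, but this check can veto before any motor command is issued."""
    return action.risk_of_harm_to_human <= MAX_ACCEPTABLE_RISK

if __name__ == "__main__":
    proposed = [Action("fetch medication", 0.0), Action("force door open", 0.4)]
    for action in proposed:
        verdict = "execute" if ethically_permitted(action) else "veto"
        print(f"{action.name}: {verdict}")
```

The hard part, of course, is not the veto logic but deciding what the rules should be and how the robot estimates the consequences of its actions in the first place.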
3. The societal problem of correct ethical behaviour toward robot companions or robot pets
Right now many people think of robots as slaves: that's what the word means. But in many near-term applications it will, I argue, be more appropriate to think of robots as companions, especially if those robots – say in healthcare – even in a limited sense 'get to know' their human charges over a period of time.
Our society rightly abhors cruelty to animals. Is it possible to be cruel to a robot? Right now, not really; but as soon as we have robot companions or pets on which humans come to depend – and that's in the very near future – those humans will certainly expect their robots to be treated with respect and dignity, perhaps even to be accorded (animal) rights. Would they be wrong to expect this?
4. The ethical problem of engineering sentient machines
A contemporary German philosopher, Thomas Metzinger, has asserted that all research in intelligent systems should be stopped. His argument is that in trying to engineer artificial consciousness we will, unwittingly, create machines that are in effect disabled, simply because we can't go from insect-level to human-level intelligence in one go. In effect, he argues, we could create AI that can experience suffering. Now his position is extreme, but it does, I think, illustrate the difficulty. In moving from simple automata, which in no sense could be thought of as sentient, to intelligent machines that simulate sentience, we need to be mindful of the ethical minefield of engineering sentience.
In summary:
What is it that makes intelligent autonomous systems different from other technologies, in a way that means we need to have special concerns about their ethical and societal impacts? It is, I suggest, two factors in combination. Firstly, agency. Secondly, the ability to elicit an emotional response, or in extremis dependency, from humans. Right now we have plenty of systems with agency, within prescribed limits, like airline autopilots or room thermostats. We also have machines that generate emotional responses: Ferraris or iPods. Intelligent robots are different because they bring these two elements together in a potent new combination.
This post is the text of the statement I prepared for the EPSRC Societal Impact Panel in November 2009.