Sunday, June 17, 2018

What is Artificial Intelligence? (Or, can machines think?)

Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils.

I start the keynote with Alan Turing's famous question: Can a Machine Think? and explain that thinking is not just the conscious reflection of Rodin's Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI 60 years ago we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact it has turned out to be the other way round: we've had computers that can expertly play chess for over 20 years, but we can't yet build a robot that could go into your kitchen and make you a cup of tea (see also the Wozniak coffee test).

In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, on a linear scale of intelligence from not very intelligent to human intelligence. I ask where a robot vacuum cleaner would be on this scale and propose that such a robot is about as smart as E. coli (a single-celled organism). I then illustrate the difficulty of placing the Actroid robot on this scale because, although it may look convincingly human (from a distance), in reality the robot is not very much smarter than a washing machine (and I hint that this is an ethical problem).

In slide 7 I show how apparently intelligent behaviour doesn't require a brain, with the Solarbot. This robot is an example of a Braitenberg machine. It has two solar panels (which look a bit like wings) acting as both sensors and power sources; the left hand panel is connected to the right hand wheel and vice versa. These direct connections mean that Solarbot can move towards the light and even navigate its way through obstacles, thus showing that intelligent behaviour is an emergent property of the interactions between body and environment.

In slide 8 I ask the question: What is the most advanced AI in the world today? (A question I am often asked.) Is it for example David Hanson's robot Sophia (which some press reports have claimed as the world's most advanced)? I argue it is not, since it is a chatbot AI - with a limited conversational repertoire - with a physical body (imagine Alexa with a humanoid head). Is it the DeepMind AI AlphaGo which famously beat the world's best Go player in 2016? Although very impressive I again argue no, since AlphaGo cannot do anything other than play Go. Instead I suggest that everyday Google might well be the world's most advanced AI (on this I agree with my friend Joanna Bryson). Google is in effect a librarian able to find a book from an immense library for you - on the basis of your ill-formed query - more or less instantly! (And this librarian is polylingual too.)

In slide 9 I make the point that intelligence is not one thing that animals, robots and AIs have more or less of (in other words the linear scale shown on slides 5 and 6 is wrong). Then in slides 10-13 I propose four distinct categories of intelligence: morphological, swarm, individual and social intelligence. I suggest in slides 14-16 that if we express these as four axes of a graph then we can (very approximately) compare the intelligence of different organisms, including humans. In slide 17 I show some robots and argue that this graph shows why robots are so unintelligent: it is because robots generally only have two of the four kinds of intelligence, whereas animals typically have three or sometimes all four. A detailed account of these ideas can be found in my paper How intelligent is your intelligent robot?
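One way to make the four-axes idea concrete is as a profile per organism or robot. The axis scores below are purely hypothetical, invented for this sketch rather than taken from the talk; the point is only the shape of the comparison.

```python
# Hypothetical axis scores (0 = absent, 3 = strong), for illustration only
profiles = {
    "human":        {"morphological": 2, "swarm": 1, "individual": 3, "social": 3},
    "ant":          {"morphological": 2, "swarm": 3, "individual": 1, "social": 2},
    "robot vacuum": {"morphological": 1, "swarm": 0, "individual": 1, "social": 0},
}

def kinds_of_intelligence(profile):
    """How many of the four categories are present at all."""
    return sum(1 for score in profile.values() if score > 0)
```

Counting the non-zero axes reproduces the argument of slide 17: the animals score on three or four axes, while a typical robot scores on only two.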

In the next segment, slides 18-20, I ask: how do we make Artificial General Intelligence (AGI)? I suggest that the key difference between current narrow AI and AGI is the ability - which comes very naturally to humans - to generalise knowledge learned in one context to a completely different context. This, I think, is the basis of human creativity. Using Data from Star Trek: The Next Generation as a science fiction example of an AGI with human-equivalent intelligence - what we might be aiming for in the quest for AGI - I explain that there are three approaches to getting there: by design, using artificial evolution, or by reverse engineering animals. I offer the opinion that the gap between where we are now and Data-like AGI is about the same as the gap between current spacecraft engine technology and warp drive technology. In other words: not any time soon.

In the fourth segment of the talk (slides 21-24) I give a very brief account of evolutionary robotics - a method for breeding robots in much the same way farmers have artificially selected new varieties of plants and animals for thousands of years. I illustrate this with the wonderful Golem project which, for the first time, evolved simple creatures and then 3D printed the most successful ones. I then introduce our new four-year EPSRC-funded project Autonomous Robot Evolution: from cradle to grave. In a radical new approach we aim to co-evolve robot bodies and brains in real time and real space. Using 3D printing, new robot designs will literally be printed, before being trained in a nursery, then fitness tested in a target environment. With this approach we hope to be able to evolve robots for extreme environments; however, because the energy costs are so high, I do not think evolution is a route to truly thinking machines.
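The selection-crossover-mutation loop at the heart of any evolutionary robotics system can be sketched as follows. This is a toy sketch with bit-string genomes and a plug-in fitness function; in the real project a genome would encode a robot's body and brain, and fitness would be measured by testing the printed robot in a target environment.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50,
           mutation_rate=0.05, seed=1):
    """Minimal evolutionary loop: select, recombine, mutate, repeat."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Artificial selection: keep the fitter half as parents
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)       # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation (bool XOR int yields an int in Python)
            children.append([g ^ (rng.random() < mutation_rate) for g in child])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)   # toy fitness: number of 1s in the genome
```

With `sum` as a stand-in fitness the loop simply breeds genomes full of 1s, but the structure - evaluate, select, breed - is the same whether the genome encodes eight bits or a whole robot.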

In the final segment (slides 25-35) I return to the approach of trying to design rather than evolve thinking machines. I introduce the idea of embedding a simulation of a robot in that robot, so that it has the ability to internally model itself. The first example I give is the amazing anthropomimetic robot invented by my old friend Owen Holland, called ECCEROBOT. ECCEROBOT is able to learn how to control its own very complicated and hard-to-control body by trying out possible movement sequences in its internal model (Owen calls this a 'functional imagination'). I then outline our own work to use the same principle - a simulation-based internal model - to demonstrate simple ethical behaviours, first with e-puck robots, then with NAO robots. These experiments are described in detail here and here. I suggest that these robots - with their ability to model and predict the consequences of their own and others' actions, in other words to anticipate the future - may represent the first small steps toward thinking machines.

Related blog posts:
60 years of asking: can robots think?
How intelligent are intelligent robots?
Robot bodies and how to evolve them

Wednesday, May 30, 2018

Simulation-based internal models for safer robots

Readers of this blog will know that I've become very excited by the potential of robots with simulation-based internal models in recent years. So far we've demonstrated their potential in simple ethical robots and as the basis for rational imitation. Our most recent publication instead examines the potential of robots with simulation-based internal models for safety. Of course it's not hard to see why the ability to model and predict the consequences of both your own and others' actions can help you to navigate the world more safely than without that ability.

Our paper Simulation-Based Internal Models for Safer Robots demonstrates the value of anticipation in what we call the corridor experiment. Here a smart robot (equipped with a simulation-based internal model, which we call a consequence engine) must navigate to the end of a corridor while maintaining a safe space around it at all times, despite five other robots moving randomly in the corridor - in much the same way you and I might have to navigate down a busy office corridor while others are coming in the opposite direction.

Here is the abstract from our paper:
In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.
So, does it work? Thanks to some brilliant experimental work by Christian Blum the answer is a resounding yes. The best way to understand what's going on is with the wonderful gif animation of one experimental run below. The smart robot (blue) starts at the left and has the goal of safely reaching the right hand end of the corridor – its actual path is also shown in blue. Meanwhile 5 (red) robots are moving randomly (including bouncing off walls) and their actual paths are also shown in red; these robots are equipped only with simple obstacle avoidance behaviours. The larger blue circle shows blue's 'attention radius' – to reduce computational effort blue will only model red robots within this radius. The yellow paths in front of the red robots in blue's attention radius show blue's predictions of how those robots will move (taking into account collisions with the corridor walls, with blue and with each other). The light blue projection in front of blue shows which of the 34 possible next actions internally modelled by blue is actually chosen as the next action (which, as you will see, sometimes includes standing still).
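The action-selection step of a consequence engine can be sketched as follows. This is illustrative only: the function names, the one-step prediction horizon and the veto-then-greedy rule are my assumptions for the sketch, not the controller from the paper.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def choose_action(pos, goal, others, actions, predict, safety_radius=0.5):
    """Internally model each candidate action one step ahead: veto any
    action predicted to bring another robot inside the safety radius,
    then pick the safe action that gets closest to the goal. If every
    action is unsafe, stand still."""
    predicted_others = [predict(o) for o in others]
    best, best_cost = None, float("inf")
    for ax, ay in actions:
        nxt = (pos[0] + ax, pos[1] + ay)
        if any(dist(nxt, o) < safety_radius for o in predicted_others):
            continue                      # predicted safety violation: veto
        cost = dist(nxt, goal)
        if cost < best_cost:
            best, best_cost = (ax, ay), cost
    return best if best is not None else (0.0, 0.0)   # standing still
```

Here `predict` stands in for the internal simulation of the other robots, e.g. a constant-velocity guess such as `lambda o: (o[0] + o[2], o[1] + o[3])` for a robot logged as `(x, y, vx, vy)`. Notice that standing still falls out naturally when every move is predicted to be unsafe, just as in the animation.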

What do the results show us? Christian ran lots of trials – 88 simulations and 54 real robot experiments – over four experiments: (1) the baseline in simulation – in which the blue robot has only a simple reactive collision avoidance behaviour, (2) the baseline with real robots, (3) using the consequence engine (CE) in the blue robot in simulation, and (4) using the consequence engine in the blue robot with real robots. In the results below (a) shows the time taken for the blue robot to reach the end of the corridor, (b) shows the distance that the blue robot covers while reaching the end of the corridor, (c) shows the “danger ratio” experienced by the blue robot, and (d) shows the number of consequence engine runs per timestep in the blue robot. The danger ratio is the percentage of the run time that another robot is within the blue robot’s safety radius.

For a relatively small cost in additional run time and distance covered, panels (a) and (b), the danger ratio is very significantly reduced from a mean value of ~20% to a mean value of zero, panel (c). Of course there is a computational cost, and this is reflected in panel (d); the baseline experiment has no consequence engine and hence runs no simulations, whereas the smart robot runs an average of between 8 and 10 simulations per time-step. This is exactly what we would expect: predicting the future clearly incurs a computational overhead.
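The danger ratio defined above is straightforward to compute from logged trajectories. A minimal sketch, assuming time-synchronised position logs (the function and argument names are mine, not the paper's):

```python
import math

def danger_ratio(my_path, other_paths, safety_radius=0.5):
    """Percentage of timesteps at which at least one other robot is
    inside the robot's safety radius."""
    unsafe = sum(
        1 for t, p in enumerate(my_path)
        if any(math.hypot(p[0] - path[t][0], p[1] - path[t][1]) < safety_radius
               for path in other_paths))
    return 100.0 * unsafe / len(my_path)
```

For example, a four-step run in which one red robot intrudes at a single timestep gives a danger ratio of 25%.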

Full paper reference:
Blum C, Winfield AFT and Hafner VV (2018) Simulation-Based Internal Models for Safer Robots. Front. Robot. AI 4:74. doi: 10.3389/frobt.2017.00074

I am indebted to Christian Blum who programmed the robots, set up the experiment and obtained the results outlined here. Christian lead-authored the paper, which was also co-authored by my friend and research collaborator Verena Hafner, who was Christian's PhD advisor.

Saturday, February 03, 2018

Why ethical robots might not be such a good idea after all

This week my colleague Dieter Vanderelst presented our paper: The Dark Side of Ethical Robots at AIES 2018 in New Orleans.

I blogged about Dieter's very elegant experiment here, but let me summarise. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot's logic it is transformed into a distinctly unethical robot - behaving either competitively or aggressively toward the proxy human.

Here are our paper's key conclusions:

The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviours require the same cognitive machinery with – in our implementation – only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.
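Just how small that difference is can be shown with a tiny hypothetical sketch (not our actual implementation; the names and outcome values are invented):

```python
ETHICAL, AGGRESSIVE = +1, -1

def select_action(actions, human_outcome, sign=ETHICAL):
    """`human_outcome(a)` is the predicted desirability of action `a` for
    the human. The entire ethical/aggressive difference is the sign."""
    return max(actions, key=lambda a: sign * human_outcome(a))
```

All of the cognitive machinery (prediction, evaluation, selection) is shared; flipping one sign turns a robot that seeks the best outcome for the human into one that seeks the worst.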

On the face of it, given that we can (at least in principle) build explicitly ethical machines* then it would seem that we have a moral imperative to do so; it would appear to be unethical not to build ethical machines when we have that option. But the findings of our paper call this assumption into serious doubt. Let us examine the risks associated with ethical robots and if, and how, they might be mitigated. There are three.
  1. First there is the risk that an unscrupulous manufacturer might insert some unethical behaviours into their robots in order to exploit naive or vulnerable users for financial gain, or perhaps to gain some market advantage (here the VW diesel emissions scandal of 2015 comes to mind). There are no technical steps that would mitigate this risk, but the reputational damage from being found out is undoubtedly a significant disincentive. Compliance with ethical standards such as BS 8611 guide to the ethical design and application of robots and robotic systems, or emerging new IEEE P700X ‘human’ standards would also support manufacturers in the ethical application of ethical robots. 
  2. Perhaps more serious is the risk arising from robots that have user adjustable ethics settings. Here the danger arises from the possibility that either the user or a technical support engineer mistakenly, or deliberately, chooses settings that move the robot’s behaviours outside an ‘ethical envelope’. Much depends of course on how the robot’s ethics are coded, but one can imagine the robot’s ethical rules expressed in a user-accessible format, for example, an XML-like script. No doubt the best way to guard against this risk is for robots to have no user adjustable ethics settings, so that the robot’s ethics are hard-coded and not accessible to either users or support engineers. 
  3. But even hard-coded ethics would not guard against undoubtedly the most serious risk of all, which arises when those ethical rules are vulnerable to malicious hacking. Given that cases of white-hat hacking of cars have already been reported, it's not difficult to envisage a nightmare scenario in which the ethics settings for an entire fleet of driverless cars are hacked, transforming those vehicles into lethal weapons. Of course, driverless cars (or robots in general) without explicit ethics are also vulnerable to hacking, but weaponising such robots is far more challenging for the attacker. Explicitly ethical robots focus the robot’s behaviours to a small number of rules which make them, we think, uniquely vulnerable to cyber-attack.
OK, so taking the most serious of these risks - hacking - we can envisage several technical approaches to mitigating the risk of malicious hacking of a robot’s ethical rules. One would be to place those ethical rules behind strong encryption. Another would require a robot to authenticate its ethical rules by first connecting to a secure server. An authentication failure would disable those ethics, so that the robot defaults to operating without explicit ethical behaviours. Although feasible, these approaches would be unlikely to deter the most determined hackers, especially those who are prepared to resort to stealing encryption or authentication keys.
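The authentication idea can be sketched like this (an illustrative scheme using an HMAC over the rules; the function names and the fall-back behaviour are my assumptions, not a proposal from the paper):

```python
import hmac
import hashlib

def load_ethics(rules: bytes, signature: str, key: bytes):
    """Enable the robot's explicit ethical rules only if their signature
    verifies against a trusted key; on any authentication failure, return
    None so the robot defaults to operating without explicit ethics."""
    expected = hmac.new(key, rules, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, signature):
        return rules.decode("utf-8")      # ethics enabled
    return None                           # authentication failed: disable
```

Even with constant-time comparison and strong hashing, the scheme is only as secure as the key storage, which is exactly why stolen keys remain the weak point noted above.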

It is very clear that guaranteeing the security of ethical robots is beyond the scope of engineering and will need regulatory and legislative efforts. Considering the ethical, legal and societal implications of robots, it becomes obvious that robots themselves are not where responsibility lies. Robots are simply smart machines of various kinds and the responsibility to ensure they behave well must always lie with human beings. In other words, we require ethical governance, and this is equally true for robots with or without explicit ethical behaviours.

Two years ago I thought the benefits of ethical robots outweighed the risks. Now I'm not so sure. I now believe that - even with strong ethical governance - the risks that a robot’s ethics might be compromised by unscrupulous actors are so great as to raise very serious doubts over the wisdom of embedding ethical decision making in real-world safety critical robots, such as driverless cars. Ethical robots might not be such a good idea after all.

*As a footnote let me explain what I mean by explicitly ethical robots: these are robots that select behaviours on the basis of ethical rules - in a sense they can be said to reason about ethics (in our case by evaluating the ethical consequences of several possible actions). Here I'm using the terminology of James Moor, who proposed four kinds of ethical agents, as I explain here. Moor shows in his classification that all robots (and AIs) are ethical agents in the sense that they can all have an ethical impact.

Thus, even though we're calling into question the wisdom of explicitly ethical robots, that doesn't change the fact that we absolutely must design all robots to minimise the likelihood of ethical harms, in other words we should be designing implicitly ethical robots within Moor's schema.

Here is the full reference to our paper:

Vanderelst D and Winfield AFT (2018), The Dark Side of Ethical Robots, AAAI/ACM Conf. on AI Ethics and Society (AIES 2018), New Orleans.

Related blog posts:
The Dark side of Ethical Robots
Could we make a moral machine?
How ethical is your ethical robot?
Towards ethical robots: an update
Towards an Ethical Robot

Thursday, February 01, 2018

Ethical Governance: what is it and who's doing it?

These days I often find myself talking about ethical governance. Not just talking about but advocating: for instance in written evidence to the 2016 parliamentary select committee on robots and AI I made the link between ethical governance and trust. I believe that without transparent ethical governance robotics and AI will not win public trust, and without trust we will not see the societal benefits of robots and AI that we all hope for.

But what exactly is ethical governance and who is doing it, and perhaps more importantly, who in robotics and AI is doing it well?

In a draft paper on the subject I define ethical governance as
a set of processes, procedures, cultures and values designed to ensure the highest standards of behaviour. Ethical governance thus goes beyond simply good (i.e. effective) governance, in that it inculcates ethical behaviours. Normative ethical governance is seen as an important pillar of responsible research and innovation (RRI), which “entails an approach, rather than a mechanism, so it seeks to deal with ethical issues as or before they arise in a principled manner rather than waiting until a problem surfaces and dealing with it in an ad hoc way [1]” 
The link I make here between ethical governance and responsible research and innovation is I think really important. Ethical governance is a key part of RRI. They are not the same thing but it would be hard to imagine good ethical governance without RRI, and vice versa.

So what would I expect of companies or organisations who claim to be ethical? As a starting point for discussion here are five things that ethical companies should do:
  • Have an ethical code of conduct, so that everyone in the company understands what is expected of them. This should sit alongside a mechanism which allows employees to be able to raise ethical concerns, if necessary in confidence, without fear of displeasing a manager.
  • Provide ethics training for everyone, without exception. Ethics, like quality, is not something you can do as an add-on; simply appointing an ethics manager, while not a bad idea, is not enough. Ethical governance needs to become part of a company's culture and DNA, not just in product development but in management, finance, HR and marketing too.
  • Undertake ethical risk assessments of all new products, and act upon the findings of those assessments. A toolkit, or method, for ethical risk assessment of robots and robotic systems exists in British Standard BS 8611, which - alongside much else - sets out 20 ethical risks and hazards together with recommendations on how to mitigate these and verify that they have been addressed.
  • Be transparent about your ethical governance. Of course your robots and AIs must be transparent too, but here I mean transparency of process, not product. It's not enough to claim to be ethical, you need to show how you are ethical. That means publishing your ethical code of conduct, membership of your ethics board if you have one (and its terms of reference), and ideally case studies showing how you have conducted ethical risk assessments.
  • Really value ethical governance. Even if you have the four processes above in place, you also need to be sincere about ethical governance: it must be one of your core values, not just a smokescreen for what you really value, like maximising shareholder returns.
My final point about really valuing ethical governance is of course hard to evidence. But, like trust, confidence in a company's claim to be ethical has to be earned and - as we've seen - can easily be damaged.

This brings me to my second question: who is doing ethical governance? And are there any examples of best practice? A week or so ago I asked Twitter this question. I've had quite a few nominations but haven't yet looked into them all. When I have, I will complete this blog post.

[1] Rainey, S., and Goujon, P. (2011). Toward a Normative Ethic of Governance of Technology: Contextual Pragmatism and Ethical Governance. In René von Schomberg (ed.), Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields, Report of the European Commission-DG Research and Innovation.