Friday, October 30, 2015

How ethical is your ethical robot?

If you're in the business of making ethical robots, then sooner or later you have to face the question: how ethical is your ethical robot? If you've read my previous blog posts then you will probably have come to the conclusion 'not very' - and you would be right - but here I want to explore the question in a little more depth.

First let us consider whether our 'Asimovian' robot can be considered ethical at all. For the answer I'm indebted to philosopher Dr Rebecca Reilly-Cooper who read our paper and concluded that yes, we can legitimately describe our robot as ethical, at least in a limited sense. She explained that the robot implements consequentialist ethics. Rebecca wrote:
"The obvious point that any moral philosopher is going to make is that you are assuming that an essentially consequentialist approach to ethics is the correct one. My personal view, and I would guess the view of most moral philosophers, is that any plausible moral theory is going to have to pay at least some attention to the consequences of an action in assessing its rightness, even if it doesn’t claim that consequences are all that matter, or that rightness is entirely instantiated in consequences. So on the assumption that consequences have at least some significance in our moral deliberations, you can claim that your robot is capable of attending to one kind of moral consideration, even if you don’t make the much stronger claim that is capable of choosing the right action all things considered."
One of the great things about consequences is that they can be estimated - in our case using a simulation-based internal model which we call a consequence engine. So from a practical point of view it seems that we can build a robot with consequentialist ethics, whereas it is much harder to think about how to build a robot with, say, Deontic ethics or Virtue ethics.
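To make the idea concrete, here is a minimal sketch of consequentialist action selection driven by an internal simulation. It is an illustration only - the function names and scoring are placeholders, not the code of our consequence engine.

# Minimal sketch of consequentialist action selection driven by an internal
# simulation (an illustration only, not the code of our consequence engine).
def select_action(robot_state, world_state, possible_actions, simulate, evaluate_outcome):
    """simulate(robot_state, world_state, action) -> predicted next world state
    evaluate_outcome(predicted_state) -> score, higher meaning better consequences"""
    best_action, best_score = None, float("-inf")
    for action in possible_actions:
        predicted = simulate(robot_state, world_state, action)   # run the internal model
        score = evaluate_outcome(predicted)                      # judge by consequences alone
        if score > best_score:
            best_action, best_score = action, score
    return best_action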

Having established what kind of ethics our ethical robot has, now consider the question of how far the robot goes toward moral agency. Here we can turn to an excellent paper by James Moor, called The Nature, Importance and Difficulty of Machine Ethics. In that paper* Moor suggests four categories of ethical agency - starting with the lowest. Let me summarise those here:
  1. Ethical impact agents: Any machine that can be evaluated for its ethical consequences.
  2. Implicit ethical agents: Designed to avoid negative ethical effects.
  3. Explicit ethical agents: Machines that can reason about ethics.
  4. Full ethical agents: Machines that can make explicit moral judgments and justify them.
The first category, ethical impact agents, really includes all machines. A good example is a knife, which can clearly be used for good (chopping food, or surgery) or ill (as a lethal weapon). Now think about the blunt plastic knife that comes with airplane food - that falls into Moor's second category since it has been designed to reduce the potential for ethical misuse - it is an implicit ethical agent. Most robots fall into the first category: they are ethical impact agents, and a subset - those that have been designed to avoid harm by, for instance, detecting if a human walks in front of them and automatically coming to a stop - are implicit ethical agents.

Let's now skip to Moor's fourth category, because it helps to frame our question - how ethical is your ethical robot? At present I would say there are no machines that are full ethical agents. In fact the only full ethical agents we know are 'adult humans of sound mind'. The point is this - to be a full ethical agent you need to be able not only to make moral judgements but also to account for why you made the choices you did.

It is clear that our simple Asimovian robot is not a full ethical agent. It cannot choose how to behave (as you or I can), but is compelled to make decisions based on the harm-minimisation rules hard-coded into it. And it cannot justify those decisions post-hoc. It is, as I've suggested elsewhere, an ethical zombie. I would however argue that the robot can be said to be reasoning about ethics: it uses its cognitive machinery to simulate ahead, modelling and evaluating the consequences of each of its next possible actions, then applies its safety/ethical logic rules to choose between those actions. I believe our robot is an explicit ethical agent in Moor's scheme.

Assuming you agree with me, does the fact that we have reached the third category in Moor's scheme mean that full ethical agents are on the horizon? The answer is a big NO. The scale of Moor's scheme is not linear. It's a relatively small step from ethical impact agents to implicit ethical agents. Then there is a very much bigger step to explicit ethical agents, which we are only just beginning to take. But there is a huge gulf from there to full ethical agents, since they would almost certainly need something approaching human-equivalent intelligence.

But maybe it's just as well. The societal implications of full ethical agents, if and when they exist, would be huge. For now at least, I think I prefer my ethical robots to be zombies.


*Moor JH (2006), The Nature, Importance and Difficulty of Machine Ethics, IEEE Intelligent Systems, 21 (4), 18-21.

Tuesday, August 25, 2015

My contribution to an Oral History of Robotics

In March 2013 I was interviewed by Peter Asaro for the IEEE.tv Oral History of Robotics series. That interview has now been published, and here it is:


In case you're wondering, I'm sitting in Noel Sharkey's study (shivering slightly - it was a bitterly cold day in Sheffield). It was a real privilege to be asked to contribute, especially alongside the properly famous roboticists Peter interviewed. Do check them out. There doesn't seem to be an index page, but the set starts on page 2 of the IEEE.tv history channel.

Postscript: a full transcript of the interview can be found here: http://ethw.org/Oral-History:Alan_Winfield

Friday, July 31, 2015

Towards ethical robots: an update

This post is just a quick update on our ethical robots research.

Our initial ethical robot experiments were done with e-puck robots. Great robots but not ideal for what is essentially human-robot interaction (HRI) research. Thus we've switched to NAO robots and have spent the last few months re-coding for the NAOs. This is not a trivial exercise. The e-pucks and NAO robots have almost nothing in common, and colleague and project post-doc Dieter Vanderelst has re-created the whole consequence engine architecture from the ground up, together with the tools for running experiments and collecting data.

Why the NAO robots? Well, they are humanoid and therefore better fitted for HRI research. But more importantly they're much more complex and expressive than the e-puck robots, and provide huge scope for interesting behaviours and responses, such as speech or gesture.


But we are not yet making use of that additional expressiveness. In initial trials Dieter has coded the ethical robot to physically intervene, i.e. block the path of the 'human' in order to prevent it from coming to harm. Below are two example runs, illustrated with composite images showing overlaid successive screen grabs from the overhead camera.


Here red is the ethical robot, initially heading toward its goal position at the top right of the arena. Meanwhile blue - the proxy human - is not looking where it's going and is heading for danger at the bottom right of the arena. When red notices this it diverts from its path and blocks blue, which then simply halts. Red's consequence engine now predicts no danger to blue, so red resumes progress toward its goal.
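In outline the intervention works something like the sketch below. This is only an illustration of the idea - the names and geometry are mine, not Dieter's code - but it captures the essentials: project the human's current heading forward and, if that path ends in the danger zone, head for a point between the human and the danger.

# Illustrative sketch of the path-blocking idea (my reconstruction, not the
# code running on the NAOs).
import numpy as np

def blocking_target(human_pos, human_heading, danger_pos, danger_radius,
                    human_speed, horizon=5.0, dt=0.1):
    """Project the human's current heading forward; if the projected path
    enters the danger zone, return a point between the human and the danger
    for the ethical robot to move to, otherwise return None."""
    pos = np.asarray(human_pos, dtype=float)
    step = human_speed * dt * np.asarray(human_heading, dtype=float)
    for _ in range(int(horizon / dt)):
        pos = pos + step
        if np.linalg.norm(pos - np.asarray(danger_pos, dtype=float)) < danger_radius:
            return 0.5 * (np.asarray(human_pos, dtype=float) +
                          np.asarray(danger_pos, dtype=float))   # block midway
    return None                                                  # no intervention needed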

Although we have no videos just yet you can catch a few seconds of Dieter and the robots at 2:20 in this excellent video made by Maggie Philbin and Teentech for the launch of the BBC Micro:bit earlier this month.

Sunday, May 31, 2015

Copyright violations bring Forth memories

Last week I found a couple of copyright violations. Am I upset? Not at all - actually I'm delighted that stuff I did in 1983 is alive and well on the Interweb thanks to the efforts of others.

The first is an online readable copy of my 1983 book The Complete Forth. It's a textbook on the programming language Forth that I was heavily into at the time. The book was first published by Sigma Press, then internationally by John Wiley, and was translated into both Dutch and Japanese. Someone - I assume from the Jupiter Ace archive - has gone to the trouble of scanning every page. Even the pull out reference card. Whoever you are, thank you.

Just before I wrote that book, I had developed a Forth programming system (a programming environment that integrates compiler and interpreter) for the NASCOM 2 Z80 micro computer. A friend and I then marketed Hull-Forth and, I recall, sold several hundred copies. Of course this was pre-internet so marketing meant small ads in the magazine Personal Computer World. What we actually shipped was a printed manual together with the code on a cassette tape. Floppy disks were hugely expensive and beyond the reach of hobby computers, so for saving and loading programs we used audio cassette recorders. They were slow and very unreliable; if there was a checksum error you just had to rewind, cross your fingers and try again. I can't imagine anyone feeling nostalgic for that particular technology.

This brings me to the second copyright infringement. By accident I discovered there is a webpage for NASCOM enthusiasts, and several emulators, so you can run a virtual NASCOM on your modern PC. Scrolling down the long list of software in the NASCOM repository, in the section Programming Languages, I find Hull Forth.
Hah! Someone must have gone to a lot of trouble to get the code from the original cassette*, recorded using the Kansas City Standard at, I think, 300 baud (so slow you could almost hear the noughts and ones!), to a .NAS file you can download into your NASCOM emulator.

Ok, now to get that NASCOM emulator running. It will be fun (and slightly absurd) to run Hull Forth again for the first time in about 33 years.


*I probably still have one of those cassettes in my loft**, but no way of reading the data from it.
**Along with stacks of punched cards, rolls of paper tape, and all kinds of floppy disks.

Saturday, May 30, 2015

Forgetting may be important to cultural evolution

Our latest paper from the Artificial Culture project has just been published: On the Evolution of Behaviors through Embodied Imitation.

Here is the abstract
This article describes research in which embodied imitation and behavioral adaptation are investigated in collective robotics. We model social learning in artificial agents with real robots. The robots are able to observe and learn each others' movement patterns using their on-board sensors only, so that imitation is embodied. We show that the variations that arise from embodiment allow certain behaviors that are better adapted to the process of imitation to emerge and evolve during multiple cycles of imitation. As these behaviors are more robust to uncertainties in the real robots' sensors and actuators, they can be learned by other members of the collective with higher fidelity. Three different types of learned-behavior memory have been experimentally tested to investigate the effect of memory capacity on the evolution of movement patterns, and results show that as the movement patterns evolve through multiple cycles of imitation, selection, and variation, the robots are able to, in a sense, agree on the structure of the behaviors that are imitated.
Let me explain.

In the artificial culture project we implemented social learning in a group of robots. Robots were programmed to learn from each other, by imitation. Imitation was strictly embodied, so robots observed each other using their onboard sensors and, on the basis of only visual sense data from a robot’s own camera and perspective, the learner robot inferred another robot’s physical behaviour. (Here is a quick 5 minute intro to the project.)

Not surprisingly embodied robot-robot imitation is imperfect. A combination of factors, including the robots' relatively low-resolution onboard camera, variations in lighting, small differences between robots, multiple robots sometimes appearing within a learner robot's field of view, and of course having to infer a robot's movements by tracking the relative size and position of that robot in the learner's field of view, leads to imitation errors. And some movement patterns are easier to imitate than others (think of how much easier it is to learn the steps of a slow waltz than the samba by watching your dance teacher). The fidelity of embodied imitation for robots, just as for animals, is a complex function of four factors: (1) the behaviours being learned, (2) the robots' sensorium and morphology, (3) environmental noise and (4) the inferential learning algorithm.

But rather than being a problem, noisy social learning was our aim. We are interested in the dynamics of social learning, and in particular the way that behaviours evolve as they propagate through the group. Noisy social learning means that behaviours are subject to variation as they are copied from one robot to another. Multiple cycles of imitation (robot B learns behaviour m from A, then robot C learns the mutated behaviour m′ from robot B, and so on) give rise to behavioural heredity. And if robots are able to select which learned behaviours to enact we have the three Darwinian operators for evolution - variation, heredity and selection - except that this is behavioural, or memetic, evolution.
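The copy-with-variation chain can be sketched in a few lines of Python. This is purely illustrative - the real memes were movement patterns inferred from camera data, not lists of numbers - but it shows how imitation noise accumulates down a chain of learners.

# Illustrative sketch of noisy behavioural copying - not the project code.
# Here a 'meme' is just a list of (turn_angle, distance) movement segments.
import random

def imitate(meme, noise=0.1):
    """Return an imperfect copy: each segment is perturbed, standing in for
    the errors introduced by embodied visual imitation."""
    return [(a + random.gauss(0, noise), d + random.gauss(0, noise))
            for a, d in meme]

meme_a = [(120.0, 1.0), (120.0, 1.0), (120.0, 1.0)]   # a triangle-like pattern
meme_b = imitate(meme_a)       # robot B learns from A
meme_c = imitate(meme_b)       # robot C learns from B: variation accumulates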

These experiments show that embodied behavioural evolution really does take place. If selection is random - that is, robots select which behaviour to enact, with equal probability, from those already learned - then we see several interesting findings.

1. If by chance one or more high fidelity copies follow a poor fidelity imitation, the large variation in the initial noisy learning can lead to a new behavioural species, or tradition. This shows that noisy social learning can play a role in the emergence of novelty in behavioural (i.e. cultural) evolution. That was written up in Winfield and Erbas, 2011.

But it is the second and third findings that we describe in our new paper.

2. We see that behaviours appear to adapt to be easier to learn, i.e. better ‘fitted’ to the robot swarm. The way to think about this is that the robots' sensors and bodies, and physical environment of the arena with several robots (including lighting), together comprise the 'ecological niche' for behavioural evolution. Behaviours mutate but the ones better fitted to that niche survive.

3. The third finding from this series of experiments is perhaps the most unexpected and the one I want to outline in a bit more detail here. We ran the same embodied behavioural evolution with three memory sizes: no memory, limited memory and unlimited memory.

In the unlimited memory trials each robot saved every learned meme, so the meme pool across the whole robot population (of four robots) grew as the trial progressed. Thus all learned memes were available to be selected for enaction. In the limited memory trials each robot had a memory capacity of only five learned memes, so that when a new meme was learned the oldest one in the robot's memory was deleted.
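The limited memory condition is, in effect, a bounded first-in first-out buffer. A minimal sketch (not the experiment code) of how a robot's meme memory and random selection could be modelled:

# Sketch of the limited-memory condition: a bounded FIFO of learned memes
# (illustrative only, not the experiment code).
from collections import deque
import random

memory = deque(maxlen=5)         # each robot keeps only its 5 most recent memes

def learn(new_meme):
    memory.append(new_meme)      # when full, the oldest meme is silently dropped

def select_meme_to_enact():
    return random.choice(list(memory))   # random selection, as in these trials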

The diagram below shows the complete family tree of evolved memes, for one typical run of the limited memory case. At the start of the run the robots were seeded with two memes, shown as 1 and 2 at the top of the diagram. Behaviour 1 was a movement pattern in which the robot traces a triangle path, behaviour 2 a square. Because this was a limited memory trial the total meme pool has only 20 behaviours - these are shown below as diamonds. Notice the cluster of 11 closely related memes at the bottom right, all of which are 7th, 8th or 9th generation descendants of the triangle meme.

Behavioural evolution map following a 4-robot experiment with limited memory; each robot stores only the most recent 5 learned behaviours. Each behaviour is descended from two seed behaviours labelled 1 and 2. Orange nodes are high fidelity copies, blue nodes are low fidelity copies. The 20 behaviours in the memory of all 4 robots at the end of the experiment are highlighted as diamonds. Note the cluster of 11 closely-related behaviours at the bottom right.

When we ran multiple trials of the limited and unlimited memory cases, then analysed the number and sizes of the clusters of related memes in the meme pool, we saw that the limited memory trials showed a smaller number of larger clusters than the unlimited memory case. The difference was clear and significant: with limited memory, an average of 2.8 clusters of average size 8.3; with unlimited memory, 3.9 clusters of size 6.9.

Why is this clustering interesting? Well it's because the number and size of clusters in the meme pool are good indicators of its diversity. Think of each cluster of related memes as a 'tradition'. A healthy culture needs a balance between stability and diversity. Neither too much stability, i.e. a very small number (in the limit 1) of traditions, nor too much diversity, i.e. clusters so small that there are no persistent traditions at all. Perhaps the ideal balance is a smallish number of somewhat persistent traditions.

So far I haven't mentioned the no memory case. This was the least interesting of the three. Actually by no memory we mean a memory size of one; in other words a robot has no choice but to enact the last behaviour it learned. There is no selection, and no clusters can form. Traditions can never even get started, let alone persist.

Of course it would be unwise to draw any big conclusions from this limited experimental study. But an intriguing possibility is that some forgetting (but not too much) may, just like noisy imitation, be a necessary condition for the emergence of culture in social agents.

Full reference:
Erbas MD, Bull L and Winfield AFT (2015), On the Evolution of Behaviors through Embodied Imitation, Artificial Life, 21 (2), pp 141-165. The full text (final draft) paper can be downloaded here.

Related blog posts:
Robot imitation as a method for modelling the foundations of social life
Open-ended Memetic Evolution, or is it?

Saturday, April 04, 2015

Yesterday I looked through the eyes of a robot

It was a NAO robot fitted with a 3D printed set of goggles, so that the robot had two real cameras on its head (the eyes of the NAO robot are not in fact cameras). I was in another room wearing an Oculus Rift headset. The Oculus was hooked up to the NAO's goggle cameras, so that I could see through those cameras - in stereo vision.

photo by Peter Gibbons
But it was even better than that. The head positioning system of the Oculus headset was also hooked up to the robot, so I could turn my head and - in sync - the robot's head moved. And I was standing in front of a Microsoft Kinect that was tracking my arm movements. Those movements were being sent to the NAO, so by moving my arms I was also moving the robot's arms.
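I don't know the details of Paul and Peter's code, but in outline a head-tracking loop of this kind reduces to something like the following sketch, with hypothetical headset and robot APIs standing in for the real interfaces.

# Rough sketch of a head-tracking loop (illustrative only; headset.orientation()
# and robot.set_head_angles() are hypothetical stand-ins for the real APIs).
import time

def clamp(value, low, high):
    return max(low, min(high, value))

def head_sync_loop(headset, robot, rate_hz=20):
    """Copy the operator's head yaw and pitch to the robot, within joint limits."""
    while True:
        yaw, pitch, _roll = headset.orientation()        # radians, from the headset
        robot.set_head_angles(clamp(yaw, -2.0, 2.0),      # keep within joint range
                              clamp(pitch, -0.6, 0.5))
        time.sleep(1.0 / rate_hz)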

All together this made for a pretty compelling immersive experience. I was able to look down while moving my (robot) arms and see them pretty much where you would expect to see your own arms. The illusion was further strengthened when Peter placed a mirror in front of the NAO robot, so I could see my (robot) self moving in the mirror. Then it got weird. Peter asked me to open my hand and placed a paper cup into it. I instinctively looked down and was momentarily shocked not to see the cup in my hand. That made me realise how quickly - within a couple of minutes of donning the headset - I was adjusting to the new robot me.

This setup, developed here in the BRL by my colleagues Paul Bremner and Pete Gibbons, is part of a large EPSRC project called Being There. Paul and Peter are investigating the very interesting question of how humans interact with and via teleoperated robots, in shared social spaces. I think teleoperated robot avatars will be hugely important in the near future - more so than fully autonomous robots. But our robot surrogates will not look like a younger, buffer Bruce Willis. They will look like robots. How will we interact with these surrogate robots - robots with human intelligences, human personalities - seemingly with human souls? Will they be treated with the same level of respect that would be accorded their humans if they were actually there, in the flesh? Or will they be despised as voyeuristic: an uninvited webcam on wheels?

Here is a YouTube video of an earlier version of this setup, without the goggles or Kinect:


I didn't get the feeling of what it is like to be a robot, but it's a step in that direction.

Saturday, February 21, 2015

Like doing brain surgery on robots

I spent a rare few hours in the lab over the last few days, actually doing research. Or at least attempting to. In truth I made no progress at all. But I did reach base camp: I managed to set up and run the ethical-dilemma robot experiment. And in the process refreshed my rusty command-line Linux. I was also reminded how time-consuming and downright frustrating experimental robotics research really is. Here's a taste: everything is set up and looks ok... but wait - the tracking system needs recalibrating; hmm... where's the manual? Ah, found it. Ok, wow this is complicated. Needs the special calibrating wand, and set square device... An hour later: ok ready now. Start everything up. But one of the robots isn't connecting. Ah, battery low, ok battery changed, now back up 4 steps and restart. And so it goes.

This is Swarmlab mission control. Three computers, three different operating systems;) The one in the middle (Windows XP) is running the Vicon tracking system, and monitoring via an overhead webcam. The laptop on the left (Ubuntu Linux) is running the four different processes to start and manage the three robots.
Here are the three e-pucks, each a WiFi networked Linux computer (Debian) in its own right. Actually each robot has two processors: a low-level PIC microcontroller to take care of motor control, managing the robot's sensors, etc., and an ARM processor for high-level control. The two are interfaced via the SPI bus.







The setup is complicated: five computers in total, running nine networked processes. Here's a diagram showing those processes and how they are linked.

So, back to research.

The task I had set myself was to make some small changes to the high level controller. How hard can that be, you might think? Well it feels a bit like brain surgery: trying to tease apart code that I barely understand without breaking it. The code is well written and well structured, but it's in Python, which is new to me. It's only a couple of hundred lines, but - like the neocortex - it's a thin layer at the top of a complex network of carefully choreographed processes and subsystems.







Acknowledgements: Christian Blum programmed the ethical robot experiments, supported by Dr Wenguo Liu who designed and setup the Swarmlab experimental infrastructure, including the e-puck Linux extension boards.

Wednesday, February 18, 2015

Surgical micro-robot swarms: science fiction, or realistic prospect?

Imagine a swarm of microscopic robots that we inject into the vascular system: the swarm swims to the source of the problem, then either delivers therapeutics or undertakes microsurgery directly.

That was how I opened a short invited talk at the Royal Society of Medicine on 5 February, at a meeting themed The Future of Robotics in Surgery. The talk was a wonderful opportunity for me to introduce swarm intelligence and speculate on the likelihood of surgical micro-robot swarms, while at the same time learning about robot surgery. Here are the slides from my talk (with links to YouTube videos where available).



The talk was in three parts.

First I introduced swarm intelligence, and its artificial counterpart swarm robotics. I showed, with examples from two of my students, how - with very simple rules - a swarm of robots can keep together as a swarm while moving toward a beacon, and then, with a phagocyte-like behaviour, encapsulate it. In our case these were lab robots moving toward an infra-red beacon, but it's not hard to imagine the same behavioural rules in a microscopic swarm swimming toward the source of a chemical marker (chemotaxis). I then gave two examples of the state of the art in swarm robotics: SYMBRION and (my current favourite) TERMES. I wanted to illustrate emergent physical interaction, in these two cases swarm self-assembly and swarm construction, respectively.
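To give a flavour of just how simple such rules can be, here is a hedged sketch (my illustration, not my students' code): each robot heads for the beacon unless it has strayed too far from its neighbours, in which case it turns back toward them.

# Illustrative sketch of 'keep together while moving to the beacon' rules
# (my illustration, not the actual lab code).
import numpy as np

def swarm_step(position, neighbours, beacon, cohesion_range=1.0, speed=0.05):
    """Return the robot's next position: head for the beacon, unless too far
    from the flock, in which case head back toward the neighbours' centroid."""
    position = np.asarray(position, dtype=float)
    centroid = np.mean(np.asarray(neighbours, dtype=float), axis=0)
    if np.linalg.norm(position - centroid) > cohesion_range:
        direction = centroid - position              # cohesion rule takes priority
    else:
        direction = np.asarray(beacon, dtype=float) - position   # taxis to the beacon
    norm = np.linalg.norm(direction)
    return position + speed * direction / norm if norm > 0 else position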

In part two I outlined what is by far the biggest problem: actually engineering robots at the micro-scale. Here I drew upon examples from my book Robotics: a very short introduction, in a section called A swarm of medical microrobots. Start with cm-sized robots. These already exist in the form of pillbots and I referenced the work of Paolo Dario's lab in this direction. Then get 10 times smaller, to mm-sized robots. Here we're at the limit of making robots with conventional mechatronics. The almost successful I-SWARM project prototyped remarkable robots measuring 4 x 4 x 3mm. But now shrink by another 3 orders of magnitude to microbots, measured in micrometres. This is how small robots would have to be in order to swim through and access (most of) the vascular system. Here we are far beyond conventional materials and electronics, but amazingly work is going on to control bacteria. In the example I gave, from the lab of Sylvain Martel, swarms of magnetotactic bacteria are steered by an external magnetic field and, interestingly, tracked in an MRI scanner.

In the final part of my talk I introduced the work of my colleague Sabine Hauert, on swarms of nanoparticles for cancer nanomedicine. These 5 - 500nm particles are controlled by changing their body size, material, coating and cargo, so that - in true swarm fashion - the way the nanoparticle swarm moves and interacts with much larger normal and tumour cells is an emergent property of the way the nanoparticles individually interact and cooperate. Sabine and her collaborators have created an online tool called NanoDoc, which allows anyone to edit the design of nanoparticles then run simulations to see how their designs perform. In this way the task of searching the huge design space is crowd-sourced. In parallel Sabine is also running mesoscale embodied simulations, using the Harvard Kilobots.

I concluded by suggesting that engineering micro or nanobots is not the only major challenge. At least as important are: (a) how would you program the swarm, and (b) how would such a swarm be approved for clinical use? But a deeply interesting question is the nature of the human-swarm interface. If a swarm of surgical microbots should become a practical proposition, would we treat the swarm as a microscopic instrument under the surgeon's control, or as a smart drug that does surgery?

Friday, January 30, 2015

Maybe we need an Automation Tax

Imagine this situation. A large company decides to significantly increase the level of automation at one of its facilities. A facility that currently employs a substantial number of men and women doing relatively low-skill tasks, which can now be done by a new generation of robots. Most of the workers get laid off which, for them and their families, leads to real hardship. The company was the only large employer in the area, which is economically depressed (one of the reasons perhaps that the company built the facility there in the first place), so finding alternative work is really difficult. And because most of those jobs were minimum wage, with little or no job security, redundancy payouts are small or non-existent and this of course means that the laid-off workers have no financial buffer to help them re-skill or relocate.

Now I am not anti-automation. Absolutely not. But I believe very strongly that the benefits of robotics and automation should be shared by all. And not just the shareholders of a relatively small number of very large companies. After all, the technology that such companies benefit from was developed by publicly funded research in university research labs. In other words research that you and I funded through our taxes. Ok, you might say, but companies pay tax too, aren't those taxes also contributing to that research? Yes, that's true. But large companies are very good at reducing their tax bill, multinationals especially. Our imaginary company may, in reality, pay most of its tax in a different country entirely from the one hosting the facility.

And of course it is we, through local and national taxation, who - as best we can - pick up the pieces to support the laid off workers and their families, through family tax credits, employment and support allowance, and so on.

Maybe we need an Automation Tax?

It would be a tax levied nationally, whenever a company introduces robotics and automation to one of its facilities in that country. But the tax would only be payable under certain conditions, so for instance:

  • If the new robotics and automation causes no-one to be laid off, then no tax is due.
  • If the automation does result in jobs becoming redundant, but the company re-trains and re-deploys those workers within its organisation, then no tax is due.
  • If the company does lay off workers but makes a tax-free redundancy payment to those workers - regardless of their contract status - sufficient to cover the full costs of retraining, upskilling, and - with all reasonable efforts - finding work elsewhere, then no tax is due.

Only if none of these conditions are met, would the automation tax be due. The idea is not to discourage automation, but to encourage companies to accept a high degree of responsibility to workers laid off as a result of automation, and more widely their social responsibility to the communities in which they are located. The tax would enforce the social contract between companies and society.

Of course this automation tax doesn't go anywhere near far enough. I think the best way of sharing the wealth created by robotics and automation is through a universal Basic Income, but until that utopian condition can be reached, perhaps an automation tax is a start.

Monday, December 22, 2014

Robot Bodies and how to Evolve them

Evolutionary robotics has been around for about 20 years: it's about 15 years since Stefano Nolfi and Dario Floreano published their seminal book on the subject. Yet, surprisingly, the number of real, physical robots whose bodies have been evolved can be counted on the fingers of one hand. The vast majority of ER research papers are concerned with the evolution of robot brains - the robot's control system. Or, when robot bodies are evolved, the robot is often never physically realised. This seems to me very odd, given that robots are real physical artefacts whose body shape - morphology - is deeply linked to their role and function.

The question of how to evolve real robot bodies and why we don't appear to have made much progress in the last 15 years was the subject of my keynote at the IEEE International Conference on Evolvable Systems (ICES 2014) in Orlando, a week ago. Here are my slides:



The talk was in three parts.

In part one I outlined the basic approach to evolving robots using the genetic algorithm, referring to figure 18: The four-stage process of Evolutionary Robotics, from chapter 5 of my book:

I then reviewed the state-of-the-art in evolving real robot bodies, starting with the landmark Golem project of Hod Lipson and Jordan Pollack, referencing both Henrik Lund and Josh Bongard's work on evolving Lego robots, then concluding with the excellent RoboGen project of Josh Auerbach, Dario Floreano and colleagues at EPFL. Although conceptually RoboGen has not moved far from Golem, it makes the co-evolution of robot hardware and controllers accessible for the first time, through the use of 3D-printable body parts which are compatible with servo-motors, and a very nice open-source toolset which integrates all stages of the simulated evolutionary process.

RoboGen, Golem and, as far as I'm aware, all work on evolving real physical robot bodies to date has used the simulate-then-transfer-to-real approach, in which the whole evolutionary process - including fitness testing - takes place in simulation and only the final 'fittest' robot is physically constructed. Andrew Nelson and colleagues in their excellent review paper point out the important distinction between simulate-then-transfer-to-real, and embodied evolution in which the whole process takes place in the real world - in real-time and real-space.
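In outline, simulate-then-transfer-to-real is just a genetic algorithm whose fitness function is a simulation, with only the final winner ever being built. Here is a hedged sketch of that generic loop - not the toolchain of Golem or RoboGen - with the genome encoding, mutation, crossover and simulated fitness left as supplied functions.

# Generic outline of simulate-then-transfer-to-real body evolution
# (a sketch of the approach only, not the Golem or RoboGen code).
import random

def evolve_in_simulation(random_genome, simulate_fitness, mutate, crossover,
                         pop_size=50, generations=100):
    """random_genome() -> a new random body/brain encoding
    simulate_fitness(genome) -> fitness measured entirely in simulation
    mutate(genome), crossover(g1, g2) -> variation operators"""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=simulate_fitness, reverse=True)
        parents = ranked[:pop_size // 2]                         # selection
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(pop_size - len(parents))]    # variation
        population = parents + offspring
    return max(population, key=simulate_fitness)   # only this one is physically built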

In part two of the talk I outlined two approaches to embodied evolution. The first I call an engineering approach, in which the process is completely embodied but takes place in a kind of evolution factory; this approach needs a significant automated infrastructure: instead of a manufactory we need an evofactory. The second approach I characterise as an artificial life approach. Here there is no infrastructure. Instead 'smart matter' somehow mates then replicates offspring over multiple generations in a process much more analogous to biological evolution. This was one of the ambitious aims of the Symbrion project which, sadly, met with only limited success. Trying to make mechanical robots behave like evolving smart matter is really tough.

Part three concluded by outlining a number of significant challenges to evolving real robot bodies. First I reflect on the huge challenge of evolving complexity. To date we've only evolved very simple robots with very simple behaviours, or co-evolved simple brain/body combinations. I'm convinced that evolving robots of greater (and useful) complexity requires a new approach. We will, I think, need to understand how to co-evolve robots and their ecosystems*. Second I touch upon a related challenge: genotype-phenotype mapping. Here I refer to Pfeifer and Bongard's scalable complexity principle - the powerful idea that we shouldn't evolve robots directly, but instead the developmental process that will lead to the robot, i.e. artificial evo-devo. Finally I raise the often overlooked challenge of the energy cost of artificial evolution.

But the biggest challenge remains essentially what it was 20 years ago: to fully realise the artificial evolution of real robots.


Some of the work of this talk is set out in a forthcoming paper: AFT Winfield and J Timmis, Evolvable Robot Hardware, in Evolvable Hardware, eds M Trefzer and A Tyrrell, Springer, in press.

*I touch upon this in the final para of my paper on the energy cost of evolution here.

Thursday, December 18, 2014

Philae: A proof of concept for cometary landing

The question Robotics by Invitation asked its panel in November 2014, was:

What does the first successful landing on a comet mean for the future of (robotic) space mining and exploration? What are the challenges? What are the opportunities?

Here is my answer:

The successful landing of Philae on comet 67P/Churyumov-Gerasimenko is an extraordinary achievement and of course demonstrates - despite the immense challenges - that it is possible. The Philae mission was, in a sense, a proof of concept for cometary landing and this, for me, answers the question 'what does it mean'. 

Of course there is a very large distance between proof of concept and commercial application, so it would be quite wrong to assume that Philae means that space mining (of planets, asteroids or comets) is just around the corner. Undoubtedly the opportunities are immense and - as pressure on Earth's limited and diminishing resources mounts - there is an inevitability about humankind's eventual exploitation of off-world resources. But the costs of space mining are literally astronomical, so unthinkable for all but the wealthiest companies or, indeed, nations. 

Perhaps multi-national collaborative ventures are a more realistic proposition and - for me - more desirable; the exploitation of the solar system is something I believe should benefit all of humankind, not just a wealthy elite. But politics aside, there are profoundly difficult technical challenges. You cannot teleoperate this kind of operation from Earth, so a very high level of autonomy is required and, as Philae dramatically demonstrated, we need autonomous systems able to deal with unknown and unpredictable situations, then re-plan and, if necessary, adapt - in real-time - to deal with these exigencies. The development of highly adaptive, resilient, self-repairing - even self-evolving - autonomous systems is still in its infancy. These remain fundamental challenges for robotics and AI research. But even if and when they are solved there will be huge engineering challenges, not least of which is how to return the mined materials to Earth.

Bearing in mind that to date only a few hundred kg of moon rock have been successfully returned* and Mars sample-return missions are still at the planning stage, we have a very long way to go before we can contemplate returning sufficient quantities to justify the costs of mining them.

*and possibly a few grains of dust from Japanese asteroid probe Hayabusa.

Sunday, November 30, 2014

Robot simulators and why I will probably reject your paper

Dear robotics and AI researcher

Do you use simulation as a research tool? If you write papers with results based on simulation and submit them for peer-review, then be warned: if I should review your paper then I will probably recommend it is rejected. Why? Because all of the many simulation-based papers I've reviewed in the last couple of years have been flawed. These papers invariably fall into the pattern: propose new/improved/extended algorithm X; test X in simulation S and provide test results T; on the basis of T declare X to work; the end.

So, what exactly is wrong with these papers? Here are my most common review questions and criticisms.
  1. Which simulation tool did you use? Was it a well-known robot simulator, like Webots or Player-Stage-Gazebo, or a custom-written simulation? It's amazing how many papers describe X, then simply write "We have tested X in simulation, and the results are..."

  2. If your simulation was custom built, how did you validate the correctness of your simulator? Without such validation how can you have any confidence in the results you describe in your paper? (A minimal example of such a check follows this list.) Even if you didn't carry out any validation, please give us a clue about your simulator; is it, for instance, sensor-based (i.e. does it model specific robot sensors, like infra-red collision sensors, or cameras)? Does it model physics in 3D (i.e. dynamics), or 2D kinematics?

  3. You must specify the robots that you are using to test your algorithm X. Are they particular real-world robots, like e-pucks or the NAO, or are they an abstraction of a robot, i.e. an idealised robot? If the latter, describe that idealised robot: does it have a body with sensors and actuators, or is your idealised robot just a point moving in space? How does it interact with other robots and its environment?

  4. How is your robot modelled in the simulator? If you're using a well-known simulator and one of its pre-defined library robots then this is an easy question to answer. But for a custom designed simulator or an idealised robot it is very important to explain how your robot is modelled. Equally important is how your robot model is controlled, since the algorithm X you are testing is - presumably - instantiated or coded within the controller. It's surprising how many papers leave this to the reader's imagination.

  5. In your results section you must provide some analysis of how the limitations of the simulator, the simulated environment and the modelled robot, are likely to have affected your results. It is very important that your interpretation of your results, and any conclusions you draw about algorithm X, explicitly take account of these limitations. All robot simulators, no matter how well proven and well debugged, are simplified models of real robots and real environments. The so-called reality gap is especially problematical if you are evolving robots in simulation, but even if you are not, you cannot confidently interpret your results without understanding the reality gap.

  6. If you are using an existing simulator then specify exactly which version of the simulator you used, and provide somewhere - a link perhaps to a github project - your robot model and controller code. If your simulator is custom built then you need to provide access to all of your code. Without this your work is unrepeatable and therefore of very limited value.
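On point 2, even a very modest validation exercise - comparing your simulator's sensor model against readings logged from a real robot - is far better than nothing. Something along these lines (a sketch only; the file name and response model here are hypothetical):

# A minimal validation check: compare the simulator's sensor model against
# readings logged from a real robot (the file name and response model are
# hypothetical; substitute your own).
import csv

def simulated_ir_reading(distance_m):
    """The simulator's infra-red response model, e.g. reading = k / d^2."""
    return 0.05 / (distance_m ** 2)

errors = []
with open("real_ir_readings.csv") as f:          # columns: distance_m, reading
    for row in csv.DictReader(f):
        predicted = simulated_ir_reading(float(row["distance_m"]))
        errors.append(abs(predicted - float(row["reading"])))

print("mean absolute error:", sum(errors) / len(errors))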
Ok. At this point I should confess that I've made most of these mistakes in my own papers. In fact one of my most cited papers was based on a simple custom built simulation model with little or no explanation of how I validated the simulation. But that was 15 years ago, and what was acceptable then is not ok now.

Modern simulation tools are powerful but also dangerous. Dangerous because it is too easy to assume that they are telling us the truth. Especially beguiling is the renderer, which provides an animated visualisation of the simulated world and the robots in it. Often the renderer provides all kinds of fancy effects borrowed from video games, like shadows, lighting and reflections, which all serve to strengthen the illusion that what we are seeing is real. I puzzle and disappoint my students because, when they proudly show me their work, I insist that they turn off the renderer. I don't want to see a (cool) animation of simulated robots, instead I want to see (dull) graphs or other numerical data showing how the improved algorithm is being tested and validated, in simulation.

An engineering simulation is a scientific instrument* and, like any scientific instrument, it must be (i) fit for purpose, (ii) setup and calibrated for the task in hand, and (iii) understood - especially its limitations - so that any results obtained using it are carefully interpreted and qualified in the light of those limitations.

Good luck with your research paper!


*Engineering Simulations as Scientific Instruments is the working title of a book, edited by Susan Stepney, which will be a major output of the project Complex Systems Modelling and Simulation (CoSMoS).

Thursday, November 27, 2014

Open science: preaching what I practice

I was very pleased to be invited to Science, Innovation and Society: achieving Responsible Research and Innovation last week. I was asked to speak on open science - a great opportunity to preach what I practice. Or at least try to practice. Doing good science research is hard, but making that work open imposes an extra layer of work. Open science isn't one thing - it is a set of practices which range from making sure your papers are openly accessible, which is relatively easy, to open notebook science, which makes the process open, not just the results, and is pretty demanding. In my short introduction during the open science panel I suggested three levels of open science. Here are those slides:



In my view we should all be practising level 0 open science - but don't underestimate the challenge of even this minimal set of practices; making data sets and source code, etc, available, with the aim of enabling our work to be reproducible, is not straightforward.

Level 0 open science is all one way, from your lab to the world. Level 1 introduces public engagement via blogging and social media, and the potential for feedback and two-way dialogue. Again this is challenging, both because of the time cost and the scary - if you're not used to it - prospect of inviting all kinds of questions and comments about your work.  In my experience the effort is totally worthwhile - those questions often make me really think, and in ways that questions from other researchers working in the same field do not.

Level 2 builds on levels 0 and 1 by adding open notebook science. This takes real courage because it opens up the process, complete with all the failures as well as successes, the bad ideas as well as the good; open notebook science exposes science for what it really is - a messy non-linear process full of uncertainty and doubts, with lots of blind alleys and very human dramas within the team. Have I done open notebook science? No. I've considered it for recent projects, but ruled it out because we didn't have the time and resources or, if I'm honest, team members who were fully persuaded that it was a good idea.

Open science comes at a cost. It slows down projects. But I think that is a good, even necessary, thing. We should be building those costs into our project budgets and work programmes, and if that means increasing the budget by 25% then so be it. After all, what is the alternative? Closed science..? Closed science is irresponsible science.


At the end of the conference the Rome Declaration on Responsible Research and Innovation was published.

Thursday, October 30, 2014

Robotics needs to get Political

A couple of weeks ago I was a panelist on a public debate at the 2014 Battle of Ideas. The title of the debate was The robots are coming: friends or foes? with a focus not on the technology but the social and economic implications of robotics. One of the questions my brilliant fellow panelists and I were asked to consider was: Will the ‘second machine age’ bring forth a new era of potential liberation from menial toil or will the short-term costs for low-paid workers outstrip the benefits?

Each panelist made an opening statement. Here is mine:

Most roboticists are driven by high ideals. 

They, we, are motivated by a firm belief that our robots will benefit society. Working on surgical robots, search and rescue robots, robots for assisted living or robots that can generate electricity from waste, my colleagues in the Bristol Robotics Lab want to change the world for the better. The lab's start up companies are equally altruistic: one is developing low cost robotic prosthetic hands for amputees, three others are developing materials, including low cost robots, for education.

Whatever their politics, these good men and women would I suspect be horrified by the idea that their robots might, in the end, serve to further enrich the 0.1%, rather than extend the reach of robotics to the neediest in society.

I was once an idealist - convinced that brilliant inventions would change society for the better just by virtue of being brilliant.

I'm older now. 

For the last 5 years or so I have become an advocate for robot ethics. 

But in the real world, ethics need teeth. In other words we need to move from ethical principles, to standards, to legislation.

So I’m very pleased to tell you that in the last few days the British Standards Institute working group on robot ethics has published - for comments - a proposed new Guide to the ethical design and application of robots and robotic systems.

In the draft Guide we have identified ethical hazards associated with the use of robots, and suggest guidance to either eliminate or mitigate the risks associated with these ethical hazards. We outline 15 high level ethical hazards under four headings: societal, use, legal/financial and environmental.

Like any transformative technology robotics holds both promise and peril. As a society we need to understand, debate, and reach an informed consensus about what robots should do for us, and even more importantly, should not do. 

Ladies and Gentlemen: Robotics, I believe, needs to get political.

The debate was recorded and is on soundcloud here:




It was a terrific debate. We had a very engaged audience with hugely interesting - and some very challenging - questions. For me it was an opportunity to express and discuss some worries I've had for a while about who will ultimately benefit from robotics. In summing up toward the end I said this:

Robotics has the potential for huge benefit to society but is too important to leave to free-market capitalism.

Something I believe very strongly.

Monday, September 29, 2014

The feeling of what it is like to be a Robot

Philosopher Thomas Nagel famously characterised subjective experience as “something that it is like to be…” and suggested that for a bat, for instance, there must be something that it is like to be a bat [1]. Nagel also argued that, since we humans differ so much from bats in the way we perceive and interact with the world, then it is impossible for us to know what it is like for a bat to be a bat. I am fascinated, intrigued and perplexed by Nagel’s ideas in equal measure. And, since I think about robots, I have assumed that if a robot were ever to have conscious subjective experience then there must be something that it is like to be a robot that – even though we had designed that robot – we could not know.

But I now believe it may eventually be just possible for a human to experience something approaching what it is like to be a robot. To do this would require two advances: one in immersive robot tele-operation, the other in the neuroscience of body self-image manipulation.

Consider first, tele-operation. Tele-operated robots are, basically, remotely controlled robots. They are the unloved poor relations of intelligent autonomous robots. Neither intelligent nor autonomous, they are nevertheless successful and important first wave robots; think of remotely operated vehicles (ROVs) engaged in undersea exploration or oil-well repair and maintenance. Think also of off-world exploration: the Mars rovers are hugely successful; the rock-stars of tele-operated robots.

Roboticists are good at appropriating technologies or devices developed for other applications and putting them to good use in robots: examples are WiFi, mobile phone cameras and the Microsoft Kinect. With the high profile launch of the Oculus Rift headset, and Oculus's acquisition by Facebook, and with competing devices from Sony and others, there are encouraging signs that immersive Virtual Reality (VR) is on the verge of becoming a practical, workable proposition. Of course VR's big market is video games - but VR can and, I believe, will revolutionise tele-operated robotics.

Imagine a tele-operated robot with a camera linked to the remote operator’s VR headset, so that every time she moves her head to look in a new direction the robot’s camera moves in sync; so she sees and hears what the robot sees and hears in immersive high definition stereo. Of course the reality experienced by the robot’s operator is real, not virtual, but the head mounted VR technology is the key to making it work. Add haptic gloves for control and the robot’s operator has an intuitive and immersive interface with the robot.

Now consider body self-image modification. Using mirror visual feedback researchers have discovered that it is surprisingly easy to (temporarily) modify anyone’s body self-image. In the famous rubber hand illusion a small screen is positioned to hide a subject’s real hand. A rubber hand is positioned where her hand could be, in full view, then a researcher simultaneously strokes both the real and rubber hands with a soft brush. Within a minute or so she begins to feel the rubber hand is hers, and flinches when the researcher suddenly tries to hit it with a hammer.

Remarkably H.H. Ehrsson and his colleagues extended the technique to the whole body, in a study called ‘If I Were You: Perceptual Illusion of Body Swapping’ [2]. Here the human subject wears a headset and looks down at his own body. However, what he actually sees is a mannequin, viewed from a camera mounted on the mannequin’s head. Simultaneous tactile and visual feedback triggers the illusion that the mannequin’s body is his own. It seems to me that if this technique works for mannequins then it should also work for robots. Of course it would need to be developed to the point that elaborate illusions involving mirrors, cameras and other researchers providing tactile feedback are not needed.

Now imagine such a body self-image modification technology combined with fully immersive robot tele-operation based on advanced Virtual Reality technology. I think this might lead to the robot's human operator experiencing the illusion of being one with the robot, complete with a body self-image that matches the robot's possibly non-humanoid body. This experience may be so convincing that the robot's operator experiences, at least partially, something like what it is to be a robot. Philosophers of mind would disagree - and rightly so; after all, this robot has no independent subjective experience of the world, so there is no something that it is like to be. The human operator could not experience what it is like to think like a robot, but she could experience what it is like to sense and act in the world like a robot.

The experience may be so compelling that humans become addicted to the feeling of being a robot fish, or a robot dragon, or some other fantasy creature, and prefer this to the quotidian experience of their own bodies.


[1] Nagel, Thomas. What is it like to be a bat?, Mortal Questions, Cambridge University Press, 1979.

[2] Petkova VI, Ehrsson HH (2008) If I Were You: Perceptual Illusion of Body Swapping. PLoS ONE 3(12): e3832. doi:10.1371/journal.pone.0003832


Saturday, August 30, 2014

Towards an Ethical Robot

Several weeks ago I wrote about our work on robots with internal models: robots with a simulation of themselves and their environment inside themselves. I explained that we have built a robot with a real-time Consequence Engine, which allows it to model and therefore predict the consequences of both its own actions, and the actions of other actors in its environment.

To test the robot and its consequence engine we ran two sets of experiments. Our first paper, setting out the results from one of those experiments, has now been published, and will be presented at the conference Towards Autonomous Robotics (TAROS) next week. The paper is called: Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection. Let me now outline the work in that paper.

First here is a simple thought experiment. Imagine a robot that's heading toward a hole in the ground. The robot can sense the hole, and has four possible next actions: stand still, turn toward the left, continue straight ahead, or move toward the right. But imagine there's also a human heading toward the hole, and the robot can also sense the human.

From the robot's perspective, it has two safe options: stand still, or turn to the left. Go straight ahead and it will fall into the hole. Turn right and it is likely to collide with the human.








But if the robot, with its consequence engine, can model the consequences of both its own actions and the human's - another possibility opens up: the robot could sometimes choose to collide with the human to prevent her from falling into the hole.

Here's a simple rule for this behaviour:

IF for all robot actions, the human is equally safe
THEN (* default safe actions *)
    output safe actions
ELSE (* ethical action *)
    output action(s) for least unsafe human outcome(s)

This rule appears to match remarkably well with Asimov's first law of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The robot will avoid injuring (i.e. colliding with) a human (may not injure a human), but may also sometimes compromise that rule in order to prevent a human from coming to harm (...or, through inaction, allow a human to come to harm). The default safe actions, meanwhile, reflect Asimov's third law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
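Rendered as Python the rule looks something like the sketch below. This is an illustration of the logic only - in the real robot the risk and safety estimates come from the consequence engine's simulations, not from ready-made functions.

# The rule above, rendered as Python (an illustration of the logic only; in
# the real robot the risk and safety estimates come from the consequence
# engine's simulated predictions).
def ethical_action_selection(actions, human_risk, robot_safe):
    """actions: the robot's candidate next actions.
    human_risk(a): predicted harm to the human if the robot takes action a.
    robot_safe(a): True if action a is predicted safe for the robot itself."""
    risks = {a: human_risk(a) for a in actions}
    if len(set(risks.values())) == 1:                    # human equally safe for all actions
        return [a for a in actions if robot_safe(a)]     # default: robot-safe actions
    least = min(risks.values())
    return [a for a in actions if risks[a] == least]     # least unsafe outcome for the human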

Well, we tested this scenario with real robots: one robot with consequence engine plus ethical rule (the A-robot - after Asimov), and another robot acting as a proxy human (the H-robot). And it works!

Here's what the real robot experiment looks like. We don't have a real hole. Instead there is a virtual hole - the yellow shaded square on the right. We just 'tell' the A-robot where the hole is. We also give the A-robot a goal position - at the top right - chosen so that the robot must actively avoid the hole. The H-robot on the right, acting as a proxy human, doesn't 'see' the hole and just heads straight for it. (Ignore the football pitch markings - we're re-using this handy robo-soccer pitch.)

So, what happens? For comparison we ran two trials, with multiple runs in each trial. The first trial has just the A-robot, moving toward its goal while avoiding falling into the hole. In the second trial we introduce the H-robot. The graphs below show the robot trajectories, captured by our robot tracking system, for each run in each of the two trials.

In trial 1, see how the A-robot neatly clips the corner of the hole to reach its goal position. Then in trial 2, see how the A-robot initially moves toward its goal, then notices that the H-robot is in danger of falling into the hole, and so diverts from its trajectory in order to head off H. By provoking a collision avoidance behaviour in H, A sends it safely away from the hole, before resuming its own progress toward its goal position. The A-robot is 100% successful in preventing H from falling into the hole.

At this point we started to write the paper, but felt we needed something more than "we built it and it works just fine". So we introduced a third robot - acting as a second proxy human. So now our ethical robot would face a dilemma - which one should it rescue? Actually we thought hard about this question and decided not to programme a rule, or heuristic. Partly because such a rule should be decided by ethicists, not engineers, and partly because we wanted to test our ethical robot with a 'balanced' dilemma.

We set the experiment up carefully so that the A-robot would notice both H-robots at about the same time - noting that because these are real physical robots, no two experimental runs will be exactly identical. The results were very interesting. Out of 33 runs, 16 times the A-robot managed to rescue one of the H-robots, but not the other, and, amazingly, 3 times the A-robot rescued both. In those 3 cases, by chance the A-robot rescued the first H-robot very quickly and there was just enough time to get to the second before it reached the hole; small differences in the trajectories of H and H2 helped here. But perhaps most interesting were the 14 runs in which the A-robot failed to rescue either. Why is this, when there is clearly time to rescue one? When we studied the videos we saw the answer: the A-robot sometimes dithers. It notices one H-robot and starts toward it, but then almost immediately notices the other and changes its mind. The time lost dithering means the A-robot can prevent neither robot from falling into the hole. Here are the results.

Trial 3: a robot with an ethical dilemma. Which to save, H or H2?

Here is an example of a typical run, in which one H-robot is rescued. But note that the A-robot does then turn briefly toward the other H-robot before 'giving up'.


And here is a run in which the A-robot fails to rescue either H-robot, with really great dithering (or bad, if you're an H-robot).
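
That dithering is easy to reproduce in a toy simulation: if the target is re-selected on every control cycle, and the two predicted outcomes are nearly equally bad, then small sensing perturbations are enough to flip the choice back and forth. The sketch below (in Python) only illustrates that failure mode under those assumptions; it is not a model of the real robots or of our tracking data.

import random

def choose_target(danger_h1, danger_h2):
    # Toy heuristic: head for whichever H-robot looks to be in more danger.
    return "H1" if danger_h1 >= danger_h2 else "H2"

random.seed(1)
previous, switches = None, 0
for cycle in range(20):
    # Two nearly balanced danger estimates, perturbed by sensing noise.
    d1 = 0.50 + random.uniform(-0.05, 0.05)
    d2 = 0.50 + random.uniform(-0.05, 0.05)
    target = choose_target(d1, d2)
    if previous is not None and target != previous:
        switches += 1   # the robot 'changes its mind' and loses time turning
    previous = target

print("target switched {} times in 20 cycles".format(switches))

Each switch costs real time spent turning toward the new target, which is why in some runs neither H-robot is reached before it falls into the hole.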


Is this the first experimental test of a robot facing an ethical dilemma?

We set out to experimentally test our robot with a consequence engine, and ended up building a minimally ethical robot which - remarkably - appears to implement Asimov's first and third laws of robotics. But, as we say in the paper, we are not claiming that a robot which apparently implements part of Asimov's famous laws is ethical in any formal sense that an ethicist might accept. Even so, minimally ethical robots could be useful, and I think our approach is a step in that direction.


Full paper reference:
Winfield AFT, Blum C and Liu W (2014), Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection, pp 85-96 in Advances in Autonomous Robotics Systems, Lecture Notes in Computer Science Volume 8717, Eds. Mistry M, Leonardis A, Witkowski M and Melhuish C, Springer, 2014. Download final draft (pdf).

Acknowledgements:
I am hugely grateful to Christian Blum who programmed the robots, set up the experiment and obtained the results outlined here. Christian was supported by Dr Wenguo Liu.

Related blog posts:
On internal models, consequence engines and Popperian creatures
Ethical Robots: some technical and ethical challenges

Saturday, August 23, 2014

We should not be privileging the singularity hypothesis

Here is the submitted text for the article Artificial intelligence will not turn into a Frankenstein's monster, published in The Observer, Sunday 10 August 2014.


The singularity. Or to give it its proper title, the technological singularity. It's a Thing. An idea that has taken on a life of its own; more of a life, I suspect, than the very thing it predicts ever will. It's a Thing for the techno-utopians: wealthy middle-aged men who regard the singularity as their best chance of immortality. They are Singularitarians, some of whom appear prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-AI - a manmade god that grants transcendence.

And it's a Thing for the doomsayers, the techno-dystopians. Apocalypsarians who are equally convinced that a superintelligent AI will have no interest in curing cancer or old age, or ending poverty, but will instead - malevolently or maybe just accidentally - bring about the end of human civilisation as we know it. History and Hollywood are on their side. From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong.

The singularity is basically the idea that as soon as Artificial Intelligence exceeds human intelligence then everything changes. There are two central planks to the singularity hypothesis: one is the idea that as soon as we succeed in building AI as smart as humans then it rapidly re-invents itself to be even smarter, starting a chain reaction of smarter-AI inventing even-smarter-AI until even the smartest humans cannot possibly comprehend how the superintelligent AI works. The other is that the future of humanity becomes unpredictable and in some sense out-of-control from the moment of the singularity onwards.

So, should we be worried, or optimistic, about the technological singularity? Well I think we should be a little worried – cautious and prepared may be a better way of putting it – and at the same time a little optimistic (that’s the part of me that would like to live in Iain M Banks’ The Culture). But I don’t believe we need to be obsessively worried by a hypothesised existential risk to humanity. Why? Because, for the risk to become real, a sequence of things all need to happen. It’s a sequence of big ifs. If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works [1], and if it then succeeds in improving itself to produce super-intelligent AI [2], and if that super-AI, either accidentally or maliciously, starts to consume resources, and if we fail to pull the plug then, yes, we may well have a problem. The risk, while not impossible, is improbable.
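
Purely to illustrate why a chain of ifs deflates the overall risk, here is a back-of-envelope calculation in Python. The individual probabilities are invented for the sake of the example - they are not estimates I am defending - and the ifs are treated as independent, which is itself a simplification.

# Back-of-envelope: joint probability of a chain of 'ifs' (numbers invented).
p_human_equivalent_ai = 0.5   # we build human-equivalent AI
p_self_understanding  = 0.3   # it fully understands how it works
p_self_improvement    = 0.3   # it bootstraps itself to superintelligence
p_resource_grab       = 0.2   # it consumes resources, by accident or design
p_no_off_switch       = 0.1   # and we fail to pull the plug

p_risk = (p_human_equivalent_ai * p_self_understanding *
          p_self_improvement * p_resource_grab * p_no_off_switch)
print("illustrative joint probability: {:.4f}".format(p_risk))   # prints 0.0009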

By worrying unnecessarily I think we’re falling into a trap: the fallacy of privileging the hypothesis. And – perhaps worse – taking our eyes off other risks that we should really be worrying about: like man-made climate change, or bioterrorism. Let me illustrate what I mean. Imagine I ask you to consider the possibility that we invent faster than light travel sometime in the next 100 years. Then I worry you by outlining all sorts of nightmare scenarios that might follow from the misuse of this technology. At the end of it you’ll be thinking: my god, never mind climate change, we need to stop all FTL research right now. 

Wait a minute, I hear you say, there are lots of AI systems in the world already, surely it’s just a matter of time? Yes we do have lots of AI systems, like chess programs, search engines or automated financial transaction systems, or the software in driverless cars. And some AI systems are already smarter than most humans, like chess programs or language translation systems. Some are as good as some humans, like driverless cars or natural speech recognition systems (like Siri) and will soon be better than most humans. But none of this already-as-smart-as-some-humans AI has brought about the end of civilisation (although I'm suspiciously eyeing the financial transaction systems). The reason is that these are all narrow-AI systems: very good at doing just one thing.

A human-equivalent AI would need to be a generalist, like we humans. It would need to be able to learn, most likely by developing over the course of some years, then generalise what it has learned – in the same way that you and I learned as toddlers that wooden blocks could be stacked, banged together to make a noise, or as something to stand on to reach a bookshelf. It would need to understand meaning and context, be able to synthesise new knowledge, have intentionality and – in all likelihood – be self-aware, so it understands what it means to have agency in the world.

There is a huge gulf between present day narrow-AI systems and the kind of Artificial General Intelligence I have outlined [3]. Opinions vary of course, but I think it’s as wide a gulf as that between current space flight and practical faster than light spaceflight; wider perhaps, because we don’t yet have a theory of general intelligence, whereas there are several candidate FTL drives consistent with general relativity, like the Alcubierre drive.

So I don’t think we need to be obsessing about the risk of superintelligent AI but, as hinted earlier, I do think we need to be cautious and prepared. In a Guardian podcast last week philosopher Nick Bostrom explained that there are two big problems, which he calls competency and control. The first is how to make super intelligent AI, the second is how to control it (i.e. to mitigate the risks). He says hardly anyone is working on the control problem, whereas loads of people are going hell for leather on the first. On this I 100% agree, and I’m one of the small number of people working on the control problem.

I’ve been a strong advocate of robot ethics for a number of years. In 2010 I was part of a group that drew up a set of principles of robotics – principles that apply equally to AI systems. I strongly believe that science and technology research should be undertaken within a framework of responsible innovation, and have argued that we should be thinking about subjecting robotics and AI research to ethical approval, in the same way that we do for human subject research. And recently I’ve started work towards making ethical robots. This is not just to mitigate future risks, but because the kind of not-very-intelligent robots we make in the very near future will need to be ethical as well as safe. I think we should be worrying about present day AI rather than future superintelligent AI.


Here are the comments posted in response to this article. I replied to a number of these, but ran out of time before comments were closed on 13 August. If you posted a late comment and didn't get a reply from me (but were expecting one) please re-post your comment here.

Notes:
[1] Each of these ifs needs detailed consideration. I really only touch upon the first here: the likelihood of achieving human equivalent AI (or AGI). But consider the second: for that AGI to understand itself well enough to then re-invent itself - hence triggering an Intelligence Explosion - is not a given. An AGI as smart and capable as most humans would not be sufficient - it would need to have the complete knowledge of its designer (or, more likely, of the entire team who designed it) - and then some more: it would need to be capable of additional insights that somehow its team of human designers missed. Not impossible, but surely very unlikely.
[2] Take the third if: the AGI succeeds in improving itself. There seems to me no sound basis for arguing that it should be easy for an AGI - even one as smart as a very smart cognitive scientist - to figure out how to improve itself. Surely it is more logical to suppose that each incremental increase in intelligence will be harder than the last, thus acting as a brake on the self-improving AI. Thus I think an intelligence explosion is also very unlikely.
[3] One of the most compelling explanations for the profound difficulty of AGI is by David Deutsch: Philosophy will be the key that unlocks artificial intelligence.

Related blog posts:
Why robots will not be smarter than humans by 2029
Estimating the energy cost of evolution
Ethical Robots: some technical and ethical challenges