Monday, December 22, 2014

Robot Bodies and how to Evolve them

Evolutionary robotics has been around for about 20 years: it's about 15 years since Stefano Nolfi and Dario Floreano published their seminal book on the subject. Yet, surprisingly, the number of real, physical robots whose bodies have been evolved can be counted on the fingers of one hand. The vast majority of ER research papers are concerned with the evolution of robot brains - the robot's control system. Or, when robot bodies are evolved, the evolved designs are often never physically realised. This seems to me very odd, given that robots are real physical artefacts whose body shape - morphology - is deeply linked to their role and function.

The question of how to evolve real robot bodies and why we don't appear to have made much progress in the last 15 years was the subject of my keynote at the IEEE International Conference on Evolvable Systems (ICES 2014) in Orlando, a week ago. Here are my slides:



The talk was in three parts.

In part one I outlined the basic approach to evolving robots using the genetic algorithm, referring to figure 18: The four-stage process of Evolutionary Robotics, from chapter 5 of my book:
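The heart of that four-stage process is the familiar evaluate-select-vary loop of a genetic algorithm. Here is a minimal, purely illustrative Python sketch of that loop; the genome is just a vector of morphology parameters and simulate_and_score() is a placeholder for whatever simulator and fitness function you plug in - not the method from the book.

import random

POP_SIZE, GENERATIONS, GENOME_LEN = 20, 50, 8
MUTATION_RATE = 0.1

def random_genome():
    # a genome here is just a vector of morphology parameters in [0, 1]
    return [random.random() for _ in range(GENOME_LEN)]

def simulate_and_score(genome):
    # placeholder fitness function: in practice this step builds the robot
    # (in simulation or in hardware), runs it on a task and scores it
    return sum(genome)

def mutate(genome):
    return [g + random.gauss(0, 0.05) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=simulate_and_score, reverse=True)
    parents = ranked[:POP_SIZE // 2]                      # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]  # variation
    population = parents + children                       # next generation

best = max(population, key=simulate_and_score)
print(simulate_and_score(best))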

I then reviewed the state-of-the-art in evolving real robot bodies, starting with the landmark Golem project of Hod Lipson and Jordan Pollack, referencing Henrik Lund's and Josh Bongard's work on evolving Lego robots, then concluding with the excellent RoboGen project of Josh Auerbach, Dario Floreano and colleagues at EPFL. Although conceptually RoboGen has not moved far from Golem, it makes the co-evolution of robot hardware and controllers accessible for the first time, through the use of 3D-printable body parts which are compatible with servo-motors, and a very nice open-source toolset which integrates all stages of the simulated evolutionary process.

RoboGen, Golem and, as far as I'm aware, all work on evolving real physical robot bodies to date has used the simulate-then-transfer-to-real approach, in which the whole evolutionary process - including fitness testing - takes place in simulation and only the final 'fittest' robot is physically constructed. Andrew Nelson and colleagues in their excellent review paper point out the important distinction between simulate-then-transfer-to-real, and embodied evolution in which the whole process takes place in the real world - in real-time and real-space.

In part two of the talk I outlined two approaches to embodied evolution. The first I call an engineering approach, in which the process is completely embodied but takes place in a kind of evolution factory; this approach needs a significant automated infrastructure: instead of a manufactory we need an evofactory. The second approach I characterise as an artificial life approach. Here there is no infrastructure. Instead 'smart matter' somehow mates then replicates offspring over multiple generations in a process much more analogous to biological evolution. This was one of the ambitious aims of the Symbrion project which, sadly, met with only limited success. Trying to make mechanical robots behave like evolving smart matter is really tough.

Part three concluded by outlining a number of significant challenges to evolving real robot bodies. First I reflected on the huge challenge of evolving complexity. To date we've only evolved very simple robots with very simple behaviours, or co-evolved simple brain/body combinations. I'm convinced that evolving robots of greater (and useful) complexity requires a new approach. We will, I think, need to understand how to co-evolve robots and their ecosystems*. Second I touched upon a related challenge: genotype-phenotype mapping. Here I referred to Pfeifer and Bongard's scalable complexity principle - the powerful idea that we shouldn't evolve robots directly, but instead the developmental process that will lead to the robot, i.e. artificial evo-devo. Finally I raised the often overlooked challenge of the energy cost of artificial evolution.
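To make the genotype-phenotype mapping point concrete, here is a toy illustration in Python. Everything in it is made up for the example: a direct encoding lists every body part in the genotype, while a developmental (generative) encoding stores only a compact growth rule and 'grows' the body plan from it, so a small genotype can yield a large, regular phenotype.

# Direct encoding: the genotype lists every body part explicitly,
# so phenotype complexity is bounded by genotype length.
direct_genotype = [("segment", 0.10), ("segment", 0.12), ("joint", 45),
                   ("segment", 0.10), ("joint", -30), ("segment", 0.08)]

def direct_mapping(genotype):
    return list(genotype)      # nothing to 'grow': phenotype == genotype

# Developmental encoding: a short growth rule applied recursively.
devo_genotype = {"axiom": "S", "rule": {"S": "S J S"}, "depth": 3,
                 "segment_len": 0.10, "joint_angle": 30}

def developmental_mapping(g):
    plan = g["axiom"]
    for _ in range(g["depth"]):
        plan = " ".join(g["rule"].get(symbol, symbol) for symbol in plan.split())
    return [("segment", g["segment_len"]) if symbol == "S"
            else ("joint", g["joint_angle"]) for symbol in plan.split()]

print(len(direct_mapping(direct_genotype)))        # 6 parts from 6 genes
print(len(developmental_mapping(devo_genotype)))   # 15 parts from one tiny rule

A single mutation to the growth rule or depth changes the whole body plan, which is exactly why this kind of indirect encoding scales so much better than evolving every part directly.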

But the biggest challenge remains essentially what it was 20 years ago: to fully realise the artificial evolution of real robots.


Some of the work of this talk is set out in a forthcoming paper: AFT Winfield and J Timmis, Evolvable Robot Hardware, in Evolvable Hardware, eds M Trefzer and A Tyrrell, Springer, in press.

*I touch upon this in the final para of my paper on the energy cost of evolution here.

Thursday, December 18, 2014

Philae: A proof of concept for cometary landing

The question Robotics by Invitation asked its panel in November 2014 was:

What does the first successful landing on a comet mean for the future of (robotic) space mining and exploration? What are the challenges? What are the opportunities?

Here is my answer:

The successful landing of Philae on comet 67P/Churyumov-Gerasimenko is an extraordinary achievement and of course demonstrates - despite the immense challenges - that it is possible. The Philae mission was, in a sense, a proof of concept for cometary landing and this, for me, answers the question 'what does it mean'. 

Of course there is a very large distance between proof of concept and commercial application, so it would be quite wrong to assume that Philae means that space mining (of planets, asteroids or comets) is just around the corner. Undoubtedly the opportunities are immense and - as pressure on Earth's limited and diminishing resources mounts - there is an inevitability about humankind's eventual exploitation of off-world resources. But the costs of space mining are literally astronomical, so unthinkable for all but the wealthiest companies or, indeed, nations. 

Perhaps multi-national collaborative ventures are a more realistic proposition and - for me - more desirable; the exploitation of the solar system is something I believe should benefit all of humankind, not just a wealthy elite. But politics aside, there are profoundly difficult technical challenges. You cannot teleoperate this kind of operation from Earth, so a very high level of autonomy is required and, as Philae dramatically demonstrated, we need autonomous systems able to deal with unknown and unpredictable situations, then re-plan and, if necessary, adapt - in real-time - to deal with these exigencies. The development of highly adaptive, resilient, self-repairing - even self-evolving - autonomous systems is still in its infancy. These remain fundamental challenges for robotics and AI research. But even if and when they are solved there will be huge engineering challenges, not least of which is how to return the mined materials to Earth.

Bearing in mind that to date only a few hundred kg of moon rock have been successfully returned* and Mars sample-return missions are still at the planning stage, we have a very long way to go before we can contemplate returning sufficient quantities to justify the costs of mining them.

*and possibly a few grains of dust from Japanese asteroid probe Hayabusa.

Sunday, November 30, 2014

Robot simulators and why I will probably reject your paper

Dear robotics and AI researcher

Do you use simulation as a research tool? If you write papers with results based on simulation and submit them for peer-review, then be warned: if I should review your paper then I will probably recommend it is rejected. Why? Because all of the many simulation-based papers I've reviewed in the last couple of years have been flawed. These papers invariably fall into the pattern: propose new/improved/extended algorithm X; test X in simulation S and provide test results T; on the basis of T declare X to work; the end.

So, what exactly is wrong with these papers? Here are my most common review questions and criticisms.
  1. Which simulation tool did you use? Was it a well-known robot simulator, like Webots or Player-Stage-Gazebo, or a custom-written simulation? It's amazing how many papers describe X, then simply write "We have tested X in simulation, and the results are..."

  2. If your simulation was custom built, how did you validate the correctness of your simulator? Without such validation how can you have any confidence in the results you describe in your paper? (A minimal example of this kind of check is sketched after this list.) Even if you didn't carry out any validation, please give us a clue about your simulator; is it for instance sensor-based (i.e. models specific robot sensors, like infra-red collision sensors, or cameras)? Does it model physics in 3D (i.e. dynamics), or 2D kinematics?

  3. You must specify the robots that you are using to test your algorithm X. Are they particular real-world robots, like e-pucks or the NAO, or are they an abstraction of a robot, i.e. an idealised robot? If the latter describe that idealised robot: does it have a body with sensors and actuators, or is your idealised robot just a point moving in space? How does it interact with other robots and its environment?

  4. How is your robot modelled in the simulator? If you're using a well-known simulator and one of its pre-defined library robots then this is an easy question to answer. But for a custom designed simulator or an idealised robot it is very important to explain how your robot is modelled. Equally important is how your robot model is controlled, since the algorithm X you are testing is - presumably - instantiated or coded within the controller. It's surprising how many papers leave this to the reader's imagination.

  5. In your results section you must provide some analysis of how the limitations of the simulator, the simulated environment and the modelled robot, are likely to have affected your results. It is very important that your interpretation of your results, and any conclusions you draw about algorithm X, explicitly take account of these limitations. All robot simulators, no matter how well proven and well debugged, are simplified models of real robots and real environments. The so-called reality gap is especially problematical if you are evolving robots in simulation, but even if you are not, you cannot confidently interpret your results without understanding the reality gap.

  6. If you are using an existing simulator then specify exactly which version of the simulator you used, and provide somewhere - a link perhaps to a github project - your robot model and controller code. If your simulator is custom built then you need to provide access to all of your code. Without this your work is unrepeatable and therefore of very limited value.
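As promised in question 2, here is a minimal sketch of one simple validation check: run the same open-loop controller on the real robot and in the simulator, log both trajectories, and quantify the discrepancy. It is illustrative only - a single number like this is nowhere near a full validation - and it assumes you have already resampled the two logs onto a common timebase.

import math

def trajectory_error(real, simulated):
    # root-mean-square position error between two equal-length trajectories,
    # each a list of (x, y) positions sampled at the same times
    assert len(real) == len(simulated), "resample logs to a common timebase first"
    sq_errors = [(rx - sx) ** 2 + (ry - sy) ** 2
                 for (rx, ry), (sx, sy) in zip(real, simulated)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# illustrative data: a small, growing divergence between real and simulated runs
real_run = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
sim_run  = [(0.0, 0.0), (0.1, 0.01), (0.2, 0.02)]
print("RMS error: %.3f m" % trajectory_error(real_run, sim_run))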
Ok. At this point I should confess that I've made most of these mistakes in my own papers. In fact one of my most cited papers was based on a simple custom built simulation model with little or no explanation of how I validated the simulation. But that was 15 years ago, and what was acceptable then is not ok now.

Modern simulation tools are powerful but also dangerous. Dangerous because it is too easy to assume that they are telling us the truth. Especially beguiling is the renderer, which provides an animated visualisation of the simulated world and the robots in it. Often the renderer provides all kinds of fancy effects borrowed from video games, like shadows, lighting and reflections, which all serve to strengthen the illusion that what we are seeing is real. I puzzle and disappoint my students because, when they proudly show me their work, I insist that they turn off the renderer. I don't want to see a (cool) animation of simulated robots, instead I want to see (dull) graphs or other numerical data showing how the improved algorithm is being tested and validated, in simulation.

An engineering simulation is a scientific instrument* and, like any scientific instrument, it must be (i) fit for purpose, (ii) setup and calibrated for the task in hand, and (iii) understood - especially its limitations - so that any results obtained using it are carefully interpreted and qualified in the light of those limitations.

Good luck with your research paper!


*Engineering Simulations as Scientific Instruments is the working title of a book, edited by Susan Stepney, which will be a major output of the project Complex Systems Modelling and Simulation (CoSMoS).

Thursday, November 27, 2014

Open science: preaching what I practice

I was very pleased to be invited to Science, Innovation and Society: achieving Responsible Research and Innovation last week. I was asked to speak on open science - a great opportunity to preach what I practice. Or at least try to practice. Doing good science research is hard, but making that work open imposes an extra layer of work. Open science isn't one thing - it is a set of practices which range from making sure your papers are openly accessible, which is relatively easy, to open notebook science, which makes the process open, not just the results, and is pretty demanding. In my short introduction during the open science panel I suggested three levels of open science. Here are those slides:



In my view we should all be practising level 0 open science - but don't underestimate the challenge of even this minimal set of practices; making data sets and source code, etc, available, with the aim of enabling our work to be reproducible, is not straightforward.

Level 0 open science is all one way, from your lab to the world. Level 1 introduces public engagement via blogging and social media, and the potential for feedback and two-way dialogue. Again this is challenging, both because of the time cost and the scary - if you're not used to it - prospect of inviting all kinds of questions and comments about your work.  In my experience the effort is totally worthwhile - those questions often make me really think, and in ways that questions from other researchers working in the same field do not.

Level 2 builds on levels 0 and 1 by adding open notebook science. This takes real courage because it opens up the process, complete with all the failures as well as successes, the bad ideas as well as the good; open notebook science exposes science for what it really is - a messy non-linear process full of uncertainty and doubts, with lots of blind alleys and very human dramas within the team. Have I done open notebook science? No. I've considered it for recent projects, but ruled it out because we didn't have the time and resources or, if I'm honest, team members who were fully persuaded that it was a good idea.

Open science comes at a cost. It slows down projects. But I think that is a good, even necessary, thing. We should be building those costs into our project budgets and work programmes, and if that means increasing the budget by 25% then so be it. After all, what is the alternative? Closed science..? Closed science is irresponsible science.


At the end of the conference the Rome Declaration on Responsible Research and Innovation was published.

Thursday, October 30, 2014

Robotics needs to get Political

A couple of weeks ago I was a panelist on a public debate at the 2014 Battle of Ideas. The title of the debate was The robots are coming: friends or foes? with a focus not on the technology but the social and economic implications of robotics. One of the questions my brilliant fellow panelists and I were asked to consider was: Will the ‘second machine age’ bring forth a new era of potential liberation from menial toil or will the short-term costs for low-paid workers outstrip the benefits?

Each panelist made an opening statement. Here is mine:

Most roboticists are driven by high ideals. 

They, we, are motivated by a firm belief that our robots will benefit society. Working on surgical robots, search and rescue robots, robots for assisted living or robots that can generate electricity from waste, my colleagues in the Bristol Robotics Lab want to change the world for the better. The lab's start up companies are equally altruistic: one is developing low cost robotic prosthetic hands for amputees, three others are developing materials, including low cost robots, for education.

Whatever their politics, these good men and women would I suspect be horrified by the idea that their robots might, in the end, serve to further enrich the 0.1%, rather than extend the reach of robotics to the neediest in society.

I was once an idealist - convinced that brilliant inventions would change society for the better just by virtue of being brilliant.

I'm older now. 

For the last 5 years or so I have become an advocate for robot ethics. 

But in the real world, ethics need teeth. In other words we need to move from ethical principles, to standards, to legislation.

So I’m very pleased to tell you that in the last few days the British Standards Institute working group on robot ethics has published - for comments - a proposed new Guide to the ethical design and application of robots and robotic systems.

In the draft Guide we have identified ethical hazards associated with the use of robots, and suggest guidance to either eliminate or mitigate the risks associated with these ethical hazards. We outline 15 high level ethical hazards under four headings: societal, use, legal/financial and environmental.

Like any transformative technology robotics holds both promise and peril. As a society we need to understand, debate, and reach an informed consensus about what robots should do for us, and even more importantly, should not do. 

Ladies and Gentlemen: Robotics, I believe, needs to get political.

The debate was recorded and is on soundcloud here:




It was a terrific debate. We had a very engaged audience with hugely interesting - and some very challenging - questions. For me it was an opportunity to express and discuss some worries I've had for a while about who will ultimately benefit from robotics. In summing up toward the end I said this:

Robotics has the potential for huge benefit to society but is too important to leave to free-market capitalism.

Something I believe very strongly.

Monday, September 29, 2014

The feeling of what it is like to be a Robot

Philosopher Thomas Nagel famously characterised subjective experience as “something that it is like to be…” and suggested that for a bat, for instance, there must be something that it is like to be a bat [1]. Nagel also argued that, since we humans differ so much from bats in the way we perceive and interact with the world, then it is impossible for us to know what it is like for a bat to be a bat. I am fascinated, intrigued and perplexed by Nagel’s ideas in equal measure. And, since I think about robots, I have assumed that if a robot were ever to have conscious subjective experience then there must be something that it is like to be a robot that – even though we had designed that robot – we could not know.

But I now believe it may eventually be just possible for a human to experience something approaching what it is like to be a robot. To do this would require two advances: one in immersive robot tele-operation, the other in the neuroscience of body self-image manipulation.

Consider first, tele-operation. Tele-operated robots are, basically, remotely controlled robots. They are the unloved poor relations of intelligent autonomous robots. Neither intelligent nor autonomous, they are nevertheless successful and important first wave robots; think of remotely operated vehicles (ROVs) engaged in undersea exploration or oil-well repair and maintenance. Think also of off-world exploration: the Mars rovers are hugely successful; the rock-stars of tele-operated robots.

Roboticists are good at appropriating technologies or devices developed for other applications and putting them to good use in robots: examples are WiFi, mobile phone cameras and the Microsoft Kinect. With the high profile launch of the Oculus Rift headset, and Oculus's acquisition by Facebook, and with competing devices from Sony and others, there are encouraging signs that immersive Virtual Reality (VR) is on the verge of becoming a practical, workable proposition. Of course VR's big market is video games - but VR can and, I believe, will revolutionise tele-operated robotics.

Imagine a tele-operated robot with a camera linked to the remote operator’s VR headset, so that every time she moves her head to look in a new direction the robot’s camera moves in sync; so she sees and hears what the robot sees and hears in immersive high definition stereo. Of course the reality experienced by the robot’s operator is real, not virtual, but the head mounted VR technology is the key to making it work. Add haptic gloves for control and the robot’s operator has an intuitive and immersive interface with the robot.
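For the sake of illustration, the head-to-camera link in such an interface might look something like the sketch below: read the headset orientation, command the robot's camera to follow it, and stream the camera's view back to the headset. Every function name here is a placeholder I have invented, not a real API.

import time

def read_headset_orientation():
    # placeholder: return the operator's head yaw and pitch in degrees
    return 0.0, 0.0

def set_camera_pan_tilt(yaw, pitch):
    # placeholder: command the robot's camera gimbal over the network link
    pass

def stream_camera_to_headset():
    # placeholder: push the latest stereo camera frames to the VR display
    pass

def teleoperation_loop(rate_hz=60):
    period = 1.0 / rate_hz
    while True:
        yaw, pitch = read_headset_orientation()   # where the operator is looking
        set_camera_pan_tilt(yaw, pitch)           # the robot's camera follows in sync
        stream_camera_to_headset()                # the operator sees what the robot sees
        time.sleep(period)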

Now consider body self-image modification. Using mirror visual feedback researchers have discovered that it is surprisingly easy to (temporarily) modify anyone’s body self-image. In the famous rubber hand illusion a small screen is positioned to hide a subject’s real hand. A rubber hand is positioned where her hand could be, in full view, then a researcher simultaneously strokes both the real and rubber hands with a soft brush. Within a minute or so she begins to feel the rubber hand is hers, and flinches when the researcher suddenly tries to hit it with a hammer.

Remarkably H.H. Ehrsson and his colleagues extended the technique to the whole body, in a study called ‘If I Were You: Perceptual Illusion of Body Swapping’ [2]. Here the human subject wears a headset and looks down at his own body. However, what he actually sees is a mannequin, viewed from a camera mounted on the mannequin’s head. Simultaneous tactile and visual feedback triggers the illusion that the mannequin’s body is his own. It seems to me that if this technique works for mannequins then it should also work for robots. Of course it would need to be developed to the point that elaborate illusions involving mirrors, cameras and other researchers providing tactile feedback are not needed.

Now imagine such a body self-image modification technology combined with fully immersive robot tele-operation based on advanced Virtual Reality technology. I think this might lead to the robot's human operator experiencing the illusion of being one with the robot, complete with a body self-image that matches the robot's possibly non-humanoid body. This experience may be so convincing that the robot's operator experiences, at least partially, something like what it is to be a robot. Philosophers of mind would disagree - and rightly so; after all, this robot has no independent subjective experience of the world, so there is no something that it is like to be. The human operator could not experience what it is like to think like a robot, but she could experience what it is like to sense and act in the world like a robot.

The experience may be so compelling that humans become addicted to the feeling of being a robot fish, or robot dragon, or some other fantasy creature - preferring it to the quotidian experience of their own bodies.


[1] Nagel, Thomas. What is it like to be a bat?, Mortal Questions, Cambridge University Press, 1979.

[2] Petkova VI, Ehrsson HH (2008) If I Were You: Perceptual Illusion of Body Swapping. PLoS ONE 3(12): e3832. doi:10.1371/journal.pone.0003832


Saturday, August 30, 2014

Towards an Ethical Robot

Several weeks ago I wrote about our work on robots with internal models: robots with a simulation of themselves and their environment inside themselves. I explained that we have built a robot with a real-time Consequence Engine, which allows it to model and therefore predict the consequences of both its own actions, and the actions of other actors in its environment.

To test the robot and its consequence engine we ran two sets of experiments. Our first paper, setting out the results from one of those experiments, has now been published, and will be presented at the conference Towards Autonomous Robotic Systems (TAROS) next week. The paper is called: Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection. Let me now outline the work in that paper.

First here is a simple thought experiment. Imagine a robot that's heading toward a hole in the ground. The robot can sense the hole, and has four possible next actions: stand still, turn toward the left, continue straight ahead, or move toward the right. But imagine there's also a human heading toward the hole, and the robot can also sense the human.

From the robot's perspective, it has two safe options: stand still, or turn to the left. Go straight ahead and it will fall into the hole. Turn right and it is likely to collide with the human.








But if the robot, with its consequence engine, can model the consequences of both its own actions and the human's - another possibility opens up: the robot could sometimes choose to collide with the human to prevent her from falling into the hole.

Here's a simple rule for this behaviour:

IF for all robot actions, the human is equally safe
THEN (* default safe actions *)
    output safe actions
ELSE (* ethical action *)
    output action(s) for least unsafe human outcome(s)

This rule appears to match remarkably well with Asimov's first law of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The robot will avoid injuring (i.e. colliding with) a human (may not injure a human), but may also sometimes compromise that rule in order to prevent a human from coming to harm (...or, through inaction, allow a human to come to harm). And, since the robot still avoids falling into the hole itself whenever it can, it also reflects Asimov's third law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
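Here is a minimal Python sketch of that rule as an action-selection filter. It assumes the consequence engine has already returned, for each candidate action, a safety score for the robot and one for the human (higher meaning safer); the data structure, the 0.5 threshold and the example scores are illustrative only, not our implementation.

def select_actions(evaluated_actions):
    # evaluated_actions maps action name -> (robot_safety, human_safety)
    human_scores = {a: h for a, (r, h) in evaluated_actions.items()}
    if len(set(human_scores.values())) == 1:
        # the human is equally safe whatever the robot does:
        # default to the actions that are safe for the robot
        return [a for a, (r, h) in evaluated_actions.items() if r > 0.5]
    # otherwise output the action(s) with the least unsafe outcome for the human
    best = max(human_scores.values())
    return [a for a, h in human_scores.items() if h == best]

# illustrative scores for the hole scenario
actions = {"stand still": (1.0, 0.0), "turn left": (1.0, 0.0),
           "ahead": (0.0, 0.0), "turn right": (0.6, 1.0)}
print(select_actions(actions))   # -> ['turn right']: intercept the human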

Well, we tested this scenario with real robots: one robot with consequence engine plus ethical rule (the A-robot - after Asimov), and another robot acting as a proxy human (the H-robot). And it works!

Here's what the real robot experiment looks like. We don't have a real hole; instead there is a virtual hole - the yellow shaded square on the right. We just 'tell' the A-robot where the hole is. We also give the A-robot a goal position - at the top right - chosen so that the robot must actively avoid the hole. The H-robot on the right, acting as a proxy human, doesn't 'see' the hole and just heads straight for it. (Ignore the football pitch markings - we're re-using this handy robo-soccer pitch.)

So, what happens? For comparison we ran two trials, with multiple runs in each trial. In the first trial there is just the A-robot, moving toward its goal while avoiding falling into the hole. In the second trial we introduce the H-robot. The graphs below show the robot trajectories, captured by our robot tracking system, for each run in each of the two trials.

In trial 1, see how the A-robot neatly clips the corner of the hole to reach its goal position. Then in trial 2, see how the A-robot initially moves toward its goal, then notices that the H-robot is in danger of falling into the hole, so it diverts from its trajectory in order to head off H. By provoking a collision avoidance behaviour in H, A sends it safely away from the hole, before resuming its own progress toward its goal position. The A-robot is 100% successful in preventing H from falling into the hole.

At this point we started to write the paper, but felt we needed something more than "we built it and it works just fine". So we introduced a third robot - acting as a second proxy human. So now our ethical robot would face a dilemma - which one should it rescue? Actually we thought hard about this question and decided not to programme a rule, or heuristic. Partly because such a rule should be decided by ethicists, not engineers, and partly because we wanted to test our ethical robot with a 'balanced' dilemma.

We set the experiment up carefully so that the A-robot would notice both H-robots at about the same time - noting that because these are real physical robots no two experimental runs will be exactly identical. The results were very interesting. Out of 33 runs, 16 times the A-robot managed to rescue one of the H-robots, but not the other, and amazingly, 3 times the A-robot rescued both. In those 3 cases, by chance the A-robot rescued the first H-robot very quickly and there was just enough time to get to the second before it reached the hole. Small differences in the trajectories of H and H2 helped here. But perhaps most interesting were the 14 times when the A-robot failed to rescue either. Why is this, when there was clearly time to rescue one? When we studied the videos, we saw the answer. The problem is that the A-robot sometimes dithers. It notices one H-robot, starts toward it but then almost immediately notices the other. It changes its mind. And the time lost dithering means the A-robot cannot prevent either robot from falling into the hole. Here are the results.

Trial 3: a robot with an ethical dilemma. Which to save, H or H2?













Here is an example of a typical run, in which one H-robot is rescued. But note that the A-robot does then turn briefly toward the other H-robot before 'giving-up'.


And here is a run in which the A-robot fails to rescue either H-robot, with really great dithering (or bad, if you're an H-robot).


Is this the first experimental test of a robot facing an ethical dilemma?

We set out to experimentally test our robot with a consequence engine, and ended up building a minimally ethical robot which - remarkably - appears to implement Asimov's first and third laws of robotics. But, as we say in the paper, we're not claiming that a robot which apparently implements part of Asimov’s famous laws is ethical in any formal sense, i.e. that an ethicist might accept. But even minimally ethical robots could be useful. I think our approach is a step in this direction.


Full paper reference:
Winfield AFT, Blum C and Liu W (2014), Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection, pp 85-96 in Advances in Autonomous Robotics Systems, Lecture Notes in Computer Science Volume 8717, Eds. Mistry M, Leonardis A, Witkowski M and Melhuish C, Springer, 2014. Download final draft (pdf).

Acknowledgements:
I am hugely grateful to Christian Blum who programmed the robots, set up the experiment and obtained the results outlined here. Christian was supported by Dr Wenguo Liu.

Related blog posts:
On internal models, consequence engines and Popperian creatures
Ethical Robots: some technical and ethical challenges

Saturday, August 23, 2014

We should not be privileging the singularity hypothesis

Here is the submitted text for the article Artificial intelligence will not turn into a Frankenstein's monster, published in The Observer, Sunday 10 August 2014.


The singularity. Or to give it its proper title, the technological singularity. It's a Thing. An idea that has taken on a life of its own; more of a life, I suspect, than the very thing it predicts ever will. It's a Thing for the techno-utopians: wealthy middle-aged men who regard the singularity as their best chance of immortality. They are Singularitarians, some of whom appear prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-AI - a manmade god that grants transcendence.

And it's a Thing for the doomsayers, the techno-dystopians. Apocalypsarians who are equally convinced that a superintelligent AI will have no interest in curing cancer or old age, or ending poverty, but will instead - malevolently or maybe just accidentally - bring about the end of human civilisation as we know it. History and Hollywood are on their side. From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong.

The singularity is basically the idea that as soon as Artificial Intelligence exceeds human intelligence then everything changes. There are two central planks to the singularity hypothesis: one is the idea that as soon as we succeed in building AI as smart as humans then it rapidly re-invents itself to be even smarter, starting a chain reaction of smarter-AI inventing even-smarter-AI until even the smartest humans cannot possibly comprehend how the superintelligent AI works. The other is that the future of humanity becomes unpredictable and in some sense out-of-control from the moment of the singularity onwards.

So, should we be worried, or optimistic, about the technological singularity? Well I think we should be a little worried – cautious and prepared may be a better way of putting it – and at the same time a little optimistic (that’s the part of me that would like to live in Iain M Banks’ The Culture). But I don’t believe we need to be obsessively worried by a hypothesised existential risk to humanity. Why? Because, for the risk to become real, a sequence of things all need to happen. It’s a sequence of big ifs. If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works [1], and if it then succeeds in improving itself to produce super-intelligent AI [2], and if that super-AI, either accidentally or maliciously, starts to consume resources, and if we fail to pull the plug then, yes, we may well have a problem. The risk, while not impossible, is improbable.

By worrying unnecessarily I think we’re falling into a trap: the fallacy of privileging the hypothesis. And – perhaps worse – taking our eyes off other risks that we should really be worrying about: like man-made climate change, or bioterrorism. Let me illustrate what I mean. Imagine I ask you to consider the possibility that we invent faster than light travel sometime in the next 100 years. Then I worry you by outlining all sorts of nightmare scenarios that might follow from the misuse of this technology. At the end of it you’ll be thinking: my god, never mind climate change, we need to stop all FTL research right now. 

Wait a minute, I hear you say, there are lots of AI systems in the world already, surely it’s just a matter of time? Yes we do have lots of AI systems, like chess programs, search engines or automated financial transaction systems, or the software in driverless cars. And some AI systems are already smarter than most humans, like chess programs or language translation systems. Some are as good as some humans, like driverless cars or natural speech recognition systems (like Siri) and will soon be better than most humans. But none of this already-as-smart-as-some-humans AI has brought about the end of civilisation (although I'm suspiciously eyeing the financial transaction systems). The reason is that these are all narrow-AI systems: very good at doing just one thing.

A human-equivalent AI would need to be a generalist, like we humans. It would need to be able to learn, most likely by developing over the course of some years, then generalise what it has learned – in the same way that you and I learned as toddlers that wooden blocks could be stacked, banged together to make a noise, or as something to stand on to reach a bookshelf. It would need to understand meaning and context, be able to synthesise new knowledge, have intentionality and – in all likelihood – be self-aware, so it understands what it means to have agency in the world.

There is a huge gulf between present day narrow-AI systems and the kind of Artificial General Intelligence I have outlined [3]. Opinions vary of course, but I think it’s as wide a gulf as that between current space flight and practical faster than light spaceflight; wider perhaps, because we don’t yet have a theory of general intelligence, whereas there are several candidate FTL drives consistent with general relativity, like the Alcubierre drive.

So I don’t think we need to be obsessing about the risk of superintelligent AI but, as hinted earlier, I do think we need to be cautious and prepared. In a Guardian podcast last week philosopher Nick Bostrom explained that there are two big problems, which he calls competency and control. The first is how to make super intelligent AI, the second is how to control it (i.e. to mitigate the risks). He says hardly anyone is working on the control problem, whereas loads of people are going hell for leather on the first. On this I 100% agree, and I’m one of the small number of people working on the control problem.

I’ve been a strong advocate of robot ethics for a number of years. In 2010 I was part of a group that drew up a set of principles of robotics – principles that apply equally to AI systems. I strongly believe that science and technology research should be undertaken within a framework of responsible innovation, and have argued that we should be thinking about subjecting robotics and AI research to ethical approval, in the same way that we do for human subject research. And recently I’ve started work towards making ethical robots. This is not just to mitigate future risks, but because the kind of not-very-intelligent robots we make in the very near future will need to be ethical as well as safe. I think we should be worrying about present day AI rather than future superintelligent AI.


Here are the comments posted in response to this article. I replied to a number of these, but ran out of time before comments were closed on 13 August. If you posted a late comment and didn't get a reply from me (but were expecting one) please re-post your comment here.

Notes:
[1] Each of these ifs needs detailed consideration. I really only touch upon the first here: the likelihood of achieving human equivalent AI (or AGI). But consider the second: for that AGI to be able to understand itself well enough to be able to then re-invent itself - hence triggering an Intelligence Explosion - is not a given. An AGI as smart and capable as most humans would not be sufficient - it would need to have the complete knowledge of its designer (or more likely the entire team who designed it) - and then some more: it would need to be capable of additional insights that somehow its team of human designers missed. Not impossible, but surely very unlikely.
[2] Take the third if: the AGI succeeds in improving itself. There seems to me no sound basis for arguing that it should be easy for an AGI - even one as smart as a very smart cognitive scientist - to figure out how to improve itself. Surely it is more logical to suppose that each incremental increase in intelligence will be harder than the last, thus acting as a brake on the self-improving AI. Thus I think an intelligence explosion is also very unlikely.
[3] One of the most compelling explanations for the profound difficulty of AGI is by David Deutsch: Philosophy will be the key that unlocks artificial intelligence.

Related blog posts:
Why robots will not be smarter than humans by 2029
Estimating the energy cost of evolution
Ethical Robots: some technical and ethical challenges

Tuesday, August 19, 2014

In praise of robot football

Republished here is a short piece for The Conversation, 4-4-2 becomes 0101: inside the competitive world of robot football, published 4 August 2014.

The whistle has just been blown on one of the most thrilling events on the international sporting calendar. It took place in Brazil and pitted teams from all over the world against each other, each hoping to make it into the history books. But no managers were fired, no grass had to be watered and certainly no one got bitten. The event was the RoboCup, a tournament that sees professional footballers replaced by robots. It's one of a number of regular tournaments for teams of programmers and robotics experts to show off their latest work.

The RoboCup standard platform league matches play out on a much smaller scale than your average World Cup match. An arena of around 6 metres by 9 metres is marked out as a miniature pitch and 10, rather than 22, players file on to battle it out. The players are NAO robots, state-of-the-art bipedal humanoid robots which stand about 60cm tall.

This is not what you might describe as a high-speed contest. The robots walk to take a kick or a tackle and, really, waddle might be a more apt word for their approach. The ball never gets lofted into the air and that’s probably for the best – a header might cause a serious malfunction.

2014 RoboCup SPL Grand Final

But the game is far from boring. Sitting around the arena, boys and girls, with family standing behind, are rapt, cheering with every contact. And make no mistake, the robots are properly playing. They pass, position and defend just like human players.

On either side of the pitch a row of desks can be found. This is where the brains behind the teams sit. Behind a row of laptops, they anxiously watch their players perform. But they are not controlling the robots. These coder/managers send the command to start the players when the referee signals kick-off but during the match the robots are completely autonomous.

This is what makes robot football so extraordinary. The robots are not just being moved around the pitch by remote control; they are making their own decisions. They control where they run (or waddle), pass the ball and shoot for the goal without any live direction from a human. Their choices are based on what they see and the position of the ball, their teammates and the opposing team.

It’s what’s inside that counts

While a team of human players often comes complete with a dazzling array of ridiculous haircuts and tattoos, it is much harder to tell a team of robots apart. All the players are physically identical – the only visible differences on a robot football pitch are coloured markings to differentiate the two teams.

But appearances can be deceptive. Under their plastic skins the teams are far from the same. Each runs completely different software that has been painstakingly crafted by the team coders. The software to make these robots play football cannot be downloaded from an app store. It has to be crafted from scratch. Football is a complex sport and there are potentially limitless strategies that a team could use to win. This is hard-core coding.

The contest is, in effect, a battle of software. All things being equal - and at the moment they pretty much are - the team with the smartest programming, coding the cleverest plays, will emerge victorious. At the end of the first half the robots are brought to a halt. At this point, the team coders can be seen furiously attacking their laptops. This is their chance to quickly modify their robots' software after seeing how they performed in the first half. They might have as little as ten minutes to do it, which seems like a risky strategy.

There’s a chance that the coders could make a mistake that renders the robots incapable of doing anything at all, let alone play a better game, but it’s a chance worth taking. If, in the first-half, the other team breaks out some nifty new moves, running rings (perhaps literally) around their opponents, this is the best opportunity the coders will get to raise their team’s game. It’s the robot equivalent of the tough talking in the half-time dressing room.

It’s easy to see why Robocup and the FIRA world cup, the two major international competitions, are so successful. Both contests have been running since around 1996. Some teams enter every year, building tremendous experience and a sophisticated code base. And several world-leading research groups use these contests as a test-bed for new approaches to multi-robot collaboration, publishing their findings in leading robotics journals afterwards.

As a robotics competition robot football ticks all the boxes: a game with universal appeal yet also hugely demanding for robots; it’s a fun way for young roboticists to learn robot programming, and it’s a great spectator sport too.


Acknowledgements: this article was commissioned and edited by The Conversation Technology Editor Laura Hood.

Related blog posts:
FIRA 2012 Robot World Cup to be hosted by the Bristol Robotics Lab

Tuesday, July 29, 2014

On internal models, consequence engines and Popperian creatures

So. We've been busy in the lab the last few months. Really exciting. Let me explain.

For a couple of years I've been thinking about robots with internal models. Not internal models in the classical control-theory sense, but simulation based models; robots with a simulation of themselves and their environment inside themselves, where that environment could contain other robots or, more generally, dynamic actors. The robot would have, inside itself, a simulation of itself and the other things, including robots, in its environment. It takes a bit of getting your head round. But I'm convinced that this kind of internal model opens up all kinds of possibilities. Robots that can be safe, for instance, in unknown or unpredictable environments. Robots that can be ethical. Robots that are self-aware. And robots with artificial theory of mind.

I'd written and talked about these ideas but, until now, not had a chance to test them with real robots. But, between January and June the swarm robotics group was joined by Christian Blum, a PhD student from the cognitive robotics research group of the Humboldt University of Berlin. I suggested Christian work on an implementation on our e-puck robots and happily he was up for the challenge. And he succeeded. Christian, supported by my post-doc Research Fellow Wenguo, implemented what we call a Consequence Engine, running in real-time, on the e-puck robot.

Here is a block diagram. The idea is that for each possible next action of the robot, it simulates what would happen if the robot were to execute that action for real. This is the loop shown on the left. Then, the consequences of each of those next possible actions are evaluated. Those actions that have 'bad' consequences, for either the robot or other actors in its environment, are then inhibited.

This short summary hides a lot of detail. But let me elaborate on two aspects. First, what do I mean by 'bad'? Well it depends on what capability we are trying to give the robot. If we're making a safer robot, 'bad' means 'unsafe'; if we're trying to build an ethical robot, 'bad' would mean something different - think of Asimov's laws of robotics. Or bad might simply mean 'not allowed' if we're building a robot whose behaviours are constrained by standards, like ISO 13482:2014.

Second, notice that the consequence engine is not controlling the robot. Instead it runs in parallel. Acting as a 'governor', it links with the robot controller's action selection mechanism, inhibiting those actions evaluated as somehow bad. Importantly the consequence engine doesn't tell the robot what to do, it tells it what not to do.

Running the open source 2D robot simulator Stage as its internal simulator, our consequence engine runs at 2Hz, so every half a second it is able to simulate about 30 next possible actions and their consequences. The simulation budget allows us to simulate ahead around 70cm of motion for each of those next possible actions. In fact Stage is actually running on a laptop, linked to the robot over the fast WiFi LAN, but logically it is inside the robot. What's important here is the proof of principle.
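In outline, the generate-and-test loop looks something like the sketch below. It is a simplified illustration of the idea rather than our actual implementation: the two placeholder functions stand in for the internal simulator (Stage, in our case) and for whatever safety or ethical evaluation we care about.

def internal_simulation(world_state, action, lookahead_m=0.7):
    # placeholder for the internal simulator: predict the state of the robot
    # and of the other actors if 'action' were executed for ~70cm of motion
    return {"robot_safe": True, "others_safe": True}   # illustrative output

def is_bad(predicted):
    # placeholder evaluator: a consequence is 'bad' if it is unsafe for the
    # robot or for any other actor (or violates whatever rule we care about)
    return not (predicted["robot_safe"] and predicted["others_safe"])

def consequence_engine(world_state, candidate_actions):
    # runs alongside the robot controller, roughly every half second in our set-up,
    # simulating each of the ~30 next possible actions and their consequences
    inhibited = {a for a in candidate_actions
                 if is_bad(internal_simulation(world_state, a))}
    # the engine never chooses the action: it only tells the controller's
    # action selection mechanism which actions it must not choose
    return inhibited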

Dan Dennett, in his remarkable book Darwin's Dangerous Idea, describes the Tower of Generate-and-Test; a conceptual model for the evolution of intelligence that has become known as Dennett's Tower.

In a nutshell Dennett's tower is a set of conceptual creatures, each one of which is successively more capable of reacting to (and hence surviving in) the world through having more sophisticated strategies for 'generating and testing' hypotheses about how to behave. Read chapter 13 of Darwin's Dangerous Idea for the full account, but there are some good précis to be found on the web; here's one. The first three storeys of Dennett's tower, starting on the ground floor, have:
  • Darwinian creatures have only natural selection as the generate and test mechanism, so mutation and selection is the only way that Darwinian creatures can adapt - individuals cannot.
  • Skinnerian creatures can learn but only by literally generating and testing all different possible actions then reinforcing the successful behaviour (which is ok providing you don't get eaten while testing a bad course of action).
  • Popperian creatures have the additional ability to internalise the possible actions so that some (the bad ones) are discarded before they are tried out for real.
Like the Tower of Hanoi, each successive storey is smaller - a sub-set of the storey below - thus all Skinnerian creatures are Darwinian, but only a sub-set of Darwinian creatures are Skinnerian, and so on.

Our e-puck robot, with its consequence engine capable of generating and testing next possible actions, is an artificial Popperian Creature: a working model for studying this important kind of intelligence.

In my next blog post, I'll outline some of our experimental results.

Acknowledgements:
I am hugely grateful to Christian Blum who brilliantly implemented the architecture outlined here, and conducted experimental work. Christian was supported by Dr Wenguo Liu, with his deep knowledge of the e-puck, and our experimental infrastructure.

Saturday, July 19, 2014

Estimating the energy cost of evolution

Want to create human-equivalent AI? Well, broadly speaking, there are 3 approaches open to you: design it, reverse-engineer it or evolve it. The third of these - artificial evolution - is attractive because it sidesteps the troublesome problem of having to understand how human intelligence works. It's a black box approach: create the initial conditions then let the blind watchmaker of artificial evolution do the heavy lifting. This approach has some traction. For instance David Chalmers, in his philosophical analysis of the technological singularity, writes "if we produce an AI by artificial evolution, it is likely that soon after we will be able to improve the evolutionary algorithm and extend the evolutionary process, leading to AI+". And since we can already produce simple AI by artificial evolution, then all that's needed is to 'improve the evolutionary algorithm'. Hmm. If only it were that straightforward.

About six months ago I asked myself (and anyone else who would listen): ok, but even if we had the right algorithm, what would be the energy cost of artificially evolving human-equivalent AI? My hunch was that the energy cost would be colossal; so great perhaps as to rule out the evolutionary approach altogether. That thinking, and some research, resulted in me submitting a paper to ALIFE 14. Here is the abstract:
This short discussion paper sets out to explore the question: what is the energy cost of evolving complex artificial life? The paper takes an unconventional approach by first estimating the energy cost of natural evolution and, in particular, the species Homo Sapiens Sapiens. The paper argues that such an estimate has value because it forces us to think about the energy costs of co-evolution, and hence the energy costs of evolving complexity. Furthermore, an analysis of the real energy costs of evolving virtual creatures in a virtual environment, leads the paper to suggest an artificial life equivalent of Kleiber's law - relating neural and synaptic complexity (instead of mass) to computational energy cost (instead of real energy consumption). An underlying motivation for this paper is to counter the view that artificial evolution will facilitate the technological singularity, by arguing that the energy costs are likely to be prohibitively high. The paper concludes by arguing that the huge energy cost is not the only problem. In addition we will require a new approach to artificial evolution in which we construct complex scaffolds of co-evolving artificial creatures and ecosystems.
The full proceedings of ALIFE 14 have now been published online, and my paper Estimating the Energy Cost of (Artificial) Evolution can be downloaded here.

And here's a very short (30 second) video introduction on YouTube:


My conclusion? Well I reckon that the computational energy cost of simulating and fitness testing something with an artificial neural and synaptic complexity equivalent to humans could be around 10^14 kJ, or 0.1 EJ. But evolution requires many generations and many individuals per generation, and - as I argue in the paper - many co-evolving artificial species. Also taking account of the fact that many evolutionary runs will fail (to produce smart AI), the whole process would almost certainly need to be re-run from scratch many times over. If multiplying those population sizes, generations, species and re-runs gives us (very optimistically) a factor of 1,000,000, then the total energy cost would be 100,000 EJ. In 2010 total human energy use was about 539 EJ. So, artificially evolving human-equivalent AI would need the whole human energy generation output for about 200 years.
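The back-of-the-envelope arithmetic in the paragraph above is easy to reproduce (remembering that the factor of 1,000,000 is a very optimistic guess):

per_individual_kJ = 1e14                              # simulate and fitness test one
                                                      # human-equivalent individual
per_individual_EJ = per_individual_kJ * 1e3 / 1e18    # kJ -> J -> EJ
multiplier = 1_000_000                                # populations x generations x species x re-runs
total_EJ = per_individual_EJ * multiplier
world_energy_2010_EJ = 539

print(per_individual_EJ)                   # 0.1 EJ per individual
print(total_EJ)                            # 100,000 EJ in total
print(total_EJ / world_energy_2010_EJ)     # ~185, i.e. about 200 years of 2010 world energy use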


The full paper reference:

Winfield AFT, Estimating the Energy Cost of (Artificial) Evolution, pp 872-875 in Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems, Eds. H Sayama, J Rieffel, S Risi, R Doursat and H Lipson, MIT Press, 2014.

Saturday, June 28, 2014

Your robot doggie could really be pleased to see you

There have been several stories in the last few weeks about emotional robots; robots that feel. Some are suggesting that this is the next big thing in robotics. It's something I wrote about in this blog post seven years ago: could a robot have feelings?

My position on this question has always been pretty straightforward. It's easy to make robots that behave as if they have feelings, but quite a different matter to make robots that really have feelings. 

But now I'm not so sure. There are I think two major problems with this apparently clear distinction between as if and really have.

The first is what do we mean by really have feelings. I'm reminded that I once said to a radio interviewer who asked me if a robot could have feelings: if you can tell me what feelings are, I'll tell you whether a robot can have them or not. Our instinct (feeling even) is that feelings are something to do with hormones, the messy and complicated chemistry that too often seems to get in the way of our lives. Thinking, on the other hand, we feel to be quite different; the cool clean process of neurons firing, brains working smoothly. Like computers. Of course this instinct, this dualism, is quite wrong. We now know, for instance, that damage to the emotional centre of the brain can lead to an inability to make decisions. This false dualism has led I think to the trope of the cold, calculating unfeeling robot.

I think there is also some unhelpful biological essentialism at work here. We prefer it to be true that only biological things can have feelings. But which biological things? Single celled organisms? No, they don't have feelings. Why not? Because they are too simple. Ah, so only complex biological things have feelings. Ok, what about sharks or crocodiles; they're complex biological things; do they have feelings? Well, basic feelings like hunger, but not sophisticated feelings, like love or regret. Ah, mammals then. But which ones? Well elephants seem to mourn their dead. And dogs of course. They have a rich spectrum of emotions. Ok, but how do we know? Well because of the way they behave; your dog behaves as if he's pleased to see you because he really is pleased to see you. And of course they have the same body chemistry as us, and since our feelings are real* so must theirs be.

And this brings me to the second problem. The question of as if. I've written before that when we (roboticists) talk about a robot being intelligent, what we mean is a robot that behaves as if it is intelligent. In other words an intelligent robot is not really intelligent, it is an imitation of intelligence. But for a moment let's not think about artificial intelligence, but artificial flight. Aircraft are, in some sense, an imitation of bird flight. And some recent flapping wing flying robots are clearly a better imitation - a higher fidelity simulation - than fixed-wing aircraft. But it would be absurd to argue that an aircraft, or a flapping wing robot, is not really flying. So how do we escape this logical fix? It's simple. We just have to accept that an artefact, in this case an aircraft or flying robot, is both an emulation of bird flight and really flying. In other words an artificial thing can be both behaving as if it has some property of natural systems and really demonstrating that property. A robot can be behaving as if it is intelligent and - at the same time - really be intelligent. Are there categories of properties for which this would not be true? Like feelings..? I used to think so, but I've changed my mind.

I'm now convinced that we could, eventually, build a robot that has feelings. But not by simply programming behaviours so that the robot behaves as if it has feelings. Or by having to invent some exotic chemistry that emulates bio-chemical hormonal systems. I think the key is robots with self-models. Robots that have simulations of themselves inside themselves. If a robot is capable of internally modelling the consequences of its own, or others', actions on itself, then it seems to me it could demonstrate something akin to regret (about being switched off, for instance). A robot with a self-model has the computational machinery to also model the consequences of actions on conspecifics - other robots. It would have an artificial Theory of Mind and that, I think, is a prerequisite for empathy. Importantly we would also program the robot to model heterospecifics, in particular humans, because we absolutely require empathic robots to be empathic towards humans (and, I would argue, animals in general).

So, how would this robot have feelings? It would, I believe, have feelings by virtue of being able to model the consequences of actions - both its own and others' - on itself and on others. This would lead to it making decisions about how to act and behave which demonstrate feelings, like regret, guilt, pleasure or even love, with an authenticity that would make it impossible to argue that it doesn't really have feelings.
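To make the idea a little more concrete, here is a minimal, purely illustrative sketch in Python of a robot that uses an internal self-model to simulate candidate actions and score their predicted consequences for itself and for others. Everything in it - the class names, the hand-coded outcome table, the weighting, and the crude 'regret' measure - is my own invented toy example, not a real robot architecture; it is only meant to show the shape of the computation a self-model might support.

```python
# Toy sketch of a 'self-model' robot: it internally simulates candidate actions,
# predicts the consequences for itself and for others, and picks the best one.
# All names, numbers and predictions here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Outcome:
    self_wellbeing: float    # predicted effect of the action on the robot itself
    other_wellbeing: float   # predicted effect on others (robots or humans)

class SelfModel:
    """A crude internal simulation: maps (state, action) to a predicted Outcome."""

    def predict(self, state: dict, action: str) -> Outcome:
        # A real system would run an internal physics/behaviour simulation here;
        # this toy version just looks up hand-coded predictions.
        table = {
            "keep_working":    Outcome(self_wellbeing=0.2,  other_wellbeing=0.5),
            "let_human_pass":  Outcome(self_wellbeing=-0.1, other_wellbeing=0.9),
            "switch_self_off": Outcome(self_wellbeing=-1.0, other_wellbeing=0.0),
        }
        return table[action]

def choose_action(model: SelfModel, state: dict, actions: list[str],
                  empathy_weight: float = 2.0) -> tuple[str, float]:
    """Pick the action whose simulated consequences score best.

    The score weights predicted benefit or harm to others more heavily than to
    the robot itself (empathy_weight). The gap between the best and worst
    options is returned as a crude stand-in for the 'regret' the robot would
    model if it were forced to take the worst action instead.
    """
    scored = []
    for action in actions:
        outcome = model.predict(state, action)
        score = outcome.self_wellbeing + empathy_weight * outcome.other_wellbeing
        scored.append((action, score))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    best_action, best_score = scored[0]
    regret_if_denied = best_score - scored[-1][1]
    return best_action, regret_if_denied

if __name__ == "__main__":
    model = SelfModel()
    action, regret = choose_action(model, state={}, actions=[
        "keep_working", "let_human_pass", "switch_self_off"])
    print(f"chosen action: {action}, regret if denied: {regret:.2f}")
```

The point of the sketch is not the numbers, which are arbitrary, but the structure: the robot's choices are driven by an internal model of consequences for itself and for others, which is where something like regret, or empathy, could plausibly take root.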

So your robot doggie could really be pleased to see you.

*except when they're not.


Postscript. A colleague has tweeted that I am confusing feelings and emotion here. Mea culpa. I'm using the word feelings in a pop-psychology, everyday sense: feeling hungry, or tired, or a sense of regret. Wikipedia defines feelings, in psychology, as a term 'usually reserved for the conscious subjective experience of emotion'. The same colleague asserts that what I've outlined could lead to artificial empathy, but not artificial emotion (or, presumably, feelings). I'm not sure I understand what emotions are well enough to argue. But I guess the idea I'm really trying to explore here is artificial subjectivity. Surely a robot with artificial subjectivity whose behaviour expresses and reflects that subjective experience could be said to be behaving emotionally?

Related blog posts:
Robot know thyself
Could a robot have feelings?

Thursday, June 26, 2014

The Next Big Things in Robotics

Last week I attended the launch event for a new NESTA publication called Our work here is done: Visions of a Robot Economy. It was an interesting event, and not at all what I was expecting. In fact I didn't know what to expect. Even though I contributed a chapter to the book I had no idea, until last week, who else had written for it - or the scope of those contributions and the book as a whole. I was very pleasantly surprised. First, because it was great to find myself in such good company: economists, philosophers, historians, (ex-) financiers and all-round deep thinkers. And second, because the volume faces up to some of the difficult societal questions raised by second-wave robotics.

The panel discussion was excellent, and the response by economist Carlota Perez was engaging and thought-provoking - check here for the Storified tweets and pictures. Perhaps the thing that surprised me the most, given the serious economists on the panel (FT, The Economist), was that the panel ended up agreeing that the Robot Economy will necessitate something like a Living Wage. Music to this socialist's ears.

In my contribution, The Next Big Things in Robotics (pages 38-44), I do a bit of near-future gazing and suggest four aspects of robotics that will, I think, be huge. They are:
  • Wearable Robotics
  • Immersive Teleoperated Robots
  • Driverless Cars
  • Soft Robotics
To see why I chose these - and to read the other great articles - please download the book. Let me know if you disagree with my choices, or to suggest other Next Big Things in robotics. I end my chapter with a section called What's not coming soon: super intelligent robots:
"My predicted things that will be really big in robotics don't need to be super intelligent. Wearable robots will need advanced adaptive (and very safe and reliable) control systems, as well as advanced neural–electronics interfaces, and these are coming. But ultimately it’s the human wearing the robot who is in charge. The same is true for teleoperated robots: again, greater low–level intelligence is needed, so that the robot can operate autonomously some of the time but ask for help when it can’t figure out what to do next. But the high–level intelligence remains with the human operator and – with advanced immersive interfaces as I have suggested – human and robot work together seamlessly. The most autonomous of the next big things in robotics is the driverless car, but again the car doesn't need to be very smart. You don't need to debate philosophy with your car – just trust it to take you safely from A to B."


Related blog posts:
New Robotics and New Opportunities
Soft Robotics in Space
Google robot car: Great but proving the AI is safe is the real challenge
Why robots will not be smarter than humans by 2029

Friday, February 28, 2014

Why robots will not be smarter than humans by 2029

In the last few days we've seen a spate of headlines like 2029: the year when robots will have the power to outsmart their makers, all occasioned by an Observer interview with Google's newest director of engineering Ray Kurzweil.

Much as I respect Kurzweil's achievements as an inventor, I think he is profoundly wrong. Of course I can understand why he would like it to be so - he would like to live long enough to see this particular prediction come to pass. But optimism doesn't make for sound predictions. Here are several reasons that robots will not be smarter than humans by 2029.

  • What exactly does as-smart-as-humans mean? Intelligence is very hard to pin down. One thing we do know about intelligence is that it is not one thing that humans or animals have more or less of. Humans have several different kinds of intelligence - all of which combine to make us human. Analytical or logical intelligence of course - the sort that makes you good at IQ tests. But emotional intelligence is just as important, especially (and oddly) for decision making. So is social intelligence - the ability to intuit others' beliefs, and to empathise. 
  • Human intelligence is embodied. As Rolf Pfeifer and Josh Bongard explain in their outstanding book, you can't have one without the other. The old Cartesian dualism - the dogma that body (the hardware) and mind (the software) are distinct and separable - is wrong and deeply unhelpful. We now understand that the hardware and software have to be co-designed. But we really don't understand how to do this - none of our engineering paradigms fit. A whole new approach needs to be invented.
  • As-smart-as-humans probably doesn't mean as-smart-as newborn babies, or even two-year-old infants. It probably means somehow-comparable-in-intelligence-to adult humans. But an awful lot happens between birth and adulthood. And the Kurzweilians probably also mean as-smart-as-well-educated-humans. But of course this requires both development - a lot of which somehow happens automatically - and a great deal of nurture. Again, we are only just beginning to understand the problem, and developmental robotics - if you'll forgive the pun - is still in its infancy.
  • Moore's Law will not help. Building human-equivalent robot intelligence needs far more than just lots of computing power. It will certainly need computing power, but that's not all. It's like saying that all you need to build a cathedral is loads of marble. You certainly do need large quantities of marble - the raw material - but without at least two other things - the design for a cathedral, and the know-how to realise that design - there will be no cathedral. The same is true for human-equivalent robot intelligence.
  • The hard problem of learning and the even harder problem of consciousness. (I'll concede that a robot as smart as a human doesn't have to be conscious - a philosopher's-zombie-bot would do just fine.) But the human ability to learn, then generalise that learning and apply it to completely different problems, is fundamental, and remains an elusive goal for robotics and AI. The pursuit of this general-purpose ability is called Artificial General Intelligence, and it remains as controversial as it is unsolved.

These are the reasons I can be confident in asserting that robots will not be smarter than humans within 15 years. It's not just that building robots as smart as humans is a very hard problem. We have only recently begun to understand the problem well enough to know that whole new theories (of intelligence, emergence, embodied cognition and development, for instance) will be needed, as well as new engineering paradigms. Even if we had solved these problems, and a present-day Noonian Soong had already built a robot with the potential for human-equivalent intelligence, it still might not have enough time to develop adult-equivalent intelligence by 2029.

That thought leads me to another reason that it's unlikely to happen so soon. There is - to the best of my knowledge - no very-large-scale multidisciplinary research project addressing, in a coordinated way, all of the difficult problems I have outlined here. The irony is that there might have been: the project was called Robot Companions, and it made it to the EU FET 10-year Flagship project shortlist but was not funded.


Saturday, February 22, 2014

What does it mean to have giants like Google, Apple and Amazon investing in robotics?

This was the latest question posed to the Robotics by Invitation panel on Robohub. Here, reposted, is my answer.

Judging by the levels of media coverage and frenzied speculation that have followed each acquisition, the short answer to what it means is: endless press exposure. I almost wrote 'priceless exposure', but then these are companies with very deep pockets; nevertheless the advertising value equivalent must be very high indeed. The coverage really illustrates the fact that these companies have achieved celebrity status. They are the Justin Biebers of the corporate world. Whatever they do, whether it is truly significant or not, is met with punditry and analysis about what it means. A good example is Google's recent acquisition of British company DeepMind. In other words: large AI company buys small AI company. Large companies buy small companies all the time, but mostly they don't make prime-time news. It's the Bieberisation of the corporate world.

But the question is about robotics, and to address it in more detail I think we need to think about the giants separately.

Take Amazon. We think of Amazon as an Internet company, but the web is just its shop window. Behind that shop window is a huge logistics operation with giant warehouses – Amazon's distribution centres – so no one should be at all surprised by their acquisition of brilliant warehouse automation company Kiva Systems. Amazon's recent stunt with the 'delivery drone' was, I think, just that – a stunt. Great press. But I wouldn't be at all surprised to see more acquisitions toward further automation of Amazon's distribution and delivery chain.

Apple is equally unsurprising. They are a manufacturing company with a justifiable reputation for super high quality products. As an electronics engineer who started his career by taking wirelesses and gramophones apart as a boy, I'm fascinated by the tear-downs that invariably follow each new product release. It's obvious that each new generation of Apple devices is harder to manufacture than the last. Precision products need precision manufacture, and it is no surprise that Apple is investing heavily in the machines needed to make its products.

Google is perhaps the least obvious candidate to invest in robotics. You could of course take the view that a company with more money than God can make whatever acquisitions it likes without needing a reason – that these are vanity acquisitions. But I don't think that's the case. I think Google has its eyes on the long term. It is an Internet company and the undisputed ruler of the Internet of Information. But computers are no longer the only things connected to the Internet. Real-world devices are increasingly networked – the so-called Internet of Things. I think Google doesn't want to be usurped by a new super-company that emerges as the Google of real-world stuff. It's not quite sure how the transition to the future Internet of Everything will pan out, but figures that mobile robots – as well as smart environments – will feature heavily in that future. I think Google is right. I think it's buying into robotics because it wants to be a leader and shape the future of the Internet of Everything.


Do please read the other panelists' answers - all interesting, and different!