Here is the submitted text for the article "Artificial intelligence will not turn into a Frankenstein's monster", published in The Observer, Sunday 10 August 2014.
The singularity. Or to give it its proper title, the technological singularity. It's a Thing. An idea that has taken on a life of its own; more of a life, I suspect, than the very thing it predicts ever will. It's a Thing for the techno-utopians: wealthy middle-aged men who regard the singularity as their best chance of immortality. They are Singularitarians, some of whom appear prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-AI - a manmade god that grants transcendence.
And it's a Thing for the doomsayers, the techno-dystopians.
Apocalypsarians who are equally convinced that a superintelligent AI will have
no interest in curing cancer or old age, or ending poverty, but will instead -
malevolently or maybe just accidentally - bring about the end of human
civilisation as we know it. History and Hollywood are on their side. From the
Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by
the old story: man plays god and then things go horribly wrong.
The singularity is basically the idea that as soon as artificial intelligence exceeds human intelligence, everything changes. There are two central planks to the singularity hypothesis: one is the idea that as soon as we succeed in building AI as smart as humans, it rapidly re-invents itself to be even smarter, starting a chain reaction of smarter-AI inventing even-smarter-AI until even the smartest humans cannot possibly comprehend how the superintelligent AI works. The other is that the future of humanity becomes unpredictable and in some sense out of control from the moment of the singularity onwards.
So, should we be worried, or optimistic, about the technological singularity? Well I think we should be a little worried – cautious and prepared may be a better way of putting it – and at the same time a little optimistic (that’s the part of me that would like to live in Iain M Banks’ The Culture). But I don’t believe we need to be obsessively worried by a hypothesised existential risk to humanity. Why? Because, for the risk to become real, a sequence of things all need to happen. It’s a sequence of big ifs.
If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works [1], and if it then succeeds in improving itself to produce super-intelligent AI [2], and if that super-AI, either accidentally or maliciously, starts to consume resources, and if we fail to pull the plug then, yes, we may well have a problem. The risk, while not impossible, is improbable.
By worrying unnecessarily I think we’re falling into a trap: the fallacy of privileging the hypothesis. And – perhaps worse – taking our eyes off other risks that we should really be worrying about: like man-made climate change, or bioterrorism. Let me illustrate what I mean. Imagine I ask you to consider the possibility that we invent faster than light travel sometime in the next 100 years. Then I worry you by outlining all sorts of nightmare scenarios that might follow from the misuse of this technology. At the end of it you’ll be thinking: my god, never mind climate change, we need to stop all FTL research right now.
Wait a minute, I hear you say, there are lots of AI systems in the world already, surely it’s just a matter of time? Yes, we do have lots of AI systems, like chess programs, search engines or automated financial transaction systems, or the software in driverless cars. And some AI systems are already smarter than most humans, like chess programs or language translation systems. Some are as good as some humans, like driverless cars or natural speech recognition systems (like Siri), and will soon be better than most humans. But none of this already-as-smart-as-some-humans AI has brought about the end of civilisation (although I'm suspiciously eyeing the financial transaction systems). The reason is that these are all narrow-AI systems: very good at doing just one thing.
A human-equivalent AI would need to be a generalist, like us humans. It would need to be able to learn, most likely by developing over the course of some years, then generalise what it has learned – in the same way that you and I learned as toddlers that wooden blocks could be stacked, banged together to make a noise, or stood on to reach a bookshelf. It would need to understand meaning and context, be able to synthesise new knowledge, have intentionality and – in all likelihood – be self-aware, so that it understands what it means to have agency in the world.
There is a huge gulf between present day narrow-AI systems and the kind of Artificial General Intelligence I have outlined [3]. Opinions vary of course, but I think it’s as wide a gulf as that between current space flight and practical faster than light spaceflight; wider perhaps, because we don’t yet have a theory of general intelligence, whereas there are several candidate FTL drives consistent with general relativity, like the Alcubierre drive.
So I don’t think we need to be obsessing about the risk of superintelligent AI but, as hinted earlier, I do think we need to be cautious and prepared. In a Guardian podcast last week philosopher Nick Bostrom explained that there are two big problems, which he calls competency and control. The first is how to make superintelligent AI, the second is how to control it (i.e. to mitigate the risks). He says hardly anyone is working on the control problem, whereas loads of people are going hell for leather on the first. On this I 100% agree, and I’m one of the small number of people working on the control problem.
I’ve been a strong advocate of robot ethics for a number of years. In 2010 I was part of a group that drew up a set of principles of robotics – principles that apply equally to AI systems. I strongly believe that science and technology research should be undertaken within a framework of responsible innovation, and have argued that we should be thinking about subjecting robotics and AI research to ethical approval, in the same way that we do for human subject research. And recently I’ve started work towards making ethical robots. This is not just to mitigate future risks, but because the kind of not-very-intelligent robots we make in the very near future will need to be ethical as well as safe. I think we should be worrying about present day AI rather than future superintelligent AI.
Here are the comments posted in response to this article. I replied to a number of these, but ran out of time before comments were closed on 13 August. If you posted a late comment and didn't get a reply from me (but were expecting one) please re-post your comment here.
Notes:
[1] Each of these ifs needs detailed consideration. I really only touch upon the first here: the likelihood of achieving human equivalent AI (or AGI). But consider the second: for that AGI to understand itself well enough to then re-invent itself - hence triggering an intelligence explosion - is not a given. An AGI as smart and capable as most humans would not be sufficient - it would need to have the complete knowledge of its designer (or more likely the entire team who designed it) - and then some more: it would need to be capable of additional insights that somehow its team of human designers missed. Not impossible, but surely very unlikely.
[2] Take the third if: the AGI succeeds in improving itself. There seems to me no sound basis for arguing that it should be easy for an AGI - even one as smart as a very smart cognitive scientist - to figure out how to improve itself. Surely it is more logical to suppose that each incremental increase in intelligence will be harder than the last, acting as a brake on the self-improving AI. Thus I think an intelligence explosion is also very unlikely.
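To make the shape of that brake argument concrete, here is a toy numerical sketch in Python (my own illustration, not from the article; the function names, growth factor and effort constant are arbitrary assumptions). It compares a constant multiplicative self-improvement - the explosion assumption - with one in which each increment is inversely proportional to the current level of intelligence - the brake assumption.

# Toy model (illustrative only): two assumptions about recursive self-improvement.

def explosion(i0=1.0, factor=1.5, generations=20):
    # Explosion assumption: each generation multiplies its intelligence
    # by a fixed factor, so growth is exponential.
    levels = [i0]
    for _ in range(generations):
        levels.append(levels[-1] * factor)
    return levels

def braked(i0=1.0, effort=1.0, generations=20):
    # Brake assumption: each increment is inversely proportional to the
    # current level, i.e. the smarter the system, the harder the next step.
    levels = [i0]
    for _ in range(generations):
        levels.append(levels[-1] + effort / levels[-1])
    return levels

if __name__ == "__main__":
    print("explosion:", [round(x) for x in explosion()][-3:])     # runaway growth
    print("braked:   ", [round(x, 1) for x in braked()][-3:])     # slow, flattening growth

Under the brake assumption the level after n generations grows only like the square root of n - the opposite of a runaway chain reaction.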
[3] One of the most compelling explanations for the profound difficulty of AGI is by David Deutsch: "Philosophy will be the key that unlocks artificial intelligence".
Related blog posts:
Why robots will not be smarter than humans by 2029
Estimating the energy cost of evolution
Ethical Robots: some technical and ethical challenges