Wednesday, July 31, 2019

On the simulation (and energy costs) of human intelligence, the singularity and simulationism

For many researchers the Holy Grail of robotics and AI is the creation of artificial persons: artefacts with general competencies equivalent to those of humans. Such artefacts would literally be simulations of humans. Some researchers are motivated by the utility of AGI; others have an almost religious faith in the transhumanist promise of the technological singularity. Others, like myself, are driven only by scientific curiosity. Simulations of intelligence provide us with working models of (elements of) natural intelligence. As Richard Feynman famously said, ‘What I cannot create, I do not understand’. Used in this way simulations are like microscopes for the study of intelligence; they are scientific instruments.

Like all scientific instruments, simulations need to be used with great care; they must be calibrated, validated and – most importantly – their limitations understood. Without that understanding, any claims to new insights into the nature of intelligence – or for the quality and fidelity of an artificial intelligence as a model of some aspect of natural intelligence – should be regarded with suspicion.

In this essay I have critically reflected on some of the predictions for human-equivalent AI (AGI); the paths to AGI (especially via artificial evolution); the technological singularity; and the idea that we are ourselves simulations in a simulated universe (simulationism). The quest for human-equivalent AI clearly faces many challenges. One (perhaps stating the obvious) is that it is a very hard problem. Another, as I have argued in this essay, is that the energy costs are likely to limit progress.

However, I believe that the task is made even more difficult for two further reasons. The first is – as hinted above – that we have failed to recognize simulations of intelligence (which all AIs and robots are) as scientific instruments, which need to be designed and operated, and their results interpreted, with no less care than we would a particle collider or the Hubble telescope.

The second, and more general, observation is that we lack a general (mathematical) theory of intelligence. This lack of theory means that a significant proportion of AI research is not hypothesis-driven, but incrementalist and ad hoc. Of course such an approach can lead, and is leading, to interesting and (commercially) valuable advances in narrow AI. But without strong theoretical foundations, the grand challenge of human-equivalent AI seems rather like trying to build particle accelerators to understand the nature of matter without the Standard Model of particle physics.

The text above is the concluding discussion of my essay On the simulation (and energy costs) of human intelligence, the singularity and simulationism, which appears in an edited collection of essays in the book From Astrophysics to Unconventional Computation. Published in April 2019, the book marks the 60th birthday of astrophysicist, computer scientist and all-round genius Susan Stepney.

Note: regular visitors to the blog will recognise themes covered in several previous blog posts, brought together in, I hope, a coherent and interesting way.

Monday, July 29, 2019

Ethical Standards in Robotics and AI: what they are and why they matter

Here are the slides for my keynote, presented this morning at the International Conference on Robot Ethics and Standards (ICRES 2019). The talk is based on my paper Ethical Standards in Robotics and AI, published in Nature Electronics a few months ago (here is a pre-print).

To see the speaker notes, click on the options button on the Google Slides toolbar above.