Friday, June 28, 2019

Energy and Exploitation: AI's dirty secrets

A couple of days ago I gave a short 15-minute talk at an excellent 5x15 event in Bristol. The talk I actually gave was different to the one I'd originally suggested. Two things prompted the switch: one was seeing the amazing line-up of speakers on the programme - all covering more or less controversial topics - and the other was my increasing anger in recent months over the energy and human costs of AI. So it was that I wrote a completely new talk the day before this event.

But before I get to my talk I must mention the other amazing speakers: we heard Philippa Perry speaking on child-parent relationships, Hallie Rubenhold on the truth about Jack the Ripper's victims, Jenny Riley speaking very movingly about One25's support for Bristol's (often homeless) sex workers, and Amy Sinclair introducing her activism with Extinction Rebellion.

Here is the script for my talk (for the slides go to the end of this blog post).


Artificial Intelligence and Machine Learning are often presented as bright, clean new technologies with the potential to solve many of humanity's most pressing problems.

We already enjoy the benefit of truly remarkable AI technology, like machine translation and smart maps. Driverless cars might help us get around before too long, and DeepMind's diagnostic AI can detect eye diseases from retinal scans as accurately as a doctor.

Before getting into the ethics of AI I need to give you a quick tutorial on machine learning. The most powerful and exciting AI today is based on Artificial Neural Networks. Here [slide 3] is a simplified diagram of a Deep Learning network for recognising images. Each small circle is a *very* simplified mathematical model of a biological neuron, and the outputs of each layer of artificial neurons feed the inputs of the next layer. In order to be able to recognise images the network must first be trained with images that are already labelled - in this case my dog Lola.

But in order to reliably recognise Lola the network needs to be trained not with one picture of Lola but many. This set of images is called the training data set, and without a good data set the network will either not work at all or will be biased. (In reality we would need not four but hundreds of images of Lola.)
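
For readers who like to see how this looks in practice, here is a minimal sketch of such a training loop - not the network on the slide, just an illustrative toy in PyTorch, with a made-up image size and a two-class "Lola / not Lola" output:

```python
# A toy sketch of training an image classifier (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(            # a tiny "deep" network: two layers
    nn.Flatten(),                 # 64x64 RGB image -> 12288 inputs
    nn.Linear(64 * 64 * 3, 128),  # first layer of artificial neurons
    nn.ReLU(),                    # non-linearity between the layers
    nn.Linear(128, 2),            # output layer: "Lola" vs "not Lola"
)

optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for a real labelled training set: 8 images, labels 1 = Lola.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for epoch in range(10):                    # repeat over the training data
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the outputs?
    loss.backward()                        # propagate the error backwards
    optimiser.step()                       # nudge every weight slightly
```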

So what does an AI ethicist do? Well, the short answer is worry. I worry about the ethical and societal impact of AI on individuals, society and the environment. Here are some keywords on ethics [slide 4], reflecting that we must work toward AI that respects Human Rights, diversity and dignity, is unbiased and sustainable, transparent, accountable and socially responsible.

But I do more than just worry. I also take practical steps like drafting ethical principles, and helping to write ethical standards for the British Standards Institution and the IEEE Standards Association. I lead P7001: a new standard on transparency of autonomous systems, based on the simple ethical principle that it should always be possible to find out why an AI made a particular decision. I have given evidence in Parliament several times, and recently took part in a study of AI and robotics in healthcare and what this means for the workforce of the NHS.

Now I want to share two serious new worries with you.

The first is about the energy cost of AI. In 2016 Go champion Lee Sedol was famously defeated by DeepMind's AlphaGo. It was a remarkable achievement for AI. But consider the energy cost. In a single two-hour match Sedol burned around 170 kcals: roughly the amount of energy you would get from an egg sandwich, or about the power of a 100 Watt lightbulb over those two hours. In the same two hours the AlphaGo machine reportedly consumed 50,000 times more energy than Sedol - around 5 megawatts. And that's not taking account of the energy used to train AlphaGo.
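
Those figures are easy to check on the back of an envelope (the 50,000x multiplier is the reported estimate; the rest is arithmetic):

```python
# Back-of-the-envelope check of the match-energy figures above.
KCAL_TO_JOULES = 4184                 # 1 kcal = 4184 J
match_seconds = 2 * 60 * 60           # a two-hour match

sedol_energy_j = 170 * KCAL_TO_JOULES           # 170 kcal ~ 711 kJ
sedol_power_w = sedol_energy_j / match_seconds  # ~99 W: a 100 W lightbulb

alphago_power_w = sedol_power_w * 50_000        # reported 50,000x multiplier
print(f"Sedol:   {sedol_power_w:.0f} W")            # Sedol:   99 W
print(f"AlphaGo: {alphago_power_w / 1e6:.1f} MW")   # AlphaGo: 4.9 MW
```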

Now some people think we can make human-equivalent AI by simulating the human brain. But the most complex animal brain so far simulated is that of C. elegans - the nematode worm. It has 302 neurons and about 5,000 synapses - the connections between neurons. A couple of years ago I worked out that simulating a neural network for a simple robot, with only a tenth the number of neurons of C. elegans, costs 2,000 times more energy than the whole worm.

In a new paper, published just a few days ago, we have for the first time estimates of the carbon cost of training large AI models for natural language processing, such as machine translation [1]. The carbon cost of training a simple model is quite modest, but with tuning and experimentation it leaps to 7 times the carbon footprint of an average human over one year (or about twice, if you compare with an average American).

And the energy cost of optimising the biggest model, using neural architecture search, is a staggering 5 times the carbon cost of a car over its whole lifetime, including its manufacture. The dollar cost of that amount of energy is estimated at between one and three million US dollars - something only companies with very deep pockets can afford.
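
For transparency, here is the arithmetic behind those multipliers, using the CO2e figures (in lbs) reported in [1]; treat the exact values as the paper's estimates, not mine:

```python
# Ratios behind the claims above, from the CO2e estimates (lbs) in [1].
co2e_lbs = {
    "train + tune one NLP model (full R&D)": 78_468,
    "neural architecture search on the biggest model": 626_155,
    "average human, one year": 11_023,
    "average American, one year": 36_156,
    "car over its whole lifetime (incl. fuel)": 126_000,
}

tuning = co2e_lbs["train + tune one NLP model (full R&D)"]
search = co2e_lbs["neural architecture search on the biggest model"]

print(tuning / co2e_lbs["average human, one year"])     # ~7.1 human-years
print(tuning / co2e_lbs["average American, one year"])  # ~2.2 American-years
print(search / co2e_lbs["car over its whole lifetime (incl. fuel)"])  # ~5.0 cars
```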

These energy costs seem completely at odds with the urgent need to halve carbon dioxide emissions by 2030. At the very least AI companies need to be honest about the huge energy costs of machine learning.

Now I want to turn to the human cost of AI. It is often said that one of the biggest fears around AI is the loss of jobs. In fact the opposite is happening. Many new jobs are being created, but the tragedy is that they are not great jobs, to say the least. Let me introduce you to two of these new kinds of jobs.

The first is AI tagging. This is manually labelling objects in images to, for instance, generate training data sets for driverless car AIs. Better (and safer) AI needs huge training data sets and a whole new outsourced industry has sprung up all over the world to meet this need. Here [slide 9] is an AI tagging factory in China.
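
To give a feel for the work itself, here is a sketch of what a single unit of tagging output might look like - one hand-drawn bounding box in a COCO-style record (the field names and values are illustrative assumptions, not from any real dataset):

```python
# One unit of AI-tagging work: a single labelled bounding box.
# Field names and values are illustrative assumptions only.
annotation = {
    "image_id": 4021,               # which photo was tagged
    "category": "pedestrian",       # what the tagger identified
    "bbox": [312, 144, 58, 120],    # x, y, width, height in pixels
    "tagged_by": "worker_183",      # anonymised annotator id
}
# A driverless-car training set needs millions of records like this,
# each one drawn by hand by a human tagger.
```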

Conversational AI, or chatbots, also need human help. Amazon, for instance, employs thousands of full-time employees and contract workers to listen to and annotate speech. The tagged speech is then fed back to Alexa to improve its comprehension. And last month the Guardian reported that Google employs around 100,000 temps, vendors and contractors: an army of linguists working in "white collar sweatshops" to create the hand-crafted data sets required for Google Translate to learn dozens of languages. Not surprisingly there is a huge disparity between the wages and working conditions of these workers and Google's full-time employees.

AI tagging jobs are dull and repetitive - and, in the case of the linguists, highly skilled. But by far the worst kind of new white-collar job in the AI industry is content moderation.

Tens of thousands of people, employed by third-party contractors, are required to watch and vet offensive content - hate speech, violent pornography, cruelty and sometimes murder, of both animals and humans - for Facebook, YouTube and other media platforms [2]. These jobs are not just dull and repetitive; they are positively dangerous. Harrowing reports tell of PTSD-like trauma symptoms, panic attacks and burnout after a year, alongside micromanagement, poor working conditions and ineffective counselling. And very poor pay: typically $28,800 a year. Compare this with average annual salaries at Facebook of around $240,000.

The big revelation to me over the past few months is the extent to which AI has a human supply chain - and I am an AI insider! The genius designers of this amazing tech rely on both huge amounts of energy and a hidden army of what Mary Gray and Siddharth Suri call Ghost Workers.

I would like to leave you with a question: how can we, as ethical consumers, justify continuing to make use of unsustainable and unethical AI technologies?





References:

[1] Emma Strubell, Ananya Ganesh, Andrew McCallum (2019) Energy and Policy Considerations for Deep Learning in NLP, arXiv:1906.02243
[2] Sarah Roberts (2016) Digital Refuse: Canadian Garbage, Commercial Content Moderation and the Global Circulation of Social Media's Waste, Media Studies Publications, 14.

2 comments:

  1. The more you look at it the less spectacular AI seems. While there's obviously a lot of potential, is there yet any industrial application of machine learning, neural networks etc., which can't be handled better by a normal hard-coded program with a human-defined set of rules? Evolved algorithms have the clear advantage that, while they weren't human-produced, they are still human-readable and can, when necessary, be modified to fix bugs, or simply used as inspiration for a human-coded piece of software. Neural networks, on the other hand, look like they work fine right up until a black dot turns a banana into a toaster.

  2. Thanks for sharing these very interesting considerations!! I only disagree with the negative conclusion on the "ethics of AI". AI (unlike in books and movies like I, Robot and Blade Runner) has no ethics. Humans (e.g. company managers and consumers) do.
    Worker exploitation has nothing to do with the underlying technology, which by and large is neutral (nothing in AI algorithms implies that linguists must be paid less than programmers; in fact, since accurate labelling is at the basis of the "quality" of all ML, while algorithms are inherently robust to noise, linguists should be paid more :-) ).
    The fact that most of our fruit and vegetables are picked by illegal immigrants from Africa or Mexico, depending on the continent, does not mean that eating them is immoral. :-) :-)
    In both cases we, as consumers, must make companies legally responsible for the fairness of their employment practices, and workers must unite again, rather than competing against each other, to ensure fair wages.
    In that respect, we are just beginning to understand how workers' rights can be guaranteed in an (unavoidably) globalised world. I can only hope that progress in that domain, supported by the kind of consumer awareness that you rightly advocate, will be almost as swift as the technological progress we have witnessed.
