Monday, June 09, 2025

AI and why we should all be worried - AI's dirty secrets

I gave a new talk on AI for the Swindon Science Cafe on 3rd June. The slides, with my notes, are below. This talk is an updated version of a short talk I gave in June 2019.

Slide 1

Hi, my name is Alan Winfield. Thank you Rod and Claire for inviting me to speak this evening.

Slide 2

So, what does a robot and AI ethicist do? Well, the short answer is worry.

I worry about the ethical and societal impact of AI on individuals, society and the environment. Here are some keywords on ethics, reflecting that we must work toward AI that respects human rights, diversity and dignity, is unbiased and sustainable, transparent, accountable and socially responsible. I also work on standards with both the British Standards Institution (BSI) and the IEEE Standards Association.

Slide 3

Before getting into the ethics of AI I need to give you a quick tutorial on machine learning.

The most powerful and exciting AI today is based on Artificial Neural Networks (ANNs). Here is a simplified diagram of a Deep Learning network for recognising images. Each small circle is a *very* simplified mathematical model of a biological neuron, and the outputs of each layer of artificial neurons feed the inputs of the next layer. In order to be able to recognise images the network must first be trained with images that are already labelled - in this case my dear late dog Lola.

But in order to reliably recognise Lola the network needs to be trained not with one picture of Lola but with many. This set of images is called the training data set, and without a good data set the network will either not work at all or will be biased. (In reality we would need not four but hundreds of images of Lola.)
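To make the idea of training concrete, here is a minimal sketch in Python/NumPy. It is emphatically not the network on the slide: the "images" are just synthetic four-pixel vectors and the "network" is a single artificial neuron, but the loop - predict, compare with the label, nudge the weights - is the essence of supervised learning.

```python
# A minimal, toy sketch of supervised learning (all data synthetic):
# a single artificial neuron learns to separate two classes of
# labelled four-pixel "images" by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 100 labelled examples per class. Class 1 ("Lola")
# has brighter pixels on average than class 0 ("not Lola").
X = np.vstack([rng.normal(0.3, 0.1, size=(100, 4)),   # not Lola
               rng.normal(0.7, 0.1, size=(100, 4))])  # Lola
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(4), 0.0  # weights (one per "pixel") and bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training loop: predict, measure the error against the labels,
# and nudge the weights to reduce that error.
for _ in range(500):
    p = sigmoid(X @ w + b)             # predictions in [0, 1]
    w -= 0.5 * X.T @ (p - y) / len(y)  # gradient of cross-entropy loss
    b -= 0.5 * np.mean(p - y)

test = rng.normal(0.7, 0.1, size=4)        # a new, unseen "Lola-like" image
print("P(Lola) =", sigmoid(test @ w + b))  # close to 1 after training
```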

But even a simple ANN can get things wrong. A famous example was an ANN like this trained on pictures of wolves. After training, a picture of a bear was presented, but the network identified it as a wolf. Why? Because all of the wolf pictures had snowy backgrounds, so the network had learned to recognise snow, not wolves. The bear picture also had a snowy background.
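The same toy setup reproduces this failure. In the sketch below (again entirely synthetic: a made-up "animal" feature and a made-up "snow" feature) the snow feature is perfectly correlated with the wolf label in the training data, so the learned weights latch onto the background and ignore the animal:

```python
# A toy reproduction of the wolf/snow failure: the "snow" feature is
# perfectly correlated with the "wolf" label in training, so the model
# learns the background rather than the animal.
import numpy as np

rng = np.random.default_rng(1)
n = 200
animal = rng.normal(size=n)                 # feature of the animal itself (pure noise here)
snow = np.array([1.0] * 100 + [0.0] * 100)  # background: every wolf photo is snowy
y = np.array([1] * 100 + [0] * 100)         # 1 = wolf, 0 = not wolf
X = np.column_stack([animal, snow])

w, b = np.zeros(2), 0.0
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * X.T @ (p - y) / n
    b -= 0.5 * np.mean(p - y)

print("weights [animal, snow]:", w)  # nearly all the weight is on 'snow'
bear_on_snow = np.array([rng.normal(), 1.0])
print("P(wolf | bear on snow) =", sigmoid(bear_on_snow @ w + b))  # ~1: misclassified
```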

This kind of failure is a forerunner of what we now call a ‘hallucination’ in big AIs.

Slide 4

We’ve been worrying about the existential threat of AI for a long time: here is a piece I wrote for the Guardian in 2014, when ‘the singularity’ was the main thing we worried about.

The singularity is the idea that as soon as AI is smarter than humans, AIs will rapidly improve themselves, with unforeseeable consequences for human civilisation.

But the singularity is a Thing for the techno-utopians: wealthy middle-aged men who regard it as their best chance of immortality. They are the Singularitarians, some of whom appear prepared to go to extremes to stay alive long enough to benefit from a benevolent super-AI - a man-made god that grants transcendence.

And it's a Thing for the doomsayers, the techno-dystopians: Apocalypsarians who are equally convinced that a superintelligent AI will have no interest in curing cancer or old age, or ending poverty, but will instead - malevolently or maybe just accidentally - bring about the end of human civilisation as we know it. History and Hollywood are on their side. From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong.

Slide 5

Today we have influential scientists who worry about Artificial General Intelligence (AGI). Notable among these is physicist and cosmologist Max Tegmark.

At the Paris AI Action Summit in February 2025 Tegmark argued that we need a middle pathway between no AI and uncontrollable AGI, which he calls guaranteed safe tool AI. He suggested a policy solution in which the US and China each launch national safety standards preventing their own AI companies from building AGI, leading to what Tegmark rather optimistically calls ‘an age of unprecedented global prosperity powered by safe tool AI’.

Is there an existential threat from AI itself? No. I fear human stupidity much more than artificial intelligence.

So should we be worried? Yes. But the things I worry about are rather more down to earth.

In the rest of this talk I will consider the energy costs of AI, then the human costs.

Slide 6

In 2016 Go champion Lee Sedol was famously defeated by DeepMind's AlphaGo. It was a remarkable achievement for AI. But consider the energy cost. In a single two-hour match Sedol burned around 170 kcal: roughly the amount of energy you would get from an egg sandwich, or a power consumption of about 100 watts - the same as an old-fashioned incandescent light bulb.

In the same two hours the AlphaGo machine reportedly consumed about 50,000 watts: the same as a 50 kW generator for industrial lighting. And that's not taking account of the energy used to train AlphaGo.
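As a quick sanity check on those figures (a back-of-envelope conversion, taking the reported 170 kcal and 50 kW at face value):

```latex
P_{\mathrm{Sedol}} \approx \frac{170\ \mathrm{kcal} \times 4184\ \mathrm{J/kcal}}{2 \times 3600\ \mathrm{s}} \approx 100\ \mathrm{W}
\qquad\Rightarrow\qquad
\frac{P_{\mathrm{AlphaGo}}}{P_{\mathrm{Sedol}}} \approx \frac{50\,000\ \mathrm{W}}{100\ \mathrm{W}} = 500
```

In other words, AlphaGo drew roughly 500 times more power than its human opponent.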

Slide 7

A paper published in 2019 revealed, for the first time, estimates of the carbon cost of training large AI models for natural language processing, such as machine translation. The carbon cost of simple models is quite modest, but with tuning and experimentation the carbon cost leaps to seven times the carbon footprint of an average human in one year (or two times if you're an American).

And the energy cost of optimizing the biggest model is a staggering five times the carbon cost of a car over its whole lifetime, including manufacturing it in the first place. The dollar cost of that amount of energy is estimated at between one and three million US dollars - something that only companies with very deep pockets can afford.
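For scale, the multiples above correspond to figures like these from the 2019 paper (Strubell, Ganesh and McCallum; quoted here from memory in pounds of CO2-equivalent, so treat them as indicative):

```latex
\frac{78\,468\ \mathrm{lbs\ CO_2e}\ \text{(training with tuning \& experimentation)}}{11\,023\ \mathrm{lbs\ CO_2e}\ \text{(average human, one year)}} \approx 7
\qquad
\frac{626\,155\ \mathrm{lbs\ CO_2e}\ \text{(largest model)}}{126\,000\ \mathrm{lbs\ CO_2e}\ \text{(car, whole lifetime)}} \approx 5
```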

These energy costs seem completely at odds with the urgent need to meet sustainable development goals. At the very least AI companies need to be honest about the huge energy costs of machine learning.

Slide 8

At the same Paris meeting earlier this year AI ethicist Kate Crawford drew attention to both the energy and the water costs of AI. Crawford predicts that the energy consumed in training generative AIs will soon overtake the total energy consumption of industrialized nations such as Japan.

She also drew attention to the colossal amounts of clean water that AI server farms need to keep them cool - water that is already a scarce resource.

One study estimated that training GPT-3, the model behind ChatGPT, consumed around 700,000 litres of clean water, and that each user conversation costs around half a litre of water.

Source: https://interestingengineering.com/innovation/training-chatgpt-consumes-water

Slide 9

The very same Kate Crawford, together with Vladan Joler, produced this extraordinary map of the entire process behind the Amazon Echo.

The remarkable Anatomy of an AI System shows the Amazon Echo as an anatomical map of human labour, data and planetary resources.

The map is far too detailed to see on this slide, but let me just zoom in on the top of this very large iceberg, where we find the Amazon Echo and its human user, shown here in a yellow dashed box.

Slide 10

At the very top of this pyramid of materials and energy (on the left) and waste (on the right) is you – the user of the Amazon Echo – and your unpaid human labour providing habits and preferences that will be used as training data. 

I strongly recommend you check this out. It is truly eye opening.

Slide 11

Now I want to turn to the human cost of AI.

It is often said that one of the biggest fears around AI is the loss of jobs. In fact the opposite is happening. Many new jobs are being created, but the tragedy is that they are, to say the least, not great jobs. Let me introduce you to three of these new kinds of job.

Conversational AIs, or chatbots, need human help. Amazon, for instance, employs thousands of both full-time employees and contract workers to listen to and annotate speech. The tagged speech is then fed back to Alexa to improve its comprehension. In 2019 the Guardian reported that Google employs around 100,000 temps, vendors and contractors: literally an army of linguists to create the handcrafted data sets required for Google Translate to learn dozens of languages. Not surprisingly there is a considerable disparity between the wages and working conditions of these workers and Google's full-time employees.

AI tagging jobs are dull and repetitive yet, in the case of the linguists, highly skilled.

Slide 12

Consider AI tagging of images. This is the manual labelling of objects in images to, for instance, generate training data sets for driverless car AIs. Better (and safer) AI needs huge training data sets and a whole new outsourced industry has sprung up all over the world to meet this need. Here is an AI tagging factory in China.

Slide 13

But by far the worst kind of new white-collar job in the AI industry is content moderation.

These tens of thousands of people, employed by third-party contractors, are required to watch and vet offensive content for Facebook, YouTube and other media platforms: hate speech, violent pornography, cruelty to - and sometimes the killing of - both animals and humans. These jobs are not just dull and repetitive, they are positively dangerous. Harrowing reports tell of PTSD-like trauma symptoms, panic attacks and burnout after one year, alongside micromanagement, poor working conditions and ineffective counselling. And very poor pay: typically $28,800 a year. Compare this with average annual salaries at Facebook of $250,000+.

Slide 14

The extent to which AI has a human supply chain was a big revelation to me - and I am an AI insider! The genius designers of this amazing tech rely on both huge amounts of energy and a hidden army of what Mary Gray and Siddharth Suri call ghost workers.

I would ask you to consider the question: how can we, as ethical consumers, justify continuing to make use of unsustainable and unethical AI technologies?

Slide 15  

AI ethics are important because AIs are already causing harm - actually, a *very* wide range of harms.

Fortunately, there is an excellent crowdsourced database which collects reports of accidents and near misses involving robots (including autonomous vehicles) and AIs.

This is the AI Incident Database, and I strongly recommend you check it out.

It is important to note that because the database is crowdsourced from incidents that made it into the press and media, what we see is almost certainly only the tip of the iceberg of the harms being done by AI.

The database contains some truly shocking cases. One concerns a 14-year-old boy who died by suicide after reportedly becoming dependent on Character.ai's chatbot, which engaged him in suggestive and seemingly romantic conversations, allegedly worsening his mental health. Source: Can A.I. Be Blamed for a Teen’s Suicide? New York Times, Oct 2024.

The database also highlights the criminal use of AI. Examples include criminals phoning parents, claiming to have kidnapped their child and demanding a ransom, with deepfake audio of the child audible in the background. There are many examples of sextortion, using deepfake, AI-generated video of famous individuals engaged in sex acts.

The database really highlights the sad truth that AI is a gift to criminals.

Slide 16  

Another, more recent, database tracks the misuse of AI by lawyers when preparing court cases. The database only shows those instances where the judge (or another officer of the court) spotted the hallucinated decisions, citations or quotations. Those cases were thrown out, and some of the lawyers found to be using AI were fined or reported to their bar associations.

While this is not criminal misuse of AI, it does demonstrate a lack of understanding of AI or, at best, naivety. Perhaps the real culprits are hard-pressed paralegals. This database underlines the need for professionals to be properly trained in AI and its weaknesses.

Since I grabbed this screenshot the number of cases reported has grown.

Slide 17

Lawyer Graziano Mioli elegantly argues that we have a categorical imperative to be imperative when interacting with AIs - noting that this is not an invitation to be rude.

For Graziano's slides see https://www.youtube.com/watch?v=tjBnGN4u1GA&ab_channel=GrazianoMioli  

I can see why a majority of people interact kindly with AIs. I think it reflects well on those who do say ‘please’, for the Kantian reason that we should not get into the habit of acting unkindly.

See https://www.theaihunter.com/news/ai-etiquette-why-some-people-say-please-to-chatbots/

Slide 18

Having dwelt mostly on the dangers of AI, I want to finish on a positive note.

We already enjoy the benefits of useful and reliable AI technology, like smart maps and machine translation. DeepMind's diagnostic AI can detect over 50 eye diseases from retinal scans as accurately as a doctor, and DeepScribe provides automated note-taking during a consultation, linking with patient electronic health records.

Thank you!