Thursday, May 27, 2021

Ethics is the new Quality

This morning I took part in the first panel at the BSI conference The Digital World: Artificial Intelligence.  The subject of the panel was AI Governance and Ethics. My co-panelist was Emma Carmel, and we were expertly chaired by Katherine Holden.

Emma and I each gave short opening presentations prior to the Q&A. The title of my talk was Why is Ethical Governance in AI so hard? Something I've thought about a lot in recent months.

Here are the slides exploring that question.

 

And here is what I said.

Early in 2018 I wrote a short blog post with the title Ethical Governance: what is it and who's doing it? Good ethical governance is important because in order for people to have confidence in their AI they need to know that it has been developed responsibly. I concluded my piece by asking for examples of good ethical governance. I had several replies, but none of them nominated AI companies.

So, why is it that 3 years on we see some of the largest AI companies on the planet shooting themselves in the foot, ethically speaking? I’m not at all sure I can offer an answer but, in the next few minutes, I would like to explore the question: why is ethical governance in AI so hard? 

But from a new perspective. 

Slide 2

In the early 1970s I spent a few months labouring in a machine shop. The shop was chaotic and disorganised. It stank of machine oil and cigarette smoke, and the air was heavy with the coolant spray used to keep the lathe bits cool. It was dirty and dangerous, with piles of metal swarf cluttering the walkways. There seemed to be a minor injury every day.

Skip forward 40 years and machine shops look very different. 

Slide 3

So what happened? Those of you old enough will recall that while British design was world class – think of the British Leyland Mini, or the Jaguar XJ6 – our manufacturing fell far short. "By the mid 1970s British cars were shunned in Europe because of bad workmanship, unreliability, poor delivery dates and difficulties with spares. Japanese car manufacturers had been selling cars here since the mid 60s but it was in the 1970s that they began to make real headway. Japanese cars lacked the style and heritage of the average British car. What they did have was superb build quality and reliability" [1].

What happened was Total Quality Management. The order and cleanliness of modern machine shops like this one is a strong reflection of TQM practices. 

Slide 4

In the late 1970s manufacturing companies in the UK learned - many the hard way - that ‘quality’ is not something that can be introduced by appointing a quality inspector. Quality is not something that can be hired in.

This word cloud reflects the influence from Japan. The words Japan, Japanese and Kaizen – which roughly translates as continuous improvement – appear here. In TQM everyone shares the responsibility for quality. People at all levels of an organization participate in kaizen, from the CEO to assembly line workers and janitorial staff. Importantly, suggestions from anyone, no matter their role, are valued and taken equally seriously.

Slide 5

In 2018 my colleague Marina Jirotka and I published a paper on ethical governance in robotics and AI. In that paper we proposed 5 pillars of good ethical governance. The top four are:

  • have an ethical code of conduct, 
  • train everyone on ethics and responsible innovation,
  • practice responsible innovation, and
  • publish transparency reports.

The 5th pillar underpins these four and is perhaps the hardest: really believe in ethics.

Now a couple of months ago I looked again at these 5 pillars and realised that they parallel good practice in Total Quality Management: something I became very familiar with when I founded and ran a company in the mid 1980s [2].

Slide 6 

So, if we replace ethics with quality management, we see a set of key processes which exactly parallel our 5 pillars of good ethical governance, including the underpinning pillar: believe in total quality management.

I believe that good ethical governance needs the kind of corporate paradigm shift that was forced on UK manufacturing industry in the 1970s.

Slide 7

In a nutshell, I think ethics is the new quality.

Yes, setting up an ethics board or appointing an AI ethics officer can help, but on their own these are not enough. As with quality, everyone needs to understand and contribute to ethics. Those contributions should be encouraged, valued and acted upon. Nobody should be fired for calling out unethical practices.

Until corporate AI understands this we will, I think, struggle to find companies that practice good ethical governance [3]. 

Quality cannot be ‘inspected in’, and nor can ethics.

Thank you.


Notes.

[1]    I'm quoting here from the excellent history of British Leyland by Ian Nicholls

[2]    My company did a huge amount of work for Motorola and - as a subcontractor - we became certified software suppliers within their six sigma quality management programme.

[3]    It was competitive pressure that forced manufacturing companies in the 1970s to up their game by embracing TQM. Depressingly the biggest AI companies face no such competitive pressures, which is why regulation is both necessary and inevitable.

Saturday, May 15, 2021

The Grim Reality of Jobs in Robotics and AI

The reality is that AI is in fact generating a large number of jobs already. That is the good news. The bad news is that they are mostly - to put it bluntly - crap jobs. 

There are several categories of such jobs. 

At the benign end of the spectrum is the work of annotating images, i.e. looking at images, identifying features and then labelling them. This is AI tagging. The work is simple and incredibly dull, but important because it generates the training data sets for machine learning systems. Those systems might be AIs for autonomous vehicles, with the annotators labelling bicycles, traffic lights and so on. The jobs are low-skill and low-pay, and a huge international industry has grown up to allow the high tech companies to outsource this work to what have been called white collar sweatshops in China or developing countries. 

A more skilled variation on this kind of job is that of the translators who are required to ‘assist’ natural language translation systems when they get stuck on a particular phrase or word.

And there is another category of such jobs that are positively dangerous: content moderators. These are again outsourced by companies like Facebook, to contractors who employ people to filter abusive, violent or illegal content. This can mean watching video clips and making a decision on whether the clip is acceptable or not (and apparently the rules are complex), over and over again, all day. Not surprisingly content moderators suffer terrible psychological trauma, and often leave the job burned out after a year or two. Publicly Facebook tells us this is important work, yet content moderators are paid a fraction of what staffers working on the company campus earn. So not that important.

But jobs created by AI and automation can also be physically dangerous. The problem with real robots, in warehouses for instance, is that like AIs they are not yet good enough to do everything in the (for the sake of argument) Amazon warehouse. So humans have to do the parts of the workflow that robots cannot yet do and - as we know from press reports - these humans are required to work super fast and behave, in fact, as if they are robots. And perhaps the most dehumanizing part of the job for such workers is that, like the content moderators (and for that matter Uber drivers or Deliveroo riders), their workflows are managed by algorithms, not humans.

We roboticists used to justifiably claim that robots would do the jobs that are too dull, dirty and dangerous for humans. It is now clear that working as a human assistant to robots and AIs in the 21st century is dull, and physically and/or psychologically dangerous. One of the foundational promises of robotics has been broken. This makes me sad, and very angry.

The text above is a lightly edited version of my response to the Parliamentary Office of Science and Technology (POST) request for comments on a draft horizon scanning article. The final piece How technology is accelerating changes in the way we work was published a few weeks ago.

Thursday, May 13, 2021

The Energy Cost of Online Living in Lockdown

Readers of this blog will know that one of the many ethical issues I worry about is the energy cost of AI. As part of the work I'm doing with Claudia Pagliari and her National Expert Group on Digital Ethics for Scotland I've also been looking into the energy costs of what is - for many of us - everyday digital life in lockdown. I don't yet have a complete set of results but what I have found so far is surprising - and not in a good way.

So far I've looked into the energy costs of (i) uploading to the cloud, (ii) streaming video (i.e. from iPlayer or Netflix), and (iii) video conferencing.

(i) Uploading to the cloud. This 2017 article in the Stanford Magazine explains that when you save a 1 GByte file – that’s about 1 hour of video – to your laptop’s disk drive the energy cost is 0.000005 kWh, or 5 milliWatt hours. Save the same file to the Cloud and the energy cost is between 3 and 7 kWh. For comparison, a typical 3 kW electric kettle would have to run flat out for an hour to use 3 kWh. This means that the energy cost of saving to the cloud is around a million times higher than saving to your local disk drive. 

The huge difference makes sense when you consider that there is a very complex international network of switches, routers and exchange hubs, plus countless amplifiers maintaining signal strength over long distance transmission lines. All of this consumes energy. Then add a slice of the energy costs of the server farm.

(ii) Streaming video. This article in The Times from May 2019 makes the claim that streaming a 2 hour HD movie from Netflix incurs the same energy cost as boiling 10 kettles (based on the sustainable computing research of Mike Hazas). To estimate how much energy that equates to we need to guess how full the kettle is. A half-full 3 kW kettle takes about 2 minutes to boil, and therefore consumes about 0.1 kWh. Do that 10 times and you've used about 1 kWh. A DVD player typically draws 8 Watts, so playing the same 2 hour movie on DVD uses about 0.016 kWh; streaming therefore costs roughly 60 times more energy.

Again this makes sense when set against uploading to the cloud, except that here you are downloading from Netflix servers. A 2 hour HD movie is a lot of data, around 10 GBytes, so 10 times more than the case in (i) above.
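
For anyone who wants to check the arithmetic in (i) and (ii), here is a quick back-of-the-envelope calculation - a sketch only, using the published estimates quoted above:

# Back-of-the-envelope check of the figures in (i) and (ii) above.
# All of the input numbers are the published estimates quoted above.

local_save_kwh = 0.000005      # saving 1 GByte to a local disk (5 mWh)
cloud_save_kwh = 3.0           # saving 1 GByte to the cloud (lower estimate)
print(cloud_save_kwh / local_save_kwh)   # 600,000 - of order a million
                                         # (~1.4 million with the 7 kWh figure)

kettle_power_kw = 3.0                           # a typical UK electric kettle
one_boil_kwh = kettle_power_kw * (2.0 / 60.0)   # ~2 minutes to boil = 0.1 kWh
streaming_kwh = 10 * one_boil_kwh               # 10 kettle boils = ~1 kWh

dvd_kwh = 8.0 * 2.0 / 1000.0   # an 8 W DVD player for 2 hours = 0.016 kWh
print(streaming_kwh / dvd_kwh) # ~60 times more energy than playing the DVD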

(iii) Video conferencing. This post on David Mytton's excellent blog explores the energy cost of Zoom meetings in some detail. David estimates that a 1 hour Zoom video call with 6 participants generates between 5 and 15 GB of data, and that the data transfer consumes between 0.07 and 0.22 kWh of electricity. Using our benchmark of kettles boiled this is pretty modest - at most about a fifth of the energy cost of streaming the movie in (ii). 

However this estimate makes 2 assumptions: first, that you are connected via cable or fixed line - which here in the UK costs 0.015 kWh per GByte; a mobile connection costs about seven times that, at 0.1 kWh/GB. And second, this estimate measures only the energy costs of data transmission and takes no account of the energy costs of Zoom's data centres, which - if (i) and (ii) above are anything to go by - could be significant, especially since there aren't any in the UK and the default servers are in the US.
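
The same kind of quick check works for the Zoom case, using David Mytton's data volume estimate and the per-GByte transmission figures above - again just a sketch of the arithmetic:

# Energy cost of transmitting the data for a 1 hour, 6 person Zoom call,
# using David Mytton's data volume estimate and the per-GByte figures above.

data_gb = (5.0, 15.0)          # estimated data generated per hour (low, high)
fixed_kwh_per_gb = 0.015       # UK fixed line / cable
mobile_kwh_per_gb = 0.1        # UK mobile connection

print([gb * fixed_kwh_per_gb for gb in data_gb])
# [0.075, 0.225] kWh - consistent with the 0.07-0.22 kWh range above

print([gb * mobile_kwh_per_gb for gb in data_gb])
# [0.5, 1.5] kWh - roughly seven times higher on a mobile connection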

As this article on the Zoom blog explains, Zoom calls are not peer to peer. The video from each participant is streamed first to a Zoom server and then broadcast to every other person on the call. As David Mytton says, Zoom don't release information on the overall energy costs of calls. I strongly suspect that if server energy costs were factored in they would be in line with cases (i) and (ii) above. Even so, I feel sure that David Mytton's overall conclusion remains true: that the energy cost of Zoom meetings is significantly lower than that of all but local or regional travel.

 

I would like to see networking services like cloud storage, video on demand and video conferencing publish a meaningful energy cost. When we buy packaged food from the supermarket we expect to see the calorific energy value of each item, broken down into fat, salt and so on. It would be great if every online transaction, from sending an email to watching a movie, revealed its energy/carbon cost. Not just for energy geeks like me, but to remind all of us that the Digital Economy is *very* energy hungry.


I would welcome any additional data which either adds to the above (especially the energy costs for smaller online transactions like tweets, emails or card payments), or shows that the estimates above are wrong. 

Related blog posts:

On Sustainable Robotics
Energy and Exploitation: AIs dirty secrets
What's wrong with Consumer Electronics? 

 


Monday, March 22, 2021

On Sustainable Robotics

The climate emergency brooks no compromise: every human activity or artefact is either part of the solution or it is part of the problem. 

I've worried about the sustainability of consumer electronics for some time, and, more recently, the shocking energy costs of big AI. But the climate emergency has also caused me to think hard about the sustainability of robots. In recent papers we have defined responsible robotics as

... the application of Responsible Innovation in the design, manufacture, operation, repair and end-of-life recycling of robots, that seeks the most benefit to society and the least harm to the environment.

I will wager that few robotics manufacturers - even the most responsible - pay much attention to repairability and environmental impact. And, I'm ashamed to say, very little robotics research is focused on the development of sustainable robots. A search on Google Scholar throws up just a handful of great papers detailing work on upcycled and sustainable robots (2018), sustainable robotics for smart cities (2018), and sustainable soft robots (2020).

So I was delighted when, a few weeks ago, my friend and colleague Michael Fisher drafted a proposal for a new standard on Sustainable Robotics. The proposal received strong support from the BSI robotics committee. Here is the formal notice requesting comments on Michael's proposal: BS XXXX Guide to the Sustainable Design and Application of Robotic Systems. Anyone can comment (although you do need to register first). The deadline is 1 April 2021. 

So what would make a robot sustainable? In my view it would have to be:

  1. Made from sustainable materials. This means the robot should, as far as possible, use recycled materials (plastics or metals), or biodegradable materials like wood. Any new materials should be ethically sourced. 
  2. Low energy. The robot should be designed to use as little energy as possible, and should have energy saving modes. An outdoor robot should use solar cells and/or hydrogen fuel cells, when these become small enough for mobile robots. Battery powered robots should always be rechargeable. 
  3. Repairable. The robot should be designed for ease of repair, using modular, replaceable parts as much as possible - especially the battery. Additionally the manufacturer should provide a repair manual so that local workshops can fix most faults. 
  4. Recyclable. Robots will eventually come to the end of their useful life, and if they cannot be repaired or recycled we risk them being dumped in landfill. To reduce this risk the robot should be designed to make it easy to re-use parts, such as electronics and motors, and re-cycle batteries, metals and plastics.

These are, for me, the four fundamental requirements, but there are others. The BSI proposal adds the environmental effects of deployment (we would be unlikely to consider a robot designed to spray pesticides as truly sustainable) or of failure in the field, as well as the environmental effects of maintenance - cleaning materials, for instance. The proposal also looks toward sustainable, upcyclable robots as part of a circular economy.

This is Ecobot III, developed some years ago by colleagues in the Bristol Robotics Lab's Bio-energy group. The robot runs on electricity extracted from biomass by 48 microbial fuel cells (the two concentric brick coloured rings). The robot is 90% 3D printed, and the plastic is recyclable.


I would love to see, in the near term, not only a new standard on Sustainable Robotics as a guide (and spur) for manufacturers, but the emergence of Sustainable Robotics as a thriving new sub-discipline in robotics.

Friday, March 19, 2021

Back to Robot Coding part 3: testing the EBB

In part 2 a few weeks ago I outlined a Python implementation of the ethical black box. I described the key data structure - a dictionary which serves as both specification for the type of robot, and the data structure used to deliver live data to the EBB. I also mentioned the other key robot specific code: 

# Get data from the robot and store it in data structure spec
def getRobotData(spec):

Having reached this point I needed a robot - and a way of communicating with it - so that I could both write getRobotData(spec) and test the EBB. But how to do this? I'm working from home during lockdown, and my e-puck robots are all in the lab. Then I remembered that the excellent robot simulator V-REP (now called CoppeliaSim) has a pretty good e-puck model and some nice demo scenes. V-REP also offers multiple ways of communicating between simulated robots and external programs (see here). One of them - TCP/IP sockets - appeals to me as I've written sockets code many times, for both real-world and research applications.  Then a stroke of luck: I found that a team at Ensta-Bretagne had written a simple demo which shows how to connect a Python program to a robot in V-REP, using sockets. So, first I got that demo running and figured out how it works, then used the same approach for a simulated e-puck and the EBB. Here is a video capture of the working demo.


So, what's going on in the demo? The visible simulation views in the V-REP window show an e-puck robot following a black line which is blocked by both a potted plant and an obstacle constructed from 3 cylinders. The robot has two behaviours: line following and wall following. The EBB requests data from the e-puck robot once per second, and you can see those data in the Python shell window. Reading from left to right you will see first the EBB date and time stamp, then robot time botT, then the 3 line following sensors lfSe, followed by the 8 infra red proximity sensors irSe. The final two fields show the joint (i.e. wheel) angles jntA, in degrees, then the motor commands jntD. By watching these values as the robot follows its line and negotiates the two obstacles you can see how the line and infra red sensor values change, resulting in updated motor commands.

Here is the code - which is custom written both for this robot and the means of communicating with it - for requesting data from the robot.

import socket
import struct
import math

# server_address_port holds the (IP address, port) of the simulated
# robot's TCP/IP server in V-REP; it is defined elsewhere in the module

# Get data from the robot and store it in spec[]
# while returning one of the following result codes
ROBOT_DATA_OK = 0
CANNOT_CONNECT = 1
SOCKET_ERROR = 2
BAD_DATA = 3

def getRobotData(spec):

    # This function connects, via TCP/IP to an ePuck robot in V-REP

    # create a TCP/IP socket and connect it to the simulated robot
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect(server_address_port)
    except:
        return CANNOT_CONNECT

    sock.settimeout(0.1) # set connection timeout
    
    # pack a dummy packet that will provoke data in response
    #   this is, in effect, a 'ping' to ask for a data record
    strSend = struct.pack('fff',1.0,1.0,1.0)
    sock.sendall(strSend) # and send it to V-REP

    # wait for data back from V-REP
    #   expect a packet with 1 time, 2 joints, 2 motors,   
    #   3 line sensors and 8 irSensors. All floats because V-REP
    #   total packet size = 16 x 4 = 64 bytes
    data = b''
    nch_rx = 64 # expect this many bytes from  V-REP 
    try:
        while len(data) < nch_rx:
            data += sock.recv(nch_rx)
    except:
        sock.close()
        return SOCKET_ERROR

    # unpack the received data
    if len(data) == nch_rx:
        # V-REP packs and unpacks in floats only so...
        vrx = struct.unpack('ffffffffffffffff',data)

        # now move data from vrx[] into spec[], while rounding floats
        spec["botTime"] = [ round(vrx[0],2) ] 
        spec["jntDemands"] = [ round(vrx[1],2), round(vrx[2],2) ]
        spec["jntAngles"] = [ round(vrx[3]*180.0/math.pi,2)
                              round(vrx[4]*180.0/math.pi,2) ]
        spec["lfSensors"] = [ round(vrx[5],2), 
                              round(vrx[6],2), round(vrx[7],2) ]
        for i in range(8):
            spec["irSensors"][i] = round(vrx[8+i],3)       
        result = ROBOT_DATA_OK
    else:       
        result = BAD_DATA

    sock.close()
    return result

The structure of this function is very simple: first create a socket and connect it to the simulated robot, then make a dummy packet and send it to V-REP to request EBB data from the robot. Then, when the data packet arrives, unpack it into spec, and finally close the socket before returning. The most complex part of the code is the data wrangling.
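
The EBB side of the demo is then little more than a loop that calls getRobotData() once per second and logs the result. Here is a minimal sketch of that loop - note that the ebb.save() call is a placeholder, since the EBB class itself is not shown in this post.

import time

# Minimal sketch of the EBB polling loop used in the demo: request a data
# record from the robot once per second and, if one arrives, log it.
# Note: ebb.save() is a placeholder - the EBB class is not shown here.
def runEBB(spec, ebb, period=1.0):
    while True:
        result = getRobotData(spec)   # fill spec[] with live robot data
        if result == ROBOT_DATA_OK:
            ebb.save(spec)            # placeholder: save the record to the EBB
        else:
            print("getRobotData() error code:", result)
        time.sleep(period)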

Would a real EBB collect data in this way? Well if the EBB is embedded in the robot then probably not. Communication between the robot controller and the EBB might be via ROS messages, or even more directly, by - for instance - allowing the EBB code to access a shared memory space which contains the robot's sensor inputs, command outputs and decisions. But an external EBB, either running on a local server or in the cloud, would most likely use TCP/IP to communicate with the robot, so getRobotData() would look very much like the example here. 

Friday, February 19, 2021

Back to Robot Coding part 2: the ethical black box

In the last few days I started some serious coding - the first for 20 years, in fact, since I built the software for the BRL LinuxBots. (The coding I did six months ago doesn't really count, as I was only writing or modifying small fragments of Python).

My coding project is to start building an ethical black box (EBB), or to be more accurate, a module that will allow a software EBB to be incorporated into a robot. Conceptually the EBB is very simple: it is a data logger - the robot equivalent of an aircraft Flight Data Recorder, or an automotive Event Data Recorder. Nearly five years ago I made the case, with Marina Jirotka, that all robots (and AIs) should be fitted with an EBB as standard. Our argument is very simple: without an EBB it will be more or less impossible to investigate robot accidents, or near-misses, and in a recent paper on Robot Accident Investigation we argue that with the increasing use of social robots accidents are inevitable and will need to be investigated. 

Developing and demonstrating the EBB is a foundational part of our 5-year EPSRC funded project RoboTIPS, so it's great to be doing some hands-on practical research - something I've not done for a while.

Here is a block diagram showing the EBB and its relationship with a robot controller.

As shown here the data flows from the robot controller to the EBB are strictly one way. The EBB cannot and must not interfere with the operation of the robot. Coding an EBB for a particular robot would be straightforward, but I have set a tougher goal: a generic EBB module (i.e. library of functions) that would - with some inevitable customisation - apply to any robot. And I set myself the additional challenge of coding in Python, making use of skills learned from the excellent online Codecademy Python 2 course.

There are two elements of the EBB that must be customised for a particular robot. The first is the data structure used to fetch and save the sensor, actuator and decision data in the diagram above. Here is an example from my first stab at an EBB framework, using the Python dictionary structure:

# This dictionary structure serves as both 
# 1 specification of the type of robot, and each data field that
#   will be logged for this robot, &
# 2 the data structure we use to deliver live data to the EBB

# for this model let us create a minimal spec for an ePuck robot
epuckSpec = {
    # the first field *always* identifies the type of robot
    # plus version and serial nos
    "robot" : ["ePuck", "v1", "SN123456"],
    # the remaining fields are data we will log, 
    # starting with the motors
    # ..of which the ePuck has just 2: left and right
    "motors" : [0,0],
    # then 8 infra red sensors
    "irSensors" : [0,0,0,0,0,0,0,0],
    # ..note the ePuck has more sensors: accelerometer, camera etc, 
    # but this will do for now
    # ePuck battery level
    "batteryLevel" : [0],
    # then 1 decision code - i.e. what the robot is doing now
    # what these codes mean will be specific to both the robot 
    # and the application
    "decisionCode" : [0]
    }

Whether a dictionary is the best way of doing this I'm not 100% sure, being new to Python (any thoughts from experienced Pythonistas welcome).

The idea is that all robot EBBs will need to define a data structure like this. All must contain the first field "robot", which names the robot's type, its version number and serial number. Then the following fields must use keywords from a standard menu, as needed. As shown in this example each keyword is followed by a list of placeholder values - in which the number of values in the list reflects the specification of the actual robot. The ePuck robot, for instance, has 2 motors and 8 infra-red sensors. 

The final field in the data structure is "decisionCode". The values stored in this field would be both robot and application specific; for the ePuck robot these might be 1 = 'stop', 2 = 'turn left', 3 = 'turn right' and so on. We could add another value for a parameter, so that if the robot decides, for instance, to turn left 40 degrees we would log "decisionCode" : [2,40]. We could also add a 'reason' field, which would save the high-level reason for the decision, as in "decisionCode" : [2,40,"avoid obstacle right"], noting that the reason could be a string, as shown here, or a numeric code.
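
To make that concrete, here is how those example decision codes might be written into the spec - both the codes and the reason string are, of course, illustrative and application specific:

# illustrative decision codes for the ePuck example - the codes and the
# reason string are robot and application specific
STOP, TURN_LEFT, TURN_RIGHT = 1, 2, 3

# the robot decides to turn left 40 degrees to avoid an obstacle on its right
epuckSpec["decisionCode"] = [TURN_LEFT, 40, "avoid obstacle right"]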

As I hope I have shown here the design of this data structure and its fields is at the heart of the EBB.

The second element of the EBB library that must be written for the particular robot and application is the function which fetches data from the robot:

# Get data from the robot and store it in data structure spec
def getRobotData(spec):
    
How this function is implemented will vary hugely between robots and robot applications. For our Linux enhanced ePucks with WiFi connections this is likely to be via a TCP/IP client-server, with the server running on the robot sending data in response to a request from the client call getRobotData(epuckSpec). For simpler setups in which the EBB module is folded into the robot controller, accessing the required data within getRobotData() should be very straightforward.

The generic part of the EBB module will define the class EBB, with methods for both initialising the EBB and saving a new data record to the EBB. I will cover that in another blog post.
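
As a taster, here is a minimal sketch of what such a class might look like. To be clear, this is an illustration rather than the final design: the method names and the simple text-file log are placeholders.

import datetime

# A minimal sketch of a generic EBB class: initialise it with the robot's
# spec dictionary, then call save() to append one timestamped record.
# Method names and the flat text-file log format are placeholders only.
class EBB:
    def __init__(self, spec, filename="ebb_log.txt"):
        self.spec = spec
        self.filename = filename
        # record the robot identity (type, version, serial no) just once
        with open(self.filename, "a") as f:
            f.write("EBB log for robot: " + str(spec["robot"]) + "\n")

    def save(self, spec):
        # timestamp the record, then append every field except "robot"
        stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        fields = ["%s=%s" % (k, v) for k, v in spec.items() if k != "robot"]
        with open(self.filename, "a") as f:
            f.write(stamp + " " + " ".join(fields) + "\n")

Used with the epuckSpec above, ebb = EBB(epuckSpec) followed by repeated calls to ebb.save(epuckSpec) would give a simple append-only log, with one timestamped record per line.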

Before closing let me add that it is our intention to publish the specification of the EBB, together with the model EBB code, as open source, once it has been fully tested.

Any comments or feedback would be much appreciated.

Wednesday, December 30, 2020

#heavencalling

    Now it’s personal. I’ve just had a phone call from my mom.  Fine, you might think, but it’s sure as hell not fine. She’s been dead 5 years.

    So, I’m a member of the LAPD CSI assigned to cyber crime. The case that landed on my desk a couple of weeks ago started as a complaint that folk were getting phone calls from dead relatives. At first we thought it was a joke. But after a couple of Hollywood celebs and the mayor of Pasadena started getting calls too – it got real serious real fast. The mayor called my chief: he was furious that someone was impersonating his eldest daughter: she died a couple of years ago in a freak surfing accident. It was only when the chief explained that it wasn’t a person that had called him, but an AI programmed to impersonate his daughter, that he calmed down a bit. Just a bit mind: according to my boss what he said went along the lines of “find out who these sons-of-bitches programmers are, I’m gonna sue the hell out of them”.

    Deepfakes have been around for 5 years or so. Mostly videos doctored with some famous actor’s face substituted for a slightly less famous face. Tom Cruise as spiderman – that kinda thing; mostly harmless.  After the mayor’s call the chief called a departmental meeting. She explained that – according to the DA – impersonation is not a misdemeanour: “Hell if it was that would make the whole entertainment industry a criminal enterprise.” That caused a cynical chuckle across the room. She went on, “nor is creating a fake AI based on a real person.” “Of course people are upset and angry – who wouldn’t be when they get a call from someone dear to them who also happens to be deceased – but upsetting people isn’t a crime.” 

    She looked at me. “Frank, what have you got so far?” “Not much chief”, I replied, “each call seems to be coming from a different number – my guess is they’re one-time numbers”. “Any idea who’s behind this?” she asked. “No – but since no-one is demanding money – my guess would be AI genius college kids doing this for a joke, or maybe their dissertations.” “Of course” I added, “they would need to be scraping the personal data from somewhere to construct the fakes, but so much hacked data is around on the dark web that wouldn’t be too hard.” “Ok good”, she said, “start talking to some college professors”. 

    Two days later I had the call.

    “Hello Frankie, it’s mom.”

    “Mom? But you’ve been gone 5 years.”

    “I know son. I just wanted to call to tell you I love you.”

    “But. Goddam. You sound just like Mom.”

    “Aren’t you pleased to hear from me Frankie?”

    “Yes... No. This isn’t right.”

    “How is Josie doing? And Taylor – she must’ve started college by now?”

    “Yes Josie is good, and Taylor’s ... no dammit I’m not gonna talk to a computer program.”

    “Aw, don’t be mad with your Mom.”

    At that point I hung up. But Jesus it was hard. I knew it wasn’t my Mom but the temptation to stay on the call just to hear her voice again was just overwhelming. It took me awhile to calm down. It’s only in the last year that I started to get over her passing. That call brought it all back: the pain, the anger she had been taken too soon. We were real close.

    This fake was good – they had my Mom’s voice down to a tee – but how? Mom was a high school teacher not a celebrity. She wasn’t big on social media. Sure she used Facebook – who doesn’t – but that doesn’t record voice. Just about everything else mind – that’s where they would have gotten family names and relationships. Then I remembered that we bought her one of those smart speakers a year or so before she passed away. Arthritis made it hard for her to move around so we put in the speaker so she could make voice calls, listen to music or turn on the TV just by asking. She loved it. 

    Then the story broke in the press. Twitter was full of it: #heavencalling and #deadphone were just two of the hashtags; none of them even remotely funny to me. The pundits were all over the newscasts: AI experts gleefully explaining the technology while expressing a dishonest kind of smirking dismay “...of course no AI professional could possibly condone this kind of misuse.” Obviously they hadn’t had the call. 

    Of course the news channels also interviewed folk who had been called. Some were outraged, but more were very happy that they had been ‘chosen’ for a call from heaven. One lady was so pleased to have had a call from her late husband: “It was so wonderful to hear from Jimmy – to talk about old times and know that he’s happy in heaven”. Well I guess I shouldn’t have been surprised. The church pastors they interviewed were indignant. “The devil’s work” was the general tone. One even described it as ‘artificial witchcraft’.  They had good reason to be unhappy, seeing as they have exclusive rights to the intercession business.

    A day later I had an email back from one of the AI Profs at Caltech. I called him straight away and he told me he had a pretty good idea who might be behind this “deeply unethical AI” as he put it. A couple of star students had been working on what one of them had told him was a ‘really cool NLP project’. NLP – that’s natural language processing. He told me that he had already disabled their accounts on the Caltech supercomputer. This kind of real-time conversational AI uses huge amounts of computing power.

    A few hours later the chief and I are in the Dean’s office with the Professor and his two students. In the students I saw a younger me: bright but with that naïve innocence that blesses only those for whom nothing bad has ever happened. 

    My chief explained to these two young men that, since no crime had been committed, we would not be pressing charges. But, she stressed, “What you did was not without consequence. The mayor and his wife were deeply distressed to receive a call from someone they thought was their deceased daughter. And my colleague here was mad as hell when he had a call from his late Mom.” From the look in their eyes they obviously had no idea they had set up a heaven call to a cop.  

    Then the Dean gave them one hell of a dressing down. At one point one of the students tried to interject that some of the recipients of the heaven calls had been very happy to be called, at which point the Prof stopped him immediately. “No. Regardless of how people reacted, your AI was a deception. And an egregious one too, as it exploited the vulnerability of grief.” Then he added, “Something that in time you too will experience.” The Dean told them that they should count themselves very lucky that the school had decided not to expel them, on condition that they personally apologise to everyone who had received a heaven call, starting right now with Officer Frank Aaronavitch here. After a very gracious apology, which I accepted, the Prof added that he would be requiring them to submit year papers on the ethics of their heaven calling AI.

    Six months have passed. Heaven calling blew over pretty quickly. Then I noticed a piece in the tech press about a new start up – Heavenly AI – looking for VC. Sure enough the two founders are the same students we saw in the Dean’s office at Caltech. The article claims the company has an ethics driven business model. Great, I thought. Then cynical me kicked in; give it six months and these guys are gonna get bought out by Facebook. Heaven forbid.


Previous stories: 

The Gift (2016) 
Word Perfect (2020) 

Monday, December 28, 2020

She had chosen well

For this story, written as the second exercise in my Writing Short Stories course back in June, I attempted a story without dialogue. I love dialogue so expected to find this difficult, which it was. In the story I try to imagine what it might have been like to experience an extinction event, in an effort to capture a sense of being in the liminal state from a limited first-person (or rather animal) perspective.


     She had chosen well.  

    The burrow she shared with her litter was lodged within the vaulted foundations of a mighty tree. The tree had taken root in rocky soil long before her time, its vascular organs splitting the rock enough to allow her to excavate tunnels and chambers three seasons ago. 

    It had been a good spring. Her pups had almost weaned and were growing fat on insects and berries. Even the reckling was looking healthy. He was a survivor, escaping the quick-feathered hunters with sharp eyes and sharper teeth that had taken two of her litter a few moons ago.

    In her world there was much to fear. Death came in many ways: quick from the sharp-teeth or sky-claws; slow from starvation or thirst (the nearest spring was a perilous journey - although she had learned from her mother how to harvest the prickly watery green leaves which grew close to the burrow). But this hillside had one advantage; it was too high and steep for the long-necked ground-shakers that crashed and bellowed through the valley below from time to time.

    The moons passed and, as the nights started to lengthen, she began to harvest the nuts, green leaves and tubers, storing these in dry clean chambers close to the comfortable living nest.  Something – perhaps the unusual bounty of the season – made her collect more this summer.

    It was a warm dusk. After a good night’s forage she and her pups had spent the day sleeping full-bellied in the cool of the burrow. Her pups were now almost full grown and the biggest and boldest were restless to leave. Two, a brother and sister, moved to the burrow entrance with a purpose that she knew from her own time so, with a touch of their noses, mother and eldest made their farewells.  

    Then, just a few moments after she had returned to the nest chamber, the ground shook. But this was not the rhythmic shaking of the long-necks in the valley.  Nor was it the noisy anger of the fire mountain that turned their nights red from time to time. This was different: a silent deep tremor that felt as if it was coming from the belly of the earth. The tremor grew to a crescendo. Terrified the small family nest-huddled as the tree roots groaned while soil and stones rained upon them. Then it was still.

    They waited. She lifted her head and sensed around. The nest air was full of dust. She felt the silence then realised that the breeze-scent of outside was gone. She knew something was wrong, ran to the entrance tunnel and found it blocked with stones and earth. Fear rising she started to dig. She was a good digger with powerful front claws. She dug and dug until she started to feel weak, then – rest-pausing – she heard a scraping sound. A few moments later the soil and stones ahead broke apart and there was her eldest daughter. With joy and relief they touched noses, but she sensed a sadness that told her that her eldest son was gone. 

    Together mother and daughter cleared the spoil from the entrance tunnel, then – followed by the rest of the pups – they emerged, cautiously, into the night. There was no moon. Instead the sky clouds were lit high with lurid reds, greens and purples, yet – she noticed – the fire mountain was silent. The night was quiet at first although some familiar sounds slowly returned: the bellows of the long-necks in the valley below and skyward the distant cries of the sky-claws. The family fed and foraged and still fearful returned to the nest before dawn.

    After sleeping most of the day the nest family was awakened by a long roar of thunder that seemed to roll in from afar and rush over them before receding into the distance. She had heard thunder before but never like this. As it passed it hit their tree – although not with the long shake of the sleep-day before – but with a great cracking crash that was the last thing they heard for a while. She felt an ear-pain she had never before experienced, and so – it seemed – had her pups. Dazed, deafened and frightened they did not venture out of the burrow that night.

    Restless and hungry the family stirred again before dusk the following day. She was relieved that the ear-pain had gone and her sound sense restored. Cautiously they emerged from the burrow entrance to find that their small exit platform was now a tangle of branch and leaf. Luckily it was not dense, and they quickly made a path through to the open hillside.  What they saw by the dull grey light of dusk was a world changed. No tree was left standing, including their home tree – indeed it was that tree that now provided their exit canopy.

    They sensed something moving nearby, then saw one of the sky-claws fallen onto a prickle leaf bush; it was broken winged and near death, but still able to fix them with its sharp eye. They had never before seen one of these creatures close up and – even in its death throes – their terror of its kind was undimmed, so they quickly retreated into the exit canopy and nervously fed on insects and home tree nuts.

    The next two nights, alerted by the bad tempered chirruping of sharp-teeth feeding on the sky-claw, they did not stray outside the home thicket. She noticed that the nights were cold: too cold for this early in the autumn. A few nights later the sky-claw was joined in death by the sharp-teeth, and the nest family were able to feast on the insects drawn to the carrion. But their forages were short as it was too cold to stay out for more than a few mouthfuls before returning to the warm of the nest. A few nights later even the carrion insects were gone, as the corpses had frozen. 

    With a deep sense of unease the nest family settled for their long winter sleep.


© Alan Winfield 2020


Previous stories:

The Gift (2016)

Word Perfect (2020)

Saturday, December 26, 2020

Word Perfect

Back in June I signed up for an online course on Writing Short Stories (the next steps), run by the Bishopsgate Institute. The course was excellent. There were six weekly Zoom sessions of about 2 hours each, with 8 students led by Barbara Marsh, who is a wonderful tutor. I can honestly say that I enjoyed every minute. There was a fair bit of homework including, of course, writing - and a major segment of each class was critiquing each other's work. 

Here below is the first of the three new stories I drafted for the course.


Word Perfect

    “Who the fuck are you?”

    “Don’t you recognise me? I’m you.”

    “Oh fuck off. I’ve never seen you before in my life.”

    “Yes you have – every time you look in the mirror.”

    I wasn’t listening of course. I never did then. I was foul mouthed, arrogant, and full of myself (full of shit actually). I was a first year student: physics at Oxford. Won a scholarship for genius working class kids. Something I never failed to tell everyone.

    “You’re full of shit. What do you want?”

    He paused a moment and looked me in the eye. “I want to talk, you fucker.”

    Now this old guy had my attention. The only person I knew who says ‘you fucker’ was me. It was (and still is) something I only say to close friends: a kind of insult of endearment. 

    I was speechless (which didn’t happen often). For the first time I looked hard at him. Same height and build as me. Clean-shaven and almost bald: not bad looking. Fuck, I thought, he could be my dad. But he died four years ago.

    He read my mind. “No John, I’m not a ghost. I’m you age 60.”

    I may have been a shit, but I was a quick learner. “So, you – future me – have invented time travel? Whoa – that’s so cool. But wait, should you be here – aren’t you changing the future or something?”

    “Yes there are risks, but the risks of me not having this conversation are far greater”. Older me then took something out of his pocket – a kind of glass tablet – he prodded it with his finger and looked at the display. “Look – I haven’t got long – the energy costs of time travel are colossal. Another 10 minutes”.

    He then sat down and talked fast. I listened hard. I asked him if I could take notes. “No please don’t – what I’m about to tell you is dangerous – it’s super important no one knows anything about this conversation”. (‘Super important’ – that’s another thing I say.)

    Older me explained that yes, he had invented a time machine. It had made him famous. Protocols (rules – he clarified – 20-year-old me didn’t know about protocols) had been established.  Following international ethical approval the time machine had been used three times to travel way back in time to settle deep scientific questions about evolution. 

    “Whoa – did you see the dinosaurs?” No, he said. “Only one person can travel and I’m not a palaeontologist”.  But, he said, “one trip was to the Cambrian – far more interesting and controversial than the Jurassic or the Cretaceous”.

    “Now”, older me said, “listen carefully”. “We’re in great danger – some very rich and powerful men are doing everything they can to build another machine.”

    “Why? What do they want to do?”

    “They intend to change history. You see they are white supremacists. They want to go back in time and stop the abolition of slavery. They’re not just racists, they also hate women, so they also want to go back and make sure women – and commoners like us – never get the vote. In short they want to turn the political clock back to the 18th century”.

    “Shit”, I said, “that’s really fucked up.”

    “Yes it is. And that’s why you must not invent the time machine.” Older me said those last words very slowly. I’ve never heard anyone then or since be any more serious than he was.

    Then, anticipating precisely what I was about to say: “John, I know you’re a determinist – that you don’t believe in free will. But you will change your mind. Free will is real and the choices you make have consequences.” 

    “The burden you – we – bear is that those choices are perhaps the most important in the history of humanity.”

    I joked: “So, I guess if I make the wrong choice we’ll be having this conversation again?” 

    “Yes, exactly”, he said – still deadly serious, “in fact this might not be the first time.” As if I wasn’t already freaked out enough by this whole conversation – that took me to the freaked out equivalent of Defcon 1.

    Then his face brightened up. “Goodbye, you fucker” he said, and vanished.


    I write this age 60, forty years to the day that I met future me. I have thought about that conversation every day. Often doubting it happened at all. I had so many questions – enough to sustain a career.

    Yes I did a PhD in theoretical physics and won a bunch of prizes. My work was on the structure of space-time, and rumour has it I’ve been nominated for a Nobel. I did sketch out one paper setting out practical steps toward time travel but deleted the paper before anyone else even saw it. 

    The world is still fucked up of course, but things could have been so much worse if I had not taken older me’s advice. 

    As to those questions – it didn’t take me long to figure out that older me vanished as soon as he convinced me to take his advice: at that moment the time machine that brought him back to meet me no longer existed. But I will never know how many times he failed to persuade me. My guess is that each time we had that conversation older me tried out a different script – until it was word perfect. The bit about “I haven’t got long ... only 10 minutes” was bullshit. After god knows how many repeats the fucker knew exactly when to say goodbye.


© Alan Winfield 2020


Previous stories:

The Gift (2016)

Sunday, October 25, 2020

RoboTED: a case study in Ethical Risk Assessment

A few weeks ago I gave a short paper* at the excellent International Conference on Robot Ethics and Standards (ICRES 2020), outlining a case study in Ethical Risk Assessment - see our paper here. Our chosen case study is a robot teddy bear, inspired by one of my favourite movie robots: Teddy, in A.I. Artificial Intelligence.


Although Ethical Risk Assessment (ERA) is not new - it is after all what research ethics committees do - the idea of extending traditional risk assessment, as practised by safety engineers, to cover ethical risks is new. ERA is, I believe, one of the most powerful tools available to the responsible roboticist, and happily we already have a published standard setting out a guideline on ERA for robotics: BS 8611, published in 2016.

Before looking at the ERA, we need to summarise the specification of our fictional robot teddy bear: RoboTed. First, RoboTed is based on the following technology:

  • RoboTed is an Internet (WiFi) connected device, 
  • RoboTed has cloud-based speech recognition and conversational AI (chatbot) and local speech synthesis,
  • RoboTed’s eyes are functional cameras allowing RoboTed to recognise faces,
  • RoboTed has motorised arms and legs to provide it with limited baby-like movement and locomotion.
And second RoboTed is designed to:

  • Recognise its owner, learning their face and name and turning its face toward the child.
  • Respond to physical play such as hugs and tickles.
  • Tell stories, while allowing a child to interrupt the story to ask questions or ask for sections to be repeated.
  • Sing songs, while encouraging the child to sing along and learn the song.
  • Act as a child minder, allowing parents to remotely listen, watch and speak via RoboTed.
The tables below summarise the ERA of RoboTED for (1) psychological, (2) privacy & transparency and (3) environmental risks. Each table has 4 columns, for the hazard, risk, level of risk (high, medium or low) and actions to mitigate the risk. BS8611 defines an ethical risk as the “probability of ethical harm occurring from the frequency and severity of exposure to a hazard”; an ethical hazard as “a potential source of ethical harm”, and an ethical harm as “anything likely to compromise psychological and/or societal and environmental well-being".
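
To give a flavour of how such a table is structured (the actual RoboTed rows are in the full paper), each row can be thought of as a simple record following the four columns above. The content of this example is invented purely for illustration, not taken from the paper:

# Sketch of one row of an ethical risk assessment table, following the four
# columns described above. The content is an invented illustration, not a
# row from the RoboTed tables in the paper.
risk_entry = {
    "hazard":     "child becomes emotionally over-attached to RoboTed",
    "risk":       "distress when the robot is unavailable or withdrawn",
    "level":      "medium",      # high, medium or low
    "mitigation": ["add a 'RoboTed needs to sleep now' function",
                   "provide clear guidance for parents"],
}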


(1) Psychological Risks

(2) Security and Transparency Risks

(3) Environmental Risks

For a more detailed commentary on each of these tables see our full paper - which also, for completeness, covers physical (safety) risks.

And here are the slides from my short ICRES 2020 presentation:


Through this fictional case study we argue that we have demonstrated the value of ethical risk assessment. Our RoboTed ERA has shown that attention to ethical risks can:
  • suggest new functions, such as “RoboTed needs to sleep now”,
  • draw attention to how designs can be modified to mitigate some risks, 
  • highlight the need for user engagement, and
  • reject some product functionality as too risky.
But ERA is not guaranteed to expose all ethical risks. It is a subjective process which will only be successful if the risk assessment team are prepared to think both critically and creatively about the question: what could go wrong? As Shannon Vallor and her colleagues write in their excellent Ethics in Tech Practice toolkit, design teams must develop the “habit of exercising the skill of moral imagination to see how an ethical failure of the project might easily happen, and to understand the preventable causes so that they can be mitigated or avoided”.
 
*Which won the conference best paper prize!