Wednesday, December 30, 2020

#heavencalling

    Now it’s personal. I’ve just had a phone call from my mom.  Fine, you might think, but it’s sure as hell not fine. She’s been dead 5 years.

    So, I’m a member of the LAPD CSI assigned to cyber crime. The case that landed on my desk a couple of weeks ago started as a complaint that folk were getting phone calls from dead relatives. At first we thought it was a joke. But after a couple of Hollywood celebs and the mayor of Pasadena started getting calls too – it got real serious real fast. The mayor called my chief: he was furious that someone was impersonating his eldest daughter: she died a couple of years ago in a freak surfing accident. It was only when the chief explained that it wasn’t a person that had called him, but an AI programmed to impersonate his daughter, that he calmed down a bit. Just a bit mind: according to my boss what he said went along the lines of “find out who these sons-of-bitches programmers are, I’m gonna sue the hell out of them”.

    Deepfakes have been around for 5 years or so. Mostly videos doctored with some famous actor’s face substituted for a slightly less famous face. Tom Cruise as spiderman – that kinda thing; mostly harmless.  After the mayor’s call the chief called a departmental meeting. She explained that – according to the DA – impersonation is not a misdemeanour: “Hell if it was that would make the whole entertainment industry a criminal enterprise.” That caused a cynical chuckle across the room. She went on, “nor is creating a fake AI based on a real person.” “Of course people are upset and angry – who wouldn’t be when they get a call from someone dear to them who also happens to be deceased – but upsetting people isn’t a crime.” 

    She looked at me. “Frank, what have you got so far?” “Not much chief”, I replied, “each call seems to be coming from a different number – my guess is they’re one-time numbers”. “Any idea who’s behind this?” she asked. “No – but since no-one is demanding money – my guess would be AI genius college kids doing this for a joke, or maybe their dissertations.” “Of course” I added, “they would need to be scraping the personal data from somewhere to construct the fakes, but so much hacked data is around on the dark web that wouldn’t be too hard.” “Ok good”, she said, “start talking to some college professors”. 

    Two days later I had the call.

    “Hello Frankie, it’s mom.”

    “Mom? But you’ve been gone 5 years.”

    “I know son. I just wanted to call to tell you I love you.”

    “But. Goddam. You sound just like Mom.”

    “Aren’t you pleased to hear from me Frankie?”

    “Yes... No. This isn’t right.”

    “How is Josie doing? And Taylor – she must’ve started college by now?”

    “Yes Josie is good, and Taylor’s ... no dammit I’m not gonna talk to a computer program.”

    “Aw, don’t be mad with your Mom.”

    At that point I hung up. But Jesus it was hard. I knew it wasn’t my Mom but the temptation to stay on the call just to hear her voice again was overwhelming. It took me a while to calm down. It’s only in the last year that I started to get over her passing. That call brought it all back: the pain, the anger that she had been taken too soon. We were real close.

    This fake was good – they had my Mom’s voice down to a tee – but how? Mom was a high school teacher not a celebrity. She wasn’t big on social media. Sure she used Facebook – who doesn’t – but that doesn’t record voice. Just about everything else mind – that’s where they would have gotten family names and relationships. Then I remembered that we bought her one of those smart speakers a year or so before she passed away. Arthritis made it hard for her to move around so we put in the speaker so she could make voice calls, listen to music or turn on the TV just by asking. She loved it. 

    Then the story broke in the press. Twitter was full of it: #heavencalling and #deadphone were just two of the hashtags; none of them even remotely funny to me. The pundits were all over the newscasts: AI experts gleefully explaining the technology while expressing a dishonest kind of smirking dismay “...of course no AI professional could possibly condone this kind of misuse.” Obviously they hadn’t had the call. 

    Of course the news channels also interviewed folk who had been called. Some were outraged, but more were very happy that they had been ‘chosen’ for a call from heaven. One lady was so pleased to have had a call from her late husband: “It was so wonderful to hear from Jimmy – to talk about old times and know that he’s happy in heaven”. Well I guess I shouldn’t have been surprised. The church pastors they interviewed were indignant. “The devil’s work” was the general tone. One even described it as ‘artificial witchcraft’.  They had good reason to be unhappy, seeing as they have exclusive rights to the intercession business.

    A day later I had an email back from one of the AI Profs at Caltech. I called him straight away and he told me he had a pretty good idea who might be behind this “deeply unethical AI” as he put it. A couple of star students had been working on what one of them had told him was a ‘really cool NLP project’. NLP – that’s natural language processing. He told me that he had already disabled their accounts on the Caltech supercomputer. This kind of real-time conversational AI uses huge amounts of computing power.

    A few hours later the chief and I were in the Dean’s office with the Professor and his two students. In the students I saw a younger me: bright but with that naïve innocence that blesses only those for whom nothing bad has ever happened.

    My chief explained to these two young men that, since no crime had been committed, we would not be pressing charges. But, she stressed, “What you did was not without consequence. The mayor and his wife were deeply distressed to receive a call from someone they thought was their deceased daughter. And my colleague here was mad as hell when he had a call from his late Mom.” From the look in their eyes they obviously had no idea they had set up a heaven call to a cop.  

    Then the Dean gave them one hell of a dressing down. At one point one of the students tried to interject that some of the recipients of the heaven calls had been very happy to be called, at which point the Prof stopped him immediately. “No. Regardless of how people reacted, your AI was a deception. And an egregious one too, as it exploited the vulnerability of grief.” Then he added, “Something that in time you too will experience.” The Dean told them that they should count themselves very lucky that the school had decided not to expel them, on condition that they personally apologise to everyone who had received a heaven call, starting right now with Officer Frank Aaronavitch here. After a very gracious apology, which I accepted, the Prof added that he would be requiring them to submit year papers on the ethics of their heaven calling AI.

    Six months have passed. Heaven calling blew over pretty quickly. Then I noticed a piece in the tech press about a new start-up – Heavenly AI – looking for VC. Sure enough the two founders are the same students we saw in the Dean’s office at Caltech. The article claims the company has an ethics-driven business model. Great, I thought. Then cynical me kicked in; give it six months and these guys are gonna get bought out by Facebook. Heaven forbid.


Previous stories: 

The Gift (2016) 
Word Perfect (2020) 

Monday, December 28, 2020

She had chosen well

For this story, written as the second exercise in my Writing Short Stories course back in June, I attempted a story without dialogue. I love dialogue, so I expected to find this difficult, which it was. In the story I try to imagine what it might have been like to experience an extinction event, in an effort to capture a sense of being in the liminal state from a limited first-person (or rather animal) perspective.


     She had chosen well.  

    The burrow she shared with her litter was lodged within the vaulted foundations of a mighty tree. The tree had taken root in rocky soil long before her time, its vascular organs splitting the rock enough to allow her to excavate tunnels and chambers three seasons ago. 

    It had been a good spring. Her pups had almost weaned and were growing fat on insects and berries. Even the reckling was looking healthy. He was a survivor, escaping the quick-feathered hunters with sharp eyes and sharper teeth that had taken two of her litter a few moons ago.

    In her world there was much to fear. Death came in many ways: quick from the sharp-teeth or sky-claws; slow from starvation or thirst (the nearest spring was a perilous journey - although she had learned from her mother how to harvest the prickly watery green leaves which grew close to the burrow). But this hillside had one advantage; it was too high and steep for the long-necked ground-shakers that crashed and bellowed through the valley below from time to time.

    The moons passed and, as the nights started to lengthen, she began to harvest the nuts, green leaves and tubers, storing these in dry clean chambers close to the comfortable living nest.  Something – perhaps the unusual bounty of the season – made her collect more this summer.

    It was a warm dusk. After a good night’s forage she and her pups had spent the day sleeping full-bellied in the cool of the burrow. Her pups were now almost full grown and the biggest and boldest were restless to leave. Two, a brother and sister, moved to the burrow entrance with a purpose that she knew from her own time so, with a touch of their noses, mother and eldest made their farewells.  

    Then, just a few moments after she had returned to the nest chamber, the ground shook. But this was not the rhythmic shaking of the long-necks in the valley.  Nor was it the noisy anger of the fire mountain that turned their nights red from time to time. This was different: a silent deep tremor that felt as if it was coming from the belly of the earth. The tremor grew to a crescendo. Terrified the small family nest-huddled as the tree roots groaned while soil and stones rained upon them. Then it was still.

    They waited. She lifted her head and sensed around. The nest air was full of dust. She felt the silence then realised that the breeze-scent of outside was gone. She knew something was wrong, ran to the entrance tunnel and found it blocked with stones and earth. Fear rising she started to dig. She was a good digger with powerful front claws. She dug and dug until she started to feel weak, then – rest-pausing – she heard a scraping sound. A few moments later the soil and stones ahead broke apart and there was her eldest daughter. With joy and relief they touched noses, but she sensed a sadness that told her that her eldest son was gone. 

    Together mother and daughter cleared the spoil from the entrance tunnel, then – followed by the rest of the pups – they emerged, cautiously, into the night. There was no moon. Instead the sky clouds were lit high with lurid reds, greens and purples, yet – she noticed – the fire mountain was silent. The night was quiet at first although some familiar sounds slowly returned: the bellows of the long-necks in the valley below and skyward the distant cries of the sky-claws. The family fed and foraged and still fearful returned to the nest before dawn.

    After sleeping most of the day the nest family was awakened by a long roar of thunder that seemed to roll in from afar and rush over them before receding into the distance. She had heard thunder before but never like this. As it passed it hit their tree – although not with the long shake of the sleep-day before – but with a great cracking crash that was the last thing they heard for a while. She felt an ear-pain she had never before experienced, and so – it seemed – had her pups. Dazed, deafened and frightened they did not venture out of the burrow that night.

    Restless and hungry the family stirred again before dusk the following day. She was relieved that the ear-pain had gone and her sound sense restored. Cautiously they emerged from the burrow entrance to find that their small exit platform was now a tangle of branch and leaf. Luckily it was not dense, and they quickly made a path through to the open hillside.  What they saw by the dull grey light of dusk was a world changed. No tree was left standing, including their home tree – indeed it was that tree that now provided their exit canopy.

    They sensed something moving nearby, then saw one of the sky-claws fallen onto a prickle leaf bush; it was broken winged and near death, but still able to fix them with its sharp eye. They had never before seen one of these creatures close up and – even in its death throes – their terror of its kind was undimmed, so they quickly retreated into the exit canopy and nervously fed on insects and home tree nuts.

    The next two nights, alerted by the bad tempered chirruping of sharp-teeth feeding on the sky-claw, they did not stray outside the home thicket. She noticed that the nights were cold: too cold for this early in the autumn. A few nights later the sky-claw was joined in death by the sharp-teeth, and the nest family were able to feast on the insects drawn to the carrion. But their forages were short as it was too cold to stay out for more than a few mouthfuls before returning to the warm of the nest. A few nights later even the carrion insects were gone, as the corpses had frozen. 

    With a deep sense of unease the nest family settled for their long winter sleep.


© Alan Winfield 2020


Previous stories:

The Gift (2016)

Word Perfect (2020)

Saturday, December 26, 2020

Word Perfect

Back in June I signed up for an online course on Writing Short Stories (the next steps), run by the Bishopsgate Institute. The course was excellent. There were six weekly Zoom sessions of about 2 hours each, with 8 students led by Barbara Marsh, who is a wonderful tutor. I can honestly say that I enjoyed every minute. There was a fair bit of homework including, of course, writing - and a major segment of each class was critiquing each other's work.

Here below is the first of the three new stories I drafted for the course.


Word Perfect

    “Who the fuck are you?”

    “Don’t you recognise me? I’m you.”

    “Oh fuck off. I’ve never seen you before in my life.”

    “Yes you have – every time you look in the mirror.”

    I wasn’t listening of course. I never did then. I was foul mouthed, arrogant, and full of myself (full of shit actually). I was a first year student: physics at Oxford. Won a scholarship for genius working class kids. Something I never failed to tell everyone.

    “You’re full of shit. What do you want?”

    He paused a moment and looked me in the eye. “I want to talk, you fucker.”

    Now this old guy had my attention. The only person I knew who said ‘you fucker’ was me. It was (and still is) something I only say to close friends: a kind of insult of endearment.

    I was speechless (which didn’t happen often). For the first time I looked hard at him. Same height and build as me. Clean-shaven and almost bald: not bad looking. Fuck, I thought, he could be my dad. But he died four years ago.

    He read my mind. “No John, I’m not a ghost. I’m you age 60.”

    I may have been a shit, but I was a quick learner. “So, you – future me – have invented time travel? Whoa – that’s so cool. But wait, should you be here – aren’t you changing the future or something?”

    “Yes there are risks, but the risks of me not having this conversation are far greater”. Older me then took something out of his pocket – a kind of glass tablet – he prodded it with his finger and looked at the display. “Look – I haven’t got long – the energy costs of time travel are colossal. Another 10 minutes”.

    He then sat down and talked fast. I listened hard. I asked him if I could take notes. “No please don’t – what I’m about to tell you is dangerous – it’s super important no one knows anything about this conversation”. (‘Super important’ – that’s another thing I say.)

    Older me explained that yes, he had invented a time machine. It had made him famous. Protocols (rules – he clarified – 20-year-old me didn’t know about protocols) had been established.  Following international ethical approval the time machine had been used three times to travel way back in time to settle deep scientific questions about evolution. 

    “Whoa – did you see the dinosaurs?” No, he said. “Only one person can travel and I’m not a palaeontologist”.  But, he said, “one trip was to the Cambrian – far more interesting and controversial than the Jurassic or the Cretaceous”.

    “Now”, older me said, “listen carefully”. “We’re in great danger – some very rich and powerful men are doing everything they can to build another machine.”

    “Why? What do they want to do?”

    “They intend to change history. You see they are white supremacists. They want to go back in time and stop the abolition of slavery. They’re not just racists, they also hate women, so they also want to go back and make sure women – and commoners like us – never get the vote. In short they want to turn the political clock back to the 18th century”.

    “Shit”, I said, “that’s really fucked up.”

    “Yes it is. And that’s why you must not invent the time machine.” Older me said those last words very slowly. I’ve never heard anyone then or since be any more serious than he was.

    Then, anticipating precisely what I was about to say: “John, I know you’re a determinist – that you don’t believe in free will. But you will change your mind. Free will is real and the choices you make have consequences.”

    “The burden you – we – bear is that those choices are perhaps the most important in the history of humanity.”

    I joked: “So, I guess if I make the wrong choice we’ll be having this conversation again?” 

    “Yes, exactly”, he said – still deadly serious, “in fact this might not be the first time.” As if I wasn’t already freaked out enough by this whole conversation – that took me to the freaked out equivalent of Defcon 1.

    Then his face brightened up. “Goodbye, you fucker” he said, and vanished.


    I write this age 60, forty years to the day that I met future me. I have thought about that conversation every day. Often doubting it happened at all. I had so many questions – enough to sustain a career.

    Yes I did a PhD in theoretical physics and won a bunch of prizes. My work was on the structure of space-time, and rumour has it I’ve been nominated for a Nobel. I did sketch out one paper setting out practical steps toward time travel but deleted the paper before anyone else even saw it. 

    The world is still fucked up of course, but things could have been so much worse if I had not taken older me’s advice. 

    As to those questions – it didn’t take me long to figure out that older me vanished as soon as he convinced me to take his advice: at that moment the time machine that brought him back to meet me no longer existed. But I will never know how many times he failed to persuade me. My guess is that each time we had that conversation older me tried out a different script – until it was word perfect. The bit about “I haven’t got long ... only 10 minutes” was bullshit. After god knows how many repeats the fucker knew exactly when to say goodbye.


© Alan Winfield 2020


Previous stories:

The Gift (2016)

Sunday, October 25, 2020

RoboTED: a case study in Ethical Risk Assessment

A few weeks ago I gave a short paper* at the excellent International Conference on Robot Ethics and Standards (ICRES 2020), outlining a case study in Ethical Risk Assessment - see our paper here. Our chosen case study is a robot teddy bear, inspired by one of my favourite movie robots: Teddy, in A.I. Artificial Intelligence.


Although Ethical Risk Assessment (ERA) is not new - it is, after all, what research ethics committees do - the idea of extending traditional risk assessment, as practised by safety engineers, to cover ethical risks is new. ERA is, I believe, one of the most powerful tools available to the responsible roboticist, and happily we already have a published standard setting out a guideline on ERA for robotics in BS 8611, published in 2016.

Before looking at the ERA, we need to summarise the specification of our fictional robot teddy bear: RoboTed. First, RoboTed is based on the following technology:

  • RoboTed is an Internet (WiFi) connected device, 
  • RoboTed has cloud-based speech recognition and conversational AI (chatbot) and local speech synthesis,
  • RoboTed’s eyes are functional cameras allowing RoboTed to recognise faces,
  • RoboTed has motorised arms and legs to provide it with limited baby-like movement and locomotion.
And second RoboTed is designed to:

  • Recognise its owner, learning their face and name and turning its face toward the child.
  • Respond to physical play such as hugs and tickles.
  • Tell stories, while allowing a child to interrupt the story to ask questions or ask for sections to be repeated.
  • Sing songs, while encouraging the child to sing along and learn the song.
  • Act as a child minder, allowing parents to remotely listen, watch and speak via RoboTed.
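
To make the split between cloud and on-board processing concrete, here is a minimal Python sketch of the interaction loop implied by the specification above. To be clear, this is purely illustrative: the function names are hypothetical placeholders of my own, not RoboTed code.

# Illustrative only: a minimal sketch of the interaction loop implied by the
# specification above. The function names are hypothetical placeholders.

def listen():
    """Capture audio from the toy's microphone (stub)."""
    return b""  # raw audio bytes

def cloud_speech_to_text(audio):
    """Send the audio to a cloud speech-recognition service (stub)."""
    return "tell me a story"

def cloud_chatbot_reply(text, child_name):
    """Ask the cloud-hosted conversational AI for a reply (stub)."""
    return "Once upon a time, %s..." % child_name

def local_text_to_speech(text):
    """Speak the reply using the on-board speech synthesiser (stub)."""
    pass

def interaction_loop(child_name="Robin"):
    # One turn of the conversation. Note that the child's speech leaves the
    # device for cloud processing, while speech synthesis stays local - a
    # distinction that matters for the privacy and transparency risks below.
    audio = listen()
    heard = cloud_speech_to_text(audio)
    reply = cloud_chatbot_reply(heard, child_name)
    local_text_to_speech(reply)

interaction_loop()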
The tables below summarise the ERA of RoboTED for (1) psychological, (2) privacy & transparency and (3) environmental risks. Each table has 4 columns, for the hazard, risk, level of risk (high, medium or low) and actions to mitigate the risk. BS8611 defines an ethical risk as the “probability of ethical harm occurring from the frequency and severity of exposure to a hazard”; an ethical hazard as “a potential source of ethical harm”, and an ethical harm as “anything likely to compromise psychological and/or societal and environmental well-being".


(1) Psychological Risks

(2) Security and Transparency Risks

(3) Environmental Risks

For a more detailed commentary on each of these tables see our full paper - which also, for completeness, covers physical (safety) risks.
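
To give a flavour of how a row in one of these tables might be expressed, here is a hedged Python sketch: one illustrative row in the same hazard / risk / level / mitigation shape, with a simple likelihood-times-severity scoring. The scoring and thresholds are my own simplification for illustration - BS 8611 does not prescribe them - and the row is an example rather than a verbatim entry from our paper.

# Illustrative only: one row of an ethical risk assessment in the same
# hazard / risk / level / mitigation shape as the tables above. The
# likelihood-times-severity scoring and thresholds are a simplification,
# not taken from BS 8611.

def risk_level(likelihood, severity):
    """Map likelihood and severity scores (1-5 each) to a coarse level."""
    score = likelihood * severity
    if score >= 15:
        return "high"
    elif score >= 6:
        return "medium"
    return "low"

attachment_row = {
    "hazard": "Child becomes emotionally attached to RoboTed",
    "risk": "Distress if the toy is withdrawn or fails",
    "level": risk_level(likelihood=4, severity=3),  # -> "medium"
    "mitigation": "Limit session length, e.g. 'RoboTed needs to sleep now'",
}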

And here are the slides from my short ICRES 2020 presentation:


Through this fictional case study we argue we have demonstrated the value of ethical risk assessment. Our RoboTed ERA has shown that attention to ethical risks can
  • suggest new functions, such as “RoboTed needs to sleep now”,
  • draw attention to how designs can be modified to mitigate some risks, 
  • highlight the need for user engagement, and
  • reject some product functionality as too risky.
But ERA is not guaranteed to expose all ethical risks. It is a subjective process which will only be successful if the risk assessment team are prepared to think both critically and creatively about the question: what could go wrong? As Shannon Vallor and her colleagues write in their excellent Ethics in Tech Practice toolkit, design teams must develop the “habit of exercising the skill of moral imagination to see how an ethical failure of the project might easily happen, and to understand the preventable causes so that they can be mitigated or avoided”.
 
*Which won the conference best paper prize!


Thursday, August 20, 2020

"Why Did You Just Do That?" Explainability and Artificial Theory of Mind for Social Robots

This week I have been attending (virtually) the excellent RoboPhilosophy conference, and this morning gave a plenary talk "Why did you just do that?" Here is the abstract:
An important aspect of transparency is enabling a user to understand what a robot might do in different circumstances. An elderly person might be very unsure about robots, so it is important that her assisted living robot is helpful, predictable – never does anything that puzzles or frightens her – and above all safe. It should be easy for her to learn what the robot does and why, in different circumstances, so that she can build a mental model of her robot. An intuitive approach would be for the robot to be able to explain itself, in natural language, in response to spoken requests such as “Robot, why did you just do that?” or “Robot, what would you do if I fell down?” In this talk I will outline current work, within project RoboTIPS, to apply recent research on artificial theory of mind to the challenge of providing social robots with the ability to explain themselves. 
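
By way of illustration only, here is a minimal Python sketch of the kind of query the talk is concerned with. It is emphatically not the RoboTIPS implementation - a real system would draw on artificial theory of mind and simulation-based internal models - but it shows the shape of a "why did you just do that?" exchange using nothing more than a log of actions and reasons.

# Illustrative only: a hypothetical, minimal sketch of the kind of query the
# talk discusses. The robot simply keeps a log of (action, reason) pairs and
# reads the most recent entry back on request.

class ExplainableRobot(object):
    def __init__(self):
        self.log = []  # list of (action, reason) tuples

    def act(self, action, reason):
        self.log.append((action, reason))
        # ... motor commands would be issued here ...

    def why_did_you_just_do_that(self):
        if not self.log:
            return "I have not done anything yet."
        action, reason = self.log[-1]
        return "I %s because %s." % (action, reason)

robot = ExplainableRobot()
robot.act("moved towards you", "I predicted you were about to fall")
print(robot.why_did_you_just_do_that())
# -> I moved towards you because I predicted you were about to fall.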
And here are the slides:


Here are links to the movies:


And here are the papers referenced in the talk, with links:
  1. Jobin, A., Ienca, M. and Vayena, E. (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 389–399.
  2. Winfield, A. F. (2019) Ethical standards in robotics and AI. Nature Electronics 2, 46–48. Pre-print here.
  3. Winfield, A. F. (2018) Experiments in Artificial Theory of Mind: from safety to story telling. Frontiers in Robotics and AI 5:75.
  4. Blum, C., Winfield, A. F. and Hafner, V. V. (2018) Simulation-based internal models for safer robots. Frontiers in Robotics and AI 4(74), pp. 1-17.
  5. Vanderelst, D. and Winfield, A. F. (2018) An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research 48, pp. 56-66.
  6. Winfield, A. F. T. (2018) When Robots Tell Each Other Stories: The Emergence of Artificial Fiction. In: Walsh, R. and Stepney, S. (eds) Narrating Complexity. Springer, Cham. Preprint here.
  7. Winfield, A. F. and Jirotka, M. (2017) The case for an ethical black box. In: Gao, Y. et al. (eds) Towards Autonomous Robot Systems, LNCS 10454, pp. 262-273, Springer. Preprint here.
  8. Winfield, A. F. T., Winkle, K., Webb, H., Lyngs, U., Jirotka, M. and Macrae, C., Robot Accident Investigation: a case study in Responsible Robotics, chapter submitted to RoboSoft.
and mentioned in the Q&A:
  1. Winfield, A. F., Michael, K., Pitt, J. and Evers, V. (2019) Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]. Proceedings of the IEEE 107(3), pp. 509-517.
  2. Vanderelst, D. and Winfield, A. F. (2018) The Dark Side of Ethical Robots. AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 317–322.

Monday, August 10, 2020

Back to robot coding part 1: hello world

One of the many things I promised myself when I retired nearly two years ago was to get back to some coding. Why? Two reasons: one is that writing and debugging code is hugely satisfying - for those like me not smart enough to do pure maths or theoretical physics - it's the closest you can get to working with pure mind stuff. But the second is that I want to prototype a number of ideas in cognitive robots which tie together work in artificial theory of mind and the ethical black box, with old ideas on robots telling each other stories and new ideas on how social robots might be able to explain themselves in response to questions like "Robot: what would you do if I forget to take my medicine?"

But before starting to work on the robot (a NAO) I first needed to learn Python, so I completed most of Codecademy's excellent Learn Python 2 course over the last few weeks. I have to admit that I started learning Python with big misgivings over the language. I especially don't like the way Python plays fast and loose with variable types, allowing you to arbitrarily assign a thing (integer, float, string, etc) to a variable and then assign a different kind of thing to the same variable; very different to the strongly typed languages I have used since student days: Algol 60, Algol 68, Pascal and C. However, there are things I do like: the use of indentation as part of the syntax for instance, and lots of nice built-in functions like range(), so x = range(0,10) puts a list ('array' in old money) of integers from 0 to 9 in x.
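
For anyone who hasn't met Python 2.7, here is a tiny example of those two points - dynamic typing and range() returning a list. (In Python 3 range() returns a lazy range object instead, but the NAO SDK requires Python 2.7.)

# Python 2.7, as used in this post: a variable can be re-bound to values of
# different types, and range() returns an actual list.
x = 42          # x holds an int
x = "now text"  # ...and can silently be re-bound to a str
x = 3.14        # ...or a float

xs = range(0, 10)  # in Python 2 this is the list [0, 1, ..., 9]
assert len(xs) == 10
assert xs[0] == 0 and xs[-1] == 9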

So, having got my head around Python I finally made a start with the robot on Thursday last week. I didn't get far and it was *very* frustrating. 

Act 1: setting up on my Mac

Attempting to set things up on my elderly Mac air was a bad mistake which sent me spiralling down a rabbit hole of problems. The first thing you have to do is download and unzip the NAO API, called naoqi, from Aldebaran. The same web page then suggests you simply try to import naoqi from within Python, and if there are no errors all's well.

As soon as I got the export path commands right, import naoqi resulted in the following error:

...
Reason: unsafe use of relative rpath libboost_python.dylib in /Users/alansair/Desktop/naoqi/pynaoqi-python2.7-2.1.4.13-mac64/_qi.so with restricted binary

According to Stack Overflow this problem is caused by Mac OS X's System Integrity Protection (SIP).

Then (somewhat nervously) I tried turning SIP off, as instructed here.

But import naoqi still failed, this time with a different error. Perhaps it's because my Python is in the wrong place: the Aldebaran page says it must be at /usr/local/bin/python (the default on the Mac is /usr/bin). OK, so I reinstalled Python 2.7 from Python.org so that it is in /usr/local/bin/python. But now I get another error message:

>> import naoqi
Fatal Python error: PyThreadState_Get: no current thread
Abort trap: 6

A quick search and I read: "this error shows up when a module tries to use a python library that is different than the one the interpreter uses, that is, when you mix two different pythons. I would run otool -L <dyld> on each of the dynamic libraries in the list of Binary Images, and see which ones is linked to the system Python."

At which point I admitted defeat.

Act 2: setting up on my Linux machine

Once I had established that the Python on my Linux machine was also the required version 2.7, I then downloaded and unzipped the NAO API, this time for Linux.

This time I was able to import naoqi with no errors, and within just a few minutes ran my first NAO program: hello world

from naoqi import ALProxy
# Connect to the robot's text-to-speech module: the arguments are the module
# name, the robot's IP address and the NAOqi port (9559 by default)
tts = ALProxy("ALTextToSpeech", "164.168.0.17", 9559)
tts.say("Hello, world!")

whereupon my NAO robot spoke the words "Hello world". Success!
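
For what it's worth, here is a slightly more defensive variation I would suggest - this is my own sketch, not from the Aldebaran documentation - which keeps the robot's address in one place and fails with a readable message if the connection to NAOqi cannot be made. The IP address is of course specific to my robot.

# A more defensive variation (my suggestion, not from the Aldebaran docs)
from naoqi import ALProxy

NAO_IP = "164.168.0.17"   # replace with your robot's IP address
NAO_PORT = 9559           # default NAOqi port

def say(text, ip=NAO_IP, port=NAO_PORT):
    try:
        tts = ALProxy("ALTextToSpeech", ip, port)
    except Exception as e:
        print("Could not connect to NAOqi at %s:%d (%s)" % (ip, port, e))
        return
    tts.say(text)

say("Hello, world!")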

Friday, June 05, 2020

Robot Accident Investigation

Yesterday I gave a talk at the ICRA 2020 workshop Against Robot Dystopias. The workshop should have been in Paris but - like most academic meetings during the lockdown - was held online. In the Zoom chat window toward the end of the workshop many of us were wistfully imagining continued discussions in a Parisian bar over a few glasses of wine. Next year I hope. The workshop was excellent and all of the talks should be online soon.

My talk was an extended version of last year's talk for AI@Oxford What could possibly go wrong. With results from our new paper Robot Accident Investigation, the talk outlines a fictional investigation of a fictional robot accident. We had hoped to stage the mock accident, in the lab, with human volunteers and report a real investigation (of a mock accident) but the lockdown put paid to that too. So we have had to use our imagination and construct - I hope plausibly - the process and findings of the accident investigation.

Here is the abstract of our paper.
Robot accidents are inevitable. Although rare, they have been happening since assembly-line robots were first introduced in the 1960s. But a new generation of social robots are now becoming commonplace. Often with sophisticated embedded artificial intelligence (AI) social robots might be deployed as care robots to assist elderly or disabled people to live independently. Smart robot toys offer a compelling interactive play experience for children and increasingly capable autonomous vehicles (AVs) the promise of hands-free personal transport and fully autonomous taxis. Unlike industrial robots which are deployed in safety cages, social robots are designed to operate in human environments and interact closely with humans; the likelihood of robot accidents is therefore much greater for social robots than industrial robots. This paper sets out a draft framework for social robot accident investigation; a framework which proposes both the technology and processes that would allow social robot accidents to be investigated with no less rigour than we expect of air or rail accident investigations. The paper also places accident investigation within the practice of responsible robotics, and makes the case that social robotics without accident investigation would be no less irresponsible than aviation without air accident investigation.
And the slides from yesterday's talk:




Special thanks to project colleagues and co-authors: Prof Marina Jirotka, Prof Carl Macrae, Dr Helena Webb, Dr Ulrik Lyngs and Katie Winkle.

Monday, April 20, 2020

Autonomous Robot Evolution: an update

It's been over a year since my last progress report from the Autonomous Robot Evolution (ARE) project, so an update on the ARE Robot Fabricator (RoboFab) is long overdue. There have been several significant advances. First is integration of each of the elements of RoboFab. Second is the design and implementation of an assembly fixture, and third significantly improved wiring. Here is a CAD drawing of the integrated RoboFab.

The ARE RoboFab has four major subsystems: up to three 3D printers, an organ bank, an assembly fixture and a centrally positioned robot arm (multi-axis manipulator). The purpose of each of these subsystems is outlined as follows:
  • The 3D printers are used to print the evolved robot’s skeleton, which might be a single part, or several. With more than one 3D printer we can speed up the process by 3D printing skeletons for several different evolved robots in parallel, or – for robots with multi-part skeletons – each part can be printed in parallel.
  • The organ bank contains a set of pre-fabricated organs, organised so that the robot arm can pick organs ready for placing within the part-built robot. For more on the organs see previous blog post(s).
  • The assembly fixture is designed to hold (and if necessary rotate) the robot’s core skeleton while organs are placed and wired up.
  • The robot arm is the engine of RoboFab. Fitted with a special gripper, the robot arm is responsible for assembling the complete robot.
And here is the Bristol RoboFab (there is a second identical RoboFab in York):


Note that the assembly fixture is mounted upside down at the top front of the RoboFab. This has the advantage that there is a reasonable volume of clear space for assembly of the robot under the fixture, which is reachable by the robot arm.

The fabrication and assembly sequence has six stages (sketched in code after the list):
  1. RoboFab receives the required coordinates of the organs and one or more mesh file(s) of the shape of the skeleton.
  2. The skeleton is 3D printed.
  3. The robot arm fetches the core ‘brain’ organ from the organ bank and clips it into the skeleton on the print bed. This is a strong locking clip.
  4. The robot arm then lifts the core organ and skeleton assemblage off the print bed, and attaches it to the assembly fixture. The core organ has metal disks on its underside which are used to secure the assemblage to the fixture with electromagnets.
  5. The robot arm then picks and places the required organs from the organ bank, clipping them into place on the skeleton.
  6. Finally the robot arm wires each organ to the core organ, to complete the robot.
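
To make the sequence concrete, here is a hedged Python sketch of the six stages as stub functions. The function names are hypothetical and this is not the ARE project's actual RoboFab control code - just an outline of the orchestration.

# Illustrative only: hypothetical stubs mirroring the six stages above.

def print_skeleton(mesh_files):
    """Stage 2: 3D print the evolved skeleton (one or more parts)."""
    pass

def fetch_and_clip_core_organ():
    """Stage 3: fetch the core 'brain' organ and clip it to the skeleton."""
    pass

def mount_on_fixture():
    """Stage 4: lift the assemblage onto the fixture (held by electromagnets)."""
    pass

def place_organs(organ_coordinates):
    """Stage 5: pick and place each remaining organ from the organ bank."""
    pass

def wire_organs(organ_coordinates):
    """Stage 6: pull each organ's coiled cable and plug it into the core organ."""
    pass

def fabricate(organ_coordinates, mesh_files):
    """Stage 1: receive the organ coordinates and skeleton mesh file(s),
    then run the remaining stages in order."""
    print_skeleton(mesh_files)
    fetch_and_clip_core_organ()
    mount_on_fixture()
    place_organs(organ_coordinates)
    wire_organs(organ_coordinates)

fabricate(organ_coordinates=[(0.1, 0.2), (0.3, 0.2)], mesh_files=["skeleton.stl"])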



Here is a complete robot, fabricated, assembled and wired by the RoboFab. This evolved robot has a total of three organs: the core ‘brain’ organ, and two wheel organs.
Note especially the wires connecting the wheel organs to the core organ. My colleague Matt has come up with an ingenious design in which a coiled cable is contained within the organ. After the organs have been attached to the skeleton (stage 5), the robot arm in turn grabs each organ's jack plug and pulls the cable to plug into the core organ (stage 6). This design minimises the previously encountered problem of the robot gripper getting tangled in dangling loose wires during stage 6.

And here is a video clip of the complete process:



Credits

The work described here has been led by my brilliant colleague Matt Hale, very ably supported by York colleagues Edgar Buchanan and Mike Angus. The only credit I can take is that I came up with some of the ideas and co-wrote the bid that secured the EPSRC funding for the project.

References

For a much more detailed account of the RoboFab see this paper, which was presented at ALife 2019 last summer in Newcastle: The ARE Robot Fabricator: How to (Re)produce Robots that Can Evolve in the Real World.

Related blog posts

First automated robot assembly (February 2019)
Autonomous Robot Evolution: from cradle to grave (July 2018)
Autonomous Robot Evolution: first challenges (Oct 2018)