tag:blogger.com,1999:blog-204022732024-03-18T06:38:01.784+00:00Alan Winfield's Web LogMostly, but not exclusively, about robotsAlan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.comBlogger210125tag:blogger.com,1999:blog-20402273.post-89710246582052264702022-05-16T14:08:00.001+01:002022-05-16T22:03:09.350+01:00A Draft Open Standard for an Ethical Black Box<p>About 5 years ago we proposed that all robots should be fitted with the robot equivalent of an aircraft Flight Data Recorder to continuously record sensor and relevant internal status data. <a href="#">We call this an ethical black box (EBB)</a>. We argued that an ethical black box will play a key role in the <a href="https://alanwinfield.blogspot.com/2020/06/robot-accident-investigation.html" target="_blank">processes of discovering why and how a robot caused an accident</a>, and is thus an essential part of establishing accountability and responsibility.<br /><br />Since then, within the RoboTIPS project, we have developed and tested several model EBBs, including <a href="#">one for an e-puck robot that I wrote about in this blog</a>, and another for the <a href="#">MIRO robot</a>. With some experience under our belts, we have now drafted an Open Standard for the EBB for social robots - <a href="https://arxiv.org/abs/2205.06564" target="_blank">initially as a paper</a> submitted to the International Conference on Robot Ethics and Standards. Let me now explain first why we need a standard, and second why it should be an open standard.<br /></p>Why do we need a standard specification for an EBB? As we outline in our new paper, there are four reasons:<ol style="text-align: left;"><li>A standard approach to EBB implementation in social robots will greatly benefit accident and incident (near miss) investigations. </li><li>An EBB will provide social robot designers and operators with data on robot use that can support both debugging and functional improvements to the robot. 
</li><li>An EBB can be used to support robot ‘explainability’ functions to allow, for instance, the robot to answer ‘Why did you just do that?’ questions from its user. And,</li><li>A standard allows EBB implementations to be readily shared and adapted for different robots and will, we hope, encourage manufacturers to develop and market general purpose robot EBBs.</li></ol><p> And why should it be an Open Standard? Bruce Perens, author of <a href="https://opensource.com/resources/what-are-open-standards" target="_blank">The Open Source Definition</a>, outlines a number of criteria an open standard must satisfy, including:</p><ul style="text-align: left;"><li>“<b>Availability</b>: Open standards are available for all to read and implement.<br /></li><li><b>Maximize End-User Choice</b>: Open Standards create a fair, competitive market for implementations of the standard.</li><li><b>No Royalty</b>: Open standards are free for all to implement, with no royalty or fee.</li><li><b>No Discrimination</b>: Open standards and the organizations that administer them do not favor one implementor over another for any reason other than the technical standards compliance of a vendor’s implementation.</li><li><b>Extension or Subset</b>: Implementations of open standards may be extended, or offered in subset form.”</li></ul><p>These are <i>good</i> reasons.</p><p>The most famous and undoubtedly the most impactful Open Standards are those that specified Internet protocols, such as <a href="https://en.wikipedia.org/wiki/File_Transfer_Protocol" target="_blank">FTP</a> and <a href="https://en.wikipedia.org/wiki/Email" target="_blank">email</a>. 
They were, and still are, called <a href="https://en.wikipedia.org/wiki/Request_for_Comments" target="_blank">Requests for Comments (RFCs)</a> to reflect the fact that they were - especially in the early years - drafts for revision. As a mark of respect we also regard our draft 0.1 Open Standard for an EBB for Social Robots as an RFC. <a href="https://arxiv.org/pdf/2205.06564.pdf" target="_blank">You can find draft 0.1 in Annex A of the paper on arXiv here.</a> <br /></p><p>Not only is this a first draft, it is also incomplete, covering only the specification of the data that should be saved in an EBB for social robots, and its format. Given that the EBB data specification is at the heart of the EBB standard, we feel that this is sufficient to be opened up for comments and feedback. We will continue to extend the specification, with subsequent versions also published on arXiv.</p><p>Please feel free to either submit comments to this blog post (best because everyone can see the comments), or contact me directly via email. All constructive comments that result in revisions to the standard will be acknowledged in the standard.<br /></p>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-63084177500621879902022-04-12T10:02:00.410+01:002022-05-10T19:51:06.177+01:00Our first mock social robot accident and investigation<p>Robot accidents are inevitable. These days the likelihood of serious accidents involving industrial robots is pretty low (but not zero), because such robots are generally inside safety cages. But a newer generation of social robots - robots designed to interact directly with people, including vulnerable elderly people or children - means that accidents are now much more likely. And if we also take into account ethical harms alongside physical harms, then the potential for accidents increases still further. 
Psychological harms include addiction, over-trusting, or deception, and societal harms include privacy violations. For more on these ethical harms see my blog post outlining an <a href="https://alanwinfield.blogspot.com/2020/10/roboted-case-study-in-ethical-risk.html" target="_blank">ethical risk assessment of a smart robot teddy bear</a>.</p><p>It has puzzled me for some years that there has been almost no research on robot accident investigation. In the <a href="https://alanwinfield.blogspot.com/2020/10/roboted-case-study-in-ethical-risk.html" target="_blank">RoboTIPS</a> project we are addressing this deficit by developing both the technology - which we call an <a href="https://alanwinfield.blogspot.com/2017/08/the-case-for-ethical-black-box.html">Ethical Black Box (EBB)</a> - and the processes of robot accident investigation. One of the most exciting aspects of RoboTIPS is that we're running a series of mock, i.e. staged, social robot accidents in order to road test the EBB and investigation processes in as close to a real situation as is feasible in a research project. RoboTIPS started in March 2019, but then just as we were ready to trial our first mock accident the Covid pandemic hit and closed down the lab.</p><p>So it was great that last week we finally managed to run a pilot of our first (of three) mock accident scenarios. 
The scenario, based around an assisted living robot helping an elderly person to live independently, was <a href="https://alanwinfield.blogspot.com/2019/09/whats-worst-that-could-happen-why-we.html" target="_blank">sketched out in late 2019</a>, and then - during the lockdown - rehearsed in a number of online events, including a <a href="https://podfollow.com/928408356/episode/775b73c3f9f7b42c70c6d78a24d8e9994f76163c/view" target="_blank">podcast radio play for Oxford Sparks</a> and CSI Robot during the <a href="https://www.roboticstomorrow.com/news/2021/05/25/announcing-uk-festival-of-robotics-2021-new-7-day-public-celebration-of-all-things-robotic-from-19%E2%80%9325-june/16911/" target="_blank">UKRAS Festival of Robotics 2021</a>. </p><p>Here is the scenario:</p><p><i></i></p><blockquote><i>Imagine that your elderly mother, or grandmother, has an assisted
living robot to help her live independently at home. The robot is
capable of fetching her drinks, reminding her to take her medicine and
keeping in touch with family. Then one afternoon you get a call from a
neighbour who has called round and sees your grandmother collapsed on
the floor. When the paramedics arrive they find the robot wandering
around apparently aimlessly. One of its functions is to call for help if
your grandmother stops moving, but it seems that the robot failed to do
this.</i></blockquote>To enact this scenario we needed a number of volunteers: one to act as Rose - the subject of the accident, a second as the neighbour who discovers the accident and raises the alarm, a third as the paramedic who attends to Rose, a fourth who acts in the role of the cleaner and a fifth in the role of manager of the group of homes in which Rose lives. We also needed volunteers to act as members of the accident investigation team who are called in to try and discover what happened, why it happened and, if possible, what changes need to be made to ensure the accident doesn't happen again.<p></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPaYzZJYN3uqtLLgkCblJSEo6ZISa3IGrQVphRBcMIPZ1Fi69iV2bIejZ7O6pohjZlcMe0Q4u22QOLoacmEWG_HIcqtBW71ijtwV5oRniuVFbj-tQvmGQG0XUknvZQIpJJsgQzDQdJt0a3Y7hiUixrVgDZlWAVHZGsWJgU7hya4ZHHcL9-7Ys/s1853/RoboTIPS_Mock_i.jpg" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="865" data-original-width="1853" height="186" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPaYzZJYN3uqtLLgkCblJSEo6ZISa3IGrQVphRBcMIPZ1Fi69iV2bIejZ7O6pohjZlcMe0Q4u22QOLoacmEWG_HIcqtBW71ijtwV5oRniuVFbj-tQvmGQG0XUknvZQIpJJsgQzDQdJt0a3Y7hiUixrVgDZlWAVHZGsWJgU7hya4ZHHcL9-7Ys/w400-h186/RoboTIPS_Mock_i.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: inherit; font-size: small;">This is the mock accident taking place in the kitchen of our assisted
living studio. Left shows the neighbour, acted by Paul, discovering
Ross, acted by Alex, injured on the floor. (Note the chair on its side.)
Right is the paramedic, role-played by Luc, attending to Ross.
Meanwhile the <a href="https://en.wikipedia.org/wiki/Pepper_(robot)" target="_blank">Pepper robot</a> is moving around somewhat aimlessly.</span></td></tr></tbody></table><p></p><p>Our brilliant Research Fellow Dr Anouk van Maris, who organised the whole setup, persuaded five colleagues from the <a href="https://www.bristolroboticslab.com/" target="_blank">Bristol Robotics Lab</a> to take part. All were male, so Rose became Ross. Only one volunteer, Alex, who played the part of Ross, was fully briefed. The other four role-played brilliantly and, although they were briefed on their roles, they were not told what was going to happen to Ross, or the part the Pepper robot played (or maybe didn't play) in the accident. Two colleagues from Oxford, Lars and Keri, kindly volunteered to act as the accident investigators. Lars and Keri also had no prior knowledge of the circumstances of the accident, and had to rely on (i) inspecting the robot and the scene of the accident, (ii) the data from the robot's EBB, and (iii) testimonies from Ross, the neighbour, the paramedic, the cleaner and the facility manager. 
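The EBB data the investigators relied on is, in essence, a fixed-capacity rolling log of timestamped sensor and status records: like an aircraft flight data recorder, the oldest records are overwritten once the store is full, so the EBB always holds the most recent window of data. As a minimal sketch in Python (the record fields here are illustrative assumptions, not the draft standard's actual data specification):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class EBBRecord:
    """One timestamped snapshot of robot sensor and status data.
    Field names are illustrative only."""
    timestamp: float      # seconds since some epoch
    battery_level: float  # example sensor reading
    velocity: float       # example actuator/odometry reading
    status: str           # e.g. "idle", "fetching", "alarm"

class EthicalBlackBox:
    """Fixed-capacity recorder: once full, each new record
    overwrites the oldest, flight-recorder style."""
    def __init__(self, capacity: int):
        self._log = deque(maxlen=capacity)

    def record(self, rec: EBBRecord) -> None:
        self._log.append(rec)

    def dump(self) -> list:
        """What an accident investigator reads out afterwards."""
        return list(self._log)

# Record more snapshots than the EBB can hold; only the most
# recent `capacity` records survive for the investigation.
ebb = EthicalBlackBox(capacity=3)
for t in range(5):
    ebb.record(EBBRecord(timestamp=t, battery_level=0.9,
                         velocity=0.1, status="idle"))
print([r.timestamp for r in ebb.dump()])  # → [2, 3, 4]
```

The real draft specification (Annex A of the arXiv paper) defines the actual record types and formats; this sketch only shows the rolling-window behaviour that makes the most recent data available after an incident.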
<br /></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_KfLQWoPW6WrAUddj-LfKxrahMvTrzm5n7XaDkLF_H-bCvKDR7GIduJpDa7LOnAjyxIjJ5UHmwDD2lmSdOmbGmZHtfUf03ZGYFSrXpqawu7_JiLTSCtwvyQcMinQBQvk1JaAWcfWC4AySVq2fE_-pdDmjcupRTGtIvS9SzacKq-WtpEIqti8/s4032/RoboTIPS_mock3.jpeg" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="3024" data-original-width="4032" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_KfLQWoPW6WrAUddj-LfKxrahMvTrzm5n7XaDkLF_H-bCvKDR7GIduJpDa7LOnAjyxIjJ5UHmwDD2lmSdOmbGmZHtfUf03ZGYFSrXpqawu7_JiLTSCtwvyQcMinQBQvk1JaAWcfWC4AySVq2fE_-pdDmjcupRTGtIvS9SzacKq-WtpEIqti8/s320/RoboTIPS_mock3.jpeg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Here we see Lars interviewing Medhi, who acted as the house manager,
while Ben, acting as the cleaner, waits to be interviewed. Inside the
studio Keri is interviewing the neighbour and paramedic.</span></td></tr></tbody></table><p></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p>So, what were the findings of our accident investigators? They did very well indeed. Close examination of the EBB data, alongside consideration of the (not always reliable) witness testimony, enabled Lars and Keri to correctly deduce the role that the robot played in the accident. They were also able to make several recommendations on operational changes. But I will not reveal their findings in detail here as we intend to run the same mock accident again soon with a different set of volunteers and - in case any of them should read this blog - I don't want to give the game away!</p><p>Acknowledgements<br /></p><p>Very special thanks to <a href="https://www.robotips.co.uk/team" target="_blank">Dr Anouk van Maris</a>. Also <a href="https://www.robotips.co.uk/team" target="_blank">Dr Pericle Salvini</a>, who worked with Anouk in finalising the detail of the scenario and during the pilot itself. Also, huge thanks to BRL volunteers Dr Alex Smith, Dr Paul Bremner, Dr Luc Wijnen, Mehdi Sobhani and Dr Ben Ward-Cherrier. 
And last but not least a very big thank you to <a href="https://ori.ox.ac.uk/people/lars-kunze/" target="_blank">Dr Lars Kunze, Oxford Robotics Institute </a>and <a href="https://www.robotips.co.uk/team" target="_blank">Keri Grieman</a>, Dept of Computer Science, Oxford.</p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhehVg9-nBbPGMMLOXdRCX7JaMjfF0yCeDW6XpuRbsT4lixXU7Q5-XgIv2XxOUwEJ0C6hbgdJRH7twOAwnG_LYPEbMHkGNMHV5TFc7GCmoydhIoUm9wIw9VJhBhNwfe6JugmG3l_B6-0WBOOoCK7FTRwUYRnP7OXK8ZchMekHatAW7FK5rLTh4/s6720/1N3A2359.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="4480" data-original-width="6720" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhehVg9-nBbPGMMLOXdRCX7JaMjfF0yCeDW6XpuRbsT4lixXU7Q5-XgIv2XxOUwEJ0C6hbgdJRH7twOAwnG_LYPEbMHkGNMHV5TFc7GCmoydhIoUm9wIw9VJhBhNwfe6JugmG3l_B6-0WBOOoCK7FTRwUYRnP7OXK8ZchMekHatAW7FK5rLTh4/s320/1N3A2359.JPG" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">From the left: Pericle, Ben, Lars, Alex, Keri, Medhi, Paul, Anouk, Luc, Lola and me. Pepper is looking nervously at Lola.</span><br /></td></tr></tbody></table><br /><p><br /></p>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-29115309552910324842021-05-27T12:25:00.003+01:002021-05-27T15:00:01.778+01:00Ethics is the new Quality<p>
This morning I took part in the first panel at the <a href="https://www.bsigroup.com/en-GB/our-services/events/webinars/2021/the-digital-world-artificial-intelligence/" target="_blank">BSI conference The Digital World: Artificial Intelligence</a>. The subject of the panel was AI Governance and Ethics. My co-panelist was <a href="https://researchportal.bath.ac.uk/en/persons/emma-carmel" target="_blank">Emma Carmel</a>, and we were expertly chaired by <a href="https://www.linkedin.com/in/katherine-holden-85605a76/" target="_blank">Katherine Holden</a>.</p><p>Emma and I each gave short opening presentations prior to the Q&A. The title of my talk was <i>Why is Ethical Governance in AI so hard?</i> Something I've thought about a lot in recent months.</p><p>Here are the slides exploring that question.<br /></p><p></p><p></p><p></p><p></p><p><iframe allowfullscreen="true" frameborder="0" height="390" mozallowfullscreen="true" src="https://docs.google.com/presentation/d/e/2PACX-1vRl7S5-Xnau50Hrx4MAdGuuL_wKbqFSAx6AT3YZCoOC3-Vxh1BrAuwDt_DoT68-2fFdB5eKKYLDGjTB/embed?start=true&loop=true&delayms=5000" webkitallowfullscreen="true" width="480"></iframe> </p><p>And here is what I said.<br /></p><p>Early in 2018 I wrote a short blog post with the title <a href="https://alanwinfield.blogspot.com/2018/02/ethical-governance-what-is-it-and-whos.html">Ethical Governance: what is it and who's doing it?</a> Good ethical governance is important because in order for people to have confidence in their AI they need to know that it has been developed responsibly. I concluded my piece by asking for examples of good ethical governance. I had several replies, but none nominated AI companies. <br /><br /> So. 
Why is it that, 3 years on, we see some of the largest AI companies on the planet <a href="https://www.theverge.com/2021/4/13/22370158/google-ai-ethics-timnit-gebru-margaret-mitchell-firing-reputation" target="_blank">shooting</a> <a href="https://www.huffpost.com/entry/facebooks-ethical-problem_b_59c923f9e4b0b7022a646c61" target="_blank">themselves</a> in the foot, ethically speaking? I’m not at all sure I can offer an answer but, in the next few minutes, I would like to explore the question: why is ethical governance in AI so hard? </p><p>But from a new perspective. </p><p>Slide 2<br /><br /> In the early 1970s I spent a few months labouring in a machine shop. The shop was chaotic and disorganised. It stank of machine oil and cigarette smoke, and the air was heavy with the coolant spray used to keep the lathe bits cool. It was dirty and dangerous, with piles of metal swarf cluttering the walkways. There seemed to be a minor injury every day. <br /><br /> Skip forward 40 years and machine shops look very different. </p><p>Slide 3<br /><br /> So what happened? Those of you old enough will recall that while British design was world class – think of the British Leyland Mini, or the Jaguar XJ6 – our manufacturing fell far short. "By the mid 1970s British cars were shunned in Europe because of bad workmanship, unreliability, poor delivery dates and difficulties with spares. Japanese car manufacturers had been selling cars here since the mid 60s but it was in the 1970s that they began to make real headway. Japanese cars lacked the style and heritage of the average British car. What they did have was superb build quality and reliability" [1]. <br /><br /> What happened was <a href="https://en.wikipedia.org/wiki/Total_quality_management" target="_blank">Total Quality Management</a>. The order and cleanliness of modern machine shops like this one is a strong reflection of TQM practices. 
</p><p>Slide 4<br /><br /> In the late 1970s manufacturing companies in the UK learned - many the hard way - that ‘quality’ is not something that can be introduced by appointing a quality inspector. Quality is not something that can be hired in. <br /><br /> This word cloud reflects the influence from Japan. The words Japan, Japanese and <a href="https://en.wikipedia.org/wiki/Kaizen" target="_blank">Kaizen</a> – which roughly translates as continuous improvement – appear here. In TQM everyone shares the responsibility for quality. People at all levels of an organization participate in kaizen, from the CEO to assembly line workers and janitorial staff. Importantly suggestions from anyone, no matter who, are valued and taken equally seriously. <br /><br /> Slide 5 <br /><br /> In 2018 my colleague Marina Jirotka and I published a paper on <a href="https://royalsocietypublishing.org/doi/10.1098/rsta.2018.0085" target="_blank">ethical governance in robotics and AI</a>. In that paper we proposed 5 pillars of good ethical governance. The top four are:</p><ul style="text-align: left;"><li> have an ethical code of conduct, </li><li>train everyone on ethics and responsible innovation, </li><li>practice responsible innovation, and </li><li>publish transparency reports. </li></ul><p>The 5th pillar underpins these four and is perhaps the hardest: really <i>believe</i> in ethics. <br /><br /> Now a couple of months ago I looked again at these 5 pillars and realised that they parallel good practice in Total Quality Management: something I became very familiar with when I founded and ran a company in the mid 1980s [2]. <br /><br />Slide 6 </p><p>So, if we replace ethics with quality management, we see a set of key processes which exactly parallel our 5 pillars of good ethical governance, including the underpinning pillar: believe in total quality management. 
<br /><br /> I believe that good ethical governance needs the kind of corporate paradigm shift that was forced on UK manufacturing industry in the 1970s. <br /><br /> Slide 7<br /><br /> In a nutshell I think <b>ethics is the new quality</b>. <br /><br /> Yes, setting up an ethics board or appointing an AI ethics officer can help, but on their own these are not enough. Like Quality, <i>everyone</i> needs to understand and contribute to ethics. Those contributions should be encouraged, valued and acted upon. Nobody should be fired for calling out unethical practices. <br /><br /> Until corporate AI understands this we will, I think, struggle to find companies that practice good ethical governance [3]. </p><p>Quality cannot be ‘inspected in’, and nor can ethics. <br /><br /> Thank you.</p><hr /><p>Notes. <br /></p><p>[1]<span> </span>I'm quoting here from the excellent <a href="https://www.aronline.co.uk/history/british-leyland-grand-illusion-part-one-export-die/" target="_blank">history of British Leyland by Ian Nicholls</a>. </p><p>[2]<span> </span><a href="https://www.apdcomms.com/" target="_blank">My company</a> did a huge amount of work for Motorola and - as a subcontractor - we became certified software suppliers within their six sigma quality management programme.</p><p>[3]<span> It was competitive pressure that forced manufacturing companies in the 1970s to up their game by embracing TQM. Depressingly the biggest AI companies face no such competitive pressures, which is why regulation is both necessary and inevitable.</span><br /></p><p></p>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-6670960477451477452021-05-15T19:52:00.004+01:002021-05-16T18:13:04.890+01:00The Grim Reality of Jobs in Robotics and AI<p>The reality is that AI is in fact generating a large number of jobs already. That is the good news. The bad news is that they are mostly - to put it bluntly - crap jobs. 
</p><p>There are several categories of such jobs. </p><p>At the benign end of the spectrum is the work of annotating images, i.e. looking at images and identifying features then labelling them. This is AI tagging. This work is simple and incredibly dull but important because it generates training data sets for machine learning systems. Those systems might be AIs for autonomous vehicles, with the labels identifying bicycles, traffic lights etc. The jobs are low-skill, low-pay and a huge international industry has grown up to allow the high tech companies to outsource this work to what have been called <a href="https://www.scmp.com/tech/article/2166655/ai-promises-jobs-revolution-first-it-needs-old-fashioned-manual-labour-china" target="_blank">white collar sweatshops</a> in China or developing countries. </p><p>A more skilled version of this kind of job is that of the <a href="https://www.theguardian.com/technology/2019/may/28/a-white-collar-sweatshop-google-assistant-contractors-allege-wage-theft" target="_blank">translators</a> who are required to ‘assist’ natural language translation systems that get stuck on a particular phrase or word.</p><p>And there is another category of such jobs that is positively dangerous: <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona" target="_blank">content moderators</a>. These are again outsourced by companies like Facebook, to contractors who employ people to filter abusive, violent or illegal content. This can mean watching video clips and making a decision on whether the clip is acceptable or not (and apparently the rules are complex), over and over again, all day. Not surprisingly content moderators suffer terrible psychological trauma, and often leave the job burned out after a year or two. Publicly Facebook tells us this is important work, yet content moderators are paid a fraction of what staffers working on the company campus earn. 
So not <i>that</i> important.<br /></p><p>But jobs created by AI and automation can also be physically dangerous. The problem with real robots, in warehouses for instance, is that, like AIs, they are not yet good enough to do everything in the (for the sake of argument) Amazon warehouse. So humans have to do the parts of the workflow that robots cannot yet do and - as we know from press reports - these humans are required to work super fast and behave, in fact, <a href="https://www.independent.co.uk/news/world/americas/treating-us-like-robots-amazon-workers-seek-union-amazon-bernie-sanders-people-birmingham-dave-clark-b1823244.html" target="_blank">as if they are robots</a>. And perhaps the most dehumanizing part of the job for such workers is that, like the content moderators (and for that matter <a href="https://hbr.org/2019/08/what-people-hate-about-being-managed-by-algorithms-according-to-a-study-of-uber-drivers" target="_blank">Uber drivers or Deliveroo riders</a>), their workflows are <a href="https://thenextweb.com/news/amazon-algorithm-keeps-warehouse-workers-working-jeff-bezos" target="_blank">managed by algorithms</a>, not humans.<br /> <br />We roboticists used to justifiably claim that robots would do jobs that are too dull, dirty and dangerous for humans. It is now clear that working as a human assistant to robots and AIs in the 21st century is dull, and physically and/or psychologically dangerous. One of the foundational promises of robotics has been broken. This makes me sad, and very angry.</p>The text above is a lightly edited version of my response to the Parliamentary Office of Science and Technology (POST) request for comments on a draft horizon scanning article. 
The final piece <a href="https://post.parliament.uk/how-technology-is-accelerating-changes-in-the-way-we-work/" target="_blank">How technology is accelerating changes in the way we work</a> was published a few weeks ago.Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com1tag:blogger.com,1999:blog-20402273.post-50601516669210932722021-05-13T19:26:00.007+01:002021-05-15T10:01:46.096+01:00The Energy Cost of Online Living in Lockdown<div>Readers of this blog will know that one of the many ethical issues I worry about is the <a href="https://alanwinfield.blogspot.com/2019/06/energy-and-exploitation-ais-dirty.html" target="_blank">energy cost of AI</a>. As part of the work I'm doing with <a href="https://www.ed.ac.uk/profile/claudia-pagliari" target="_blank">Claudia Pagliari </a>and her <a href="https://www.gov.scot/groups/national-expert-group-on-digital-ethics/" target="_blank">National Expert Group on Digital Ethics</a> for Scotland I've also been looking into the energy costs of what is - for many of us - everyday digital life in lockdown. I don't yet have a complete set of results but what I have found so far is surprising - and not in a good way.<br /><br />So far I've looked into the energy costs of (i) uploading to the cloud, (ii) streaming video (i.e. from iPlayer or Netflix), and (iii) video conferencing.</div><p><b>(i) Uploading to the cloud.</b> This <a href="https://medium.com/stanford-magazine/carbon-and-the-cloud-d6f481b79dfe">2017 article in the Stanford Magazine</a> explains that when you save a 1 Gbyte file – that’s about 1 hour of video - to your laptop’s disk drive the energy cost is 0.000005 kWh, or 5 milliWatt hours. Save the same file to the Cloud and the energy cost is between 3 and 7 kWh. For comparison, a 3 kW electric kettle running continuously for an hour would use 3 kWh. This means that the energy cost of saving to the cloud is about a million times higher than to your local disk drive. 
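The arithmetic behind that "million times" figure is simple enough to check, using the article's numbers:

```python
# Energy to save a 1 GB file, per the Stanford Magazine figures
local_disk_kwh = 0.000005                 # 5 mWh to a laptop disk drive
cloud_kwh_low, cloud_kwh_high = 3, 7      # range for saving to the cloud

ratio_low = cloud_kwh_low / local_disk_kwh
ratio_high = cloud_kwh_high / local_disk_kwh
print(f"cloud/local ratio: {ratio_low:,.0f} to {ratio_high:,.0f} times")
# → cloud/local ratio: 600,000 to 1,400,000 times
```

So "about a million times" is a fair summary of the 600,000 to 1,400,000 range.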
</p><p>The huge difference makes sense when you consider that there is a very complex international network of switches, routers and exchange hubs, plus countless amplifiers maintaining signal strength over long distance transmission lines. All of this consumes energy. Then add a slice of the energy costs of the server farm. </p><p><b>(ii) Streaming video.</b> This article in <a href="https://www.thetimes.co.uk/article/energy-used-in-streaming-one-film-on-netflix-makes-60-cuppas-0hkp690rm" target="_blank">The Times from May 2019</a> makes the claim that streaming a 2 hour HD movie from Netflix incurs the same energy cost as boiling 10 kettles (based on the sustainable computing research of <a href="http://www.research.lancs.ac.uk/portal/en/people/Mike-Hazas" target="_blank">Mike Hazas</a>). To estimate how much energy that equates to we need to guess how full the kettle is. A half full 3 kW kettle will take about 2 minutes to boil, and therefore consume about 0.1 kWh (100 Watt hours). Do that 10 times and you've burned 1 kWh. A DVD player typically consumes 8 Watts - just 16 Watt hours over a 2 hour movie - so streaming costs roughly 60 times more energy.</p><p>Again this makes sense when set against uploading to the cloud, except that here you are downloading from Netflix's servers. A 2 hour HD movie is a lot of data, around 10 GBytes, so 10 times more than the case for (i) above. <br /></p><p><b>(iii) Video conferencing.</b> <a href="https://davidmytton.blog/zoom-video-conferencing-energy-and-emissions/" target="_blank">This post on David Mytton's excellent blog</a>
explores the energy cost of Zoom meetings in some detail. David
estimates that a 1 hour video zoom call with 6 participants generates
between 5 and 15GB of data and that the data transfer consumes between
0.07 – 0.22kWh of electricity. Using our benchmark of kettles boiled
this is pretty modest - at most about a fifth of the energy cost of streaming a movie. </p><p>However this estimate makes two assumptions: first, that you are connected via cable or fixed line -
which here in the UK costs 0.015kWh per GByte. A mobile connection costs about seven times that at
0.1kWh/GB. And second, this estimate measures only the energy costs of data transmission and fails to take account of the
energy costs of <a href="https://datacenterlocations.com/zoom/" target="_blank">Zoom's data centres</a>, which - if (i) and (ii) here are
anything to go by, could be significant, especially since there aren't any in the UK and the default servers are in the US.<br /></p><p>As <a href="https://blog.zoom.us/cloud-based-and-peer-peer-meetings/" target="_blank">this article on the Zoom blog explains,</a> Zoom calls are not peer to peer. The video from each participant is streamed first to a zoom server then broadcast to every other person on the call. As David Mytton says, Zoom don't release information on the overall energy costs of calls. I strongly suspect that if server energy costs were factored in they would be in line with cases (i) and (ii) above. Even so, I feel sure that David Mytton's overall conclusion remains true: that the energy cost of Zoom meetings is significantly lower than all but local or regional travel.</p><p> </p><p>I would like to see networking services like cloud storage, video on demand and video conferencing publish a meaningful energy cost. When we buy packaged food from the supermarket we expect to read the calorific energy value of each item, broken down into fat, salt and so on. It would be great if every online transaction, from sending an email to watching a movie, revealed its energy/carbon cost. Not just for energy geeks like me, but to remind all of us that the Digital Economy is <i>very</i> energy hungry. <br /></p><p>
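David Mytton's transmission-only estimate is just data volume multiplied by a per-GB energy intensity. Using the figures quoted above (and remembering that, as noted, this excludes the data centre's share):

```python
def call_energy_kwh(data_gb: float, kwh_per_gb: float) -> float:
    """Transmission-only energy estimate for a video call."""
    return data_gb * kwh_per_gb

FIXED_LINE = 0.015  # kWh per GB, UK fixed-line figure quoted above
MOBILE = 0.1        # kWh per GB, roughly seven times higher

# A 1 hour, 6 participant Zoom call generates 5-15 GB of data
low = call_energy_kwh(5, FIXED_LINE)
high = call_energy_kwh(15, FIXED_LINE)
print(f"fixed line: {low:.3f} to {high:.3f} kWh")  # matches the 0.07-0.22 kWh range
print(f"mobile: {call_energy_kwh(5, MOBILE):.2f} to {call_energy_kwh(15, MOBILE):.2f} kWh")
```

The same two-line calculation makes the mobile penalty obvious: the identical call over a mobile connection comes out at 0.5 to 1.5 kWh, comparable to streaming a movie.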
</p><hr /><p>
I would welcome any additional data which either adds to the above (especially the energy costs for smaller online transactions like tweets, emails or card payments), or shows that the estimates above are wrong. </p><p>Related blog posts:</p><p><a href="https://alanwinfield.blogspot.com/2021/03/on-sustainble-robotics.html" target="_blank">On Sustainable Robotics</a><br /><a href="https://alanwinfield.blogspot.com/2019/06/energy-and-exploitation-ais-dirty.html" target="_blank">Energy and Exploitation: AIs dirty secrets</a><br /><a href="https://alanwinfield.blogspot.com/2012/04/whats-wrong-with-consumer-electronics.html" target="_blank">What's wrong with Consumer Electronics?</a> <br /> </p><p> </p><p><br /></p>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-33211683166480254032021-03-22T14:15:00.027+00:002021-03-28T00:33:49.813+00:00On Sustainable Robotics<p>The climate emergency brooks no compromise: every human activity or artefact is either part of the solution or it is part of the problem. </p><p>I've worried about the <a href="https://alanwinfield.blogspot.com/2012/04/whats-wrong-with-consumer-electronics.html" target="_blank">sustainability of consumer electronics</a> for some time, and, more recently, the <a href="https://alanwinfield.blogspot.com/2019/06/energy-and-exploitation-ais-dirty.html" target="_blank">shocking energy costs of big AI</a>. But the climate emergency has also caused me to think hard about the sustainability of robots. In <a href="https://arxiv.org/abs/2007.15864" target="_blank">recent</a> <a href="https://arxiv.org/abs/2005.07474" target="_blank">papers</a> we have defined responsible robotics as</p><p>
</p><div class="page" title="Page 9">
<div class="section" style="background-color: #d9d9d9;">
<div class="layoutArea">
<div class="column">
<p><span style="font-size: small;"><span style="font-family: inherit;">... the application of Responsible Innovation in
the design, manufacture, operation, repair and end-of-life recycling of
robots, that seeks the most benefit to society and the least harm to the
environment.
</span></span></p>
</div>
</div>
</div>
</div><p>I will wager that few robotics manufacturers - even the most responsible - pay much attention to repairability and environmental impact. And, I'm ashamed to say, very little robotics research is focused on the development of sustainable robots. A search on Google Scholar throws up just a handful of great papers detailing work on <a href="https://ieeexplore.ieee.org/abstract/document/8937945" target="_blank">upcycled and sustainable robots</a> (2018), <a href="https://ieeexplore.ieee.org/abstract/document/8502629" target="_blank">sustainable robotics for smart cities</a> (2018), and <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/adma.202004413" target="_blank">sustainable soft robots</a> (2020).<br /><br />I was then delighted when, a few weeks ago, my friend and colleague <a href="https://personalpages.manchester.ac.uk/staff/michael.fisher/" target="_blank">Michael Fisher</a> drafted a proposal for a new standard on Sustainable Robotics. The proposal received strong support from the <a href="https://standardsdevelopment.bsigroup.com/committees/50129752" target="_blank">BSI robotics committee</a>. Here is the formal notice requesting comments on Michael's proposal: <a href="https://standardsdevelopment.bsigroup.com/projects/9021-05214#/section" target="_blank">BS XXXX Guide to the Sustainable Design and Application of Robotic Systems</a>. Anyone can comment (although you do need to register first). The deadline is 1 April 2021. </p><p>So what would make a robot sustainable? In my view it would have to be: <br /></p><ol style="text-align: left;"><li>Made from <b>sustainable materials</b>. This means the robot should, as far as possible, use recycled materials (plastics or metals), or biodegradable materials like wood. Any new materials should be ethically sourced. </li><li><b>Low energy.</b> The robot should be designed to use as little energy as possible. 
It should have energy-saving modes. If it is an outdoor robot, it should use solar cells and/or hydrogen fuel cells once these become small enough for mobile robots. Battery-powered robots should always be rechargeable. </li><li><b>Repairable. </b>The robot should be designed for ease of repair, using modular, replaceable parts as much as possible - especially the battery. Additionally, the manufacturer should provide a repair manual so that local workshops could fix most faults. </li><li><b>Recyclable. </b>Robots will eventually come to the end of their useful life, and if they cannot be repaired or recycled we risk them being dumped in landfill. To reduce this risk, the robot should be designed to make it easy to re-use parts, such as electronics and motors, and recycle batteries, metals and plastics.<br /></li></ol><p>These are, for me, the four fundamental requirements, but there are others. The BSI proposal adds the <b>environmental effects of deployment</b> (we would be unlikely to consider a robot designed to spray pesticides truly sustainable, however it is built), and of <b>failure in the field</b>. Also the <b>environmental effects of maintenance</b>: cleaning materials, for instance. The proposal also looks toward sustainable, upcyclable robots as part of a <b>circular economy</b>. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8G726efnixlTQqEJ9CwtdNJys58UT7lcYP17809JN7eAue0IhMOaUGjaX0kqLC5VR2li6Jadu5PeuLRpo0oMQEsQtqIGpC7k6nEOxhv7F5GQSW2FCSxkJgDJL2kPxCDdQLWpnyw/s800/Ecobot+III.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="800" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8G726efnixlTQqEJ9CwtdNJys58UT7lcYP17809JN7eAue0IhMOaUGjaX0kqLC5VR2li6Jadu5PeuLRpo0oMQEsQtqIGpC7k6nEOxhv7F5GQSW2FCSxkJgDJL2kPxCDdQLWpnyw/s320/Ecobot+III.jpg" width="320" /></a></div>This is Ecobot III, developed some years ago by colleagues in the <a href="https://www.bristolroboticslab.com/bristol-bioenergy-centre" target="_blank">Bristol Robotics Lab's Bio-energy group</a>. The robot runs on electricity extracted from biomass by 48 microbial fuel cells (the two concentric brick coloured rings). 
The robot is 90% 3D printed, and the plastic is recyclable.<p></p><p> </p><p> </p><p> </p><p> </p><p> </p>I would love to see, in the near term, not only a new standard on Sustainable Robotics as a guide (and spur) for manufacturers, but 
the emergence of Sustainable Robotics as a thriving new sub-discipline in robotics. Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-6245142224414360122021-03-19T12:11:00.023+00:002021-03-19T23:24:19.722+00:00Back to Robot Coding part 3: testing the EBB<p><a href="https://alanwinfield.blogspot.com/2021/02/back-to-robot-coding-part-2-ethical.html" target="_blank">In part 2 a few weeks ago I outlined</a> a Python implementation of the ethical black box. I described the key data structure - a dictionary which serves as both a specification of the type of robot and the data structure used to deliver live data to the EBB. I also mentioned the other key robot-specific code: </p><div style="background-color: #fefdfa; color: #333333; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif;"><span style="color: red; font-family: courier;"># Get data from the robot and store it in data structure spec</span></div><div style="background-color: #fefdfa; color: #333333; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif;"><span style="font-family: courier;"><span style="color: #ffa400;">def</span> <span style="color: #2b00fe;">getRobotData</span>(spec):</span></div><div style="text-align: left;"><br /></div><div style="text-align: left;">Having reached this point I needed a robot - and a way of communicating with it - so that I could both write <span style="color: #2b00fe; font-family: courier;">getRobotData</span><span style="color: #333333; font-family: courier;">(spec)</span><span style="color: #333333; font-family: inherit;"> </span><span style="font-family: inherit;">and test the EBB. But how to do this? I'm working from home during lockdown, and my e-puck robots are all in the lab. 
Then I remembered that the excellent robot simulator <a href="https://www.coppeliarobotics.com/" target="_blank">V-REP</a> (now called </span>CoppeliaSim<span style="font-family: inherit;">) has a pretty good e-puck model and some nice demo scenes. V-REP also offers </span>multiple<span style="font-family: inherit;"> ways of communicating between simulated robots and external programs (<a href="https://www.coppeliarobotics.com/helpFiles/en/writingCode.htm" target="_blank">see here</a>). One of them - <a href="https://en.wikipedia.org/wiki/Network_socket" target="_blank">TCP/IP sockets</a> - appeals to me as I've written sockets code many times, for both real-world and research applications. Then a stroke of luck: I found that a team at <a href="https://www.ensta-bretagne.fr/zerr/dokuwiki/doku.php?id=vrep:socket-com-with-robot" target="_blank">Ensta-Bretagne had written a simple demo</a> which shows how to connect a Python program to a robot in V-REP, using sockets. So, first I got that demo running and figured out how it works, then used the same approach for a simulated e-puck and the EBB. Here is a video capture of the working demo.</span></div><div style="text-align: left;"><br /><span style="font-family: inherit;"><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='477' height='270' src='https://www.blogger.com/video.g?token=AD6v5dyrmNQfjCX03WxMtd8cITSE7xy5u7K_DlcuEmt3wXAkM5HdRC83e6ndPCUVsKUhk2xuPk8uRPwi0v4' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div><br /><span style="font-family: inherit;">So, what's going on in the demo? The visible simulation views in the V-REP window show an e-puck robot following a black line which is blocked by both a potted plant and an obstacle constructed from 3 cylinders. 
The robot has two behaviours: <a href="https://www.allaboutcircuits.com/projects/how-to-build-a-robot-line-follower/" target="_blank">line following</a> and <a href="https://www.allaboutcircuits.com/projects/how-to-build-a-robot-follow-walls/" target="_blank">wall following</a>. The EBB requests data from the e-puck robot once per second, and you can see those data in the Python shell window. Reading from left to right you will see first the EBB date and time stamp, then robot time <i>botT</i>, then the 3 line following sensors <i>lfSe</i>, followed by the 8 infra red proximity sensors <i>irSe</i>. The final two fields show the joint (i.e. wheel) angles <i>jntA</i>, in degrees, then the motor commands <i>jntD</i>. By watching these values as the robot follows its line and negotiates the two obstacles you can see how the line and infra red sensor values change, resulting in updated motor commands.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="font-family: inherit;"><br /></span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="font-family: inherit;">Here is the code - which is custom written both for this robot and the means of communicating with it - for requesting data from the robot.</span></span></div><div style="text-align: left;"><span style="font-family: inherit;"><span style="font-family: inherit;"><br /></span></span></div><div style="text-align: left;"><span><span><div style="font-family: courier; font-size: small;"><span style="color: red;"># Get data from the robot and store it in spec[]</span></div><div style="font-family: courier; font-size: small;"><span style="color: red;"># while returning one of the following result codes</span></div><div style="font-family: courier; font-size: small;">ROBOT_DATA_OK = 0</div><div style="font-family: courier; font-size: small;">CANNOT_CONNECT = 1</div><div style="font-family: courier; font-size: small;">SOCKET_ERROR = 
2</div><div style="font-family: courier; font-size: small;">BAD_DATA = 3</div><div style="font-family: courier; font-size: small;"><br /></div><div style="font-family: courier; font-size: small;"><span style="color: #ffa400;">def</span> <span style="color: #2b00fe;">getRobotData</span>(spec):</div><div style="font-family: courier; font-size: small;"><br /></div><div style="font-family: courier; font-size: small;"> <span style="color: red;"># This function connects, via TCP/IP to an ePuck robot in V-REP</span></div><div style="font-family: courier; font-size: small;"><span style="color: red;"><br /></span></div><div style="font-family: courier; font-size: small;"><span style="color: red;"> # create a TCP/IP socket and connect it to the simulated robot</span></div><div style="font-family: courier; font-size: small;"> sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">try</span>:</div><div style="font-family: courier; font-size: small;"> sock.connect(server_address_port)</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">except</span>:</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">return</span> CANNOT_CONNECT</div><div style="font-family: courier; font-size: small;"><br /></div><div style="font-family: courier; font-size: small;"> sock.settimeout(0.1) <span style="color: red;"># set connection timeout</span></div><div style="font-family: courier; font-size: small;"> </div><div style="font-family: courier; font-size: small;"> <span style="color: red;"># pack a dummy packet that will provoke data in response</span></div><div style="font-family: courier; font-size: small;"><span style="color: red;"> # this is, in effect, a 'ping' to ask for a data record</span></div><div style="font-family: courier; font-size: small;"> strSend = struct.pack(<span style="color: 
#6aa84f;">'fff'</span>,1.0,1.0,1.0)</div><div style="font-family: courier; font-size: small;"> sock.sendall(strSend) <span style="color: red;"># and send it to V-REP</span></div><div style="font-family: courier; font-size: small;"><br /></div><div style="font-family: courier; font-size: small;"> <span style="color: red;"># wait for data back from V-REP</span></div><div style="font-family: courier; font-size: small;"><span style="color: red;"> # expect a packet with 1 time, 2 joints, 2 motors, <span> </span></span></div><div style="font-family: courier; font-size: small;"><span style="color: red;"><span><span> </span># </span></span><span style="color: red;">3 line sensors and 8 irSensors. A</span><span style="color: red;">ll floats because V-REP</span></div><div style="font-family: courier; font-size: small;"><span style="color: red;"> # total packet size = 16 x 4 = 64 bytes</span></div><div style="font-family: courier; font-size: small;"> data = <span style="color: #6aa84f;">b''</span></div><div style="font-family: courier; font-size: small;"> nch_rx = 64 <span style="color: red;"># expect this many bytes from V-REP </span></div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">try</span>:</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">while</span> len(data) < nch_rx:</div><div style="font-family: courier; font-size: small;"> data += sock.recv(nch_rx)</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">except</span>:</div><div style="font-family: courier; font-size: small;"> sock.close()</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">return</span> SOCKET_ERROR</div><div style="font-family: courier; font-size: small;"><br /></div><div style="font-family: courier; font-size: small;"> <span style="color: red;"># unpack the received data</span></div><div style="font-family: courier; font-size: small;"> <span 
style="color: #ffa400;">if</span> len(data) == nch_rx:</div><div style="font-family: courier; font-size: small;"> <span style="color: red;"># V-REP packs and unpacks in floats only so...</span></div><div style="font-family: courier; font-size: small;"> vrx = struct.unpack('ffffffffffffffff',data)</div><div style="font-family: courier; font-size: small;"><br /></div><div style="font-family: courier; font-size: small;"> <span style="color: red;"># now move data from vrx[] into spec[], while rounding floats</span></div><div style="font-family: courier; font-size: small;"> spec[<span style="color: #6aa84f;">"botTime"</span>] = [ <span style="color: #800180;">round</span>(vrx[0],2) ] </div><div style="font-family: courier; font-size: small;"> spec[<span style="color: #6aa84f;">"jntDemands"</span>] = [ <span style="color: #800180;">round</span>(vrx[1],2), <span style="color: #800180;">round</span>(vrx[2],2) ]</div><div style="font-family: courier; font-size: small;"> spec[<span style="color: #6aa84f;">"jntAngles"</span>] = [ <span style="color: #800180;">round</span>(vrx[3]*180.0/math.pi,2),</div><div style="font-family: courier; font-size: small;"> <span style="color: #800180;">round</span>(vrx[4]*180.0/math.pi,2) ]</div><div style="font-family: courier; font-size: small;"> spec[<span style="color: #6aa84f;">"lfSensors"</span>] = [ <span style="color: #800180;">round</span>(vrx[5],2), </div><div style="font-family: courier; font-size: small;"><span style="color: #800180;"> round</span>(vrx[6],2), <span style="color: #800180;">round</span>(vrx[7],2) ]</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">for</span> i <span style="color: #ffa400;">in</span> <span style="color: #800180;">range</span>(8):</div><div style="font-family: courier; font-size: small;"> spec[<span style="color: #6aa84f;">"irSensors"</span>][i] = <span style="color: #800180;">round</span>(vrx[8+i],3) </div><div style="font-family: courier; font-size: small;"> 
result = ROBOT_DATA_OK</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">else</span>: </div><div style="font-family: courier; font-size: small;"> result = BAD_DATA</div><div style="font-family: courier; font-size: small;"><br /></div><div style="font-family: courier; font-size: small;"> sock.close()</div><div style="font-family: courier; font-size: small;"> <span style="color: #ffa400;">return</span> result</div><div style="font-family: courier; font-size: small;"><br /></div><div><span style="font-family: inherit;">The structure of this function is very simple: first create a socket and connect it, then make a dummy packet and send it to V-REP to request EBB data from the robot. When a data packet arrives, unpack it into spec, then close the socket before returning. The most complex part of the code is data wrangling.</span></div><div><span style="font-family: inherit;"><br /></span></div><div><span style="font-family: inherit;">Would a real EBB collect data in this way? Well, if the EBB is embedded in the robot, probably not. Communication between the robot controller and the EBB might be via ROS messages, or even more directly, by - for instance - allowing the EBB code to access a shared memory space which contains the robot's sensor inputs, command outputs and decisions. But an external EBB, either running on a local server or in the cloud, would most likely use TCP/IP to communicate with the robot, so </span><span style="font-family: courier;"><span style="color: #2b00fe;">getRobotData</span>()</span><span style="font-family: inherit;"> would look very much like the example here. 
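As a quick, self-contained check of the data wrangling above, the 64-byte packet layout can be round-tripped with Python's struct module (the sample values here are invented purely for the example):

```python
import math
import struct

# The packet layout described above: 1 robot time, 2 joint demands,
# 2 joint angles (in radians), 3 line sensors and 8 IR sensors
# -> 16 floats x 4 bytes = 64 bytes. '16f' is equivalent to
# 'ffffffffffffffff'.
sample = [12.5, 1.0, -1.0, math.pi / 2, -math.pi / 2,
          0.1, 0.9, 0.1] + [0.05] * 8
packet = struct.pack('16f', *sample)
assert len(packet) == 64

# Unpack exactly as getRobotData() does, converting the two joint
# angles from radians to degrees
vrx = struct.unpack('16f', packet)
botTime = round(vrx[0], 2)
jntAngles = [round(v * 180.0 / math.pi, 2) for v in vrx[3:5]]
print(botTime, jntAngles)   # 12.5 [90.0, -90.0]
```

Note that 'f' is a 32-bit float, so values lose a little precision in the round trip - which is why the function above rounds everything before storing it in spec.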
</span></div><div><span style="font-family: inherit;"><br /></span></div></span></span></div>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-42129989271256912662021-02-19T18:09:00.008+00:002021-02-20T10:18:55.497+00:00Back to Robot Coding part 2: the ethical black box<p>In the last few days I started some serious coding. The first for 20 years, in fact, since I built <a href="https://www.researchgate.net/publication/228789103_Linux_an_Embedded_Operating_System_for_Mobile_Robots" target="_blank">the software for the BRL LinuxBots</a>. (The coding I did <a href="https://alanwinfield.blogspot.com/2020/08/back-to-robot-coding-part-1-hello-world_10.html" target="_blank">six months ago</a> doesn't really count as I was only writing or modifying small fragments of Python).</p><p>My coding project is to start building an ethical black box (EBB), or to be more accurate, a module that will allow a software EBB to be incorporated into a robot. Conceptually the EBB is very simple: it is a data logger - the robot equivalent of an aircraft Flight Data Recorder, or an automotive Event Data Recorder. Nearly five years ago <a href="https://alanwinfield.blogspot.com/2017/08/the-case-for-ethical-black-box.html" target="_blank">I made the case, with Marina Jirotka</a>, that <i>all</i> robots (and AIs) should be fitted with an EBB as standard. Our argument is very simple: without an EBB, it will be more or less impossible to investigate robot accidents, or near-misses, and in a <a href="https://alanwinfield.blogspot.com/2020/06/robot-accident-investigation.html" target="_blank">recent paper on Robot Accident Investigation</a> we argue that with the increasing use of social robots, accidents are inevitable and will need to be investigated. 
</p><p>Developing and demonstrating the EBB is a foundational part of our 5-year EPSRC-funded project <a href="https://www.robotips.co.uk/" target="_blank">RoboTIPS</a>, so it's great to be doing some hands-on practical research. Something I've not done for a while.</p><p>Here is a block diagram showing the EBB and its relationship with a robot controller.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9czQW2itv5h6YEKhQmFn5v_s8x1R0HDMiyLO0ihA5Uw_1SvwHdLAgxJbhIa6w5GSa6bu9oZmUYKhGciqwtN39n1ywiHGsvIYTlSMcQseMku-OKkS8rm89NJcXhAuwGXcMdNdqcw/s520/Slide1_cut.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="482" data-original-width="520" height="436" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9czQW2itv5h6YEKhQmFn5v_s8x1R0HDMiyLO0ihA5Uw_1SvwHdLAgxJbhIa6w5GSa6bu9oZmUYKhGciqwtN39n1ywiHGsvIYTlSMcQseMku-OKkS8rm89NJcXhAuwGXcMdNdqcw/w470-h436/Slide1_cut.jpg" width="470" /></a></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div>As shown here, the data flows from the robot controller to the EBB are strictly one way. The EBB cannot and must not interfere with the operation of the robot. Coding an EBB for a particular robot would be straightforward, but I have set a tougher goal: a generic EBB module (i.e. library of functions) that would - with some inevitable customisation - apply to any robot. 
And I set myself the additional challenge of coding in Python, making use of skills learned from the excellent <a href="https://www.codecademy.com/learn/learn-python" target="_blank">online Codecademy Python 2 course</a>.</div><div><br /></div><div>There are two elements of the EBB that must be customised for a particular robot. The first is the data structure used to fetch and save the sensor, actuator and decision data in the diagram above. Here is an example from my first stab at an EBB framework, using the Python dictionary structure:</div><div><br /></div><div><div><span style="color: red; font-family: courier;"># This dictionary structure serves as both </span></div><div><span style="color: red; font-family: courier;"># 1 specification of the type of robot, and each data field that</span></div><div><span style="color: red; font-family: courier;"># will be logged for this robot, &</span></div><div><span style="color: red; font-family: courier;"># 2 the data structure we use to deliver live data to the EBB</span></div><div><span style="color: red;"><br /></span></div><div><span style="color: red; font-family: courier;"># for this model let us create a minimal spec for an ePuck robot</span></div><div><span style="font-family: courier;">epuckSpec = {</span></div><div><span style="font-family: courier;"> <span style="color: red;"># the first field *always* identifies the type of robot plus <span> </span><span> </span><span> # </span>version and serial nos</span></span></div><div><span style="font-family: courier;"> <span style="color: #6aa84f;">"robot" </span>:<span style="color: #6aa84f;"> </span>[<span style="color: #6aa84f;">"ePuck", "v1", "SN123456"</span>],</span></div><div><span style="font-family: courier;"> <span style="color: red;"># the remaining fields are data we will log, </span></span></div><div><span style="font-family: courier;"><span style="color: red;"><span> # </span>starting with the motors</span></span></div><div><span style="color: red; 
font-family: courier;"> # ..of which the ePuck has just 2: left and right</span></div><div><span style="font-family: courier;"> <span style="color: #6aa84f;">"motors"</span> : [0,0],</span></div><div><span style="font-family: courier;"> <span style="color: red;"> # then 8 infra red sensors</span></span></div><div><span style="font-family: courier;"> <span style="color: #6aa84f;">"irSensors"</span> : [0,0,0,0,0,0,0,0],</span></div><div><span style="font-family: courier;"> <span style="color: red;"># ..note the ePuck has more sensors: accelerometer, camera etc, </span></span></div><div><span style="font-family: courier;"><span style="color: red;"><span> # </span>but this will do for now</span></span></div><div><span style="color: red; font-family: courier;"> # ePuck battery level</span></div><div><span style="font-family: courier;"> <span style="color: #6aa84f;">"batteryLevel"</span> : [0],</span></div><div><span style="font-family: courier;"> <span style="color: red;"># then 1 decision code - i.e. what the robot is doing now</span></span></div><div><span style="font-family: courier;"><span style="color: red;"><span> # what these codes mean will be specific to both the robot </span></span></span></div><div><span style="font-family: courier;"><span style="color: red;"><span><span> # </span>and the application</span><br /></span></span></div><div><span style="font-family: courier;"> <span style="color: #6aa84f;">"decisionCode"</span> : [0]</span></div><div><span style="font-family: courier;"> }</span></div></div><div><br /></div><div>Whether a dictionary is the best way of doing this I'm not 100% sure, being new to Python (any thoughts from experienced Pythonistas welcome).</div><div><br /></div><div>The idea is that all robot EBBs will need to define a data structure like this. All must contain the first field <span style="color: #6aa84f; font-family: courier; font-size: small;">"robot"</span>, which names the robot's type, its version number and serial number. 
Then the following fields must use keywords from a standard menu, as needed. As shown in this example, each keyword is followed by a list of placeholder values - in which the number of values in the list reflects the specification of the actual robot. The ePuck robot, for instance, has 2 motors and 8 infra-red sensors. </div><div><br /></div><div>The final field in the data structure is "decisionCode". The values stored in this field would be both robot and application specific; for the ePuck robot these might be 1 = 'stop', 2 = 'turn left', 3 = 'turn right' and so on. We could add another value for a parameter, so the robot might decide, for instance, to turn left 40 degrees, so <span style="color: #6aa84f; font-family: courier; font-size: small;">"decisionCode"</span><span style="font-family: courier; font-size: small;"> : [2,40]. </span>We could also add a 'reason' field, which would save the high-level reason for the decision, as in <span style="color: #6aa84f; font-family: courier; font-size: small;">"decisionCode"</span><span style="font-family: courier; font-size: small;"> : [2,40,"avoid obstacle right"] </span>noting that the decision field could be a string, as shown here, or a numeric code.</div><div><br /></div><div>As I hope I have shown here, the design of this data structure and its fields is at the heart of the EBB.</div><div><br /></div><div>The second element of the EBB library that must be written for the particular robot and application is the function which fetches data from the robot:</div><div><br /></div><div><span style="color: red; font-family: courier;"># Get data from the robot and store it in data structure spec</span></div><div><span style="font-family: courier;"><span style="color: #ffa400;">def</span> <span style="color: #2b00fe;">getRobotData</span>(spec):</span></div><div><span> </span><br /></div><div>How this function is implemented will vary hugely between robots and robot applications. 
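For illustration, here is a minimal sketch of the simplest case - an EBB folded into the robot controller, reading the controller's state directly. The Controller class, its attribute names, and the extra controller argument are all invented for this sketch; a real controller would expose different state, but the spec dictionary being filled in is the same one defined above.

```python
# Hypothetical sketch: getRobotData() for an EBB folded into the
# robot controller, with direct access to the controller's state.
ROBOT_DATA_OK = 0

class Controller:
    """Stand-in for a robot controller exposing its live state."""
    def __init__(self):
        self.motor_speeds = [0.5, -0.5]   # left, right
        self.ir_readings = [0.1] * 8      # 8 infra-red sensors
        self.battery = 3.7                # battery level
        self.decision = [2]               # e.g. 2 = 'turn left'

def getRobotData(spec, controller):
    # copy the controller's current state into the spec dictionary
    spec["motors"] = list(controller.motor_speeds)
    spec["irSensors"] = list(controller.ir_readings)
    spec["batteryLevel"] = [controller.battery]
    spec["decisionCode"] = list(controller.decision)
    return ROBOT_DATA_OK

epuckSpec = {"robot": ["ePuck", "v1", "SN123456"],
             "motors": [0, 0], "irSensors": [0] * 8,
             "batteryLevel": [0], "decisionCode": [0]}
assert getRobotData(epuckSpec, Controller()) == ROBOT_DATA_OK
```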
For our Linux-enhanced <a href="https://alanwinfield.blogspot.com/2015/02/like-doing-brain-surgery-on-robots.html" target="_blank">ePucks with WiFi</a> connections this is likely to be via a TCP/IP client-server, with the server running on the robot, sending data following a request from the client <span style="color: #2b00fe; font-family: courier; font-size: small;">getRobotData</span><span style="font-family: courier; font-size: small;">(ePuckspec)</span>. For simpler setups in which the EBB module is folded into the robot controller, accessing the required data within <span style="color: #2b00fe; font-family: courier; font-size: small;">getRobotData</span><span style="font-family: courier; font-size: small;">()</span> should be very straightforward.</div><div><br /></div><div>The generic part of the EBB module will define the class EBB, with methods for both initialising the EBB and saving a new data record to the EBB. I will cover that in another blog post.</div><div><br /></div><div>Before closing let me add that it is our intention to publish the specification of the EBB, together with the model EBB code, once it has been fully tested, as open source.</div><div><br /></div><div><div>Any comments or feedback would be much appreciated.</div></div>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com2tag:blogger.com,1999:blog-20402273.post-173689553843808872021-01-19T11:21:00.394+00:002023-11-01T17:32:08.811+00:00New IET online course on Robot Ethics goes live<p style="text-align: left;">Big day today. My online course on Robot Ethics has been launched on the <a href="https://www.theiet.org/" target="_blank">Institution of Engineering and Technology (IET)</a> <a href="https://academy.theiet.org/robot-ethics" target="_blank">Academy web pages</a>. The aim of the course is to give a comprehensive introduction to robot ethics, responsible robotics and machine ethics. 
As well as ethical principles the course introduces powerful practical tools: <i>Ethically Aligned Design</i> (also called values-driven design); emerging ethical standards, including BS8611 and its powerful method of <i>Ethical Risk Assessment</i>, and IEEE 7001 on <i>Transparency</i>; and the equally essential <i>Ethical Governance</i>, while showing how ethics, standards and regulation are linked. The course took the best part of 18 months to write, not least because of the strict formatting and style required for IET online courses. For academics, writing courses normally means just creating slides, but - to my surprise - IET online courses are narrated by professional voice actors, so I had to write the narration for each slide. Plus <i>a lot</i> of tests to help students self-test their understanding.</p><p style="text-align: left;">The course is organized as 10 one-hour units, each with several modules, and tests at the end of each module and at the end of the unit. Here is the outline syllabus.<br /></p><p style="text-align: left;"><b>Unit 1: <span>What is Robot Ethics?</span></b></p><p style="text-align: left;">This unit defines what we mean by an intelligent robot, robot ethics and ethical robots.</p><p style="margin-left: 40px; text-align: left;">Module 1: Defines what we mean by a robot and robot autonomy, while explaining the difference between first wave (i.e. industrial) robots and second wave (i.e. social) robots<br />Module 2: Defines intelligence and clarifies the distinction between robotics and Artificial Intelligence (AI)<br />Module 3: Robot/AI ethics: ethics for humans and responsible robotics<br />Module 4: Machine ethics: ethics for robots<br /></p><p style="text-align: left;"><b><span>Unit 2: </span><span class="product-name">Inspired by Asimov – The EPSRC Principles of Robotics</span></b></p><p style="text-align: left;">This unit focuses on the influential EPSRC Principles of Robotics. 
</p><p style="margin-left: 40px; text-align: left;">Module 1: Asimov’s <i>Three Laws of Robotics</i>, their limitations, and their contribution to robot ethics<br />Module 2: Why robot ethics are so important today<br />Module 3: The <i>EPSRC Principles of Robotics</i><br />Module 4: Responsible Robotics<br /></p><p style="text-align: left;"><span class="product-name"><b>Unit 3: An Overview of Ethical Frameworks for AI</b></span></p><p style="text-align: left;">This unit looks at some of the more recent ethical frameworks proposed for robotics and AI.</p><div style="margin-left: 40px; text-align: left;">Module 1: A Proliferation of Principles. A helicopter view of all of the ethical frameworks for robotics and artificial intelligence published since Asimov’s laws of robotics. Including what an ethical framework is and what it does and does not offer.<br />Module 2: <i>The Future of Life Institute Asilomar principles for beneficial AI</i><br />Module 3: <i>The UNI Global Union Top 10 Principles for Ethical AI</i><i> </i><br />Module 4: <i>The European Commission’s High Level Expert Group on AI Ethics Guidelines for Trustworthy AI </i></div><div style="margin-left: 40px; text-align: left;">Module 5: <i>The OECD Principles of AI</i></div><div style="margin-left: 40px; text-align: left;">Module 6: Summary: comparing ethical frameworks and their limitations<i> </i><br /></div><p style="text-align: left;"><span><b>Unit 4: Ethical Standards in Robotics</b></span></p><p style="text-align: left;"><span>This unit </span>explores emerging ethical standards.</p><div style="margin-left: 40px; text-align: left;">Module 1: Standards, an introduction<br />Module 2: An Ethical Standard - British Standard BS8611:2016 <i>A Guide to the Ethical Design of Robots and Robotic Systems</i><br />Module 3: Ethical Risk Assessment based on BS8611, including a Case Study<br />Module 4: Standards in Practice</div><p style="text-align: left;"><b>Unit 5: Ethically Aligned Design in Robotics and 
AI</b></p><p style="text-align: left;">This unit introduces the IEEE global ethics initiative and ethically aligned design.</p><div style="margin-left: 40px; text-align: left;">Module 1: The IEEE Global Ethics Initiative<br />Module 2: The IEEE General Principles<br />Module 3: <i>Ethically Aligned Design</i><br />Module 4: The P70XX Human Standards</div><p style="text-align: left;"><b>Unit 6: Transparency and Explainability in Robotics and AI</b></p><p style="text-align: left;"> This Unit explores transparency, and the related topic of accident investigation.</p><div style="margin-left: 40px; text-align: left;">Module 1: Introduction to Transparency and Explainability<br />Module 2: The IEEE P7001 Standard on <i>Transparency in Autonomous Systems</i><br />Module 3: Robot Accident Investigation, an introduction</div><p style="text-align: left;"><b>Unit 7: Ethical Governance for Robotics</b></p><p style="text-align: left;">This unit focuses on ethical governance for robotics.</p><div style="margin-left: 40px; text-align: left;">Module 1: How do we trust our technology?<br />Module 2: A Roboethics Roadmap, linking ethics, standards and regulation<br />Module 3: Robotics Law and Regulation, with examples from Drones, Autonomous Vehicles and Assisted Living robots<br />Module 4: A framework for ethical governance</div><p style="text-align: left;"><b>Unit 8: Machine Ethics 1 – An Asimovian Ethical Robot</b><br /></p><p style="text-align: left;">In this unit, we will explore machine ethics, and ask the question: is it possible to build a moral machine?</p><div style="margin-left: 40px; text-align: left;">Module 1: A thought experiment: is it possible to build a moral machine?<br />Module 2: The Consequence Engine<br />Module 3: Experimental trials of an Asimovian ethical robot</div><p style="text-align: left;"><b>Unit 9: Machine Ethics 2 – Approaches, Risks and Governance</b></p><div style="margin-left: 40px; text-align: left;">Module 1: Categories of ethical agency<br 
/>Module 2: Approaches to building ethical robots<br />Module 3: The risks of ethical robots<br />Module 4: The governance of ethical machines<br /></div><p style="text-align: left;"><b>Unit 10: Final Assessment</b></p><p style="text-align: left;"> <br /></p>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-42602942635675250452020-12-30T19:16:00.008+00:002020-12-30T19:50:05.898+00:00#heavencalling<p><span> </span><i>Now</i> it’s personal. I’ve just had a phone call from my mom. Fine, you might think, but it’s sure as hell not fine. She’s been dead 5 years.</p><p><span> </span>So, I’m a member of the LAPD CSI assigned to cyber crime. The case that landed on my desk a couple of weeks ago started as a complaint that folk were getting phone calls from dead relatives. At first we thought it was a joke. But after a couple of Hollywood celebs and the mayor of Pasadena started getting calls too – it got real serious real fast. The mayor called my chief: he was furious that someone was impersonating his eldest daughter: she died a couple of years ago in a freak surfing accident. It was only when the chief explained that it wasn’t a person that had called him, but an AI programmed to impersonate his daughter, that he calmed down a bit. Just a bit mind: according to my boss what he said went along the lines of “find out who these sons-of-bitches programmers are, I’m gonna sue the hell out of them”.</p><p><span> </span>Deepfakes have been around for 5 years or so. Mostly videos doctored with some famous actor’s face substituted for a slightly less famous face. Tom Cruise as spiderman – that kinda thing; mostly harmless. After the mayor’s call the chief called a departmental meeting. She explained that – according to the DA – impersonation is not a misdemeanour: “Hell if it was that would make the whole entertainment industry a criminal enterprise.” That caused a cynical chuckle across the room. 
She went on, “nor is creating a fake AI based on a real person.” “Of course people are upset and angry – who wouldn’t be when they get a call from someone dear to them who also happens to be deceased – but upsetting people isn’t a crime.” </p><p> She looked at me. “Frank, what have you got so far?” “Not much chief”, I replied, “each call seems to be coming from a different number – my guess is they’re one-time numbers”. “Any idea who’s behind this?” she asked. “No – but since no-one is demanding money – my guess would be AI genius college kids doing this for a joke, or maybe their dissertations.” “Of course” I added, “they would need to be scraping the personal data from somewhere to construct the fakes, but so much hacked data is around on the dark web that wouldn’t be too hard.” “Ok good”, she said, “start talking to some college professors”. </p><p><span> </span>Two days later I had the call.</p><p><span> </span>“Hello Frankie, it’s mom.”</p><p><span> </span>“Mom? But you’ve been gone 5 years.”</p><p><span> </span>“I know son. I just wanted to call to tell you I love you.”</p><p><span> </span>“But. Goddam. You sound just like Mom.”</p><p><span> </span>“Aren’t you pleased to hear from me Frankie?”</p><p><span> </span>“Yes... No. This isn’t right.”</p><p><span> </span>“How is Josie doing? And Taylor – she must’ve started college by now?”</p><p><span> </span>“Yes Josie is good, and Taylor’s ... no dammit I’m not gonna talk to a computer program.”</p><p><span> </span>“Aw, don’t be mad with your Mom.”</p><p><span> </span>At that point I hung up. But Jesus it was hard. I knew it wasn’t my Mom but the temptation to stay on the call just to hear her voice again was just overwhelming. It took me awhile to calm down. It’s only in the last year that I started to get over her passing. That call brought it all back: the pain, the anger she had been taken too soon. We were real close.</p><p><span> </span>This fake was good – they had my Mom’s voice down to a tee – but how? 
Mom was a high school teacher not a celebrity. She wasn’t big on social media. Sure she used Facebook – who doesn’t – but that doesn’t record voice. Just about everything else mind – that’s where they would have gotten family names and relationships. Then I remembered that we bought her one of those smart speakers a year or so before she passed away. Arthritis made it hard for her to move around so we put in the speaker so she could make voice calls, listen to music or turn on the TV just by asking. She loved it. </p><p><span> </span>Then the story broke in the press. Twitter was full of it: #heavencalling and #deadphone were just two of the hashtags; none of them even remotely funny to me. The pundits were all over the newscasts: AI experts gleefully explaining the technology while expressing a dishonest kind of smirking dismay “...of course no AI professional could possibly condone this kind of misuse.” Obviously they hadn’t had the call. </p><p><span> </span>Of course the news channels also interviewed folk who had been called. Some were outraged, but more were very happy that they had been ‘chosen’ for a call from heaven. One lady was so pleased to have had a call from her late husband: “It was so wonderful to hear from Jimmy – to talk about old times and know that he’s happy in heaven”. Well I guess I shouldn’t have been surprised. The church pastors they interviewed were indignant. “The devil’s work” was the general tone. One even described it as ‘artificial witchcraft’. They had good reason to be unhappy, seeing as they have exclusive rights to the intercession business.</p><p><span> </span>A day later I had an email back from one of the AI Profs at Caltech. I called him straight away and he told me he had a pretty good idea who might be behind this “deeply unethical AI” as he put it. A couple of star students had been working on what one of them had told him was a ‘really cool NLP project’. NLP – that’s natural language processing. 
He told me that he had already disabled their accounts on the Caltech supercomputer. This kind of real-time conversational AI uses huge amounts of computing power.</p><p><span> </span>A few hours later the chief and I are in the Dean’s office with the Professor and his two students. In the students I saw a younger me: bright but with that naïve innocence that blesses only those for whom nothing bad has ever happened. </p><p><span> </span>My chief explained to these two young men that, since no crime had been committed, we would not be pressing charges. But, she stressed, “What you did was not without consequence. The mayor and his wife were deeply distressed to receive a call from someone they thought was their deceased daughter. And my colleague here was mad as hell when he had a call from his late Mom.” From the look in their eyes they obviously had no idea they had set up a heaven call to a cop. </p><p><span> </span>Then the Dean gave them one hell of a dressing down. At one point one of the students tried to interject that some of the recipients of the heaven calls had been very happy to be called, at which point the Prof stopped him immediately. “No. Regardless of how people reacted, your AI was a deception. And an egregious one too, as it exploited the vulnerability of grief.” Then he added, “Something that in time you too will experience.” The Dean told them that they should count themselves very lucky that the school had decided not to expel them, on condition that they personally apologise to everyone who had received a heaven call, starting right now with Officer Frank Aaronavitch here. After a very gracious apology, which I accepted, the Prof added that he would be requiring them to submit year papers on the ethics of their heaven calling AI.</p><p><span> </span>Six months have passed. Heaven calling blew over pretty quickly. Then I noticed a piece in the tech press about a new start up – Heavenly AI – looking for VC. 
Sure enough the two founders are the same students we saw in the Dean’s office at Caltech. The article claims the company has an ethics-driven business model. Great, I thought. Then cynical me kicked in; give it six months and these guys are gonna get bought out by Facebook. Heaven forbid.</p>
<hr />
Previous stories: <div><br /></div><div><a href="https://alanwinfield.blogspot.com/2016/12/the-gift.html" target="_blank">The Gift</a> (2016) </div><div><a href="https://alanwinfield.blogspot.com/2020/12/word-perfect.html" target="_blank">Word Perfect</a> (2020) </div><div><a href="https://alanwinfield.blogspot.com/2020/12/she-had-chosen-well.html" target="_blank">She had chosen well</a> (2020)
</div>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-55652667894513821672020-12-28T18:53:00.006+00:002020-12-30T19:22:20.085+00:00She had chosen well<p>For this story, written as the <a href="https://alanwinfield.blogspot.com/2020/12/word-perfect.html" target="_blank">second exercise in my Writing Short Stories course back in June</a>, I attempted a story without dialogue. I love dialogue so expected to find this difficult, which it was. In the story I try to imagine what it might have been like to experience an extinction event, in an effort to capture a sense of being in the liminal state from a limited first-person (or rather animal) perspective.</p><hr /><span> <span> </span></span><b>She had chosen well. </b><div><br /></div><div><span> </span>The burrow she shared with her litter was lodged within the vaulted foundations of a mighty tree. The tree had taken root in rocky soil long before her time, its vascular organs splitting the rock enough to allow her to excavate tunnels and chambers three seasons ago. <p></p><p><span> </span>It had been a good spring. Her pups had almost weaned and were growing fat on insects and berries. Even the reckling was looking healthy. He was a survivor, escaping the quick-feathered hunters with sharp eyes and sharper teeth that had taken two of her litter a few moons ago.</p><p><span> </span>In her world there was much to fear. Death came in many ways: quick from the sharp-teeth or sky-claws; slow from starvation or thirst (the nearest spring was a perilous journey - although she had learned from her mother how to harvest the prickly watery green leaves which grew close to the burrow). 
But this hillside had one advantage; it was too high and steep for the long-necked ground-shakers that crashed and bellowed through the valley below from time to time.</p><p><span> </span>The moons passed and, as the nights started to lengthen, she began to harvest the nuts, green leaves and tubers, storing these in dry clean chambers close to the comfortable living nest. Something – perhaps the unusual bounty of the season – made her collect more this summer.</p><p><span> </span>It was a warm dusk. After a good night’s forage she and her pups had spent the day sleeping full-bellied in the cool of the burrow. Her pups were now almost full grown and the biggest and boldest were restless to leave. Two, a brother and sister, moved to the burrow entrance with a purpose that she knew from her own time so, with a touch of their noses, mother and eldest made their farewells. </p><p><span> </span>Then, just a few moments after she had returned to the nest chamber, the ground shook. But this was not the rhythmic shaking of the long-necks in the valley. Nor was it the noisy anger of the fire mountain that turned their nights red from time to time. This was different: a silent deep tremor that felt as if it was coming from the belly of the earth. The tremor grew to a crescendo. Terrified the small family nest-huddled as the tree roots groaned while soil and stones rained upon them. Then it was still.</p><p><span> </span>They waited. She lifted her head and sensed around. The nest air was full of dust. She felt the silence then realised that the breeze-scent of outside was gone. She knew something was wrong, ran to the entrance tunnel and found it blocked with stones and earth. Fear rising she started to dig. She was a good digger with powerful front claws. She dug and dug until she started to feel weak, then – rest-pausing – she heard a scraping sound. A few moments later the soil and stones ahead broke apart and there was her eldest daughter. 
With joy and relief they touched noses, but she sensed a sadness that told her that her eldest son was gone. </p><p><span> </span>Together mother and daughter cleared the spoil from the entrance tunnel, then – followed by the rest of the pups – they emerged, cautiously, into the night. There was no moon. Instead the sky clouds were lit high with lurid reds, greens and purples, yet – she noticed – the fire mountain was silent. The night was quiet at first although some familiar sounds slowly returned: the bellows of the long-necks in the valley below and skyward the distant cries of the sky-claws. The family fed and foraged and still fearful returned to the nest before dawn.</p><p><span> </span>After sleeping most of the day the nest family was awakened by a long roar of thunder that seemed to roll in from afar and rush over them before receding into the distance. She had heard thunder before but never like this. As it passed it hit their tree – although not with the long shake of the sleep-day before – but with a great cracking crash that was the last thing they heard for awhile. She felt an ear-pain she had never before experienced, and so – it seemed – had her pups. Dazed, deafened and frightened they did not venture out of the burrow that night.</p><p><span> </span>Restless and hungry the family stirred again before dusk the following day. She was relieved that the ear-pain had gone and her sound sense restored. Cautiously they emerged from the burrow entrance to find that their small exit platform was now a tangle of branch and leaf. Luckily it was not dense, and they quickly made a path through to the open hillside. What they saw by the dull grey light of dusk was a world changed. 
No tree was left standing, including their home tree – indeed it was that tree that now provided their exit canopy.</p><p><span> </span>They sensed something moving nearby, then saw one of the sky-claws fallen onto a prickle leaf bush; it was broken winged and near death, but still able to fix them with its sharp eye. They had never before seen one of these creatures close up and – even in its death throes – their terror of its kind was undimmed, so they quickly retreated into the exit canopy and nervously fed on insects and home tree nuts.</p><p><span> </span>The next two nights, alerted by the bad tempered chirruping of sharp-teeth feeding on the sky-claw, they did not stray outside the home thicket. She noticed that the nights were cold: too cold for this early in the autumn. A few nights later the sky-claw was joined in death by the sharp-teeth, and the nest family were able to feast on the insects drawn to the carrion. But their forages were short as it was too cold to stay out for more than a few mouthfuls before returning to the warm of the nest. A few nights later even the carrion insects were gone, as the corpses had frozen. 
</p><p><span> With a deep sense of unease the</span> nest family settled for their long winter sleep.</p><p><br /></p><p>© Alan Winfield 2020</p><hr /><p>Previous stories:</p><p><a href="https://alanwinfield.blogspot.com/2016/12/the-gift.html" target="_blank">The Gift</a> (2016)</p><p><a href="https://alanwinfield.blogspot.com/2020/12/word-perfect.html" target="_blank">Word Perfect</a> (2020)</p><p></p></div>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-43177235376076693292020-12-26T14:53:00.005+00:002020-12-28T19:14:12.827+00:00Word Perfect<p>Back in June I signed up for an online course on <a href="https://www.bishopsgate.org.uk/whats-on/activity/writing-short-stories-the-next-step-1" target="_blank">Writing Short Stories (the next steps)</a>, run by the <a href="https://www.bishopsgate.org.uk/whats-on" target="_blank">Bishopsgate Institute</a>. The course was excellent. There were six weekly zoom sessions of about 2 hours each, with 8 students led by <a href="https://www.bishopsgate.org.uk/profile/barbara-marsh" target="_blank">Barbara Marsh</a>, who is a wonderful tutor. I can honestly say that I enjoyed every minute. There was a fair bit of homework including, of course, writing - and a major segment of each class was critiquing each others' work. </p><p>Here below is the first of the three new stories I drafted for the course.</p>
<hr />
<p><b>Word Perfect</b></p><p><span> </span>“Who the fuck are you?”</p><p><span> </span>“Don’t you recognise me? I’m you.”</p><p><span> </span>“Oh fuck off. I’ve never seen you before in my life.”</p><p><span> </span>“Yes you have – every time you look in the mirror.”</p><p><span> </span>I wasn’t listening of course. I never did then. I was foul mouthed, arrogant, and full of myself (full of shit actually). I was a first year student: physics at Oxford. Won a scholarship for genius working class kids. Something I never failed to tell everyone.</p><p><span> </span>“You’re full of shit. What do you want?”</p><p><span> </span>He paused a moment and looked me in the eye. “I want to talk, you fucker.”</p><p><span> </span><i>Now</i> this old guy had my attention. The only person I knew who says ‘you fucker’ was me. It was (and still is) something I only say to close friends: a kind of insult of endearment. </p><p><span> </span>I was speechless (which didn’t happen often). For the first time I looked hard at him. Same height and build as me. Clean-shaven and almost bald: not bad looking. Fuck, I thought, he could be my dad. But he died four years ago.</p><p><span> </span>He read my mind. “No John, I’m not a ghost. I’m you age 60.”</p><p><span> </span>I may have been a shit, but I was a quick learner. “So, you – future me – have invented time travel? Whoa – that’s so cool. But wait, should you be here – aren’t you changing the future or something?”</p><p><span> </span>“Yes there are risks, but the risks of me not having this conversation are far greater”. Older me then took something out of his pocket – a kind of glass tablet – he prodded it with his finger and looked at the display. “Look – I haven’t got long – the energy costs of time travel are colossal. Another 10 minutes”.</p><p><span> </span>He then sat down and talked fast. I listened hard. I asked him if I could take notes. 
“No please don’t – what I’m about to tell you is dangerous – it’s super important no one knows anything about this conversation”. (‘Super important’ – that’s another thing I say.)</p><p><span> </span>Older me explained that yes, he had invented a time machine. It had made him famous. Protocols (rules – he clarified – 20-year-old me didn’t know about protocols) had been established. Following international ethical approval the time machine had been used three times to travel way back in time to settle deep scientific questions about evolution. </p><p><span> </span>“Whoa – did you see the dinosaurs?” No, he said. “Only one person can travel and I’m not a palaeontologist”. But, he said, “one trip was to the Cambrian – far more interesting and controversial than the Jurassic or the Cretaceous”.</p><p><span> </span>“Now”, older me said, “listen carefully”. “We’re in great danger – some very rich and powerful men are doing everything they can to build another machine.”</p><p><span> </span>“Why? What do they want to do?”</p><p><span> </span>“They intend to change history. You see they are white supremacists. They want to go back in time and stop the abolition of slavery. They’re not just racists, they also hate women, so they also want to go back and make sure women – and commoners like us – never get the vote. In short they want to turn the political clock back to the 18th century”.</p><p><span> </span>“Shit”, I said, “that’s really fucked up.”</p><p><span> </span>“Yes it is. And that’s why you must not invent the time machine.” Older me said those last words very slowly. I’ve never heard anyone then or since be any more serious than he was.</p><p><span> </span>Then, anticipating precisely what I was about to say: “John, I know you’re a determinist – that you don’t believe in free will. But you will change your mind. 
Free will is real and the choices you make have consequences.” </p><p><span> </span>“The burden you – we – bear is that those choices are perhaps the most important in the history of humanity.”</p><p><span> </span>I joked: “So, I guess if I make the wrong choice we’ll be having this conversation again?” </p><p><span> </span>“Yes, exactly”, he said – still deadly serious, “in fact this might not be the first time.” As if I wasn’t already freaked out enough by this whole conversation – that took me to the freaked out equivalent of Defcon 1.</p><p><span> </span>Then his face brightened up. “Goodbye, you fucker” he said, and vanished.</p><p><br /></p><p><span> </span>I write this age 60, forty years to the day that I met future me. I have thought about that conversation every day. Often doubting it happened at all. I had so many questions – enough to sustain a career.</p><p><span> </span>Yes I did a PhD in theoretical physics and won a bunch of prizes. My work was on the structure of space-time, and rumour has it I’ve been nominated for a Nobel. I did sketch out one paper setting out practical steps toward time travel but deleted the paper before anyone else even saw it. </p><p><span> </span>The world is still fucked up of course, but things could have been so much worse if I had not taken older me’s advice. </p><p><span> </span>As to those questions – it didn’t take me long to figure out that older me vanished as soon as he convinced me to take his advice: at that moment the time machine that brought him back to meet me no longer existed. But I will never know how many times he failed to persuade me. My guess is that each time we had that conversation older me tried out a different script – until it was word perfect. The bit about “I haven’t got long ... only 10 minutes” was bullshit. 
After god knows how many repeats the fucker knew exactly when to say goodbye.</p><p><br /></p><p>© Alan Winfield 2020</p><div><hr /></div><p>Previous stories:</p><p><a href="https://alanwinfield.blogspot.com/2016/12/the-gift.html" target="_blank">The Gift</a> (2016)</p>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-40162992577764400442020-10-25T16:26:00.020+00:002021-05-28T13:07:27.334+01:00RoboTED: a case study in Ethical Risk AssessmentA few weeks ago I gave a short paper* at the excellent <a href="https://clawar.org/icres2020/" target="_blank">International Conference on Robot Ethics and Standards (ICRES 2020)</a>, outlining a case study in Ethical Risk Assessment - <a href="https://arxiv.org/abs/2007.15864v2" target="_blank">see our paper here</a>. Our chosen case study is a robot teddy bear, inspired by one of my favourite movie robots: Teddy, in <a href="https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence" target="_blank">A.I. Artificial Intelligence</a>.<br /><div><br /><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFeH9FkWxhJ3YBEwmBqX2Nxni8DwuZ44D8_binGi4Feui_Cu8bSw2ld4N4oKS8IZvdRBVtWL6CWA0D4sG3wlNmE5ouoimPEwVHc8oNdszvUcNAj59PwxWtxrZOzHdSTnNV_mHmOA/s375/Teddy.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="375" data-original-width="251" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFeH9FkWxhJ3YBEwmBqX2Nxni8DwuZ44D8_binGi4Feui_Cu8bSw2ld4N4oKS8IZvdRBVtWL6CWA0D4sG3wlNmE5ouoimPEwVHc8oNdszvUcNAj59PwxWtxrZOzHdSTnNV_mHmOA/s320/Teddy.jpg" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div style="text-align: left;">
Although Ethical Risk Assessment (ERA) is not new - it is after all what research ethics committees do - the idea of extending traditional risk assessment, as practised by safety engineers, to cover ethical risks is new. ERA is, I believe, one of the most powerful tools available to the responsible roboticist, and happily we already have a published standard setting out a guideline on ERA for robotics in BS 8611, published in 2016.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Before looking at the ERA, we need to summarise the specification of our fictional robot teddy bear: RoboTed. First, RoboTed is based on the following technology:</div><p></p><ul style="text-align: left;"><li>RoboTed is an Internet (WiFi) connected device, </li><li>RoboTed has cloud-based speech recognition and conversational AI (chatbot) and local speech synthesis,</li><li>RoboTed’s eyes are functional cameras allowing RoboTed to recognise faces,</li><li>RoboTed has motorised arms and legs to provide it with limited baby-like movement and locomotion.</li></ul>And second, RoboTed is designed to:<p></p><p></p><ul style="text-align: left;"><li>Recognise its owner, learning their face and name and turning its face toward the child.</li><li>Respond to physical play such as hugs and tickles.</li><li>Tell stories, while allowing a child to interrupt the story to ask questions or ask for sections to be repeated.</li><li>Sing songs, while encouraging the child to sing along and learn the song.</li><li>Act as a child minder, allowing parents to remotely listen, watch and speak via RoboTed.</li></ul><div>The tables below summarise the ERA of RoboTED for (1) psychological, (2) privacy & transparency and (3) environmental risks. Each table has 4 columns, for the hazard, risk, level of risk (high, medium or low) and actions to mitigate the risk. 
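</div><div>Expressed in code, a single row of such a table might look like this. To be clear, this is just a sketch: the dataclass, and the example hazard and mitigation, are invented for illustration and are not taken from BS8611 or from our paper.</div><div><br /></div>

```python
# A sketch (not from the paper or BS8611) of one ERA table row,
# using the four columns described above. The example hazard and
# mitigation are invented for illustration.
from dataclasses import dataclass

@dataclass
class EthicalRiskEntry:
    hazard: str      # a potential source of ethical harm
    risk: str        # the ethical harm that could result
    level: str       # "high", "medium" or "low"
    mitigation: str  # actions to mitigate the risk

psychological = [
    EthicalRiskEntry(
        hazard="Child becomes over-attached to RoboTed",
        risk="Distress if the robot is withdrawn or fails",
        level="medium",
        mitigation="Limit continuous interaction time; advise parents",
    ),
]

# A structured table can then be queried, e.g. for all high-level risks:
high = [entry for entry in psychological if entry.level == "high"]
print(len(high))  # prints 0
```

<div><br /></div><div>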
BS8611 defines an <b>ethical risk</b> as the “probability of ethical harm occurring from the frequency and severity of exposure to a hazard”; an <b>ethical hazard</b> as “a potential source of ethical harm”, and an <b>ethical harm</b> as “anything likely to compromise psychological and/or societal and environmental well-being".</div><div><br /></div><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6q8izF3NWHQ0lrEYMpWR0nN1eHXvvT6SHjfvwYAoFu_fojfsayv8KN8B9b86qK1JsKpSpU9A6X3lkLlky0MTyIxCTIbYoSQIhIKQqFT9TF2B-xyLajLE-NdB9tYaw_-aa2pROUA/s672/ERA+1.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="440" data-original-width="672" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6q8izF3NWHQ0lrEYMpWR0nN1eHXvvT6SHjfvwYAoFu_fojfsayv8KN8B9b86qK1JsKpSpU9A6X3lkLlky0MTyIxCTIbYoSQIhIKQqFT9TF2B-xyLajLE-NdB9tYaw_-aa2pROUA/w400-h263/ERA+1.jpg" width="400" /></a></div><p></p><div style="text-align: left;"><br /></div><div style="text-align: left;">(1) Psychological Risks</div><p><br /></p><p><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoubcbuA16kWD9joKbG58Z3t8ERlXZH17XEB-_uDoj6tFyeKqZP1ZEZ9Eoqc4cU7pDuf0eEQcs2HQiACPNDde8UAK0ylmDHejuE3qo1W9_J8GM4i_tIoG7GRiFPbPo6t5QPtxAUw/s672/ERA+2.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="433" data-original-width="672" height="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoubcbuA16kWD9joKbG58Z3t8ERlXZH17XEB-_uDoj6tFyeKqZP1ZEZ9Eoqc4cU7pDuf0eEQcs2HQiACPNDde8UAK0ylmDHejuE3qo1W9_J8GM4i_tIoG7GRiFPbPo6t5QPtxAUw/w400-h258/ERA+2.jpg" width="400" /></a></div><div><br /></div><div>(2) Security and Transparency Risks</div><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiR1TrM1dLraPs6vGngNHCaNV_qRpDY2QDSuNV6SvTKMSZTLc4B0OyDteg06IlSfKBJWD3bxFveGDH1NP1NYQajyI4AkW1xa00F6zTpPlc8WykqYKp11bELOwg31E98eNYq05V4bA/s672/ERA+3.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="404" data-original-width="672" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiR1TrM1dLraPs6vGngNHCaNV_qRpDY2QDSuNV6SvTKMSZTLc4B0OyDteg06IlSfKBJWD3bxFveGDH1NP1NYQajyI4AkW1xa00F6zTpPlc8WykqYKp11bELOwg31E98eNYq05V4bA/w400-h240/ERA+3.jpg" width="400" /></a></div><div style="text-align: left;"><br /></div><div style="text-align: left;">(3) Environmental Risks</div><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><div style="text-align: left;"><br /></div><div style="text-align: left;"><br /></div><div style="text-align: left;">For a more detailed commentary on each of these tables see <a href="https://arxiv.org/abs/2007.15864v2" target="_blank">our full paper</a> - which also, for completeness, covers physical (safety) risks.</div><div style="text-align: left;"><br />And here are the slides from my short ICRES 2020 presentation:</div><div style="text-align: left;"><br /></div>
<iframe allowfullscreen="true" frameborder="0" height="389" mozallowfullscreen="true" src="https://docs.google.com/presentation/d/e/2PACX-1vQECtdB9FLJ1QoapCDXkroQo3RpK3uLG8vqUm1vrjv68ZCSoHj_rbUhbkxn60LijPeuuxi6cevRPHB5/embed?start=true&loop=true&delayms=3000" webkitallowfullscreen="true" width="480"></iframe><div><br /></div><div><div>Through this fictional case study we argue that we have demonstrated the value of ethical risk assessment. Our RoboTed ERA has shown that attention to ethical risks can</div><div style="text-align: left;"><ul style="text-align: left;"><li>suggest new functions, such as “RoboTed needs to sleep now”,</li><li>draw attention to how designs can be modified to mitigate some risks, </li><li>highlight the need for user engagement, and</li><li>reject some product functionality as too risky.</li></ul></div><div>But ERA is not guaranteed to expose all ethical risks. It is a subjective process which will only be successful if the risk assessment team are prepared to think both critically and creatively about the question: what could go wrong? As Shannon Vallor and her colleagues write in their excellent <a href="https://www.scu.edu/media/ethics-center/technology-ethics/Ethics-Toolkit.pdf" target="_blank">Ethics in Tech Practice</a> toolkit, design teams must develop the “habit of exercising the skill of moral imagination to see how an ethical failure of the project might easily happen, and to understand the preventable causes so that they can be mitigated or avoided”.</div><div> </div><div>*Which won the conference best paper prize!<br /></div></div><div><br /></div><div><br /></div></div></div>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-77501442910013155692020-08-20T11:11:00.008+01:002020-08-21T13:13:11.225+01:00"Why Did You Just Do That?" 
Explainability and Artificial Theory of Mind for Social RobotsThis week I have been attending (virtually) the excellent <a href="https://conferences.au.dk/robo-philosophy/" target="_blank">RoboPhilosophy</a> conference, and this morning gave a plenary talk <a href="https://conferences.au.dk/robo-philosophy/aarhus2020/events/why-did-you-just-do-that-explainability-and-artificial-theory-of-mind-for-social-robots/" target="_blank">"Why did you just do that?"</a> Here is the abstract:<div><blockquote>An important aspect of transparency is enabling a user to understand
what a robot might do in different circumstances. An elderly person
might be very unsure about robots, so it is important that her assisted
living robot is helpful, predictable – never does anything that puzzles
or frightens her – and above all safe. It should be easy for her to
learn what the robot does and why, in different circumstances, so that
she can build a mental model of her robot. An intuitive approach would
be for the robot to be able to explain itself, in natural language, in
response to spoken requests such as “Robot, why did you just do that?”
or “Robot, what would you do if I fell down?” In this talk I will
outline current work, within project <a href="https://www.robotips.co.uk/" target="_blank">RoboTIPS</a>, to apply <a href="https://alanwinfield.blogspot.com/2018/09/experiments-in-artificial-theory-of.html" target="_blank">recent research on artificial theory of mind</a> to the challenge of providing
social robots with the ability to explain themselves. </blockquote>And here are the slides:</div><div><div><br /></div></div>
<iframe allowfullscreen="true" frameborder="0" height="389" mozallowfullscreen="true" src="https://docs.google.com/presentation/d/e/2PACX-1vQ4zxBehE1uSuVE4mLd3VZPiR2E7BFBGWCZ9YZ9PMVpgdjI4THlSGRFP_2oldne-g/embed?start=true&loop=true&delayms=3000" webkitallowfullscreen="true" width="480"></iframe><div><br /></div><div>Here are links to the movies:</div><div><br /></div><div><a href="https://drive.google.com/file/d/1oRl_xlD3BULnHEj8wuDQo2at3rIg-csZ/view?usp=sharing" target="_blank">On slide 16</a>,</div><div><a href="https://drive.google.com/file/d/1jZe6HE2EFRHV9UdsWHRMNR5TiZN5GMPs/view?usp=sharing" target="_blank">on slide 20</a>,</div><div><a href="https://drive.google.com/file/d/1LA5fIx5gaq35MQHr0H7gKZZaSDcqolZp/view?usp=sharing" target="_blank">on slide 21</a>, and</div><div><a href="https://drive.google.com/file/d/1uQ1A8nrSOUvgrlV5H9iw7lBfJ1eVEgxS/view?usp=sharing" target="_blank">on slide 22</a>.</div><div><br /></div><div>And here are the papers referenced in the talk, with links:</div><div><ol style="text-align: left;"><li>Jobin, A., Ienca, M. & Vayena, E. (2019) <a href="https://www.nature.com/articles/s42256-019-0088-2" target="_blank">The global landscape of AI ethics guidelines</a>. Nat Mach Intell 1, 389–399</li><li>Winfield, A. Ethical standards in robotics and AI. Nature Electronics 2, 46–48 (2019). <a href="https://www.researchgate.net/publication/331138667_Ethical_standards_in_robotics_and_AI" target="_blank">Pre-print here.</a></li><li>Winfield, A. F. (2018) <a href="https://www.frontiersin.org/articles/10.3389/frobt.2018.00075/full" target="_blank">Experiments in Artificial Theory of Mind: from safety to story telling</a>. Front. Robot. AI 5:75.</li><li>Blum, C., Winfield, A. F. and Hafner, V. V. (2018) <a href="https://www.frontiersin.org/articles/10.3389/frobt.2017.00074/full" target="_blank">Simulation-based internal models for safer robots. Frontiers in Robotics and AI</a>, 4 (74). pp. 1-17.</li><li>Vanderelst, D. and Winfield, A. F. 
(2018) <a href="https://www.sciencedirect.com/science/article/pii/S1389041716302005" target="_blank">An architecture for ethical robots inspired by the simulation theory of cognition</a>. Cognitive Systems Research, 48. pp. 56-66.</li><li>Winfield AFT (2018) When Robots Tell Each Other Stories: The Emergence of Artificial Fiction. In: Walsh R., Stepney S. (eds) Narrating Complexity. Springer, Cham. <a href="https://www.researchgate.net/publication/318394753_When_Robots_Tell_Each_Other_Stories_The_Emergence_of_Artificial_Fiction" target="_blank">Preprint here.</a></li><li>Winfield, AF and Jirotka, M. (2017) The case for an ethical black box. In: Gao, Y. et al, eds. (2017) Towards Autonomous Robot Systems. LNCS 10454, pp. 262-273, Springer. <a href="https://www.researchgate.net/publication/318277040_The_Case_for_an_Ethical_Black_Box" target="_blank">Preprint here.</a></li><li>Winfield AFT, Katie Winkle, Helena Webb, Ulrik Lyngs, Marina Jirotka and Carl Macrae, <a href="https://arxiv.org/abs/2005.07474" target="_blank">Robot Accident Investigation: a case study in Responsible Robotics</a>, chapter submitted to RoboSoft.</li></ol><div>and mentioned in the Q&A:</div><ol style="text-align: left;"><li>Winfield, AF, K. Michael, J. Pitt and V. Evers (2019) <a href="https://ieeexplore.ieee.org/document/8662743" target="_blank">Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]</a>, in Proceedings of the IEEE, vol. 107, no. 3, pp. 509-517.</li><li>Vanderelst, D. and Winfield, A. (2018), <a href="https://aies-conference.com/2018/contents/papers/main/AIES_2018_paper_98.pdf" target="_blank">The Dark Side of Ethical Robots</a>, AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society Dec 2018 Pages 317–322. 
</li></ol></div><div><div><br /></div></div>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com1tag:blogger.com,1999:blog-20402273.post-27306800077685905402020-08-10T13:49:00.196+01:002020-08-22T08:28:40.744+01:00Back to robot coding part 1: hello worldOne of the many things I promised myself when I retired nearly two years ago was to get back to some coding. Why? Two reasons: one is that writing and debugging code is hugely satisfying - for those like me not smart enough to do pure maths or theoretical physics - it's the closest you can get to <a href="https://www.blogger.com/#">working with pure mind stuff</a>. But the second is that I want to prototype a number of ideas in cognitive robots which tie together work in <a href="https://www.blogger.com/#">artificial theory of mind</a> and the <a href="https://www.blogger.com/#">ethical black box</a>, with old ideas on how <a href="https://www.blogger.com/#">robots telling each other stories</a> and new ideas on how social robots might be able to explain themselves in response to questions like <a href="https://alanwinfield.blogspot.com/2020/08/why-did-you-just-do-that-explainability.html" target="_blank">"Robot: what would you do if I forget to take my medicine?"</a><div><br /><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp5ATcgkE7iq7mmpNiXt6NNsTD7Zepe6J-v_pxlm1d3qEtzJOJYZFCvriRtLFZTVMbKvpOJhoSYl9byiEv8ijQqFZLQzgVDEoMkV4Q0oNKQ0zoSzzEp3CuaZp_kGfItaqYqsHtTA/s639/NAOrobots.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="426" data-original-width="639" height="218" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp5ATcgkE7iq7mmpNiXt6NNsTD7Zepe6J-v_pxlm1d3qEtzJOJYZFCvriRtLFZTVMbKvpOJhoSYl9byiEv8ijQqFZLQzgVDEoMkV4Q0oNKQ0zoSzzEp3CuaZp_kGfItaqYqsHtTA/w327-h218/NAOrobots.png" width="327" /></a></div>But before starting 
to work on the robot (a NAO) I first needed to learn Python, so completed most of Codecademy's excellent <a href="https://www.blogger.com/#">Learn Python 2</a> course over the last few weeks. I have to admit that I started learning Python with big misgivings over the language. I especially don't like the way Python plays fast and loose with variable types, allowing you to arbitrarily assign a thing (integer, float, string, etc) to a variable and then assign a different kind of thing to the same variable; very different to the strongly typed languages I have used since student days: Algol 60, Algol 68, Pascal and C. However, there are things I do like: the use of indentation as part of the syntax for instance, and lots of nice built-in functions like range(), so x = range(0,10) puts a list ('array' in old money) of integers from 0 to 9 in x. </div><div><br /></div><div>So, having got my head around Python I finally made a start with the robot on Thursday last week. I didn't get far and it was *very* frustrating. <br /><br /><b>Act 1: setting up on my Mac</b><br /><br /><div>Attempting to set things up on my elderly MacBook Air was a bad mistake which sent me spiralling down a rabbit hole of problems. The first thing you have to do is download and unzip the <a href="http://doc.aldebaran.com/2-1/dev/python/install_guide.html" target="_blank">NAO API, called naoqi</a>, from Aldebaran. 
The same web page then suggests you simply try to <span style="font-family: courier;">import naoqi</span> from within Python, and if there are no errors all's well.</div><div><br /></div><div>As soon as I got the <span style="font-family: courier;">export path</span> commands right, <span style="font-family: courier;">import naoqi </span><span style="font-family: inherit;">resulted in the following error:</span></div><div><span style="font-family: inherit;"><br /></span></div><div><span style="font-family: courier;">...</span></div><div><span style="font-family: courier;">Reason: unsafe use of relative rpath libboost_python.dylib in /Users/alansair/Desktop/naoqi/pynaoqi-python2.7-2.1.4.13-mac64/_qi.so with restricted binary</span></div><div><span style="font-family: inherit;"><br /></span></div><div><div>According to Stack Overflow this problem is caused by <a href="https://stackoverflow.com/questions/38641643/library-not-loaded-libboost-python-dylib" target="_blank">Mac OSX system integrity protection (SIP)</a>. </div><div><br /></div><div>Then (somewhat nervously) I tried turning SIP off, <a href="https://www.macworld.com/article/2986118/how-to-modify-system-integrity-protection-in-el-capitan.html" target="_blank">as instructed here</a>.</div><div style="font-family: inherit;"><br /></div></div><div>But <span style="font-family: courier;">import naoqi </span><span style="font-family: inherit;">still gave a different error. Perhaps it's because my Python was in the wrong place: the Aldebaran page says it must be at </span>/usr/local/bin/python (the default on the Mac is /usr/bin)<span style="font-family: inherit;">. </span>So I reinstalled Python 2.7 from Python.org so that it is in /usr/local/bin/python. 
But now I get another error message:</div><div><br /></div><div><div><span style="font-family: courier;">>> import naoqi</span></div><div><span style="font-family: courier;">Fatal Python error: PyThreadState_Get: no current thread</span></div><div><span style="font-family: courier;">Abort trap: 6</span></div></div><div><br /></div><div>A quick search and I read: "this error shows up when a module tries to use a python library that is different than the one the interpreter uses, that is, when you mix two different pythons. I would run otool -L <dyld> on each of the dynamic libraries in the list of Binary Images, and see which ones is linked to the system Python."</div><div><br /></div><div>At which point I admitted defeat.</div><div><br /></div><div><b>Act 2: setting up on my Linux machine</b></div><div><br /></div><div>Once I had established that the Python on my Linux machine was also the required version 2.7, I then downloaded and unzipped the <a href="http://doc.aldebaran.com/2-1/dev/python/install_guide.html" target="_blank">NAO API, this time for Linux</a>.</div><div><br /></div><div>This time I was able to <span style="font-family: courier;">import naoqi</span> with no errors, and within just a few minutes ran my first NAO program: <a href="http://doc.aldebaran.com/2-1/dev/python/making_nao_speak.html" target="_blank">hello world</a>. </div><div><br /></div><div><div><span style="font-family: courier;">from naoqi import ALProxy</span></div><div><span style="font-family: courier;">tts = ALProxy("ALTextToSpeech", "164.168.0.17", 9559)</span></div><div><span style="font-family: courier;">tts.say("Hello, world!")</span></div></div><div><br /></div><div>whereupon my NAO robot spoke the words "Hello world". 
Success!</div></div>Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-7547385796325408932020-06-05T14:02:00.000+01:002020-06-05T20:38:18.836+01:00Robot Accident InvestigationYesterday I gave a talk at the <a href="https://www.icra2020.org/" target="_blank">ICRA 2020</a> workshop <a href="https://against-20.github.io/" target="_blank">Against Robot Dystopias</a>. The workshop should have been in Paris but - like most academic meetings during the lockdown - was held online. In the Zoom chat window toward the end of the workshop many of us were wistfully imagining continued discussions in a Parisian bar over a few glasses of wine. Next year I hope. The workshop was excellent and all of the talks should be online soon.<br />
<br />
My talk was an extended version of last year's talk for AI@Oxford <i><a href="https://alanwinfield.blogspot.com/2019/09/whats-worst-that-could-happen-why-we.html" target="_blank">What could possibly go wrong</a></i>. With results from our new paper <i><a href="https://arxiv.org/abs/2005.07474" target="_blank">Robot Accident Investigation</a></i>, the talk outlines a fictional investigation of a fictional robot accident. We had hoped to stage the mock accident, in the lab, with human volunteers and report a real investigation (of a mock accident) but the lockdown put paid to that too. So we have had to use our imagination and construct - I hope plausibly - the process and findings of the accident investigation.<br />
<br />
Here is the abstract of our paper.<br />
<blockquote class="tr_bq">
Robot accidents are inevitable. Although rare, they have been happening since
assembly-line robots were first introduced in the 1960s. But a new generation
of social robots are now becoming commonplace. Often with sophisticated
embedded artificial intelligence (AI), social robots might be deployed as care
robots to assist elderly or disabled people to live independently. Smart robot
toys offer a compelling interactive play experience for children, and
increasingly capable autonomous vehicles (AVs) the promise of hands-free
personal transport and fully autonomous taxis. Unlike industrial robots which
are deployed in safety cages, social robots are designed to operate in human
environments and interact closely with humans; <u>the likelihood of robot
accidents is therefore much greater for social robots than industrial robots</u>.
This paper sets out a draft framework for social robot accident investigation;
a framework which proposes both the technology and processes that would allow
social robot accidents to be investigated with no less rigour than we expect of
air or rail accident investigations. The paper also places accident
investigation within the practice of responsible robotics, and makes the case
that <u>social robotics without accident investigation would be no less
irresponsible than aviation without air accident investigation</u>.</blockquote>
And the slides from yesterday's talk:<br />
<br />
<iframe allowfullscreen="true" frameborder="0" height="389" mozallowfullscreen="true" src="https://docs.google.com/presentation/d/e/2PACX-1vSEl1s54lSsfI0GBnsep9h78iFYI1BgUDw3VUhhXJDYlLAWQ0WXgI2tWrpDELaaXw/embed?start=true&loop=true&delayms=3000" webkitallowfullscreen="true" width="480"></iframe><br />
<br />
<hr />
Special thanks to project colleagues and co-authors: Prof Marina Jirotka, Prof Carl Macrae, Dr Helena Webb, Dr Ulrik Lyngs and Katie Winkle.Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-82145387584198967992020-04-20T10:35:00.000+01:002020-05-12T08:55:57.883+01:00Autonomous Robot Evolution: an updateIt's been over a year since my <a href="http://alanwinfield.blogspot.com/2019/02/first-automated-robot-assembly.html" target="_blank">last progress report</a> from the <a href="https://www.york.ac.uk/robot-lab/are/" target="_blank">Autonomous Robot Evolution (ARE) project</a>, so an update on the ARE Robot Fabricator (RoboFab) is long overdue. There have been several significant advances. First is integration of each of the elements of RoboFab. Second is the design and implementation of an assembly fixture, and third significantly improved wiring. Here is a CAD drawing of the integrated RoboFab.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqc_-QR3lqnnw5dlpkjjTyJsqJOUGw7oHNqpVubumZ5E5IFJslt7LmruWcqTyyP5AEYfcG6iKTXHY5uztgaqo2t-mlsds0Fy1bLQpEXIWJNvSFC-IZ2zukpeAfqlLueM6s351kGQ/s1600/CAD_layout%25281%2529.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="487" data-original-width="1450" height="131" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqc_-QR3lqnnw5dlpkjjTyJsqJOUGw7oHNqpVubumZ5E5IFJslt7LmruWcqTyyP5AEYfcG6iKTXHY5uztgaqo2t-mlsds0Fy1bLQpEXIWJNvSFC-IZ2zukpeAfqlLueM6s351kGQ/s400/CAD_layout%25281%2529.jpg" width="400" /></a></div>
<div class="" style="clear: both; text-align: left;">
The ARE RoboFab has four major subsystems: up to three 3D printers, an organ bank, an assembly fixture and a centrally positioned robot arm (multi-axis manipulator). The purpose of each of these subsystems is outlined as follows:</div>
<div class="separator" style="clear: both;">
</div>
<ul>
<li>The 3D printers are used to print the evolved robot’s skeleton, which might be a single part, or several. With more than one 3D printer we can speed up the process by 3D printing skeletons for several different evolved robots in parallel, or – for robots with multi-part skeletons – each part can be printed in parallel.</li>
<li>The organ bank contains a set of pre-fabricated organs, organised so that the robot arm can pick organs ready for placing within the part-built robot. For more on the organs see <a href="http://alanwinfield.blogspot.com/2019/02/first-automated-robot-assembly.html" target="_blank">previous blog post(s)</a>.</li>
<li>The assembly fixture is designed to hold (and if necessary rotate) the robot’s core skeleton while organs are placed and wired up.</li>
<li>The robot arm is the engine of RoboFab. Fitted with a special gripper, the robot arm is responsible for assembling the complete robot.</li>
</ul>
And here is the Bristol RoboFab (there is a second identical RoboFab in York):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimSiZ_DCagTuEGMFV1iHvXARUg_HPkuknkH7ZG70vIDPZgAvq0OedEpq2A4bQPDNWiilhsL_vMRZiI0wM656uShg8CSDST1YJm6yveCgw45Q50naJ-y4h0wEs4K9JuZp2o_oZQ6A/s1600/RoboFab.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimSiZ_DCagTuEGMFV1iHvXARUg_HPkuknkH7ZG70vIDPZgAvq0OedEpq2A4bQPDNWiilhsL_vMRZiI0wM656uShg8CSDST1YJm6yveCgw45Q50naJ-y4h0wEs4K9JuZp2o_oZQ6A/s400/RoboFab.jpg" width="400" /></a></div>
<div class="" style="clear: both; text-align: left;">
<br />
Note that the assembly fixture is mounted upside down at the top front of the RoboFab. This has the advantage that there is a reasonable volume of clear space for assembly of the robot under the fixture, which is reachable by the robot arm.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="" style="clear: both;">
The fabrication and assembly sequence has six stages:</div>
<div class="separator" style="clear: both;">
</div>
<ol>
<li>RoboFab receives the required coordinates of the organs and one or more mesh file(s) of the shape of the skeleton.</li>
<li>The skeleton is 3D printed.</li>
<li>The robot arm fetches the core ‘brain’ organ from the organ bank and clips it into the skeleton on the print bed. This is a strong locking clip.</li>
<li>The robot arm then lifts the core organ and skeleton assemblage off the print bed, and attaches it to the assembly fixture. The core organ has metal disks on its underside which are used to secure the assemblage to the fixture with electromagnets.</li>
<li>The robot arm then picks and places the required organs from the organ bank, clipping them into place on the skeleton.</li>
<li>Finally the robot arm wires each organ to the core organ, to complete the robot.</li>
</ol>
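The six-stage sequence above can be sketched as a simple sequential coordinator. To be clear, all of the function names and data shapes below are hypothetical illustrations of the ordering of the stages, not the project's actual software; the real RoboFab drives physical 3D printers and a robot arm.

```python
def fabricate_robot(skeleton_meshes, organ_placements):
    """Sketch of the RoboFab six-stage build sequence (illustrative only).

    skeleton_meshes: list of mesh file names for the skeleton part(s).
    organ_placements: list of (organ_name, position) pairs to be fetched
    from the organ bank and clipped into the skeleton.
    Returns an ordered log of the stages performed.
    """
    steps = []

    # Stage 1: receive the evolved robot's build description.
    steps.append(f"received {len(skeleton_meshes)} mesh(es), "
                 f"{len(organ_placements)} organ placement(s)")

    # Stage 2: 3D print each skeleton part (in parallel on real hardware).
    for mesh in skeleton_meshes:
        steps.append(f"printed skeleton part: {mesh}")

    # Stage 3: fetch the core 'brain' organ and clip it into the
    # skeleton on the print bed, with a strong locking clip.
    steps.append("clipped core organ into skeleton")

    # Stage 4: lift the assemblage off the print bed and attach it to
    # the inverted assembly fixture, held by electromagnets acting on
    # the metal disks on the core organ's underside.
    steps.append("mounted assemblage on fixture")

    # Stage 5: pick and place each remaining organ from the organ bank.
    for organ, position in organ_placements:
        steps.append(f"placed {organ} at {position}")

    # Stage 6: pull each organ's coiled cable and plug it into the core.
    for organ, _ in organ_placements:
        steps.append(f"wired {organ} to core organ")

    return steps

# Example: a two-wheeled evolved robot with a single-part skeleton.
build_log = fabricate_robot(
    ["chassis.stl"],
    [("wheel_organ_left", (0, -40)), ("wheel_organ_right", (0, 40))],
)
```

The essential design point the sketch captures is that wiring (stage 6) happens only after every organ is clipped in place (stage 5), which is what makes the coiled-cable scheme described below workable.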
<div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidhoBnXl_2dLoGBHx228E-woxrkNByADx8Uv_BZtZL6_P7LsM8NRmQc5zj_MVIK0AMG-TtNuAgLKWlF3uqoKl5nxUXAFqcShe9JEejJF2ANLEHI6BImZshT3YjWQGrp3xgVXcfjA/s1600/Assembled_Robot_jpg.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><br /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidhoBnXl_2dLoGBHx228E-woxrkNByADx8Uv_BZtZL6_P7LsM8NRmQc5zj_MVIK0AMG-TtNuAgLKWlF3uqoKl5nxUXAFqcShe9JEejJF2ANLEHI6BImZshT3YjWQGrp3xgVXcfjA/s1600/Assembled_Robot_jpg.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1318" data-original-width="1600" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidhoBnXl_2dLoGBHx228E-woxrkNByADx8Uv_BZtZL6_P7LsM8NRmQc5zj_MVIK0AMG-TtNuAgLKWlF3uqoKl5nxUXAFqcShe9JEejJF2ANLEHI6BImZshT3YjWQGrp3xgVXcfjA/s320/Assembled_Robot_jpg.jpg" width="320" /></a></div>
<br />
<br />
Here is a complete robot, fabricated, assembled and wired by the RoboFab. This evolved robot has a total of three organs: the core ‘brain’ organ, and two wheel organs.<br />
<div class="" style="clear: both; text-align: left;">
Note especially the wires connecting the wheel organs to the core organ. My colleague Matt has come up with an ingenious design in which a coiled cable is contained within the organ. After the organs have been attached to the skeleton (stage 5), the robot arm in turn grabs each organ's jack plug and pulls the cable to plug into the core organ (stage 6). This design minimises the previously encountered problem of the robot gripper getting tangled in dangling loose wires during stage 6.<br />
<br />
And here is a video clip of the complete process:</div>
<br />
<iframe allowfullscreen="" frameborder="0" height="270" src="https://www.youtube.com/embed/mWjZya9PJQg" width="480"></iframe>
<br />
<hr />
Credits<br />
<br />
The work described here has been led by my brilliant colleague Matt Hale, very ably supported by York colleagues Edgar Buchanan and Mike Angus. The only credit I can take is that I came up with some of the ideas and co-wrote the bid that secured the EPSRC funding for the project.<br />
<br />
References<br />
<br />
For a much more detailed account of the RoboFab see this paper, which was presented at ALife 2019 last summer in Newcastle: <a href="https://www.mitpressjournals.org/doi/pdf/10.1162/isal_a_00147" target="_blank">The ARE Robot Fabricator: How to (Re)produce Robots that Can Evolve in the Real World.</a><br />
<br />
Related blog posts<br />
<br />
<a href="http://alanwinfield.blogspot.com/search/label/Autonomous%20Robot%20Evolution" target="_blank">First automated robot assembly</a> (February 2019)<br />
<a href="http://alanwinfield.blogspot.com/2018/07/autonomous-robot-evolution-from-cradle.html" target="_blank">Autonomous Robot Evolution: from cradle to grave</a> (July 2018)<br />
<a href="http://alanwinfield.blogspot.com/2018/10/autonomous-robot-evolution-first-steps.html" target="_blank">Autonomous Robot Evolution: first steps</a> (Oct 2018)Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-19509678766497378672019-09-17T22:05:00.001+01:002019-11-21T16:38:51.159+00:00What's the worst that could happen? Why we need robot/AI accident investigation.Robots. What could possibly go wrong?<br />
<br />
<i>Imagine that your elderly mother, or grandmother, has an assisted living robot to help her live independently at home. The robot is capable of fetching her drinks, reminding her to take her medicine and keeping in touch with family. Then one afternoon you get a call from a neighbour who has called round and found your grandmother collapsed on the floor. When the paramedics arrive they find the robot wandering around apparently aimlessly. One of its functions is to call for help if your grandmother stops moving, but it seems that the robot failed to do this. </i><br />
<i><br /></i>
<i>Fortunately your grandmother recovers but the doctors find bruising on her legs, consistent with the robot running into them. Not surprisingly you want to know what happened: did the robot cause the accident? Or maybe it didn't but made matters worse, and why did it fail to raise the alarm? </i><br />
<i><br /></i>
Although this is a fictional scenario it could happen today. If it did you would be totally reliant on the goodwill of the robot manufacturer to discover what went wrong. Even then you might not get the answers you seek; it's entirely possible the robot and the company that made it are just not equipped with the tools and processes to undertake an investigation.<br />
<br />
Right now there are <u>no established processes for robot accident investigation</u>.<i> </i><br />
<br />
Of course accidents happen, and that's just as true for robots as for any other machinery [1].<br />
<br />
Finding statistics is tough. But <a href="https://www.osha.gov/pls/imis/AccidentSearch.search?acc_keyword=%22Robot%22&keyword_list=on" target="_blank">this web page</a> shows serious accidents with industrial robots in the US since the mid 1980s. Driverless car fatalities of course make the headlines. There have been <a href="https://en.wikipedia.org/wiki/List_of_self-driving_car_fatalities" target="_blank">five (that we know about) since 2016</a>. But we have next to no data on accidents in human robot interaction (HRI); that is for robots designed to interact directly with humans. Here is one - a security robot - that <a href="https://www.cnbc.com/2016/07/14/investigation-begins-on-robot-security-after-child-is-hurt.html" target="_blank">happened to be reported</a>.<br />
<br />
But a Responsible Roboticist must be interested in *all* accidents, whether serious or not. We should also be very interested in near misses; these are taken *very* seriously in aviation [2], and there is good evidence that <a href="https://www.haspod.com/blog/management/examples-near-miss-reporting-stop-accidents" target="_blank">reporting near misses improves safety</a>.<br />
<br />
So I am very excited to introduce our 5-year EPSRC funded project <a href="https://www.robotips.co.uk/" target="_blank">RoboTIPS – responsible robots for the digital economy</a>. Led by Professor <a href="https://www.cs.ox.ac.uk/people/marina.jirotka/" target="_blank">Marina Jirotka</a> at the University of Oxford, we believe RoboTIPS to be the first project with the aim of systematically studying the question of how to investigate accidents with social robots.<br />
<br />
So what are we doing in RoboTIPS?<br />
<div>
<br /></div>
<div>
<div>
First we will look at the technology needed to support accident investigation.</div>
<div>
<br /></div>
<div>
In a paper published 2 years ago Marina and I argued the case for an <a href="https://alanwinfield.blogspot.com/2017/08/the-case-for-ethical-black-box.html" target="_blank">Ethical Black Box</a> (EBB) [3]. Our proposition is very simple: that all robots (and some AIs) should be equipped by law with a standard device which continuously records a time stamped log of the internal state of the system, key decisions, and sampled input or sensor data (in effect the robot equivalent of an aircraft flight data recorder). Without such a device finding out what the robot was doing, and why, in the moments leading up to an accident is more or less impossible. In RoboTIPS we will be developing and testing a model EBB for social robots.</div>
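To make the idea concrete, here is a minimal sketch of a flight-recorder-style EBB in Python. Everything here (the class and field names, the capacity) is illustrative only, not the RoboTIPS implementation:

```python
import time
from collections import deque

class EthicalBlackBox:
    """A fixed-capacity, flight-recorder-style log. Once full, the
    oldest records are overwritten, so the most recent history leading
    up to an incident is always retained."""

    def __init__(self, capacity=1000):
        self.records = deque(maxlen=capacity)

    def log(self, sensors, decision):
        # Each record is time stamped and captures sampled sensor data
        # plus the decision the control system took.
        self.records.append({"t": time.time(),
                             "sensors": sensors,
                             "decision": decision})

    def dump(self):
        # An investigator reads the log back in time order.
        return list(self.records)

ebb = EthicalBlackBox(capacity=3)
for speed in (0.1, 0.2, 0.3, 0.4):
    ebb.log({"wheel_speed": speed}, decision="advance")
print(len(ebb.dump()))  # 3: the oldest record has been overwritten
```

A real EBB would of course write to tamper-resistant persistent storage rather than memory; the ring-buffer behaviour is the essential flight-recorder property.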
<div>
<br /></div>
<div>
But accident investigation is a human process of discovery and reconstruction. So in this project we will be designing and running three staged (mock) accidents, each covering a different application domain: </div>
</div>
<div>
<div>
<ul>
<li>assisted living robots, </li>
<li>educational (toy) robots, and </li>
<li>driverless cars.</li>
</ul>
</div>
<div>
In these scenarios we will be using real robots and will be seeking human volunteers to act in three roles, as the </div>
<div>
<ul>
<li>subject(s) of the accident, </li>
<li>witnesses to the accident, and as </li>
<li>members of the accident investigation team.</li>
</ul>
</div>
</div>
<div>
Thus we aim to develop and demonstrate both technologies and processes (and ultimately policy recommendations) for robot accident investigation. And the whole project will be conducted within the framework of <a href="https://www.cs.ox.ac.uk/activities/responsible_research/" target="_blank">Responsible Research and Innovation</a>; it will, in effect, be a case study in Responsible Robotics.</div>
<hr />
<div>
The text above is the script for a very short (10 minute) TED-style talk I gave at the conference <a href="https://innovation.ox.ac.uk/innovation-news/events/aioxford-conference/conference-agenda/" target="_blank">AI@Oxford</a> today in the Impact of Trust in AI session, and here below are the slides.<br />
<br />
<iframe allowfullscreen="true" frameborder="0" height="389" mozallowfullscreen="true" src="https://docs.google.com/presentation/d/e/2PACX-1vRTZbM6br6ZfUW_wYllBZELwnCD6YrcfzOXYfWe9WzX_LJlhIY2dM1_U43BfsgZHQ/embed?start=true&loop=true&delayms=3000" webkitallowfullscreen="true" width="480"></iframe>
<br />
<hr />
References:<br />
<br />
[1] Dhillon BS (1991) <a href="https://link.springer.com/chapter/10.1007/978-1-4612-3148-6_4" target="_blank">Robot Accidents. In: Robot Reliability and Safety</a>. Springer, New York, NY<br />
[2] Macrae C (2014) Close Calls: Managing Risk and Resilience in Airline Flight Safety. Palgrave Macmillan.<br />
[3] Winfield AFT and Jirotka M (2017) <a href="https://www.researchgate.net/publication/318277040_The_Case_for_an_Ethical_Black_Box" target="_blank">The Case for an Ethical Black Box</a>. In: Gao Y, Fallah S, Jin Y, Lekakou C (eds) Towards Autonomous Robotic Systems. TAROS 2017. Lecture Notes in Computer Science, vol 10454. Springer, Cham.</div>
Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com1tag:blogger.com,1999:blog-20402273.post-42710034969048251262019-07-31T14:54:00.001+01:002019-07-31T19:01:22.354+01:00On the simulation (and energy costs) of human intelligence, the singularity and simulationismFor many researchers the Holy Grail of robotics and AI is the creation of artificial persons: artefacts with general competencies equivalent to those of humans. Such artefacts would literally be simulations of humans. Some researchers are motivated by the utility of AGI; others have an almost religious faith in the transhumanist promise of the technological singularity. Others, like myself, are driven only by scientific curiosity. Simulations of intelligence provide us with working models of (elements of) natural intelligence. As Richard Feynman famously said ‘What I cannot create, I do not understand’. Used in this way simulations are like microscopes for the study of intelligence; they are scientific instruments.<br />
<br />
Like all scientific instruments simulation needs to be used with great care; simulations need to be calibrated, validated and – most importantly – their limitations understood. Without that understanding any claims to new insights into the nature of intelligence – or for the quality and fidelity of an artificial intelligence as a model of some aspect of natural intelligence – should be regarded with suspicion.<br />
<br />
In <a href="https://www.researchgate.net/profile/Alan_Winfield/publication/332496888_On_the_simulation_and_energy_costs_of_human_intelligence_the_singularity_and_simulationism/links/5cb85610a6fdcc1d499cc288/On-the-simulation-and-energy-costs-of-human-intelligence-the-singularity-and-simulationism.pdf" target="_blank">this essay</a> I have critically reflected on some of the predictions for human-equivalent AI (AGI); the paths to AGI (and especially via artificial evolution); the technological singularity, and the idea that we are ourselves simulations in a simulated universe (simulationism). The quest for human-equivalent AI clearly faces many challenges. One (perhaps stating the obvious) is that it is a very hard problem. Another, as I have argued in this essay, is that the energy costs are likely to limit progress.<br />
<br />
However, I believe that the task is made even more difficult for two further reasons. The first is – as hinted above – that we have failed to recognise simulations of intelligence (which all AIs and robots are) as scientific instruments, which need to be designed and operated, and their results interpreted, with no less care than we would apply to a particle collider or the Hubble telescope.<br />
<br />
The second, and more general observation, is that we lack a general (mathematical) theory of intelligence. This lack of theory means that a significant proportion of AI research is not hypothesis driven, but incrementalist and ad-hoc. Of course such an approach can and is leading to interesting and (commercially) valuable advances in narrow AI. But without strong theoretical foundations, the grand challenge of human-equivalent AI seems rather like trying to build particle accelerators to understand the nature of matter, without the Standard Model of particle physics.<br />
<br />
The text above is the concluding discussion of my essay <a href="https://www.researchgate.net/profile/Alan_Winfield/publication/332496888_On_the_simulation_and_energy_costs_of_human_intelligence_the_singularity_and_simulationism/links/5cb85610a6fdcc1d499cc288/On-the-simulation-and-energy-costs-of-human-intelligence-the-singularity-and-simulationism.pdf" target="_blank">On the simulation (and energy costs) of human intelligence, the singularity and simulationism</a>, which appears in an edited collection of essays in a book called <a href="https://link.springer.com/book/10.1007/978-3-030-15792-0" target="_blank">From Astrophysics to Unconventional Computation</a>. Published in April 2019, the book marks the 60th birthday of astrophysicist, computer scientist and all round genius, <a href="https://www-users.cs.york.ac.uk/susan/" target="_blank">Susan Stepney</a>.<br />
<br />
Note: regular visitors to the blog will recognise themes covered in several previous blog posts, brought together, I hope, in a coherent and interesting way.Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com2tag:blogger.com,1999:blog-20402273.post-36657626691226763682019-07-29T22:30:00.000+01:002019-07-30T12:28:47.342+01:00Ethical Standards in Robotics and AI: what they are and why they matterHere are the slides for my keynote, presented this morning at the <a href="https://www.icres2019.org/" target="_blank">International Conference on Robot Ethics and Standards (ICRES 2019)</a>. The talk is based on my paper <a href="https://www.nature.com/articles/s41928-019-0213-6.epdf" target="_blank">Ethical Standards in Robotics and AI</a> published in Nature Electronics a few months ago (<a href="https://www.researchgate.net/profile/Alan_Winfield/publication/331138667_Ethical_standards_in_robotics_and_AI/links/5c8686f1a6fdcc068187e95e/Ethical-standards-in-robotics-and-AI.pdf" target="_blank">here is a pre-print</a>).<br />
<br />
<iframe allowfullscreen="true" frameborder="0" height="389" mozallowfullscreen="true" src="https://docs.google.com/presentation/d/e/2PACX-1vQPK7yG01HlePXLnHO9nKtGFFLoZ0gMygTpj4sIexQLnMxZH1SQ4UeNOot8Ne1bMw/embed?start=true&loop=true&delayms=3000" webkitallowfullscreen="true" width="480"></iframe><br />
<br />
To see the speaker notes click on the options button on the google slides toolbar above.Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com1tag:blogger.com,1999:blog-20402273.post-70455405285547601262019-06-28T12:03:00.001+01:002021-05-15T19:41:06.897+01:00Energy and Exploitation: AIs dirty secretsA couple of days ago I gave a short 15 minute talk at an excellent <a href="http://bristol.5x15.com/" target="_blank">5x15 event in Bristol</a>. The talk I actually gave was different to the one I'd originally suggested. Two things prompted the switch: one was seeing the amazing line up of speakers on the programme - all covering more or less controversial topics - and the other was my increasing anger in recent months over the energy and human costs of AI. So it was that I wrote a completely new talk the day before this event.<br />
<br />
But before I get to my talk I must mention the amazing other speakers: we heard <a href="https://twitter.com/Philippa_Perry" target="_blank">Philippa Perry</a> speaking on child-parent relationships, <a href="https://twitter.com/HallieRubenhold" target="_blank">Hallie Rubenhold</a> on the truth about Jack the Ripper's victims, Jenny Riley speaking very movingly about <a href="https://twitter.com/One25Charity" target="_blank">One25</a>'s support for Bristol's (often homeless) sex workers, and Amy Sinclair introducing her activism with <a href="https://twitter.com/ExtinctionR" target="_blank">Extinction Rebellion</a>.<br />
<br />
Here is the script for my talk (for the slides go to the end of this blog post).<br />
<br />
<hr />
Artificial Intelligence and Machine Learning are often presented as bright clean new technologies with the potential to solve many of humanity's most pressing problems.<br />
<br />
We already enjoy the benefit of truly remarkable AI technology, like machine translation and smart maps. Driverless cars might help us get around before too long, and DeepMind's <a href="https://www.theverge.com/2018/8/13/17670156/deepmind-ai-eye-disease-doctor-moorfields" target="_blank">diagnostic AI can detect eye diseases</a> from retinal scans as accurately as a doctor.<br />
<br />
Before getting into the ethics of AI I need to give you a quick tutorial on machine learning. The most powerful and exciting AI today is based on <a href="https://en.wikipedia.org/wiki/Artificial_neural_network" target="_blank">Artificial Neural Networks</a>. Here [slide 3] is a simplified diagram of a Deep Learning network for recognising images. Each small circle is a *very* simplified mathematical model of a biological neuron, and the outputs of each layer of artificial neurons feed the inputs of the next layer. In order to be able to recognise images the network must first be trained with images that are already labelled - in this case my dog Lola.<br />
<br />
But in order to reliably recognise Lola the network needs to be trained not with one picture of Lola but many. This set of images is called the training data set and without a good data set the network will not work at all or will be biased. (In reality there will need to be not 4 but hundreds of images of Lola).<br />
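For the programmers in the audience, the layered structure just described can be sketched in a few lines of plain Python. The weights below are made up for illustration; a real network would learn them from the training data:

```python
import math

def neuron(inputs, weights, bias):
    # A *very* simplified model of a biological neuron: a weighted sum
    # of its inputs squashed into the range (0, 1) by a sigmoid.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weight_rows, biases):
    # The outputs of one layer feed the inputs of the next.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A toy network: 3 inputs -> 2 hidden neurons -> 1 output.
x = [0.5, -1.0, 0.25]
hidden = layer(x, [[0.4, -0.2, 0.1], [-0.3, 0.8, 0.5]], [0.0, 0.1])
output = layer(hidden, [[1.2, -0.7]], [0.0])
print(0.0 < output[0] < 1.0)  # True
```

A real deep network differs only in scale: many more layers, millions of weights, and a training algorithm that adjusts those weights against the labelled data set.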
<br />
So what does an AI ethicist do? Well, the short answer is worry. I worry about the ethical and societal impact of AI on individuals, society and the environment. Here are some keywords on ethics [slide 4], reflecting that we must work toward AI that respects Human Rights, diversity and dignity, is unbiased and sustainable, transparent, accountable and socially responsible.<br />
<br />
But I do more than just worry. I also take practical steps like drafting ethical principles, and helping to write ethical standards for the <a href="http://www.machinebuilding.net/ta/t1028.htm" target="_blank">British Standards Institute</a> and the <a href="https://ethicsinaction.ieee.org/" target="_blank">IEEE Standards Association</a>. I lead <a href="http://alanwinfield.blogspot.com/2017/01/the-infrastructure-of-life-2.html" target="_blank">P7001</a>: a new standard on transparency of autonomous systems based on the simple ethical principle that it should always be possible to find out why an AI made a particular decision. I have given evidence in parliament several times, and recently took part in a <a href="https://topol.hee.nhs.uk/" target="_blank">study of AI and robotics in healthcare</a> and what this means for the workforce of the NHS.<br />
<br />
Now I want to share two serious new worries with you.<br />
<br />
The first is about the energy cost of AI. In 2016 Go champion <a href="https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol" target="_blank">Lee Sedol was famously defeated</a> by <a href="https://deepmind.com/research/alphago/" target="_blank">DeepMind's AlphaGo</a>. It was a remarkable achievement for AI. But consider the energy cost. In a single two hour match Sedol burned around 170 kcals: roughly the amount of energy you would get from an egg sandwich. Or about the power of an LED night light - 1 Watt. In the same two hours the AlphaGo machine <a href="https://jacquesmattheij.com/another-way-of-looking-at-lee-sedol-vs-alphago/" target="_blank">reportedly consumed 50,000 times more energy than Sedol</a>. Equivalent to a 50 kW generator for industrial lighting. And that's not taking account of the energy used to train AlphaGo.<br />
<br />
Now some people think we can make human-equivalent AI by simulating the human brain. But the most complex animal brain so far simulated is that of C. elegans – the nematode worm. It has 302 neurons and about 5000 synapses - these are the connections between neurons. A couple of years ago I worked out that simulating a neural network for a simple robot with only a tenth of the number of neurons of C. elegans <a href="http://alanwinfield.blogspot.com/2014/07/estimating-energy-cost-of-evolution.html" target="_blank">costs 2000 times more energy than the whole worm</a>.<br />
<br />
In a <a href="https://arxiv.org/pdf/1906.02243.pdf" target="_blank">new paper</a> that came out just a few days ago we have for the first time estimates of the carbon cost of training large AI models for natural language processing such as machine translation [1]. The carbon cost of simple models is quite modest, but with tuning and experimentation the carbon cost leaps to 7 times the carbon footprint of an average human in one year (or 2 times if you're an American).<br />
<br />
And the energy cost of optimising the biggest model is a staggering 5 times the carbon cost of a car over its whole lifetime, including manufacturing it in the first place. The dollar cost of that amount of energy is estimated at between one and three million US dollars. (Something that only companies with very deep pockets can afford.)<br />
<br />
These energy costs seem completely at odds with the urgent need to <a href="https://phys.org/news/2018-09-halving-greenhouse-gas-emissions-roadmap.html" target="_blank">halve carbon dioxide emissions by 2030</a>. At the very least AI companies need to be honest about the huge energy costs of machine learning.<br />
<br />
Now I want to turn to the human cost of AI. It is often said that one of the biggest fears around AI is the loss of jobs. In fact the opposite is happening. Many new jobs are being created, but the tragedy is that they are not great jobs, to say the least. Let me introduce you to two of these new kinds of jobs.<br />
<br />
The first is AI tagging. This is manually labelling objects in images to, for instance, generate training data sets for driverless car AIs. Better (and safer) AI needs huge training data sets and a whole new outsourced industry has sprung up all over the world to meet this need. Here [slide 9] is an <a href="https://www.scmp.com/tech/article/2166655/ai-promises-jobs-revolution-first-it-needs-old-fashioned-manual-labour-china" target="_blank">AI tagging factory in China</a>.<br />
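To see why this is so labour-intensive, here is the kind of record a single tagging job produces. The field names below are purely illustrative, not any particular company's format:

```python
import json

# One manually produced annotation for one image: the tagger draws a
# bounding box [x, y, width, height] around each object and names it.
# A driverless car training set needs millions of such records, every
# one produced by a human worker.
annotation = {
    "image": "frame_000123.jpg",
    "labels": [
        {"class": "pedestrian", "bbox": [412, 180, 58, 144]},
        {"class": "cyclist", "bbox": [605, 210, 90, 130]},
    ],
}
print(json.dumps(annotation["labels"][0]))
```

Multiply a few seconds per box by millions of images and the scale of the hidden workforce becomes obvious.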
<br />
Conversational AI or chat bots also need human help. Amazon for instance employs thousands of both full-time employees and contract workers to <a href="https://www.bloomberg.com/news/articles/2019-04-10/is-anyone-listening-to-you-on-alexa-a-global-team-reviews-audio" target="_blank">listen to and annotate speech</a>. The tagged speech is then fed back to Alexa to improve its comprehension. And last month <a href="https://www.theguardian.com/technology/2019/may/28/a-white-collar-sweatshop-google-assistant-contractors-allege-wage-theft" target="_blank">the Guardian reported</a> that Google employs around 100,000 temps, vendors and contractors: literally an army of linguists working in "white collar sweatshops" to create the handcrafted data sets required for Google translate to learn dozens of languages. Not surprisingly there is a huge disparity between the wages and working conditions of these workers and Google's full time employees.<br />
<br />
AI tagging jobs are dull, repetitive and in the case of the linguists highly skilled. But by far the worst kind of new white collar job in the AI industry is content moderation.<br />
<br />
These tens of thousands of people, employed by third-party contractors, are required to watch and vet offensive content: hate speech, violent pornography, cruelty and sometimes murder of both animals and humans for Facebook, YouTube and other media platforms [2]. These jobs are not just dull and repetitive they are positively dangerous. <a href="https://www.newyorker.com/tech/annals-of-technology/the-human-toll-of-protecting-the-internet-from-the-worst-of-humanity" target="_blank">Harrowing</a> <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona" target="_blank">reports</a> tell of PTSD-like trauma symptoms, panic attacks and burnout after one year, alongside micromanagement, poor working conditions and ineffective counselling. And very poor pay - typically $28,800 a year. Compare this with average annual salaries at Facebook of ~$240,000.<br />
<br />
The big revelation to me over the past few months is the extent to which AI has a <i>human supply chain</i>, and I am an AI insider! The genius designers of this amazing tech rely on both huge amounts of energy and a hidden army of what <a href="https://www.theverge.com/2019/5/13/18563284/mary-gray-ghost-work-microwork-labor-silicon-valley-automation-employment-interview" target="_blank">Mary Gray and Siddhartha Suri call Ghost Workers</a>.<br />
<br />
I would like to leave you with a question: <u>how can we, as ethical consumers, justify continuing to make use of unsustainable and unethical AI technologies</u>?<br />
<br />
<br />
<iframe allowfullscreen="true" frameborder="0" height="389" mozallowfullscreen="true" src="https://docs.google.com/presentation/d/e/2PACX-1vRdtfidsL4pTAgHmqDsEcA8QS3QbTAeHwHV5SzFvAzPSOXP7X6v4HsPDtXdyZ3O7Q/embed?start=true&loop=true&delayms=3000" webkitallowfullscreen="true" width="480"></iframe>
<br />
<br />
<hr />
References:<br />
<br />
[1] Emma Strubell, Ananya Ganesh, Andrew McCallum (2019) <a href="https://arxiv.org/abs/1906.02243" target="_blank">Energy and Policy Considerations for Deep Learning in NLP</a>, arXiv:1906.02243<br />
[2] Sarah Roberts (2016) <a href="https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=1014&context=commpub" target="_blank">Digital Refuse: Canadian Garbage, Commercial Content Moderation and the Global Circulation of Social Media’s Waste</a>, Media Studies Publications. 14.Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com1tag:blogger.com,1999:blog-20402273.post-53612874186644475812019-05-30T00:13:00.002+01:002019-07-08T07:36:48.122+01:00My top three policy and governance issues in AI/ML
<br />
In preparation for a <a href="https://www.technologyreview.com/s/613589/the-world-economic-forum-wants-to-develop-global-rules-for-ai/" target="_blank">meeting of the WEF global AI council today</a>, we were asked the question:<br />
<br />
<i>What do you think are the top three policy and governance issues that face AI/ML currently?</i><br />
<br />
Here are my answers.<br />
<ol>
<li>For me the biggest governance issue facing AI/ML ethics is the <b>gap between principles and practice</b>. The hard problem the industry faces is turning good intentions into demonstrably good behaviour. In the last 2.5 years there has been a gold rush of new ethical principles in AI. Since January 2017 <a href="http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html" target="_blank">at least 22 sets of ethical principles have been published</a>, including principles from Google, IBM, Microsoft and Intel. Yet any evidence that these principles are making a difference within those companies is hard to find – leading to a justifiable accusation of <b><a href="https://www.theverge.com/2019/4/3/18293410/ai-artificial-intelligence-ethics-boards-charters-problem-big-tech" target="_blank">ethics-washing</a></b> – and if anything the reputations of some leading AI companies are looking increasingly tarnished.</li>
<li>Like others I am deeply concerned by the acute <b>gender imbalance</b> in AI (estimates of the proportion of women in AI vary between ~<a href="https://medium.com/element-ai-research-lab/estimating-the-gender-ratio-of-ai-researchers-around-the-world-81d2b8dbe9c3" target="_blank">12%</a> and ~<a href="https://futurism.com/ai-gender-gap-artificial-intelligence" target="_blank">22%</a>). This is not just unfair, I believe it to be positively dangerous, since it is resulting in AI products and services that reflect the values and ambitions of (young, predominantly white) men. This makes it a governance issue. I cannot help wondering if the deeply troubling rise of <b>surveillance capitalism</b> is not, at least in part, a consequence of male values.</li>
<li>A major policy concern is the apparently <b>very poor quality of many of the jobs created by the large AI/ML companies</b>. Of course the AI/ML engineers are paid exceptionally well, but it seems that there is a very large number of very poorly paid workers who, in effect, compensate for the fact that AI is not (yet) capable of <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona" target="_blank">identifying offensive content</a>, nor is it able to learn without training data generated from large quantities of <a href="https://www.scmp.com/tech/article/2166655/ai-promises-jobs-revolution-first-it-needs-old-fashioned-manual-labour-china" target="_blank">manually tagged objects in images</a>, nor can <a href="https://www.theguardian.com/technology/2019/may/28/a-white-collar-sweatshop-google-assistant-contractors-allege-wage-theft" target="_blank">conversational AI manage all the queries</a> that might be presented to it. This hidden army of piece workers, employed in developing countries by third party sub-contractors and paid very poorly, is undertaking work that is at best extremely tedious (you might say robotic) and at worst psychologically very harmful; this has been called AI’s dirty little secret and should not – in my view – go unaddressed.</li>
</ol>
Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com1tag:blogger.com,1999:blog-20402273.post-35979356061983093662019-04-18T12:35:00.001+01:002019-07-07T11:05:43.776+01:00An Updated Round Up of Ethical Principles of Robotics and AI<span style="font-family: inherit;">This blogpost is an updated round up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">I previously listed <a href="http://alanwinfield.blogspot.com/2017/12/a-round-up-of-robotics-and-ai-ethics.html">principles published before December 2017 here</a>; this blogpost appends those principles drafted since January 2018 (plus one in October 2017 I had missed). The principles are listed here (in full or abridged) with links, notes and references but without critique.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Scroll down to the next horizontal line for the updates.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">If there are any (prominent) ones I’ve missed please let me know.</span><br />
<div>
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="color: red; font-family: inherit;"><b>Asimov’s three laws of Robotics (1950)</b></span><br />
<div>
<ol>
<li><span style="font-family: inherit;">A robot may not injure a human being or, through inaction, allow a human being to come to harm. </span></li>
<li><span style="font-family: inherit;">A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. </span></li>
<li><span style="font-family: inherit;">A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.</span></li>
</ol>
<span style="font-family: inherit;">I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent principles have been drafted as a direct response. The three laws first appeared in Asimov’s short story Runaround [1]. <a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics">This wikipedia article</a> provides a very good account of the three laws and their many (fictional) extensions.</span></div>
<div>
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="color: red; font-family: inherit;"><b>Murphy and Wood’s three laws of Responsible Robotics (2009)</b></span></div>
<div>
<ol>
<li><span style="font-family: inherit; text-align: justify;">A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. </span></li>
<li><span style="font-family: inherit; text-align: justify;">A robot must respond to humans as appropriate for their roles. </span></li>
<li><span style="font-family: inherit; text-align: justify;">A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. </span></li>
</ol>
</div>
<div>
<span style="font-family: inherit;"><span style="box-sizing: border-box; text-align: justify;">These were proposed in Robin Murphy and David Wood’s paper </span><a href="http://www.inf.ufrgs.br/~prestes/Courses/Robotics/beyond%20asimov.pdf" style="box-sizing: border-box; text-align: justify;">Beyond Asimov: The Three Laws of Responsible Robotics</a><span style="box-sizing: border-box; text-align: justify;"> [2].</span></span></div>
<div>
<b style="text-align: justify;"><span style="box-sizing: border-box; color: red;"><span style="color: #777777; font-family: "arial" , "helvetica" , sans-serif;"><span style="background-color: white; box-sizing: border-box; font-family: inherit; font-size: 1.1em; font-weight: normal;"><br /></span></span></span></b></div>
<div>
<span style="color: red; font-family: inherit;"><b>EPSRC Principles of Robotics (2010) </b></span></div>
<div>
<ol>
<li><span style="font-family: inherit;">Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. </span></li>
<li><span style="font-family: inherit;">Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. </span></li>
<li><span style="font-family: inherit;">Robots are products. They should be designed using processes which assure their safety and security. </span></li>
<li><span style="font-family: inherit;">Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. </span></li>
<li><span style="font-family: inherit;">The person with legal responsibility for a robot should be attributed.</span></li>
</ol>
<span style="font-family: inherit;">These principles were drafted in 2010 <a href="https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/">and published online in 2011</a>, but not <a href="http://www.tandfonline.com/doi/ref/10.1080/09540091.2016.1271400">formally published until 2017</a> [3] as part of a two-part <a href="http://www.tandfonline.com/toc/ccos20/29/2">special issue of Connection Science</a> on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible <a href="https://www.researchgate.net/publication/258540278_Roboethics_-_for_humans">introduction to the EPSRC principles</a> was published in New Scientist in 2011.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="color: red; font-family: inherit;"><b>Future of Life Institute <a href="https://futureoflife.org/ai-principles/">Asilomar principles for beneficial AI</a> (Jan 2017)</b></span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:</span><br />
<blockquote class="tr_bq">
<span style="font-family: inherit;">6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.</span><br />
<span style="font-family: inherit;">7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.</span><br />
<span style="font-family: inherit;">8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.</span><br />
<span style="font-family: inherit;">9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.</span><br />
<span style="font-family: inherit;">10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.</span></blockquote>
<blockquote class="tr_bq">
<span style="font-family: inherit;">11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.</span><br />
<span style="font-family: inherit;">12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.</span><br />
<span style="font-family: inherit;">13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.</span><br />
<span style="font-family: inherit;">14. Shared Benefit: AI technologies should benefit and empower as many people as possible.</span><br />
<span style="font-family: inherit;">15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.</span></blockquote>
<span style="font-family: inherit;">An account of the <a href="https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/">development of the Asilomar principles can be found here</a>.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="color: red; font-family: inherit;"><b>The ACM US Public Policy Council <a href="https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf">Principles for Algorithmic Transparency and Accountability</a> (Jan 2017) </b></span></div>
<div>
<ol>
<li><span style="font-family: inherit;">Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society. </span></li>
<li><span style="font-family: inherit;">Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions. </span></li>
<li><span style="font-family: inherit;">Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results. </span></li>
<li><span style="font-family: inherit;">Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts. </span></li>
<li><span style="font-family: inherit;">Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. </span></li>
<li><span style="font-family: inherit;">Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected. </span></li>
<li><span style="font-family: inherit;">Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results.</span></li>
</ol>
<span style="font-family: inherit;">See the ACM <a href="https://www.acm.org/media-center/2017/january/usacm-statement-on-algorithmic-accountability">announcement of these principles here</a>. The principles form part of the ACM’s updated <a href="https://ethics.acm.org/">code of ethics</a>.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="color: red; font-family: inherit;"><b>Japanese Society for Artificial Intelligence (JSAI) <a href="http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf">Ethical Guidelines</a> (Feb 2017)</b></span><br />
<ol>
<li><span style="font-family: inherit;">Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. </span></li>
<li><span style="font-family: inherit;">Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly. </span></li>
<li><span style="font-family: inherit;">Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations. </span></li>
<li><span style="font-family: inherit;">Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. </span></li>
<li><span style="font-family: inherit;">Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. </span></li>
<li><span style="font-family: inherit;">Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society. </span></li>
<li><span style="font-family: inherit;">Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. </span></li>
<li><span style="font-family: inherit;">Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI. </span></li>
<li><span style="font-family: inherit;">Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.</span></li>
</ol>
<span style="font-family: inherit;">An explanation of the <a href="http://ai-elsi.org/archives/514">background and aims of these ethical guidelines can be found here</a>, together with a <a href="http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf">link to the full principles</a> (which are shown abridged above).</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="color: red; font-family: inherit;"><b>Draft principles of The Future Society’s Science, Law and Society Initiative (Oct 2017)</b></span><br />
<ol>
<li><span style="font-family: inherit;">AI should advance the well-being of humanity, its societies, and its natural environment. </span></li>
<li><span style="font-family: inherit;">AI should be transparent. </span></li>
<li><span style="font-family: inherit;">Manufacturers and operators of AI should be accountable. </span></li>
<li><span style="font-family: inherit;">AI’s effectiveness should be measurable in the real-world applications for which it is intended. </span></li>
<li><span style="font-family: inherit;">Operators of AI systems should have appropriate competencies. </span></li>
<li><span style="font-family: inherit;">The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.</span></li>
</ol>
<span style="font-family: inherit;">This <a href="http://www.abajournal.com/legalrebels/article/a_principled_artificial_intelligence_could_improve_justice">article by Nicolas Economou</a> explains the 6 principles with a full commentary on each one.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="color: red; font-family: inherit;"><b>Montréal Declaration for Responsible AI draft principles (Nov 2017)</b></span><br />
<ol>
<li><span style="font-family: inherit;">Well-being The development of AI should ultimately promote the well-being of all sentient creatures. </span></li>
<li><span style="font-family: inherit;">Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems. </span></li>
<li><span style="font-family: inherit;">Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental / physical abilities, sexual orientation, ethnic/social origins and religious beliefs. </span></li>
<li><span style="font-family: inherit;">Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use. </span></li>
<li><span style="font-family: inherit;">Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation. </span></li>
<li><span style="font-family: inherit;">Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate. </span></li>
<li><span style="font-family: inherit;">Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.</span></li>
</ol>
<span style="font-family: inherit;">The <a href="https://www.montrealdeclaration-responsibleai.com/">Montréal Declaration for Responsible AI</a> proposes the 7 values and draft principles above (<a href="https://www.montrealdeclaration-responsibleai.com/the-declaration">here in full with preamble, questions and definitions</a>).</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="color: red; font-family: inherit;"><b>IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)</b></span><br />
<ol>
<li><span style="font-family: inherit;">How can we ensure that A/IS do not infringe human rights? </span></li>
<li><span style="font-family: inherit;">Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being. </span></li>
<li><span style="font-family: inherit;">How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable? </span></li>
<li><span style="font-family: inherit;">How can we ensure that A/IS are transparent? </span></li>
<li><span style="font-family: inherit;">How can we extend the benefits and minimize the risks of AI/AS technology being misused?</span></li>
</ol>
<span style="font-family: inherit;">These 5 general principles appear in <a href="https://ethicsinaction.ieee.org/">Ethically Aligned Design v2</a>, a discussion document drafted and published by the IEEE Standards Association <a href="http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html">Global Initiative on Ethics of Autonomous and Intelligent Systems</a>. The principles are expressed not as rules but instead as questions, or concerns, <a href="http://standards.ieee.org/develop/indconn/ec/ead_general_principles_v2.pdf">together with background and candidate recommendations</a>.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">A short article co-authored with IEEE general principles co-chair Mark Halverson <a href="http://sites.ieee.org/futuredirections/tech-policy-ethics/september-2017/artificial-intelligence-and-autonomous-systems-why-principles-matter/">Why Principles Matter</a> explains the link between principles and standards, together with further commentary and references.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Note that these principles have been revised and extended, in March 2019 (see below).</span><br />
<span style="font-family: inherit;"><br /></span>
<b><span style="color: red; font-family: inherit;">UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)</span></b><br />
<ol>
<li><span style="font-family: inherit;">Demand That AI Systems Are Transparent </span></li>
<li><span style="font-family: inherit;">Equip AI Systems With an Ethical Black Box </span></li>
<li><span style="font-family: inherit;">Make AI Serve People and Planet </span></li>
<li><span style="font-family: inherit;">Adopt a Human-In-Command Approach </span></li>
<li><span style="font-family: inherit;">Ensure a Genderless, Unbiased AI </span></li>
<li><span style="font-family: inherit;">Share the Benefits of AI Systems </span></li>
<li><span style="font-family: inherit;">Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights </span></li>
<li><span style="font-family: inherit;">Establish Global Governance Mechanisms </span></li>
<li><span style="font-family: inherit;">Ban the Attribution of Responsibility to Robots </span></li>
<li><span style="font-family: inherit;">Ban AI Arms Race</span></li>
</ol>
<span style="font-family: inherit;">Drafted by <a href="http://www.uniglobalunion.org/">UNI Global Union</a>‘s <a href="http://www.thefutureworldofwork.org/">Future World of Work</a> these <a href="http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf">10 principles for Ethical AI (set out here with full commentary)</a> “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.</span><br />
<hr style="background-color: white; box-sizing: border-box; color: #777777; font-family: Arial, Helvetica, sans-serif; font-size: 12.32px; text-align: justify;" />
<b>Updated principles</b><br />
<br />
<b><span style="color: red;">Intel’s recommendation for Public Policy Principles on AI (October 2017)</span></b><br />
<ol>
<li>Foster Innovation and Open Development – To better understand the impact of AI and explore the broad diversity of AI implementations, public policy should encourage investment in AI R&D. Governments should support the controlled testing of AI systems to help industry, academia, and other stakeholders improve the technology. </li>
<li>Create New Human Employment Opportunities and Protect People’s Welfare – AI will change the way people work. Public policy in support of adding skills to the workforce and promoting employment across different sectors should enhance employment opportunities while also protecting people’s welfare. </li>
<li>Liberate Data Responsibly – AI is powered by access to data. Machine learning algorithms improve by analyzing more data over time; data access is imperative to achieve more enhanced AI model development and training. Removing barriers to the access of data will help machine learning and deep learning reach their full potential. </li>
<li>Rethink Privacy – Privacy approaches like The Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to “rethink” how we apply these models to new technology. </li>
<li>Require Accountability for Ethical Design and Implementation – The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms. </li>
</ol>
These principles were <a href="https://blogs.intel.com/policy/2017/10/18/naveen-rao-announces-intel-ai-public-policy/">announced in a blog post by Naveen Rao (Intel VP AI) here</a>.<br />
<br />
<b><span style="color: red;">Lords Select Committee 5 core principles to keep AI ethical (April 2018) </span></b></div>
<div>
<ol>
<li>Artificial intelligence should be developed for the common good and benefit of humanity.</li>
<li>Artificial intelligence should operate on principles of intelligibility and fairness.</li>
<li>Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.</li>
<li>All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.</li>
<li>The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.</li>
</ol>
These principles appear in the UK House of Lords Select Committee on Artificial Intelligence report <a href="https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf">AI in the UK: ready, willing and able?</a> published in April 2018. The WEF <a href="https://www.weforum.org/agenda/2018/04/keep-calm-and-make-ai-ethical/">published a summary and commentary here</a>.<br />
<br />
<span style="color: red;"><b>AI UX: 7 Principles of Designing Good AI Products (April 2018) </b></span><br />
<ol>
<li>Differentiate AI content visually – let people know if an algorithm has generated a piece of content so they can decide for themselves whether to trust it or not. </li>
<li>Explain how machines think – helping people understand how machines work so they can use them better </li>
<li>Set the right expectations – especially in a world full of sensational, superficial news about new AI technologies. </li>
<li>Find and handle weird edge cases – spend more time testing and finding weird, funny, or even disturbing or unpleasant edge cases. </li>
<li>User testing for AI products (default methods won’t work here). </li>
<li>Provide an opportunity to give feedback.</li>
</ol>
<a href="https://uxstudioteam.com/ux-blog/ai-ux/">These principles</a> (abridged above), focussed on the design of the User Interface (UI) and User Experience (UX), are from Budapest based company <a href="https://uxstudioteam.com/">UX Studio</a>.<br />
<br />
<b><span style="color: red;">The Toronto Declaration on equality and non-discrimination in machine learning systems (May 2018)</span></b><br />
<a href="https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf">The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems</a> does not succinctly articulate ethical principles but instead presents arguments under the following headings to address concerns “about the capability of [machine learning] systems to facilitate intentional or inadvertent discrimination against certain individuals or groups of people”.<br />
<ol>
<li>Using the framework of international human rights law The right to equality and non-discrimination; Preventing discrimination, and Protecting the rights of all individuals and groups: promoting diversity and inclusion </li>
<li>Duties of states: human rights obligations State use of machine learning systems; Promoting equality, and Holding private sector actors to account </li>
<li>Responsibilities of private sector actors human rights due diligence </li>
<li>The right to an effective remedy</li>
</ol>
<b><br /></b>
<b><span style="color: red;"><a href="https://ai.google/principles/">Google AI Principles</a> (June 2018) </span></b><br />
<ol>
<li>Be socially beneficial. </li>
<li>Avoid creating or reinforcing unfair bias. </li>
<li>Be built and tested for safety. </li>
<li>Be accountable to people. </li>
<li>Incorporate privacy design principles. </li>
<li>Uphold high standards of scientific excellence. </li>
<li>Be made available for uses that accord with these principles. </li>
</ol>
These principles were launched with a <a href="https://www.blog.google/technology/ai/ai-principles/">blog post and commentary by Google CEO Sundar Pichai here.</a><br />
<br />
<span style="color: red;"><b>IBM’s 5 ethical AI principles (September 2018) </b></span><br />
<ol>
<li>Accountability: AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes. </li>
<li>Value alignment: AI should be designed to align with the norms and values of your user group in mind. </li>
<li>Explainability: AI should be designed for humans to easily perceive, detect, and understand its decision process, and the predictions/recommendations. This is also, at times, referred to as interpretability of AI. Simply speaking, users have all rights to ask the details on the predictions made by AI models such as which features contributed to the predictions by what extent. Each of the predictions made by AI models should be able to be reviewed. </li>
<li>Fairness: AI must be designed to minimize bias and promote inclusive representation. </li>
<li>User data rights: AI must be designed to protect user data and preserve the user’s power over access and uses. </li>
</ol>
For a full account read <a href="https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf">IBM’s Everyday Ethics for Artificial Intelligence</a> here.<br />
<br />
<span style="color: red;"><b>Microsoft Responsible bots: 10 guidelines for developers of conversational AI (November 2018) </b></span><br />
<ol>
<li>Articulate the purpose of your bot and take special care if your bot will support consequential use cases. </li>
<li>Be transparent about the fact that you use bots as part of your product or service. </li>
<li>Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence. </li>
<li>Design your bot so that it respects relevant cultural norms and guards against misuse. </li>
<li>Ensure your bot is reliable. </li>
<li>Ensure your bot treats people fairly. </li>
<li>Ensure your bot respects user privacy. </li>
<li>Ensure your bot handles data securely. </li>
<li>Ensure your bot is accessible. </li>
<li>Accept responsibility.</li>
</ol>
Microsoft’s guidelines for the ethical design of ‘bots’ (chatbots or conversational AIs) are <a href="https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf">fully described here</a>.<br />
<br />
<span style="color: red;"><b>CEPEJ European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment, 5 principles (February 2019) </b></span><br />
<ol>
<li>Principle of respect of fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights. </li>
<li>Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals. </li>
<li>Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment. </li>
<li>Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorising external audits. </li>
<li>Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices. </li>
</ol>
The Council of Europe <a href="https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment">ethical charter principles are outlined here</a>, with a link to the ethical charter itself.<br />
<br />
<span style="color: red;"><b>Women Leading in AI (WLinAI) 10 recommendations (February 2019) </b></span><br />
<ol>
<li>Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector. </li>
<li>Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics – to audit algorithms, investigate complaints by individuals,issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure algorithms must be fully explained to users and open to public scrutiny. </li>
<li>Introduce a new Certificate of Fairness for AI systems alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similarly to food labelling regulations. </li>
<li>Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals. </li>
<li>Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome. </li>
<li>Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness. </li>
<li>To compel companies and other organisations to bring their workforce with them – by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated. </li>
<li>Where no redeployment is possible, to compel companies to make a contribution towards a digital skills fund for those employees </li>
<li>To carry out a skills audit to identify the wide range of skills required to embrace the AI revolution. </li>
<li>To establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility. As part of that, we recommend the set up of a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology. </li>
</ol>
Presented by the <a href="http://womenleadinginai.org/">Women Leading in AI group</a> at a meeting in parliament in February 2019, <a href="https://www.forbes.com/sites/noelsharkey/2019/02/07/women-stand-against-social-injustice-in-ai/">this report in Forbes by Noel Sharkey</a> outlines the group, their recommendations, and the meeting.<br />
<br />
<span style="color: red;"><b>The NHS’s 10 Principles for AI + Data (February 2019) </b></span><br />
<ol>
<li>Understand users, their needs and the context </li>
<li>Define the outcome and how the technology will contribute to it </li>
<li>Use data that is in line with appropriate guidelines for the purpose for which it is being used </li>
<li>Be fair, transparent and accountable about what data is being used </li>
<li>Make use of open standards </li>
<li>Be transparent about the limitations of the data used and algorithms deployed </li>
<li>Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision </li>
<li>Generate evidence of effectiveness for the intended use and value for money </li>
<li>Make security integral to the design </li>
<li>Define the commercial strategy </li>
</ol>
These principles are set out with <a href="https://www.artificiallawyer.com/2019/02/22/the-nhss-10-principles-for-ai-data-a-new-benchmark-for-lawyers/">full commentary and elaboration on Artificial Lawyer here</a>.<br />
<br />
<span style="color: red;"><b>IEEE General Principles of Ethical Autonomous and Intelligent Systems (A/IS) (March 2019)</b></span><br />
<ol>
<li>Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights. </li>
<li>Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development. </li>
<li>Data Agency: A/IS creators shall empower individuals with the ability to access and securely share their data to maintain people’s capacity to have control over their identity. </li>
<li>Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS. </li>
<li>Transparency: the basis of a particular A/IS decision should always be discoverable. </li>
<li>Accountability: A/IS shall be created and operated to provide an unambiguous rationale for all decisions made. </li>
<li>Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation. </li>
<li>Competence: A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation. </li>
</ol>
These amended and extended general principles form part of <a href="https://ethicsinaction.ieee.org/">Ethically Aligned Design, 1st edition</a>, published in March 2019. For an overview <a href="https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e-overview.pdf">see the pdf here</a>.<br />
<br />
<span style="color: red;"><b> Ethical issues arising from the police use of live facial recognition technology (March 2019) </b></span></div>
<div>
Nine ethical principles relate to: public interest; effectiveness; the avoidance of bias and algorithmic injustice; impartiality and deployment; necessity; proportionality; impartiality, accountability and oversight; the construction of watchlists; public trust; and cost effectiveness.<br />
<br />
As <a href="https://www.biometricupdate.com/201902/uk-oversight-group-publishes-ethics-framework-for-police-use-of-live-facial-recognition">reported here</a>, the UK government’s independent Biometrics and Forensics Ethics Group (BFEG) published an interim report outlining nine ethical principles that form a framework to guide policy on police use of live facial recognition systems.<br />
<br />
<b><span style="color: red;">Floridi and Clement-Jones’ five principles key to any ethical framework for AI (March 2019)</span></b><br />
<ol>
<li>AI must be beneficial to humanity. </li>
<li>AI must also not infringe on privacy or undermine security. </li>
<li>AI must protect and enhance our autonomy and ability to take decisions and choose between alternatives. </li>
<li>AI must promote prosperity and solidarity, in a fight against inequality, discrimination, and unfairness. </li>
<li>We cannot achieve all this unless we have AI systems that are understandable in terms of how they work (transparency) and explainable in terms of how and why they reach the conclusions they do (accountability). </li>
</ol>
Luciano Floridi and Lord Tim Clement-Jones set out, <a href="https://tech.newstatesman.com/policy/ai-ethics-framework">here in the New Statesman</a>, these five general ethical principles for AI, with additional commentary.<br />
<br />
<span style="color: red;"><b>The European Commission’s High Level Expert Group on AI Ethics Guidelines for Trustworthy AI (April 2019) </b></span><br />
<ol>
<li><a href="https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1#Human%20agency">Human agency and oversight</a> AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy. </li>
<li><a href="https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1#Robustness">Technical robustness and safety</a> A crucial component of achieving Trustworthy AI is technical robustness, which is closely linked to the principle of prevention of harm. </li>
<li><a href="https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1#privacy">Privacy and Data governance</a> Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems. </li>
<li><a href="https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1#Transparency">Transparency</a> This requirement is closely linked with the principle of explicability and encompasses transparency of elements relevant to an AI system: the data, the system and the business models. </li>
<li><a href="https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1#Diversity">Diversity, non-discrimination and fairness</a> In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle. </li>
<li><a href="https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1#well-being">Societal and environmental well-being </a>In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system’s life cycle. </li>
<li><a href="https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1#Accountability">Accountability</a> The requirement of accountability complements the above requirements, and is closely linked to the principle of fairness. </li>
</ol>
For more detail on each of these principles follow the links above.<br />
<br />
Published on 8 April 2019, the EU HLEG <a href="https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1">AI ethics guidelines for trustworthy AI are detailed in full here</a>.<br />
<br />
<b><span style="color: red;">Draft core principles of Australia’s Ethics Framework for AI (April 2019) </span></b><br />
<ol>
<li>Generates net-benefits. The AI system must generate benefits for people that are greater than the costs. </li>
<li>Do no harm. Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes. </li>
<li>Regulatory and legal compliance. The AI system must comply with all relevant international, Australian local, state/territory and federal government obligations, regulations and laws. </li>
<li>Privacy protection. Any system, including AI systems, must ensure people’s private data is protected and kept confidential plus prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm. </li>
<li>Fairness. The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly. </li>
<li>Transparency & Explainability. People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions. </li>
<li>Contestability. When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm. </li>
<li>Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended. </li>
</ol>
These draft principles are detailed in <a href="https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf">Artificial Intelligence: Australia’s Ethics Framework (A Discussion Paper)</a>. This comprehensive paper includes detailed summaries of many of the frameworks and initiatives listed above, together with some very useful case studies.<br />
<br />
<b>References</b><br />
<br />
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.), Doubleday. ISBN 0-385-42304-7.<br />
[2] Murphy, Robin and Woods, David D. (2009): <a href="https://www.researchgate.net/publication/224567023_Beyond_Asimov_The_Three_Laws_of_Responsible_Robotics">Beyond Asimov: The Three Laws of Responsible Robotics</a>, IEEE Intelligent Systems, 24(4): 14-20.<br />
[3] Boden, Margaret, et al. (2017): <a href="https://www.tandfonline.com/doi/full/10.1080/09540091.2016.1271400">Principles of robotics: regulating robots in the real world</a>, Connection Science, 29(2): 124-129.<br />
[4] Prescott, Tony and Szollosy, Michael (eds.) (2017): <a href="https://www.tandfonline.com/toc/ccos20/29/2?nav=tocList">Ethical Principles of Robotics, Connection Science, 29(2)</a> and <a href="https://www.tandfonline.com/toc/ccos20/29/3?nav=tocList">29(3)</a>.</div>
</div>
Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com14tag:blogger.com,1999:blog-20402273.post-90357543219041119312019-03-20T11:25:00.001+00:002022-08-25T11:51:18.661+01:00The Tea test of robot intelligenceHere's a test for a general purpose robot: to pass the test it must go into someone's kitchen (previously unseen) and make them a cup of tea. When I give talks I surprise people by explaining that, despite remarkable progress, there isn't a robot on the planet that could walk (or roll) into your kitchen and make you a cup of tea. It's my <i>Tea test of robot intelligence</i>: no robot today would pass it and, I'll wager, none will for some time. It seems such a straightforward thing; most humans over the age of 12 can do it.<br />
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNkSlHc4wvnMBkNFvlvhaCi9rvoNilK367fBdAWGTXh4PVcFGEmObDQGgih7oxD0HjuwodHsbbeCfC3YJ1VONJF-NuVKdvkFSHX16ZqHPh9Gp46OIKAS6x5pcZKr6HhxYMBg10hA/s1600/469318-softbank-pepper-robot.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="456" data-original-width="810" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNkSlHc4wvnMBkNFvlvhaCi9rvoNilK367fBdAWGTXh4PVcFGEmObDQGgih7oxD0HjuwodHsbbeCfC3YJ1VONJF-NuVKdvkFSHX16ZqHPh9Gp46OIKAS6x5pcZKr6HhxYMBg10hA/s320/469318-softbank-pepper-robot.jpg" width="320" /></a></div>
Of course there are quite a few YouTube videos of robots making tea. But like <a href="https://www.youtube.com/watch?v=RcttVn2y6v0" target="_blank">this one</a> from 30 years ago, they pretty much all require everything to be in exactly the right place for the robot.</div>
<div>
<div>
<br />
<br />
<br />
So why is it so hard?</div>
<div>
<br /></div>
<div>
To understand, imagine yourself in a house you've never been in before. Maybe it's a neighbour's house and she's unwell - so you call round to help out. Perhaps she's ill in bed. Let's assume that you know where the kitchen is.</div>
<div>
<br /></div>
<div>
The first thing you need to do is locate the kettle. Not so hard for a human because you know in general what kettles look like, and even if you've never seen your neighbour's kettle before there's a pretty good chance you will find it. Of course you have other important prior knowledge - you know that (at least in British kitchens) kettles are used all the time and generally kept on a work surface - not hidden away in a cupboard.</div>
<div>
<br /></div>
<div>
While looking for the kettle you will have also found the sink, and next you do something really difficult (for a robot): you pick up and take the kettle to the sink, open its lid, position it under the cold water tap, then turn on the tap. You don't leave the tap running for long because you don't want to overfill the kettle, but luckily you're a pretty good judge of how much water is enough.</div>
<div>
<br /></div>
<div>
Then while waiting for the kettle to boil you will do something even more remarkable: you will look for a cup and a tea bag. Again your prior knowledge helps here. You know what cups look like and generally where they are stored - there may even be one on the draining board by the sink. You also know what kind of jar or packaging tea bags are found in, and you have the considerable dexterity to take one tea bag and place it in the cup near the kettle.</div>
<div>
<br /></div>
<div>
Breaking the task down like this really brings home the point that this is an extraordinarily difficult thing for a robot to do. And of course there's more: the robot must safely and carefully pour the boiling water from the kettle into the cup - and importantly sense when the cup is full enough to leave room for milk*.</div>
<div>
<br /></div>
<div>
And even then the task is by no means complete. A robot would then need to locate the fridge, open its door, identify and take out the milk (which might be in a carton, or a glass or plastic bottle) and bring the milk to the work surface. The robot must then judge how long to leave the tea bag in (you of course will have asked your neighbour whether she prefers her tea weak or strong). </div>
<div>
<br /></div>
<div>
The robot then needs to do another difficult thing: find and pick up a teaspoon and carefully extract the tea bag from the cup. Then add the milk - which of course requires first opening whatever the milk is contained in - especially hard is a cardboard carton, where the robot might have to make an opening if it isn't already open (sometimes not so easy for humans). And unscrewing the top of a plastic bottle isn't much easier. Pouring a dash of milk isn't easy either.</div>
<div>
<br /></div>
<div>
I could go on and explain the difficulty a robot then faces of picking up and carrying the tea to your neighbour.<br />
<br />
One of the hard lessons of Artificial Intelligence is that the things we thought would be difficult 60 years ago have turned out to be (relatively) easy, like playing chess, but the things we thought would be easy - like making a cup of tea - are still far from solved. Chess-playing AIs are examples of narrow (single-function) AI, whereas making a cup of tea in an unknown kitchen requires a wider set of general skills that we lump together as 'common sense'.</div>
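The task decomposition above can be written out as a nested plan. The sketch below is purely illustrative - every step name is my own shorthand, and behind each one sits an unsolved perception or manipulation problem for a real robot:

```python
# Purely illustrative: the tea-making task as a nested plan.
# Step names are hypothetical shorthand; each hides hard perception
# and manipulation problems for a real robot in an unknown kitchen.

TEA_PLAN = [
    ("fill kettle", ["locate kettle", "carry kettle to sink", "open lid",
                     "position under cold tap", "fill to judged level"]),
    ("prepare cup", ["locate cup", "locate tea bags", "place tea bag in cup"]),
    ("brew", ["pour boiling water, leaving room for milk",
              "steep to requested strength", "remove tea bag with spoon"]),
    ("add milk", ["locate fridge", "open fridge", "identify milk container",
                  "open container", "pour a dash of milk"]),
    ("serve", ["carry cup to neighbour"]),
]

def flatten(plan):
    """List every primitive step of the plan, in order."""
    return [step for _, steps in plan for step in steps]

if __name__ == "__main__":
    for i, step in enumerate(flatten(TEA_PLAN), 1):
        print(f"{i:2d}. {step}")
```

Even this flat list of seventeen primitive steps understates the problem: each one assumes prior knowledge (what kettles look like, where cups are kept) and judgement (how full is full enough) that narrow AI does not have.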
<div>
<br /></div>
<div>
<div>
*I am British after all!</div>
</div>
</div>
Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0tag:blogger.com,1999:blog-20402273.post-35676634630440253522019-02-21T12:35:00.000+00:002019-03-31T18:19:17.971+01:00First automated robot assemblyThis month saw the first important milestone toward <a href="https://www.york.ac.uk/robot-lab/are/" target="_blank">Autonomous Robot Evolution</a>: the Bristol and York team demonstrated automated assembly of a complete working robot, from evolved and 3D printed parts. In essence we demonstrated one robot assembling another.<br />
<br />
Our evolved robots consist of 3 elements:<br />
<br />
* pre-designed modules which we call <b>organs</b> (for sensors, actuators, controllers, etc.),<br />
* an evolved and 3D printed <b>skeleton</b>, and<br />
* cables (with 3.5mm jack plugs) to connect the organs and the controller.<br />
<br />
Note that the organs are not evolved but hand designed; the rationale for this approach is <a href="http://alanwinfield.blogspot.com/2018/10/autonomous-robot-evolution-first-steps.html" target="_blank">outlined here</a>.<br />
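The three-element composition can be captured in a simple data model. This is a hypothetical sketch only - the class and field names are mine, not the project's actual representation:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the three-element robot composition:
# hand-designed organs, an evolved 3D-printed skeleton, and the
# cables (3.5mm jack plugs) that connect organs to the controller.

@dataclass
class Organ:
    kind: str                          # "sensor", "controller", "actuator", ...
    designed: str = "hand-designed"    # organs are not evolved

@dataclass
class EvolvedRobot:
    skeleton: str                      # evolved, then 3D printed
    organs: list = field(default_factory=list)
    cables: int = 0

robot = EvolvedRobot(skeleton="evolved-skeleton-001")
robot.organs = [Organ("sensor"), Organ("controller"), Organ("actuator")]
# each non-controller organ is cabled to the controller
robot.cables = len(robot.organs) - 1
```

The split mirrors the design rationale: only the skeleton is subject to evolution, while the organs and cabling are fixed, reusable building blocks.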
<br />
Here are 3 basic organs:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2_4OSp72W8M1wFg_mVxI8qxxfnm7IoIics8N3g2Pteeb7p6bK0Cl6sM9u7diswd1e-XH_ls0LHiuwrY4PBXq5y7guzVnt1Z51G1ZSwZytuBBR6oje83dkiyT0lsV4t1g-GHCKMg/s1600/PhysicalOrgans.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1000" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2_4OSp72W8M1wFg_mVxI8qxxfnm7IoIics8N3g2Pteeb7p6bK0Cl6sM9u7diswd1e-XH_ls0LHiuwrY4PBXq5y7guzVnt1Z51G1ZSwZytuBBR6oje83dkiyT0lsV4t1g-GHCKMg/s400/PhysicalOrgans.jpg" width="400" /></a></div>
On the left is a sensor, in the middle a controller and on the right a motor + wheel assembly.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
And here are screenshots from the video showing the steps involved:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGoMAr8Hff6HfU-TRhL6MpJLGBOCOKv2XryLO2_Nv29BOsH_FHlr1XnrUGSOrUYhjP8I-78oCpQNtrqiTSb2r_uoxm6DUl-9Y1e4boAOr3tYDYafgBlRqIuiv6a4pnCd72xuiZNg/s1600/video_screenshots.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="385" data-original-width="1070" height="142" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGoMAr8Hff6HfU-TRhL6MpJLGBOCOKv2XryLO2_Nv29BOsH_FHlr1XnrUGSOrUYhjP8I-78oCpQNtrqiTSb2r_uoxm6DUl-9Y1e4boAOr3tYDYafgBlRqIuiv6a4pnCd72xuiZNg/s400/video_screenshots.jpg" width="400" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Step 1 shows the skeleton in the process of 3D printing. In step 2 the skeleton has been manually moved from the print bed onto the assembly area: note the organ and cable bank at the back of the assembly area. Step 3 shows the robot arm inserting the organs into the skeleton. Step 4 shows the robot arm connecting the cables. Step 5 shows the wheels being manually added, and in step 6 the robot is complete. Step 7 shows the assembled robot powered and running.<br />
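The seven steps can also be summarised as a simple pipeline, flagging which are still manual. This is an illustrative sketch only (the step labels and helper function are mine):

```python
# Illustrative sketch of the assembly pipeline shown in the video.
# Step labels are descriptive shorthand; the two manual steps are
# flagged, since automating them is the team's next goal.

ASSEMBLY_STEPS = [
    ("3D print skeleton",              "automated"),
    ("move skeleton to assembly area", "manual"),
    ("insert organs into skeleton",    "automated"),
    ("connect cables",                 "automated"),
    ("add wheels",                     "manual"),
    ("robot complete",                 "automated"),
    ("power up and run",               "automated"),
]

def remaining_manual(steps):
    """Steps still needing a human in the loop."""
    return [name for name, mode in steps if mode == "manual"]

if __name__ == "__main__":
    print(remaining_manual(ASSEMBLY_STEPS))
```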
<br />
And here is the complete video:<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="270" src="https://www.youtube.com/embed/ASEWVsSdKzE" width="480"></iframe>
<br />
<br />
Our aim is of course to automate the whole process and right now the team are working on the two problems of (1) how to remove the 3D printed skeleton from the print bed ready for transfer to the assembly area, and (2) how best to secure the skeleton in the assembly area ready for the processes outlined above.<br />
<br />
<br />
<hr />
Related blog posts:<br />
<br />
<a href="http://alanwinfield.blogspot.com/2018/07/autonomous-robot-evolution-from-cradle.html" target="_blank">Autonomous Robot Evolution: from cradle to grave</a> (July 2018)<br />
<a href="http://alanwinfield.blogspot.com/2018/10/autonomous-robot-evolution-first-steps.html" target="_blank">Autonomous Robot Evolution: first challenges</a> (Oct 2018)Alan Winfieldhttp://www.blogger.com/profile/08263812573346115168noreply@blogger.com0