Sunday, June 17, 2018

What is Artificial Intelligence? (Or, can machines think?)

Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils.



I start the keynote with Alan Turing's famous question: Can a Machine Think? and explain that thinking is not just the conscious reflection of Rodin's Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI 60 years ago we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact it has turned out to be the other way round: we've had computers that can expertly play chess for over 20 years, but we can't yet build a robot that could go into your kitchen and make you a cup of tea (see also the Wozniak coffee test).

In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, on a linear scale of intelligence from not very intelligent to human intelligence. I ask where a robot vacuum cleaner would sit on this scale and propose that such a robot is about as smart as E. coli (a single-celled organism). I then illustrate the difficulty of placing the Actroid robot on this scale because, although it may look convincingly human (from a distance), in reality the robot is not very much smarter than a washing machine (and I hint that this is an ethical problem).

In slide 7 I show, with Solarbot, that apparently intelligent behaviour doesn't require a brain. This robot is an example of a Braitenberg machine. It has two solar panels (which look a bit like wings) acting as both sensors and power sources; the left-hand panel is connected to the right-hand wheel and vice versa. These direct cross-connections mean that Solarbot can move towards the light and even navigate its way through obstacles, showing that intelligent behaviour can be an emergent property of the interactions between body and environment.
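To make the cross-wiring concrete, here is a minimal sketch of the Braitenberg control law in Python. The normalised sensor and wheel values are invented for illustration, and of course the real Solarbot is analogue: its panels drive the wheels directly, with no program at all.

```python
# A minimal sketch of Solarbot's cross-wired control law. The values are
# invented and normalised to [0, 1]; the real robot is analogue - its solar
# panels drive the wheels directly, with no program at all.

def braitenberg_step(left_light, right_light):
    # Cross-wiring: the left panel drives the right wheel and vice versa,
    # so the robot turns towards the brighter side.
    left_wheel_speed = right_light
    right_wheel_speed = left_light
    return left_wheel_speed, right_wheel_speed

# Light source off to the left: the right wheel spins faster,
# steering the robot towards the light.
print(braitenberg_step(0.9, 0.4))  # -> (0.4, 0.9)
```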

In slide 8 I ask the question: what is the most advanced AI in the world today? (A question I am often asked.) Is it, for example, David Hanson's robot Sophia (which some press reports have claimed as the world's most advanced)? I argue it is not, since it is a chatbot AI - with a limited conversational repertoire - given a physical body (imagine Alexa with a humanoid head). Is it the DeepMind AI AlphaGo, which famously beat the world's best Go player in 2016? Although very impressive, I again argue no, since AlphaGo cannot do anything other than play Go. Instead I suggest that everyday Google might well be the world's most advanced AI (on this I agree with my friend Joanna Bryson). Google is in effect a librarian able to find a book for you in an immense library - on the basis of your ill-formed query - more or less instantly! (And this librarian is multilingual too.)

In slide 9 I make the point that intelligence is not one thing that animals, robots and AIs have more or less of (in other words the linear scale shown on slides 5 and 6 is wrong). Then in slides 10-13 I propose four distinct categories of intelligence: morphological, swarm, individual and social intelligence. I suggest in slides 14-16 that if we express these as four axes of a graph then we can (very approximately) compare the intelligence of different organisms, including humans. In slide 17 I show some robots and argue that this graph shows why robots are so unintelligent: it is because robots generally have only two of the four kinds of intelligence, whereas animals typically have three, or sometimes all four. A detailed account of these ideas can be found in my paper How intelligent is your intelligent robot?
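As a toy illustration of the idea - with scores invented purely for illustration, not taken from the paper - an intelligence 'profile' becomes a point on four axes rather than a position on a line:

```python
# A toy illustration of intelligence as a profile over four axes rather than
# a single scale. The scores are invented for illustration only - they are
# not taken from the paper.

AXES = ("morphological", "swarm", "individual", "social")

profiles = {
    "human":        (3, 0, 5, 5),
    "honeybee":     (3, 5, 1, 2),
    "robot vacuum": (1, 0, 1, 0),  # only two of the four kinds, and weakly
}

for name, scores in profiles.items():
    print(f"{name:>12}: " + ", ".join(f"{a}={s}" for a, s in zip(AXES, scores)))
```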

In the next segment, slides 18-20, I ask: how do we make Artificial General Intelligence (AGI)? I suggest that the key difference between current narrow AI and AGI is the ability - which comes very naturally to humans - to generalise knowledge learned in one context to a completely different context. This, I think, is the basis of human creativity. Using Data from Star Trek: The Next Generation as a science fiction example of an AGI with human-equivalent intelligence - the kind of machine the quest for AGI is aiming for - I explain that there are three approaches to getting there: by design, using artificial evolution, or by reverse engineering animals. I offer the opinion that the gap between where we are now and Data-like AGI is about the same as the gap between current spacecraft engine technology and warp drive technology. In other words: not any time soon.

In the fourth segment of the talk (slides 21-24) I give a very brief account of evolutionary robotics - a method for breeding robots in much the same way farmers have artificially selected new varieties of plants and animals for thousands of years. I illustrate this with the wonderful Golem project which, for the first time, evolved simple creatures and then 3D printed the most successful ones. I then introduce our new four-year EPSRC-funded project Autonomous Robot Evolution: from cradle to grave. In a radical new approach we aim to co-evolve robot bodies and brains in real time and real space. Using 3D printing, new robot designs will literally be printed, before being trained in a nursery and then fitness tested in a target environment. With this approach we hope to be able to evolve robots for extreme environments; however, because the energy costs are so high, I do not think evolution is a route to truly thinking machines.
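For readers unfamiliar with artificial evolution, here is a minimal sketch of the select-vary-test loop that underpins it. The genome, fitness function and parameters are invented placeholders; in the project itself fitness is of course tested on physical, 3D-printed robots rather than computed in a toy loop like this.

```python
import random

# A minimal sketch of the select-vary-test loop behind artificial evolution.
# The genome, fitness function and parameters are invented placeholders.

POP_SIZE, GENOME_LEN, GENERATIONS, MUTATION_RATE = 20, 16, 50, 0.05

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder: stands in for training a printed robot in the nursery
    # and testing it in the target environment.
    return sum(genome)

def mutate(genome):
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # rank by fitness
    parents = population[:POP_SIZE // 2]         # select the fittest half
    offspring = [mutate(random.choice(parents))  # vary: mutated copies
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print("best fitness:", fitness(max(population, key=fitness)))
```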

In the final segment (slides 25-35) I return to the approach of trying to design, rather than evolve, thinking machines. I introduce the idea of embedding a simulation of a robot inside that robot, so that it has the ability to internally model itself. The first example I give is the amazing anthropomimetic robot invented by my old friend Owen Holland, called ECCEROBOT. ECCEROBOT is able to learn how to control its own very complicated and hard-to-control body by trying out possible movement sequences in its internal model (Owen calls this a 'functional imagination'). I then outline our own work using the same principle - a simulation-based internal model - to demonstrate simple ethical behaviours, first with e-puck robots, then with NAO robots. These experiments are described in detail here and here. I suggest that these robots - with their ability to model and predict the consequences of their own and others' actions, in other words to anticipate the future - may represent the first small steps toward thinking machines.
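The principle can be sketched very simply. The following toy example - emphatically not the code from any of the papers; the one-dimensional world, actions and scoring are all invented - simulates each candidate action in an internal model and acts on the one with the best predicted outcome:

```python
# A toy consequence engine: simulate each candidate action in an internal
# model, score the predicted outcome, and act on the best. The world and
# the scores are invented placeholders.

def simulate(position, action):
    # Stand-in for the robot's internal simulation of itself and its world.
    return position + {"left": -1, "stay": 0, "right": +1}[action]

def outcome_score(predicted_position, hole_position=3):
    # Stand-in for the safety/ethical evaluation of a predicted outcome.
    return -100 if predicted_position == hole_position else -abs(predicted_position)

def choose_action(position, actions=("left", "stay", "right")):
    return max(actions, key=lambda a: outcome_score(simulate(position, a)))

print(choose_action(2))  # -> 'left': stepping right would mean falling in the hole
```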


Related blog posts:
60 years of asking: can robots think?
How intelligent are intelligent robots?
Robot bodies and how to evolve them

Wednesday, May 30, 2018

Simulation-based internal models for safer robots

Readers of this blog will know that in recent years I've become very excited by the potential of robots with simulation-based internal models. So far we've demonstrated their potential in simple ethical robots and as the basis for rational imitation. Our most recent publication examines their potential for safety. Of course it's not hard to see why: the ability to model and predict the consequences of both your own and others' actions can help you navigate the world more safely than you could without it.

Our paper Simulation-Based Internal Models for Safer Robots demonstrates the value of anticipation in what we call the corridor experiment. Here a smart robot (equipped with a simulation-based internal model, which we call a consequence engine) must navigate to the end of a corridor while maintaining a safe space around itself at all times, despite five other robots moving randomly in the corridor - in much the same way you and I might have to navigate down a busy office corridor while others are coming in the opposite direction.

Here is the abstract from our paper:
In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.
So, does it work? Thanks to some brilliant experimental work by Christian Blum the answer is a resounding yes. The best way to understand what's going on is with the wonderful gif animation of one experimental run below. The smart robot (blue) starts at the left and has the goal of safely reaching the right-hand end of the corridor – its actual path is also shown in blue. Meanwhile five (red) robots are moving randomly (including bouncing off walls) and their actual paths are also shown in red; these robots are equipped only with simple obstacle avoidance behaviours. The larger blue circle shows blue's 'attention radius' – to reduce computational effort blue will only model red robots within this radius. The yellow paths in front of the red robots inside blue's attention radius show blue's predictions of how those robots will move (taking into account collisions with the corridor walls, with blue and with each other). The light blue projection in front of blue shows which of the 34 internally modelled next possible actions is actually chosen as blue's next action (which, as you will see, sometimes means standing still).
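A rough sketch of this selection loop might look like the following. All the values and the predict() function here are invented placeholders, not the paper's implementation:

```python
import math

# A rough sketch of the corridor selection loop: model only the red robots
# inside the attention radius, predict one step ahead, and pick the candidate
# action whose predicted outcome keeps the safety radius clear while making
# progress towards the goal. All values and predict() are invented.

ATTENTION_RADIUS = 1.5  # metres (invented)
SAFETY_RADIUS = 0.5     # metres (invented)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def choose_action(blue, goal, reds, candidate_actions, predict):
    attended = [r for r in reds if dist(blue, r) < ATTENTION_RADIUS]
    def score(action):
        my_next, their_next = predict(blue, action, attended)
        if any(dist(my_next, r) < SAFETY_RADIUS for r in their_next):
            return -math.inf            # predicted safety violation
        return -dist(my_next, goal)     # otherwise prefer progress to the goal
    return max(candidate_actions, key=score)

# Trivial demo: blue at the origin, one red robot dead ahead; the best
# predicted action is to back off (standing still would be too close).
def predict(blue, action, attended):
    return (blue[0] + action[0], blue[1] + action[1]), attended

actions = [(dx * 0.2, dy * 0.2) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
print(choose_action((0.0, 0.0), (5.0, 0.0), [(0.4, 0.0)], actions, predict))
```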


What do the results show us? Christian ran lots of trials – 88 simulations and 54 real robot experiments – over four experiments: (1) the baseline in simulation, in which the blue robot has only a simple reactive collision avoidance behaviour, (2) the baseline with real robots, (3) using the consequence engine (CE) in the blue robot in simulation, and (4) using the consequence engine in the blue robot with real robots. In the results below, (a) shows the time taken for the blue robot to reach the end of the corridor, (b) shows the distance the blue robot covers while reaching the end of the corridor, (c) shows the “danger ratio” experienced by the blue robot, and (d) shows the number of consequence engine runs per timestep in the blue robot. The danger ratio is the percentage of the run time that another robot is within the blue robot’s safety radius.
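The danger ratio is simple to compute; here is a small sketch (with invented variable names):

```python
# A small sketch of the danger-ratio metric: the percentage of timesteps at
# which another robot is inside the safety radius. Variable names invented.

def danger_ratio(min_distances, safety_radius):
    # min_distances: per-timestep distance from blue to the nearest red robot
    unsafe = sum(1 for d in min_distances if d < safety_radius)
    return 100.0 * unsafe / len(min_distances)

print(danger_ratio([0.9, 0.4, 0.6, 0.3, 1.2], safety_radius=0.5))  # -> 40.0
```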


For a relatively small cost in additional run time and distance covered, panels (a) and (b), the danger ratio is very significantly reduced from a mean value of ~20% to a mean value of zero, panel (c). Of course there is a computational cost, and this is reflected in panel (d); the baseline experiment has no consequence engine and hence runs no simulations, whereas the smart robot runs an average of between 8 and 10 simulations per time-step. This is exactly what we would expect: predicting the future clearly incurs a computational overhead.


Full paper reference:
Blum C, Winfield AFT and Hafner VV (2018) Simulation-Based Internal Models for Safer Robots. Front. Robot. AI 4:74. doi: 10.3389/frobt.2017.00074

Acknowledgements:
I am indebted to Christian Blum, who programmed the robots, set up the experiment and obtained the results outlined here. Christian lead-authored the paper, which was also co-authored by my friend and research collaborator Verena Hafner, who was Christian's PhD advisor.

Saturday, February 03, 2018

Why ethical robots might not be such a good idea after all

This week my colleague Dieter Vanderelst presented our paper: The Dark Side of Ethical Robots at AIES 2018 in New Orleans.

I blogged about Dieter's very elegant experiment here, but let me summarise. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot's logic it is transformed into a distinctly unethical robot - behaving either competitively or aggressively toward the proxy human.

Here are our paper's key conclusions:

The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviours require the same cognitive machinery with – in our implementation – only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.
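To see just how small the change is, here is an illustrative fragment - with invented names and numbers, not the code from Dieter's experiment:

```python
# An illustrative fragment (invented names and numbers) showing how small the
# change is: the same machinery evaluates the same actions, and negating a
# single term flips ethical into aggressive.

ETHICAL, AGGRESSIVE = +1, -1

def action_value(robot_benefit, human_benefit, mode):
    return robot_benefit + mode * human_benefit

print(action_value(1, 4, ETHICAL))     # ->  5  (helping the human scores highest)
print(action_value(1, 4, AGGRESSIVE))  # -> -3  (helping the human now scores lowest)
```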

On the face of it, given that we can (at least in principle) build explicitly ethical machines* then it would seem that we have a moral imperative to do so; it would appear to be unethical not to build ethical machines when we have that option. But the findings of our paper call this assumption into serious doubt. Let us examine the risks associated with ethical robots and if, and how, they might be mitigated. There are three.
  1. First there is the risk that an unscrupulous manufacturer might insert some unethical behaviours into their robots in order to exploit naive or vulnerable users for financial gain, or perhaps to gain some market advantage (here the VW diesel emissions scandal of 2015 comes to mind). There are no technical steps that would mitigate this risk, but the reputational damage from being found out is undoubtedly a significant disincentive. Compliance with ethical standards, such as the BS 8611 guide to the ethical design and application of robots and robotic systems, or the emerging new IEEE P700X ‘human’ standards, would also support manufacturers in the ethical application of ethical robots. 
  2. Perhaps more serious is the risk arising from robots that have user-adjustable ethics settings. Here the danger arises from the possibility that either the user or a technical support engineer mistakenly, or deliberately, chooses settings that move the robot’s behaviours outside an ‘ethical envelope’. Much depends of course on how the robot’s ethics are coded, but one can imagine the robot’s ethical rules expressed in a user-accessible format, for example an XML-like script. No doubt the best way to guard against this risk is for robots to have no user-adjustable ethics settings at all, so that the robot’s ethics are hard-coded and accessible to neither users nor support engineers. 
  3. But even hard-coded ethics would not guard against undoubtedly the most serious risk of all, which arises when those ethical rules are vulnerable to malicious hacking. Given that cases of white-hat hacking of cars have already been reported, it's not difficult to envisage a nightmare scenario in which the ethics settings for an entire fleet of driverless cars are hacked, transforming those vehicles into lethal weapons. Of course, driverless cars (or robots in general) without explicit ethics are also vulnerable to hacking, but weaponising such robots is far more challenging for the attacker. Explicitly ethical robots concentrate their behaviour in a small number of rules, and this makes them, we think, uniquely vulnerable to cyber-attack.
OK. Taking the most serious of these risks - hacking - we can envisage several technical approaches to mitigating the risk of malicious hacking of a robot’s ethical rules. One would be to place those ethical rules behind strong encryption. Another would require a robot to authenticate its ethical rules by first connecting to a secure server. An authentication failure would disable those ethics, so that the robot defaults to operating without explicit ethical behaviours. Although feasible, these approaches would be unlikely to deter the most determined hackers, especially those who are prepared to resort to stealing encryption or authentication keys.
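As an entirely hypothetical sketch of the authentication idea - assuming the trusted digest has been fetched from a secure server (not shown), and glossing over the key-management problems that make this hard in practice:

```python
import hashlib

# An entirely hypothetical sketch of authenticating a robot's ethical rules.
# Assume trusted_digest was fetched from a secure server at start-up; a real
# scheme would need signed rules and proper key management.

def rules_digest(rules_text):
    return hashlib.sha256(rules_text.encode()).hexdigest()

def explicit_ethics_enabled(rules_text, trusted_digest):
    if rules_digest(rules_text) != trusted_digest:
        # Authentication failure: default to operating without
        # explicit ethical behaviours.
        return False
    return True

rules = "<rule>if human_at_risk then intervene</rule>"
print(explicit_ethics_enabled(rules, rules_digest(rules)))       # -> True
print(explicit_ethics_enabled("tampered", rules_digest(rules)))  # -> False
```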

It is very clear that guaranteeing the security of ethical robots is beyond the scope of engineering and will need regulatory and legislative efforts. Considering the ethical, legal and societal implications of robots, it becomes obvious that robots themselves are not where responsibility lies. Robots are simply smart machines of various kinds and the responsibility to ensure they behave well must always lie with human beings. In other words, we require ethical governance, and this is equally true for robots with or without explicit ethical behaviours.

Two years ago I thought the benefits of ethical robots outweighed the risks. Now I'm not so sure. I now believe that - even with strong ethical governance - the risks that a robot’s ethics might be compromised by unscrupulous actors are so great as to raise very serious doubts over the wisdom of embedding ethical decision making in real-world safety critical robots, such as driverless cars. Ethical robots might not be such a good idea after all.

*As a footnote let me explain what I mean by explicitly ethical robots: these are robots that select behaviours on the basis of ethical rules - in a sense they can be said to reason about ethics (in our case by evaluating the ethical consequences of several possible actions). Here I'm using the terminology of James Moor, who proposed four kinds of ethical agents, as I explain here. Moor shows in his classification that all robots (and AIs) are ethical agents in the sense that they can all have an ethical impact.

Thus, even though we're calling into question the wisdom of explicitly ethical robots, that doesn't change the fact that we absolutely must design all robots to minimise the likelihood of ethical harms, in other words we should be designing implicitly ethical robots within Moor's schema.

Here is the full reference to our paper:

Vanderelst D and Winfield AFT (2018), The Dark Side of Ethical Robots, AAAI/ACM Conf. on AI Ethics and Society (AIES 2018), New Orleans.

Related blog posts:
The Dark side of Ethical Robots
Could we make a moral machine?
How ethical is your ethical robot?
Towards ethical robots: an update
Towards an Ethical Robot

Thursday, February 01, 2018

Ethical Governance: what is it and who's doing it?

These days I often find myself talking about ethical governance. Not just talking about but advocating: for instance in written evidence to the 2016 parliamentary select committee on robots and AI I made the link between ethical governance and trust. I believe that without transparent ethical governance robotics and AI will not win public trust, and without trust we will not see the societal benefits of robots and AI that we all hope for.

But what exactly is ethical governance and who is doing it, and perhaps more importantly, who in robotics and AI is doing it well?

In a draft paper on the subject I define ethical governance as
a set of processes, procedures, cultures and values designed to ensure the highest standards of behaviour. Ethical governance thus goes beyond simply good (i.e. effective) governance, in that it inculcates ethical behaviours. Normative ethical governance is seen as an important pillar of responsible research and innovation (RRI), which “entails an approach, rather than a mechanism, so it seeks to deal with ethical issues as or before they arise in a principled manner rather than waiting until a problem surfaces and dealing with it in an ad hoc way [1]” 
The link I make here between ethical governance and responsible research and innovation is I think really important. Ethical governance is a key part of RRI. They are not the same thing but it would be hard to imagine good ethical governance without RRI, and vice versa.

So what would I expect of companies or organisations who claim to be ethical? As a starting point for discussion here are five things that ethical companies should do:
  • Have an ethical code of conduct, so that everyone in the company understands what is expected of them. This should sit alongside a mechanism which allows employees to raise ethical concerns, if necessary in confidence, without fear of displeasing a manager.
  • Provide ethics training for everyone, without exception. Ethics, like quality, is not something you can do as an add-on; simply appointing an ethics manager, while not a bad idea, is not enough. Ethical governance needs to become part of a company's culture and DNA, not just in product development but in management, finance, HR and marketing too.
  • Undertake ethical risk assessments of all new products, and act upon the findings of those assessments. A toolkit, or method, for ethical risk assessment of robots and robotic systems exists in British Standard BS 8611, which - alongside much else - sets out 20 ethical risks and hazards together with recommendations on how to mitigate these and verify that they have been addressed.
  • Be transparent about your ethical governance. Of course your robots and AIs must be transparent too, but here I mean transparency of process, not product. It's not enough to claim to be ethical; you need to show how you are ethical. That means publishing your ethical code of conduct, the membership of your ethics board if you have one (and its terms of reference), and ideally case studies showing how you have conducted ethical risk assessments.
  • Really value ethical governance. Even if you have the four processes above in place you also need to be sincere about ethical governance: it must be one of your core values, not just a smokescreen for what you really value, like maximising shareholder returns.
My final point about really valuing ethical governance is of course hard to evidence. But, like trust, confidence in a company's claim to be ethical has to be earned and - as we've seen - can easily be damaged.

This brings me to my second question: who is doing ethical governance? And are there any examples of best practice? A week or so ago I asked Twitter this question. I've had quite a few nominations but haven't yet looked into them all. When I have, I will complete this blog post.


[1] Rainey, S., and Goujon, P. (2011). Toward a Normative Ethical Governance of Technology: Contextual Pragmatism and Ethical Governance. In René von Schomberg (ed.), Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields, Report of the European Commission-DG Research and Innovation.

Saturday, December 23, 2017

A Round Up of Robotics and AI ethics: part 1 Principles

This blogpost is a round-up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I've missed please let me know.

Asimov's three laws of Robotics (1950)
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 
I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent principles have been drafted as a direct response. The three laws first appeared in Asimov's short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.

Murphy and Woods' three laws of Responsible Robotics (2009)
  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 
  2. A robot must respond to humans as appropriate for their roles. 
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. 
These were proposed in Robin Murphy and David Woods' paper Beyond Asimov: The Three Laws of Responsible Robotics [2].

EPSRC Principles of Robotics (2010)
  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 
  3. Robots are products. They should be designed using processes which assure their safety and security. 
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 
  5. The person with legal responsibility for a robot should be attributed. 
These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.

Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)

I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
An account of the development of the Asilomar principles can be found here.

The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. 
See the ACM announcement of these principles here. The principles form part of the ACM's updated code of ethics.

Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
  1. Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. 
  2. Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
  3. Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
  4. Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. 
  5. Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. 
  6. Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society. 
  7. Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. 
  8. Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI.
  9. Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.
An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).

Draft principles of The Future Society's Science, Law and Society Initiative (Oct 2017)
  1. AI should advance the well-being of humanity, its societies, and its natural environment. 
  2. AI should be transparent
  3. Manufacturers and operators of AI should be accountable
  4. AI’s effectiveness should be measurable in the real-world applications for which it is intended. 
  5. Operators of AI systems should have appropriate competencies
  6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.
This article by Nicolas Economou explains the 6 principles with a full commentary on each one.

MontrĂ©al Declaration for Responsible AI draft principles (Nov 2017)
  1. Well-being The development of AI should ultimately promote the well-being of all sentient creatures.
  2. Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
  3. Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental / physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
  4. Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
  5. Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation.
  6. Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate.
  7. Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.
The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).

IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)
  1. How can we ensure that A/IS do not infringe human rights
  2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being
  3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable
  4. How can we ensure that A/IS are transparent
  5. How can we extend the benefits and minimize the risks of AI/AS technology being misused
These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.

A short article co-authored with IEEE general principles co-chair Mark Halverson Why Principles Matter explains the link between principles and standards, together with further commentary and references.

UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
  1. Demand That AI Systems Are Transparent
  2. Equip AI Systems With an “Ethical Black Box”
  3. Make AI Serve People and Planet 
  4. Adopt a Human-In-Command Approach
  5. Ensure a Genderless, Unbiased AI
  6. Share the Benefits of AI Systems
  7. Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights
  8. Establish Global Governance Mechanisms
  9. Ban the Attribution of Responsibility to Robots
  10. Ban AI Arms Race
Drafted by UNI Global Union's Future World of Work these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.


References
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.), Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent systems. 24 (4): 14–20.
[3] Margaret Boden et al (2017): Principles of robotics: regulating robots in the real world. Connection Science. 29 (2): 124-129.
[4] Tony Prescott and Michael Szollosy (eds.) (2017): Ethical Principles of Robotics, Connection Science. 29 (2) and 29 (3).

Wednesday, October 11, 2017

Some reflections on ERL Emergency 2017

It has now been a couple of weeks since the ERL Emergency robotics competition in Piombino, Italy, so I've had time to wind down and reflect a little on the event. The competition and associated events were a great success. A total of 16 teams from 9 countries were organised into 8 multi-domain (air, land and sea) groups for the competition - see the ERL Emergency programme here for details.

From a technical point of view what interested me most was to see how teams' performances had improved since euRathlon 2015. Of course a precise comparison is not possible, for several reasons: first, not all teams participated in both the 2015 and 2017 competitions - and of those that did, both personnel and robots had been refreshed; and second, since this is an outdoor competition, conditions (weather, wind and especially sea state) were inevitably different.

However, the fact that euRathlon 2015 and ERL Emergency 2017 were held at the same location, with updated but broadly similar competition scenarios, means that general scenario (task) level comparisons are possible. In fact we also carried forward some of the functional benchmarks from the 2015 competition, which will allow detailed analysis across both competitions (but not in this blog post).

Instead I will here give a few general (and rather subjective) comments comparing 2015 and 2017 performance.

Communications continued to be a problem for teams, with all but one team in 2017 (as in 2015) choosing to communicate with their land robots via WiFi. Now any communications engineer (as I used to be) will tell you that WiFi is a hopelessly bad choice for outdoor communications. The image above shows the waypoint positions for ground robots from the start position W1 to W6 in front of the building. The control tent was located near to W1, close to the trees above the beach - about 112m from the front of the building - and with no line of sight to W6 because of the uneven terrain. To make matters worse the land robots needed to enter the building and locate the machine room - about 10m from the entrance and again with no line of sight. Despite the obvious drawbacks of WiFi, those teams that did use it came up with workarounds, including using robots as mobile repeaters and ingenious systems in which robots deployed a succession of fixed repeaters. There was clear progress in communications from 2015 to 2017: in 2015, when it became clear that no team could communicate successfully with the machine room (room #3), we relocated the machine room from the rear of the building to the front (room #1), whereas in 2017 no such relocation was necessary; those teams that reached the machine room at the rear of the building were able to communicate with their robots. See the floorplan here.

Human-robot interfaces were critical to success. In 2015 we saw some interfaces that made it extremely difficult for teams to remotely tele-operate their robots, with operators struggling with postage-stamp sized windows showing the live video feed from the robot's cameras (especially difficult because bright sunshine on most days meant that light levels were very high inside the tent). In 2017 we saw not only much improved HRIs but also integration between autonomous and tele-operated functions so that, for instance, operators were able to drag and drop the next waypoint, monitor the robot's autonomous progress to that waypoint and then - at the waypoint - make use of smart machine vision to identify objects of potential interest (OPIs).

Effective human-human communication was also a critical success factor, underlining the fact that ERL Emergency tests not just robots but human-robot teams or - to be more accurate - human-human-robot-robot teams. Given that a team's aerial, underwater and land robot operators were typically in separate control tents, establishing exactly how and when these operators would communicate with each other was very important. In this regard we (the judges) didn't mind how intra-team communication was organised - teams could use WiFi, mobile phones, or even a runner. In 2015 the weaker teams clearly hadn't thought about this at all, and suffered as a result. Again in 2017 we saw a big improvement, with very effective intra-team communication in the most successful teams.

The full results listings are shown here for euRathlon 2015, and here for ERL Emergency 2017.

Here are a few images from the 2017 competition:




Tuesday, August 15, 2017

The case for an Ethical Black Box

Last month we presented our paper The Case for an Ethical Black Box at Towards Autonomous Robotic Systems (TAROS 2017), University of Surrey. The paper makes a very simple proposition: all robots should be fitted, as standard, with the equivalent of an aircraft Flight Data Recorder. We argue that without such a device - which we call an ethical black box - it will be impossible to properly investigate robot accidents. Ian Sample covered our paper in the Guardian here.

Here is the paper abstract:
This paper proposes that robots and autonomous systems should be equipped with the equivalent of a Flight Data Recorder to continuously record sensor and relevant internal status data. We call this an ethical black box. We argue that an ethical black box will be critical to the process of discovering why and how a robot caused an accident, and thus an essential part of establishing accountability and responsibility. We also argue that without the transparency afforded by an ethical black box, robots and autonomous systems are unlikely to win public trust.
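To make the proposal concrete, here is a minimal sketch of what an ethical black box might look like in software. The fields, capacity and API are invented for illustration; the paper does not specify a particular implementation.

```python
import json, time
from collections import deque

# A minimal sketch of an ethical black box: a bounded rolling log of
# timestamped sensor and internal-status records, like a flight data
# recorder. Fields, capacity and API are invented for illustration.

class EthicalBlackBox:
    def __init__(self, capacity=100_000):
        # Bounded rolling log: once full, the oldest records are overwritten.
        self.records = deque(maxlen=capacity)

    def log(self, sensors, internal_state):
        self.records.append({
            "t": time.time(),
            "sensors": sensors,
            "internal": internal_state,
        })

    def dump(self, path):
        # Export the log for accident investigation.
        with open(path, "w") as f:
            for record in self.records:
                f.write(json.dumps(record) + "\n")

ebb = EthicalBlackBox()
ebb.log({"battery": 0.82, "laser_min_m": 0.41}, {"behaviour": "goto_waypoint"})
```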
And here are the presentation slides from TAROS:



The full paper can be downloaded from here. Comments and feedback welcome.


The full paper reference:

Winfield A.F.T., Jirotka M. (2017) The Case for an Ethical Black Box. In: Gao Y., Fallah S., Jin Y., Lekakou C. (eds) Towards Autonomous Robotic Systems. TAROS 2017. Lecture Notes in Computer Science, vol 10454. Springer, Cham.

Related blog posts:
The infrastructure of life 2 - Transparency

Friday, July 14, 2017

Three stories about Robot Stories

Here are the slides I gave yesterday morning as a member of the panel Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology, at the Centre for the Future of Intelligence 2017 conference. I presented three short stories about robot stories.




Slide 2:
The FP7 TRUCE Project invited a number of scientists - mostly within the field of Artificial Life - to suggest ideas for short stories. Those ideas were then sent to a panel of writers, each of whom chose one to develop. I submitted an idea called The feeling of what it is like to be a robot and was delighted when Lucy Caldwell contacted me. Following a visit to the lab Lucy drafted a beautiful story called The Familiar which - following some iteration - appeared in the collected volume Beta Life.

Slide 3:
More recently the EU Human Brain Project Foresight Lab brought three sci-fi writers - Allen Ashley, Jule Owen and Stephen Oram - to visit the lab. Inspired by what they saw, they then wrote three wonderful short stories, which were read at the 2016 Bristol Literature Festival. The readings were followed by a panel discussion that included me and BRL colleagues Antonia Tzemanaki and Marta Palau Franco. The three stories are published in the volume Versions of the Future. Stephen Oram went on to publish a collection called Eating Robots.

Slide 4:
My first two stories were about people telling stories about robots. Now I turn to the possibility of robots themselves telling stories. Some years ago I speculated on the idea of robots telling each other stories (directly inspired by a conversation with Richard Gregory). That idea has now turned into a current project, with the aim of building an embodied computational model of storytelling. For a full description see this paper, currently in press.

Wednesday, June 21, 2017

CogX: Emerging ethical principles, toolkits and standards for AI

Here are the slides I presented at the CogX session on Precision Ethics this afternoon. My intention with these slides was to give a 10 minute helicopter overview of emerging ethical principles, toolkits and ethical standards for AI, including Responsible Research and Innovation.