Thursday, April 18, 2019

An Updated Round Up of Ethical Principles of Robotics and AI

This blogpost is an updated round up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. I previously listed principles published before December 2017 here; this blogpost appends those principles drafted since January 2018 (plus one in October 2017 I had missed). The principles are listed here (in full or abridged) with links, notes and references but without critique.

Scroll down to the next horizontal line for the updates.

If there are any (prominent) ones I've missed please let me know.

Asimov's three laws of Robotics (1950)
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 
I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent principles have been drafted as a direct response. The three laws first appeared in Asimov's short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.

Murphy and Woods' three laws of Responsible Robotics (2009)
  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 
  2. A robot must respond to humans as appropriate for their roles. 
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. 
These were proposed in Robin Murphy and David Woods' paper Beyond Asimov: The Three Laws of Responsible Robotics [2].

EPSRC Principles of Robotics (2010)
  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 
  3. Robots are products. They should be designed using processes which assure their safety and security. 
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 
  5. The person with legal responsibility for a robot should be attributed. 
These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.

Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)

I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
An account of the development of the Asilomar principles can be found here.

The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. 
See the ACM announcement of these principles here. The principles form part of the ACM's updated code of ethics.

Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
  1. Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. 
  2. Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
  3. Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
  4. Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. 
  5. Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. 
  6. Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society. 
  7. Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. 
  8. Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI.
  9. Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.
An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).

Draft principles of The Future Society's Science, Law and Society Initiative (Oct 2017)
  1. AI should advance the well-being of humanity, its societies, and its natural environment. 
  2. AI should be transparent
  3. Manufacturers and operators of AI should be accountable
  4. AI’s effectiveness should be measurable in the real-world applications for which it is intended. 
  5. Operators of AI systems should have appropriate competencies
  6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.
This article by Nicolas Economou explains the 6 principles with a full commentary on each one.

MontrĂ©al Declaration for Responsible AI draft principles (Nov 2017)
  1. Well-being The development of AI should ultimately promote the well-being of all sentient creatures.
  2. Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
  3. Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental / physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
  4. Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
  5. Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation.
  6. Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate.
  7. Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.
The MontrĂ©al Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).

IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)
  1. How can we ensure that A/IS do not infringe human rights
  2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being
  3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable
  4. How can we ensure that A/IS are transparent
  5. How can we extend the benefits and minimize the risks of AI/AS technology being misused
These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.

A short article co-authored with IEEE general principles co-chair Mark Halverson, Why Principles Matter, explains the link between principles and standards, together with further commentary and references.

Note that these principles have been revised and extended, in March 2019 (see below).

UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
  1. Demand That AI Systems Are Transparent
  2. Equip AI Systems With an Ethical Black Box
  3. Make AI Serve People and Planet 
  4. Adopt a Human-In-Command Approach
  5. Ensure a Genderless, Unbiased AI
  6. Share the Benefits of AI Systems
  7. Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights
  8. Establish Global Governance Mechanisms
  9. Ban the Attribution of Responsibility to Robots
  10. Ban AI Arms Race
Drafted by UNI Global Union's Future World of Work, these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.


Updated principles...

Intel's recommendation for Public Policy Principles on AI (October 2017)
  1. Foster Innovation and Open Development – To better understand the impact of AI and explore the broad diversity of AI implementations, public policy should encourage investment in AI R&D. Governments should support the controlled testing of AI systems to help industry, academia, and other stakeholders improve the technology.
  2. Create New Human Employment Opportunities and Protect People’s Welfare – AI will change the way people work. Public policy in support of adding skills to the workforce and promoting employment across different sectors should enhance employment opportunities while also protecting people’s welfare.
  3. Liberate Data Responsibly – AI is powered by access to data. Machine learning algorithms improve by analyzing more data over time; data access is imperative to achieve more enhanced AI model development and training. Removing barriers to the access of data will help machine learning and deep learning reach their full potential.
  4. Rethink Privacy – Privacy approaches like The Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to “rethink” how we apply these models to new technology.
  5. Require Accountability for Ethical Design and Implementation – The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.
These principles were announced in a blog post by Naveen Rao (Intel VP AI) here.

Lords Select Committee 5 core principles to keep AI ethical (April 2018)
  1. Artificial intelligence should be developed for the common good and benefit of humanity. 
  2. Artificial intelligence should operate on principles of intelligibility and fairness
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. 
  4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. 
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
These principles appear in the UK House of Lords Select Committee on Artificial Intelligence report AI in the UK: ready, willing and able? published in April 2018. The WEF published a summary and commentary here.

AI UX: 7 Principles of Designing Good AI Products (April 2018)
  1. Differentiate AI content visually - let people know if an algorithm has generated a piece of content so they can decide for themselves whether to trust it or not.
  2. Explain how machines think - helping people understand how machines work so they can use them better
  3. Set the right expectations - set the right expectations, especially in a world full of sensational, superficial news about new AI technologies.
  4. Find and handle weird edge cases - spend more time testing and finding weird, funny, or even disturbing or unpleasant edge cases.
  5. User testing for AI products (default methods won’t work here).
  6. Provide an opportunity to give feedback.
These principles, focussed on the design of the User Interface (UI) and User Experience (UX), are from the Budapest-based company UX Studio.

The Toronto Declaration on equality and non-discrimination in machine learning systems (May 2018)

The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems does not succinctly articulate ethical principles but instead presents arguments under the following headings to address concerns "...about the capability of [machine learning] systems to facilitate intentional or inadvertent discrimination against certain individuals or groups of people".
  1. Using the framework of international human rights law: The right to equality and non-discrimination; Preventing discrimination; and Protecting the rights of all individuals and groups (promoting diversity and inclusion)
  2. Duties of states (human rights obligations): State use of machine learning systems; Promoting equality; and Holding private sector actors to account
  3. Responsibilities of private sector actors: human rights due diligence
  4. The right to an effective remedy

Google AI Principles (June 2018)
  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles. 
These principles were launched with a blog post and commentary by Google CEO Sundar Pichai here.

IBM's 5 ethical AI principles (September 2018)
  1. Accountability: AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes.
  2. Value alignment: AI should be designed to align with the norms and values of your user group in mind.
  3. Explainability: AI should be designed for humans to easily perceive, detect, and understand its decision process, and the predictions/recommendations. This is also, at times, referred to as interpretability of AI. Simply speaking, users have all rights to ask the details on the predictions made by AI models such as which features contributed to the predictions by what extent. Each of the predictions made by AI models should be able to be reviewed.
  4. Fairness: AI must be designed to minimize bias and promote inclusive representation.
  5. User data rights: AI must be designed to protect user data and preserve the user’s power over access and uses
For a full account read IBM's Everyday Ethics for Artificial Intelligence here.

Microsoft Responsible bots: 10 guidelines for developers of conversational AI (November 2018)
  1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
  2. Be transparent about the fact that you use bots as part of your product or service.
  3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.
  4. Design your bot so that it respects relevant cultural norms and guards against misuse.
  5. Ensure your bot is reliable.
  6. Ensure your bot treats people fairly.
  7. Ensure your bot respects user privacy.
  8. Ensure your bot handles data securely.
  9. Ensure your bot is accessible.
  10. Accept responsibility.
Microsoft's guidelines for the ethical design of 'bots' (chatbots or conversational AIs) are fully described here.

CEPEJ European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment, 5 principles (February 2019)
  1. Principle of respect of fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights.
  2. Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals.
  3. Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment.
  4. Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorising external audits.
  5. Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.
The Council of Europe ethical charter principles are outlined here, with a link to the ethical charter itself.

Women Leading in AI (WLinAI) 10 recommendations (February 2019)
  1. Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.
  2. Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics – to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure algorithms must be fully explained to users and open to public scrutiny.
  3. Introduce a new Certificate of Fairness for AI systems alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similarly to food labelling regulations.
  4. Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals.
  5. Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome.
  6. Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness.
  7. To compel companies and other organisations to bring their workforce with them – by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated.
  8. Where no redeployment is possible, to compel companies to make a contribution towards a digital skills fund for those employees
  9. To carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.
  10. To establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility. As part of that, we recommend the set up of a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.
These recommendations were presented by the Women Leading in AI group at a meeting in parliament in February 2019; this report in Forbes by Noel Sharkey outlines the group, their recommendations, and the meeting.

The NHS’s 10 Principles for AI + Data (February 2019)
  1. Understand users, their needs and the context
  2. Define the outcome and how the technology will contribute to it
  3. Use data that is in line with appropriate guidelines for the purpose for which it is being used
  4. Be fair, transparent and accountable about what data is being used
  5. Make use of open standards
  6. Be transparent about the limitations of the data used and algorithms deployed
  7. Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision
  8. Generate evidence of effectiveness for the intended use and value for money
  9. Make security integral to the design
  10. Define the commercial strategy
These principles are set out with full commentary and elaboration on Artificial Lawyer here.

IEEE General Principles of Ethical Autonomous and Intelligent Systems (A/IS) (March 2019)
  1. Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
  2. Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development.
  3. Data Agency: A/IS creators shall empower individuals with the ability to access and securely share their data to maintain people’s capacity to have control over their identity.
  4. Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
  5. Transparency: the basis of a particular A/IS decision should always be discoverable.
  6. Accountability: A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
  7. Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
  8. Competence: A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.
These amended and extended general principles form part of Ethically Aligned Design, first edition, published in March 2019. For an overview see the pdf here.

Ethical issues arising from the police use of live facial recognition technology (March 2019) 

The nine ethical principles relate to: public interest; effectiveness; the avoidance of bias and algorithmic injustice; impartiality and deployment; necessity; proportionality; impartiality; accountability; oversight and the construction of watchlists; public trust; and cost effectiveness.

As reported here, the UK government’s independent Biometrics and Forensics Ethics Group (BFEG) published an interim report outlining nine ethical principles that form a framework to guide policy on police use of live facial recognition systems.

Floridi and Clement-Jones' five principles key to any ethical framework for AI (March 2019)
  1. AI must be beneficial to humanity.
  2. AI must also not infringe on privacy or undermine security
  3. AI must protect and enhance our autonomy and ability to take decisions and choose between alternatives. 
  4. AI must promote prosperity and solidarity, in a fight against inequality, discrimination, and unfairness
  5. We cannot achieve all this unless we have AI systems that are understandable in terms of how they work (transparency) and explainable in terms of how and why they reach the conclusions they do (accountability).
Luciano Floridi and Lord Tim Clement-Jones set out, here in the New Statesman, these 5 general ethical principles for AI, with additional commentary.

The European Commission's High Level Expert Group on AI Ethics Guidelines for Trustworthy AI (April 2019)
  1. Human agency and oversight AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy. 
  2. Technical robustness and safety A crucial component of achieving Trustworthy AI is technical robustness, which is closely linked to the principle of prevention of harm.
  3. Privacy and Data governance Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems.
  4. Transparency This requirement is closely linked with the principle of explicability and encompasses transparency of elements relevant to an AI system: the data, the system and the business models.
  5. Diversity, non-discrimination and fairness In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle. 
  6. Societal and environmental well-being In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system’s life cycle. 
  7. Accountability The requirement of accountability complements the above requirements, and is closely linked to the principle of fairness
For more detail on each of these principles follow the links above.

Published on 8 April 2019, the EU HLEG AI ethics guidelines for trustworthy AI are detailed in full here.

Draft core principles of Australia's Ethics Framework for AI (April 2019)
  1. Generates net-benefits. The AI system must generate benefits for people that are greater than the costs.
  2. Do no harm. Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes. 
  3. Regulatory and legal compliance. The AI system must comply with all relevant international, Australian Local, State/Territory and Federal government obligations, regulations and laws.
  4. Privacy protection. Any system, including AI systems, must ensure people’s private data is protected and kept confidential plus prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm.
  5. Fairness. The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.
  6. Transparency & Explainability. People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.
  7. Contestability. When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.
  8. Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.
These draft principles are detailed in Artificial Intelligence: Australia's Ethics Framework (A Discussion Paper). This comprehensive paper includes detailed summaries of many of the frameworks and initiatives listed above, together with some very useful case studies.


References
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.), Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics, IEEE Intelligent Systems, 24(4): 14-20.
[3] Margaret Boden et al (2017): Principles of robotics: regulating robots in the real world, Connection Science, 29(2): 124-129.
[4] Tony Prescott and Michael Szollosy (eds.) (2017): Ethical Principles of Robotics, Connection Science, 29(2) and 29(3).

Thursday, February 21, 2019

First automated robot assembly

This month saw the first important milestone toward Autonomous Robot Evolution: the Bristol and York team demonstrated automated assembly of a complete working robot, from evolved and 3D printed parts. In essence we demonstrated one robot assembling another.

Our evolved robots consist of 3 elements:

* pre-designed modules which we call organs (for sensors, actuators, controllers, etc),
* an evolved and 3D printed skeleton, and
* cables (with 3.5mm jack plugs) to connect the organs and the controller.

Note that the organs are not evolved but hand designed; the rationale for this approach is outlined here.

Here are 3 basic organs:

On the left is a sensor, in the middle a controller and on the right a motor + wheel assembly.

And here are screenshots from the video showing the steps involved:

Step 1 shows the skeleton in the process of 3D printing. In step 2 the skeleton has been manually moved from the print bed onto the assembly area: note the organ and cable bank at the back of the assembly area. Step 3 shows the robot arm inserting the organs into the skeleton. Step 4 shows the robot arm connecting the cables. Step 5 shows the wheels being manually added, and in step 6 the robot is complete. Step 7 shows the assembled robot powered and running.
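
Written out as plain data, the same sequence looks something like the sketch below: a minimal Python illustration in which the step names and the automated/manual flags simply restate the steps above. It is not the code that actually drives the robot arm, and the flag on the final power-up step is an assumption.

    # A plain-data sketch of the assembly sequence described above, noting
    # which steps are currently automated and which are still done by hand.
    # Purely illustrative; this is not the code that drives the robot arm.
    from collections import namedtuple

    Step = namedtuple("Step", ["description", "automated"])

    ASSEMBLY_PIPELINE = [
        Step("3D print the evolved skeleton", True),
        Step("Move the skeleton from the print bed to the assembly area", False),  # manual, for now
        Step("Insert the organs into the skeleton (robot arm)", True),
        Step("Connect the cables between organs and controller (robot arm)", True),
        Step("Add the wheels", False),  # manual, for now
        Step("Power up and run the assembled robot", True),  # assumed; not detailed above
    ]

    if __name__ == "__main__":
        for number, step in enumerate(ASSEMBLY_PIPELINE, start=1):
            status = "automated" if step.automated else "manual (for now)"
            print(f"Step {number} [{status}]: {step.description}")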

And here is the complete video:

Our aim is, of course, to automate the whole process, and right now the team are working on two problems: (1) how to remove the 3D printed skeleton from the print bed ready for transfer to the assembly area, and (2) how best to secure the skeleton in the assembly area ready for the processes outlined above.



Related blog posts:

Autonomous Robot Evolution: from cradle to grave (July 2018)
Autonomous Robot Evolution: first challenges (Oct 2018)

Sunday, January 27, 2019

When Robots Tell Each Other Stories: The Emergence of Artificial Fiction

When I wrote about story-telling robots nearly 7 years ago I had no idea how we could actually build robots that can tell each other stories. Now I believe I do, and my paper setting out how has just been published in a new volume called Narrating Complexity. You can find a pdf online here.

The book emerged from a hugely interesting series of workshops, led by Richard Walsh and Susan Stepney, which brought together several humanities disciplines including narratology, with complexity scientists, systems biologists and a roboticist (me). It was at one of those workshops that I realised that simulation-based internal models - the focus of much of my recent work - could form the basis for story-telling.

To recap: a simulation-based internal model is a computer simulation of the robot and its environment, including other robots, running inside the robot itself. Like animals, all robots have a set of next possible actions, but unlike animals (and especially humans) robots have only a small repertoire of actions. With an internal model a robot can predict what might happen (in its immediate future) for each of those next possible actions. I call this model a consequence engine because it gives the robot a powerful way of predicting the consequences of its actions, for both itself and other robots.

So, how can we use the consequence engine to make story-telling robots?

When the robot runs its consequence engine it is asking itself a 'what if' question: 'what if I turned left?' or 'what if I just stand here?'. Some researchers have called a simulation-based internal model a 'functional imagination', and it's not a bad metaphor. Our robot 'imagines' what might happen in different circumstances. And when the robot has imagined something it has a kind of internal narrative: 'if I turn left I will likely crash into the wall'. In a way the robot is telling itself a story about something that might happen (in Dennett's conceptual Tower-of-Generate-and-Test the robot is a Popperian creature).
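
To make this concrete, here is a minimal sketch in Python of a consequence engine generating that kind of internal narrative. The 'simulation' is a trivial stand-in (a robot with a wall to its left) and all of the names are invented for illustration; this is not the code we use in the lab.

    # A toy consequence engine: for each next possible action, predict a
    # consequence by running a (very crude) stand-in simulation, then turn
    # the prediction into a sentence. Entirely illustrative.
    WALL_TO_THE_LEFT = True   # the robot's imagined world: a wall just to its left

    def simulate(action):
        """Stand-in for the internal simulation: predict what would happen."""
        if action == "turn left" and WALL_TO_THE_LEFT:
            return "I will likely crash into the wall"
        if action == "just stand here":
            return "nothing much will happen"
        return "I will move away safely"

    def narrate(action):
        """Turn a 'what if' question and its predicted outcome into a story."""
        return f"If I {action}, {simulate(action)}."

    if __name__ == "__main__":
        for action in ["turn left", "turn right", "just stand here"]:
            print(narrate(action))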

Now consider the possibility that the robot converts that internal narrative into speech, and literally speaks it out loud. With current speech synthesis technology that should be relatively easy to do. Here is a diagram showing this.

The blue box on the left is a simplified version of the consequence engine; it's the cognitive machinery that allows the robot to predict the consequences of a particular action. For an outline of how it works there's a description in the paper.

Another robot (B) is equipped with exactly the same cognitive machinery as robot A and - as shown below - robot B listens to robot A's 'story' (using speech recognition), interprets that story as an action and a consequence, and 'runs' it in its consequence engine. In effect robot B 'imagines' robot A's story. It 'imagines' turning left and crashing into the wall - even though it might not be standing near a wall to its left.

The new idea here is that the listener robot (B) converts the story it has heard into a 'what if' question, then 'runs' it in its own consequence engine. In a sense A has invited B to imagine itself in A's shoes. Although A's story is trivial compared with the stories we humans tell each other, it does, I suggest, have all the key elements. And of course A and B are not limited to fictional stories: A could - just as easily - recount something that has actually happened to it, like 'I turned right to avoid crashing into the wall'.
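
Continuing the toy sketch above (it reuses simulate() and narrate() from that sketch), the listener side might look like this: robot B parses the heard sentence back into an action and then 'imagines' it by running that action through its own consequence engine. The parsing is deliberately naive and, again, none of this is the project's real code.

    # The listener robot B: it hears a story of the form
    # "If I <action>, <consequence>." and re-imagines it by running the
    # action through its OWN consequence engine (the simulate() above).
    def listen_and_imagine(story):
        """Naive parser: recover the action, then re-run it internally."""
        action = story.split("If I ", 1)[1].split(",", 1)[0]
        imagined_consequence = simulate(action)    # B imagines itself in A's shoes
        return f"B imagines: if I {action}, {imagined_consequence}."

    if __name__ == "__main__":
        story_from_a = narrate("turn left")        # robot A speaks its story
        print(story_from_a)
        print(listen_and_imagine(story_from_a))    # robot B re-imagines it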

You may be wondering 'ok but where is the meaning? Surely B cannot really understand A's simple stories..?' Here I am going to stick my neck out and suggest that the process of re-imagining is what understanding is. Of course you and I can imagine a vast range of things, including situations that no human has ever experienced (or perhaps could ever experience); Roy Batty's famous line "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion..." comes to mind.

In contrast our robots have a profoundly limited imagination; their world (both real and imagined) contains only the objects and hazards of their immediate environment and they are capable only of imagining next possible actions and the immediate consequences of those actions. And that limited imagination does have the simple physics of collisions built in. But I contend that - within the constraints of that very limited imagination - our robots can properly be said to 'understand' each other.

But perhaps I'm getting ahead of myself, given that we haven't actually run the experiments yet.


Friday, October 26, 2018

Autonomous Robot Evolution: first challenges

Just spent an exciting two days at the first 'all hands meeting' of our new EPSRC funded project: Autonomous Robot Evolution (ARE): from cradle to grave (read here for an introduction). There are eleven of us in total: 4 postdocs (one from each partner university), 2 PhD students, 1 technician, and the four seniors (co-investigators).


Much of the meeting was spent discussing the fundamental (and tough) questions of (1) how we design the genotype, and the mapping between genotype and phenotype, and (2) how exactly we will physically create the robots. 

Let me outline where we are going with these two questions.

1. Genotype-phenotype mapping. As I explained here, most evolutionary robotics research has, to date, used a direct mapping approach, in which each parameter of a robot's genome specifies one feature of the real robot (phenotype). For the robot's controller those parameters might be the weights of the robot's artificial neural network, and for the robot's body they might each specify some physical characteristic of the body (such as the length of leg segments in the illustration here). Of course in biology the mapping is indirect; to put it very simply, the genome determines how an organism develops, rather than the organism itself. And because the expression of genes is affected by the environment in which the organism is developing, identical genotypes give rise to non-identical phenotypes (albeit very similar, as with identical twins); this is called phenotypic plasticity.

Because we are looking for both biological plausibility and phenotypic plasticity in this project, we have decided on an indirect mapping from genotype to phenotype. Exactly how this will work is still to be figured out, but I feel sure the genotype will need to be split into two parts: one for the robot's controller and the other for its body, and I rather suspect the mapping will be different for those two parts.
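
Purely as an illustration of what a two-part genome with an indirect mapping might look like, here is a toy Python sketch; the parameters, the 'growth' rule and the environmental variable are all invented, and nothing here is the project's actual encoding.

    # A toy two-part genome with an indirect, environment-sensitive mapping.
    # A direct mapping would copy genome values straight into the phenotype;
    # here the body genome instead parameterises a crude 'growth' process, so
    # the same genome can yield slightly different phenotypes in different
    # environments - a cartoon of phenotypic plasticity.
    import random

    def develop_body(body_genome, environment_factor):
        """Indirect mapping: 'grow' limb lengths from growth-rate genes."""
        lengths = []
        for growth_rate in body_genome:
            # growth is modulated by the environment, plus a little noise
            length = growth_rate * (1.0 + 0.1 * environment_factor) * random.uniform(0.95, 1.05)
            lengths.append(round(length, 3))
        return lengths

    def develop_controller(controller_genome):
        """The controller part might use a simpler, more direct mapping."""
        return {"nn_weights": list(controller_genome)}

    if __name__ == "__main__":
        genome = {
            "body": [1.2, 0.8, 1.5],          # growth rates, not lengths
            "controller": [0.1, -0.4, 0.7],   # e.g. neural network weights
        }
        print(develop_body(genome["body"], environment_factor=1.0))
        print(develop_body(genome["body"], environment_factor=0.0))  # same genome, different phenotype
        print(develop_controller(genome["controller"]))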

2. How to create the robots. In ARE we will adopt the engineering approach 'in which the process is embodied but takes place in a kind of evolution factory'. Now, in theory we could evolve every part of a robot's hardware, listed below.


But in practice this would be impossible; evolving any one of these subsystems would be a research project in its own right, and we're not attempting to re-run the whole of evolution in this project. Instead we will be designing and fabricating discrete modules for sensing, signalling, actuation and control, that we call 'organs'. So what will we actually evolve? It will be:
  • the number, type and position of sensing, signalling and actuation subsystems, and
  • the 3D shape of the robot’s physical structure or chassis.
At this point you're probably thinking: hang on a minute - if you're designing the organs then what's left to evolve? It's a fair question, but in fact evolution will still have huge freedom to choose which and how many organs and where to position them in the body. And when we bear in mind that we will be co-evolving the robot's controller then the space of all possible phenotypes is vast. Of course we may need to introduce some constraints: for instance that there must be at least one controller. But in general we want as few constraints as possible so that evolution is free to explore the phenotypic space to find the best robots, bearing in mind that we will be breeding robots to be able to operate in challenging environments.
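
To illustrate the kind of freedom (and the kind of minimal constraint) I have in mind, here is a hedged sketch of a candidate body specification and a viability check; the organ types, field names and the file name are invented for the example and are not the project's real specification.

    # A toy body specification: evolution chooses which organs, how many, and
    # where they sit in the chassis. The only hard constraint enforced here is
    # 'at least one controller'. Entirely illustrative.
    def is_viable(body_spec):
        return any(organ["type"] == "controller" for organ in body_spec["organs"])

    if __name__ == "__main__":
        candidate = {
            "chassis_shape": "evolved_mesh_0042.stl",   # placeholder filename
            "organs": [
                {"type": "controller",   "position": (0.00,  0.00, 0.02)},
                {"type": "wheel_motor",  "position": (-0.05,  0.03, 0.00)},
                {"type": "wheel_motor",  "position": (-0.05, -0.03, 0.00)},
                {"type": "light_sensor", "position": (0.06,  0.00, 0.03)},
            ],
        }
        print("viable" if is_viable(candidate) else "not viable")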

And I would argue that in specifying and designing organs we have not compromised on biological plausibility at all. Biological evolution is, after all, highly modular. Most of the organs (and systems of organs) in your body were evolved long before hominids: livers, hearts, eyes, noses, vascular systems, digestive systems, central nervous systems; all of those evolved in early vertebrates (with some repurposing along the way*). Architecturally humans have a huge amount in common with all mammals. My dog is not so different from me (and in some aspects superior: her senses of hearing and smell are much better); our key differences are in morphology and intelligence. These are the two properties that we will be exploring through co-evolution in this project.

So, in the coming few months we have some big mechanical and electronic engineering challenges in this part of the project. Here are just a few:
  • experiment with 3D printing materials and print heads,
  • specify, design and prototype the organs (including their packaging and interconnects),
  • decide on how to power the organs (i.e. a single central power organ, or a battery per organ) and figure out how to re-charge the batteries,
  • determine how to connect the organs with the controller and each other (i.e. with wires or wirelessly), and
  • work out the best way of picking and placing organs within the robot as it is 3D printed.
Challenging? For sure, but we have a wonderful team.

*See Neil Shubin's wonderful book Your Inner Fish.


Friday, September 28, 2018

Experiments in Artificial Theory of Mind

Since setting out my initial thoughts on robots with simulation-based internal models about 5 years ago - initially in the context of ethical robots - I've had a larger ambition for these models: that they might provide us with a way of building robots with artificial theory of mind - something I first suggested when I outlined the consequence engine 4 years ago.

Since then we've been busy experimentally applying our consequence engine in the lab, in a range of contexts including ethics, safety and imitation, giving me little time to think about theory of mind. But then, in January 2017, I was contacted by Antonio Chella, who invited me to submit a paper to a special issue on Consciousness in Humanoid Robots. After some hesitation on my part and encouragement on Antonio's, I realised that this was a perfect opportunity.

Of course theory of mind is not consciousness but it is for sure deeply implicated. And, as I discovered while researching the paper, the role of theory of mind in consciousness (or, indeed of consciousness in theory of mind) is both unclear and controversial. So, this paper, written in the autumn of 2017, submitted January 2018, and - after tough review and major revisions - accepted in June 2018, is my first (somewhat tentative) contribution to the machine consciousness literature.

Experiments in Artificial Theory of Mind: From Safety to Story-Telling, advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind.

Here is the abstract:
Theory of mind is the term given by philosophers and psychologists for the ability to form a predictive model of self and others. In this paper we focus on synthetic models of theory of mind. We contend firstly that such models—especially when tested experimentally—can provide useful insights into cognition, and secondly that artificial theory of mind can provide intelligent robots with powerful new capabilities, in particular social intelligence for human-robot interaction. This paper advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind. Proposed as a computational model of the simulation theory of mind, our simulation-based internal model equips a robot with an internal model of itself and its environment, including other dynamic actors, which can test (i.e., simulate) the robot’s next possible actions and hence anticipate the likely consequences of those actions both for itself and others. Although it falls far short of a full artificial theory of mind, our model does allow us to test several interesting scenarios: in some of these a robot equipped with the internal model interacts with other robots without an internal model, but acting as proxy humans; in others two robots each with a simulation-based internal model interact with each other. We outline a series of experiments which each demonstrate some aspect of artificial theory of mind.
For an outline of the work of the paper see the slides below, presented at the SPANNER workshop in York a few weeks ago.



In fact all of the experiments outlined here have been described in some detail in previous blog posts (although not in the context of artificial theory of mind):
  1. The Corridor experiment 
  2. The Pedestrian experiment
  3. The Ethical robot experiments: with e-puck robots and with NAO robots
  4. Experiments on rational imitation (the imitation of goals)
  5. Story-telling robots**
The thing that ties all of these experiments together is that they all make use of a simulation-based internal model (which we call a consequence engine), which allows our robot to model and hence predict the likely consequences of each of its next possible actions, both for itself and for the other dynamic actors it is interacting with. In some of the experiments those actors are robots acting as proxy humans, so those experiments (in particular the corridor and ethical robot experiments) are really concerned with human-robot interaction.

Theory of mind is the ability to form a predictive model of ourselves and others; it's the thing that allows us to infer the beliefs and intentions of others. Curiously there are two main theories of mind: the 'theory theory' and the 'simulation theory'. The theory theory (TT) holds that one intelligent agent’s understanding of another’s mind is based on innate or learned rules, sometimes known as folk psychology. In TT these hidden rules constitute a 'theory' because they can be used to both explain and make predictions about others’ intentions.  The simulation theory (ST) instead holds that “we use our own mental apparatus to form predictions and explanations of someone by putting ourselves in the shoes of another person and simulating them” (Michlmayr, 2002).

When we hold our simulation-based internal model up against the simulation theory of mind, the two appear to mirror each other remarkably well. If a robot has a simulation of itself inside itself then it can explain and predict the actions of both itself, and others like itself by using its simulation-based internal model to model them. Thus we have an embodied computational model of theory of mind, in short artificial theory of mind.

So, what properties of theory of mind (ToM) are demonstrated in our five experiments?

Well, the first thing to note is that not all experiments implement full ST. In the corridor, pedestrian and ethical robot experiments robots predict their own actions using the simulation-based internal model, i.e. ST, but use a much simpler TT to model the other robots; we use a simple ballistic model for those other robots (i.e. by assuming each robot will continue to move at the speed and in the direction it is currently moving). Thus I describe these experiments as ST (self) + TT (other), or just ST+TT for short. I argue that this hybrid form of artificial ToM is perfectly valid, since you and I clearly don't model strangers we are trying to avoid in a crowded corridor as anything other than people moving in a particular direction at a particular speed. We don't need to try and intuit their state of mind, only where they are going.
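
A minimal sketch of what ST (self) + TT (other) looks like in code is given below: the other robot is predicted ballistically, while the self is 'simulated' (trivially here) for each of its next possible actions, and the action giving the largest predicted separation is selected. All names and numbers are illustrative; this is not the code used in the experiments.

    # ST (self) + TT (other): predict the other robot ballistically, simulate
    # the self for each candidate action, then pick the action that keeps the
    # predicted separation largest. Toy 2D example, one time step ahead.
    import math

    def predict_other(position, velocity, dt=1.0):
        """TT for the other: assume it keeps its current speed and heading."""
        return (position[0] + velocity[0] * dt, position[1] + velocity[1] * dt)

    def simulate_self(position, action, speed=1.0, dt=1.0):
        """ST for the self: a (trivial) simulation of each candidate heading."""
        heading = {"left": math.pi / 2, "ahead": 0.0, "right": -math.pi / 2}[action]
        return (position[0] + speed * math.cos(heading) * dt,
                position[1] + speed * math.sin(heading) * dt)

    def choose_action(self_pos, other_pos, other_vel):
        other_next = predict_other(other_pos, other_vel)
        def predicted_separation(action):
            return math.dist(simulate_self(self_pos, action), other_next)
        return max(["left", "ahead", "right"], key=predicted_separation)

    if __name__ == "__main__":
        # self at the origin; the other robot approaching head-on from the right
        print(choose_action(self_pos=(0, 0), other_pos=(3, 0), other_vel=(-1, 0)))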

The rational imitation and story-telling experiments do, however, use ST for both self and other, since a simple TT will not allow an imitating robot to infer the goals of the demonstrating robot, nor is it sufficient to allow a listener robot to 'imagine' the story told by the storytelling robot.

The table below (not reproduced here) summarises these differences - giving the 'theory mode' of each experiment, i.e. ST (self) + TT or ST (other) - and highlights the different aspects of theory of mind demonstrated in each of the five experiments.

An unexpected real-world use for the approach set out in this paper is to allow robots to explain themselves. I believe explainability will be especially important for social robots, i.e. robots designed to interact with people. Let me explain by quoting two paragraphs from the paper.

A major problem with human-robot interaction is the serious asymmetry of theory of mind. Consider an elderly person and her care robot. It is likely that a reasonably sophisticated near-future care robot will have a built-in (TT) model of an elderly human (or even of a particular human). This places the robot at an advantage because the elderly person has no theory of mind at all for the robot, whereas the robot has a (likely limited) theory of mind for her. Actually the situation may be worse than this, since our elderly person may have a completely incorrect theory of mind for the robot, perhaps based on preconceptions or misunderstandings of how the robot should behave and why. Thus, when the robot actually behaves in a way that doesn’t make sense to the elderly person, her trust in the robot will be damaged and its effectiveness diminished.

The storytelling model proposed here provides us with a powerful mechanism for the robot to be able to generate explanations for its actual or possible actions. Especially important is that the robot’s user should be able to ask (or press a button to ask) the robot to explain “why did you just do that?” Or, pre-emptively, to ask the robot questions such as “what would you do if I fell down?” Assuming that the care robot is equipped with an autobiographical memory, the first of these questions would require it to re-run and narrate the most recent action sequence to be able to explain why it acted as it did, i.e., “I turned left because I didn’t want to bump into you.” The second kind of pre-emptive query requires the robot to interpret the question in such a way it can first initialize its internal model to match the situation described, run that model, then narrate the actions it predicts it would take in that situation. In this case the robot acts first as a listener, then as the narrator (see slide 18 above). In this way the robot would actively assist its human user to build a theory-of-mind for the robot.
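
As a purely illustrative sketch (not the paper's implementation), answering the first kind of query might amount to narrating the most recent entry in the robot's autobiographical memory; the memory format and wording below are invented for the example.

    # A toy autobiographical memory: each entry records the chosen action, a
    # rejected alternative and the predicted consequence that drove the choice.
    # 'Explanation' is then just narration of the most recent entry.
    autobiographical_memory = [
        {"chose": "turned left", "rejected": "carrying straight on",
         "because": "I didn't want to bump into you"},
    ]

    def explain_last_action():
        last = autobiographical_memory[-1]
        return f"I {last['chose']} rather than {last['rejected']} because {last['because']}."

    if __name__ == "__main__":
        print(explain_last_action())   # "why did you just do that?"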


**This one remains, for the time-being, a thought experiment.

Reference:

Michlmayr, M. (2002). Simulation Theory Versus Theory Theory: Theories Concerning the Ability to Read Minds. Master's thesis, Leopold-Franzens-Universität Innsbruck.

Thursday, August 30, 2018

The Pedestrian Experiment

Followers of this blog will know that I have been working for some years on simulation-based internal models - demonstrating their potential for ethical robots, safer robots and imitating robots. But pretty much all of our experiments so far have involved only one robot with a simulation-based internal model, while the other robots it interacts with have no internal model at all.

But some time ago we wondered what would happen if two robots, each with a simulation-based internal model, interacted with each other. Imagine two such robots approaching each other in the same way that two pedestrians approach each other on the sidewalk. Is it possible that these 'pedestrian' robots might, from time to time, engage in the kind of 'dance' that human pedestrians do when one steps to their left and the other to their right, only to compound the problem of avoiding a collision with a stranger? The answer, it turns out, is yes!

The idea was taken up by Mathias Schmerling at the Humboldt University of Berlin, adapting the code developed by Christian Blum for the Corridor experiment. Chen Yang, one of my master's students, has now updated Mathias' code and produced some very nice new results.

Most of the time the pedestrian robots pass each other without fuss but in something between 1 in 5 and 1 in 10 trials we do indeed see an interesting dance. Here are a couple of examples of the majority of trials, when the robots pass each other normally, showing the robots' trajectories. In each trial blue starts from the left and green from the right. Note that there is an element of randomness in the initial directions of each robot (which almost certainly explains the relative occurrence of normal and dance behaviours).


And here is a gif animation showing what's going on in a normal trial. The faint straight lines from each robot show the target directions for each next possible action modelled in each robot's simulation-based internal model (consequence engine); the various dotted lines show the predicted paths (and possible collisions) and the solid blue and green lines show which next action is actually selected following the internal modelling.
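
To give a flavour of the decision cycle each robot is running, here is a toy sketch: on every cycle each robot evaluates its next possible actions against a ballistic prediction of the other robot and picks the one that maximises the predicted separation (choosing randomly between ties). Because both robots run the same procedure at the same time, mirror-image choices can recur for a few cycles - a crude caricature of the 'dance'. None of this is the actual experimental code.

    # A toy version of the pedestrian set-up: two robots approach head-on and,
    # on every cycle, each chooses a lateral step (left, straight, right) that
    # maximises the predicted separation, assuming the other keeps its current
    # offset. Entirely illustrative.
    import random

    ACTIONS = {"left": -1, "straight": 0, "right": +1}

    def choose(my_offset, other_offset):
        def predicted_gap(action):
            return abs((my_offset + ACTIONS[action]) - other_offset)
        best = max(predicted_gap(a) for a in ACTIONS)
        return random.choice([a for a in ACTIONS if predicted_gap(a) == best])

    if __name__ == "__main__":
        a_offset, b_offset = 0, 0                    # both start on the centre line
        for cycle in range(1, 6):
            a_act = choose(a_offset, b_offset)       # both decide simultaneously,
            b_act = choose(b_offset, a_offset)       # using the other's current offset
            a_offset += ACTIONS[a_act]
            b_offset += ACTIONS[b_act]
            print(f"cycle {cycle}: A goes {a_act}, B goes {b_act}")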


Here is a beautiful example of a 'dance', again showing the robot trajectories. Note that the impasse resolves itself after a while. We're still trying to figure out exactly what mechanism enables this resolution.


And here is the gif animation of the same trial:


Notice that the impasse is not resolved until the fifth turn of each robot.

Is this the first time that pedestrians passing each other - and in particular the occasional dance that ensues - has been computationally modelled?

All of the results above were obtained in simulation (yes there really are simulations within a simulation going on here), but within the past week Chen Yang has got this experiment working with real e-puck robots. Videos will follow shortly.


Acknowledgements.

I am indebted to the brilliant experimental work of first Christian Blum (supported by Wenguo Liu), then Mathias Schmerling who adapted Christian's code for this experiment, and now Chen Yang who has developed the code further and obtained these results.

Saturday, July 07, 2018

Autonomous Robot Evolution: from cradle to grave

A few weeks ago we had the kick-off meeting, in York, of our new 4 year EPSRC funded project Autonomous Robot Evolution (ARE): cradle to grave. We - Andy Tyrrell and Jon Timmis (York), Emma Hart (Edinburgh Napier), Gusti Eiben (Free University of Amsterdam) and myself - are all super excited. We've been trying to win support for this project for five years or so, and only now succeeded. This is a project that we've been thinking, and writing about, for a long time - so to have the opportunity to try out our ideas for real is wonderful.

In ARE we aim to investigate the artificial evolution of robots for unknown or extreme environments. In a radical new approach we will co-evolve robot bodies and brains in real-time and real-space. Using techniques from 3D printing, new robot designs will literally be printed, before being trained in a nursery, then fitness tested in a target environment (a mock nuclear plant). The genomes of the fittest robots will then be combined to create the next generation of ‘child' robots, so that – over successive generations – we will breed new robot designs in a process that mirrors the way farmers have artificially selected new varieties of plants and animals for thousands of years. Because evolving real robots is slow and resource hungry we will run a parallel process of simulated evolution in a virtual environment, in which the real world environment is used to calibrate the virtual world, and reduce the reality gap*. A hybrid real-virtual process under the control of an ecosystem manager will allow real and virtual robots to mate, and the child robots to be printed and tested in either the virtual or real environments.
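
For readers who like to see the loop written down, here is a very rough sketch of the hybrid real/virtual cycle described above; every function is a stub standing in for an entire subsystem (birth clinic, nursery, test environment, ecosystem manager) and nothing here is real project code.

    # A cartoon of the hybrid real/virtual evolutionary loop. Every stub below
    # stands in for an entire subsystem; none of this is real project code.
    import random

    def build_train_and_test(genome):
        """Stand-in for: print and assemble (real) or instantiate (virtual),
        train in the nursery, then fitness-test in the target environment."""
        return random.random()   # dummy fitness

    def select_and_mate(scored, size):
        """Stand-in for the ecosystem manager: keep the fitter half and 'mate'
        genomes by mixing parameters; real and virtual robots may mate."""
        scored.sort(key=lambda pair: pair[0], reverse=True)
        parents = [genome for _, genome in scored[: max(2, size // 2)]]
        children = []
        while len(children) < size:
            mum, dad = random.sample(parents, 2)
            children.append({
                "params": [(m + d) / 2 for m, d in zip(mum["params"], dad["params"])],
                "world": random.choice(["real", "virtual"]),
            })
        return children

    if __name__ == "__main__":
        population = [{"params": [random.random() for _ in range(4)],
                       "world": random.choice(["real", "virtual"])} for _ in range(6)]
        for generation in range(3):
            scored = [(build_train_and_test(g), g) for g in population]
            print(f"generation {generation}: best fitness {max(f for f, _ in scored):.2f}")
            population = select_and_mate(scored, size=len(population))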

The project will be divided into five work packages, each led by a different partner: WP1 Evolution (York), WP2 Physical Environment (UWE), WP3 Virtual Environment (York), WP4 Ecosystem Manager (Napier) and WP5 Integration and Demonstration (UWE).

Here in the Bristol Robotics Lab we will focus on work packages 2 and 5. The goal of WP2 is the development of a purpose-designed 3D printing system – which we call a birth clinic – capable of printing small mobile robots according to a specification determined by a genome designed in WP1. The birth clinic will need to pick and place a number of pre-designed and fabricated electronics, sensing and actuation modules (the robot’s ‘organs’) into the printing work area, which will be over-printed with hot plastic to form the complete robot. The goal of WP5 will be to integrate all components, including the real-world birth clinic, nursery, and mock nuclear environment, with the virtual environment (WP3) and the ecosystem manager (WP4) into a working demonstrator, and to undertake evaluation and analysis.

Here is an impression of what the birth clinic might look like:

One of the most interesting aspects of the project is that we have no idea what the robots we breed will look like. The evolutionary process could come up with almost any body shape and structure (morphology). The same process will also determine which and how many organs (sensors, actuators, etc) are selected, and their positions and orientation within the body. Our evolved robot bodies could be very surprising indeed.

And who knows - maybe we can take a step towards Walterian Creatures?


*Anyone who uses simulation as a tool to develop robots is well aware that robots which appear to work perfectly well in a simulated virtual world often don't work very well at all when the same design is tested on a real robot. This problem is especially acute when we are artificially evolving those robots. The reason is that the simulation's model of the real world, and of the robot(s) in it, is an approximation. The Reality Gap refers to this less-than-perfect fidelity of the simulation; a better (higher fidelity) simulator would reduce the reality gap.

Related materials

Article in de Volkskrant (in Dutch) De robotevolutie kan beginnen. Hoe? Moeder Natuur vervangen door virtuele kraamkamer (The robot evolution can begin. How? Replacing Mother Nature with virtual nursery), May 2018.

Eiben and Smith (2015) From evolutionary computing to the evolution of things, Nature.

Winfield and Timmis (2015) Evolvable Robot Hardware, in Evolvable Hardware, Springer.

Eiben et al. (2013) The Triangle of Life, European Conference on Artificial Life (ECAL 2013).

Sunday, June 17, 2018

What is Artificial Intelligence? (Or, can machines think?)

Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils.



I start the keynote with Alan Turing's famous question: Can a Machine Think? and explain that thinking is not just the conscious reflection of Rodin's Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI 60 years ago we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact it has turned out to be the other way round: we've had computers that can expertly play chess for over 20 years, but we can't yet build a robot that could go into your kitchen and make you a cup of tea (see also the Wozniak coffee test).

In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, on a linear scale of intelligence from not very intelligent to human intelligence. I ask where a robot vacuum cleaner would be on this scale and propose that such a robot is about as smart as an E. coli (a single-celled organism). I then illustrate the difficulty of placing the Actroid robot on this scale because, although it may look convincingly human (from a distance), in reality the robot is not very much smarter than a washing machine (and I hint that this is an ethical problem).

In slide 7 I show how apparently intelligent behaviour doesn't require a brain, with the Solarbot. This robot is an example of a Braitenberg machine. It has two solar panels (which look a bit like wings) acting as both sensors and power sources; the left-hand panel is connected to the right-hand wheel and vice versa. These direct connections mean that Solarbot can move towards the light and even navigate its way through obstacles, thus showing that intelligent behaviour is an emergent property of the interactions between body and environment.
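For readers who like to see the idea in code, here is a toy sketch of that cross-wired sensor-to-wheel scheme (my own illustration, not Solarbot's actual circuitry):

```python
# A toy illustration of the cross-wired Braitenberg scheme described above:
# the left light sensor drives the right wheel and vice versa, so the robot
# steers towards the brighter side. My own sketch, not Solarbot's circuit.

def wheel_speeds(left_light: float, right_light: float, gain: float = 1.0):
    """Return (left_wheel, right_wheel) speeds from two light readings."""
    right_wheel = gain * left_light    # left sensor -> right wheel
    left_wheel = gain * right_light    # right sensor -> left wheel
    return left_wheel, right_wheel

# Light source off to the left: the left sensor reads more, the right wheel
# spins faster, and the robot turns left, towards the light.
print(wheel_speeds(left_light=0.8, right_light=0.3))   # (0.3, 0.8)
```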

In slide 8 I ask the question: What is the most advanced AI in the world today? (A question I am often asked.) Is it, for example, David Hanson's robot Sophia (which some press reports have claimed is the world's most advanced)? I argue it is not, since it is a chatbot AI - with a limited conversational repertoire - with a physical body (imagine Alexa with a humanoid head). Is it the DeepMind AI AlphaGo, which famously beat the world's best Go player in 2016? Although very impressive, I again argue no, since AlphaGo cannot do anything other than play Go. Instead I suggest that everyday Google search might well be the world's most advanced AI (on this I agree with my friend Joanna Bryson). Google is in effect a librarian able to find a book from an immense library for you - on the basis of your ill-formed query - more or less instantly! (And this librarian is polylingual too.)

In slide 9 I make the point that intelligence is not one thing that animals, robots and AIs have more or less of (in other words, the linear scale shown in slides 5 and 6 is wrong). Then in slides 10-13 I propose four distinct categories of intelligence: morphological, swarm, individual and social intelligence. I suggest in slides 14-16 that if we express these as four axes of a graph then we can (very approximately) compare the intelligence of different organisms, including humans. In slide 17 I show some robots and argue that this graph shows why robots are so unintelligent: it is because robots generally have only two of the four kinds of intelligence, whereas animals typically have three or sometimes all four. A detailed account of these ideas can be found in my paper How intelligent is your intelligent robot?
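To illustrate the four-axis idea, here is a small sketch that plots invented scores on the four kinds of intelligence as a radar chart; the numbers are made up purely for illustration and are not the comparisons used in the talk or paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: invented 0-10 scores on the four kinds of intelligence,
# plotted on four axes to show the idea of comparing organisms and robots
# on several dimensions rather than one linear scale.

axes_labels = ["morphological", "swarm", "individual", "social"]
scores = {
    "human":        [6, 2, 10, 10],
    "ant":          [5, 9, 2, 2],
    "robot vacuum": [3, 0, 1, 0],
}

angles = np.linspace(0, 2 * np.pi, len(axes_labels), endpoint=False).tolist()
angles += angles[:1]                      # close the polygon

fig, ax = plt.subplots(subplot_kw={"polar": True})
for name, vals in scores.items():
    closed = vals + vals[:1]
    ax.plot(angles, closed, label=name)
    ax.fill(angles, closed, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(axes_labels)
ax.legend(loc="upper right")
plt.show()
```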

In the next segment, slides 18-20, I ask: how do we make Artificial General Intelligence (AGI)? I suggest that the key difference between current narrow AI and AGI is the ability - which comes very naturally to humans - to generalise knowledge learned in one context to a completely different context. This, I think, is the basis of human creativity. Using Data from Star Trek: The Next Generation as a science fiction example of the kind of human-equivalent AGI we might be aiming for, I explain that there are three approaches to getting there: by design, using artificial evolution, or by reverse engineering animals. I offer the opinion that the gap between where we are now and Data-like AGI is about the same as the gap between current spacecraft engine technology and warp drive technology. In other words, not any time soon.

In the fourth segment of the talk (slides 21-24) I give a very brief account of evolutionary robotics - a method for breeding robots in much the same way farmers have artificially selected new varieties of plants and animals for thousands of years. I illustrate this with the wonderful Golem project which, for the first time, evolved simple creatures and then 3D printed the most successful ones. I then introduce our new four-year EPSRC-funded project Autonomous Robot Evolution: from cradle to grave. In a radical new approach we aim to co-evolve robot bodies and brains in real-time and real-space. Using techniques from 3D printing, new robot designs will literally be printed, before being trained in a nursery, then fitness tested in a target environment. With this approach we hope to be able to evolve robots for extreme environments. However, because the energy costs are so high, I do not think evolution is a route to truly thinking machines.

In the final segment (slides 25-35) I return to the approach of trying to design rather than evolve thinking machines. I introduce the idea of embedding a simulation of a robot in that robot, so that it has the ability to internally model itself. The first example I give is the amazing anthropomimetic robot invented by my old friend Owen Holland, called ECCEROBOT. ECCEROBOT is able to learn how to control its own very complicated and hard-to-control body by trying out possible movement sequences in its internal model (Owen calls this a 'functional imagination'). I then outline our own work using the same principle - a simulation-based internal model - to demonstrate simple ethical behaviours, first with e-puck robots, then with NAO robots. These experiments are described in detail here and here. I suggest that these robots - with their ability to model and predict the consequences of their own and others' actions, in other words to anticipate the future - may represent the first small steps toward thinking machines.


Related blog posts:
60 years of asking can robots think?
How intelligent are intelligent robots?
Robot bodies and how to evolve them

Wednesday, May 30, 2018

Simulation-based internal models for safer robots

Readers of this blog will know that I've become very excited by the potential of robots with simulation-based internal models in recent years. So far we've demonstrated their potential in simple ethical robots and as the basis for rational imitation. Our most recent publication instead examines the potential of robots with simulation-based internal models for safety. Of course it's not hard to see why the ability to model and predict the consequences of both your own and others' actions can help you to navigate the world more safely than without that ability.

Our paper Simulation-Based Internal Models for Safer Robots demonstrates the value of anticipation in what we call the corridor experiment. Here a smart robot (equipped with a simulation-based internal model, which we call a consequence engine) must navigate to the end of a corridor while maintaining a safe space around itself at all times, despite five other robots moving randomly in the corridor - in much the same way you and I might have to navigate down a busy office corridor while others are coming in the opposite direction.

Here is the abstract from our paper:
In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.
So, does it work? Thanks to some brilliant experimental work by Christian Blum the answer is a resounding yes. The best way to understand what's going on is with the wonderful gif animation of one experimental run below. The smart robot (blue) starts at the left and has the goal of safely reaching the right hand end of the corridor – its actual path is also shown in blue. Meanwhile 5 (red) robots are moving randomly (including bouncing off walls) and their actual paths are also shown in red; these robots are equipped only with simple obstacle avoidance behaviours. The larger blue circle shows blue's 'attention radius' – to reduce computational effort blue only models the red robots within this radius. The yellow paths in front of the red robots within blue's attention radius show blue's predictions of how those robots will move (taking into account collisions with the corridor walls, with blue and with each other). The light blue projection in front of blue shows which of the 34 internally modelled next possible actions is actually chosen as blue's next action (which, as you will see, sometimes includes standing still).
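To give a flavour of what a consequence-engine step involves, here is a much simplified sketch of the idea: internally simulate each candidate action over a short horizon, predict the motion of the red robots within the attention radius, and choose the action that makes progress towards the goal without any predicted intrusion into the safety radius. This is my own illustrative simplification, not the controller code used in the paper.

```python
import math

# A much simplified sketch of one consequence-engine step: for each candidate
# action, internally simulate a short horizon, predict the other robots within
# the attention radius, and pick the action that makes progress towards the
# goal without any predicted intrusion into the safety radius. Illustration
# only - not the controller used in the paper.

def predict(pos, vel, steps, dt=0.1):
    """Naive constant-velocity prediction of future positions."""
    return [(pos[0] + vel[0] * dt * t, pos[1] + vel[1] * dt * t)
            for t in range(1, steps + 1)]

def choose_action(my_pos, goal, candidate_actions, others,
                  safety_radius=0.3, attention_radius=1.5, horizon=10, dt=0.1):
    nearby = [o for o in others if math.dist(my_pos, o["pos"]) < attention_radius]
    best_action, best_score = None, -math.inf
    for vx, vy in candidate_actions:                  # may include standing still
        my_path = predict(my_pos, (vx, vy), horizon, dt)
        min_dist = min((math.dist(p, q)
                        for o in nearby
                        for p, q in zip(my_path,
                                        predict(o["pos"], o["vel"], horizon, dt))),
                       default=math.inf)
        safe = min_dist > safety_radius
        progress = math.dist(my_pos, goal) - math.dist(my_path[-1], goal)
        score = progress if safe else -1.0            # unsafe actions always lose
        if score > best_score:
            best_action, best_score = (vx, vy), score
    return best_action

# Toy example: four candidate actions, one red robot approaching from the right.
print(choose_action(my_pos=(0.0, 0.0), goal=(5.0, 0.0),
                    candidate_actions=[(0.0, 0.0), (0.2, 0.0), (0.2, 0.1), (0.2, -0.1)],
                    others=[{"pos": (1.0, 0.1), "vel": (-0.1, 0.0)}]))
```

Note that if every moving action were predicted to be unsafe, standing still (with zero progress) would win, which is exactly the behaviour you can see in the animation.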


What do the results show us? Christian ran lots of trials – 88 simulations and 54 real robot experiments – over four experiments: (1) the baseline in simulation – in which the blue robot has only a simple reactive collision avoidance behaviour, (2) the baseline with real robots, (3) using the consequence engine (CE) in the blue robot in simulation, and (4) using the consequence engine in the blue robot with real robots. In the results below (a) shows the time taken for the blue robot to reach the end of the corridor, (b) shows the distance that the blue robot covers while reaching the end of the corridor, (c) shows the “danger ratio” experienced by the blue robot, and (d) shows the number of consequence engine runs per timestep in the blue robot. The danger ratio is the percentage of the run time that another robot is within the blue robot’s safety radius.


For a relatively small cost in additional run time and distance covered, panels (a) and (b), the danger ratio is very significantly reduced from a mean value of ~20% to a mean value of zero, panel (c). Of course there is a computational cost, and this is reflected in panel (d); the baseline experiment has no consequence engine and hence runs no simulations, whereas the smart robot runs an average of between 8 and 10 simulations per time-step. This is exactly what we would expect: predicting the future clearly incurs a computational overhead.
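For clarity, the danger ratio is simply the fraction of time-steps at which any other robot is inside the blue robot's safety radius, expressed as a percentage. Here is a trivial sketch of how it could be computed from logged positions (my illustration, not the analysis code used in the paper):

```python
import math

# My sketch of how a danger ratio could be computed from logged positions:
# the percentage of time-steps at which any other robot is inside the blue
# robot's safety radius. Not the analysis code used in the paper.

def danger_ratio(blue_log, others_log, safety_radius=0.3):
    """blue_log: list of (x, y); others_log: per time-step list of other robots' (x, y)."""
    unsafe_steps = sum(
        any(math.dist(blue, other) < safety_radius for other in others)
        for blue, others in zip(blue_log, others_log)
    )
    return 100.0 * unsafe_steps / len(blue_log)

# Tiny example: one intrusion in three time-steps gives a danger ratio of ~33%.
blue_log = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
others_log = [[(1.0, 0.0)], [(0.25, 0.0)], [(1.0, 0.0)]]
print(danger_ratio(blue_log, others_log))
```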


Full paper reference:
Blum C, Winfield AFT and Hafner VV (2018) Simulation-Based Internal Models for Safer Robots. Front. Robot. AI 4:74. doi: 10.3389/frobt.2017.00074

Acknowledgements:
I am indebted to Christian Blum, who programmed the robots, set up the experiment and obtained the results outlined here. Christian lead-authored the paper, which was also co-authored by my friend and research collaborator Verena Hafner, who was Christian's PhD advisor.