Thursday, August 20, 2020

"Why Did You Just Do That?" Explainability and Artificial Theory of Mind for Social Robots

This week I have been attending (virtually) the excellent RoboPhilosophy conference, and this morning I gave a plenary talk, "Why did you just do that?" Here is the abstract:
An important aspect of transparency is enabling a user to understand what a robot might do in different circumstances. An elderly person might be very unsure about robots, so it is important that her assisted living robot is helpful, predictable – never does anything that puzzles or frightens her – and above all safe. It should be easy for her to learn what the robot does and why, in different circumstances, so that she can build a mental model of her robot. An intuitive approach would be for the robot to be able to explain itself, in natural language, in response to spoken requests such as “Robot, why did you just do that?” or “Robot, what would you do if I fell down?” In this talk I will outline current work, within project RoboTIPS, to apply recent research on artificial theory of mind to the challenge of providing social robots with the ability to explain themselves. 
And here are the slides:


Here are links to the movies:


And here are the papers referenced in the talk, with links:
  1. Jobin, A., Ienca, M. and Vayena, E. (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 389–399.
  2. Winfield, A. F. (2019) Ethical standards in robotics and AI. Nature Electronics 2, 46–48. Pre-print here.
  3. Winfield, A. F. (2018) Experiments in Artificial Theory of Mind: from safety to story telling. Frontiers in Robotics and AI 5:75.
  4. Blum, C., Winfield, A. F. and Hafner, V. V. (2018) Simulation-based internal models for safer robots. Frontiers in Robotics and AI 4(74), pp. 1–17.
  5. Vanderelst, D. and Winfield, A. F. (2018) An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research 48, pp. 56–66.
  6. Winfield, A. F. (2018) When Robots Tell Each Other Stories: The Emergence of Artificial Fiction. In: Walsh, R. and Stepney, S. (eds) Narrating Complexity. Springer, Cham. Preprint here.
  7. Winfield, A. F. and Jirotka, M. (2017) The case for an ethical black box. In: Gao, Y. et al. (eds) Towards Autonomous Robotic Systems. LNCS 10454, pp. 262–273, Springer. Preprint here.
  8. Winfield, A. F., Winkle, K., Webb, H., Lyngs, U., Jirotka, M. and Macrae, C., Robot Accident Investigation: a case study in Responsible Robotics, chapter submitted to RoboSoft.
and mentioned in the Q&A:
  1. Winfield, A. F., Michael, K., Pitt, J. and Evers, V. (2019) Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]. Proceedings of the IEEE 107(3), pp. 509–517.
  2. Vanderelst, D. and Winfield, A. F. (2018) The Dark Side of Ethical Robots. AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 317–322.
