Friday, June 03, 2016

Engineering Moral Agents

This has been an intense but exciting week. I've been at Schloss Dagstuhl for a seminar called Engineering Moral Agents - from Human Morality to Artificial Morality. A Dagstuhl seminar is a kind of science retreat in rural south-west Germany. The idea is to bring together a group of people from across several disciplines to work together and intensively focus on a particular problem - in our case, the challenge of engineering ethical robots.

We had a wonderful group of scholars: computer scientists, moral, political and economic philosophers, logicians, engineers, a psychologist and a philosophical anthropologist. Our group included several pioneers of machine ethics, among them Susan and Michael Anderson, and James Moor.

Our motivation was as follows:
Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in decisions that affect our lives. Humanity has developed formal legal and informal moral and societal norms to govern its own social interactions. There exist no similar regulatory structures that can be applied by non-human agents. Artificial morality, also called machine ethics, is an emerging discipline within artificial intelligence concerned with the problem of designing artificial agents that behave as moral agents, i.e., adhere to moral, legal, and social norms. 
Most work in artificial morality, to date, has been exploratory and speculative. The hard research questions in artificial morality are yet to be identified. Some of those questions are: How to formalize, "quantify", qualify, validate, verify and modify the "ethics" of moral machines? How to build regulatory structures that address (un)ethical machine behavior? What are the wider societal, legal, and economic implications of introducing these machines into our society?
We were especially keen to bridge the computer science/humanities/social-science divide in the study of artificial morality and, in so doing, address the central question of how to describe and formalise ethical rules such that they can be (1) embedded into autonomous systems, (2) understood by users and other stakeholders such as regulators, lawyers or society at large, and (3) verified and certified as correct.

We made great progress toward these aims. Of course we will need some time to collate and write up our findings, and some of those findings identify hard research questions which will, in turn, need to be the subject of further work. But we departed Dagstuhl with a strong sense of having moved a little closer to engineering artificial morality.