Wednesday, February 15, 2017

Thoughts on the EU's draft report on robotics

A few weeks ago I was asked to write a short op-ed on the European Parliament Law Committee's recommendations on civil law rules for robotics.

In the end the piece didn't get published, so I am posting it here.

It is a great shame that most reports of the European Parliament Committee on Legal Affairs’ vote last week, on its Draft Report on Civil Law Rules on Robotics, headlined on ‘personhood’ for robots, because the report has much else to commend it. Most important among its several recommendations is a proposed code of ethical conduct for roboticists, which explicitly asks designers to research and innovate responsibly. Some may wonder why such an invitation even needs to be made, but given that engineering and computer science education rarely includes classes on ethics (it should), it is really important that robotics engineers reflect on their ethical responsibilities to society – especially since robot technologies are so disruptive. This is not new – great frameworks for responsible research and innovation (RRI) already exist; one such is the 2014 Rome Declaration on RRI, and in 2015 the Foundation for Responsible Robotics was launched.

Within the report’s draft Code of Conduct is a call for robotics funding proposals to include a risk assessment. This too is a very good idea, and guidance already exists in British Standard BS 8611, published in April 2016, which sets out a comprehensive set of ethical risks and offers guidance on how to mitigate them. It is also very good to see that the Code stresses that humans, not robots, are the responsible agents; this is something we regarded as fundamental when we drafted the Principles of Robotics in 2010.

For me, transparency (or the lack of it) is an increasing worry in both robots and AI systems. Labour’s industry spokesperson Chi Onwurah is right to say, “Algorithms are part of our world, so they are subject to regulation, but because they are not transparent, it’s difficult to regulate them effectively” (and don’t forget that it is algorithms that make intelligent robots intelligent). So it is very good to see the draft Code call for robotics engineers to “guarantee transparency … and right of access to information by all stakeholders”. Even more welcome is the draft ‘Licence for Designers’, which asks not only that designers ensure “maximal transparency” but also that “you should develop tracing tools that … facilitate accounting and explanation of robotic behaviour … for experts, operators and users”. Within the IEEE Standards Association Global Initiative on Ethics in AI and Autonomous Systems, launched in 2016, we are working on a new standard on Transparency in Autonomous Systems.
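To make the idea of “tracing tools” a little more concrete, here is a minimal sketch of what such a logger might look like – my own illustration, not anything specified in the draft report or the IEEE initiative, and every name and field in it is hypothetical. The idea is simply to record, each control cycle, what the robot sensed, what it decided and why, and what it then did, in a form that can later support explanation of its behaviour to experts, operators and users:

    import json
    import time

    class RobotTraceLogger:
        """Illustrative 'tracing tool': a flight-data-recorder-style log
        of sense/decide/act records, written as one JSON object per line
        so the trace survives a crash mid-run. Names are hypothetical."""

        def __init__(self, path):
            self.path = path

        def log(self, sensed, decided, acted):
            # One timestamped record per decision cycle.
            record = {
                "t": time.time(),
                "sensed": sensed,    # raw or summarised sensor inputs
                "decided": decided,  # which rule or model output was chosen, and why
                "acted": acted,      # the actuator command actually issued
            }
            with open(self.path, "a") as f:
                f.write(json.dumps(record) + "\n")

    # Example use inside a robot's control loop:
    logger = RobotTraceLogger("trace.jsonl")
    logger.log(
        sensed={"obstacle_distance_m": 0.4},
        decided={"rule": "avoid_obstacle", "reason": "distance below 0.5 m threshold"},
        acted={"cmd": "turn_left", "speed": 0.2},
    )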

This brings me to standards and regulation. I am absolutely convinced that regulation, together with transparency and public engagement, builds public trust. Why is it that we trust our tech? Not just because it’s cool and convenient, but also because it’s safe (and we assume that the disgracefully maligned experts will take care of assuring that safety). One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. Commercial aircraft are so safe not only because of good design, but also because of tough safety certification processes and, when things do go wrong, robust processes of air accident investigation. So the Report’s call for a European Agency for Robotics and AI, to recommend standards and a regulatory framework, is, as far as I’m concerned, not a moment too soon. We urgently need standards for the safety certification of a wide range of robots, from drones and driverless cars to robots for care and assisted living.

Like many of my robotics colleagues, I am deeply worried by the potential for robotics and AI to increase levels of economic inequality in the world. Winnie Byanyima, executive director of Oxfam, writes for the WEF: “We need fundamental change to our economic model. Governments must stop hiding behind ideas of market forces and technological change. They … need to steer the direction of technological development”. I think she is right – we need a serious public conversation about technological unemployment and about how we ensure that the wealth created by AI and Autonomous Systems is shared by all. A Universal Basic Income may or may not be the best way to do this, but it is very encouraging to see the question raised in the draft Report.

I cannot close this piece without at least mentioning artificial personhood. My own view is that personhood is the solution to a problem that doesn’t exist. I can understand why, in the context of liability, the Report raises the question for discussion, but – as the report itself asserts in the Code of Conduct – humans, not robots, are the responsible agents. Robots are, and should remain, artefacts.

1 comment:

  1. I couldn't agree more, Alan; you are totally right. I quite often use Baroness Onora O'Neill's quote in this context; she said:

    “‘How can we restore trust?’ is on everyone’s lips. The answer is pretty obvious. First: be trustworthy. Second: provide others with good evidence that you are trustworthy.”

    This should be the focus of governance and stakeholder involvement, as you rightly say. I was disappointed that the Royal Society's very recent Machine Learning report, in responding to public concerns and worries about a backlash against the technology, focused on better communication so that we would all understand the technology better, rather than on building trustworthiness so that our concerns were addressed. It happens every time, with every technology, and is very, very annoying!

    Good luck with your work.
