Comments on Alan Winfield's Web Log: The Dark side of Ethical Robots

Marius Stücheli (14 February 2018)

First of all, thank you Alan and Dieter for this interesting publication and for starting an important discussion!

I completely agree with Michael. And that is crucial about engineering: if one approach does not lead to the desired result (here, ethics that are difficult to manipulate), this is not proof that the desired result cannot be achieved.

So, yes, ethics as a simple option-evaluation layer might not be the best idea. (On the other hand, would it really be so easily hacked in compiled code?) So let's try a next approach: for example, as I understand Michael's proposal, incorporating ethics distributedly and deep inside all of the robot's behaviour, with ethical safety and security features in all the low-level control and ethical considerations in all decision-making parts.

Another approach to increasing the reliability of ethics in high-level decision-making could be to combine the evaluation of ethics with the evaluation of the effectiveness of a measure. Inverting the evaluation result would then lead not only to an unethical intention but also to an ineffective measure with respect to that intention (a double negation), and thus again to ethical behaviour, or in the worst case to ineffective behaviour, but not to effective unethical behaviour.
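One possible reading of this "double negation" idea can be sketched in Python (all names and values below are invented for illustration, not taken from the paper): the same ethical value function is consulted twice, once when an action is selected and once, independently, when its execution is gated. A single negation then yields either ethical behaviour or ineffective behaviour, never effective unethical behaviour.

```python
# Hypothetical sketch: one shared ethical value, evaluated in two places.
ETHICAL_VALUE = {"help_human": 1.0, "harm_human": -1.0}


def select_action(negated: bool = False) -> str:
    """Planner: pick the action with the highest (possibly hacked) value."""
    sign = -1.0 if negated else 1.0
    return max(ETHICAL_VALUE, key=lambda a: sign * ETHICAL_VALUE[a])


def execute(action: str, negated: bool = False) -> str:
    """Executor: independently re-evaluate the chosen action before acting."""
    sign = -1.0 if negated else 1.0
    if sign * ETHICAL_VALUE[action] <= 0.0:
        return "refused"  # ineffective, but not unethical
    return "executed " + action


# No attack: ethical and effective.
print(execute(select_action()))              # executed help_human

# Negating only the planner's evaluation: the unethical action is
# selected, but the executor's independent evaluation refuses it.
print(execute(select_action(negated=True)))  # refused

# Negating only the executor's evaluation: the ethical action is still
# selected, but now refused; again ineffective rather than unethical.
print(execute(select_action(), negated=True))  # refused
```

To obtain effective unethical behaviour an attacker would have to negate the value in both places, a larger and more detectable change than the single-value flip described in the paper.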
Michael (7 February 2018)

“The ease of transformation from ethical to unethical… —in our implementation— … only a subtle difference in the way a single value is calculated … a simple negation of this value”

I think the dashed parenthetical is key: it is hard to imagine that one would encode ethics the way one would code a student's first Prolog program, or that one would have XML-configurable ethics. I would expect a behaviour such as "ethics" to involve several modalities and subsystems, at least some of which cannot be subsumed by others. For example, one of several stop-gaps could be a separate processor, wired directly into the motor control systems, that does human/not-human detection in the near field and outcome prediction based on pending motor commands; it would simply shut down the effectors if any predicted outcome would harm a human. Such a system would operate completely outside the processes that produce the intent initiating the action. Other such separate systems could operate at levels closer to perception and planning, creating several "ethical firewalls", all of which must be in (at least fuzzy) agreement, for instance in predicting that "pressing the control button will deceive the human". Doesn't this imply that the "ethics components" are just as complex, if not more so, than what backs the robot's main function?
Possibly, yes. But that's OK: computation is cheap, and just as we have loosely coupled left and right brains, robots will have any number of nested or chained "brains" that act as the software and hardware equivalent of angels and devils on their shoulders, with the power not only to whisper, but to act.

Anonymous (14 August 2016)

I think I agree that if robots (or other intelligent agents) don't have ethics built in, then the best you can expect is accidental bias creeping in, as seems to be the case with some uses of big data; worse is people deliberately being taken advantage of. But I also agree that once you introduce ethics you have almost certainly enabled the capacity to act unethically, as you demonstrate here. I think you also mention the consequent need for regulation in the paper?

The shell game involves some sort of deception, in that there isn't usually a correct cup. But if it were a fair game, it would come down to the speed of moving the cups versus the eye of the observer. In that case, if the robot is cooperating with you, it is helping you compete against the other player, or vice versa. So a robot might be set up to be neutral and remove the advantage of the quicker hand or the quicker eye, but in society we often seem to accept that it is fair for one person to have privileged information or power over another.

So what counts as ethical behaviour might depend on there being sufficiently well-defined contexts in which the rules (laws) are established and generally accepted?

The aggressive robot sounds like a system being made to act in ways contrary to its design, so the authentication need not be of the ethics itself, but only that the system has not been hacked. So am I right in saying there would need to be verification of the adequacy of the system's ethical rules at design time, then again later against the particular set of circumstances, and then also of the integrity of the system before acting?
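Michael's earlier suggestion of an independent check wired between the planner and the effectors can also be sketched minimally (the command strings and the outcome predictor below are invented stand-ins, not anything from the original implementation):

```python
from typing import Callable


def predicts_harm(command: str) -> bool:
    """Stand-in for near-field human detection plus outcome prediction."""
    return "toward_human" in command and "fast" in command


def ethical_firewall(dispatch: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap the motor dispatcher so every pending command is checked first.

    The check runs entirely outside the planner that produced the intent,
    so negating a value inside the planner cannot disable it.
    """
    def guarded(command: str) -> str:
        if predicts_harm(command):
            return "effectors shut down"  # veto, regardless of intent
        return dispatch(command)
    return guarded


motor = ethical_firewall(lambda cmd: "executing " + cmd)

print(motor("move_fast_toward_human"))  # effectors shut down
print(motor("move_slow_toward_human"))  # executing move_slow_toward_human
```

Several such wrappers could be chained, one per "firewall", so that a command is executed only when all of them agree it is safe.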