Comments on Alan Winfield's Web Log: "Your robot doggie could really be pleased to see you"

Alan Winfield (30 August 2014, 15:58)

Thanks Paul for your comments. I think you raise some interesting and difficult areas - you're not the first to suggest that robots, in some respects, are like severely disabled animals. Some, including the philosopher Thomas Metzinger, argue persuasively that building conscious machines is unethical because (and I paraphrase) you are creating an entity that is likely to experience phenomenal suffering. This is a difficult ethical question that I think will need to be addressed sooner or later.

Alan Winfield (30 August 2014, 14:23)

Thank you Steve for your thoughtful comments. Your distinction between feelings and feelings-that-a-thing-is-conscious-of is really helpful. While I was writing here primarily about regular feelings, I think you are absolutely right that a conscious robot should be capable of being aware of its feelings in the way you and I are. And I 100% agree with you that artificial consciousness should be, in principle, possible. Immensely difficult of course, as well as fraught with ethical and societal worries.

Alan Winfield (30 August 2014, 14:13)

Thanks Joanna.
Quite agree: what really are emotions? There's no magic - just huge and remarkable (as well as perplexing) complexity. We're also agreed regarding humanoid robots - in fact I've argued that to build robots that look like humans, but behave only in very simple ways, is itself unethical. I think robot pets are a better bet.

Anonymous (7 July 2014, 17:48)

Technology gets other things to do some work for us, to make life easier.

If we want that work to be done in ever more human, less machine-like ways, then eventually we start to find ourselves exploring not only how to recreate our humanity in some alternative form but also, in the process, the extent to which we may ourselves be like machines.

This progress could include insights from building machines that offer better speech, emotional, cultural or social recognition. Ethical bots have to have a way of judging how we expect others in our presence to behave, and so on.

Now, through art and academic and practical study, we're heaping up piles of disjointed information and theories about how we - animals, vegetables and minerals - behave and why. Pepper uses ideas from, I believe, emotional rhythms in music?

So there are lots of possible directions that haven't all joined up. But the history of science is that cultural evolution will make its own connections, surprising ourselves, I think, with what we sometimes come up with.
I think we're approaching the stage where we need to balance what more we want to know about ourselves with what more we could do to improve our lives.

Maybe we passed that stage thousands of years ago, or maybe we'll never quite get there.

I think one place human-machine interaction is heading is copying ourselves, as it is found that adding ever lower-level features does make a difference to the end result.

It might be best to accept that sooner rather than later and try to formalise not just what needs to be put in, but what we ought to leave out.

This could involve deciding what disabilities would be acceptable in (or to) a machine and then how to engineer them so that we're (and they're?) happy with the result.

One trick to writing a story - or a joke - is to start with a really good ending and then make sure everything fits and leads up to it.

A couple of days ago some friends found themselves doing this, after a bit of head-scratching at the start line, in order to leave clues for a bike trail through the woods.

I think I'm sort of agreeing with Joanna, but suggesting starting backwards?

Steve Grand (1 July 2014, 19:31)

Assuming Cartesian dualism is wrong, it comes down to a computability question, and then not a very principled one. If we were able to simulate an organism and its environment at the molecular or sub-molecular level, based on reasonably precise real-world data, and the organism we were modelling had feelings, then it follows that the simulation must have feelings too - unless someone can offer a convincing reason why such a simulation is impossible even in principle.
All the objections I've heard so far are just Cartesian dualism in disguise - a feeble clinging on to some kind of quantum magic or other reassuringly "uncomputable" property that makes "real" life different from a faithful simulation of it.

I agree that there's a distinction to be made between having feelings and having "feeeeelings", in the sense of being conscious of them. Feelings at the lower level are no different from sensations; they're just internal sensations rather than environmental ones. But in both cases we can either be consciously aware of them or not. Constructing artificial emotions is easy; using them for all of their biological purposes is easy; the question is what it takes for a robot to be conscious of them. And that, too, is a biological question. Give them the right equipment and they'll feel things in the same way that we do. This equipment almost certainly involves having a model of the world that includes a model of the self. Consciousness is (surely) just what happens when such a model - a planning mechanism that can be decoupled from the sensorimotor loop - is operating. I see no reason why consciousness shouldn't be implementable on different substrates - it's an issue about architectures, not substances - and if that's true then feeeeelings are implementable too.

Like you say, the mistake, as in most of robotics and AI, comes when people try to emulate the symptoms with no regard to the underlying causes.

Joanna Bryson (30 June 2014, 15:41)

Hi Alan - my group has been building and publishing models of emotions as part of action selection systems, mostly for game characters and more recently for robots, for almost a decade now.
Here's a list of papers (not all are models): http://www.cs.bath.ac.uk/~jjb/web/ai.html#emot - and there are some more recent ones, due to Swen Gaudl, not linked there.

To what extent are these really emotions? To the same extent that a gripper is a robot hand, or a camera a robot eye. As usual, I think the problem is to understand the lack of magic, yet also the presence, importance, and complexity of a human attribute, and then to look at what we really need for a robot to function.

I strongly disagree with Cynthia Breazeal's recent claims - as I think you must too - that we need robots to be more human so that they treat us right. I prefer the fourth EPSRC principle of robotics: that we should avoid the humanoid and go for the transparent.