Friday, October 26, 2007
A day off sick with a head cold and painful sinuses had one consolation: I had a little time this afternoon to install Leopard - the new version of Mac OS X (which, interestingly, arrived this morning, before its official launch at 6.00pm this evening).
How did the installation go? Well, I'm happy to report that it was remarkable for being unremarkable. Just two minor comments. Firstly, there was a very long pause (five minutes perhaps) at the start of the install process proper, when the time remaining said 'calculating' and there was no apparent hard disk or DVD activity - I was beginning to have doubts about whether all was OK before the process sprang into life again. (Note to Apple: any long and worrying pause like this really should still have some sort of progress indicator, no matter how simple.) Secondly, the time-remaining calculation appeared to have difficulty making up its mind. Initially it said something over three and a half hours, then revised its estimate downwards over the next 30 minutes or so. In the end it took about an hour and a half from start to finish, and that included a long time for install disk verification.
First impression? Well, it's fine. It's an operating system, which - in my view - means it should not be the main event but should just get on and do its thing in the background while letting me get on with my work. It looks very nice of course, but so did Tiger. Cosmetically it's not such a big difference, especially for me, since I place the dock on the left rather than at the bottom. (Ergonomically it makes more sense there, because a leftward mouse movement to reach the dock is far easier than a downward one.)
The main new feature that I am immediately and gratefully using is called 'spaces'. It is basically the same thing that Linux window managers have had for years - and which I have missed since switching (back) to Mac: it lets me spread applications across four virtual screens and then quickly switch between them. This is great for me because, when writing, I like to have Firefox open for web searches, OpenOffice for drawing diagrams, Preview for reading PDF papers, and BibDesk and TeXShop for the actual writing. A single screen gets pretty crowded. (Of course what I'd really like is a bank of LCD displays so I can see everything at once, but - for now - spaces will have to do.)
What else? Well, the ability to search instantly and then - again with almost no delay - view the search results with 'cover flow', and to use 'quick look' to review what you find in more detail, is terrific. The way that quick look opens everything from PowerPoint presentations to movies, lets you skip through the files with the left and right arrow keys, and also lets you scroll up and down within individual files is just great. For the first time in 33 years of using computers I really think I don't need to remember filenames any more. Given that this is still a good old-fashioned traditional Unix file system underneath, Leopard is probably as close as you can get to the feel of an associative, 'content-addressable' file system.
----------------------------------------------------------
*Footnote: I returned to the Mac earlier this year after a 20-year separation. The first computers at APD (that we didn't design and build ourselves) were 128K Macs in 1985: lovely machines with a proper windowing OS (while the PC was still running DOS) that were used for everything from word processing and accounts to technical drawing.
Sunday, October 07, 2007
You really need to know what your bot(s) are thinking (about you)
The projected ubiquity of personal companion robots raises a range of interesting but also troubling questions.
There can be little doubt that an effective digital companion, whether embodied or not, will need to be both sensitive to the emotional state of its human partner and able to respond sensitively. It will, in other words, need artificial empathy - such a digital companion would (need to) behave as if it has feelings. One current project at the Bristol Robotics Laboratory is developing such a robot, which will of course need some theory of mind if it is to respond appropriately. Robots with feelings take us into new territory in human-machine interaction. We are of course used to temperamental machinery, and many of us are machine junkies: we are addicted to our cars and dishwashers, our mobile phones and iPods. But what worries me about a machine with feelings (and frankly it doesn’t matter whether it really has feelings or not) is how it will change the way humans feel about the machine.
Human beings will develop genuine emotional attachments to companion bots. Recall Weizenbaum’s secretary’s sensitivity about her private conversation with ELIZA - arguably the world’s first chat-bot - in the 1960s. For more recent evidence look no further than the AIBO pet owners’ clubs. Here is a true story from one such club, to illustrate how blurred the line between pet and robot has already become. One AIBO owner complained that her robot pet kept waking her at night with its barking. She would “jump out of bed and try to calm the robo-pet, stroking its smooth metallic-gray back to ease it back to sleep”. She was saved from “going crazy” when it was suggested that she switch the dog off at night to prevent its barking.
It is inevitable that people will develop emotional attachments to, even dependencies on, companion bots. This, of course, has consequences. But what interests me is whether the bots acquire a shared cultural experience. Another BRL project, called ‘the emergence of artificial culture in robot societies’, is investigating this possibility. Consider this scenario. Your home has a number of companion bots. Some may be embodied, others not. It is inevitable that they will be connected - networked via your home wireless LAN - and thus able to chat with each other at the speed of light. This will of course bring some benefits: the companion bots will be able to alert each other to your needs - “she’s home”, or “he needs help with getting out of the bath”. But what if your bots start talking about you?
Herein lies the problem that I wish to discuss. The bots’ shared culture will be quintessentially alien - in effect an exo-culture (and I don’t mean that to imply anything sinister). Bot culture could well be inscrutable to humans, which means that when bots start gossiping with each other about you, you will have absolutely no idea what they’re talking about, because - unlike them - you have no theory of mind for your digital companions.
--------------------------------------------------------------
This is a short 'position statement' prepared for the e-horizons forum Artificial Companions in Society: Perspectives on the Present and Future, 25th and 26th October, Oxford Internet Institute.