Chapter 15. Invisible Tools or Emotionally Supportive Pals?
Try juxtaposing these two thoughts:
Researchers are telling us that, emotionally and intellectually, we respond more and more to digital machines as if they were people. Such machines, they say, ought to be designed so as to be emotionally supportive. ("Good morning, John. You seem a little down today. Bummer.") Stanford social researchers B. J. Fogg and Clifford Nass propose this rule for designers of human-machine interfaces: "Simply put, computers should praise people frequently even when there may be little basis for the evaluation" (Fogg and Nass 1997). Leaving questions of sincerity and ethics aside, this is thought to be quite reasonable, since machines are obviously becoming ever more human-like in their capabilities.
The common advice from other human-computer interface experts is that we should design computers and applications so that they become invisible, transparent to the various aims and activities we are pursuing. They shouldn't get in our way. For example, if we are writing, we should be able to concentrate on our writing without having to divert attention to the separate requirements of the word processor.
The two pieces of advice do not necessarily contradict each other, but their conjunction is nevertheless slightly odd. Treat machines like people, but make them invisible if possible? Combining the two ideals wouldn't say much for our view of people. It sounds as though we're traveling down two rather different tracks. And, in the context of current thinking about computers, neither of them looks particularly healthy to me. But perhaps they can help us to explore the territory, leading us eventually to a richer and more satisfactory assessment of the human-machine relationship.