You have a degree of control over making unconscious thoughts conscious, as you demonstrated when you brought the final character of your first name "into mind." You cannot deliberately make conscious thoughts unconscious, however. "Don't think about an elephant," a girl whispers to a boy, knowing that the boy cannot comply. But in a few moments, unless the conversation stays on elephants, the animal will fade into the boy's unconscious. When that happens, the boy is no longer paying attention to the thought of an elephant: The elephant is not his locus of attention.
I use the term locus because it means place, or site. The term focus, which is sometimes used in a similar connection, can be read as a verb; thus, it conveys a misimpression of how attention works. When you are awake and conscious, your locus of attention is a feature or an object in the physical world or an idea about which you are intently and actively thinking. You can see the distinction when you contemplate this phrase: "We can deliberately focus our attention on a particular locus." Whereas to focus implies volition, we cannot completely control what our locus of attention will be. If you hear a firecracker unexpectedly exploding behind you, your attention will be drawn to the source of the sound. Focus is also used to denote, among the objects on a computer display, the one that is currently selected. Your attention may or may not be on this kind of focus when you are using an interface. Of all the world that you perceive through either your senses or your imagination, you are concentrating on at most one entity. Whatever that one object, feature, memory, thought, or concept might be, it is your locus of attention. Attention, as used here, includes not only the case of actively paying attention but also the passive case of going with the flow, or just experiencing what is taking place.
You see and hear much more than whatever is the locus of your attention. If you go into a room to look for a misplaced item, what you seek may be plainly in view but remain unnoticed. We can demonstrate through optical considerations that the image of the sought object was on your retina; it might even have been within the 5-degree cone of your foveal vision. We know through experiments in neurophysiology that a signal representing the object was being generated and transmitted over the optic nerve, yet you do not notice it, because it never became your locus of attention. If I listen for them, I notice that the fluorescent lights in the hall near my office buzz annoyingly, but otherwise I do not hear them. The sound is there, as a tape recording can demonstrate, even when I am unaware of it. I most often notice the sound when I turn the lights on or off. The sudden start of the buzzing calls my attention to it; the sudden stop makes me realize (amazingly, because it is after the fact) that I had been hearing it. Indeed, what seems to be a full-fidelity recollection of the sound I had just been hearing suddenly becomes my locus of attention. Experiments show that direct perceptions (the contents of what psychologists call perceptual memory) seem to persist for a brief period: The well-known phenomenon of the persistence of vision is what makes the discrete frames of a movie appear to flow in continuous motion. In particular, visual perceptions decay in typically 200 milliseconds (200 msec), with a range of 90 msec to 1,000 msec; auditory perceptions decay in typically 1,500 msec, with a range of 900 msec to 3,500 msec (Card, Moran, and Newell 1983, pp. 29–31). I cannot now, sitting at my desk, recreate the buzz in that same vivid, immediate way as I did right after it had stopped and my attention had been directed, by the sudden onset of silence, to the previous presence of the sound.
Now, hours later, the perception is long gone, and only a relatively pale memory (one having the character more of a description than of a sensation) remains of the annoying fluorescent buzz.
Perceptions do not automatically become memories. Most perceptions are lost after they decay. One implication for interface design of the rapid decay of sense perceptions is that you cannot assume that, because someone has seen or heard a particular message 5 seconds earlier, that person will remember its wording. If that particular wording is important, or if the message contains an important detail (for example, if the message is "Report error type 39-152," with the critical detail being the particular number), either you must keep the message displayed until it is no longer needed (the best strategy), or the user must be able to apply the information immediately, that is, before memory of it decays. As the information becomes the locus of attention, it moves into short-term memory, which we define in Section 2-3-4; it will persist there for as long as 10 seconds.
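The design rule implied by these decay figures can be made concrete. The sketch below is a purely illustrative Python fragment (the class and constant names are my own, not taken from any real toolkit): it models a message that remains displayed until the user explicitly dismisses it, so that a critical detail such as an error number outlives the decay of perceptual memory.

```python
# Perceptual-store decay figures from Card, Moran, and Newell (1983):
# (minimum, typical, maximum), in milliseconds.
VISUAL_DECAY_MS = (90, 200, 1000)
AUDITORY_DECAY_MS = (900, 1500, 3500)

class PersistentMessage:
    """A message that stays visible until the user dismisses it,
    rather than timing out while its details may still be needed."""

    def __init__(self, text):
        self.text = text
        self.visible = True

    def dismiss(self):
        """Only an explicit user action removes the message."""
        self.visible = False

msg = PersistentMessage("Report error type 39-152")
# The critical detail remains on screen for as long as the user needs it:
assert msg.visible and "39-152" in msg.text
msg.dismiss()
assert not msg.visible
```

The point of the sketch is the policy, not the mechanism: the lifetime of the message is tied to the user's needs, not to a timer measured against the few hundred milliseconds that perceptual memory survives.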
2-3-1 Formation of Habits
Anything worth doing is worth doing badly at first.
When you perform a task repeatedly, it tends to become easier to do. Juggling, table tennis, and playing piano are everyday examples in my life; they all seemed impossible when I first attempted them. Walking is a more widely practiced example. With repetition, or practice, your competence becomes habitual, and you can do the task without having to think about it. Lewis Thomas (1974), whose writings on biology are always a joy to read, expanded lyrically on the subject.
Working a typewriter by touch, like riding a bicycle or strolling on a path, is best done by not giving it a glancing thought. Once you do, your fingers fumble and hit the wrong keys. To do things involving practiced skills, you need to turn loose the systems of muscles and nerves responsible for each maneuver, place them on their own, and stay out of it. There is no real loss of authority in this, because you get to decide whether to do the thing or not, and you can intervene and embellish the technique any time you like; if you want to ride a bicycle backward, or walk with an eccentric loping gait giving a little skip every fourth step, whistling at the same time, you can do that. But if you concentrate your attention on the details, keeping in touch with each muscle, thrusting yourself into free fall with each step and catching yourself at the last moment by sticking out the other foot in time to break the fall, you will end up immobilized, vibrating with fatigue. (p. 64)
When an observer suggested that a baseball player should think about his technique as he was hitting, baseball star Yogi Berra echoed Thomas's theme, but with characteristic brevity: "How can you think and hit at the same time?" (Kaplan 1992, p. 754).
Any habit is a surrender of detailed control, but habits are essential to the earth's higher life forms. At the other extreme, life is entirely possible (for example, in microbes) in the absence of any consciousness whatever, at least as far as we know or have any reason to believe. We also use the term habit in a pejorative sense. Despite Thomas's claim that there is no real loss of authority, bad habits do develop. Habits can be so strong as to approach addiction, sometimes reaching the point of a total loss of conscious control. (I am speaking here not of physiological addictions, such as to nicotine or opiates, but rather of undesired learned habits, such as nail biting.) Insofar as our conscious selves are who we are, I am reminded of Unamuno's observation: "To fall into a habit is to begin to cease to be" (Unamuno 1913). Unamuno was, perhaps, warning us against the pernicious aspects of habit formation; when it comes to the routine aspects of everyday life, however, you want your conscious attention to "cease to be."
You can readily imagine how difficult it would be to drive your car if you had to think, "Uhh, I want to stop. Let me see now: The engine needs to slow down, so I have to take my foot off the accelerator. Now I have to dissipate my car's kinetic energy into heat by pressing on the brake pedal...." Fortunately, as an experienced driver, you perform the operation habitually. Similarly, you have developed many small habits that help you to use your computer, watch, alarm clock, telephone, and every other device that has an interface.
Persistent use of any interface will cause you to develop habits that you will find yourself unable to avoid. Our mandate as designers is to create interfaces that do not allow habits to cause problems for the user. We must design interfaces that (1) deliberately take advantage of the human trait of habit development and (2) allow users to develop habits that smooth the flow of their work. The ideal humane interface would reduce the interface component of a user's work to benign habituation. Many of the problems that make products difficult and unpleasant to use are caused by human-machine design that fails to take into account the helpful and injurious properties of habit formation. One notable example is the tendency to provide many ways of accomplishing the same task. Having multiple options can shift your locus of attention from the task to the choice of method (a topic explored in Section 3-7).
You cannot often break a habit by an act of volition. However often or however fiercely you tell yourself that you will not perform the habitual action, you may not always be able to stop yourself. Say that, for example, next Sunday your car will interchange the functions of the brake and accelerator pedals. A red light will illuminate on your dashboard to warn you of this change. You might manage to drive a few blocks successfully with the pedals reversed, but most of us would not get out of the driveway without making an error. As soon as your locus of attention is pulled away from the novel arrangement (for example, if a child runs into the street), your habitual reaction will make you stomp on the wrong pedal. The red light will be of no help at all. I emphasize that you cannot undo a habit by any single act of willpower; only a time-consuming training process can undo a habit. A designer can set, or inadvertently create, a nasty trap by permitting two or more heavily used applications that differ in only a handful of often-used details to run on one computer. In such a circumstance, the user is guaranteed to develop habits that will cause him errors when he attempts to use in one application a method appropriate only to the other.
2-3-2 Execution of Simultaneous Tasks
In the language of cognitive psychologists, any task that you have learned to do without conscious thought has become automatic. Automaticity enables you to do more than one activity at a time: All but at most one of the tasks that you perform simultaneously are automatic. The one that is not automatic is, of course, the one that most directly involves your locus of attention. When you do two tasks simultaneously, neither of which is automatic, your performance on each task degrades (a phenomenon that psychologists call interference) compared to your performance on each task alone, because the two tasks compete for your attention. The more predictable, automatic, and unconscious a task becomes, the less it will degrade or compete with other tasks (Baars 1988, p. 33).
We humans apparently simulate the simultaneous accomplishment of tasks that require conscious control by alternating our attention between tasks, attending now to one, then to the others (Card, Moran, and Newell 1983, p. 42). You achieve true simultaneity when all but at most one of your tasks become automatic. For example, you can, at the same time, eat a snack without choking, walk without tripping, and think through a mathematics problem to a satisfactory conclusion. (You may also be working on another math problem unconsciously, but by the definition of the cognitive unconscious, you wouldn't notice that you were. I am claiming only that you cannot simultaneously work consciously on two different math problems.) For most people, all of the tasks, except for finding the solution to the mathematics problem, are so well learned that they undertake these tasks on autopilot. However, if you were practicing these simultaneous activities and suddenly discovered a nasty-tasting morsel in the snack, you would become conscious only of what you were eating. You would no longer be conscious of the mathematics problem.
Equally important as the fact that you cannot be conscious of more than one task at any moment is the realization that humans cannot avoid developing automatic responses. This idea is important enough to bear repetition: No amount of training can teach a user not to develop habits when she uses an interface repeatedly. That we form habits is part of our fixed mental wiring; habit formation cannot be prevented by any act of volition. If you have ever unintentionally driven toward your normal workplace on a Saturday morning when you intended to go somewhere else, you've been had by a habit that formed through repetition of a fixed sequence of actions. When you learned to read, at first you sounded out and paid attention to each letter and syllable; now (I hope) you read without conscious attention to the process of translating marks into words.
Any sequence of actions that you perform repeatedly will, eventually, become automatic. A set of actions that form a sequence also becomes clumped into a single action; once you start a sequence that takes less than 1 or 2 seconds to complete, you will not be able to stop the sequence but will continue executing the actions until you complete that clump. You also cannot interrupt sequences that take longer than a few seconds to execute unless the sequence becomes your locus of attention. Thus, after you take the wrong turn on Saturday, you may suddenly realize that you intended to drive in the opposite direction; this realization makes your navigation your locus of attention, and you can interrupt the automatic sequence of actions that would have led you to your workplace.
When you repeat a sequence of operations, making and keeping what you are doing your locus of attention is the only way to keep a habit from forming. This is very difficult to do. As expressed in a common phrase, our attention wanders.
The inevitability of habit formation has implications for interface design. For example, many of us have used computer systems that, before they will perform an irreversible act, such as deleting a file, ask, "Are you sure?" You then must type, say, a Y for yes or an N for no in response to the question. The idea is that, by making you confirm your decision, the system will give you a chance to correct an otherwise irrecoverable error. This idea is widely accepted. For example, Smith and Duell (1992), addressing a nursing environment, say, "If you inadvertently delete part of the permanent record (which is hard to do because the computer always asks if you're sure)..." (p. 86). Unfortunately, Smith and Duell are unrealistic in their assessment: You can readily make an accidental deletion even when this kind of confirmation is required. Because errors are relatively rare, you will usually type Y after giving any command that requires confirmation. Due to the continual repetition of the action, typing Y after deleting soon becomes habitual. Instead of being a separate mental operation, typing the Y becomes part of the delete-file action; that is, you do not pause, check your intentions, and then type the Y. The computer system's query, intended to serve as a safety measure, is rendered useless by habituation; it serves only to complicate the normal file-deletion process. The key idea is that any confirmation step that elicits a fixed response soon becomes useless. Designers who use such confirmations and administrators who think that the confirmations confer protection are unaware of the powerful habit-forming property of the cognitive unconscious (see Section 6-4-2).
A more effective strategy is to allow users to undo an erroneous command, even if they have performed intervening actions since issuing it. You cannot protect against a user's developing a habit of confirming without reestablishing the decision as the locus of attention, even by making the required confirmation action unpredictable; for example, by having the computer specify that the user must type, either twice or backward (that choice being presented at random), a word displayed, also chosen randomly, in the dialog box:
The action that you have requested cannot be undone. It will cause permanent loss of the information in the file. If you are sure you wish to delete the information forever, type backward the tenth word in this box.
Requiring this kind of confirmation is as draconian as it is futile. Any attempt at an effective confirmation process is necessarily obnoxious, because it prevents the user from forming a habitual response and from ever becoming comfortable with the interface. If, for legal or other reasons, a file should never be deleted by the user, it should be made impossible for such a deletion to be performed. Such measures also create a new locus of attention: The user attends to the unusual confirmation action rather than to the correctness of her prior response, thus frustrating the purposes of both the confirmation and the user.
No method of confirming intent is perfect. Even having the user type in the reason for a deletion (a technique especially useful in a situation that carries legal implications) will soon lead to the user's supplying one of a few stock answers. If the rationale for performing an irreversible act was flawed from the outset, no warning or confirmation method can prevent the user from making a mistake.
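The undo strategy can be sketched in a few lines. The following Python fragment is a hypothetical illustration (the TrashCan class and its layout are my own invention, not a real API): deletion moves the file aside instead of destroying it, so no confirmation prompt is needed, and the action remains reversible even after intervening work.

```python
import shutil
import tempfile
from pathlib import Path

class TrashCan:
    """Reversible deletion: no 'Are you sure?' prompt, because the
    operation can always be undone."""

    def __init__(self):
        self._trash = Path(tempfile.mkdtemp(prefix="trash_"))
        self._history = []  # stack of (original_path, trashed_path)

    def delete(self, path):
        """Move the file into the trash instead of destroying it."""
        path = Path(path)
        target = self._trash / f"{len(self._history)}_{path.name}"
        shutil.move(str(path), str(target))
        self._history.append((path, target))

    def undo(self):
        """Restore the most recently deleted file to its original place."""
        original, trashed = self._history.pop()
        shutil.move(str(trashed), str(original))
        return original

# Illustration: delete, then recover, a file.
doc = Path(tempfile.mkdtemp()) / "report.txt"
doc.write_text("important data")
trash = TrashCan()
trash.delete(doc)
assert not doc.exists()
trash.undo()
assert doc.read_text() == "important data"
```

Because deletion is a habitual, unconfirmed action here, habituation works for the user rather than against her: the habit she forms is harmless, and the rare mistake is recoverable.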
Trapped in the Pitfall of Automaticity
I was trapped in the pitfall of automaticity while I was writing this chapter: I italicized a word, then tried to unitalicize it. In most Macintosh word processors, pressing and holding the key marked with a drawing of an apple (called the Command key) and then pressing and releasing the key marked with the letter T (Command-T) returns the text to normal status. In Microsoft Word, however, Command-T alters the paragraph format. If you had asked me (thus making the question my locus of attention), I would have told you that I was using Word; yet I reached (automatically!) for Command-T, typed it, and mangled the paragraph formatting. The only way to prevent such errors is through interface designs that take into consideration the inevitability of habit formation.
2-3-3 Singularity of the Locus of Attention
I can't think about X if I'm thinking about Y.
Chris, a character on the television show Northern Exposure, 31 October 1994
For our purposes, an essential fact about your locus of attention is that there is but one of them. This observation underlies the solution of numerous interface problems. Many people do not believe that they or others have only one locus of attention, but experiments, described in the cited literature, strongly support the hypothesis that we are unable to attend to multiple simultaneous stimuli. This notion, which parallels our discussion on the limitations of the cognitive conscious, is sufficiently surprising to justify examining the support for it.
As Roger Penrose (1989) noted, "A characteristic feature of conscious thought...is its 'oneness' as opposed to a great many independent activities going on at once" (p. 398). Bernard Baars (1988), a widely recognized leader in the study of the cognitive conscious, explains that when people are "asked to monitor a demanding stream of information [they are] largely unconscious of alternative streams of information presented at the same time, even to the same sensory organ. Similarly, in absorbed states of mind, when one is deeply involved with a single train of information, alternative events are excluded from consciousness" (p. 33). The alternative events are not loci of your attention.
Common parlance recognizes this observation. For example, we may have a thought, and we may have many thoughts, but we speak of our attention only in the singular. We never speak of a person's attentions except in unrelated usages, such as unwanted attentions. Although you are unconscious of all but the one line of thought (the single conceptual thread that you are following), an unexpected or surprising event can pull your attention from that thread. I have described how surprising events trigger conscious attention. What is salient here is that you have acquired a new locus of attention and lost the old; it is not the case that a second locus has been brought into play.
An interrupting event does not need to be external: A sudden pain or the realization that it is time for an appointment may break into your cognitive conscious, derailing your current train of thought and putting it on a new track. However, if external or internal events are routine and unpressing, your unconscious recognizes that status, and you ignore those events without being conscious that you are ignoring them. In other words, in the presence of the ordinary, your attention is not pulled away. You can train yourself to scan the environment consciously from time to time to notice events to which your attention would not otherwise be called. To illustrate, pilots are taught to scan their instruments regularly, without outside stimuli to initiate the scanning. A scan allows pilots to detect, for example, an instrument that subtly shows an abnormal condition. (Not every instrument in an aircraft has an alarm associated with it.) Nonetheless, pilots regularly fail to perform their scan when events force their attention to a particular locus.
 Considerable attention has been paid to the biological mechanisms that allow animals to synchronize with external time cycles, but I am unaware of any work regarding how we set and respond to internal alarm clocks.
Absorption Kills 101 People
An extreme example is an accident that killed 101 people in December 1972. Normally, a green indicator in the airliner's cockpit lights to signal that the landing gear is down and ready for landing. When the indicator failed to light, the pilot decided to circle at an altitude of 2,000 feet, and the copilot put the aircraft on autopilot to maintain the altitude. All three crew members then tried to change the bulb, but it stuck and they could not get it out. Perhaps due to their moving around and working on the bulb, they accidentally turned off the autopilot; in any case, it became disengaged. Soon, as the cockpit recording later showed, an automatic warning sounded; a 0.5-second chord warned them that they had gone 250 feet below their assigned altitude. A yellow warning indicator also lit up. The crew, absorbed in the problem with the green bulb, failed to notice either warning. A little later, while still struggling with the bulb, the copilot noticed that the altimeter indicated 150 feet, alarmingly low. He then asked the pilot, "We're still at two thousand, right?" The pilot replied, "Hey, what's happening here?"
As the pilot spoke, a low-altitude warning horn went off. "And even with the altimeter approaching zero, an amber light on the altimeter indicating they were off their assigned altitude, the radio altimeter approaching zero, and a radio altimeter warning horn beeping, everyone in the crew was so sure they were at 2,000 feet that no one could bring himself to act and eight seconds after the first officer noticed the altimeter, the aircraft crashed into the Everglades." (Quoted material from Garrison 1995.)
You can be more or less absorbed in the task that involves your locus of attention. The more intensely you are focused, the more difficult it is to shift to a different locus of attention, and the greater the stimulus needed to effect such a change. In the extreme case, when we are completely absorbed by a task, we cease to monitor our environment. You have probably experienced the absorbed state when you are reading a book, are thinking deeply about a problem, or are in the midst of a crisis that, as the expression goes, demands your attention. The use of a computer is often so stressful and difficult that a user becomes absorbed in working on the computer system itself and is therefore distracted from the completion of her tasks. Our goal is to leave the task as the locus of the user's attention.
Absorption in a task or a problem decreases the ease with which a person can change her locus of attention. On the other hand, such absorption if it is confined to the task and if the system does not pull attention to itself is essential to productivity. Systems should be designed to allow users to concentrate on their jobs. Interfaces should be designed as though the user will be so absorbed in her task that she may not respond to your attempts to communicate with her. An interface must work, whatever the user's state of absorption. For example, interface designers sometimes assume that the user's locus of attention is the cursor and that changing the cursor's shape will inevitably command the user's attention. The cursor location is a good place to put indicators, but even there, the indicator can go unnoticed; the shape of the cursor is not the locus of attention; rather, the place or object to which it is pointing may well be. An example is given in Section 3-2.
Many examples of absorption seem unbelievable until you experience a similar incident or until you have seen so many reports that you become convinced of the strength of absorption's grip. Because aviation accidents are often well researched and carefully documented, they are a good source for case studies. Here is another (Garrison 1994). A well-known pilot was flying an aircraft unfamiliar to him, one that required him to lower the retractable landing gear as he made his descent. As a reminder, a buzzer sounds when this particular model of aircraft is a certain distance from the ground and the gear has not been lowered. "I landed gear-up at one point, having persuaded myself that the insistent buzzer I kept hearing as I approached the gravel runway had something to do with the airbrakes. (This was one of my early lessons in the bizarre mental mix-ups that can lead to accidents)" (Garrison 1994). But there was no bizarre mental mix-up: Garrison was concentrating on making a good landing, one of the most difficult tasks that a pilot must accomplish and one that requires a great deal of concentration.
 An interface designer might wonder why, if the aircraft could sound the buzzer, could it not also automatically lower the landing gear? This book is not the forum for a discussion of the details, but at times, automatically lowering the landing gear could be dangerous to the occupants. Therefore, it is always left up to the pilot to choose whether to lower the gear.
The human ability to tune out disturbances is not necessarily an all-or-nothing response, as in the previous examples; it can be proportional to the level of absorption and the degree of disturbance. As stress increases, "people concentrate more and more on but a few features of their environment, paying less and less attention to others" (Loftus 1979, p. 35). Thus, if the computer behaves unexpectedly while you are using an interface, you become less likely to see hints, help messages, or other user aids as you become increasingly agitated about the problem.
The more critical the task, the less likely it is for users to notice warnings that alert them to potentially dangerous actions. A computer warning message is most likely to be missed when it is most important that it not be missed; this sounds like a humorous corollary of Murphy's law, but it is not. One way we can help is to make sure that users cannot make interface operation errors, or that the effects of any actions are readily reversible, rather than simply notifying users about the potential consequences of their actions. Most interface situations can be designed such that error messages are unnecessary. A forceful diatribe against using error messages appears in About Face (Cooper 1995, pp. 421–440).
 If anything can go wrong, it will. The first corollary is, If nothing can go wrong, it will anyway.
2-3-4 Origins of the Locus of Attention
That we have only one locus of attention may seem odd. Let us explore how we may have come to have this trait. Baars (1988) speaks eloquently to the question; he seeks a biological rationale for our having evolved in this limited fashion, asserting that
consciousness and related mechanisms pose a great challenge to functional explanations because of the paradoxical limits of conscious capacity. Why can't we experience two different "things" at one time? Why is Short Term Memory limited to half a dozen unrelated items? How could such narrow limits be adaptive? Reasoning naively, it would seem wonderful to be able to consciously read one book, write another one, talk to a friend, and appreciate a fine meal, all at the same time. Certainly the nervous system seems big enough to do all these things simultaneously. The usual answers, that the limitations are "physiological" or that we only have two hands and one mouth to work with, are quite unsatisfactory because they simply move the issue one step backwards: Why have organisms blessed with the most formidable brain in the animal kingdom not developed hands and mouths able to handle true parallel processing? And why does our ability to process information in parallel increase with automaticity, and decrease with conscious involvement? (p. 348)
 Short-term memory, often abbreviated STM in the literature, describes the behavior of memory with respect to stimuli just seen, heard, or otherwise sensed. If we do not make use of the memory or make it our locus of attention, STM fades in 10 to 20 seconds, or far less if we pay attention to new events. As Baars notes, STM is not only short but also of very limited capacity, and new events will drive out the old irretrievably. For a nontechnical and eminently readable account of the structure of human memory, see Loftus (1980).
Baars suspects that the answer lies in there being only one "whole system": There is but one "I" in each of us. But to say that there is one personhood per human being begs the question. That is, why are there not multiple personhoods per mind-body ensemble? I am speaking not of changes that occur serially, but rather of true simultaneous and independent minds in a single, connected physical entity. It may simply be that having a single personhood is a biological accommodation to the linearity of time, or results from an accident of evolution rather than from a functional adaptation. Nevertheless, it seems more likely that our single personhood is adaptive: an accommodation to the purely physical impediments to having multiple simultaneous persons in one body. Given our evolved body plan, both personalities would not be able to speak at once or to turn the head in different directions simultaneously. Even if our eyes could have evolved to operate as independently as a gecko's, would they be able to serve two independent curiosities? I can imagine (to mention just one possible catastrophe) that any multiple-mind mutation was eaten when it tried to escape a predator by deciding to run in two different directions at once.
 Continuing change of personality is a human constant: We grow and change all the time. These changes, as well as the changes called multiple-personality disorder, are not what is being discussed.
 I have often seen my cat poised between curiosity and fear, its senses intently trained on an unfamiliar object, its body tensed for immediate flight. I have acted the same way myself. Sometimes, what is learned by not fleeing is of value, and sometimes the delay is fatal: whence the expression "curiosity killed the cat." This occasional "being of two minds," as the idiom goes, is an internal sequential dialogue rather than two independent simultaneous processes.
Siamese twins and two-headed animals do occur from time to time, and they have two independent minds, but they are developmental accidents, a mismatching or a misreading of the genetic code. They are not successful from an evolutionary standpoint and are not products of natural selection. In the wild, such sports of nature rarely survive and reproduce.
2-3-5 Exploitation of the Single Locus of Attention
We have examined the effects and the possible origins of having a single locus of attention. The next step is to make use of that singularity. We can redesign neither our own internal wiring nor that of other users, but we can create products that have interfaces that accommodate these cognitive capabilities.
That people have a single locus of attention is not always a drawback. Magicians exploit this characteristic shamelessly. A good magician can fix the attention of an entire audience on one hand so that not a single spectator will see what the other hand is doing, although that hand is in no way concealed. If we know where the user's attention is fixed, we can make changes in the system elsewhere, knowing that the changes will not distract the user. This phenomenon was exploited in designing the Canon Cat (Figure 2.1). When a user stopped working, the Cat stored, on the first track of the disk, a bit-for-bit image of the screen exactly as the display appeared when she stopped. When the user again loaded her disk, the Cat placed the most recently viewed image on the screen in only a fraction of a second. It takes about 10 seconds for a person to switch contexts or to prepare mentally for an upcoming task (Card, Moran, and Newell 1983, p. 390), but it took only 7 seconds for the Cat to read the rest of the disk into memory. So while the user was staring at the static screen image, recalling what she had been doing and deciding what she was going to do next (her locus of attention being the preparations for the coming task), the system finished loading. Only then did the screen become active, although it did not change appearance, except that the cursor began to blink. Only a handful of users ever noticed this trick. Most Cat owners thought that the product magically managed to read in the whole diskette in the fraction of a second that it took to display the first screen. Presto!
Figure 2.1. The Canon Cat. Note the two LEAP keys below the space bar.
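The Cat's trick can be sketched in modern terms: show a cached image of the last screen immediately, finish loading the full state on a background thread, and let the user's own context switch hide the delay. This is a minimal illustration, not the Cat's actual firmware; the class and function names are invented for the example, and a short sleep stands in for the roughly 7-second disk read.

```python
import threading
import time

class SnapshotRestore:
    """Sketch of the Canon Cat technique: display a saved screen image
    at once, then load the rest of the state while the user's locus of
    attention is on preparing for the coming task."""

    def __init__(self, snapshot, load_full_state):
        self.snapshot = snapshot        # image saved when the user last stopped
        self._load = load_full_state    # slow loader for the rest of the state
        self.state = None
        self._ready = threading.Event()

    def power_on(self):
        # 1. Put the saved image on the screen immediately.
        screen = self.snapshot
        # 2. Read in the rest of the state in the background.
        def worker():
            self.state = self._load()
            self._ready.set()           # only now does the cursor begin to blink
        threading.Thread(target=worker, daemon=True).start()
        return screen

    def wait_until_interactive(self, timeout=None):
        return self._ready.wait(timeout)

def slow_load():
    time.sleep(0.05)                    # stands in for the ~7-second disk read
    return {"documents": ["draft.txt"]}

cat = SnapshotRestore(snapshot="<image of last screen>", load_full_state=slow_load)
shown = cat.power_on()                  # the old screen appears instantly
cat.wait_until_interactive()            # masked by the user's ~10-second context switch
```

The point of the design is the ordering: the user sees something meaningful immediately, and by the time her attention returns to the machine, the loading is already complete.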
Many people do not believe that it takes a person approximately 10 seconds to switch contexts; the time is measured between the final command executed in the previous context and the first command issued in the new context. The hiatus is not noticed because the minds of the users are occupied; they are not aware of the passage of time. However, this phenomenon should be used carefully when designing an interface. If the work flow is such that a user makes a particular context switch repeatedly, so that it becomes habitual, the user will make the switch in far less time.
Time delays can be masked; for example, a card game that takes time to create a new deal will feel faster if a card-shuffling noise is played during the delay. The value of masking was vividly demonstrated when such a game had its sound turned off inadvertently and the players suddenly found the delay annoying (Dick Karpinski, personal communication 1999).
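The masking idea reduces to a small pattern: start the cue, run the slow operation, stop the cue. The sketch below assumes hypothetical start_mask and stop_mask hooks into an audio layer; it only records events in a list so that the ordering is visible.

```python
import time

events = []  # records the order in which things happen

def run_with_masking(task, start_mask, stop_mask):
    """Run a slow operation while a masking cue (a shuffling sound, say)
    plays, so the user perceives activity rather than a dead delay."""
    start_mask()
    try:
        return task()
    finally:
        stop_mask()

def deal_cards():
    time.sleep(0.05)                  # stands in for the time to create a new deal
    events.append("dealt")
    return ["A spades", "K hearts"]

hand = run_with_masking(
    deal_cards,
    start_mask=lambda: events.append("shuffle sound on"),
    stop_mask=lambda: events.append("shuffle sound off"),
)
```

The try/finally guarantees that the cue stops even if the masked operation fails, so a bug in the deal cannot leave the shuffling sound playing forever.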
2-3-6 Resumption of Interrupted Work
Usually, after dealing with an interruption to a task, you then return to the interrupted task. If the interruption lasts only a few seconds, within the decay time of short-term memory, no further stimulus is required to signal you to return to the prior task. After a longer break, however, your return to the interrupted task must be triggered, often by seeing your incomplete work lying before you. Such cueing is as common in daily life as it is when using a computer: A banana peel left on the kitchen table by your 4-year-old child becomes a cue to dispose of the peel.
A metaphor that permeates personal computers and derivative technologies is that of a central, neutral dispatch area, or desktop, from which you can launch a variety of applications. When computers are turned on, most of them present the desktop, although some of them can be set to launch a fixed set of applications. When you quit an application, you are usually returned to the desktop. This interface strategy is inefficient and nonhumane. The reason is straightforward: When you quit an application, you want either (1) to return to the previous task on which you were working or (2) to start a new task.
In current desktop-based systems, you must always navigate to the task. This matches only the worst case of an interface that always returns you to what you were last doing: with such an interface, when you wish to resume the task you left off, you have to do no work at all.
Similarly, when you return to an online site, such as a web page, it would generally be better to return you to where you last were than to a home page that, if the site is well designed, is always available with a single click. The same reasoning suggests that when you open a document in an application, such as a word processor, you should be returned to the place where you were working when you last closed or saved it.
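In an application's file format, this principle costs almost nothing: save the user's working position alongside the content, and restore it on open. The sketch below is a minimal illustration under assumed names (save_document, open_document, a JSON file layout); it is not any real word processor's format.

```python
import json
import os
import tempfile

def save_document(path, text, cursor):
    """Store the text together with where the user was working, so that
    reopening the document returns her to that spot rather than to the top."""
    with open(path, "w") as f:
        json.dump({"text": text, "cursor": cursor}, f)

def open_document(path):
    """Return the text and the last working position."""
    with open(path) as f:
        doc = json.load(f)
    # Restore the last locus of work; fall back to the start for old files
    # that were saved without a position.
    return doc["text"], doc.get("cursor", 0)

path = os.path.join(tempfile.mkdtemp(), "draft.json")
save_document(path, "It was a dark and stormy night.", cursor=12)
text, cursor = open_document(path)      # cursor is 12, not 0
```

The fallback in open_document matters in practice: a format change like this must still open documents saved before the position was recorded.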
The Canon Cat had the property of always returning you to the previous task when you started to use it; moreover, it presented exactly the same screen appearance, including cursor placement, as when it was last used. Many users reported that seeing the same display helped them to remember what they had been doing when they had last used the machine, which made returning to the Cat a more pleasant experience than returning to a computer that boots into a desktop. More recently, the Apple iBook has taken a similar approach, saving the current state to disk and whisking it back in when you turn the machine on.
Designers of digitally tuned radios and televisions make sure that their products retain the most recently tuned station and volume settings, a requirement that adds to the complexity and cost of those products in the form of nonvolatile memory that otherwise would be unnecessary. For computer designers, who work with products that already have substantial nonvolatile memory, such as a hard disk, the hardware needed to do this is already in place, and there is no excuse for failing to do the same.