What's Wrong with Software?

Much of my first book was devoted to answering this question in detail. However, I'd like to take a few pages to provide you with a glimpse of some interaction-design principles that are effective in designing better software-based products.

Software Forgets

Every time you use a program, you learn a bit more about it, but the program doesn't remember a thing. Troy Daniels is our media producer. He practically lives in Adobe Photoshop. Yet, every time he opens Photoshop, it has forgotten everything and anything he has ever done with it. It doesn't remember where he keeps his image files, nor does it remember the typical work that he does with them. Controls and functions that he uses constantly are given the same emphasis in the interface as controls that he has never used and likely never will.
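A program that remembered would need only a little bookkeeping: where the user last worked, and which commands she uses most. Here is a minimal sketch of that idea in Python, using a hypothetical settings file (`app_memory.json`) and invented function names; it is an illustration of the principle, not any real application's mechanism.

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical per-user file where the program keeps what it has learned.
MEMORY_FILE = Path("app_memory.json")

def load_memory():
    """Restore what the program learned in earlier sessions."""
    if MEMORY_FILE.exists():
        data = json.loads(MEMORY_FILE.read_text())
        return data["last_dir"], Counter(data["command_counts"])
    return None, Counter()

def save_memory(last_dir, command_counts):
    """Record the user's habits so the next launch can honor them."""
    MEMORY_FILE.write_text(json.dumps(
        {"last_dir": last_dir, "command_counts": command_counts}))

def record_command(command_counts, name):
    """Note each command the user issues."""
    command_counts[name] += 1

def favorite_commands(command_counts, n=5):
    """The n most-used commands: candidates for prominent placement."""
    return [name for name, _ in command_counts.most_common(n)]
```

With this in place, the next launch could open the user's usual folder and give his habitual commands the prominence they deserve, instead of treating every feature as equally important.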

Software Is Lazy

Most application programs just don't work very hard for users. This is not to say that they don't work hard, but they often waste enormous effort trying to please users by treating them the way programmers wish to be treated. It's like giving your wife an electric drill for her birthday. Just because you like electric drills doesn't mean that she does. If we could only get the programmers to put their efforts behind something that the user really desires, we could make the user much happier without working the programmer any harder.


Software Is Parsimonious with Information

Just like the ATM that doesn't tell me how much money is in my account, most interactive products are very stingy with information. They also tend to camouflage the process (what is happening) as well as the information relevant to that process. The typical user of an interactive system cannot tell the state of the system until it blurts out a message indicating total failure. For instance, the new clock-radio I described in Chapter 1, "Riddles for the Information Age," fools me by inadvertently concealing its state. The system seems to be working just fine, but it isn't, and there is simply no way of knowing.

If you ever find yourself with a pad of paper taking marginal notes as you work in some program, you know that you are a victim of an information-stingy program. It would be so easy for any program to put lots more helpful information on the screen, but few programmers think about it. For example, when my email program receives an incoming message, it displays a tiny envelope icon. The same little envelope is visible whether I have one new message or a thousand. It doesn't give me any clue about the depth of my digital inbox. That parsimony doesn't let me see the big picture.

Software Is Inflexible

When people can see the big picture, they often tailor their actions to it, but software rarely is so flexible. When a person sees that the stack of forms in his inbox has grown to a depth of six inches, he knows that he must take some drastic action to keep from getting swamped. The way almost all software programs are written, they can only see the single form on the very top of the stack, never beyond it. If a computer program's inbox is stacked six inches or six feet deep, metaphorically speaking, the computer still behaves as though it has only a single form awaiting its ministrations. The converse is true, too. When there is only one form in the human's inbox, he might take advantage of the lull to help his colleague with a taller pile. The computer would never do that.

When a manual, human process is computerized, the programmers (or analysts) study the current behavior of users performing the manual job, and they distill the tasks or functions out of it. These tasks are then programmed into the computer. Typically, all of the nontask aspects of the job are simply lost.

In a manual, human system, the person in charge can pull her brother-in-law's form off the bottom of the stack and expedite its handling. Alternatively, the annoying caller who behaves rudely gets his form moved way to the bottom of the stack. This system flexibility is a key to maintaining social order. In computerized systems, an inhuman rationality is imposed that wears away at the fabric of civilization.

Human users prefer systems that let them fudge things a little. They want to be able to bump the pinball machine just a little bit, not enough to tilt the game, but enough to have some positive influence on the outcome. This fudgability is what makes our manual systems work so much better, albeit more slowly, than our computerized ones.

Software Blames Users

When a program does have a problem, it invariably dumps it in the user's lap, and it typically blames the user for the problem, too. If a human being has an accident, he will usually work to make up for it. For example, if I'm at a friend's house for dinner, and I spill someone's glass of wine, I'll use my napkin to stop the wine from spreading, and then I'll pour the person a new glass. Because I show concern and helpfulness, no offense is taken, and the accident is clearly seen for what it is.

Recently I used a vendor's program to access the vendor's own support site. For some unknown reason, the program failed to make a connection. It issued an error message telling me, erroneously, offensively, and entirely unhelpfully, that I was not connected to the Internet. It was as if the program spilled my wine, refused to clean it up, and then blamed me for it.

When an interactive product has a small problem, it often drops everything and collapses into a useless, inert heap. When it collapses, it tends to cause a lot of collateral damage. For example, an installation program will ask the user several questions before it begins loading the new program onto the hard disk. In the old days, if it ran out of disk space, it would just crash. Modern install programs are hardly better. If they run out of room, they might issue an error message, but then they stop running, forgetting all the settings you have meticulously keyed in. If you clear out some space on your hard disk and run the install again, the first thing it does is ask you all those questions again, instead of remembering what you keyed in.
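Remembering those answers is a few lines of work. This sketch, using a hypothetical answers file (`install_answers.json`) and an invented `gather_answers` helper, shows an installer loop that saves each answer as it is given, so a failed run costs the user nothing on the retry.

```python
import json
from pathlib import Path

# Hypothetical file for saved answers; a real installer might use a temp dir.
ANSWERS_FILE = Path("install_answers.json")

def gather_answers(questions, ask=input):
    """Ask only the questions not already answered in a previous, failed run."""
    answers = {}
    if ANSWERS_FILE.exists():
        answers = json.loads(ANSWERS_FILE.read_text())
    for q in questions:
        if q not in answers:
            answers[q] = ask(q + " ")
            # Save after every answer, so a crash loses nothing.
            ANSWERS_FILE.write_text(json.dumps(answers))
    return answers
```

On a rerun after clearing disk space, every question is already on file, so the installer can proceed straight to copying files.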

Software Won't Take Responsibility

Confirmation dialog boxes are one of the most ubiquitous examples of bad design: the ones that ask us if we're sure that we want to take some action. In the early days of desktop computing, programs would take irreversible actions the instant the user entered a command. Typing in "erase all" would do just that, immediately, irreversibly. As soon as the first user inadvertently erased his entire disk, he no doubt complained to the programmer, and the programmer added what he considered to be an adequate level of protection. After the user gives the "erase all" command, but before the computer executes it, the program displays a dialog box asking the user to confirm the erase instruction.

It is all so logical, yet it is all so wrong.

A confirmation dialog box is a convenient solution for the programmer because it absolves him from the responsibility of being the agent of an inadvertent erasure. But that is a misunderstanding of the real issues. The erasure is the user's responsibility, and she has already given the command. Now is not the time for the program to waver. It should go ahead and perform the requested task. The responsibility that is actually being dodged is the program's responsibility to be prepared to undo its actions, even though the user requested them.

Humans generally don't make decisions in the same way that computers do, and it is quite normal and typical for a person to change his mind or to want to undo a decision he made earlier. In the real world outside of computers, most actions can be deferred, changed, or reversed. There is no reason that this can't also be true for software-based products, except that the programmers who create them don't think that way.
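The alternative to "Are you sure?" is the familiar undo pattern: act immediately, but keep whatever is needed to reverse the action. Here is a minimal sketch, with an invented `FileStore` class standing in for whatever the program manages; it illustrates the principle rather than any particular product.

```python
class FileStore:
    """Toy document store that erases without confirmation but can undo."""

    def __init__(self):
        self.files = {"report.txt": "draft", "notes.txt": "ideas"}
        self._undo_stack = []

    def erase_all(self):
        # Act at once, no confirmation dialog, but remember how to reverse it.
        self._undo_stack.append(dict(self.files))
        self.files.clear()

    def undo(self):
        # Restore the state saved before the most recent action.
        if self._undo_stack:
            self.files = self._undo_stack.pop()
```

The program trusts the user's command, yet stands ready to take it back, which is exactly the responsibility the confirmation dialog dodges.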

The ATM in Chapter 1 abdicates responsibility with confirmations, just as desktop software does. When I insert my card, the ATM demands that I acknowledge that I have inserted my card. When I request a withdrawal, it demands that I acknowledge that I wish to withdraw money. When I enter an amount, it demands that I acknowledge that I have entered an amount. Why doesn't the machine just trust me? Why doesn't it just proceed with the transaction?

It can give me the opportunity to extricate myself from the transaction at any time in a much easier way. If the ATM merely offered a big red CANCEL button that I could press at any time, it could assume that I am intelligent and capable, and that I know what I want and what I am doing, instead of assuming that I am stupid, incompetent, and confused about what I want.

I'm sure that some of the people who use the ATM are stupid and incompetent, but nobody, not even a stupid and incompetent person, likes to be treated as if he is stupid and incompetent. Besides, it never generates customer loyalty and good feelings to treat your clients that way.

Fixing the problem isn't difficult. The program should put the word "Withdraw" at the top of the screen and leave it there throughout the transaction. Then it should put the $1.50 charge up on the screen, and leave it there, too. Then it should add the word "Checking," along with my account number, balance, and withdrawal limit, and leave them visible. Then, when I come to the amount question, I am a fully informed consumer, instead of a confused victim of an interrogation. I can make the crucial decision, the amount, from a position of knowing what is legal, available, ready, and appropriate.
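The underlying design change is simply a screen that accumulates facts instead of discarding them. This sketch uses an invented `TransactionScreen` class and hypothetical labels, not any real ATM software, to show the accumulation.

```python
class TransactionScreen:
    """Toy display that keeps every fact visible for the whole transaction."""

    def __init__(self):
        self.lines = []

    def show(self, label, value):
        # Add a fact to the display and leave it there.
        self.lines.append(f"{label}: {value}")

    def render(self):
        return "\n".join(self.lines)
```

By the time the amount question arrives, `render()` still shows the operation, the fee, the account, and the balance, so the user decides with the big picture in view.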

A system that is forthcoming with useful information such as I have described is very typical of how human systems work because humans need to see the big picture. Computers, on the other hand, need to see only a small bit of information to take the next step in the process, and that is exactly how this interaction is modeled: It assumes that the person standing there in the cold, punching buttons while her friends impatiently stamp their feet, is another computer, not a warm-blooded human being with feelings.


Many newcomers to the world of computing imagine that software behaves the way it does for some good reason. On the contrary, its behavior is often the result of some whim or accident that is thoughtlessly propagated for years. By bringing timely interaction design to the creation of software-based products, we can change its behavior to something more pleasant and productive for humans.



The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity
ISBN: B0036HJY9M
Year: 2003
Pages: 170
