7-3 Ethics and Management of Interface Design


The Humane Interface: New Directions for Designing Interactive Systems
By Jef Raskin
Chapter Seven. Interface Issues Outside the User Interface

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable.

George Bernard Shaw

It is difficult to create a good interface when management does not think that interface design is important. In the short-term view, careful interface design may appear to add to the expense of creating a product and to lengthen the development time. In my experience, the short-term view is wrong even in the short term; improving the user interface often simplifies the design. Careful design and detailed specifications do not hinder but rather speed implementation. A superior interface is also an exceptional long-term investment that returns

  • Higher productivity for the customer

  • Increased customer satisfaction

  • A greater perceived value

  • A lowered cost of customer support

  • Faster and simpler implementation

  • A competitive marketing advantage

  • Brand loyalty

  • Simpler manuals and online help

  • Safer products

Interface designers are rarely in a position to control where in the development cycle interface design will occur and what weight will be given to its pronouncements. Where it has been given primacy, as in the Macintosh project, the results can be spectacular.

Aside from the field being relatively new, so that few practitioners have moved into higher management, another problem is that interface designers have little professional or collective clout. There is some work toward partially solving this problem by proposing educational standards and testing, but a practitioner's having a certificate is no guarantee of competence. The concern addressed here is with the other face of the problem: Assuming that a designer is competent, he or she is often required to design bad interfaces. I note with a trace of envy that doctors have legal safeguards if they wish to do right. For example, doctors can sue for wrongful termination if they are fired for refusing to follow a course of action that poses a threat to their patients. Structural engineers can seek the protection of the law if fired for refusing to violate the canons of their profession.

Interface designers work in an area in which incorrect decisions can contribute to physical injury and psychological debility. For example, excessive keystrokes and mouse clicks in an interface can hasten the onset of repetitive stress injuries (RSI) or exacerbate existing ones. Poor interface design can cause psychological distress. What is needed is a basis for establishing legal safeguards to protect conscientious practitioners. Another necessity is to establish some standards of practice, as distinct from interface design standards, for the profession. Measures, such as those discussed in this book and those that will be developed in the future, may allow numerical, objective benchmarks to be established. For example, a structural engineer must show that she has designed a bridge that meets codified standards requiring that it withstand a load of, say, twice the maximum expected, and a car must have less than 0.2 percent carbon monoxide emissions in its exhaust in order to be licensed. Similarly, we may be able to specify that an interface to a word processor is not acceptable if, say, it has an overall information-theoretic efficiency of less than 0.7 or an overall character efficiency of less than 0.8, with no individual feature rating less than 0.5.

Criteria might also be developed such that the average times and number of keystrokes and graphical input device movements and clicks to do tasks, averaged over a frequency-of-use weighted set of tasks, in a new word processor cannot exceed those of any prior or contemporary commercial product that does substantially the same task. Products that meet the criteria would receive some form of certification. The criteria would self-adjust as interface technology improves. At present, new products are often more difficult to use than older ones, but there is no way to know this until you've tried the product. Because these criteria affect productivity, and thus the bottom line, enterprise management should take great interest in them. Not only practitioners and management but also consumers would gain protection from the publication of objective standards of interface quality.
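A frequency-of-use weighted criterion of this kind is simple to compute. The sketch below compares a new product against an existing one over a small task set; the task names, keystroke counts, and frequencies are invented for illustration.

```python
def weighted_cost(keystrokes_per_task, frequencies):
    """Average keystroke count over a task set, weighted by how often
    each task is performed."""
    total_freq = sum(frequencies)
    return sum(k * f for k, f in zip(keystrokes_per_task, frequencies)) / total_freq

# Hypothetical benchmark: (task, keystrokes in the new product,
# keystrokes in the best existing product, relative frequency of use).
tasks = [
    ("open file",   4,  6, 0.30),
    ("save file",   2,  2, 0.50),
    ("make index", 12, 20, 0.20),
]
new_cost = weighted_cost([t[1] for t in tasks], [t[3] for t in tasks])
old_cost = weighted_cost([t[2] for t in tasks], [t[3] for t in tasks])
# Certification criterion: the new product must not regress.
print(new_cost <= old_cost)  # 4.6 vs. 6.8 weighted keystrokes -> True
```

Because the comparison is against the best contemporary product, the passing bar rises automatically as interfaces improve.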

Steve Wildstrom, a writer for Business Week, pointed out that "computer manufacturers, and especially software publishers, often seem to believe that the requirements of the Uniform Commercial Code only apply to other people" (personal communication, October 1998). Many of the present software licenses that are unilaterally foisted on consumers do not promise even that the software will do the task for which it is advertised. Many of these documents specifically deny merchantability, the concept that the sale of a product automatically carries with it the presumption and promise that the product can do the task for which it was clearly intended. Some states in the United States have passed legislation that nullifies denials of merchantability for computer products; all sovereign powers should do so.

An interface-quality rating system, administered by an independent organization, might be of value to buyers of products that have a significant interface component. User interface design itself must not be regulated or restricted, and we must avoid guidelines that are based on specific interface mechanisms so that innovation will not be penalized, but relative performance ratings that compare the productivity of comparable products will spur designers to move in the right directions.

We must walk a fine line between making a product so novel that users experienced with conventional interfaces will feel lost, and making it so much like standard GUIs that we are failing to aid users to the best of our ability. At one extreme, we must avoid novelty for its own sake, although at the other extreme, we must not lose a valuable marketing opportunity by slavishly making our product too much like other products.

One of the oldest canards in the interface business is the one that says "Maximizing functionality and maintaining simplicity work against each other in the interface" (Microsoft 1995, p. 8). What is true is that adding ad hoc features works against simplicity. But that's just bad design. It is often, but not always, possible to increase functionality without increasing difficulty at a greater rate. Often, added functionality can be had without any added interface complexity; note the difference between interface complexity and task complexity. If the added functionality unifies what had previously been disparate features, the interface can get simpler.

"One way to support simplicity is to reduce the presentation of information to the minimum required to communicate adequately" (Microsoft 1995, p. 8). That is certainly the case, except that I'd replace "adequately" with "well." They err, however, by saying, "For example, avoid wordy descriptions for command names or messages" (Microsoft 1995, p. 8). But what is the minimum required to communicate well? Most of today's interfaces emphasize brevity at the expense of clarity. Why should we have to decipher a cryptic item marked "List" in a word processor's pull-down menu when it could say "Make Index or Table of Contents"? (Remember that a pull-down menu takes up no room from your document, because it goes away as soon as you move the cursor away from it or choose an option.) Do not confuse a clean, open look to the screen with a simple-to-use interface.

Navigation versus White Space

We seem to have a real fear of displaying data in our interfaces. We know that people can find one among a few items much more quickly than they can find one among dozens: There is less to look through. But it does not follow, as some seem to think, that it is therefore better to have fewer items on each screen. If you have hundreds of items and split them up among dozens of screens, you lose more in navigation time than you gain in searching for the individual item, even if the one you seek is swimming in a sea of similar-looking items. Looking for something in long, gray lists is not always a terrible thing in an interface.

If people weren't good at finding tiny things in long lists, the Wall Street Journal would have gone out of business years ago. Would you rather have the stocks listed 15 to a page, each page decorated like a modern GUI screen, even given some scheme for helping you to find the right page, such as:
    Stocks Aaa–Azz: see page 1
    Stocks Baa–Bzz: see page 2
    Stocks Caa–Czz: see page 3

and so forth


Such a scheme would be considered childish, wasteful, and a nuisance. Yet we sometimes spend more pixels on a screen in making neat, gray-shadowed borders than we do in presenting information. When a person is motivated by interest or by salary to refer to data, he finds long lists no problem at all. Visual designer Edward Tufte's (1983, p. 105) first three principles for displaying information are:

  • Above all else, show the data.

  • Maximize the data-ink ratio.

  • Erase nondata ink.

All we need do is substitute pixels for ink for his advice to apply to display-based devices. A serious, professional user wants screens packed with useful stuff. Screens should be dense with the information that the user wants, and well-labeled, with methods that make finding information easier. (After all, we do have a computer sitting there, and we should be making the best use of it we can.)

There are a multitude of studies on the design of displays. Many early but still useful studies are discussed in Tullis (1984). Some of the results are still valid, such as the finding that search time through a list of items is approximately 30 milliseconds per item (p. 126). Tullis's major results apply to 24-by-80 alphanumeric displays and give a quantitative evaluation.[2] If the results were extended to today's bitmapped displays and could give a measure of target-finding time, they could be used to optimize not only single, isolated, nonwindowed displays (Tullis's restrictions) but could also, in conjunction with estimates of navigation time, lead to a more global optimization. There is almost certainly a tradeoff between screen complexity and navigational complexity, as Tullis himself noted (p. 132). The tradeoff depends on the speed and ease of navigation and how the data is structured. Where a search facility, such as LEAP, is used for a significant percentage of within-screen searches instead of visual scanning, still other measures would have to be devised. This is a lively area for further research.
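The tradeoff can be made concrete with a toy model: total time is the time to navigate to the right screen plus the time to scan items on the screens visited. The per-item figure comes from Tullis's ~30 milliseconds per item; the one-second cost per screen change and the assumption that the target is equally likely to be on any screen are mine, chosen only to illustrate the shape of the tradeoff.

```python
import math

def total_search_time(n_items, items_per_screen,
                      per_item_s=0.030, nav_per_screen_s=1.0):
    """Expected time to find one item when n_items are split evenly across
    screens. Scanning costs per_item_s per item looked at (Tullis's ~30 ms
    figure); each screen change costs nav_per_screen_s (an assumed value).
    On average the target sits on the middle screen, so we pay for the
    screen changes to reach it, a full scan of each screen passed over,
    and half a scan of the target screen."""
    n_screens = math.ceil(n_items / items_per_screen)
    avg_screens_visited = (n_screens + 1) / 2
    nav_time = (avg_screens_visited - 1) * nav_per_screen_s
    scan_time = (avg_screens_visited - 0.5) * items_per_screen * per_item_s
    return nav_time + scan_time

# 300 stock listings: one dense screen versus twenty sparse ones.
print(total_search_time(300, 300))  # 4.5 seconds, all of it scanning
print(total_search_time(300, 15))   # 14.0 seconds, mostly navigation
```

Under these assumptions the single dense screen wins by a wide margin, and it keeps winning unless navigation becomes nearly free, which matches the argument above.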

In any case, following the popular white-space-makes-for-readability philosophy of screen design to its logical conclusion would teach us that there should be only one item of data on any screen. The user would then always be able to detect it visually with an irreducible minimum of effort.

Having improved many products by decreasing the number of screens and increasing the information on each of the remaining screens (improving the logic of the design to achieve the reduction), I have come to believe that, in almost all commercial software, we have erred on the side of too little data per screen.

[2] Tullis's measures looked only at layout, not at content quality. It is possible to construct counterexamples, that is, displays that rate highly on his measures but are difficult to use.

The best way to differentiate your product's interface is to make it work. A well-written and cogent argument for the importance of interface issues, directed in large part to management, is The Invisible Computer (Norman 1998).


The Humane Interface: New Directions for Designing Interactive Systems (ACM Press Series)
ISBN: 1591403723
Year: 2000
