13.2. Design Guidelines

Having established this background, we are now ready to look at the main challenge of secure interaction design: minimizing the likelihood of undesired events while accomplishing the user's intended tasks correctly and as easily as possible. Let's dissect that general aim into more specific guidelines for software behavior.

Minimizing the risk of undesired events is a matter of controlling authorization. Limiting the authority of other parties to access valuable resources protects those resources from harm. The authorization aspect of the problem can be broken down into five guidelines:

  1. Match the most comfortable way to do tasks with the least granting of authority.

  2. Grant authority to others in accordance with user actions indicating consent.

  3. Offer the user ways to reduce others' authority to access the user's resources.

  4. Maintain accurate awareness of others' authority as relevant to user decisions.

  5. Maintain accurate awareness of the user's own authority to access resources.

Accomplishing the user's intended tasks correctly depends on good communication between the user and the system. The user should be able to convey his or her desires to the system accurately and naturally. I'll discuss the following additional guidelines concerning the communication aspect of the problem:

  6. Protect the user's channels to agents that manipulate authority on the user's behalf.

  7. Enable the user to express safe security policies in terms that fit the user's task.

  8. Draw distinctions among objects and actions along boundaries relevant to the task.

  9. Present objects and actions using distinguishable, truthful appearances.

  10. Indicate clearly the consequences of decisions that the user is expected to make.

These guidelines are built on straightforward logic and are gleaned from the experiences of security software designers. They are not experimentally proven, although the reasoning and examples given here should convince you that violating these guidelines is likely to lead to trouble. I'll present each guideline along with some questions to consider when trying to evaluate and improve designs. Strategies to help designs follow some of these guidelines are provided in the second half of this chapter.

13.2.1. Authorization

13.2.1.1 1. Match the most comfortable way to do tasks with the least granting of authority.
What are the typical user tasks?
What is the user's path of least resistance for each one?
What authorities are given to software components and other users when the user follows this path?
How can the safest ways of accomplishing a task be made more comfortable, or the most comfortable ways made safer?

In 1975, Jerome Saltzer and Michael Schroeder wrote a landmark paper on computer security[4] proposing eight design principles; their principle of least privilege demands that we grant processes the minimum privilege necessary to perform their tasks. This guideline combines that principle with an acknowledgment of the reality of human preferences: when people are trying to get work done, they tend to choose methods that require less effort, are more familiar, or appear more obvious. Instead of fighting this impulse, use it to promote security. Associate greater risk with greater effort, less conventional operations, or less visible operations so that the user's natural tendency leads to safe operation.

[4] Jerome Saltzer and Michael Schroeder, "The Protection of Information in Computer Systems," Proceedings of the IEEE 63:9 (Sept. 1975); http://web.mit.edu/Saltzer/www/publications/protection/.

A natural consequence of this guideline is to default to a lack of access and to require actions to grant additional access, instead of granting access and requiring actions to shut it off. (Saltzer and Schroeder called this fail-safe defaults.) Although computer users are often advised to change network parameters or turn off unnecessary services, the easiest and most obvious course of action is not to reconfigure anything. Wendy Mackay has identified many barriers to user customization: customization takes time, it can be hard to figure out, and users don't want to risk breaking their software.[5]

[5] Wendy Mackay, "Users and Customizable Software: A Co-Adaptive Phenomenon," (Ph.D. Thesis, Massachusetts Institute of Technology, 1990); http://www-ihm.lri.fr/~mackay/pdffiles/MIT.thesis.A4.pdf.
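The fail-safe-defaults idea can be sketched in a few lines. This is an illustrative model, not any real system's API: the `grants` table and resource names are hypothetical, and the point is only that an absent entry falls back to denial rather than to access.

```python
# Sketch of fail-safe defaults: access is denied unless explicitly granted.
# The principals, resources, and "grants" table here are hypothetical.

grants = {
    ("word_processor", "documents/report.doc"): "read-write",
}

def check_access(principal, resource):
    """Return the granted access level, defaulting to denial."""
    return grants.get((principal, resource), "none")

# Anything not explicitly granted falls back to "none" -- the safe default.
print(check_access("word_processor", "documents/report.doc"))   # read-write
print(check_access("downloaded_game", "documents/report.doc"))  # none
```

Note that the safe outcome requires no configuration from the user: doing nothing leaves every unlisted pair denied, which matches the observation that the easiest course of action is not to reconfigure anything.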

Consider Microsoft Internet Explorer's handling of remote software installation in the context of this guideline. Just before Internet Explorer runs downloaded software, it looks for a publisher's digital signature on the software and displays a confirmation window like the one shown in Figure 13-1. In previous versions of this prompt, the default choice was "Yes". As this guideline would suggest, the default is now "No", the safer choice. However, the prompt offers an option to "Always trust content" from the current source, but no option to never trust content from this source. Users who choose the safer path are assured a continuing series of bothersome prompts.

Figure 13-1. Internet Explorer 6.0 displays a software certificate


Regardless of the default settings, the choice being offered is poor: the user must either give the downloaded program complete access to all the user's resources, or not use the program at all. The prompt asks the user to be sure he or she trusts the distributor before proceeding. But if the user's task requires using the program, the most comfortable path is to click "Yes" without thinking. It will always be easier to just choose "Yes" than to choose "Yes" after researching the program and its origins. Designs that rely on users to assess the soundness of software are unrealistic. The principle of least privilege suggests that a mechanism for running programs with less authority would be better.

13.2.1.2 2. Grant authority to others in accordance with user actions indicating consent.
When does the system authorize software components or other users to access the user's resources?
What user actions trigger these transfers of authority?
Does the user consider these actions to indicate consent to such access?

The user's mental model of the system includes a set of expectations about who can do what. To prevent unpleasant surprises, we should ensure that other parties don't gain access that exceeds the user's expectations. When a program or another user is granted access to the user's resources, that granting should be related to some user action. If another party gains access without user action, the user lacks any opportunity to update his or her mental model to include knowledge of the other party's access.

The authorizing action doesn't have to be a security setting or a response to a prompt about security; ideally, it shouldn't feel like a security task at all. It should just be some action that the user associates with the granting of that power. For example, if we ask users whether they expect that double-clicking on a Microsoft Word document would give Word access to its contents, and the vast majority say yes, then the double-click alone is sufficient to authorize Word to access the document.

In other situations, a user action can be present but not understood to grant the power it grants. This is the case when a user double-clicks on an email attachment and gets attacked by a nasty virus. The double-click is expected to open the attachment for viewing, not to launch an unknown program with wide-ranging access to the computer.
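The double-click example suggests a pattern sometimes called "designation is authorization": the same user gesture that names an object also consents to access to it. A minimal sketch, under the assumption of a system-controlled file chooser (all names here, including `open_file_dialog` and `FAKE_DISK`, are hypothetical):

```python
# Sketch of "designation is authorization": the user's act of choosing a
# file both names it and consents to the application reading it.
# open_file_dialog and FAKE_DISK are illustrative stand-ins, not a real API.

import io

FAKE_DISK = {"report.doc": "quarterly numbers"}

def open_file_dialog(user_choice):
    """Stand-in for a system file chooser: the system (not the app)
    resolves the user's choice and hands back a read-only stream."""
    return io.StringIO(FAKE_DISK[user_choice])

# The application never sees the rest of the filesystem -- only the one
# stream for the file the user designated.
doc = open_file_dialog("report.doc")
print(doc.read())  # quarterly numbers
```

The design point is that no separate security dialog is needed: the act of choosing already carries the user's intent, so the mental model and the actual grant stay in step.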

13.2.1.3 3. Offer the user ways to reduce others' authority to access the user's resources.
What types of access does the user grant to software components and other users?
Which of these types of access can be revoked?
How can the interface help the user to find and revoke such access?

After granting authorities, the user needs to be able to take them back in order to retain control of the computer. Without the ability to revoke access, the user cannot simplify system behavior and cannot recover from mistakes in granting access.

Closing windows is an example of a simple act of revocation that users understand well. When users close a document window, they expect the application to make no further changes to the document. This reduces the number of variables they have to worry about and lets them get on with other tasks.
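One well-known way to make a granted authority revocable is the "caretaker" pattern from capability systems: instead of handing out the real object, hand out a forwarder that the grantor can later disconnect. A minimal sketch, with illustrative class and method names:

```python
# Sketch of revocable access via the "caretaker" pattern: the holder
# gets a forwarder, and the grantor can sever it at any time.
# Class and method names are illustrative only.

class Document:
    def read(self):
        return "contents"

class Revoker:
    def __init__(self, target):
        self._target = target

    def revoke(self):
        """Cut off the holder's access; the holder cannot restore it."""
        self._target = None

    def read(self):
        if self._target is None:
            raise PermissionError("access has been revoked")
        return self._target.read()

doc = Document()
handle = Revoker(doc)     # give out the forwarder, not the document
print(handle.read())      # contents
handle.revoke()           # like closing the window: no further access
```

After `revoke()`, any later `read()` through the forwarder raises an error, which is exactly the property the closed-window expectation relies on: once access is withdrawn, the other party can make no further use of the resource.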

Lack of revocability is a big problem when users are trying to uninstall software. The process of installing an application or a device driver usually provides no indication of what resources are given to the new software, what global settings are modified, or how to restore the system to a stable state if the installation fails. Configuration changes can leave the system in a state where other software or hardware no longer works properly. Microsoft Windows doesn't manage software installation or removal; it leaves these tasks up to applications, which often provide incomplete removal tools or no removal tools at all. Spyware exploits this problem: most spyware is designed to be difficult to track down and remove. An operating system with good support for revocability would maintain accurate records of installed software, allow the user to select unwanted programs, and cleanly remove them.

13.2.1.4 4. Maintain accurate awareness of others' authority as relevant to user decisions.
What kinds of authority can software components and other users hold?
Which kinds of authority impact user decisions with security consequences?
How can the interface provide timely access to information about such authorities?

To use any software safely, the user must be able to evaluate whether particular actions are safe, which requires accurate knowledge of the possible consequences. The consequences are bounded by the access that has been granted to other parties. Because human memory is limited and fallible, expecting users to remember the complete history of authorizations is unrealistic. The user needs a way to refresh his or her mental model by reviewing which programs or other users have access to do which things.

Special attention must be paid to powers that continuously require universal trust, such as intercepting user input or manipulating the internal workings of running programs. For example, Microsoft Windows provides facilities enabling programs to record keystrokes and simulate mouse clicks in arbitrary windows. To grant such powers to another entity is to trust that entity completely with all of one's access to a system, so activation of these authorities should be accompanied by continuous notification.

Spyware exploits both a lack of consent to authorization and a lack of awareness of authority. Spyware is often included as part of other software that the user voluntarily downloads and wants to use. Even while the other software isn't being used, the spyware remains running in the background, invisibly compromising the user's privacy.

13.2.1.5 5. Maintain accurate awareness of the user's own authority to access resources.
What kinds of authority can the user hold?
How is the user informed of currently held authority?
How does the user come to know about acquisition of new authority?
What decisions might the user make based on his or her expectations of authority?

Users are also part of their own mental models. Their decisions can depend on their understanding of their own access. When users overestimate their authority, they may become vulnerable to unexpected risks or make commitments they cannot fulfill.

PayPal provides a good example of this problem in practice. When money is sent, PayPal sends the recipient an email message announcing "You've got cash!" Checking the account at the PayPal site will show the transaction marked "completed," as in Figure 13-2. If the sender of the money is buying an item from the recipient, the recipient would probably feel safe at this point delivering the item.

Figure 13-2. PayPal sends email notifying a recipient of "cash" and displays the details of a "completed" transaction


Unfortunately, PayPal's announcement generates false expectations. Telling the recipient that they've "got cash" suggests that the funds are concrete and under the recipient's control. Prior experience with banks may lead the recipient to expect that transactions clear after a certain period of time, just as checks deposited at most banks clear after a day or two. But although PayPal accounts look and act like bank accounts in many ways, PayPal's policy on payments[6] does not commit to any time limit by which payments become permanent; PayPal can still reverse the payment at any time. By giving the recipient a false impression of access, PayPal exposes the recipient to unnecessary risk.

[6] PayPal, "Payments (Sending, Receiving, and Withdrawing) Policy" (Nov. 21, 2004); http://www.paypal.com/cgi-bin/webscr?cmd=p/gen/ua/policy_payments-outside.

13.2.2. Communication

13.2.2.1 6. Protect the user's channels to agents that manipulate authority on the user's behalf.
What agents manipulate authority on the user's behalf?
How can the user be sure that he or she is communicating with the intended agent?
How might the agent be impersonated?
How might the user's communication with the agent be intercepted or corrupted?

When someone uses a computer to interact in a particular world, there is a piece of software that serves as the user's agent in the world. For example, a web browser is the agent for interacting with the World Wide Web; a desktop GUI or command-line shell is the agent for interacting with the computer's operating system. If communication with that agent can be spoofed or corrupted, the user is vulnerable to an attack. In standard terminology, the user needs a trusted path for communicating with the agent.

The classic way to exploit this issue is to present a fake password prompt. If a web browser doesn't enforce a distinction between its own password prompts and prompt windows that web pages can generate, a malicious web site could use an imitation prompt to capture the user's password.

Techniques for preventing impersonation include designating reserved hotkeys, reserving areas of the display, and demonstrating privileged abilities. Microsoft Windows reserves the Ctrl-Alt-Delete key combination for triggering operating system security functions: no application program can intercept this key combination, so when users press it to log in, they can be sure that the password dialog comes from the operating system. Many web browsers reserve an area of their status bar for displaying an icon to indicate a secure connection. Trusted Solaris reserves a "trusted stripe" at the bottom of the screen for indicating when the user is interacting with the operating system.

Eileen Ye and Sean Smith have proposed adding flashing colored borders[7] to distinguish window areas controlled by the browser from those controlled by the remote site. The borders constantly flash in a synchronized but unpredictable pattern, which makes them hard to imitate, but would probably annoy users. For password prompts, the Safari web browser offers a more elegant solution: the prompt drops out of the titlebar of the browser window and remains attached (see Figure 13-3) in ways that would be difficult for a web page script to imitate. Attaching the prompt to the window also prevents password prompts for different windows from being confused with each other.

[7] Zishuang (Eileen) Ye and Sean Smith, "Trusted Paths for Browsers," Proceedings of the 11th USENIX Security Symposium (USENIX, 2002); http://www.usenix.org/events/sec02/ye.html.

Figure 13-3. Password prompts in Safari fall out of the titlebar like a flexible sheet of paper and remain attached to the associated window


13.2.2.2 7. Enable the user to express safe security policies in terms that fit the user's task.
What are some examples of security policies that users might want enforced for typical tasks?
How can the user express these policies?
How can the expression of policy be brought closer to the task, ideally disappearing into the task itself?

If security policies are expressed using unfamiliar language or concepts unrelated to the task at hand, users will find it difficult to set a policy that corresponds to their intentions. When the security model doesn't fit, users may expose themselves to unnecessary risk just to get their tasks done.

For instance, one fairly common task is to share a file with a group of collaborators. Unix file permissions don't fit this task well. In a standard Unix filesystem, each file is assigned to one owner and one group. The owner can choose any currently defined group, but cannot define new groups. Granting access to a set of other users is possible only if a group is already defined to contain those users. This limitation encourages users to make their files globally accessible, because that's easier than asking the system administrator to define a new group.
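The mismatch is visible in the file metadata itself: a Unix file carries exactly one owner ID and one group ID, with permission bits divided among just owner, group, and others. A short inspection sketch (the exact uid/gid and mode values will vary by system):

```python
# Each Unix file records exactly one owner and one group -- there is no
# slot for an ad-hoc set of collaborators. This inspects a fresh file's
# single owner/group pair and its three-class permission bits.

import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

st = os.stat(path)
print(st.st_uid, st.st_gid)       # one uid, one gid -- no room for more

# The mode bits divide the world into owner / group / others only:
mode = stat.S_IMODE(st.st_mode)
print(oct(mode))                  # e.g. 0o600 on most systems

os.remove(path)
```

Because the only grouping mechanism is that single gid, and defining new groups is an administrator's privilege, the path of least resistance for sharing with an arbitrary set of users is the unsafe one: opening the file to everyone.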

13.2.2.3 8. Draw distinctions among objects and actions along boundaries relevant to the task.
At what level of detail does the interface allow objects and actions to be separately manipulated?
During a typical task, what distinctions between affected objects and unaffected objects does the user care about?
What distinctions between desired actions and undesired actions does the user care about?

Computer software systems consist of a very large number of interacting parts. To help people handle this complexity, user interfaces aggregate objects into manageable chunks: for example, bytes are organized into files, and files into folders. User interfaces also aggregate actions: downloading a web page requires many steps in the implementation, but for the user it's a single click. Interface design requires decisions about which distinctions to expose and hide. Exposing pointless distinctions generates work and confusion for the user; hiding meaningful distinctions forces users to take unnecessary risks.

On a Mac, for example, an application is shown as a single icon even though, at the system level, that icon represents a set of folders containing all the application's files. The user can install or remove the application by manipulating just that one icon. The user doesn't have to deal with the individual files, or risk separating the files by mistake. This design decision simplifies the user's experience by hiding distinctions that don't matter at the user level.

On the other hand, the security controls for web page scripts in Mozilla neglect to make important distinctions. Mozilla provides a way for signed scripts to gain special privileges,[8] but the only option for file access is a setting that grants access to all files. When the user is asked whether to grant that permission, there is no way to control which files the script is allowed to access. The lack of a boundary here forces the user to gamble the entire disk just to access one file.

[8] Jesse Ruderman, "Signed Scripts in Mozilla"; http://www.mozilla.org/projects/security/components/signed-scripts.html.

13.2.2.4 9. Present objects and actions using distinguishable, truthful appearances.
How does the user identify and distinguish different objects and different actions?
In what ways can the means of identification be controlled by other parties?
What aspects of an object's appearance are under system control?
How can those aspects be chosen to best prevent deception?

In order to use a computer safely, the user needs to be able to identify the intended objects and actions when issuing commands to the computer. If two objects look indistinguishably similar, the user risks choosing the wrong one. If an object comes to have a misleading name or appearance, the user risks placing trust in the wrong object. The same is true for actions that have some representation in the user interface.

Identification problems can be caused by names or appearances that are hard to distinguish even if they aren't exactly the same. For example, in some typefaces, the lowercase "L" looks the same as the digit "1" or the uppercase "I", making some names virtually indistinguishable; later in this chapter, we'll look at a phishing attack that exploits just this ambiguity. Unicode adds another layer of complexity to the problem, because different character sequences can be displayed identically: an unaccented character followed by a combining accent appears exactly the same as a single accented character.
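The Unicode case is easy to demonstrate: two different code-point sequences render as the same glyph, and only normalization reveals the equivalence. A small sketch using the standard `unicodedata` module:

```python
# Two different character sequences can display identically: an accented
# letter as one code point, or a base letter plus a combining accent.

import unicodedata

precomposed = "\u00e9"    # 'é' as a single code point
combining = "e\u0301"     # 'e' followed by COMBINING ACUTE ACCENT

print(precomposed == combining)   # False: the underlying data differs

# Normalizing to NFC collapses the pair into the single code point,
# which is one way software can compare names as the user sees them:
print(unicodedata.normalize("NFC", combining) == precomposed)  # True
```

An interface that compares or displays names without normalizing leaves room for two "identical-looking" identifiers to name different objects, which is precisely the ambiguity that spoofing attacks exploit.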

No interface can prevent other parties from lying. However, unlike objects in the real world, objects in computer user interfaces do not have primary control over their own appearances. The software designer has the freedom to choose any appearance for objects in the user interface. Objects can "lie" in their appearances only to the extent that the user interface relies upon them to present themselves. The designer must choose carefully what parts of the name or appearance are controlled by the system or controlled by the object, and uphold expectations about consistent parts of the appearance.

For instance, Microsoft Windows promotes the convention that each file's type is indicated by its extension (the part of the filename after the last period) and that an icon associated with the file type visually represents each file. The file type determines how the file will be opened when the icon is double-clicked. Unfortunately, the Windows Explorer defaults to a mode in which file extensions are hidden; in addition, executable programs (the most dangerous file type of all) are allowed to choose any icon to represent themselves. Thus, if a program has the filename document.txt.exe and uses the icon for a text file, the user sees a text file icon labeled document.txt. Many viruses have exploited this design flaw to disguise themselves as harmless files. By setting up expectations surrounding file types and also providing mechanisms to violate these expectations, Windows grants viruses the power to lie.
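The disguise works because the displayed name and the true type come apart once the final extension is hidden. A simple defensive check for this mismatch might look like the following sketch; the list of dangerous extensions is illustrative, not exhaustive:

```python
# With extensions hidden, "document.txt.exe" displays as "document.txt".
# This flags names whose real (final) extension is dangerous while an
# inner extension suggests a harmless type. DANGEROUS is illustrative.

import os

DANGEROUS = {".exe", ".scr", ".pif", ".com"}

def looks_disguised(filename):
    shown, real_ext = os.path.splitext(filename)  # what hiding would display
    inner_ext = os.path.splitext(shown)[1]        # the "apparent" type
    # Dangerous real type hiding behind an innocuous-looking inner extension:
    return real_ext.lower() in DANGEROUS and inner_ext != ""

print(looks_disguised("document.txt.exe"))  # True
print(looks_disguised("setup.exe"))         # False: nothing is disguised
print(looks_disguised("notes.txt"))         # False
```

Such a check treats the name as the user perceives it, not as the system parses it, which is the gap the viruses in question exploit.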

13.2.2.5 10. Indicate clearly the consequences of decisions that the user is expected to make.
What user decisions have security implications?
When such decisions are being made, how are the choices and their consequences presented?
Does the user understand the consequences of each choice?

When the user manipulates authorities, we should make sure that the results reflect what the user intended. Even if the software can correctly enforce a security policy, the policy being enforced might not be what was intended if the interface presents misleading, ambiguous, or incomplete information. The information needed to make a good decision should be available before the action is taken.

Figure 13-4 shows an example of a poorly presented decision. Prompts like this one are displayed by the Netscape browser when a web page script requests special privileges. (Scripts on web pages normally run with restricted access for safety reasons, although Netscape has a feature allowing them to obtain additional access with user consent.) The prompt asks the user to grant a privilege, but it doesn't describe the privilege to be granted, the length of time it will remain in effect, or how it can be revoked. The term "UniversalXPConnect" is almost certainly unrelated to the user's task. The checkbox labeled "Remember this decision" is also vague, because it doesn't indicate how the decision would be generalized: does it apply in the future to all scripts, all scripts from the same source, or repeated uses of the same script?

Figure 13-4. A script requests enhanced privileges in Netscape 7.2


An interface can also be misleading or ambiguous in nonverbal ways. Many graphical interfaces use common widgets and metaphors, conditioning users to expect certain unspoken conventions. For example, a list of round radio buttons indicates that only one of the options can be selected, whereas a list of square checkboxes suggests that any number of options can be selected. Visual interfaces also rely heavily on association between elements, such as the placement of a label next to a checkbox or the grouping of items in a list. Breaking these conventions causes confusion.



Security and Usability: Designing Secure Systems That People Can Use
ISBN: 0596008279
Year: 2004
Pages: 295