2.2. Product: Human Factors, Policies, and Security Mechanisms

It is unfortunate that usability and security are so often seen as competing design goals, because only mechanisms that are used, and used correctly, can offer the protection intended by the security designer. As Bruce Tognazzini points out in Chapter 3, a secure system needs to be actually, not theoretically, secure. When users fail to comply with the behavior required by a secure system, security will not work as intended. Users fail to behave as required for one of two reasons:

  • They are unable to behave as required.

  • They do not want to behave in the way required.

2.2.1. Impossible Demands

The current situation with computer passwords provides a good example of the first case: most users today find it impossible to comply with standard policies governing the use of computer passwords (see Chapter 7 in this volume). Remembering a single, frequently used password is a perfectly manageable task for most users. But most users today have many knowledge-based authentication items to deal with. We have multiple and frequently changed passwords in the work context, in addition to passwords and personal identification numbers (PINs) outside work, some of which are used infrequently or require regular change. The limitations of human memory make it impossible for most users to cope with the memory performance this requires.[4] As a result, users behave in ways forbidden by most security policies:

[4] M. Angela Sasse, Sacha Brostoff, and Dirk Weirich, "Transforming the 'weakest link': a human-computer interaction approach to usable and effective security," BT Technology Journal, 19 (2001), 122–131.

  • Users write passwords down. Externalizing items we have to remember is the most common way of dealing with memory problems. In office environments, users stick notes with passwords onto their screens, or maintain a list of current passwords on the nearest whiteboard.

    Similarly, many bank customers write their PINs on their cards. A less common remedy is to write or scratch the PIN on the ATM or its surroundings.

    ANECDOTAL EVIDENCE

    • A reality TV show set in a police station in the UK featured a whiteboard behind a PC used to log the movement of prisoners with a prominent reminder:

    • The customer relations manager of a UK building society received irate phone calls after a major re-branding exercise, in which ATMs and the surrounding environments had been restyled. The customers did not object to the new corporate color scheme, but rather, to the fact that the panels and surroundings onto which they had written or scratched their PINs had been replaced, and as a result they were unable to withdraw cash.


  • Users share passwords with other users. Another common way of preventing loss of data due to the vagaries of human memory is by sharing the information widely, so if you cannot remember the password, you are likely to find a colleague who can.

  • Users choose passwords that are memorable but not secure (when the mechanism allows this).[5] Many users choose passwords or PINs that are memorable but easy to crack (names of spouses or favorite sports stars, birth dates, 1234, 8888).

    [5] Sacha Brostoff and Angela M. Sasse, "Ten strikes and you're out: increasing the number of login attempts can improve password usability," CHI Workshop on Human-Computer Interaction and Security Systems (Apr. 16, 2003, Ft. Lauderdale, FL).

The standard password mechanism is cheap to implement and, once recalled, quick to execute. But in the preceding examples, users are knowingly breaking the rules, and the examples give a feeling for the despair that the ever-growing number of passwords and PINs induces in many users. A key human factors principle is not to impose unreasonable demands on users; in fact, designers should minimize the physical and, especially, the mental workload that a system creates for the user.

Frequently used passwords, that is, passwords used on a daily basis, are not a problem for the average user in an office context. Infrequently used passwords and PINs, however, can create significant problems; for instance, many people who withdraw money only once a week have trouble recalling their PIN. There are a number of ways in which the memory demands of passwords and PINs can be reduced:

  • Provide mechanisms that require users to recognize items rather than recall them. Recognition is an easier memory task than recollection, and designers of graphical user interfaces (GUIs) have applied this principle for decades now. Recognition of images[6], [7] has already been used for security mechanisms; but even text-based challenge-response mechanisms (see Chapter 8) and associative passwords[8] can offer improvements over the unaided recall that current passwords require.

    [6] Rachna Dhamija and Adrian Perrig, "Déjà Vu: A User Study Using Images for Authentication," Proceedings of the 9th USENIX Security Symposium (Aug. 2000, Denver, CO).

    [7] Passfaces (2004); http://www.realuser.com/cgi-bin/ru.exe/_/homepages/index.htm.

    [8] Moshe Zviran and William J. Haga, "Cognitive Passwords: The Key to Easy Access Control," Computers & Security, 9:8 (1990), 723–736.

  • Keep the number of password changes to a minimum. Login failures increase sharply after password changes[9], [10] because the new item competes with the old one.

    [9] Brostoff and Sasse, 2003.

    [10] Sasse, Brostoff, and Weirich.

  • Provide mechanisms that are forgiving. Current password and PIN mechanisms require the item to be recalled and entered 100% correctly. Brostoff and Sasse found[11] that users do not completely forget passwords. Most of the time they confuse them with other passwords, do not recall them 100% correctly, or mistype them on entry. This means that, given a larger number of attempts, most users will eventually log in successfully. They report that when the standard limitation of three attempts was removed, the number of successful logins increased from 53% to 93% within nine attempts. Not having to reset a password saves users considerable time and effort: the time, effort, and possible embarrassment involved in contacting a system administrator or help desk, and having to think of, and memorize, a new password. From the organization's point of view, a 40% reduction in resets saves considerable system administrator or help desk time.

    [11] Sacha Brostoff and Angela M. Sasse, "Are Passfaces More Usable Than Passwords? A Field Trial Investigation," People and Computers XIV: Usability or Else! Proceedings of HCI 2000 (Sept. 5–8, 2000, Sunderland, U.K.), 405–424.
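The effect of such a forgiving policy is easy to prototype. The sketch below is a toy in-memory account store of our own devising (the ten-attempt limit, the record layout, and the function names are illustrative assumptions, not the mechanism Brostoff and Sasse actually tested): a near-miss entry simply invites another try, and the account locks, forcing a reset, only after ten failures rather than the conventional three.

```python
import hashlib
import secrets

MAX_ATTEMPTS = 10  # "ten strikes" rather than the conventional three

# Toy in-memory account store; a real system would persist these records.
accounts = {}

def register(user, password):
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    accounts[user] = {"salt": salt, "hash": digest, "failures": 0}

def login(user, password):
    acct = accounts[user]
    if acct["failures"] >= MAX_ATTEMPTS:
        return "locked"  # only now force a reset / help-desk call
    digest = hashlib.sha256((acct["salt"] + password).encode()).hexdigest()
    if digest == acct["hash"]:
        acct["failures"] = 0  # success wipes the slate clean
        return "ok"
    acct["failures"] += 1
    return "retry"

register("alice", "correct horse")
print(login("alice", "correct hors"))   # a near-miss: retry, not lockout
print(login("alice", "correct horse"))  # succeeds on a later attempt
```

Under this policy, the mistyped first attempt costs the user nothing but a second try; only a sustained run of failures, which is far more typical of an attacker than of a forgetful owner, triggers the expensive reset path.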

As mentioned previously, usability and security are often seen as competing goals, and security experts are often inclined to reject proposals for improving usability (such as the ones listed earlier) because the help given to users might also help an attacker. More usable mechanisms tend to be discounted because they may introduce an additional vulnerability or increase risk; for example, changing passwords less frequently means that a compromised password may be used for longer. However, we would argue that a usable mechanism should not be dismissed out of hand because it may introduce a new vulnerability or increase an existing one. Such a sweeping dismissal ignores the importance of human factors and economic realities, and, as Tognazzini points out in Chapter 3, the goal of security must be to build systems that are actually secure, as opposed to theoretically secure. For example, users' inability to cope with the standard requirements attached to passwords leads to frequent reset requests. This increases the load on system administrators, and in response many organizations set up help desks. In many organizations, the mounting cost of these help desks has been deemed unacceptable.[12]

[12] Sasse, Brostoff, and Weirich.

To cope with the increasing frequency of forgotten passwords, many organizations have introduced password reminder systems, or encouraged users to write down passwords "in a secure manner", for example, in a sealed envelope kept in a locked desk drawer. But such hastily arranged "fixes" to unusable security mechanisms are often anything but secure:

  • Password reminders. These may be convenient for users and a fast and cheap fix from the organization's point of view, but they create considerable vulnerabilities that can be exploited by an attacker, and the fact that the password has been compromised may not be detected for some time. For this reason, the FIPS password guidelines[13] mandate that forgotten passwords should not be reissued, but must be reset.

    [13] "Announcing the Standard for Password Usage," Federal Information Processing Standards Publication 112 (May 30, 1985).

  • Encouraging users to write down passwords. This violates the cardinal principle of knowledge-based authentication: that the secret should never be externalized. The "secure manner" of writing down passwords facilitates insider attacks. And relaxing the rules may seem to help users, but also has drawbacks. Simple but strong rules ("you should never write down this password, or tell anyone what it is") are easier for users to cope with than more permissive but complex ones ("it's OK to write down your password and keep it in a sealed envelope in your desk, but it's not OK to write it on a Post-it that you keep under your mouse pad").
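The reset-not-reissue requirement follows directly from how passwords ought to be stored. The sketch below is a minimal illustration of our own (the function names and token length are illustrative choices, not part of FIPS 112): if the system keeps only a salted hash, a "reminder" is technically impossible, and recovery has to go through a fresh one-time reset token instead.

```python
import hashlib
import secrets

# Store only a salted hash: the plaintext cannot be "reminded", only reset.
def store(password):
    salt = secrets.token_hex(8)
    return {"salt": salt,
            "hash": hashlib.sha256((salt + password).encode()).hexdigest()}

def verify(record, password):
    digest = hashlib.sha256((record["salt"] + password).encode()).hexdigest()
    return secrets.compare_digest(digest, record["hash"])

def issue_reset_token():
    # One-time random token, delivered out of band and expired after use;
    # the old password is never recovered or reissued.
    return secrets.token_urlsafe(16)

record = store("s3cret")
print(verify(record, "s3cret"))   # True
print(verify(record, "guess"))    # False
```

A reminder system, by contrast, forces the organization to keep passwords in recoverable form, which is precisely the stored secret an attacker or insider can then harvest.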

The risks associated with changing passwords less frequently thus need to be weighed against the risks associated with real-world fixes to user problems, such as password reminders and writing down passwords. The FIPS guidelines actually acknowledge that the load on users created by frequent password changes creates its own risks, which in many contexts outweigh those created by changing a password less frequently. Allowing users more login attempts helps only a fellow user attacking the system from the inside, and makes no difference if the main threat is a cracking attack. Frequent changing or resetting of passwords, on the other hand, tends to lead users to create weaker passwords: more than half of users' passwords consist of a word with a number appended,[14] a fact that helps crackers cut down significantly the time required for a successful cracking attack.[15]

[14] Sasse, Brostoff, and Weirich.

[15] Jeff Yan, "A Note on Proactive Password Checking," Proceedings of the New Security Paradigms Workshop 2001 (ACM Press, 2001).
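The "word with a number at the end" pattern is exactly the kind of weakness a proactive password checker, of the sort Yan discusses, can reject at the moment the password is chosen. The toy sketch below is our own illustration (the five-word dictionary, the rule set, and the messages are invented for the example; a real checker would use a full wordlist and many more rules):

```python
import re

# Tiny illustrative dictionary; a real checker would load a full wordlist.
DICTIONARY = {"password", "dragon", "monkey", "summer", "london"}

def is_word_plus_digits(candidate):
    """Detect the common 'dictionary word followed by digits' pattern."""
    m = re.fullmatch(r"([a-zA-Z]+)(\d+)", candidate)
    return bool(m) and m.group(1).lower() in DICTIONARY

def proactive_check(candidate):
    if len(candidate) < 8:
        return "too short"
    if is_word_plus_digits(candidate):
        return "predictable: word followed by digits"
    return "accepted"

print(proactive_check("summer99"))    # predictable: word followed by digits
print(proactive_check("tr4ns-1ent"))  # accepted
```

Checking at choice time, rather than cracking stored passwords after the fact, gives the user immediate feedback while the candidate password is still fresh in mind.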

2.2.2. Awkward Behaviors

Sometimes users fail to comply with a mechanism not because the behavior required is too difficult, but because it is awkward. Many organizations mandate that users must not leave systems unattended, and should lock their screens when leaving their desks, even for brief periods. Many users working in shared offices do not comply with such policies when their colleagues are present. If a user locks the screen of his computer every time he leaves the office, even for brief periods, what will his colleagues think? They are likely to suspect that the user either has something to hide or does not trust them. Most users prefer to have trusting relationships with their colleagues. Designers can assume that users will not comply with policies and mechanisms requiring behavior that is at odds with values they hold.

Another reason why users may refuse to comply is if the behavior required conflicts with the image they want to present to the outside world. Weirich and Sasse[16] found that people who follow security policies to the letterthat is, they construct and memorize strong passwords, change their passwords regularly, and always lock their screensare described as "paranoid" and "anal" by their peers; these are not perceptions to which most people aspire. If secure systems require users to behave in a manner that conflicts with their norms, values, or self-image, most users will not comply. Additional organizational measures are required in such situations. For example, a company can communicate that locking of one's screen is part of a set of professional behaviors (e.g., necessary to have reliable audit trails of access to confidential data), and not because of mistrust or paranoia. Labeling such behaviors clearly as "it's business, not personal" avoids misunderstandings and awkwardness among colleagues. In organizations where genuine security needs underlie such behavior, and where a positive security culture is in place, compliance can become a shared value and a source of pride.

[16] Dirk Weirich and M. Angela Sasse, "Pretty Good Persuasion: A First Step Towards Effective Password Security for the Real World," Proceedings of the New Security Paradigms Workshop 2001 (Sept. 10–13, Cloudcroft, NM); (ACM Press, 2001), 137–143.

For designers of products aimed at individual users, rather than corporations, identifying security needs and values ought to be the first step toward a usable security product. The motivation to buy, install, and use a security product is increased vastly when it is based on users' security needs and valuesin Chapter 24 of this volume, Friedman, Lin, and Miller provide an introduction to value-based design and further examples.

2.2.3. Beyond the User Interface

The need for usability in secure systems was first established in 1975, when Saltzer and Schroeder[17] identified the need for psychological acceptability in secure systems. Traditionally, the way to increase acceptability has been to make security mechanisms easier to use (by providing better user interfaces). The most widely known and cited paper on usability and security, "Why Johnny Can't Encrypt" (reprinted in Chapter 34 of this volume), reports that a sample of users with a good level of technical knowledge failed to encrypt and decrypt their mail using PGP 5.0, even after receiving instruction and practice. The authors, Alma Whitten and Doug Tygar, attributed the problems they observed to a mismatch between users' perception of the task of encrypting email and the way that the PGP interface presents those tasks to users, and they proposed a redesign to make the functionality more accessible.

[17] Jerome H. Saltzer and Michael D. Schroeder, "The Protection of Information in Computer Systems," Proceedings of the IEEE, 63:9 (1975), 1278–1308.

User-centered design of security mechanisms, however, is more than user interface design. The case of PGP presents a good example. The problem lies less with the interface to PGP and more with the underlying concept of encryption (which predates PGP). The concept of encryption is complex, and the terminology employed is fundamentally at odds with everyday language: a cryptographic key does not function like a key in the physical world, and people's understanding of "public" and "private" is different from how these terms are applied to public and private keys. This will always create problems for users who do not understand how public-key encryption works. While some security experts advocate educating all users on the workings of public-key encryption so that they can use PGP and other encryption mechanisms, we argue that it is unrealistic and unnecessary to expect users to understand a security mechanism in the same depth as its designers. Some in the computing field argued in the 1980s that it would never be possible to use a computer without an in-depth knowledge of electronics and programming; arguing that all users will have to become security experts to use systems securely is similarly misguided. The conceptual design approach, pioneered by Don Norman,[18] has been used to make complex functionality available to users who don't understand the detailed workings of a system, but have a task-action model ("if I want this message to be encrypted, I have to press this button").

[18] Donald A. Norman, "Some Observations on Mental Models," in D.A. Gentner and A.A. Stevens (eds.), Mental Models (Hillsdale, NJ: Erlbaum, 1983).
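A task-action model can be made concrete as a facade: one object, one "encrypt" action, with all key handling hidden behind the interface. The sketch below illustrates only the interface shape; the class name is invented, and the XOR "cipher" is a deliberately trivial stand-in, not real cryptography.

```python
import base64
import secrets

class SecureMailer:
    """Facade exposing a task-action model: 'press encrypt' on a message.

    Key generation and management happen behind the interface. The cipher
    here is a toy XOR stand-in, NOT real encryption; only the shape of the
    interface is the point.
    """
    def __init__(self):
        self._key = secrets.token_bytes(32)  # handled internally, never shown

    def _xor(self, data):
        # XOR with a repeating key; applying it twice restores the input.
        return bytes(b ^ self._key[i % len(self._key)]
                     for i, b in enumerate(data))

    def encrypt(self, message):
        return base64.b64encode(self._xor(message.encode())).decode()

    def decrypt(self, token):
        return self._xor(base64.b64decode(token)).decode()

mailer = SecureMailer()
token = mailer.encrypt("meet at noon")
print(mailer.decrypt(token))  # meet at noon
```

The user's mental model never needs to include keys at all: the task ("send this securely") maps onto a single action, which is exactly the kind of mapping Norman's conceptual design approach aims for.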

However, the way in which people interact with security policies and mechanisms is not limited to the point of interaction. It is a truism of usability research that a bad user interface can ruin an otherwise functional system, but a well-designed user interface will not save a system that does not provide the required functionality. Designers can expend much effort on making a security mechanism as simple as possible, and find that users still fail to use it. Using a well-designed security mechanism is still more effort than not using it at all, and users will always be tempted to cut corners, especially when they are under pressure to complete their production task (as we will discuss later in this chapter). To make an effort for security, users must believe that their assets are under threat, and that the security mechanism provides effective protection against that threat.



Security and Usability: Designing Secure Systems That People Can Use
ISBN: 0596008279
Year: 2004
Pages: 295
