2.2. Product: Human Factors, Policies, and Security Mechanisms

It is unfortunate that usability and security are so often seen as competing design goals, because only mechanisms that are used, and used correctly, can offer the protection intended by the security designer. As Bruce Tognazzini points out in Chapter 3, a secure system needs to be actually, not theoretically, secure. When users fail to comply with the behavior required by a secure system, security will not work as intended. Users fail to exhibit the required behavior for one of the following two reasons:
2.2.1. Impossible Demands

The current situation with computer passwords provides a good example of the first case: most users today find it impossible to comply with standard policies governing the use of computer passwords (see Chapter 7 in this volume). Remembering a single, frequently used password is a perfectly manageable task for most users. But most users today have many knowledge-based authentication items to deal with: multiple and frequently changed passwords in the work context, in addition to passwords and personal identification numbers (PINs) outside work, some of which are used infrequently or require regular change. The limitations of human memory make it impossible for most users to cope with the memory performance this requires.[4] As a result, users behave in ways forbidden by most security policies:
The standard password mechanism is cheap to implement and, once recalled, quickly executed. But in the preceding examples, users are knowingly breaking the rules, and the examples give a feeling for the despair that the ever-growing number of passwords and PINs induces in many users. A key human factors principle is not to impose unreasonable demands on users; in fact, designers should minimize the physical and, especially, the mental workload that a system creates for the user. Frequently used passwords (that is, passwords used on a daily basis) are not a problem for the average user in an office context. Infrequently used passwords and PINs, however, can create significant problems; for instance, many people who withdraw money only once a week have trouble recalling their PIN. There are a number of ways in which the memory demands of passwords and PINs can be reduced:
As mentioned previously, usability and security are often seen as competing goals. Security experts are often inclined to reject proposals for improving usability (such as the ones listed earlier) because the help given to users might help an attacker. There is a tendency to discount more usable mechanisms because they may introduce an additional vulnerability or increase risk. For example, changing passwords less frequently means that a compromised password may be used longer. However, we would argue that a usable mechanism should not be dismissed immediately because it may introduce a new vulnerability or increase an existing one. Such a sweeping dismissal ignores the importance of human factors and economic realities, and, as Tognazzini points out in Chapter 3, the goal of security must be to build systems that are actually secure, as opposed to theoretically secure. For example, users' inability to cope with the standard requirements attached to passwords leads to frequent reset requests. This increases the load on system administrators, and in response many organizations set up help desks. In many organizations, the mounting cost of help desks has been deemed unacceptable.[12]
To cope with the increasing frequency of forgotten passwords, many organizations have introduced password reminder systems, or encouraged users to write down passwords "in a secure manner": for example, in a sealed envelope kept in a locked desk drawer. But such hastily arranged "fixes" to unusable security mechanisms are often anything but secure:
The risks associated with changing passwords less frequently thus need to be weighed against the risks associated with real-world fixes to user problems, such as password reminders and writing down passwords. The FIPS guidelines actually acknowledge that the load frequent password changes place on users creates its own risks, which in many contexts outweigh those created by changing a password less frequently. Similarly, allowing users more login attempts adds risk only from a fellow user guessing a password from the inside; it makes no difference if the main threat is an offline cracking attack. Frequent changing or resetting of passwords, on the other hand, tends to lead users to create weaker passwords: more than half of users' passwords consist of a word with a number at the end,[14] a pattern that helps crackers cut down significantly the time required for a successful cracking attack.[15]
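The scale of this effect can be sketched with some back-of-the-envelope arithmetic. The alphabet size and dictionary size below are illustrative assumptions, not figures from the studies cited above:

```python
# Compare the search space an attacker must cover for a truly random
# 8-character password versus the common "dictionary word + digit" pattern.
# All numbers here are assumptions chosen for illustration.

ALPHABET = 26 + 26 + 10 + 10      # lower, upper, digits, ~10 symbols
random_space = ALPHABET ** 8       # random 8-character passwords

DICTIONARY_WORDS = 50_000          # assumed size of a cracking dictionary
word_plus_digit_space = DICTIONARY_WORDS * 10   # a word with one digit appended

ratio = random_space / word_plus_digit_space
print(f"random 8-char keyspace: {random_space:.2e}")
print(f"word+digit keyspace:    {word_plus_digit_space:.2e}")
print(f"attacker's work reduced by a factor of about {ratio:.1e}")
```

Under these assumptions, knowing that a password fits the word-plus-digit pattern shrinks the attacker's search space by roughly nine orders of magnitude, which is why such patterns matter far more than raw password length.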
2.2.2. Awkward Behaviors

Sometimes users fail to comply with a mechanism not because the behavior required is too difficult, but because it is awkward. Many organizations mandate that users must not leave systems unattended and should lock their screens when leaving their desks, even for brief periods. Many users working in shared offices do not comply with such policies when their colleagues are present. If a user locks the screen of his computer every time he leaves the office, even briefly, what will his colleagues think? They are likely to suspect that he either has something to hide or does not trust them. Most users prefer to have trusting relationships with their colleagues. Designers should assume that users will not comply with policies and mechanisms requiring behavior that is at odds with values they hold. Users may also refuse to comply if the required behavior conflicts with the image they want to present to the outside world. Weirich and Sasse[16] found that people who follow security policies to the letter (that is, they construct and memorize strong passwords, change their passwords regularly, and always lock their screens) are described as "paranoid" and "anal" by their peers; these are not perceptions to which most people aspire. If secure systems require users to behave in a manner that conflicts with their norms, values, or self-image, most users will not comply. Additional organizational measures are required in such situations. For example, a company can communicate that locking one's screen is part of a set of professional behaviors (e.g., necessary to maintain reliable audit trails of access to confidential data), and not a sign of mistrust or paranoia. Labeling such behaviors clearly as "business, not personal" avoids misunderstandings and awkwardness among colleagues.
In organizations where genuine security needs underlie such behavior, and where a positive security culture is in place, compliance can become a shared value and a source of pride.
For designers of products aimed at individual users, rather than corporations, identifying security needs and values ought to be the first step toward a usable security product. The motivation to buy, install, and use a security product is increased vastly when it is based on users' security needs and values; in Chapter 24 of this volume, Friedman, Lin, and Miller provide an introduction to value-based design and further examples.

2.2.3. Beyond the User Interface

The need for usability in secure systems was first established in 1975, when Saltzer and Schroeder[17] identified the need for psychological acceptability in secure systems. Traditionally, the way to increase acceptability has been to make security mechanisms easier to use (by providing better user interfaces). The most widely known and cited paper on usability and security, "Why Johnny Can't Encrypt" (reprinted in Chapter 34 of this volume), reports that a sample of users with a good level of technical knowledge failed to encrypt and decrypt their mail using PGP 5.0, even after receiving instruction and practice. The authors, Alma Whitten and Doug Tygar, attributed the problems they observed to a mismatch between users' perception of the task of encrypting email and the way that the PGP interface presents those tasks to users, and they proposed a redesign to make the functionality more accessible.
User-centered design of security mechanisms, however, is more than user interface design. The case of PGP is a good example. The problem lies less with the interface to PGP and more with the underlying concept of encryption (which predates PGP). The concept of encryption is complex, and the terminology employed is fundamentally at odds with everyday language: a cryptographic key does not function like a key in the physical world, and people's understanding of "public" and "private" is different from how these terms are applied to public and private keys. This will always create problems for users who do not understand how public-key encryption works. While some security experts advocate educating all users on the workings of public-key encryption so that they can use PGP and other encryption mechanisms, we argue that it is unrealistic and unnecessary to expect users to share the experts' depth of understanding of how a security mechanism works. Some computing professionals in the 1980s argued that it would never be possible to use a computer without an in-depth knowledge of electronics and programming; arguing that all users will have to become security experts to use systems securely is similarly misguided. The conceptual design approach, pioneered by Don Norman,[18] has been used to make complex functionality available to users who don't understand the detailed workings of a system, but have a task-action model ("if I want this message to be encrypted, I have to press this button").
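A minimal sketch of such a task-action interface follows. All names here (MailClient, send_encrypted) are hypothetical, and the XOR scrambler is a toy stand-in for real cryptography; the point is only that the user's single action, "send this encrypted", exposes no key concepts at all:

```python
import secrets

class MailClient:
    """Toy sketch of a task-action interface: keys are managed internally,
    so the user never sees "public", "private", or "key" at all."""

    def __init__(self):
        self._keys = {}    # recipient -> shared key, handled behind the scenes

    def _key_for(self, recipient):
        # Create a key for a new recipient on demand; reuse it afterward.
        if recipient not in self._keys:
            self._keys[recipient] = secrets.token_bytes(32)
        return self._keys[recipient]

    def _xor(self, key, data):
        # Toy scrambler, NOT real cryptography: XOR with a repeating key.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def send_encrypted(self, recipient, message):
        """The one 'button' the user presses."""
        return self._xor(self._key_for(recipient), message.encode())

    def receive(self, recipient, ciphertext):
        return self._xor(self._key_for(recipient), ciphertext).decode()

client = MailClient()
ct = client.send_encrypted("alice@example.com", "meeting at noon")
assert ct != b"meeting at noon"                       # message is scrambled
assert client.receive("alice@example.com", ct) == "meeting at noon"
```

The design choice this illustrates is the one Norman's approach recommends: the interface matches the user's task model (send a message securely) rather than the mechanism's internal model (generate, exchange, and apply keys).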
However, the way in which people interact with security policies and mechanisms is not limited to the point of interaction. It is a truism of usability research that a bad user interface can ruin an otherwise functional system, but a well-designed user interface will not save a system that does not provide the required functionality. Designers can expend much effort on making a security mechanism as simple as possible, and find that users still fail to use it. Using a well-designed security mechanism is still more effort than not using it at all, and users will always be tempted to cut corners, especially when they are under pressure to complete their production task (as we will discuss later in this chapter). To make an effort for security, users must believe that their assets are under threat, and that the security mechanism provides effective protection against that threat.