13.1. Introduction

This section introduces the topic of secure interaction design, touching briefly on mental models, the apparent conflict between security and usability, and the overall issues underlying interaction design.

13.1.1. Mental Models

For software to protect its user's interests, its behavior should be consistent with the user's expectations. Therefore, designing for security requires an understanding of the mental model (the user's idea of how the system works). Attempting to make security decisions completely independent of the user is ultimately futile: if security really were independent, the user would lack the means to predict and understand the consequences of her actions. On the other hand, users should not be expected to speak the language of security experts or think in terms of the security mechanisms to get their work done safely. Thus, usable secure systems should enforce security decisions based on the user's actions while allowing those actions to be expressed in familiar ways.

A common error in security thinking is to classify an entity as "trusted" or "untrusted" without carefully defining who is doing the trusting and exactly what they trust the entity to do. To ignore these questions is to ignore the mental model. For example, suppose that a user downloads a file-deletion program. The program has been carefully tested to ensure that it reliably erases files, leaving no trace. Is the program secure or not? If the program is advertised as a game and it deletes the user's files, that would be a security violation. However, if the user intends to erase files containing sensitive information, the program's failure to delete them would be a security violation. The only difference is in the expectations. A digital signature asserting who created the program[1] provides no help. The correctness of a program provides no assurance that it is secure, and the presence of a digital signature provides no assurance that it is either correct or secure.

[1] Today's code-signing schemes are designed to provide reliable verification that a program came from a particular source. They make no warranty about what the program does. They usually don't even identify the author of a program, merely the entity whose key was used to sign it.

13.1.2. Sources of Conflict

At the heart of the apparent conflict between security and usability is the idea that security makes operations harder, yet usability makes operations easier. Although this is usually true, it's imprecise. Security isn't about making all operations difficult; it's about restricting access to operations with undesirable effects. Usability isn't about making all operations easy, either; it's about improving access to operations with desirable effects. Tension between the two arises to the extent that a system is unable to determine whether a particular result is desirable. Security and usability come into harmony when a system correctly interprets the user's desires.

Presenting security to users as a secondary task also promotes conflict. Some designs assume that users can be assigned extra homework: users are expected to install patches, run virus scanners, set up firewalls, and/or check certificates in order to keep their computers secure. But most people don't buy computers and take them home so that they can curl up for a nice evening of firewall configuration. They have better things to do with their computers, like keeping in touch with their friends, writing documents, playing music, or organizing their lives. Users are unlikely to perform extra security tasks and may even circumvent security measures to make their main task more comfortable.

As Diana Smetters and Rebecca Grinter have suggested,[2] security goals should be closely integrated with the workflow of the main task to yield implicit security. Extracting information about security expectations from the user's normal interactions with the interface enables us to minimize or eliminate the need for secondary security tasks.
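
For example, the familiar act of choosing a file in the system's open dialog can serve double duty as the grant of access to that file, so the application never needs to ask a separate security question. The following sketch is only a hypothetical illustration, written with Python's tkinter file dialog; in a design that fully realized the idea, the dialog would run outside the application's sandbox and hand back access to the chosen file alone.

    # Sketch of "implicit security": the user's ordinary act of picking a
    # file in the standard open dialog doubles as the security decision.
    # (In a real design the dialog would run outside the application's
    # sandbox and return access to the chosen file only; this sketch just
    # shows that no separate security question needs to be asked.)
    import tkinter as tk
    from tkinter import filedialog

    def read_user_chosen_file() -> str:
        root = tk.Tk()
        root.withdraw()                      # no main window, just the dialog
        path = filedialog.askopenfilename()  # the user's choice is the grant
        if not path:                         # cancelled: no authority granted
            return ""
        with open(path, encoding="utf-8") as f:
            return f.read()                  # read only what the user selected

    if __name__ == "__main__":
        print(read_user_chosen_file()[:200])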

[2] Diana Smetters and Rebecca Grinter, "Moving from the Design of Usable Secure Technologies to the Design of Useful Secure Applications," Proceedings of the 2002 Workshop on New Security Paradigms (New York: ACM Press, 2002).

13.1.3. Iterative Design

Security and usability are qualities that apply to a whole system, not features that can be tacked on to a finished product. It's regrettably common to hear people speak of "adding security features," even though they would find the idea of adding a "correctness feature" absurd. Trying to add security after the fact is rarely effective and often harms usability as well. Likewise, although usability is also often described in terms of particular features (such as icons, animations, themes, or widgets), interaction design is much deeper than visual appearance. The effectiveness of a design is affected by whether work flows smoothly, whether the symbols and concepts make sense to the user, and whether the design fits the user's mental model of actions and their consequences.

Instead of adding security or usability as an afterthought, it's better to design software in iterations consisting of three basic phases: analysis of needs, then design, then testing. After a round of testing, it's time to analyze the results to find out what needs to be improved, then apply these discoveries to the next round of design, and so on. With each cycle, prototypes become more elaborate and polished as they approach product quality. This advice is nothing new; software engineers and usability engineers have advocated iterative design for many years. However, iterative design is particularly essential for secure software, because security and usability design choices affect each other in ways that are difficult to predict and are best understood through real tests.

Every piece of software ultimately has a human user, even if that user is sometimes a system administrator or a programmer. Therefore, attention to usability concerns is always necessary to achieve true security. The next time someone tells you that a product is secure, you might want to ask, "How much user testing was done?"

13.1.4. Permission and Authority

Much of the following discussion concerns the management of authorities. By an authority, I mean the power to make something happen, in contrast to permission, which refers to access as represented by settings in a security mechanism.[3]

[3] These definitions of permission and authority are due to Mark S. Miller and Jonathan Shapiro. See Mark S. Miller and Jonathan S. Shapiro, "Paradigm Regained: Abstraction Mechanisms for Access Control," in Vijay A. Saraswat (ed.), Proceedings of the 8th Asian Computing Science Conference, Lecture Notes in Computer Science 2896 (Heidelberg: Springer-Verlag, 2003); http://erights.org/talks/asian03/.

For example, suppose that Alice's computer records and enforces the rule that only Alice can read the files in her home directory. Alice then installs a program that serves up her files on the Web. If she gives Bob the URL to her web server, she is granting Bob the authority to read her files even though Bob has no such permission in Alice's system. From the system's perspective, restricted permissions are still being enforced, because Alice's files are accessible only to Alice's programs. It's important to keep in mind the difference between permission and authority because so many real-world security issues involve the transfer of authority independent of permissions.
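
To make the distinction concrete, here is a minimal sketch of Alice's scenario, written with Python's standard http.server module (the directory path and port number are illustrative assumptions, not taken from the text). The operating system continues to enforce the permission that only Alice's processes may read her files, yet by running this server and handing out its URL, Alice conveys to Bob the authority to read them.

    # Minimal sketch: Alice serves her files over the Web.
    # The operating system's permission settings still say that only Alice
    # may read /home/alice/files, and only Alice's own process ever opens
    # them. Yet anyone who learns the URL now has the authority to fetch
    # them, without acquiring any permission on Alice's system.
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    SHARED_DIR = "/home/alice/files"   # illustrative path, readable only by Alice

    handler = partial(SimpleHTTPRequestHandler, directory=SHARED_DIR)
    server = HTTPServer(("0.0.0.0", 8080), handler)

    # Giving Bob http://alices-host:8080/ grants him authority, not permission.
    server.serve_forever()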


