5.1. Introduction

Current security systems are often seen as difficult to use, or as getting in the user's way. As a result, they are often circumvented. Users should not have to delve into arcane issues of security to be able to allow access to a part of their personal information online: they don't have to in the real world, after all. In the real world, they rely on trust, an understanding of fiduciary responsibilities, and common sense. So should it be online.[4]

[4] To continue this discussion, see Barber, Luhmann, and also see Helen Nissenbaum, "How Computer Systems Embody Values," IEEE Computer (2001), 118–120.

Fundamental questions arise when considering trust, including how to reliably represent trust in different interactions and interfaces, how to transform trust-based decisions into security decisions while maintaining the meaning of the trust-based decisions (in other words, attaining computational tractability without sacrificing meaning), how to transform in the opposite direction, and what the building blocks of trust really are in such contexts as information sharing or secure access to systems. Finally, because trust is fallible, what are its failings, how can they be addressed in this context, and what means of controlling the fallibility exist or should exist? Through investigating prior and current work in the area, this chapter arrives at recommendations for future systems and guidance for how they can be designed for use in a context of trust.

In the next section, we discuss the definitions of trust, and in the following section, we examine the context of trust, its relation to risk, and the fundamental building blocks of trust online that have arisen from e-commerce research. Later, we present formal models of trust and describe what can be learned from these models. We conclude with a set of guidelines addressing how trust can be used in security systems, and concrete suggestions for system developers.

5.1.1. Definitions of Trust

Trust has not always been a subject of mainstream consideration.[5] In fact, prior to the Internet boom and bust, trust was a poor sibling to other sociological and psychological constructs. The Internet boom changed things, as people began to realize that, with trust, people will buy things, and without it, they will not.[6] As simple as this observation may seem, it remains profound. What's more, the realization that imperfect designs can affect the trust of a user has had an equally profound effect on how people have gone about implementing user interfaces, web sites, and interactivity in general.[7] The result has been an increasing number of well-designed, well-thought-out interfaces, and a great deal of discussion in fields such as Human-Computer Interaction (HCI) and Computer-Supported Cooperative Work (CSCW) about how to encourage, maintain, and increase trust between people and machines, and between people and other people.[8]

[5] Misztal and Luhmann.

[6] Cheskin Research & Studio Archetype/Sapient, "eCommerce Trust Study" (1999), http://www.cheskin.com/think/studies/ecomtrust.html; Cheskin Research, "Trust in the Wired Americas" (2000), http://www.cheskin.com/p/ar.asp?mlid=7&arid=12&art=0.

[7] Jakob Nielsen, "Trust or Bust: Communicating Trustworthiness in Web Design," AlertBox (1999); http://www.useit.com/alertbox/990307.html.

[8] See, for example, Cheskin Research, "eCommerce Trust Study" and Cheskin Research, "Trust in the Wired Americas." See also Ben Shneiderman, "Designing Trust into Online Experiences," Communications of the ACM 43:12 (2000), 57–59; Gary Olson and Judith Olson, "Distance Matters," Human-Computer Interaction 15 (2000), 139–178; Ye Diana Wang and Henry H. Emurian, "An Overview of Online Trust: Concepts, Elements, and Implications," Computers in Human Behavior (2005), 105–125; Cynthia L. Corritore, Beverly Kracher, and Susan Wiedenbeck, "On-Line Trust: Concepts, Evolving Themes, a Model," International Journal of Human-Computer Studies 58 (2003), 737–758; Jens M. Riegelsberger, M. Angela Sasse, and John D. McCarthy, "The Researcher's Dilemma: Evaluating Trust in Computer-Mediated Communication," International Journal of Human-Computer Studies 58 (2003), 759–781.

Unfortunately, given all of this interest in trust, a deep and abiding problem became evident: everyone knows what trust is, but no one really knows how to define it to everyone's satisfaction. Thus, we now have a great many different definitions, almost as many as there are papers on the subject, all of which bear some relation to each other, but which have subtle differences that often cannot be reconciled. Trust, it seems, is a lot of things to a lot of people.

Looking at the literature, this state of affairs is understandable because trust is multifaceted, multidimensional, and not easy to tie down in a single space.[9] The problem remains, however, that to discuss trust, one must in some way define terms. We suggest the following definition: "Trust concerns a positive expectation regarding the behavior of somebody or something in a situation that entails risk to the trusting party."[10] Problems remain with this and other definitions,[11] but it will do for our purposes.

[9] Stephen Marsh and Mark Dibben, "The Role of Trust in Information Science and Technology," in B. Cronin (ed.), Annual Review of Information Science and Technology 37 (2003), 465–498.

[10] Marsh and Dibben (2003), 470.

[11] R. C. Mayer, J. H. Davis, and F. D. Schoorman, "An Integrative Model of Organizational Trust," Academy of Management Review 20:3 (1995), 709–734.

Given the multidimensional nature of trust, we have found it useful to discuss the different layers of trust, because it is these layers that affect how trust works in context. We have found that trust has three basic layers: dispositional trust, the psychological disposition or personality trait to be trusting or not; learned trust, a person's general tendency to trust, or not to trust, as a result of experience; and situational trust, in which basic tendencies are adjusted in response to situational cues.[12] These layers work together to produce sensible trusting behavior in situations that may or may not be familiar to the truster. For example, in an unfamiliar situation, learned trust may be given less importance than dispositional trust (because no learned information is available), whereas a situation similar to others encountered in the past allows greater reliance on learned trust. Situational trust then allows cues, such as the amount of available information or social expectations, to adjust trust levels accordingly. Clearly, the more information available, the better. Bear in mind, however, that a state of perfect information by definition removes the need to rely on trust.

[12] Marsh and Dibben (2003).
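As a purely illustrative sketch of how a system reasoning about trust might combine these layers, consider the following Python fragment. The blending rule, value ranges, and parameter names are our own assumptions for the sake of the example, not part of the cited model.

    # Illustrative sketch only: combine dispositional, learned, and situational
    # trust into a single estimate. Values are assumed to lie in [0.0, 1.0] and
    # the blending rule is an assumption, not a published model.
    from typing import Optional

    def combined_trust(dispositional: float,
                       learned: Optional[float],
                       familiarity: float,
                       situational_adjustment: float) -> float:
        """Blend the three layers of trust described in the text.

        dispositional          -- baseline tendency to trust (personality trait)
        learned                -- trust built from experience, or None if the
                                  situation is entirely unfamiliar
        familiarity            -- how similar this situation is to past ones (0..1);
                                  determines how much weight learned trust receives
        situational_adjustment -- net effect of situational cues (available
                                  information, social expectations), in [-0.5, 0.5]
        """
        if learned is None:
            base = dispositional  # nothing learned yet: fall back on disposition
        else:
            base = (1 - familiarity) * dispositional + familiarity * learned
        # Situational cues push the estimate up or down, clamped to [0, 1].
        return max(0.0, min(1.0, base + situational_adjustment))

    # A moderately trusting user, a fairly familiar situation, mildly positive cues:
    print(combined_trust(0.5, 0.7, 0.8, 0.1))  # -> 0.76

The point of the sketch is only that an unfamiliar situation (no learned value, or low familiarity) forces greater reliance on disposition, mirroring the behavior described above.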

Looked at in this manner, the goal of much HCI research and development is to create systems and interfaces that are as familiar as possible to the user such that the user need not make a (necessarily more limited) dispositional trusting decision, and to allow that user to make a (more solid and comfortable) learned trusting decision. The goal of security and privacy systems is to allow the user to make these decisions with as many positive situational cues as possible, or to allow the user to provide and maintain his own situational cues in situations of less than perfect information, comfort, and, ultimately, trust.

5.1.2. The Nature of Trust in the Digital Sphere

The concept of trust undergoes some interesting transformations when it is brought into the digital sphere. Whereas people may be quite adept at assessing the likely behavior of other people and the risks involved in the physical, face-to-face world, they may be less skilled when making judgments in online environments. For example, people may be too trusting online, perhaps routinely downloading software or having conversations in chat rooms without realizing the true behaviors of the other parties and the risks involved. People may also have too little trust in online situations, perhaps dogmatically avoiding e-commerce or e-government transactions in the belief that such actions cannot be done securely, at the cost of missed opportunities and convenience.[13] Online users have to develop the knowledge needed to make good trust decisions, and developers must support them by making trustable designs.

[13] Batya Friedman, Peter H. Khan, Jr., and Daniel C. Howe, "Trust Online," Communications of the ACM 43:12 (2000), 34–40.

One thing that is obvious is that trust in the digital sphere is negotiated differently from trust in face-to-face situations. Take the example of eBay, one of the most successful e-commerce businesses in operation today, and one in which complete strangers routinely send each other checks in the mail (although this is becoming a less common means of payment as more sophisticated methods become available). How do eBay users develop sufficient trust in these unseen others to offset financial security concerns? One approach is eBay's reputation system, which not only enhances a sense of community among eBay members but also provides a profile of user experiences. These profiles are available to all vendors and customers, something that was unheard of in the world of offline commerce. Over the years, the nature and utility of such cues has changed (as we will discuss in more detail in a later section), but the principle that trust can be designed into a transaction is clearly established.
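To illustrate how such reputation cues can be reduced to a simple, comparable signal, the following sketch aggregates per-transaction feedback into a score and a percentage, in the general spirit of a feedback profile. The rating labels and scoring rule are our own assumptions, not eBay's actual mechanism.

    # Illustrative reputation aggregation in the spirit of a feedback profile.
    # The rating labels and the +1/-1 scoring rule are assumptions for this
    # sketch, not a description of eBay's actual system.
    from collections import Counter

    def reputation_summary(feedback):
        """Summarize a list of 'positive'/'neutral'/'negative' ratings."""
        counts = Counter(feedback)
        score = counts["positive"] - counts["negative"]
        rated = counts["positive"] + counts["negative"]
        percent_positive = 100.0 * counts["positive"] / rated if rated else None
        return {"score": score,
                "total_ratings": len(feedback),
                "percent_positive": percent_positive}

    print(reputation_summary(["positive", "positive", "neutral", "negative"]))
    # {'score': 1, 'total_ratings': 4, 'percent_positive': 66.66...}

A buyer scanning many sellers need only compare two numbers, which is precisely the kind of situational cue discussed earlier.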

Another interesting example of the trust cues that can be provided to online users, and how difficult they can be to interpret, was provided in a study by Batya Friedman and her colleagues.[14] These researchers conducted detailed interviews of Internet users to explore the users' understanding of web security. They asked users to describe how they determine if a web connection is secure or not. The most frequent evidence was the appearance of the "https" protocol in the URL, and this cue was usually interpreted correctly. On the other hand, the "lock" icon that appears in most browsers to indicate a secure connection was often misunderstood by the users, with many confusing the meaning of the open and closed locks. It was also common for people to use evidence about the point in the transaction (e.g., "this is the home page, so it probably is not secure"), the type of information (e.g., "they are asking for my Social Security number, so it must be secure"), and the type of web site (e.g., "it is a bank, so they must be using security"). In addition, some people just made global mistrust decisions regardless of the evidence available (e.g., "I don't think any sites are secure"). This study makes it clear that people are making trust decisions based on apparent misunderstandings of web security and the threats that they face.

[14] Batya Friedman, David Hurley, Daniel C. Howe, Edward Felten, and Helen Nissenbaum, "Users' Conceptions of Web Security: A Comparative Study," CHI '02 Extended Abstracts on Human Factors in Computing Systems (2002), 746–747.

Phishing, the practice of creating mirror web sites of, for example, commerce or banking sites, and then sending emails to customers asking them to "update their records urgently at the following [fake] link," is a particularly problematic exploitation of trust because it allows the fake site to obtain real account numbers, personal details, and passwords for subsequent fraudulent use on the real site. Phishing sites are often extremely sophisticated, sometimes indistinguishable from the real site. Defenses against such attacks are possible but difficult. Some developers, for example, are creating web browser plug-ins that highlight the true location of a link, rather than the normal location display that can be easily obscured.[15] Ironically, recent features in web sites that are seen as security concerns, such as using cookies to store login IDs and only asking for passwords, are an interesting defense: if I normally don't have to enter my ID, then a similar site that asks for the ID should be a clue about its authenticity. Phishing attacks are discussed in more detail in Chapter 14.

[15] For example, http://www.corestreet.com/spoofstick/.
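The plug-in idea can be illustrated with a small sketch that compares the host a link actually points to with the host named in its visible text; a mismatch is a useful warning sign. This is a simplified illustration of the general technique, not the cited plug-in's implementation, and the function name is our own.

    # Simplified illustration of the "show the true location" idea: flag a link
    # whose visible text names one host while its href points somewhere else.
    # Sketch of the general technique only, not the cited plug-in's code.
    from urllib.parse import urlparse

    def looks_spoofed(href: str, link_text: str) -> bool:
        """Return True if the link text claims a different host than the href."""
        actual_host = urlparse(href).hostname or ""
        claimed_host = urlparse(link_text.strip()).hostname or ""
        if not claimed_host:
            return False  # link text is not a URL; nothing to compare against
        return not (actual_host == claimed_host
                    or actual_host.endswith("." + claimed_host))

    # A link whose text reads "http://www.example-bank.com" but whose href points
    # at an unrelated address would be flagged:
    print(looks_spoofed("http://203.0.113.7/login", "http://www.example-bank.com"))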

Trust (and distrust) requires at least two parties: the truster and the trustee. It requires that the truster make an informed decision. Trust is not a subconscious choice, but requires thought, information, and an active truster. The converse is not true: it is not necessary for the trustee to know that the truster is, in fact, trusting them; it may be necessary for the trustee to know that someone trusts them, but that's a different debate.

As discussed briefly already, it has generally been accepted that the trustee has to have some aspect of free will: that is, in this instance, the trustee can do something that the truster would find untrustworthy. In the precomputer age this was taken to mean that the trustee must be rational, conscious, and real: thus, machines could not be trusted, they could only be relied upon, a difference that is subtle, but not moot.

In an age of autonomous agents, active web sites, avatars, and increasingly complex systems, both conscious entities and complex machines can be trusted. The corresponding argument that the trustee must know when he or she acts in an untrustworthy manner is somewhat more problematic. In any case, the phenomenon of anthropomorphism, whether validly directed or not, allows us to consider technologies as "trustable" because people behave as if machines and technologies are trustable social entities that can in fact deceive us, and leave us feeling let down when trust is betrayed.[16]

[16] See Byron Reeves and Clifford Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (Stanford University, Palo Alto, CA: CSLI Publications, 1996); B. J. Fogg, Persuasive Technology: Using Computers to Change What We Think and Do (New York: Morgan Kaufmann, 2002); Cristiano Castelfranchi, "Artificial Liars: Why Computers Will (Necessarily) Deceive Us and Each Other," Ethics and Information Technology 2:2 (2000), 113–119.

The question remains, then, especially when active entities such as autonomous agents or interactive interfaces are in mind, as to whom or what can trust and whom or what can be trusted. In this instance, one can consider humans as trusters and trustees, and computers in similar roles. Thus, we can consider trust between humans and humans, and between humans and computers, but we can also consider trust between computers and other computers, and, finally, between computers and humans. Heretical as it may seem, there are situations where computers are trusters, sometimes even as surrogate agents for humans.

In the circumstances where the truster is a computer, there is a need for a means by which the computer can "think" about trust. Thus, a computationally tractable means of reasoning about trust is needed. It is not enough for the computer to be able to say, "I trust you, so I will share information with you." What information? How much? In which circumstance? In what context? We sometimes have a need to put some kind of value on trust; thus, "I trust you this much" is a much more powerful statement than "I trust you." Of course, this leads to its own questions, such as what does "this much" actually mean, how can we trust, and how can trust values be shared? We address these questions in subsequent sections.
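As a minimal, purely illustrative example of what "I trust you this much" might look like in code, the sketch below attaches a numeric trust value to a party and shares information only if that value clears a threshold that rises with the risk and importance of the situation. The formula and parameter names are our own assumptions, loosely inspired by, but not reproducing, the formalizations discussed next.

    # Minimal sketch of a quantified trust decision. Trust is taken to be a
    # number in [0, 1]; the threshold formula is an illustrative assumption,
    # not a reproduction of any published formalization.

    def cooperation_threshold(risk: float, importance: float,
                              competence: float) -> float:
        """Higher risk and importance raise the bar; perceived competence lowers it."""
        return (risk * importance) / max(competence, 0.01)

    def decide_to_share(trust_value: float, risk: float,
                        importance: float, competence: float) -> bool:
        """Share information only if trust in the other party clears the threshold."""
        return trust_value > cooperation_threshold(risk, importance, competence)

    # Sharing low-stakes information with a fairly competent, fairly trusted party:
    print(decide_to_share(trust_value=0.6, risk=0.3, importance=0.4, competence=0.8))
    # True: threshold = (0.3 * 0.4) / 0.8 = 0.15, and 0.6 > 0.15

Even this toy example shows why context matters: the same trust value that permits sharing trivial information may fall short when the stakes, and hence the threshold, are higher.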

Formalizations and formal models of trust do exist and more are appearing regularly.[17] With each formalization, old questions are answered, new questions arise, and we move closer to a real understanding of human trust and more capable trust-reasoning technologies. However, while formalizations exist, computationally tractable formalizations are much rarer. Unfortunately, it is these that are needed to better approach understanding and to better approximate trusting behaviors in computers.

[17] See below and Stephen Marsh, "Formalizing Trust as a Computational Concept"; Alfarez Abdul-Rahman and Stephen Hailes, "A Distributed Trust Model," Proceedings of the ACM New Security Paradigms Workshop '97 (Cumbria, U.K., Sept. 1997); Cristiano Castelfranchi and R. Falcone, "Principles of Trust for MAS: Cognitive Anatomy, Social Importance, and Quantification," Proceedings of the 3rd International Conference on Multi Agent Systems, 1998, 72; Jonathan Carter and Ali A. Ghorbani, "Towards a Formalization of Value-Centric Trust in Agent Societies," Journal of Web Intelligence and Agent Systems 2:3 (2004), 167–184.


