Interpersonal and System Trust: the Case of Privacy


Representations of trust carry costs, not the least of which is privacy compliance. In terms of trust typologies, privacy spans both interpersonal and system trust; system trust, according to Baier (1986), refers to the roles, responsibilities, promises and contracts that are endorsed by the interacting parties in a partnership. First, legal requirements must be complied with. Beyond that, users must be convinced that personal data are secure and that privacy guidelines are observed, whether they are dictated by law, by ethics or by usability. Ackerman et al. (2001), in a review of the pertinent European and US legislative regimes, suggest that four basic areas need to be taken into account:

  • Notice — individuals should have clear notice of the type of information collected, its use, and an indication of third parties other than the original collector who will have access to the data.

  • Choice — individuals should be able to choose not to have data collected.

  • Access — the data subject should be able to see what personal information is held about him or her, to correct errors and to delete the information if desired.

  • Security — reasonable measures should be taken to secure (both technically and operationally) the data from unauthorized access.

In an earlier study, Ackerman, Cranor and Reagle (1999) surveyed users to establish levels of privacy concern in the US. They identified three main groups: "fundamentalists," "pragmatists" and "unconcerneds." They concluded that there will be considerable variation in the rules that people wish to have govern perceptual contexts and privacy, as "one person's contextual awareness is another person's lack of privacy." In a further study, Ackerman (2000) and his colleagues suggest that basic user interface mechanisms are needed for "unobtrusive notice." Viega et al. (2001) state that interface designers and development teams must consider who will use interfaces and what they will be trusted (or not) to do. They conclude that it is important to minimize assumptions about trust between components in a multiparty system by specifying explicitly what or who is trustworthy. Other analysts think that transparency may be a critical commercial feature. Martin et al. (2001), in a discussion of the privacy implications of Internet Explorer extensions, conclude: "It is time to elevate privacy practices to first-class criteria that discerning consumers will count along with speed, memory consumption, and ease of use in the search for the perfect tool for the job" (p. 50).

A number of research projects have addressed privacy issues in contexts that are relevant to the case study project. Lau et al. (1999) describe a prototype interface called CollabClio that stores a person's browsing history and makes it searchable by content, keyword and other attributes. They suggest that an ideal privacy interface "must make it easy to create, inspect, modify and monitor privacy policies" and that privacy policies themselves should be proactive — that is, they should apply to objects as they are encountered. In a modified version of the prototype, they created a privacy policy editor window (to support the creation, inspection and modification of privacy policies) and monitor and query-log windows to allow users to see the effects of a policy. More recent work on privacy policies for systems that will be used in multiple social contexts and supported by multiple platforms has focused on P3P, the Platform for Privacy Preferences supported by the W3C (Koch & Worndl, 2001; Ackerman et al., 2001). This currently provides a description of a privacy policy in terms of a "notice" by the provider and a "choice" by the user. The P3P specification defines a vocabulary for describing the data practices of a service, and the user agent can check the conformity of a community's privacy policy with the user's privacy preferences.
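
To make the conformance check concrete, the following minimal sketch (in Python, using a deliberately simplified and hypothetical vocabulary rather than the actual P3P schema) shows how a user agent might compare a provider's "notice" against a user's "choice":

    from dataclasses import dataclass, field

    @dataclass
    class PolicyStatement:
        """One 'notice' by the provider: what is collected, why, and for whom."""
        data: set[str]          # e.g. {"email", "clickstream"}
        purposes: set[str]      # e.g. {"admin", "marketing"}
        recipients: set[str]    # e.g. {"ours", "unrelated-parties"}

    @dataclass
    class UserPreferences:
        """The user's 'choice': the practices he or she is willing to accept."""
        allowed_purposes: set[str]
        allowed_recipients: set[str]
        blocked_data: set[str] = field(default_factory=set)

    def conforms(policy: list[PolicyStatement], prefs: UserPreferences) -> bool:
        """True if every declared practice falls within the user's preferences."""
        for stmt in policy:
            if stmt.data & prefs.blocked_data:
                return False
            if not stmt.purposes <= prefs.allowed_purposes:
                return False
            if not stmt.recipients <= prefs.allowed_recipients:
                return False
        return True

    # A community that shares clickstream data with unrelated parties fails
    # against a user who permits only in-house administrative use.
    policy = [PolicyStatement({"clickstream"}, {"marketing"}, {"unrelated-parties"})]
    prefs = UserPreferences(allowed_purposes={"admin"}, allowed_recipients={"ours"})
    print(conforms(policy, prefs))   # False -> the agent can warn or block

In the real P3P framework, policies and preferences are expressed in XML-based languages and the vocabulary is far richer; the point here is only the shape of the notice-against-choice check.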

Koch and Worndl (2001) offer a comprehensive review of privacy issues where user profiles must be supported across different communities, following the dictum that knowing the identity of those with whom you communicate is essential for understanding and evaluating an interaction. (They provide examples from the Cassiopeia project.) The main issue, say Koch and Worndl, is the "cold-start" problem, which may be addressed by designing a platform that allows user profiles to be used in more than one application. They define "identity management" as the everyday decisions about what we tell one another about ourselves, a pressing requirement where different sets of information (aliases, pseudonyms, etc.) are released to different interaction partners. An ideal identity management system should therefore (a minimal sketch follows the list):

  • Allow people to define different identities, roles, etc.

  • Associate personal data with these identities

  • Let the user decide when to release data and when to act anonymously

  • Maintain privacy and control (see the separate discussion of "control" as privacy)

  • Make it easy for users to participate in different communities, thereby lowering the entry barrier to online communities
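
The sketch below illustrates how these requirements might fit together; all class and method names are hypothetical illustrations, not an API from Koch and Worndl's platform:

    from dataclasses import dataclass, field

    @dataclass
    class Identity:
        """One persona (real name, alias or pseudonym) with its own data."""
        label: str
        attributes: dict[str, str] = field(default_factory=dict)

    @dataclass
    class IdentityManager:
        identities: dict[str, Identity] = field(default_factory=dict)
        releases: dict[str, str] = field(default_factory=dict)  # community -> identity label

        def define(self, label: str) -> Identity:
            """First requirement: let people define different identities, roles, etc."""
            self.identities[label] = Identity(label)
            return self.identities[label]

        def release_to(self, community: str, label: str) -> None:
            """Third requirement: an explicit user decision to disclose to a community."""
            self.releases[community] = label

        def profile_for(self, community: str) -> dict[str, str] | None:
            """The data actually disclosed; None means act anonymously."""
            label = self.releases.get(community)
            if label is None:
                return None                                     # anonymous by default
            return dict(self.identities[label].attributes)      # a copy: the user keeps control

    # One profile reused across communities lowers the entry barrier.
    mgr = IdentityManager()
    work = mgr.define("work")
    work.attributes["email"] = "a.user@example.org"   # second requirement: attach personal data
    mgr.release_to("project-forum", "work")
    print(mgr.profile_for("project-forum"))   # {'email': 'a.user@example.org'}
    print(mgr.profile_for("public-chat"))     # None -> anonymous

The design choice worth noting is that anonymity is the default: a community sees personal data only after an explicit release decision by the user.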

This work is fully compatible with the representation methods described in our earlier discussion of the focal points of trust, as it attempts to make the components of trust within a system visible. The project team has taken this work into account in the specification for the partnering platform. One possible approach is to establish trust profiles for individuals and groups that can be edited as required at different stages of an interaction, as the sketch below illustrates.
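
As a purely illustrative sketch (the stages and field names below are assumptions, not the project's actual specification), such an editable trust profile might look like this:

    from dataclasses import dataclass, field
    from enum import Enum

    class Stage(Enum):
        """Illustrative stages of an interaction (assumed, not project-specified)."""
        INITIAL_CONTACT = 1
        NEGOTIATION = 2
        COLLABORATION = 3

    @dataclass
    class TrustProfile:
        """What an individual or group is willing to disclose at each stage."""
        owner: str
        disclosures: dict[Stage, set[str]] = field(default_factory=dict)

        def allow(self, stage: Stage, *fields_: str) -> None:
            """Edit the profile as the interaction progresses."""
            self.disclosures.setdefault(stage, set()).update(fields_)

        def visible_at(self, stage: Stage) -> set[str]:
            """Disclosures accumulate: later stages see all earlier fields."""
            visible: set[str] = set()
            for s, fields_ in self.disclosures.items():
                if s.value <= stage.value:
                    visible |= fields_
            return visible

    profile = TrustProfile("group-A")
    profile.allow(Stage.INITIAL_CONTACT, "organisation")
    profile.allow(Stage.NEGOTIATION, "contact-email")
    print(profile.visible_at(Stage.INITIAL_CONTACT))   # {'organisation'}
    print(profile.visible_at(Stage.COLLABORATION))     # both fields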



