19.3. Relevant HCI Research Streams

HCI is composed of numerous research streams, as is any scientific area. Thus, we cannot hope to cover the field here. Useful surveys include the Handbook of Human-Computer Interaction[7] and Readings in Human-Computer Interaction: Toward the Year 2000,[8] especially the chapter introductions. See also the annotated bibliography of HCI resources provided by Karat, Brodie, and Karat.[9]

[7] Martin G. Helander, Thomas K. Landauer, and Prasad V. Prabhu, Handbook of Human-Computer Interaction, 2nd Edition (New York: Elsevier, 1997).

[8] Ronald M. Baecker, William Buxton, Jonathan Grudin, and Saul Greenberg, Readings in Human-Computer Interaction: Toward the Year 2000 (New York: Morgan Kaufmann, 1995).

[9] See Chapter 4, this volume.

In any case, several research streams within HCI are of immediate interest to the examination of privacy and the design of privacy mechanisms. These include:

  • Basic design considerations: designing for general usability and evaluating usability (usability engineering).

  • How people interact with and through systems (Computer-Supported Cooperative Work, or CSCW).

  • How individuals differ in their capabilities and how that affects the human-computer interface (individual differences and tailorability).

  • The role of HCI in next-generation architectures (ubiquitous computing, pervasive computing).

Each is covered in the following sections.

19.3.1. Usability Engineering

Over the last 20 years, considerable interest and effort have gone into improving the usability of computers. Advances in the 1980s such as mice and GUIs greatly expanded the market by removing ease-of-use barriers. Subsequent investment in and by the HCI community has yielded a wide variety of usability engineering and testing methods. It is now generally recognized that modern software and hardware cannot ignore usability. Potential users simply will not adopt or use features that are difficult to use, and organizations will not deploy hardware and software that are difficult to manage.

Addressing these usability requirements has become an acknowledged part of most development methodologies. In software engineering, it has been adopted into process models such as the prototyping, iterative, and even spiral models.[10] Generally, these recognize the need to iteratively design, develop, and test against real users in order to create usable systems. An excellent example of this process for a privacy mechanism can be seen in Cranor's implementation of Privacy Bird, which went through five iterations of development and evaluation.[11] Only through successive refinement can software engineers meet users' needs, capabilities, and expectations.

[10] Ian Sommerville, Software Engineering (Reading, MA: Addison Wesley, 2001).

[11] See Chapter 22, this volume.

Privacy mechanisms are no exception. In many respects, they can be treated like any other critical platform feature and addressed with existing usability engineering methods. Karat, Brodie, and Karat[12] provide an excellent overview of these methods and their application to security and privacy. They also point out some key differences between privacy (and security) mechanisms and the kinds of functional features with which usability engineering is more typically concerned, caveats that are worth paraphrasing and reflecting upon here:

[12] Karat, Brodie, and Karat.


While valued, privacy is not the users' primary task

We would just add that calling attention to privacy and making it an explicit task at any level can be problematic. For example, Cranor discusses users' difficulties in explicitly articulating their privacy preferences.[13] The goal with privacy, then, is often not so much to measure and refine task performance as it is to refine task invisibility or lightweightness.

[13] Cranor.


Designs must encompass many different types of users

Indeed, we devote a later section of this chapter to a discussion of techniques for dealing with individual differences.


Privacy raises the stakes

Badly designed features can lead not only to user rejection and increased development costs, but also to potential injury (even bodily injury, in the case of stalking via location-tracking technologies).


Systems must respond to the legal and regulatory environment

We note that this places additional demands for specialized expertise on the makeup of usability engineering efforts, beyond their traditional interdisciplinary competencies.

Karat, Brodie, and Karat outline the various phases of system development and the types of methods appropriate to each. Instead of repeating that here, we conclude this section by introducing and emphasizing a number of general approaches from usability engineering and user-centered design that are especially useful to people seeking to understand a design domain in depth. These may be of particular use for new, "disruptive" technologies that do not yet have substantial deployments in the field.


Context

Privacy is extremely contextual, based in the specifics of by whom, for what, where, why, and when a system is being used. Understanding people's needs and attitudes, and developing the necessary empathy to understand the world from their point of view, is best derived by observing them "in the wild" and asking them open-ended questions. There really is no substitute for getting out into the field, and a number of more or less structured ethnographic methods have been developed. For example, contextual design[14] is a highly structured methodology for pulling out the task requirements and context, that is, for going beyond the user interface and considering how users will use the privacy mechanisms in their tasks. Other approaches include discount ethnography[15] and rapid ethnography;[16] all of these seek to balance the valuable open-endedness and freedom of ethnographic investigations with the practical requirements of timely return on research investments.

[14] Hugh Beyer and Karen Holtzblatt, Contextual Design: A Customer-Centered Approach to Systems Designs (San Francisco: Morgan Kaufmann, 1997).

[15] John Hughes, Val King, Tom Rodden, and Hans Andersen, "Moving Out from the Control Room: Ethnography in System Design," Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW '94) (1994), 429-439.

[16] David R. Millen, "Rapid Ethnography: Time Deepening Strategies for HCI Field Research," Proceedings of the ACM Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (2000), 280-286.


Nuanced control

Control over one's personal data is often very nuanced and unconscious in everyday life. In order to understand what people are doing, it is often necessary to get them talking; observation alone is not enough because it does not provide a subjective understanding of the situation. Nor are retrospective discussions of past behavior sufficient: people are often not conscious of their actions, and the memory of what they did and why they did it can fade within minutes. For this reason, think-aloud protocols[17] were developed. In this methodology, users continuously describe their actions and reasons aloud. The researcher (or designer) may prompt the user from time to time to keep him talking, but the user provides a steady stream of reasons and subjective judgments. Over longer periods, related methodologies such as experience sampling[18] and diary keeping[19] may be useful.

[17] Clayton Lewis, "Using the 'Thinking Aloud' Method in Cognitive Interface Design," IBM Research Report, RC-9265 (1982); K. Anders Ericsson and Herbert A. Simon, Protocol Analysis: Verbal Reports as Data (Cambridge, MA: MIT Press, 1993).

[18] For example: Sunny Consolvo and Miriam Walker, "Using the Experience Sampling Method to Evaluate Ubicomp Applications," IEEE Pervasive Computing 2:2 (2003).

[19] For example: Leysia Palen and Marilyn Salzman, "Voice-mail Diary Studies for Naturalistic Data Capture Under Mobile Conditions," Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW 2002) (2002), 87-95.
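
For readers who want a feel for the mechanics of experience sampling, the sketch below is a minimal, purely illustrative prompt scheduler: it interrupts a participant at random intervals and appends timestamped self-reports to a log. The questions, interval bounds, and file format are our own assumptions for illustration, not those used in the studies cited above.

    # Minimal experience-sampling sketch (illustrative only): prompt the
    # participant at random intervals and log timestamped self-reports.
    import csv
    import random
    import time
    from datetime import datetime

    QUESTIONS = [
        "Where are you right now?",
        "Who can currently see or hear what you are doing?",
        "How comfortable are you with that (1-5)?",
    ]

    def run(log_path="esm_log.csv", prompts=8, min_gap=30 * 60, max_gap=90 * 60):
        for _ in range(prompts):
            time.sleep(random.uniform(min_gap, max_gap))   # wait 30-90 minutes
            row = [datetime.now().isoformat()]
            for question in QUESTIONS:
                row.append(input(question + " "))          # participant's self-report
            with open(log_path, "a", newline="") as f:
                csv.writer(f).writerow(row)

    if __name__ == "__main__":
        run()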


Low and high fidelity

Systems need not be fully constructed in order to evaluate their usability. Both low-fidelity and high-fidelity prototypes can be put in front of potential users. An often fruitful method is the Wizard of Oz study, in which the functionality of the system is simulated by people. For example, if the system requires the parsing of natural-language text, a person can stand in for the natural-language processing component of the proposed system. In this way, designers can evaluate the adequacy of the design without constructing the system itself.
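
To show how lightweight a Wizard of Oz setup can be, the sketch below routes each participant utterance to a hidden human operator (the "wizard"), who types the reply that the not-yet-built natural-language component would eventually produce. The single shared console and the transcript format are simplifying assumptions; in a real study the wizard would sit out of the participant's sight.

    # Wizard of Oz sketch: the "system" reply is actually typed by a hidden
    # human operator, letting designers test a dialog before the NLP exists.
    def wizard_of_oz_session(transcript_path="woz_transcript.txt"):
        with open(transcript_path, "a") as log:
            while True:
                user_text = input("USER> ")          # what the participant types or says
                if user_text.lower() in ("quit", "exit"):
                    break
                # In a real study this prompt would appear only on the wizard's
                # machine; here both roles share one console for simplicity.
                reply = input("WIZARD (type the system's reply)> ")
                print("SYSTEM:", reply)              # shown to the participant
                log.write(f"user: {user_text}\nsystem: {reply}\n")

    if __name__ == "__main__":
        wizard_of_oz_session()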


Hybrids

The previous approaches (delving into users' worlds, helping them to articulate and self-reflect, and getting prototypes into their hands) can be combined, elaborated on, and experimented with in almost limitless ways. Some particularly interesting hybrids include experience prototyping, bodystorming, and informance.[20] These design techniques go beyond what is traditionally meant by usability engineering, but show promise for more adequately addressing the real-world nuances of domains like privacy.

[20] Marion Buchenau and Jane Fulton Suri, "Experience Prototyping," Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (2000), 424-433.

None of these usability requirement-gathering and usability evaluation techniques was constructed for privacy per se. However, because of the inherent complexity of privacy mechanisms, the large research stream about usability and user-centered design in HCI is potentially of considerable use.

The next research stream to be discussed considers how privacy mechanisms might be made more usable given the wide range of concerns and preferences that people have about their personal data.

19.3.2. Computer-Supported Cooperative Work

An important stream of HCI research is Computer-Supported Cooperative Work (CSCW ).[21] As already mentioned, HCI began by examining largely single-user applications and systems. Starting in the late 1980s, CSCW started as a counter-effort to consider collaborative computer use. Although this subarea of HCI began in the consideration of cooperative or collaborative work, it quickly grew to include many different forms of coordination and social organization. Much of the early work in CSCW simply ignored the issue of privacy: researchers implicitly assumed that people working jointly on a project had no reason to screen information from each other.

[21] Jonathan Grudin, "Computer-Supported Cooperative Work: History and Focus," IEEE Computer 27:5 (1994), 19-26; Gary M. Olson and Judith S. Olson, "Research on Computer Supported Cooperative Work," in M. Helander (ed.), Handbook of Human Computer Interaction (Amsterdam: Elsevier, 1997).

The interest of CSCW researchers in privacy began in the 1990s as a side effect of work on "shared spaces": remote spaces that were electronically linked through audio and video.[22] An example of such a shared space was a video wall linking two lunchrooms in different cities. Media spaces had privacy problems that today are obvious. In one important study,[23] one of the authors reported that she had forgotten a camera was on and had begun to change clothes. Other authors have reported similar events. Thus began a more systematic investigation into the issue of privacy within the context of CSCW.

[22] For example: Paul Dourish and Sara Bly, "Portholes: Supporting Awareness in a Distributed Work Group," Proceedings of the ACM CHI '92 Conference on Human Factors in Computing Systems (1992), 541-547; William Buxton, "Telepresence: Integrating Shared Task and Person Spaces," in R. M. Baecker (ed.), Readings in Groupware and Computer-Supported Cooperative Work (San Mateo, CA: Morgan Kaufmann, 1993), 816-822; Sara A. Bly, Steve R. Harrison, and Susan Irwin, "Media Spaces: Bringing People Together in a Video, Audio, and Computing Environment," Communications of the ACM 36:1 (1993), 28-47.

[23] Paul Dourish, Annette Adler, Victoria Bellotti, and Austin Henderson, "Your Place or Mine? Learning from Long-Term Use of Audio-Video Communication," Computer Supported Cooperative Work 5:1 (1996), 33-62.

Unlike HCI, which traces its roots to cognitive psychology, CSCW's literature came from the study of social interaction, specifically social psychology and cognitive anthropology. Not only is this background important to understanding current CSCW research, but it is also critical to understanding privacy overall. For that reason, we next survey some of the key social theorists. (We follow this with an overview of the current CSCW literature appropriate to privacy mechanisms.) In many ways, these theorists' views have become almost assumptions within CSCW, and many CSCW studies have borne out their theories. The theorists' views most important to a discussion of privacy include the following:

  • As those interested in privacy are aware, people have very nuanced views of their interactions with other people and find it problematic when those social interactions are constrained.[24] They handle this nuance with agility and contextually.[25]

    [24] Anselm L. Strauss, Continual Permutations of Action (New York: Aldine de Gruyter, 1993).

    [25] Lucy A. Suchman, Plans and Situated Actions: The Problem of Human-Computer Communication (New York: Cambridge University Press, 1987).

  • Goffman[26] noted that people present a "face" to others. Goffman, fascinated by spies and scam artists, proposed that everyone presents bits and pieces of themselves as socially appropriate to the other and, in fact, may wish to present themselves differently depending on the circumstances. A person may present himself as a loyal employee to his supervisor and as a job seeker to another company. People find it very disconcerting when that capability is removed.

    [26] Erving Goffman, The Presentation of Self in Everyday Life (New York: Anchor-Doubleday, 1961).

  • Harold Garfinkel, in his examination of how people make sense of their everyday worlds, showed that people find it disconcerting when what they believe to be their everyday "normal" world is disrupted.[27] Some people may even become violently angry when they believe the rules of conduct or "normal" behavior are violated.

    [27] Harold Garfinkel, Studies in Ethnomethodology (Englewood Cliffs, NJ: Prentice Hall, 1967).

Privacy mechanisms in particular are subject to these issues. People have extremely nuanced views of other people (and groups, companies, and institutions), and want to safeguard their ability to present themselves properly to those others. At the same time, they find it very disconcerting when those modes of presenting themselves change, or when the "rules" about their privacy and its safeguards change.

Drawing on these theorists, CSCW research relevant to privacy can be roughly divided into three categories: media space applications, other collaborative applications with privacy concerns, and studies discussing privacy in relation to awareness.

Other applications raised similar privacy concerns. Palen[28] explored the issues raised by shared calendars and by sharing information about users' schedules with co-workers, managers, and employees. For example, Palen noted that some workers viewed their supervisors' open calendars to gauge whether layoffs were likely, a use that might not have been in the company's interest. Other work on shared or public displays has raised concerns about automatically generated views and about making information public. Finally, allowing people to view one another's temporal information, such as when they are available for communication, also raises obvious privacy concerns.[29]

[28] Leysia Palen, "Social, Individual and Technological Issues for Groupware Calendar Systems," Proceedings of the ACM Conference on Human Factors in Computing Systems (1999), 1724.

[29] James Bo Begole, Nicholas E. Matsakis, and John C. Tang, "Lilsys: Sensing Unavailability," Proceedings of the ACM Conference on Computer Supported Cooperative Work (2004), 511514.

These privacy problems have been analyzed in a series of papers that discuss the tradeoffs between awareness and privacy. Awareness is knowing what others are doing, or even that they are around. First raised in media space studies[30] and other shared-work investigations,[31] awareness is a critical issue in distributed, collaborative applications: one needs to know what other people are doing in the shared space. Hudson and Smith[32] went on to discuss the fundamental tradeoff between awareness and privacy. In their view, awareness requires the release of personal information; this necessitates the disruption of privacy, or at least requires one's attention to controlling the release of personal data. They proposed a number of interesting technical solutions to the privacy-awareness tradeoff. Their video solution allowed cameras to provide awareness of a person's presence, but the blurred image did not allow the viewer to see details.[33] Their audio solution allowed one to hear voices in a media space, but not to make out the exact words. Neither solution required a user's attention, and yet particularly egregious problems, such as seeing too much or overhearing private conversations, were eliminated for workplace environments. More recently, however, Neustaedter, Greenberg, and Boyle have found that blurring is not always sufficient to protect privacy in home environments.[34]

[30] For example: Paul Dourish and Victoria Bellotti, "Awareness and Coordination in Shared Workspaces," Proceedings of the Conference on Computer-Supported Cooperative Work (CSCW '92) (1992), 107-114.

[31] For example: Christian Heath and Paul Luff, "Collaboration and Control: Crisis Management and Multimedia Technology in London Underground Line Control Rooms," Computer Supported Cooperative Work Journal 1:1 (1992), 69-94.

[32] Scott E. Hudson and Ian Smith, "Techniques for Addressing Fundamental Privacy and Disruption Tradeoffs in Awareness Support Systems," Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW '96) (1996), 248-257.

[33] This idea was also considered in Michael Boyle, Christopher Edwards, and Saul Greenberg, "The Effects of Filtered Video on Awareness and Privacy," Proceedings of the ACM Conference on Computer Supported Cooperative Work (2000), 1-10.

[34] C. Neustaedter, S. Greenberg, and M. Boyle, "Blur Filtration Fails to Preserve Privacy for Home-Based Video Conferencing," ACM Transactions on Computer-Human Interaction (TOCHI) (2005, in press).
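
To make the blurring idea concrete, the sketch below pixelates a video frame by averaging over coarse blocks: a viewer can still tell that someone is present and roughly what they are doing, but cannot read documents or make out fine detail. The use of NumPy, the block size, and the synthetic frame are assumptions for illustration; the systems cited above used their own filtering pipelines.

    # Pixelation sketch for the awareness/privacy tradeoff: keep enough of the
    # image to signal presence and gross activity, discard identifying detail.
    import numpy as np

    def pixelate(frame, block=16):
        """Average each block x block region of an (H, W, 3) frame."""
        h, w, c = frame.shape
        h2, w2 = h - h % block, w - w % block            # crop to whole blocks
        tiles = frame[:h2, :w2].reshape(h2 // block, block, w2 // block, block, c)
        means = tiles.mean(axis=(1, 3), keepdims=True)   # one color per block
        return np.broadcast_to(means, tiles.shape).reshape(h2, w2, c).astype(frame.dtype)

    # Example with a synthetic frame; in practice this would come from a camera.
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    coarse = pixelate(frame, block=32)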

Work on privacy continues within CSCW. Recently, Palen and Dourish[35] pointed out that privacy is a dynamic, dialectic process. Based on the work of Altman,[36] Palen and Dourish analyze the relational nature of privacy. Ackerman[37] has discussed the difficulty of privacy, and has suggested that because of the relational, nuanced, and situated complexity of privacy issues for many people, there is likely to be a gap between what we know we must do socially and what we know how to do technically. He calls this the social-technical gap, and sees it as a major stumbling block for building effective user-centered controls for privacy mechanisms.

[35] Leysia Palen and Paul Dourish, "Unpacking 'Privacy' for a Networked World," Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI) (2003), 129-136.

[36] Altman.

[37] Mark S. Ackerman, "The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility," Human-Computer Interaction 15:2-3 (2000), 179-204.

19.3.3. Individual Differences

Users differ widely in their privacy concerns. We know from the research literature that individuals do not view "privacy" uniformly, even in e-commerce. Types of concerns and degrees of concern segment the population:


People have differing types of concerns

Culnan and Armstrong[38] make the argument that people have two kinds of privacy concerns. First, they are concerned about unauthorized others accessing their personal data because of security breaches or the lack of internal controls. Second, people are concerned about the risk of secondary use; that is, the reuse of their personal data for unrelated purposes without their consent. This includes sharing with third parties who were not part of the original transaction. It also includes the aggregation of personal data to create a profile. Smith, Milberg, and Burke[39] raise two additional concerns: People have a generalized anxiety about personal data being collected, and people are also concerned about their inability to correct any errors.

[38] Mary J. Culnan and Pamela K. Armstrong, "Information Privacy Concerns, Procedural Fairness and Impersonal Trust: An Empirical Investigation," Organization Science 10:1 (1999), 104-115.

[39] Jeff H. Smith, Sandra J. Milberg, and Sandra J. Burke, "Information Privacy: Measuring Individuals' Concerns about Organizational Practices," MIS Quarterly (June 1996), 167-196.


People also differ in their level of concern

The research literature mostly documents a generalized anxiety and its extent, but some research provides more detail. A persistent finding is that it is useful to consider U.S. consumers not as one homogeneous group. Westin[40] found three separate groups: the marginally concerned, the privacy fundamentalists, and the pragmatic majority. The groups differ significantly in their privacy preferences and attitudes. The marginally concerned are mostly indifferent to privacy concerns; the privacy fundamentalists, on the other hand, are quite uncompromising about their privacy. The majority of the U.S. population, however, belong to the pragmatic majority: they are concerned about their privacy but are willing to trade personal data for some benefit (e.g., customer service).

[40] Alan F. Westin, Harris-Equifax Consumer Privacy Survey 1991 (Atlanta: Equifax, Inc., 1991).

THUNDERWIRE: A CSCW RESEARCH STUDY

Thunderwire was an experimental audio-only media space prototype developed at Interval Research. It provided a kind of "party line" shared audio connection that was continuously available to a small, fixed group of spatially distributed users. Thunderwire had an intentionally minimalist interface (to see how simple such interfaces could be while still remaining usable): basically an on/off switch. When on, microphones fed local audio into the party line and lit a red "on the air" light to notify the user (and any passersby). When off, microphones and lights were deactivated. The system's users were a team doing video analysis and building analysis tools; because they routinely worked with audio, it was easy to fold Thunderwire into their work practices. Their manager wanted to use the system as an awareness technology, aimed at more tightly integrating the team across locations in two buildings.

The field study lasted two months and involved nine users. Overall, the success of the Thunderwire experiment was mixed. But for a small core of habitual users, it became a valued and enlivening aspect of their workplace, a predominantly social medium allowing for intermittent chat among friends. These benefits came with privacy problems, chiefly the inability to tell who was present on the party line at any given time, as well as recurring problems with leaving the system on by mistake and unintentionally broadcasting phone conversations, bodily noises, and other distractions. Unintentional broadcasting was a serious issue; as in other media space studies, participants forgot that they were part of a live, shared space. Of particular interest was how the group developed informal social norms about how the system was and was not to be used, as well as how exceptions were to be handled. In this way, the participants were able to make the system usable for themselves.

The researchers used multiple methods to collect data about the system over a two-month study period. A central server continuously logged when each user connected or disconnected. Two weeks of audio activity were recorded (with all users' knowledge and permission). An outside researcher (the first author) was contracted to study the system. He observed the users' work, and he interviewed the users before, during, and after the system deployment. He and his students also transcribed and analyzed 18 hours of the system's audio, which captured the nuances of actual system use. Having an "outsider" as principal investigator was important to ensure the confidentiality of the people's data and to encourage openness on the part of interviewees. It also brought new perspectives and disciplinary backgrounds to the research team.

For a full analysis, see Mark S. Ackerman, Debby Hindus, Scott D. Mainwaring, and Brian Starr, "Hanging on the Wire: A Field Study of an Audio-Only Media Space," ACM Transactions on Computer-Human Interaction 4:1 (1997), 39-66.


These groupings have been consistent across studies.[41] (Spiekermann, Grossklags, and Berendt divided the pragmatists into those who were concerned with revealing their identity and those who were more concerned about making their personal profiles available.) Estimates of these groups' sizes differ, and they appear to be changing over time. Westin's population estimates are shown in Table 19-1; note that the Westin 2003 figures come from a study conducted after 9/11. Spiekermann et al. found a larger group of privacy fundamentalists and fewer marginally concerned in Germany. Note, however, that despite these groupings, consumers still want adequate measures to protect their information from inappropriate sale, accidental leakage or loss, and deliberate attack.[42] Indeed, in Ackerman, Cranor, and Reagle,[43] the concerns of pragmatists were often reduced significantly by the presence of privacy protection measures such as privacy laws or privacy policies on web sites.[44]

[41] For example: Mark S. Ackerman, Lorrie Cranor, and Joseph Reagle, "Privacy in E-Commerce: Examining User Scenarios and Privacy Preferences," Proceedings of the ACM Conference on Electronic Commerce (1999), 1-8; Sarah Spiekermann, Jens Grossklags, and Bettina Berendt, "E-Privacy in 2nd Generation E-Commerce: Privacy Preferences Versus Actual Behavior," Proceedings of the ACM Conference on Electronic Commerce (2001), 38-46.

[42] Gurpreet S. Dhillon and Trevor T. Moores, "Internet Privacy: Interpreting Key Issues," Information Resources Management Journal 14:4 (2001), 33-37.

[43] Ackerman, Cranor, and Reagle.

[44] Alan Westin (2003); http://www.harrisinteractive.com/advantages/pubs/DNC_AlanWestinConsumersPrivacyandSurveyResearch.pdf.

Table 19-1. Population estimates, by privacy cluster

              Privacy fundamentalists   Marginally concerned   Pragmatic majority
Westin 1995   25%                       20%                    55%
Westin 2000   25%                       12%                    63%
Westin 2003   37%                       11%                    52%
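
When analyzing survey data along Westin's lines, researchers often use simple scoring rules to assign respondents to clusters. The rule below is a hypothetical illustration of that kind of segmentation, not Westin's actual scoring procedure; the item names and thresholds are assumptions.

    # Illustrative (not Westin's actual) scoring rule: classify a respondent
    # from agreement scores (1 = strongly disagree ... 5 = strongly agree).
    def westin_cluster(concern, distrust, willing_to_trade):
        """Assign a respondent to one of the three Westin clusters."""
        if concern >= 4 and distrust >= 4 and willing_to_trade <= 2:
            return "privacy fundamentalist"
        if concern <= 2 and distrust <= 2:
            return "marginally concerned"
        return "pragmatic majority"   # concerned, but open to tradeoffs

    print(westin_cluster(concern=5, distrust=5, willing_to_trade=1))  # fundamentalist
    print(westin_cluster(concern=3, distrust=3, willing_to_trade=4))  # pragmatist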


Given this diversity in how users view privacy, how might one design for these differences among users' capabilities, concerns, and preferences? An old research theme in HCI is that of individual differences. Experimental and cognitive psychology, as literatures, have largely ignored differences among subjects, treating them as part of experimental error. Moreover, as Egan[45] notes, "Differences among users have not been a major concern of commercial computer interface designers." However, HCI's heritage in man-machine interfaces (human factors) led it to appreciate how much people vary. Human factors research had found such variation critical: when designing airplane cockpits, industrial lighting, or even office chairs, differences among individuals matter for safety, comfort, and usability. That concern carried over into user interfaces and HCI research.

[45] Dennis E. Egan, "Individual Differences in Human-Computer Interaction," in M. Helander (ed.), Handbook of Human-Computer Interaction (New York: North-Holland, 1988), 543-568.

Egan summarizes much of the early HCI work on individual differences. Note that this research theme is largely moribund in HCI.[46] The later edition of the Handbook of Human-Computer Interaction,[47] published in 1997, does not include a similar chapter. However, rekindling this research stream is likely to be of help to privacy and similar mechanisms.

[46] But see Andrew Dillon and Charles Watson, "User Analysis in HCI: The Historical Lesson from Individual Differences Research," International Journal of Human-Computer Studies 45:6 (1996), 619-637.

[47] Helander, Landauer, and Prabhu.

The interest in Egan's chapter, as in most of the individual differences research, was to determine the sources of efficiency and error in using computer systems. The goal was to find ways to help users apply their task knowledge more effectively and to reduce errors. As he notes, there is huge variance in users' performance (occasionally 20:1), much more extreme than among workers performing other kinds of tasks (at most 2:1). Users show, if anything, even wider variance when it comes to privacy. People vary not only in their system performance, but also in their understanding of the task and its implications for privacy. All of these differences, as well as users' attitudes, must be considered when constructing privacy mechanisms, and, as HCI found, several standard techniques can be used.

These approaches to accommodating user differences should be of considerable interest to those constructing or researching privacy mechanisms. The approaches described by Egan are still the predominant methods in HCI research and practice for handling diversity, and they are directly relevant here. These approaches are:


Constructing better interfaces

As Egan stated, "This approach is similar to standard human-interface design, except that it is shaped by a concern for the variability among users."[48] This is particularly important for systems where people are not expert users and where they will remain "permanent casual users." Redesigning interfaces and systems so as to reduce usability problems is a laudable goal. Yet, because of the complexity of privacy concerns for users, it is unlikely that a "one size fits all" approach will work adequately.[49] The concomitant possibility of constructing software that has all potential privacy functionality for a task (like the solution adopted by some word processors and office applications) may not work with privacy concerns or may be too complex for users, because the functionality is likely to cut across many tasks, systems, and applications.

[48] Egan, 559.

[49] Mark S. Ackerman, "The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility," Human-Computer Interaction 15:2-3 (2000), 179204.


Clustering users

This approach advocates accommodating user differences by finding a set of user clusters and then interacting with users according to those classifications. This can be done in several ways. Egan viewed it largely as a question of developing different interfaces; one could also adopt different dialog or interaction patterns for different user classes, or otherwise treat the clusters differently at runtime. As Egan states, "Identical actions from two different users may be treated quite differently if the users have been classified as different prototypes [classes]."[50] Indeed, work on several problems shows the analytical power of examining user clusters. One set of papers examines default settings.[51] Most users not only do not program their systems, but also neither customize them nor change the default settings.[52]

[50] Egan, 560.

[51] Wendy E. Mackay, Thomas W. Malone, Kevin Crowston, Ramana Rao, David Rosenblitt, and Stuart K. Card, "How Do Experienced Information Lens Users Use Rules?" Proceedings of ACM CHI '89 Conference on Human Factors in Computing Systems (1989), 211-216; Wendy E. Mackay, "Triggers and Barriers to Customizing Software," Proceedings of ACM CHI '91 Conference on Human Factors in Computing Systems (1991), 153-160; Jonathan Grudin, "Managerial Use and Emerging Norms: Effects of Activity Patterns on Software Design and Deployment," Proceedings of the Hawaii International Conference on System Sciences 37 (2004).

[52] Wendy E. Mackay, "Patterns of Sharing Customizable Software," Proceedings of ACM CSCW '90 Conference on Computer-Supported Cooperative Work (1990), 209-221.

Another body of research shows that different groups have different mental models or technology frames.[53] Orlikowski,[54] in her study of a collaborative work system, showed that administrative personnel, managers, frontline consultants, and information technology staff all brought differing incentives and disincentives, reward and compensation expectations, and goals. For example, frontline consultants could not bill the time spent learning the system, whereas information technology workers wanted to know as much about the system as possible. Similarly, privacy mechanisms will be used very differently by people with differing assumptions about power and control, the efficacy of regulation and law, and the benign intent of companies. Finding suitable user clusters is important, but may be challenging, especially for members of the pragmatic majority. In some situations, however, it may be possible to teach users and consumers new technology frames; Orlikowski noted the importance of training.

[53] Wanda J. Orlikowski, "Learning from Notes: Organizational Issues in Groupware Implementation," Proceedings of the Computer Supported Cooperative Work (CSCW '92) (1992), 362-369; Wanda J. Orlikowski, "The Duality of Technology: Rethinking the Concept of Technology in Organizations," Organization Science 3:3 (1992), 398-427.

[54] Orlikowski, "Learning from Notes."

These two lines of inquiry have led to discussions of creating specific defaults and other settings for varying user clusters. Grudin[55] suggested defaults for office applications, where different groups of people (managers, administrative assistants, and knowledge workers) use the systems very differently. For privacy, while the pragmatists are a large and highly differentiated group in their everyday, contextualized preferences, it may be quite possible to treat privacy fundamentalists and the marginally concerned as user clusters. By doing so, it may be possible to create usable privacy mechanisms for at least these groups (which together account for a substantial fraction of the population). Very recently, Olson, Grudin, and Horvitz[56] explored clustering in privacy preferences. Although their work is still preliminary, it suggests that there are key classes of recipients and data, and that while people vary overall, these classes may remain relatively constant.

[55] Grudin, 2004.

[56] Judith S. Olson, Jonathan Grudin, and Eric Horvitz, "Toward Understanding Preferences for Sharing and Privacy," Microsoft Research Technical Report (2004), 1-38.
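
One concrete way to act on such clusters, in the spirit of the per-group defaults Grudin describes, is to ship a different default privacy configuration for each cluster and let individuals adjust from there. The cluster names mirror Westin's groups, but the setting keys and values below are invented for illustration.

    # Per-cluster default privacy settings (keys and values are illustrative).
    DEFAULTS = {
        "privacy fundamentalist": {"share_location": "never",
                                   "retain_history_days": 0,
                                   "allow_third_party_sharing": False},
        "pragmatic majority":     {"share_location": "ask",
                                   "retain_history_days": 30,
                                   "allow_third_party_sharing": False},
        "marginally concerned":   {"share_location": "always",
                                   "retain_history_days": 365,
                                   "allow_third_party_sharing": True},
    }

    def settings_for(cluster, overrides=None):
        """Start from the cluster's defaults, then apply any individual overrides."""
        settings = dict(DEFAULTS[cluster])
        settings.update(overrides or {})
        return settings

    print(settings_for("pragmatic majority", {"share_location": "never"}))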


Adaptive systems

Adaptive systems help prevent user errors. Carroll, in a line of work,[57] promoted "training wheels" interfaces that provide reduced functionality so as to avoid the errors that arise from complex interactions with the system. Similarly, critics are interface agents that help users avoid mistakes by flagging problems as they arise.[58] In Fischer and Lemke,[59] the critics let the users (who were kitchen designers) know when they had made a mistake, such as placing an appliance in front of a door. Ackerman and Cranor[60] used critics to help users with their privacy on the Web. Their Privacy Critics system alerted the user when, for example, he might be violating his own privacy or when sites might be problematic.

[57] For example: John M. Carroll and C. Carrithers, "Training Wheels in a User Interface," Communications of the ACM 27:8 (1984), 800-806.

[58] Gerhard Fischer, Andreas C. Lemke, Thomas Mastaglio, and Anders I. Morch, "Using Critics to Empower Users," Proceedings of ACM CHI '90 Conference on Human Factors in Computing Systems (1990), 337-347.

[59] Gerhard Fischer and Andreas C. Lemke, "Construction Kits and Design Environments: Steps Toward Human Problem-Domain Communication," Human-Computer Interaction 3:3 (1988), 179-222.

[60] Mark S. Ackerman and Lorrie Cranor, "Privacy Critics: UI Components to Safeguard Users' Privacy," Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '99) (1999), 258-259.
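
The critic idea translates naturally into a small rule check that watches an action, such as submitting a form to a web site, and warns when it conflicts with the user's stated preferences. The sketch below is a generic illustration of that pattern, not the interface of Ackerman and Cranor's Privacy Critics; the preference vocabulary is assumed.

    # A toy "privacy critic": compare what a site asks for against what the
    # user is willing to reveal, and warn before the data is sent.
    USER_PREFS = {
        "email":    "ok",
        "zip_code": "ok",
        "phone":    "ask",
        "ssn":      "never",
    }

    def critique(site, requested_fields):
        """Return warnings for any requested field the user would not freely share."""
        warnings = []
        for field in requested_fields:
            policy = USER_PREFS.get(field, "ask")   # unknown fields default to "ask"
            if policy == "never":
                warnings.append(f"{site} asks for '{field}', which you never share.")
            elif policy == "ask":
                warnings.append(f"{site} asks for '{field}'; confirm before sending.")
        return warnings

    for warning in critique("shop.example.com", ["email", "phone", "ssn"]):
        print("CRITIC:", warning)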


Automated "Mastery Learning"

HCI has had a large number of studies on training and documentation. Egan promoted using automatic training, such as tutors, to help users gain the expertise necessary to use systems effectively. Many studies[61] have noted the use of training to facilitate changing or expanding users' mental models of the system and potential tasks. To our knowledge, no such tutoring or training system has been constructed for privacy, although one clearly would be useful.

[61] For example: Orlikowski, "Learning from Notes."


Tailorable systems

A fifth approach has also arisen, following from the first two. With this approach, users tailor or customize the systems to fit their needs. Customizing usually refers to changing the surface interface of a system; tailoring usually refers to deeper changes to the functionality of an application.[62] Here, the designer includes large amounts of functionality, most of which any given user will not use. Unlike "one size fits all" interfaces, tailorable systems allow users to pick and choose their functionality. Information technology personnel, users with computer expertise (called gardeners in Nardi[63]), or even end users customize and tailor the systems to create new or specialized applications.

[62] Jakob Hummes and Bernard Merialdo, "Design of Extensible Component-Based Groupware," Computer Supported Cooperative Work Journal 9:1 (2000), 53-74.

[63] Bonnie Nardi, A Small Matter of Programming: Perspectives on End User Computing (Cambridge, MA: MIT Press, 1993).

Much of this work has appeared in the context of group applications (discussed earlier), because such applications must satisfy the needs of many users simultaneously. Discussions of the organizational and social requirements can be found in Trigg and Bodker.[64] Their major findings include the necessity of having both people with task knowledge and local technical support to help with the tailoring process. Discussions of potential system architectures and requirements can be found in Hummes and Merialdo[65] and in Dourish and Edwards.[66]

[64] Randall H. Trigg and Susanne Bodker, "From Implementation to Design: Tailoring and the Emergence of Systematization in CSCW," Proceedings of the ACM Conference on Computer-Supported Cooperative Work (1994), 45-54.

[65] Hummes and Merialdo.

[66] Paul Dourish and W. Keith Edwards, "A Tale of Two Toolkits: Relating Infrastructure and Use in Flexible CSCW Toolkits," Computer Supported Cooperative Work Journal 9:1 (2000), 33-51.

In summary, then, HCI has considerable experience dealing with individual differences. In one approach well suited to privacy mechanisms, it has proven valuable to cluster users and then present different interfaces or functionality to each cluster. Another approach is to allow users to tailor systems to their own needs, although this often requires tailoring help from others. Finally, two intelligent augmentations have been found helpful in the HCI literature: mechanisms that help users prevent errors, and mechanisms that tutor users about, in this case, privacy.

The idea of designing for individual differences also has a downside that is important to keep in mind: the potential for amplifying power imbalances and decreasing fairness. By classifying someone as a privacy fundamentalist, say, a system could decide that he is too much trouble and put up barriers to discourage use. Conversely, unscrupulous designers could segment users in order to seek out novices or the marginally concerned, not to offer targeted assistance, but for relatively easy exploitation. These issues are not restricted to privacy, as any system that discriminates between users opens itself to the question of whether this discrimination is unfair (or paternalistic, de-individualizing, or otherwise unwarranted). It is important to acknowledge and guard against this problem, in terms of both actual effects and perception. However, this is not to say that an "individual differences" approach is to be avoidedonly that it must be used with caution.

19.3.4. Ubiquitous Computing (Ubicomp)

There is currently considerable interest in ubiquitous and pervasive computing in HCI.[67] In these architectures, one might have hundreds or even thousands of sensors and other computational devices spread out through a room, building, or other environment. People would wear them, carry them, or even have them embedded. HCI and ubicomp are not identical areas of computer science, but there is overlap in research. In particular, HCI researchers are very interested in augmented reality applications (where the digital world augments the physical), sensor-based entertainment (such as geo-games), and user-centered interfaces to ubicomp rooms.

[67] For an overview, see Gregory D. Abowd and Elizabeth D. Mynatt, "Charting Past, Present, and Future Research in Ubiquitous Computing," ACM Transactions on Computer-Human Interaction 7:1 (2000), 29-58.

There are significant privacy concerns for ubiquitous computing environments. Obviously, location sensors can track individuals through an environment. Sensor aggregation could tell a large amount about what any given individual might be doing. Large amounts of seemingly personal data could be collected without the notice or consent of an environment's users.
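
As a toy illustration of why aggregation matters, combining individually bland sensor events can reveal an activity pattern that no single sensor exposes. The event format and the inference rule below are invented for this example; real ubicomp deployments would involve far richer data.

    # Toy sensor-aggregation sketch: individually unremarkable events, combined,
    # suggest where someone is and what they are probably doing.
    events = [
        {"sensor": "badge_reader", "who": "bob", "where": "bldg2_entrance", "t": 9.02},
        {"sensor": "motion",       "who": None,  "where": "room_210",       "t": 9.05},
        {"sensor": "wifi_ap",      "who": "bob", "where": "room_210",       "t": 9.06},
        {"sensor": "projector",    "who": None,  "where": "room_210",       "t": 9.07},
    ]

    def infer_activity(events, person):
        """Guess a person's location and activity by correlating events close in time."""
        seen = [e for e in events if e["who"] == person]
        if not seen:
            return None
        place, t = seen[-1]["where"], seen[-1]["t"]
        nearby = [e for e in events if e["where"] == place and abs(e["t"] - t) < 0.25]
        meeting = any(e["sensor"] == "projector" for e in nearby)
        return place, ("presenting or in a meeting" if meeting else "present")

    print(infer_activity(events, "bob"))   # ('room_210', 'presenting or in a meeting')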

Recently, several studies have specifically examined privacy in ubicomp environments. In an ethnographic field study, Beckwith[68] found that workers and elderly residents in a sensor-network-equipped assisted-care facility had very limited understanding of the potential privacy risks of the technology. They instead trusted that the system was benign and perceived the privacy risks to be minimal. Without understanding, informed consent is very difficult. As with work in media spaces,[69] unobtrusive interfaces that encourage users to forget that they are being recorded or tracked bring benefits in ease of use but also risks to users.

[68] R. Beckwith, "Designing for Ubiquity: The Perception of Privacy," IEEE Pervasive Computing 2:2 (2003), 40-46.

[69] For example: Mark S. Ackerman, Debby Hindus, Scott D. Mainwaring, and Brian Starr, "Hanging on the Wire: A Field Study of an Audio-Only Media Space," ACM Transactions on Computer-Human Interaction 4:1 (1997), 39-66. Also, Dourish et al.

Other work considers methodologies for designing for privacy in these new environments. Langheinrich, drawing upon the European Union's privacy directive of 1995, identifies the following guiding principles for ubicomp designs:

  • Notice

  • Choice and consent

  • Anonymity and pseudonymity

  • Proximity and locality

  • Adequate security

  • Access and recourse[70]

    [70] Marc Langheinrich, "Privacy by Design: Principles of Privacy-Aware Ubiquitous Systems," Proceedings of Ubicomp 2001 (2001), 273-291.

This work calls attention to legal frameworks not just as requirements to be met, but also as sources of design inspiration and insight. Hong et al. propose a methodology for prototyping ubicomp applications involving the development of a privacy risk model (a kind of heuristic evaluation, combined with validation through user testing).[71] Lederer et al. also provide design guidelines for privacy.[72]

[71] Jason I. Hong, Jennifer D. Ng, Scott Lederer, and James A. Landay, "Privacy Risk Models for Designing Privacy-Sensitive Ubiquitous Computing Systems," Proceedings of the ACM Conference on Designing Interactive Systems (2004), 91-100.

[72] Scott Lederer, Jason I. Hong, Anind K. Dey, and James A. Landay, "Personal Privacy Through Understanding and Action: Five Pitfalls for Designers," Personal and Ubiquitous Computing 8:6 (2004), 440-454. See also Chapter 21, this volume.

Finally, Hong and Landay examine the systems issues.[73] Their Confab system provides a middleware layer for building ubicomp applications. Combining a blackboard and dataflow architecture, Confab allows users to publish, and services to request, data with strong privacy controls. Users can place privacy tags on their data; these tags control access within a Confab infospace and can provide hints about how the data should be handled outside the Confab system. In addition to customizable privacy mechanisms, Confab includes extension mechanisms, currently centered on location awareness.

[73] Jason I. Hong and James A. Landay, "An Architecture for Privacy-Sensitive Ubiquitous Computing," Proceedings of the 2nd International Conference on Mobile Systems, Applications, and Services (2004), 177-189.
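
To illustrate the general idea of privacy-tagged data (this is not Confab's actual API, whose interfaces and tag vocabulary differ), a data item might carry tags that an infospace consults before releasing it, plus an advisory hint for handling outside the system:

    # Illustrative privacy-tagged data item; not the actual Confab API.
    import time

    class TaggedDatum:
        def __init__(self, value, owner, allowed=(), max_age_s=300,
                     external_hint="do-not-log"):
            self.value = value
            self.owner = owner
            self.created = time.time()
            self.allowed = set(allowed) | {owner}   # tag: who may read it in the infospace
            self.max_age_s = max_age_s              # tag: discard stale copies
            self.external_hint = external_hint      # tag: advisory hint for outside use

        def release_to(self, requester):
            """Return the value only if the tags allow it; otherwise refuse."""
            if time.time() - self.created > self.max_age_s:
                return None                         # expired
            if requester not in self.allowed:
                return None                         # not an allowed reader
            return self.value

    location = TaggedDatum(value=("building 2", "room 210"), owner="alice",
                           allowed={"alice_assistant"})
    print(location.release_to("alice_assistant"))   # released
    print(location.release_to("facilities_bot"))    # None: blocked by the tags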

Ubicomp research is just beginning, and over time, we expect this field to provide considerable feedback to privacy mechanisms overall.


