24.2. A Model of Informed Consent for Information Systems

The model of informed consent for information systems we present here was first developed in 2000 by Friedman, Felten, and Millett[13] in the context of online interactions.[14] This model is based on six components:

[13] Batya Friedman, Edward Felten, and Lynette I. Millett, "Informed Consent Online: A Conceptual Model and Design Principles," CSE Technical Report (Seattle: University of Washington, 2000).

[14] See also Ruth R. Faden and Tom L. Beauchamp, A History and Theory of Informed Consent (New York: Oxford University Press, 1986).

  • Disclosure

  • Comprehension

  • Voluntariness

  • Competence

  • Agreement

  • Minimal distraction

The word informed encompasses the first two components: disclosure and comprehension. The word consent encompasses the following three components: voluntariness, competence, and agreement. In addition, the activities of being informed and giving consent should happen with minimal distraction, without diverting users from their primary task or overwhelming them with intolerable nuisance.

24.2.1. Disclosure

Disclosure refers to providing accurate information about the benefits and harms that might reasonably be expected from the action under consideration. What is disclosed should address the important values, needs, and interests of the individual, explicitly state the purpose or reason for undertaking the action, and avoid unnecessary technical detail. The information should also disabuse the individual of any commonly held false beliefs. Moreover, if the action involves collecting information about an individual, then the following should also be made explicit:

  • What information will be collected?

  • Who will have access to the information?

  • How long will the information be archived?

  • What will the information be used for?

  • How will the identity of the individual be protected?
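The five disclosure questions above lend themselves to a machine-checkable record that a system could require before presenting a consent dialog. The following is a minimal sketch under that assumption; the class and field names are our own invention, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class DisclosureRecord:
    """Answers to the five disclosure questions; empty strings mean 'not yet disclosed'."""
    what_is_collected: str = ""
    who_has_access: str = ""
    retention_period: str = ""
    intended_uses: str = ""
    identity_protection: str = ""

    def is_complete(self) -> bool:
        # A consent dialog should not be shown until every question is answered.
        return all([self.what_is_collected, self.who_has_access,
                    self.retention_period, self.intended_uses,
                    self.identity_protection])

record = DisclosureRecord(what_is_collected="purchase history",
                          who_has_access="site operator only",
                          retention_period="2 years",
                          intended_uses="product recommendations",
                          identity_protection="pseudonymous user IDs")
print(record.is_complete())  # True
```

A completeness check like this cannot guarantee that the disclosed text is accurate or free of unnecessary technical detail, but it does prevent a consent request from being shown with one of the five questions silently unanswered.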

24.2.2. Comprehension

Comprehension refers to the individual's accurate interpretation of what is being disclosed. This component raises the question: how do we know when something has been adequately comprehended? While there is no easy answer here, at least two methods seem viable: (1) being able to restate in different words what has been disclosed, and (2) being able to apply what has been disclosed to a set of hypothetical events. Take, for example, a web-based recommendation system, such as an e-commerce site recommending products based on the customer's prior purchases or on purchases of other customers with similar profiles. Based on the information disclosed to the customer (about what data is being collected and how it will be used), can the customer answer reasonable questions about the data's use, such as:

  • Will information about the customer's last three purchases be included in the recommendation system?

  • Will some other user of the recommendation system be able to determine what the customer has purchased in the past?

  • Will information about the customer's past purchases be a part of the recommendation system two years from now?
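The second comprehension test mentioned above (applying a disclosure to hypothetical events) could be operationalized as a short quiz. This sketch assumes a hypothetical policy in which purchase data feeds recommendations, is never shown to other users, and is retained for under two years; the question list and scoring function are illustrative, not a validated instrument:

```python
# Hypothetical policy: purchases feed recommendations (True), other users
# cannot see them (False), and data is discarded before two years (False).
QUESTIONS = [
    ("Will your last three purchases be included in the recommendations?", True),
    ("Can another user determine what you have purchased?", False),
    ("Will your purchases still be part of the system two years from now?", False),
]

def comprehension_score(answers: list[bool]) -> float:
    """Fraction of hypothetical cases the user answered in line with the policy."""
    return sum(given == correct
               for given, (_, correct) in zip(answers, QUESTIONS)) / len(QUESTIONS)

print(comprehension_score([True, False, False]))  # 1.0
```

A site could require a score above some threshold before treating the disclosure as comprehended, or use wrong answers to trigger a restatement of the relevant policy.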

In face-to-face interactions, two-way dialog, facial expressions, and other physical cues help ensure the adequate interpretation of any information disclosed. Technologically mediated interactions, however, lack many of the cues and opportunities to ascertain and ensure comprehension. Typical web-based interactions present the disclosure of information in a web page or dialog box, and users are expected to agree to or decline participation by clicking on a button. Rarely are email or chat facilities provided during the process of disclosure. As more and more interactions move online or become mediated by technology, ensuring comprehension becomes more challenging. Nevertheless, comprehension is a crucial component of informing, and it should not be dismissed.

24.2.3. Voluntariness

A voluntary action is one in which an individual could reasonably resist participation should she wish to. Voluntariness, then, refers to ensuring that the action is not coerced or overly manipulated.

Coercion is an extreme form of influence that controls by compulsion, threat, or prevention. A canonical example of coercion occurs as follows: Person A holds a gun to Person B's head and says, "Fly me to Havana or I'll shoot." Often, coercion can occur without notice when there is only one reasonable way for individuals to receive certain needed services or information (or, if other ways do exist, they are too costly in terms of finances, time, expertise, or other resources to be viable options). This form of coercion is a serious concern for online interactions and other technology-mediated interactions. As more and more critical services move online in their entirety, such as applying for medical insurance or to universities for higher education, individuals who want to obtain these services or information will have to engage in web interactions. Given the lack of substantive choice among web sites and web browsers, users can be, in effect, coerced into web-based interactions that compel them to give up personal information or engage in other activities.

Certain forms of manipulation can also undermine voluntariness. Manipulation can roughly be defined as "any intentional and successful influence by a person by noncoercively altering the actual choices available to the person or by nonpersuasively altering the person's perceptions of those choices."[15] The key here is that manipulation alters the individual's choices or perception of choices by some means other than reason. This sort of manipulation can be achieved in at least three ways:

[15] Ibid., 354.

  • Manipulation of options. The first way entails manipulation of the options presented to the individual such that the presentation encourages certain choices or behaviors. For example, consider an e-business that asks the user for more information than is necessary to complete a purchase but does not indicate to the user that completing some fields is optional.

  • Manipulation of information. The second way entails manipulation of information. This manipulation uses information intentionally to overwhelm the individual or to provoke or take advantage of an individual's fear or anxiety. For example, some web sites have packaged information into multiple cookies (information that could have been packaged more concisely), so that the user who elects to accept cookies on a case-by-case basis would be bombarded with numerous requests to set cookies from a single site. As a result, the user may turn on the "agree to all cookies" option to avoid the overwhelming requests for information, or fail to notice an undesirable cookie.
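The cookie-bombardment problem described in the second bullet has a straightforward technical remedy: package the same state into a single cookie so that a user reviewing cookies case by case faces one prompt instead of many. A minimal sketch, assuming JSON-encoded site preferences (the cookie name and attribute values are illustrative):

```python
import json
from base64 import urlsafe_b64encode

# Instead of setting one cookie per preference (one prompt each for users
# who review cookies case by case), pack all state into a single cookie.
preferences = {"theme": "dark", "currency": "USD", "last_category": "books"}

def build_set_cookie_header(prefs: dict) -> str:
    """Serialize all preferences into one Set-Cookie header value."""
    payload = urlsafe_b64encode(json.dumps(prefs, sort_keys=True).encode()).decode()
    return f"prefs={payload}; Max-Age=2592000; Secure; HttpOnly"

print(build_set_cookie_header(preferences))
```

The user who accepts cookies case by case now sees a single, reviewable request rather than a barrage that pressures them toward "agree to all cookies."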

  • Psychological manipulation. The third way is psychological. This form of manipulation includes any intentional act that influences a person by causing changes in the individual's mental processes by any means other than reason. Flattery, guilt induction, and subliminal suggestions are a few relevant influences. Recent work by Reeves and Nass and their colleagues[16], [17] indicates that individuals are vulnerable to psychological manipulation in online interactions, particularly with respect to psychological manipulations from the technology's interface. For example, in their research, Reeves and Nass have shown that users respond to flattery from a computer, judge computers that criticize rather than praise others to provide more accurate information, and apply gender stereotypes to the technology simply on the basis of subtle gender cues, to name but a few of their results. Web sites that use psychological manipulation (for instance, to flatter the user into divulging information or into attributing greater accuracy to online recommendations) may violate the criterion of voluntariness.

    [16] B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (New York: Cambridge University Press, 1996).

    [17] C. I. Nass, Y. Moon, J. Morkes, E. Kim, and B. J. Fogg, "Computers Are Social Actors: A Review of Current Research," in Batya Friedman (ed.), Human Values and the Design of Computer Technology (New York: Cambridge University Press, 1997), 137-162.

24.2.4. Competence

Competence refers to possessing the mental, emotional, and physical capabilities needed to give informed consent. For example, a person with Alzheimer's may lack the mental capability to make her own medical decisions. Or, in the online environment, a 15-year-old may have the technical competence, but lack the mental and emotional capability to make reasoned judgments about when to provide personal information to e-businesses and in online chat rooms.

For example, at roughly the same time as the Year 2000 Census was being conducted in the United States, the Barbie web site presented Barbie as a census taker who asked web site visitors (mostly young girls under the age of 12) to help Barbie with her census work by completing a form that requested personal information. Troublesome from the perspective of informed consent, these young girls often lacked the mental and emotional competence necessary to judge the appropriateness of the information they volunteered to the web site.

Designers of web sites and other technologies targeted to children and adolescents will need to be especially cognizant of the component of competence. For example, the United States Children's Online Privacy Protection Act (COPPA) requires written parental consent when web sites collect information from children aged 12 and under. Another model is that used by many Institutional Review Boards; it suggests obtaining informed consent from both the adolescent and the adolescent's guardian for adolescents between the ages of 13 and 17 (inclusive). However, the case of adolescents is not straightforward: a tension exists between ensuring that adolescents can adequately assess the impacts of the type of information being collected from them and maintaining adolescents' privacy (as this is typically the time in a person's life when privacy vis-à-vis one's parents begins to become an important value). This tension should be considered when determining whether it is necessary to obtain consent from the adolescent's guardian as well as from the adolescent.
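The age thresholds discussed above (COPPA for ages 12 and under, the IRB-style dual-consent model for ages 13 through 17) can be encoded as a simple routing rule. This is a sketch of that rule only; the function name is our own, and a real system would also need to verify age and obtain the consents themselves:

```python
def required_consents(age: int) -> set[str]:
    """Who must consent before data collection, following COPPA for children
    aged 12 and under and the IRB-style model for adolescents aged 13-17."""
    if age <= 12:
        return {"guardian"}                 # COPPA: written parental consent
    if age <= 17:
        return {"individual", "guardian"}   # IRB-style dual consent
    return {"individual"}                   # adults consent for themselves

# Usage: gate collection until every required party has agreed.
assert required_consents(10) == {"guardian"}
assert required_consents(15) == {"individual", "guardian"}
```

Note that the dual-consent branch is exactly where the privacy tension described above arises: requiring a guardian's consent from a 16-year-old protects against immature judgment but also exposes the adolescent's activity to the parent.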

24.2.5. Agreement

Agreement refers to a reasonably clear opportunity to accept or decline to participate. Aspects to consider include:

  • Are opportunities to accept or decline visible, readily accessible, and ongoing?

  • Is agreement by the participant ongoing?

In traditional human subjects research, the component of agreement is ongoing. Participants may withdraw their agreement to participate at any time and for any reason (participants do not need to provide a reason for discontinuing participation). While the arena of research differs in important ways from interactions with information systems, and while considerable complexity exists concerning how to apply the guidelines from one to the other, the aspect of ongoing agreement may still have relevance for information systems. For example, in the case of recommendation systems, users could be given the opportunity to withdraw their data from the recommendation system, or to prevent its further use, at any time. Many of today's recommendation systems lack such a capability.
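The withdrawal capability described above can be sketched as a small data store whose recommendation feed honors withdrawal immediately. The class and method names are hypothetical; the point is that withdrawal deletes the data rather than merely flagging it:

```python
class RecommendationStore:
    """Sketch of ongoing agreement: a user can withdraw at any time,
    after which their data no longer feeds recommendations."""

    def __init__(self):
        self._purchases = {}    # user_id -> list of purchased items
        self._withdrawn = set()

    def record_purchase(self, user_id: str, item: str) -> None:
        if user_id not in self._withdrawn:
            self._purchases.setdefault(user_id, []).append(item)

    def withdraw(self, user_id: str) -> None:
        # Honor withdrawal immediately: delete the data, don't just hide it.
        self._purchases.pop(user_id, None)
        self._withdrawn.add(user_id)

    def data_for_recommendations(self) -> dict:
        return dict(self._purchases)

store = RecommendationStore()
store.record_purchase("customer-1", "novel")
store.withdraw("customer-1")
print(store.data_for_recommendations())  # {}
```

As in human subjects research, no reason for withdrawal is requested: `withdraw` takes only the user's identity.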

A related issue for ongoing agreement arises in the context of discussion groups and online chat rooms where dialog that may often feel like "ethereal" online conversation is archived and, in reality, is more permanent and accessible than most other forms of communication. In these online forums, participants in the flurry of heated conversation may forget that they have agreed to have their online conversation recorded and archived and, if given the opportunity, might even suspend that agreement. Mechanisms that periodically remind participants that online dialog may be archived (and perhaps allow participants to remove dialog from the archive) could help preserve informed consent in these interactions.
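The two mechanisms suggested above (periodic archive reminders, and letting participants remove their own dialog from the archive) might be sketched as follows; the reminder interval and class design are assumptions for illustration:

```python
REMINDER_INTERVAL = 20  # hypothetical policy: remind every 20 messages

class ArchivedChat:
    """Sketch: remind participants that dialog is archived, and let each
    participant remove their own messages from the archive."""

    def __init__(self):
        self.archive = []       # list of (user, text) pairs
        self.notices = []       # reminders shown to the room

    def post(self, user: str, text: str) -> None:
        self.archive.append((user, text))
        if len(self.archive) % REMINDER_INTERVAL == 0:
            self.notices.append(
                "Reminder: this conversation is recorded and archived.")

    def remove_own_messages(self, user: str) -> None:
        # Suspend prior agreement: purge this user's dialog from the archive.
        self.archive = [(u, t) for (u, t) in self.archive if u != user]
```

Periodic reminders address the "ethereal conversation" illusion, while `remove_own_messages` gives participants a way to revisit an agreement made in the flurry of heated conversation.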

Not all forms of agreement need to be explicit. As a society, we have a good deal of experience with implicit consent where, by virtue of entering into a situation, the individual has, in effect, agreed to the activities that are broadly known to occur in that context. For example, when a player steps out onto the football field in football garb and enters the game, the individual has implicitly agreed to participate in the normal activities of the game, namely, to being bumped, bashed, and smashed by other players who have entered into identical agreements. Implicit consent holds in this case because the other components have also been met: disclosure and comprehension (via reasonable expectation), competence (if we assume that the individual is of a reasonable age and of sound mind and body), and voluntariness (if we assume that the individual was not coerced or manipulated to dress in football garb and to go out onto the field). For implicit consent to hold for information systems, similar criteria need to be met.

24.2.6. Minimal Distraction

This criterion for informed consent arose from empirical investigations in which users, overwhelmed by the activities of "being informed" and "giving consent," became numbed to the informed consent process and disengaged from the process in its entirety. Specifically, minimal distraction refers to meeting the preceding criteria of disclosure, comprehension, competence, voluntariness, and agreement without "unduly diverting the individual from the task at hand."[18] This criterion is challenging to implement, because "the very process of informing and obtaining consent necessarily diverts users from their primary task,"[19] yet it is crucial if informed consent is to be realized in practice.

[18] Batya Friedman, Daniel C. Howe, and Edward Felten, "Informed Consent in the Mozilla Browser: Implementing Value-Sensitive Design." Thirty-Fifth Hawaii International Conference on System Sciences (Hawaii, 2002).

[19] Ibid.

With a model of informed consent for information systems in hand, we turn now to examine how this model can be used to analyze, assess, and improve the design of existing information systems.



Security and Usability: Designing Secure Systems That People Can Use
ISBN: 0596008279
Year: 2004
Pages: 295