Promoting Trust in ADL Systems




In this section, we focus on how to persuade the individual learner to trust and accept ADL systems and thereby be comfortable using them. We begin by discussing the impact of privacy, security, and trust in the learning process. We follow this by presenting an important component that engenders trust in ADL—the design of trustable user interfaces.

The Impact of Privacy, Security, and Trust on the Learning Process

In the above sections, privacy and security were discussed in terms of legislation, standards, and technology. Yet, the greatest challenge in the adoption of ADL may be learner acceptance of ADL technology. Holt et al. (2001) reported a number of obstacles to e-learning, including learner anxiety and resistance to computers brought about by concerns about the privacy and security of a learner's data. They indicated that this may lead to other negative consequences, including alienation, inadequacy, loss of responsibility, and damage to self-image. Fraser, Holt, and Mackintosh (2000) studied computer-related anxiety and the psychological impacts of computers. They identified two trends in e-learning. First, some students were anxious about security, privacy, information overload, loss of data, cost, and keeping up with technology. Second, some of these anxieties were relatively unrelated to one another (for example, anxiety about privacy and security versus anxiety about keeping up with technology). Holt et al. (2001) further noted that familiarity with a particular issue was generally associated with lower anxiety about that issue and concluded that education may be part of the answer. Holt and Fraser (2003) stated that e-learners might experience "information anxiety," a feeling of dread when they are overwhelmed and unable to understand an avalanche of data from diverse sources. Thumlert (1997) suggested that information overload may add to stress and promote faulty thinking.

Hiltz and Turoff (1985) reported ways of dealing with information overload and improving user acceptance of computer-mediated communication systems (CMCSs), basing their conclusions on observations, user surveys, and controlled experiments. Some of those conclusions apply to e-learning systems as well, and we adapt them as follows:

  1. Perceptions of information overload may peak at intermediate levels of use, when communication volume has built up but users have not had a chance to develop information handling skills.

  2. Users learn to self-organize communication flows that might initially appear overwhelming. Individuals have different preferences. Instead of imposing a single solution for all, systems should offer options for information handling and organization.

  3. User evaluation and feedback are necessary for understanding what kinds of structures or features would be useful for preventing e-learning information overload.

  4. Anonymity can improve user acceptance of e-learning systems by allowing users the opportunity to submit their concerns online without revealing their identities. This is analogous to the anonymous office suggestion box.

Friedman et al. (1999) formed a panel, as part of the 1999 ACM SIGCHI Conference on Human Factors in Computing Systems, to discuss the impact of electronic media on trust and accountability. Friedman observed that trust presupposes relationships among persons and must be distinguished from our concepts of technical system reliability. Friedman called for researchers and designers in the CHI community to play a critical role in creating conditions that are favorable to instilling trust in online transactions. Applied to e-learning, these conditions include ways to help users assess the types and sizes of risks they take in using e-learning systems and, in the absence of face-to-face interactions, ways to facilitate the development of goodwill among users and institutions in the e-learning community.

Holt and Fraser (2003) conducted a survey to examine more closely the aspects of information technology that may lead to anxiety in different users. They found that the highest anxiety levels were associated with privacy, security, and loss of data. They also asked their respondents what measures would alleviate anxieties over loss of data, keeping up with software, keeping up with hardware, and so on. The responses indicated that more training, better online tutorials, and better reference materials would lessen the anxiety. The authors then explored the relationship between knowledge about a particular factor and anxiety about that factor. They found that more knowledge led to reductions in anxiety for loss of data, software currency, and hardware currency. However, anxiety about privacy increased with knowledge about privacy issues (sometimes it is better not to know). Among the comments received in the survey on how to reduce anxiety, a repeated suggestion was to optimize the learner's control over software (particularly agents) and access to information. The comments also expressed concerns over the security risks of file sharing and the use of video (partially due to performance issues).

Looking toward future investigation, Holt and Fraser (2003) suggested some possible steps for reducing learner anxiety over security and privacy. First, with the increasing power of the learner's computer, more functionality and information can be placed under the learner's control to satisfy the learner's need for enhanced privacy and a sense of control and security. Second, the student model (for e-learning) would be accessible for student inspection and would reside primarily on the local computer, although this might contribute to information overload. Third, the learner would be the owner of all software agents that are directly involved in the e-learning process. Although these suggestions seem reasonable, we need to be cautious and recall the second and third points from Hiltz and Turoff (1985) above: not all users are the same, and some may not want this degree of control. Obtaining user feedback on the proposed measures would be of paramount importance.
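As a rough illustration of the second and third suggestions, the following Python sketch (ours, with invented class and field names) shows a student model that lives on the learner's machine, can be inspected in full, and releases only the fields the learner explicitly allows.

from dataclasses import dataclass, field


@dataclass
class LocalStudentModel:
    """A student model stored on, and owned by, the learner's own computer."""
    learner_id: str
    progress: dict = field(default_factory=dict)      # e.g., {"module-1": 0.8}
    preferences: dict = field(default_factory=dict)

    def inspect(self) -> dict:
        # The learner can see everything the model records about them.
        return {"learner_id": self.learner_id,
                "progress": dict(self.progress),
                "preferences": dict(self.preferences)}

    def share_with_agent(self, allowed_fields: set) -> dict:
        # Only fields the learner has explicitly allowed leave the local machine.
        return {k: v for k, v in self.inspect().items() if k in allowed_fields}


# The learner reviews the model, then releases only the progress data.
model = LocalStudentModel("learner-42", progress={"module-1": 0.8})
print(model.inspect())
print(model.share_with_agent({"progress"}))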

Design of Trustable User Interfaces for ADL

Users and Their Agents

ADL services have the potential to be very valuable to learners. Software agents in an educational setting can aid information exchange; monitor learner progress and performance; support decision making; and provide a convenient interface to a rich, complex set of information. In addition, agents can provide privacy protection if they operate on behalf of their users in an anonymous or pseudonymous fashion. Agent systems introduce some new human-factors concerns, however, that must be addressed before the systems will be usable and accepted. Not only will an agent system have to be "trustworthy," meaning that it operates correctly and reliably, but it will also have to be "trustable," meaning that it is perceived as trustworthy, usable, and acceptable to the user. This section examines the human side of trust and reviews what is necessary to make agent systems trustable.

Introducing agents into a learning environment creates a new level of indirection. That is, when agents are used to perform a task, there is an indirect relationship between the users (e.g., learners or teachers) and the tasks or information with which they are working. Instead of manipulating the information directly, the user delegates the task to an agent, which reports back when the work is done. This indirection can make users less trusting and more risk averse than they would be with nonagent services. Thus, the design of trustable systems is even more important when agents are used, and research recently conducted in our lab has looked at design factors that can address these concerns.
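This level of indirection can be pictured with a minimal Python sketch (our own illustration; the class and method names are hypothetical): the user hands a task to an agent and only learns about it again when the agent reports back.

from typing import Callable


class LearningAgent:
    """An agent that works on a delegated task and reports back to the user."""

    def __init__(self, report_back: Callable[[str], None]):
        # The callback is the agent's channel for keeping the user informed,
        # partly compensating for the loss of direct observability.
        self._report_back = report_back

    def delegate(self, task: str) -> None:
        self._report_back(f"Accepted task: {task}")
        result = self._do_work(task)     # the work happens out of the user's sight
        self._report_back(f"Finished task: {task} ({result})")

    def _do_work(self, task: str) -> str:
        # Placeholder for the real work, e.g., searching course material.
        return "3 relevant resources found"


# The learner delegates a task and sees only the agent's reports.
agent = LearningAgent(report_back=print)
agent.delegate("find readings on privacy legislation")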

Increasing Trust and Reducing Risk

The issue of building trust when users interact with technical systems has most often been studied in the context of e-commerce transactions and has recently been extended to agent scenarios (Patrick, 2002). When purchasing items over the Web, for example, users must make decisions about the trustability of the vendor and the technology with which they are interacting. A number of factors seem to be important in influencing those trust decisions. One of the most important factors for building feelings of trust is the visual design of the interface. Users often make rapid trust decisions based on a shallow analysis of the appearance of a WWW site, for example (Fogg et al., 2002). A visual appearance that is clean, uses pleasing colors and graphics, is symmetrical, and looks professional is usually rated as more trustable. Another design factor that can build trust is the amount of information provided to the user, such as information on how a system operates and the status of any processing. This transparency of operations can be particularly important for agent systems, where users have given up direct observability and control. Predictable performance can also be an important factor, with systems that respond rapidly and consistently instilling higher levels of trust.

Research on human decision making has shown that people assess both the trustability and the risk of a particular situation, and then weigh the two against each other to decide on an action (e.g., Grandison & Sloman, 2000; Lee, Kim, & Moon, 2000; Rotter, 1980). Trustability can be relatively low, for example, and still be acceptable if the perceived risk is also low. On the other hand, a system might need to be very trustable if the perceived risk is very high. For agent systems, one of the key factors that contributes to the assessment of risk is the level of autonomy granted to the agent (Lieberman, 2002). If the agent is empowered to take significant actions on behalf of the user, this will obviously be seen as a riskier situation, requiring higher levels of trust than situations where an agent merely provides information or advice. Another risk factor is the number of alternatives available to the user. For example, a service available only from one vendor or via a single interaction channel (e.g., no telephone or postal address provided) will often be considered by a user to be a higher-risk transaction.
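To make the weighing of trust against risk concrete, here is a toy Python sketch (our own illustration, not a model taken from the cited studies). The functions, weights, and thresholds are invented; the only ideas carried over from the text are that greater agent autonomy and fewer alternatives raise perceived risk, and that higher risk demands more trust.

def perceived_risk(agent_autonomy: float, alternatives: int) -> float:
    """Autonomy in [0, 1]; more autonomy and fewer alternatives mean more risk."""
    scarcity_penalty = 0.3 if alternatives <= 1 else 0.0
    return min(1.0, 0.7 * agent_autonomy + scarcity_penalty)


def user_proceeds(trustability: float, risk: float) -> bool:
    # Proceed only if trust exceeds risk by a comfort margin.
    return trustability >= risk + 0.1


# An advice-only agent (low autonomy, several alternatives) needs less trust
# than one empowered to act on the learner's behalf with no alternative channel.
print(user_proceeds(trustability=0.5, risk=perceived_risk(0.2, alternatives=3)))  # True
print(user_proceeds(trustability=0.5, risk=perceived_risk(0.9, alternatives=1)))  # False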

Trust and risk assessments are not made in isolation. Instead, the context plays a role, where context can include the environment, conditions, and background in which the assessment is made (e.g., Dey & Abowd, 2000). In understanding context, a useful distinction can be made between internal and external context, where internal context is set by the thoughts and opinions of an individual, while external context is set by the physical environment. The internal context for building trustable agents includes such factors as a user’s general ability to trust. For example, research has demonstrated that some users have a higher baseline willingness to trust, and this influences their trustability assessments in specific situations (e.g., Cranor et al., 1999; Rotter, 1980). In a parallel fashion, users also have a baseline risk perception bias, where different users can assess the same situation as more or less risky. Another internal context factor is experience. Individual users will have had different experiences, and these will affect the trust and risk assessments that they make. In addition, these experiences may be direct or indirect, with the latter including reports or reputations about a service or vendor. Other internal contextual factors are the cultures or groups that the users belong to. Different groups may have different expectations or requirements in the areas of trust and risk, and these will have to be taken into account.

The most important external context factor for ADL services is the activity being performed by the user. If an activity is casual, such as reading course material or doing background research, users will likely make different trustability decisions than if they were working on an activity that was important to them, such as taking an exam. Another important contextual factor is the amount of personal information that is involved in the activity. An electronic transaction that involves a student's marks, for example, includes personal and sensitive information so that the student and instructor may require higher levels of trust and lower degrees of risk before they are comfortable with such a service.
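The following continues the toy Python sketch above, showing how internal context (a user's baseline disposition to trust and prior experience) and external context (the stakes of the activity and the amount of personal information involved) might shift an assessment. The adjustment values are arbitrary and purely illustrative.

def contextual_trust(base_trust: float, disposition_to_trust: float,
                     prior_good_experiences: int) -> float:
    """Internal context: personal disposition and past experience shift trust."""
    return min(1.0, base_trust
               + 0.2 * (disposition_to_trust - 0.5)
               + 0.05 * prior_good_experiences)


def contextual_risk(base_risk: float, high_stakes_activity: bool,
                    involves_personal_data: bool) -> float:
    """External context: exams and personal data (e.g., marks) raise perceived risk."""
    risk = base_risk
    if high_stakes_activity:
        risk += 0.2
    if involves_personal_data:
        risk += 0.2
    return min(1.0, risk)


# Reading course material versus submitting an exam that involves marks.
print(contextual_risk(0.3, high_stakes_activity=False, involves_personal_data=False))  # 0.3
print(contextual_risk(0.3, high_stakes_activity=True, involves_personal_data=True))    # 0.7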

Building Usable Privacy Protection

It is not enough to increase feelings of trust and reduce perceptions of risk. Service designers must also ensure that their systems are usable and effective. Building usable privacy systems involves supporting specific activities or experiences of the potential users. These users’ needs have recently been summarized into four categories: comprehension, consciousness, control, and consent (Patrick & Kenny, 2003).

Comprehension refers to the user understanding the nature of private information, the risks inherent in sharing such information, and how their private information is handled by various parties. Design factors that can support comprehension include training, documentation, help messages, and tutorial materials. Designers can also use familiar metaphors or mental models so that users can draw on related knowledge to aid understanding. The layout of the interface can also support comprehension, such as when a left-to-right arrangement is used to convey the correct order of sequential operations.

Consciousness refers to the user being aware of, and paying attention to, some aspect or feature at the desired time. Design techniques that support consciousness include alarms, alert messages, and pop-up windows. Interface assistants, such as the animated help character in Microsoft Office, are also attempts to make users aware of important information at the time it is needed. Users can also be reminded of a feature or function by the strategic placement of a control element on an interface screen, as is seen when similar functions are arranged together. Other methods to draw users' attention to something include changing the appearance, either by using a new color or font or by introducing animation or movement. Sound is also a powerful technique for making the user pay attention to a particular activity or event. Privacy-aware designers should use these techniques to ensure that users are aware of the privacy features of a system and that they are paying attention to all the relevant information when they perform a privacy-sensitive operation.
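As a simple illustration of supporting consciousness, the following Python sketch (hypothetical, not taken from any particular system) raises a just-in-time alert that lists exactly what information is about to leave the learner's machine.

def confirm_sensitive_action(description: str, data_items: list) -> bool:
    """Show a just-in-time alert listing exactly what will be shared."""
    print("PRIVACY ALERT:", description)
    print("The following information will be sent:", ", ".join(data_items))
    answer = input("Proceed? [y/N] ")
    return answer.strip().lower() == "y"


# Example: an agent is about to report a learner's quiz results to a tutor.
if confirm_sensitive_action("Send quiz results to your tutor",
                            ["quiz score", "time taken"]):
    print("Results sent.")
else:
    print("Cancelled; nothing was shared.")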

Control means the ability to perform a behavior necessary to use the interface. Supporting control means building interfaces that are obvious and easy to use. Door handles that reflect the behavior that is required, such as push plates for doors that open outwards and metal loops for doors that open inwards, are good examples of obvious controls. Another useful technique is mapping, where the display or arrangement of the controls is somehow related to the real-world objects they manipulate. This might be seen, for example, when light switches are arranged on a wall in a pattern that reflects how the light fixtures are installed on the ceiling. In the privacy domain, the concept of control is important for ensuring that users actually have the ability to manipulate their private information and their privacy preferences. Thus, a good interface design for an ADL system might include easy-to-use controls for monitoring a student's progress or viewing a teacher's course notes.
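A minimal Python sketch of the control principle might look like the following (the preference names are invented): each privacy setting is a plainly named switch the learner can flip directly, in the spirit of controls that map onto the things they affect.

from dataclasses import dataclass


@dataclass
class PrivacyControls:
    """Privacy settings exposed as plainly named on/off switches."""
    share_progress_with_instructor: bool = True
    appear_anonymous_in_forums: bool = False
    let_agent_act_without_asking: bool = False

    def describe(self) -> str:
        # Each switch maps one-to-one onto a behavior the learner can name,
        # much as well-placed light switches map onto the fixtures they control.
        return "\n".join(f"{name}: {'on' if value else 'off'}"
                         for name, value in vars(self).items())


controls = PrivacyControls()
controls.appear_anonymous_in_forums = True   # the learner flips one clear switch
print(controls.describe())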

Consent refers to users agreeing to some service or action. Most often there is a requirement for informed consent, which means that the user fully understands all the information relevant to making the agreement. Supporting informed consent therefore implies supporting the comprehension and consciousness factors described above, because the users must both understand and be aware of the relevant information when the agreement is made. In the privacy domain, consent to the processing of personal information is often obtained when a user enrolls for a service. At that time, users are often presented with a large, legally worded user agreement that specifies how their personal information will be handled. It is well known that users often ignore these agreement documents and proceed to use the service without considering or understanding the privacy implications. This is not what is meant by informed consent, and in our laboratory, we are experimenting with interface designs that allow users to make informed decisions in the appropriate context.
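The kind of in-context consent we have in mind can be sketched roughly as follows (a hypothetical Python illustration, not our actual interface): the request is short, specific to one purpose, made at the moment the data is needed, and recorded.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    purpose: str
    data_items: tuple
    granted: bool
    timestamp: str


def request_consent(purpose: str, data_items: tuple) -> ConsentRecord:
    # The request names one specific purpose and the exact data involved.
    print(f"To {purpose}, we need: {', '.join(data_items)}.")
    answer = input("Do you agree? [y/N] ")
    return ConsentRecord(purpose, data_items,
                         granted=answer.strip().lower() == "y",
                         timestamp=datetime.now(timezone.utc).isoformat())


# Consent is requested only for this use, at the moment it is needed.
record = request_consent("share your assignment grade with your study group",
                         ("assignment grade",))
print("Consent granted:", record.granted)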

In summary, much is known about building trustable user interfaces, and this knowledge can be applied to agent-supported distributed-learning environments. Interface design techniques, properly used, can increase users’ feelings of trust and reduce their perceptions of risk. In addition, paying attention to users’ privacy needs in the areas of comprehension, consciousness, control, and consent, and ensuring these needs are satisfied by the service, will be an important step for building an environment that is usable and trusted.





