29.1. Users and Trust

As part of continual usability research, usability engineers at Microsoft had observed hundreds of users answering questions posed by the computer (consent dialogs) in Internet Explorer, Windows Client and Server, Microsoft applications, and other companies' products. It was clear that users often weren't following the recommendations that the products made. The question was: why not?

Having seen this behavior over multiple usability sessions, we ran some specific studies to gain more insight. We conducted in-depth interviews about trust with 7 participants, and lab-based research with 14 more. We then used the results of this work to develop user interface prototypes that incorporated design elements suggested by the initial research, and observed a further 50 participants working with various iterations of the designs in different trust scenarios. Later, we had the chance to verify the concepts and designs with participants who were helping us evaluate the interface for Windows XP Service Pack 2 both in multiple lab sessions and through feedback and instrumentation from a very large user panel.

We found that it was not just that users didn't understand the questions being posed by the computer, although that was definitely part of it. It was also that the computer was not their only source of trust information. It turns out that users aggregate many "clues" about trustworthiness and then trade those off against how much they want the item in question. Interestingly, computers weren't presenting all of the clues that they could have to help users, and some of the clues they were presenting were so obscure that they just confused users.

WHAT IS USER RESEARCH AT MICROSOFT?

A User Researcher's role is, specifically, to bring data to the table about how people interact with PCs, what they want to do but can't do, and what's coming around the corner technologically that they'll need to do but don't even have a clue about. Then, the researcher works closely with designers, user assistance creators, and the feature program managers to ensure that we build the right features to meet user scenarios, that those features work the way users expect, and that all the other myriad design considerations are taken into account.

User Research at Microsoft draws on multiple data sources to build a picture of user behavior and user needs. Along with traditional lab-based studies of everything from paper prototypes to finished code, we also conduct site visits to watch users in their own environments, perform in-depth interviews on specific topics, and administer large-scale international surveys. In addition, we rely on community feedback and our panel of instrumented users. This user panel is composed of regular people who have opted to run special software that provides us with data on their computer settings and their behavior. Triangulating across all of these sources, along with market research and published academic studies, helps us to understand what drives users.

Lab work (usability "testing") is actually quite a small part of what User Researchers do. While we're in the lab, though, along with measuring users' success on tasks, we also measure things like desirability, learnability, and comprehension. Having the controlled environment of the usability lab allows us to isolate specific issues more easily. We iterate the design and test again with more users until we get the user experience to a point where participants can be successful and satisfied with the task.

The data serves other purposes too. Knowing what proportion of users are likely to perform a certain task (say, one that keeps them more secure) is very useful in meetings where other team members are inclined to make wild guesses based on their own experiences. The realities can be very sobering. Being able to show how that proportion grows after some user-centered design is applied to the task is a major encouragement for teams to think and design in a user-centered way.


29.1.1. Users' Reactions to Trust Questions

Trust questions appear at many points in computer interfaces. Typically, they are shown as dialogs when the computer requires input or consent from the user before proceeding: for example, before downloading a file or before performing an action that could lead to data loss.

These trust question dialogs are often designed to serve a useful dual purpose of both informing users and requesting input. During usability research at Microsoft, we found that these dialogs regularly failed on both counts from users' perspectives. Some observations we made about the information and questions in trust dialogs were:


Often, the question being presented is a dilemma rather than a decision

In such cases, the user feels that he has no way of choosing between the options being presented. Without suitable assistance, the user will be forced into making a choice that may or may not be the right one for him. Superstitious behavior builds up this way.


Computers can't help interpret emotional clues because they behave in a purely logical way

This means that computer software has to defer decisions to users even if the outcomes of those decisions look logically "bad."


Users don't want to deal with the trust issues presented to them

The larger the scope of the decision, and the less context given, the less likely users are to consent to the action being presented to them.


Users don't want to reveal personal data

The closer the question being asked is to revealing personal data, the less likely users will be to comply.

So, users do not respond to dialogs the way we might anticipate. This is because they are often forced to make a decision that is at odds with their understanding of the situation, and the information being provided is both incomplete and only partially intelligible to them.

29.1.2. Users' Behavior in Trust Situations

The research I performed also showed that users have some interesting things going on in their heads during their interactions with trust situations on their computers:


What users say they'll do and what they actually do often differ

For example, while users may claim to run virus-checking software, and be careful to whom they give personal data, in reality they are more lax than they describe.


Users don't necessarily want to think about the consequences of their behavior

They may "forget" that they've changed a setting or allowed a certain application to access their data, and thus be confused when they suffer consequences such as a broken user experience or unexpected email.


Users make one-off decisions about trust

Pressing them to make a global decision to "always do X" will upset them and can lead them to decline globally when, in fact, they would have accepted in some specific instances.


Users conceive of security and privacy issues differently than developers do

Users don't have the background understanding of the issues, are surrounded by myths and hoaxes, and have a different relationship with "junk" mail than application developers do.


Users have many superstitions about how viruses are propagated

They confuse hacking and viruses. They also interchange terms for software bugs and viruses. They often fall prey to virus hoaxes in an attempt to protect themselves, while simultaneously engaging in risky behavior likely to lead to virus transmission.

Users do not tend to consider events requiring trust decisions in the same way that technologists do. This is because their focus is not on the technology, but on the outcome of the trust event and its impact on their lives.

29.1.3. Security Versus Convenience

The worst dilemma for users, and the one that is also the hardest to resolve through user experience design, is that from a user perspective, increases in security are most frequently accompanied by a reduction in convenience. Likewise, when users try to accomplish a task in a convenient way, they often encounter security warnings.

For instance, choosing to set the browser security level to High in Internet Explorer or other browser products will turn off many of the features of the product that can be used to exploit users. However, this same action can degrade the browsing experience to a point where most users will be dissatisfied, as they will no longer have access to the plug-in components and scripting functions that they have come to expect on a web site. It is this dilemma that user experience designers must seek to resolve for users, presenting them instead with understandable options that allow them to perform their tasks with a minimum of inconvenience.

29.1.4. Making Decisions Versus Supporting Decisions

It is important to note that the emphasis here is not on allowing the computer to make trust decisions, but on how a computer can assist users with their trust decisions. Of course, there are some instances where the computer can make that decision: for instance, when it detects the presence of a known virus in something the user plans to download. Here, the decision is easy: protect the user from the virus. Computers can be programmed to make this kind of decision. Most of the time, however, the decision is less clear-cut, and so it still rests with the user. The challenge is to achieve the correct balance between exhausting the user with multiple questions and automating the process to the point where the computer runs the risk of making erroneous decisions.
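As a minimal sketch of this split (all names here are hypothetical, not drawn from any actual product), the logic separates the certain case, where the software acts on the user's behalf, from the ambiguous case, where it assembles context and defers:

    from dataclasses import dataclass

    @dataclass
    class Verdict:
        known_malicious: bool
        signature: str = ""

    @dataclass
    class Item:
        name: str
        publisher: str

    def handle_download(item, scan, ask_user, quarantine, notify):
        # Certain case: a known virus was detected, so the computer can
        # make the decision itself and simply inform the user.
        verdict = scan(item)
        if verdict.known_malicious:
            quarantine(item)
            notify(f"Blocked {item.name}: known virus {verdict.signature}")
            return False
        # Ambiguous case: the decision rests with the user, so supply the
        # specifics (who, what, consequence) rather than a generic warning.
        return ask_user(
            source=item.publisher,
            action=f"open {item.name}",
            consequence="This file is a program and can make changes to your computer.",
        )

The point of this shape is that automation handles only the clear-cut branch; everything else flows into a question posed in the context of the user's task.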

Users tend to simply dismiss any dialog that gets in their way; having observed this, interface designers often try to remove the dialog entirely. If the dialog can be completely removed (if the computer can make the decision), that's great. If, however, the dialog still needs to exist, our studies have shown that users make a much more secure, appropriate, reasoned decision if the dialog is presented in the context of their task.

Placing the decision in an initial options screen, or hiding it in a settings dialog removed in space and time from the point where users carry out their task, requires them to think in a logical rather than an emotional way, and about a class of tasks rather than a specific instance.

As noted earlier, users found it easier to make a specific decision rather than a generic decision. It was much easier for them to agree to trust a specific person at a specific time for a specific transaction than to agree to trust a whole category of people every time a transaction occurred.

Users could easily make a decision without too much interruption to their task if the dialog presented the facts they needed in a way they could understand. We classified this as presenting a decision, not a dilemma.
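To make that classification concrete, here is a purely illustrative helper (hypothetical, not taken from any shipped dialog) that composes the prompt from the specific facts of the instance, so the user is answering a question he can actually evaluate:

    def decision_prompt(source: str, action: str, consequence: str) -> str:
        # State who is asking, what will happen, and what it may cost; a
        # prompt missing any of these collapses back into a dilemma.
        return (
            f"{source} wants to {action}.\n"
            f"{consequence}\n"
            "Do you want to allow this?"
        )

    # For example:
    # decision_prompt("downloads.example.com", "open 'photos.exe'",
    #                 "This file is a program and can change your computer.")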

For common or repetitive tasks, obviously the fewer interruptions a user experiences, the better. In these situations, it makes sense to give the user an option to always apply his current decision to the situation. If you can scope the situation suitably, the user will be happy to have that decision applied consistently.
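One way to scope the "always" option, sketched below with hypothetical names rather than any actual Windows API, is to remember the decision per publisher and per action, echoing the specific trust users said they were willing to extend:

    class DecisionStore:
        # Remembered consent, keyed by (publisher, action) rather than
        # stored as a single global "always allow" switch.
        def __init__(self):
            self._remembered = {}

        def lookup(self, publisher, action):
            return self._remembered.get((publisher, action))

        def remember(self, publisher, action, allow):
            self._remembered[(publisher, action)] = allow

    def consent(store, publisher, action, ask_user):
        prior = store.lookup(publisher, action)
        if prior is not None:
            return prior  # a repeated task proceeds with no interruption
        # ask_user returns (allow, always); the dialog offers "Always
        # allow this publisher to do this," never a blanket "Always allow."
        allow, always = ask_user(publisher, action)
        if always:
            store.remember(publisher, action, allow)
        return allow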

For less common tasks, it's not necessarily the number of screens between a user and his goal that determines the quality of the interaction. Instead, a major factor is whether all of those screens are perceived by the user to be flowing toward his end goal.

After eliciting from users some of the clues they use, and understanding the philosophies that they bring to their trust interactions, we worked out which clues a computer can provide, and then how and when to present them in the trust process so that they aided the decision. The tone of the interaction was dictated to a large degree by a wish to stay within users' comfort zones while simultaneously educating them.


