Variations on Techniques


The chapter for each technique describes some alternatives and some flexibility that's built into each technique. You can do usability tests with 3 people or 30. You can survey random visitors to your site or just the people who've signed up for your mailing list. Focus groups can discuss large, abstract issues, or they can tightly focus on specific topics. There are many ways that you can make these methods fit the needs of your research. In addition, there are a number of popular variations that take the techniques in significantly different directions while still gathering valuable data.

Virtual Focus Groups

Focus groups don't have to have all the participants in the same room together. A group of people can be brought together through any of the "telepresence" technologies currently available. The simplest is the conference call. Telephone focus groups have the advantage that you don't have to get everyone in a room together, and the participants' time commitment is limited to the duration of the actual group. In-person focus groups have to take factors such as travel time, traffic, availability, and waiting time into account when figuring out who can make it and when. This limits participants to people with large chunks of time who live or work near the meeting location and makes recruiting more difficult while also biasing the group somewhat. Conducting group interviews over the phone eliminates these geographic constraints. You can run a focus group over the phone with every member in a different part of the country, speaking from where it's most convenient for them.

Telephone focus groups, however, are more challenging than in-person groups. It's more difficult for strangers to bond and have uninhibited discussions when they can't see each other. It's also more difficult for them to read each other's intentions and meanings when they can't read body language. Visual illustrations, which are often used as triggers in focus groups to start and maintain discussion, are also more difficult to introduce. Moreover, in an in-person focus group, you know that the participants are focused: you can see them, and they have few stimuli other than what you've presented them. You don't have that kind of control over a telephone focus group. Although you can encourage people to pay attention, you can't prevent them from checking their email or buying their groceries when their attention should be on the group.

Still, in situations when you can't get people with specific skills to participate otherwise (say, busy executives) or when you want a wide geographic reach, telephone focus groups can be invaluable.

A second common virtual focus group is the online focus group. These are group interviews that are conducted using chatrooms (often with specialized software). Participants sign on to the focus group and instead of speaking to each other and to the moderator, they type. This has many of the same advantages as telephone focus groups in terms of the geographic and time availability of the participants. In addition, online focus groups provide a level of distance and anonymity that other methods don't. This is a two-sided issue. On one hand, it allows people to say things that they may not normally feel comfortable saying, or in ways that they wouldn't normally say them. On the other hand, it makes it still harder for people to feel like they're part of a group, which may make them less likely to speak their mind. Thus, some participants may become much more open while others clam up.

Moderation of such a situation—especially in the context where there are no intonational or physical cues about a person's attitude—is significantly more difficult than moderating a standard focus group. Additionally, the constraint that all the participants have to be able to type their thoughts in real time can really limit the potential audience.

In situations where the audience is appropriate (say, teenagers or heavy Internet users), the technique can work great. Web sites and software can be presented in the most natural environment—people's own monitors—and topics can be discussed that wouldn't be otherwise.

With higher bandwidth and more powerful desktop systems, the near future may also present new technologies for remote focus groups. High bandwidth may allow Internet telephony applications to be much better integrated with presentation software than current conference calls, thus allowing for much richer telephone focus groups. There may even soon be streaming video focus groups direct from people's desktops, which will eliminate many of the problems of chatroom- and phone-based interviews.

Nominal Groups

Focus groups are generally structured to get at the underlying values and desires of the participants. By having the participants discuss issues in depth among themselves, it's possible to understand what those issues are. However, it's difficult to understand their relative importance. Often, the most discussed or contentious issue appears to be the most important, when it could actually be of relatively minor importance compared to an issue that everyone agrees on. In order to minimize this, Delbecq, Van de Ven, and Gustafson devised the nominal group technique in the late 1960s and early 1970s. It's a highly structured group interview that mixes focus group and survey techniques.

Nominal groups are based on one primary idea: that people will be most honest about their feelings and most committed to their views if they're asked to write down their thoughts before group discussion begins. This reduces the possibility of group-think and the possibility of charismatic participants swaying the other participants' views.

The method is as follows:

  • First, a topic is introduced. This should be a topic that people can succinctly reply to in a short time. For example, "The qualities of silverware that you consider to be important."

  • The participants write down their responses to the topic.

  • Everyone reads and explains their responses, one at a time, while the responses are listed in a central location (like a whiteboard). There's no discussion of the responses, only clarification. The responses are then given unique identifiers (numbers or letters).

  • The participants then rank the responses by writing down the identifiers of responses they find important and rating each with a Likert scale (see Chapter 11).

  • The choices and ranks are then shared and discussion begins, using the ranking as the center point. (Discussion, which is the key in traditional focus groups, is sometimes even skipped or truncated in nominal groups.)

The method minimizes the amount of participant bias at the expense of depth. Since people can generally write a lot less in a given amount of time than they can say, this method produces less information about people's values and needs than a traditional focus group. Its benefits are that it's straightforward, needs little analysis afterward, and can prioritize issues. It can be used to set the agenda for a larger discussion (since the interpersonal interaction is limited, it can even be done via email before the actual focus group), but it's somewhat anemic when it comes to in-depth understanding.
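The tallying step at the heart of the method is simple enough to sketch. The following is a minimal, hypothetical illustration (the response IDs, ballots, and scoring rule are invented for the example): each participant writes down the IDs of the responses they find important with a 1-5 Likert rating, and the moderator totals the scores to produce the group's priority order.

```python
# Hypothetical sketch of the nominal group ranking step. Each
# participant rates only the responses they consider important
# on a 1-5 Likert scale; the moderator tallies the ballots.
from collections import defaultdict

def rank_responses(ballots):
    """ballots: list of dicts mapping response ID -> Likert score (1-5).
    Returns response IDs ordered by total score, highest first,
    breaking ties by how many participants rated the response."""
    totals = defaultdict(int)
    votes = defaultdict(int)
    for participant in ballots:
        for response_id, score in participant.items():
            totals[response_id] += score
            votes[response_id] += 1
    return sorted(totals, key=lambda r: (totals[r], votes[r]), reverse=True)

ballots = [
    {"A": 5, "C": 3},   # participant 1 rated responses A and C
    {"A": 4, "B": 2},   # participant 2
    {"C": 5, "B": 1},   # participant 3
]
print(rank_responses(ballots))  # ['A', 'C', 'B']
```

The resulting order is what the group then discusses, with the top-ranked responses as the center point.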

Friction Groups

When focus groups are recruited, careful attention is paid to make all the participants similar in key ways so that they can feel comfortable with each other and draw from similar experiences. Keeping a specific slice of the population homogeneous reduces a number of variables that the researcher has to consider when attempting to understand what affects their perspectives and the breadth of their experience.

Picking people who see the world the same way has some drawbacks. A discussion with a group of similar people may miss fundamental issues completely since everyone involved is speaking from a common set of assumptions. Moreover, when people feel that they have become members of a group, they may become reluctant to deviate from it and voice dissent. This leads to group-think, which is always dangerous in a focused interview situation.

Some marketing research companies have started to experiment with the focus group form in order to reduce these effects. One technique is called friction groups, and it's (somewhat) the evil twin of traditional focus groups. Instead of screening all the participants for the same values, participants are recruited to intentionally include divergent perspectives. The participants are then encouraged to discuss and defend their views.

When recruiting, the participants should be the same in every way with a single differentiating factor. This reduces the possibility that the differences in people's perspectives are caused by something other than the factor under research. A differentiating factor can be a view ("Real estate is a better investment than stocks") or a choice ("Dell versus Acer").

An equal number of people should be chosen to represent any given view so that no one feels that the others are ganging up on them. Thus, if you include two different views in an eight-person focus group, there should be four representatives of each. If you decide you want three views, schedule nine participants, with three representing each perspective.

Warning

Friction groups are a relatively new idea. There is not a lot of experience with them, and there are plenty of ways they can go badly wrong, since confronting divergent opinions can cause people to entrench their views rather than explain and defend them. Thus, they should be done with extreme care.

The recruiting is the same as for a regular group, but with extra care paid to verifying that all the key variables are met by all the participants, in terms of both their differences and their similarities.

Moderating friction groups is sure to be especially tough. The goal of a friction group is to understand people's values by seeing how they defend those values without becoming offended or angry. Creating such a situation among a group of strangers must be done carefully and with a thorough understanding of the motivations and mental models of the participants.

Virtual Usability Tests

It's not necessary to look over people's shoulders to get an idea of whether they can do a task or what problems they have with a product. The basic idea of a usability test is to gauge users' success with a product while allowing them to step back and comment about their experience with it. In standard usability tests, this is done by a moderator who gives users tasks to do and asks them questions as they're using the site. This technique works for any kind of product, whether it's a piece of software or a toaster. Web sites, however, allow the easy creation of this kind of "cognitive distance" without having someone nearby. Since they're downloaded from a central server, it's possible to create a kind of "mental wrapper" around the site using frames or another window and provide the kinds of tasks and questions a moderator would ask in the lab. Furthermore, it's possible to automate this process so that the testing can be done not with 6 or 10 people, but with 60 or 100 (or 6000 or 10,000). These data can lead to compound usability metrics that are formed from a much broader base than traditional usability tests.

Virtual usability tests (sometimes called "remote usability tests" by companies such as Vividence and NetRaker who offer the service) are more than merely a usability test performed "virtually"; they're a cross between a survey, a usability test, and log analysis. In these systems, a separate frame (or dialog box) contains questions and tasks. As the people perform the tasks, the system automatically tracks their progress (like a log file) and presents questions about their experience at appropriate times (such as when they feel they've finished a task or if they give up on it). In addition, it measures the speed at which they do the tasks and their success rate with those tasks.

In the end, the data are automatically analyzed, and various metrics are computed based on the collected data. Vividence, for example, can compute the proportion of people completing a task compared to the first item they clicked. This can lead to a deeper understanding of the effectiveness of the site navigation. They also offer a clickstream map with successful and unsuccessful paths highlighted.
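The core metrics such a system computes can be sketched in a few lines. This is a hypothetical illustration only: the event format (participant, task, elapsed seconds, completed) is an assumption standing in for whatever a real remote-testing product logs, not Vividence's or NetRaker's actual data model.

```python
# Hypothetical sketch of the automated analysis in a remote usability
# test: completion rate and median time-on-task per task, computed
# from logged task events. The event tuple format is an assumption.
from statistics import median

def task_metrics(events):
    """events: list of (participant, task, seconds, completed) tuples."""
    by_task = {}
    for participant, task, seconds, completed in events:
        by_task.setdefault(task, []).append((seconds, completed))
    report = {}
    for task, results in by_task.items():
        report[task] = {
            "completion_rate": sum(1 for _, done in results if done) / len(results),
            "median_seconds": median(s for s, _ in results),
        }
    return report

log = [
    ("p1", "log in", 42, True),
    ("p2", "log in", 95, False),   # gave up on the task
    ("p3", "log in", 38, True),
]
print(task_metrics(log))
```

With enough participants, aggregates like these are what feed the compound usability metrics described above.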

Although such a system can measure success rate and ask basic questions—the foundation blocks of usability testing—it lacks the flexibility and in-depth anecdotal data that a true usability test has. When analyzing usability testing videotapes, a good analyst will do a lot more than just measure the success rate for tasks and note down people's comments. A good moderator will observe problems as they happen and probe into their causes, focusing the usability test on the subjects that matter the most to the participant and the product. The analyst can then use these probes to try to understand the participants' mental models and assumptions, which can lead to a more fundamental and wider-ranging understanding of the user's experience.

This makes such systems great for straightforward questions ("How do people fail when they try to log in?") but fails to address more complex issues ("Why are people looking for knives in the fork section?"). Answering the basic questions is important and can highlight a lot of issues, but only goes part of the way to understanding the reasons behind a site's interaction problems. Moreover, it's difficult to understand users' reasoning or build mental models based on their answers to multiple-choice questions and their task successes. Although these systems also offer open-ended questions, those questions can't be tuned to the specific user's experience, so the replies (much as with open-ended survey questions) tend to be negative rants and of marginal utility.

It's also possible to conduct usability tests using the same kind of telepresence technology that allows for virtual focus groups. Video-conferencing facilities coupled with remote computer control technologies such as Netopia's Timbuktu Pro (www.netopia.com) or WebEx's desktop conferencing products (www.webex.com) allow for the moderator and user to be in different parts of the world and still maintain the spontaneity and depth of a face-to-face usability test.

Eye Tracking

Although it seems like a pretty low-level way to look at experience, knowing where people are looking at a given point in time can reveal a number of higher-level phenomena. Human eyes have the property that only the fovea, the center of the retina, can resolve fine detail. This means that whenever we want to examine something, we need to look directly at it. For example, try reading something on your right while looking left: you can still see the thing you want to read, but you can't read it because your eyes just don't have enough resolution there. In addition, the amount of time that people's eyes spend on a given object is pretty much proportional to how much time they spend thinking about it. Using eye-tracking equipment (such as that made by EyeTools, www.eyetools.com), it's possible to create a map of where someone looked and for how long, to a fine granularity. When this is mapped to what they were looking at (e.g., the Web pages they were looking at), it's possible to see what dominated their thought processes.

When eye tracking is used in a usability test, it can reveal how certain behaviors and mistakes are based on the visual emphasis and organization of interaction elements.

A basic analysis of eye-tracking information can be done visually. The patterns of use can be examined to see which parts of the interface inspired the most traffic. This provides immediate feedback about how people's attention is distributed among the interaction elements. Did the participants pay any attention to the ad banner on the left or to the navigation bar at the bottom? Understanding the reasons for differences in attention is tougher. Did people spend a long time looking at the navigation list because they were really interested in it or because they were confused by it and were trying to figure it out? Did they not look at the links on the left-hand side because they didn't see them or because they knew that section well and didn't need to look there? Eye tracking can't, in general, answer these questions and needs to be coupled with a technique that can.
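The "which parts of the interface got the most traffic" part of that analysis amounts to totaling fixation time per page region. The sketch below is hypothetical: the region names, pixel bounds, and fixation samples are invented for illustration, not taken from any real eye-tracking product's output.

```python
# Hypothetical sketch of basic fixation analysis: total dwell time per
# page region, given fixations as (x, y, milliseconds) samples. The
# region layout is an invented stand-in for a real page design.
REGIONS = {
    "banner":  (0, 0, 800, 100),    # (left, top, right, bottom) in pixels
    "nav":     (0, 100, 150, 600),
    "content": (150, 100, 800, 600),
}

def dwell_times(fixations):
    """Sum fixation durations for each region a fixation falls inside."""
    totals = {name: 0 for name in REGIONS}
    for x, y, ms in fixations:
        for name, (left, top, right, bottom) in REGIONS.items():
            if left <= x < right and top <= y < bottom:
                totals[name] += ms
                break
    return totals

gaze = [(400, 50, 220), (75, 300, 900), (75, 310, 450), (500, 400, 1300)]
print(dwell_times(gaze))  # {'banner': 220, 'nav': 1350, 'content': 1300}
```

The totals show where attention went; as noted above, explaining why it went there requires pairing the numbers with another technique.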

The biggest problems with eye tracking are that the equipment to perform the technique is expensive and cumbersome, and any interpretation besides the most basic ("the person looked at this spot the most") requires trained operators and analysts, which adds to the expense. That said, it could well become a powerful tool as the technology becomes more affordable.

Parallel Research

Any research process is susceptible to some kind of bias, whether it's in the way that participants are recruited, how questions are asked, or how the results are analyzed. Ideally, the people running a research study can understand where the biases can occur and can compensate for them (or at least warn about them). However, it's impossible to avoid them entirely.

The people who run studies often introduce idiosyncrasies into the process, and it's difficult to know where these biases occur, especially for the people who are responsible for them. One way to expose research skew is to compare the results of several studies on the same topic. Different groups of people will implement techniques in slightly different ways. These differences are likely to skew the results in different directions since the groups are not likely to include all the same biases. When the results are compared, it may be possible to see where they diverge.

In some cases, it's possible to compare your results to other research (as discussed in Chapter 15). If you're collecting geographically organized demographic information, it's possible to compare it to data produced by the U.S. Census. This can be a quick and straightforward way to check your data, but it rarely gets at biases that are specific to your product or to the questions that are being answered.

The best way to see how your own research and product skews is to run your own parallel research studies. This doubles the cost of doing the research (and the amount of data that need analysis), but it can provide a valuable perspective.

A simple parallel research technique is to run two studies simultaneously in-house. Such studies are carried out by two (or more) teams who use a "clean room" approach to the research. The teams do the research entirely independently of each other, not discussing their methods or results, but working from the same set of general questions. Each team will likely use a different technique and will analyze the data in a different way. It's also possible to have each team analyze the other team's data, as well as their own, and compare the results afterward.
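The comparison step afterward can start as something very simple: lining up the two teams' numbers question by question and flagging where they diverge. The sketch below is hypothetical; the questions, proportions, and 10-point threshold are invented for illustration, and real comparisons would also want a significance test.

```python
# Hypothetical sketch of comparing two parallel studies: flag the
# questions where the teams' measured proportions diverge by more
# than a chosen threshold, largest gaps first.
def divergent_findings(team_a, team_b, threshold=0.10):
    """team_a, team_b: dicts mapping question -> proportion agreeing."""
    flags = []
    for question in team_a.keys() & team_b.keys():
        gap = abs(team_a[question] - team_b[question])
        if gap > threshold:
            flags.append((question, round(gap, 2)))
    return sorted(flags, key=lambda f: f[1], reverse=True)

a = {"finds search useful": 0.72, "trusts checkout": 0.55, "likes layout": 0.60}
b = {"finds search useful": 0.70, "trusts checkout": 0.31, "likes layout": 0.48}
print(divergent_findings(a, b))
# [('trusts checkout', 0.24), ('likes layout', 0.12)]
```

The flagged questions are exactly the places where some team-specific bias is likely hiding and where a closer look at each team's method is warranted.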

If there aren't resources to run two sets of research in-house, it's possible to run one set of research in-house and hire a consultant or specialist research firm to run another set of research. Or two firms can be hired and their results compared.

The technique requires having the resources for several completely independent research teams, which can become quite expensive. Moreover, although it increases the certainty of the results, and doubles the amount of anecdotal evidence and pithy stories, it does not double the quantity of findings.

A different use of parallel research attempts to separate phenomena that apply to all (or most) users from ones that are specific to a group or a task. Comparing the results of identical research with intentionally divergent groups of users or tasks can show which findings are common to most users and which appear only in a certain group's perspective or behavior.

Participatory Design

As much a philosophy as a set of techniques, participatory design was developed in the 1970s in Scandinavia as a means to democratize product design, and developed further in the 1980s as a complete design method (see the June 1993 special issue of Communications of the ACM for an in-depth examination of the topic).

At its core, it's an iterative design process that uses focus groups to outline needs, task analysis to distill those needs into specifications, and user advisory boards to maintain user input throughout the development process. Brainstorming, prioritization exercises, and frequent end-user review make up a big chunk of the rest.

IBM's Joint Application Design (JAD) and W. Edwards Deming's Total Quality Management (TQM) are methods that extensively use participatory design ideas. A common element that appears throughout most of these methods is the day-long workshop. During the workshop, representatives of the various stakeholders in the product development (people who are directly affected by or affect the product), including users, develop a consensus about the vision for the product and its needs. This is usually done through a focus group style of meeting that features a number of needs analysis and prioritization exercises. After the issues have been outlined, task analysis is used to decompose these issues into specific problems. These are then used as the basis for a group-written solution specification, which is then given to the development staff to implement.

At regular times thereafter, the process is repeated with prototype solutions, and the focus shifted to other products and problems.

The technique is excellent at creating solid solutions to the functional needs of users. Its weaknesses are similar to those of advisory boards: the users who participate will come to think like the members of the development team after a while, no longer being able to think outside the constraints of the development process. Moreover, it's quite easy for the process to limit itself to views expressed by user representatives, who are often not representative of the user population as a whole. And the techniques provide little guidance for determining the product's identity and financial responsibility. A participatory design panel with a narrow vision can easily miss the key elements that can make a product popular or profitable.




Observing the User Experience: A Practitioner's Guide to User Research
ISBN: 1558609237
