Combining


By themselves, all the techniques described in this book are useful and informative, but they can be even more powerful when used together, when the output of one becomes the input of another. Using multiple techniques to understand the answer to a question allows you to triangulate on a problem from different perspectives or delve deeper into issues.

Focus Groups and Diaries

One of the most useful ways to use diaries is in a hybrid with a series of linked focus groups or interviews. The diaries serve as the trigger for discussion, and they maintain context in between the focus groups.

For example, a group of users was recruited for a series of four focus groups over the course of three months to study a search engine. The goal of the focus groups was to understand the issues that experienced search engine users would face as they learned to use the search engine. At the time many of the product's users were "defectors" from other search services that they had used for a long time. The company was interested in attracting more such users, but it was important to know what kinds of issues they experienced in order to be able to maximize the value of their experience and increase their retention (search engines were considered to be a commodity by most users, and when one failed just once, the user was likely to abandon it immediately for another one).

During the first meeting the group was introduced to the concept of the research. The object of the focus group was to understand the problems that the users experienced with their current search engine choices and to uncover what they valued in search services in general (since this would be tracked throughout the study). They were then instructed on the diary system being used (a relatively unstructured format that would be emailed to them twice a week) and asked to keep diaries in between the meetings.

A month later, the second meeting convened. Throughout the process, the research team had kept track of the diaries and the issues that appeared in them. Discussion of these issues made up the bulk of the second focus group. The participants clarified their thoughts and shared greater detail about their experiences in the previous month.

The third focus group, a month later, was designed to concentrate on the ongoing learning that had happened. Reading the second month's diaries, the researchers saw how the search engine was used and got an idea of what the participants had learned. The meeting focused on clarifying these issues and understanding the mental models the people had built about how the service worked and what they could expect from it. The desirability of proposed features was probed and other search services were compared.

Between the third and fourth meetings, the diary frequency dropped to once per month, and the diary forms became more like usability tests in order to expose the participants to specific features. The fourth focus group concentrated on summarizing the participants' experiences with the various features of the service and on introducing mock-ups of several feature prototypes designed specifically to address issues raised in the previous focus groups.

The back-and-forth play between the focus groups and the diaries gave the developers a richer understanding of the issues faced by transitional users who were experienced with other systems.

It's similarly possible to combine diaries with interviews. Using the knowledge of people's diary-reported experiences to structure follow-up interviews targets the interview topics better than a standard script alone.

Observational Interviews and Usability Tests

Traditional usability testing is almost exclusively task based. The process is driven by the desire to have people perform tasks and then to use their performance on those tasks as the basis for understanding interaction and architecture problems with the product. This works fine, but underutilizes the potential of interviewing an actual (or potential) user of your product. These are people who can tell you more about their experience than just whether they have trouble saving their preferences. What attracts them? How well do they understand the product's purpose? Which features appear useful to them? Why?

Combining usability testing with in-depth attitudinal questions such as those that could be found in a contextual inquiry process or an observational interview can create a richer set of data to analyze. The description of a hybrid interview in Chapter 10 is one kind of such interview. In that process, the participants are asked for their impressions of the site as a whole before they are asked to use it. By allowing for a subjective, impressionistic component, it's possible to uncover more than just what people can't use and what they can't understand; it's possible to understand what attracts them and what their values are. Motivation can be a powerful force in user experience. Desirability and affinity are linked to motivation. A technique that allows the researcher to simultaneously understand what is comprehensible, functional, desirable, and attractive can uncover why a product "works" for the audience, or why it doesn't.

For example, an online furniture site was interested in how their prototype would be accepted by potential customers. Time was of the essence, and little other user research had been done. Since the prototype was already in development, it was too late to research the target audience's needs and desires in detail. A group of potential customers had already been invited to usability-test the prototype. This created an opportunity to do some research about people's desires, experiences, and expectations before they were exposed to the ideas or implementation of the product. The participants were recruited without revealing the product or the research criteria, and the research was held in an anonymous rented office space.

The first half hour of every 90-minute session was devoted to a series of general interview questions. All participants were asked to discuss their experience buying furniture, the difficulties they encountered, and the criteria they used in choosing a furniture retailer and furniture. This interview, though brief, provided a perspective on the customers and the product that had not been seen before and instituted profound changes in the positioning of the product (though it didn't prevent the eventual failure of the company).

Unfortunately, the process of asking people to examine and vocalize what attracts them can bias their use of the product. They may concentrate more on the interface and notice aspects of the product that they would otherwise pass over. This can lead to a skewing of the results: what was initially confusing could become understandable, or what was attractive at first blush may become overdone with extended exposure. Thus, combining usability testing and interviews does not replace rigorous usability testing, but it adds a valuable element when time and resources are short and when rapid, deep (if not as accurate) research is needed.

Surveys and Focus Groups

The classic marketing research mix is an interrelated series of surveys and focus groups. Surveys answer the "what" questions about your audience, whereas focus groups answer the "why." By interleaving them, it's possible to use one technique to answer questions posed by the other.

Surveys reveal patterns in people's behaviors. The causes for these behaviors are then investigated with focus groups, which in turn suggest other trends to be verified with surveys. It's a circular pattern that alternately flushes out interesting behaviors and attempts to explain them.

  • First, a general demographic/technographic/webographic survey is fielded to provide the basic audience profile.

  • Focus group participants are then recruited based on this profile. The sessions explore their general values and their experiences with the problems the product is supposed to solve.

  • The focus groups bring up questions about functionality and competition, which are then investigated by another round of surveys.

  • And so on.

This is a tried-and-true technique, and it has been used for many years to create the products and advertising that you're familiar with. Companies like Clorox run a never-ending cycle of surveys and focus groups, each folding into the next, with the results then applied to the products. The downside of these techniques is that they reveal little about people's abilities or actual needs. Marketing and consumer product development are often more about perceived needs than real needs, and perceived needs are what these techniques are best at uncovering. But people's views of themselves are not necessarily accurate and need to be tempered with a more objective understanding of their actual state. Thus, the output of this type of research should be seen as a valuable component in understanding people's motivations and desires, but it should be coupled with ability- and need-assessment techniques, such as usability tests and contextual inquiry.

Log Files and Usability Tests

Usage data analysis can be one of the best pointers toward interaction problems, which can then be examined and analyzed with usability testing. Log files provide a direct record of how people behave and can reveal where unexpected behaviors occur. For example, suppose a task requires submitting information to a script that produces an error page whenever the information does not fit the required format. If a large proportion of users are shown the error page, it implies that the task is not defined clearly, that the options are not presented well, or that users misunderstand the process. Log files by themselves may not reveal which of these three situations is occurring, or complex analysis may be required to extract any relationship. A usability test that concentrates solely on that single feature, however, may quickly reveal the comprehension and interaction issues.
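Screening logs for a pattern like this takes only a short script. The sketch below is one plausible way to do it in Python, assuming Apache/Nginx "combined"-style access logs; the /signup form and its /signup/error page are invented for illustration, not taken from any particular product.

```python
import re
from collections import Counter

# Matches the start of an Apache/Nginx "combined" access-log line:
# host, identity, user, [timestamp], "METHOD path protocol", status.
LINE = re.compile(
    r'\S+ \S+ \S+ \[.*?\] "(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})'
)

def error_rate(log_lines, submit_path="/signup", error_path="/signup/error"):
    """Fraction of form submissions that led to the error page.

    submit_path and error_path are hypothetical examples; substitute
    the real endpoints of the feature being investigated.
    """
    counts = Counter()
    for line in log_lines:
        m = LINE.match(line)
        if not m:
            continue  # skip malformed lines rather than failing
        path = m.group("path").split("?")[0]  # ignore query strings
        if m.group("method") == "POST" and path == submit_path:
            counts["submissions"] += 1
        elif path == error_path:
            counts["errors"] += 1
    if counts["submissions"] == 0:
        return None  # no submissions observed; rate is undefined
    return counts["errors"] / counts["submissions"]
```

A high ratio here doesn't say *why* people fail, only that they do; that is the cue to schedule the focused usability test described above.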

Likewise, it may be possible to examine log files based on information collected in usability testing. A hypothesis about how people behave can be formed by observing them using an interface, and then tested by analyzing log files to see if people actually seem to behave that way "in real life."
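As a sketch of that direction: suppose a usability test suggested that people who hit a validation error abandon the form rather than resubmitting. Given an ordered stream of (visitor, event) pairs extracted from the logs, the hypothetical helper below estimates how often an error is in fact followed by another submission. The event names and data shape are assumptions for illustration.

```python
def retry_rate(events):
    """Estimate how often visitors resubmit after seeing an error.

    events: iterable of (visitor_id, event) pairs in time order,
    where event is "submit" or "error". Returns the fraction of
    visitors who saw an error and later submitted again, or None
    if no visitor saw an error.
    """
    saw_error = set()
    retried = set()
    for visitor, event in events:
        if event == "error":
            saw_error.add(visitor)
        elif event == "submit" and visitor in saw_error:
            retried.add(visitor)  # a submission *after* an error
    if not saw_error:
        return None
    return len(retried) / len(saw_error)
```

A low retry rate in the real logs would support the abandonment hypothesis formed in the lab; a high one would argue against it.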

Both techniques help you understand the low-level interaction that people have with an interface. Usability testing is the less objective, more explanatory technique, whereas log analysis is objective and descriptive, but can provide little information about ability, understanding, or motivation. They complement each other well.

Task Analysis and Usability Tests

Task analysis is the process of decomposing how a task is done. Usability testing reveals how someone actually performs a task. Thus, a task analysis can serve as the "ideal" model that is verified by usability testing. Likewise, the format and goal of usability testing can be based in the model defined by task analysis.
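One way to make the "ideal versus observed" comparison concrete is to encode the task analysis as a step sequence and score each usability-test session against it. The step names below are invented for illustration, and Python's difflib is used only as one plausible similarity measure.

```python
from difflib import SequenceMatcher

# Hypothetical "ideal" path produced by a task analysis; the step
# names are invented, not taken from any real product.
IDEAL = ["open claim", "view shop list", "pick shop", "confirm"]

def deviation(observed, ideal=IDEAL):
    """Score an observed step sequence against the ideal one.

    Returns 0.0 when the session followed the ideal path exactly
    and 1.0 when it shared no steps with it.
    """
    return 1.0 - SequenceMatcher(None, ideal, observed).ratio()
```

Sessions with high deviation scores mark the places where the decomposed model and actual behavior diverge, and those are the steps worth probing in the next round of testing.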

For example, task analysis revealed that in auto repair shop selection, the lists of possible shops were determined by momentum ("we've always taken all our cars there"), location ("it's near my office"), authority ("I always take it to a Ford dealership"), or recommendation ("the insurance company recommended it"). Choices were few: people would have one, maybe two candidates, and they would choose the one that was generally considered first. Moreover, once people chose a shop, they stuck with it even if problems arose. A literal implementation of this in a claim management site initially presented these four options in a repair shop selection module. However, even though these were the criteria by which people chose repair shops, they did not think of it in those terms. Usability testing revealed that presenting these as options confused the participants. They understood the system as something more of an authority than they were, so they preferred to always let it recommend a shop.

These kinds of effects are difficult to predict when decomposing a task into its components, but they're critical to the functionality of the product. Talk-aloud usability tests of prototypes and existing products can quickly zero in on the most prominent issues affecting people's ability to complete a task, and so verify the accuracy of the task analysis in general.

Ultimately, each of these techniques is really just a starting point. They're all different ways of observing and interpreting many of the same phenomena. By using these techniques and changing them, you get a better perspective on the goals and needs of your research and the strengths and limitations of the techniques. When you know what you want to find out and you know how the methods work, you can tune your research so it gets at exactly the information you need.




Observing the User Experience: A Practitioner's Guide to User Research