Focus Group Analysis


"Researchers must continually be careful to avoid the trap of selective perception."

—Richard A. Krueger, Focus Groups, p. 130

There are about as many ways of analyzing focus group information as there are analysts. Since the information is, by definition, qualitative and contextual, the focus of the analysis will depend on the purpose of the group. For some research projects, it will be critical to uncover the participants' mental models; for others, their firsthand experience may be what's most valuable. Still others may be completely focused on evaluating the competition.

The two fundamental processes of focus group analysis are collecting data and extracting trends. Although the bulk of data collection usually precedes the bulk of analysis, one does not strictly follow the other. Often, the two intertwine as trends cause a reconsideration of the data and patterns in the data emerge as trends.

Collecting Data

Focus groups produce a lot of information: transcripts, quotations, observer opinions, models, and videotapes. Organizing and prioritizing all this information is the first step in extracting trends from it. This begins with capturing the most ephemeral information, the gut-level trend observations of those who observed the groups. These first-order hypotheses help focus the data collection that happens later.

Capture Initial Hypotheses

The moderator, assistant moderator, and observers should be debriefed after every group. Since groups can blur together in people's memory as time passes, getting everyone's thoughts on each group immediately after it's over reduces the amount of disentanglement of ideas and experiences required later. Everyone's notes should be copied, and their observations should be collected through interviews. An effective way to organize and trigger people's memories is to walk through the discussion guide section by section, asking the moderator and observers to recall their thoughts about each part. What was unexpected? What was expected, but didn't happen? What attitudes did people display? What values did they espouse? What interesting statements did they make (and why were they interesting)? What trends did they observe? Which participants provided interesting feedback? What were the problems with the group? These observations often serve as the backbone of later analysis.

After the debriefing, the analyst should write down his or her memory of the events as completely as possible, then organize the observations in the debriefing notes into themes or issues. These can serve as categories in organizing the more formal analysis of the proceedings.

Transcribe and Code

The process of formally analyzing the focus group should begin with transcription. The traditional method is to hire a transcription service. The service will provide a document with every word every person said. This can be quite useful for rapidly pulling together a large number of quotations. Unfortunately, transcription services can be expensive and take a long time, and the transcripts can be unwieldy (it's not unusual for the transcript of a two-hour focus group to be 100 pages long). A simpler method is to watch the videotapes and transcribe just the parts that the analyst considers most important.

Even without formal transcription, carefully review the tapes. Merely remembering a situation can miss subtle behaviors. Words can be misquoted. Observers fall into group-think. Watching the original discussions can clarify ambiguities and reveal shades of meaning that are hidden when working from memory alone. The tapes need not be watched in the order they were made; the ones that you believe will be the most useful should be watched first. If there were a lot of groups in a series (say, five or more) and time is short, you can skip viewing the "dud" groups (though watching fewer than four tapes risks missing some key issues or revealing quotes).

As you're watching the tapes, you should be coding the comments. Coding is the process of categorizing responses in order to track trends. The codes should have short, descriptive names, and each should embody a single idea or trend that you're trying to follow. Codes should reflect the topics you're interested in studying. If you'd like to isolate the kinds of experiences people have in specific situations, you could code for different situations or experiences. If you want to understand people's priorities, you could have different codes for expressed preferences. You can start your list of codes with the topics that drove the writing of the discussion guide, adding others that may have come up during the debriefing. For example, if your original goals were "to understand the mental models people use when researching insurance" and "to collect stories of how people's existing insurance failed them" and you observed that people felt intimidated by their insurance company, then your initial set of codes could look like this.

SAMPLE TOP-LEVEL CODES FOR AN INSURANCE FOCUS GROUP

Model: How people understand the insurance claims process and the process by which they choose their insurance, including the parameters they base their choices on and the methods by which they evaluate the parameters.

Bad Story: Episodes where the process of picking insurance or the process of filing an insurance claim has been difficult or frustrating. This can include mistaken expectations, disappointment, or even the insurer's outright failure to deliver on promises. If there are positive stories, these can be coded as "Good Story."

Intimidation: Episodes where the participant felt intimidated by his or her insurance company, scared by the process, or felt that the process was not under his or her control.


Codes can, of course, be divided or combined if the need arises. If you decide that you need to differentiate between situations where people are intimidated by their insurance provider and situations where they're intimidated by the process, it may be appropriate to create several subcategories. Don't get overzealous about it, however. Although some social research studies are known to code hundreds of different kinds of events and utterances, a dozen categories are sufficient for most user experience research.
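The splitting and combining of codes can be mimicked in software if you keep your code sheet electronically. The following Python fragment is a minimal sketch, not a prescribed tool; the code names and descriptions are invented for the insurance example. Dotted names mark subcategories, so a code can be subdivided without disturbing the rest of the sheet.

```python
# Hypothetical code sheet for the insurance example. Dotted names
# mark subcategories, so codes can be split or combined later.
codes = {
    "model": "How people understand choosing insurance and filing claims",
    "bad_story": "Difficult or frustrating insurance episodes",
    "good_story": "Positive insurance episodes",
    "intimidation": "Participant felt intimidated or not in control",
}

def split_code(codes, parent, subcodes):
    """Replace one code with several subcategories, dropping the parent."""
    codes.pop(parent, None)
    for sub, desc in subcodes.items():
        codes[parent + "." + sub] = desc
    return codes

# Differentiate intimidation by the provider from intimidation by the process.
split_code(codes, "intimidation", {
    "provider": "Intimidated by the insurance company itself",
    "process": "Intimidated by the claims or selection process",
})
```

Combining is the reverse operation: move the subcategories' entries back under a single parent name when the distinction turns out not to matter.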

Note

A more formal method of creating a coding structure is described in Chapter 13. There are also a number of software packages that can assist in the process.

With code sheet in hand, watch the video. When something that matches one of your codes or seems interesting and relevant comes up, note it down, keeping track of who said it and when during the group it was said. If someone says something that really encapsulates an idea or fits into the coding scheme, transcribe it. The key to transcribing is capturing the meaning of people's words, so although you should aim for exact transcription, don't shy away from paraphrasing, dropping unnecessary words, or adding parenthetical expressions to provide context. However, always make it clear which words are yours and which are the participants'.
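One way to track who said what and when, and to keep verbatim quotes clearly separated from the analyst's paraphrases, is a simple structured record. This Python sketch is illustrative only; the fields and sample entries are assumptions, not a required format.

```python
from dataclasses import dataclass

@dataclass
class CodedQuote:
    group: str        # which group in the series
    participant: str  # who said it
    minute: int       # roughly when during the session
    code: str         # which code it matches
    text: str         # the quote or paraphrase
    verbatim: bool    # True for exact words, False for an analyst paraphrase

log = [
    CodedQuote("group-1", "P3", 42, "intimidation",
               "I just sign whatever they send me.", True),
    CodedQuote("group-1", "P5", 47, "bad_story",
               "(described a claim that was denied twice before being paid)",
               False),
]

# Participants' exact words stay clearly separated from paraphrase.
verbatim_quotes = [q.text for q in log if q.verbatim]
```

The `verbatim` flag is the machine equivalent of always making it clear which words are yours and which are the participants'.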

When you've transcribed and coded all the tapes you've chosen, go back through the transcriptions and check for accuracy in coding, revising your code system and recoding if necessary.

Warning

The analysis of focus group data can be a contentious process. The key to providing good, believable analysis is to create distance between the analyst and the product. Even if you're deeply involved in the product's creation, now is the time to be as objective as possible. Do not let expectations, hopes, or any conclusions you came to while watching or running the groups affect your perception of the participants' statements and behavior. Be ruthless. When you're analyzing the data, pretend that you've never seen these groups before and that you know nothing about either the site or the topic.

Extracting Trends

With the coded list of quotations in hand, it's time to find a deeper meaning in the groups. This section describes a thorough, fairly rigorous approach. Sometimes time or resource pressures make it difficult to do the process to the full extent. In such situations, simplification is perfectly acceptable, although it should be done with care so the results aren't skewed to the point of being useless.

Focus group analysis techniques are similar to those used in contextual inquiry research. Observations are clustered and labeled and become the basis for determining the trends in people's behavior and attitudes. Those trends are then fleshed out and hypotheses are made to explain them, using the data to back them up.

Start with your category codes. The codes represent your original intentions and the trends that you observed as you were examining the data. Flesh these out and revise them as necessary to fit your new understanding of the data. So if you've discovered that people aren't really intimidated by insurance companies as much as frustrated by their response in a claim situation, you could rewrite and add a code for situations where people were frustrated (or, if you wanted to concentrate on the claim experience, you could create a new code that labeled all comments about claims).

Next, divide the observations your moderators and observers made, along with your first-cut, gut-level analysis, into trends and hypotheses. Then try to align these trends with the categories you defined earlier, removing overlapping ideas and clarifying the categories. Save the hypotheses for later, when you'll try to explain the trends.

Now rearrange the quotations and observations according to the revised code, organizing everything by what code it falls under (you can organize by moving chunks of text around in a word processor or writing them on Post-its). This gives you the opportunity to see if there are ways to organize previously uncoded observations and to place observations and quotations into multiple categories.
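This rearrangement can be done mechanically: file each observation under every code it carries, so one quotation can appear in several clusters. A minimal Python sketch with invented data:

```python
from collections import defaultdict

# Hypothetical coded observations; each may carry more than one code.
observations = [
    ("I just sign whatever they send me.", ["intimidation"]),
    ("They lost my paperwork twice.", ["bad_story", "frustration"]),
    ("I compare deductibles before anything else.", ["model"]),
]

clusters = defaultdict(list)
for text, quote_codes in observations:
    for code in quote_codes:
        clusters[code].append(text)  # same quote can land in several clusters
```

Whether you do this with software, a word processor, or Post-its, the principle is the same: an observation is not forced into a single category.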

Your clusters should now be showing identifiable trends. Some will be expected and clear. Others will be surprising. There may be situations where you expected to find material to support an idea, but didn't. Try to organize the trends by similarity as much as possible. You can even label groups of trends.

Note

If time is tight, coding can be skipped to expedite the process; the moderator, assistant moderator, and observers then base the analysis on gut-level impressions. Although this captures the most obvious and severe problems, it's susceptible to group-think among the evaluators, which can lead to unnecessary emphasis on certain topics while less-discussed topics are ignored.

The research goals define which trends are important. A focus group series that's concerned with uncovering mental models will be more focused on the language and metaphors people use than a series that's trying to prioritize features, which may be more interested in the participants' interests and self-perception. Of course, since people's perceptions, values, and experiences are intertwined, there will rarely be a situation where an observation perfectly fits a trend or a trend only affects a single aspect of people's experience. In practice, the borders between trends will be fuzzy, and what exactly defines a trend may be unclear. However, in good research, even if the borders are fuzzy the middle is solid.

Here is a list of some of the things that you may want to extract from your data.

  • Mental models. Mental models are related to metaphors. They're mental representations of how we understand the way the world works. For example, George Lakoff and Mark Johnson describe the "time is money" metaphor in their classic Metaphors We Live By: when talking about time, English speakers will often use analogies that equate time with money—time is "made," "wasted," "spent," and so forth. This may be important for the maker of a piece of collaborative software to know since it may make the creation of an information architecture and the naming of interface elements easier. Some people give software a personality and think of it, in some sense, as a helper, friend, or confidant. Of course, mental models have limits. Time can't really be "earned," except in prison, so the model should not be taken too literally. On a more mundane level, certain people may not realize that they can challenge a claim adjuster's estimate. Their mental model doesn't include the concept of arbitration or second opinions.

  • Values. What do people like or dislike? What are the criteria that they use when they decide whether they like or dislike something? What do they consider important? What process do they use to create their values? How do their values interrelate? People's values determine a lot about the way they experience a product. When someone is excited by the content of a Web site, he or she may be willing to overlook all kinds of interaction problems, to a point. The same content presented another way may bore the person. People's value systems consist of likes, dislikes, beliefs, and the associations they make between these elements and the objects, people, and situations in their lives.

  • Stories. Stories are a powerful way to understand the intricacies of people's experiences. They provide details about people's assumptions, the sequences in which they do things, how they solve problems (and what problems they have), and their opinions. Stories can illuminate and clarify many uncertainties about product development all at once. The similarities and differences between stories told by different people can reveal what kinds of mental models the target audience shares and what kinds of individual idiosyncrasies the development team can expect.

  • Problems. Focus group brainstorms can quickly produce extensive lists of problems. Even without formal brainstorming, people's natural tendency to commiserate in a group of peers reveals lots of problems.

  • Competitive analysis. What do people dislike about competitive products? What do they find important? What are the competitive products?

This list is by no means exhaustive. Where one focus group may be interested in the sequence in which people typically do tasks, another may try to uncover what makes a good logo for the company.

Here are some tips for getting the most useful information out of the data.

  • Concentrate on the methods by which people come to their decisions. The actual decisions are important, too, but the reasons behind them can be even more revealing.

  • Note people's terminology. Products that speak the same language as their users are more easily accepted. Verbatim transcripts can really help with this.

  • Watch out for contradictions. How people say they behave or what they say they want may not correspond to what they actually do or how they'll actually use a product.

  • Watch for situations where people change their mind. Knowing that someone changed his or her mind can reveal a lot about what he or she values.

  • Popularity does not necessarily mean importance—what people consider to be important may not be what they talk about—but it is a strong indicator and should be noted. Likewise, the lack of popularity does not denote that a phenomenon is unimportant, but if something is only mentioned by a couple of people it should be considered a weak trend at best.

You may also want to do some quantitative analysis on the data. Although the numeric results of focus group data are not, on the whole, representative of your user population, they can be compared to each other. If you run two focus group series with similar groups before and after a redesign, and use the same discussion guide, moderator, and analyst, then comparing the number of certain problems or perceptions may be a valid way to see if people's experience has changed. However, the process needs to be closely controlled, and the results cannot be extrapolated beyond a comparison of the two groups.
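Such a controlled before-and-after comparison amounts to counting coded mentions in each series and looking at the differences. The numbers below are invented for illustration; as noted above, they support only a comparison between the two series, not generalization to the whole user population.

```python
from collections import Counter

# Hypothetical counts of coded mentions in two matched series,
# run before and after a redesign with the same guide and moderator.
before = Counter({"intimidation": 14, "bad_story": 9, "model": 6})
after = Counter({"intimidation": 5, "bad_story": 8, "model": 7})

# Negative numbers mean fewer mentions after the redesign.
change = {code: after[code] - before[code]
          for code in before.keys() | after.keys()}
```

Here a large drop in "intimidation" mentions would suggest the redesign changed that aspect of people's experience, while the near-unchanged counts for the other codes would suggest no effect.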

In addition, when analyzing any focus group data, potential bias has to be taken into account and made explicit. The recruiting process, the phrasing of the questions, group-think, and the moderator and analyst's personal experiences may all have affected the answers. Even external events may affect people's perspectives (for example, people's views of leisure travel may be different on the day that the news picks up a report about transportation accidents than the day before). Be on the lookout for bias that may have been introduced, and make it clear in the report when it may exist and in what form.

Making Hypotheses

Explaining the causes of trends can be a difficult process. Each phenomenon may have a number of potential causes, and there may be data that support conflicting hypotheses. The analyst must make a judgment call on whether to propose hypotheses for people's behavior and beliefs, or whether to just state the beliefs and let the development team develop their own theories. This is an issue of debate in the industry, and there are no hard and fast rules about when it's appropriate. Often, the trends that are observed in focus groups can point at deep social and psychological issues that are difficult, if not impossible, to explain or justify (for example, "People claim to dislike advertising"). It's often sufficient to know the dimensions of a problem to solve it rather than knowing its exact causes.

Sometimes, however, analyzing the potential causes of problems and comparing their magnitudes to one another can make finding solutions easier. So if people are both intimidated by and angry at their insurance company, knowing why they're intimidated and angry can help determine which is the more important problem to tackle. If root causes are not obvious from the collected data, it may be appropriate to do additional research. Thus, if the exact sequence that someone goes through when he or she is trying to pick an insurance company is not obvious from the focus groups and is important to the product's functionality, a round of field-based task analysis may be appropriate. Likewise, if it's important to know the exact magnitude of people in the target audience who are intimidated by the process of buying insurance, a statistically valid survey can be run.

As with all user experience research, focus groups may produce more questions than answers, but the questions may be better than those asked before the groups were run.




Observing the User Experience: A Practitioner's Guide to User Research
ISBN: 1558609237
Year: 2002