10.7. User Research Sessions

Face-to-face sessions involving one user at a time are a central part of the user research process. However, these sessions are also expensive and time-consuming. We've learned that you tend to get the most value out of these sessions by integrating two or more research methods. We typically combine an interview with either card sorting or user testing. This multimethod approach makes the most of your limited time with real users.

10.7.1. Interviews

We often begin and end user research sessions with a series of questions. Starting with a brief Q&A can put the participant at ease. This is a good time to ask about her overall priorities and needs with respect to the site. Questions at the end of the session can be used to follow up on issues that came up during the user testing. This is a good time to ask what frustrates her about the current site and what suggestions she has for improvement. This final Q&A brings closure to the session. Here are some questions we've used for intranet projects in the past.

Background

  • What do you do in your current role?

  • What is your background?

  • How long have you been with the company?

Information use

  • What information do you need to do your job?

  • What information is hardest to find?

  • What do you do when you can't find something?

Intranet use

  • Do you use the intranet?

  • What is your impression of the intranet? Is it easy or hard to use?

  • How do you find information on the intranet?

  • Do you use customization or personalization features?

Document publishing

  • Do you create documents that are used by other people or departments?

  • Tell us what you know about the life cycle of your documents. What happens after you create them?

  • Do you use content management tools to publish documents to the intranet?

Suggestions

  • If you could change three things about the intranet, what would they be?

  • If you could add three features to the web site, what would they be?

  • If you could tell the web strategy team three things, what would they be?

In determining what questions to ask, it's important to recognize that most users are not information architects. They don't have the understanding or vocabulary to engage in a technical dialogue about existing or potential information architectures. If you ask them if they like the current organization scheme or whether they think a thesaurus would improve the site's usability, you'll get blank stares or made-up answers.

10.7.2. Card Sorting

Want to get your hands on some of the most powerful information architecture research tools in the world? Grab a stack of index cards, some Post-it notes, and a pen. Card sorting may be low-tech, but it's great for understanding your users.

What's involved? Not a whole lot, as you can see in Figure 10-8. Label a bunch of index cards with headings from categories, subcategories, and content within your web site. About 20 to 25 cards is usually sufficient. Number the cards so that you can more easily analyze the data later. Ask a user to sort this stack of cards into piles that make sense to him, and to label those piles using the Post-it notes. Ask him to think out loud while he works. Take good notes, and record the labels and contents of his piles. That's it!
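Before running your first session, it helps to settle on a consistent way to record each participant's piles. The sketch below shows one possible structure, assuming numbered cards as described above; the participant ID, pile labels, and card numbers are invented examples, not data from any real study.

```python
# Hypothetical sketch: one way to record a single participant's card sort.
# The pile labels come from the Post-it notes the participant wrote;
# the numbers are the card numbers you wrote on the index cards.
participant_sort = {
    "participant_id": "P01",
    "piles": [
        {"label": "Getting Started", "cards": [3, 7, 12]},
        {"label": "Account Stuff", "cards": [1, 5]},
    ],
}

def cards_in_pile(sort, label):
    """Return the card numbers the participant placed under a given pile label."""
    for pile in sort["piles"]:
        if pile["label"] == label:
            return pile["cards"]
    return []

print(cards_in_pile(participant_sort, "Account Stuff"))  # [1, 5]
```

Recording every session in the same shape makes the later quantitative analysis (co-occurrence counts, affinity models) a matter of iterating over a list of these records.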

Figure 10-8. Sample index cards


Card-sorting studies can provide insight into users' mental models, illuminating the ways they often tacitly group, sort, and label tasks and content in their own heads. The simplicity of this method confers tremendous flexibility. In the earliest phases of research, you can employ exploratory, open-ended card-sorting methods like the one we just described. Later on, you can use closed card sorts in which users rely on your predefined labels to question or validate a prototype information architecture. You can also instruct users to sort the cards according to what's most important to them; they can even have a pile for "things I don't care about." The permutations are infinite. Consider the following dimensions of card sorting:


Open/closed

In totally open card sorts, users write their own card and category labels. Totally closed sorts allow only pre-labeled cards and categories. Open sorts are used for discovery. Closed sorts are used for validation. There's a lot of room in the middle. You'll need to set the balance according to your goals.


Phrasing

The labels on your cards might be a word, a phrase, a sentence, or a category with sample subcategories. You can even affix a picture. You might phrase the card labels as a question or an answer, or you may use topic- or task-oriented words.


Granularity

Cards can be high-level or detailed. Your labels might be main-page categories or the names of subsites, or you may focus on specific documents or even content elements within documents.


Heterogeneity

Early on, you may want to cover a lot of ground by mixing apples and oranges (e.g., name of subsite, document title, subject heading) to elicit rich qualitative data. This will really get users talking as they puzzle over the heterogeneous mix of cards. Later, you may want high consistency (e.g., subject headings only) to produce quantitative data (e.g., 80 percent of users grouped these three items together).


Cross-listing

Are you fleshing out the primary hierarchy of the site or exploring alternate navigation paths? If it's the latter, you might allow your users to make copies of cards, cross-listing them in multiple categories. You might also ask them to write descriptive terms (i.e., metadata) on the cards or category labels.


Randomness

You can strategically select card labels to prove a hypothesis, or you can randomly select labels from a pool of possible labels. As always, your power to influence outcomes can be used for good or evil.


Quantitative/qualitative

Card sorting can be used as an interview instrument or as a data collection tool. We've found it most useful for gathering qualitative data. If you go the quantitative route, be careful to observe basic principles of the scientific method and avoid prejudicing the outcome.

Due to the popularity of this research method, several companies have developed software to support remote card sorting (see Figure 10-9 for an example), so you don't even need to be in the same room as the users! Did we mention this method is flexible?

Figure 10-9. MindCanvas remote research software


Just as there are many ways to do card sorting, there are many ways to analyze the results. From a qualitative perspective, you should be learning and forming ideas during the tests, as users talk out loud about their reasoning, their questions, and their frustrations. By asking follow-up questions, you can dig into some specifics and gain a better understanding of opportunities for organizing and labeling content.

On the quantitative side, there are some obvious metrics to capture:

  • The percentage of time that users place two cards together. A high level of association between items suggests a close affinity in users' mental models.

  • The percentage of time a specific card is placed in the same category. This works well in closed sorts. For open sorts, you may need to normalize the category labels (e.g., Human Resources equals HR equals Admin/HR) to make this work.
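The first metric, pairwise co-occurrence, is straightforward to compute once each participant's piles are recorded as lists of card numbers. Here is a minimal sketch; the three participants' sorts are invented data for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical data: each inner list is one participant's sort,
# each tuple is one pile of card numbers.
sorts = [
    [(1, 2, 3), (4, 5)],   # participant 1
    [(1, 2), (3, 4, 5)],   # participant 2
    [(1, 2, 3, 4), (5,)],  # participant 3
]

def cooccurrence(sorts):
    """Fraction of participants who placed each pair of cards in the same pile."""
    counts = Counter()
    for piles in sorts:
        for pile in piles:
            for pair in combinations(sorted(pile), 2):
                counts[pair] += 1
    return {pair: n / len(sorts) for pair, n in counts.items()}

scores = cooccurrence(sorts)
print(scores[(1, 2)])  # all three participants grouped cards 1 and 2 -> 1.0
print(scores[(4, 5)])  # two of three participants grouped cards 4 and 5
```

A score near 1.0 suggests a strong affinity in users' mental models; these pairwise scores are exactly what feeds the affinity diagrams discussed next.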

These metrics can be represented visually in an affinity modeling diagram (see Figure 10-10) to show the clusters and the relationships between clusters. You may want to plug your data into statistical analysis software and have it generate the visuals automatically. However, these automatically generated visualizations are often fairly complex and hard to understand. They tend to be better for identifying patterns than for communicating results.

Figure 10-10. An automatically generated affinity model (prepared for Louis Rosenfeld and Michele de la Iglesia by Edward Vielmetti using InFlow 3.0 network analysis software from Valdis Krebs)


When you're ready to present research results to your clients, you may want to create a simpler affinity model by hand. These manually generated diagrams provide an opportunity to focus on a few highlights of the card-sorting results.

In Figure 10-11, 80 percent of users grouped the "How to set DHTML event properties" card in the same pile as "Enterprise Edition: Deployment," suggesting they should be closely linked on the site. Note that "Load balancing web servers" is a boundary spanner and should probably be referenced in both categories on the site.

Figure 10-11. A hand-crafted affinity model


When used wisely, affinity models can inform the brainstorming process and are useful for presenting research results and defending strategic decisions. However, it's important to avoid masking qualitative research with quantitative analysis. If you conducted only five user tests, the numbers may not be statistically meaningful. So although card sorts produce very seductive data sets, we've found them most useful for the qualitatively derived insights they provide.

10.7.3. User Testing

User testing goes by many names, including usability engineering and information-needs analysis. Whatever you call it, user testing is fairly straightforward. As usability expert Steve Krug of Advanced Common Sense likes to say, "It's not rocket surgery."

In basic user testing, you ask a user to sit in front of a computer, open a web browser, and try to find information or complete a task using the site you're studying. Allowing roughly three minutes per task, ask the user to talk out loud while he's navigating. Take good notes, making sure to capture what he says and where he goes. You may want to count clicks and bring a stopwatch to time each session.

Once again, there are endless ways to structure this research. You may want to capture the session on audio or video, or use specialized software to track users' clickstreams. You might use the existing site, a high-fidelity web-based prototype, or even a low-fidelity paper prototype. You can ask the user to only browse or only search.

Whenever possible, include a range of audience types. It's particularly important to mix people who are familiar and unfamiliar with the web site; experts and novices typically demonstrate very different behavior. Another important element is choosing the right tasks. These need to be clearly defined by your research agenda. If you're in an exploratory phase, consider distributing your tasks along the following lines:


Easy to impossible

It's often good to begin with an easy task to make the user feel confident and comfortable. Later, include some difficult or impossible tasks to see how the site performs under duress.


Known-item to exhaustive

Ask users to find a specific answer or item (e.g., customer support phone number). Also, ask them to find everything they can on a particular topic.


Topic to task

Ask some topical or subject-oriented questions (e.g., find something on microelectronics). Also, give them some tasks to complete (e.g., purchase a cell phone).


Artificial to real

Although most of your tasks will be artificial, try to build in some realistic scenarios. Rather than saying "find printer X," provide a problem statement. For example, "You're starting a home business and have decided to purchase a printer." Encourage the user to role-play. Perhaps she will visit other web sites, searching for third-party reviews of this printer. Maybe she'll decide to buy a fax machine and a copier as well.

As with content analysis, you'll also want to spread these tasks across multiple areas and levels of the web site.

User testing typically provides a rich data set for analysis. You'll learn a great deal just by watching and listening. Obvious metrics include "number of clicks" and "time to find." These can be useful in before-and-after comparisons, hopefully to show how much you improved the site in your latest redesign. You'll also want to track common mistakes that lead users down the wrong paths.
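A before-and-after comparison of these metrics can be computed very simply. The sketch below uses invented per-participant timings and click counts; remember the caveat about small samples, so treat numbers like these as directional rather than statistically significant.

```python
# Hypothetical before/after user-testing metrics for the same set of tasks.
# Times are in seconds; all values are invented for illustration.
before = {"time_to_find": [95, 140, 180, 75, 120], "clicks": [9, 14, 17, 7, 12]}
after = {"time_to_find": [40, 65, 80, 35, 55], "clicks": [4, 6, 7, 3, 5]}

def mean(xs):
    return sum(xs) / len(xs)

def improvement(before, after, metric):
    """Percentage reduction in the mean value of a metric after the redesign."""
    b, a = mean(before[metric]), mean(after[metric])
    return 100 * (b - a) / b

print(f"time to find: {improvement(before, after, 'time_to_find'):.0f}% reduction")
print(f"clicks:       {improvement(before, after, 'clicks'):.0f}% reduction")
```

With only five participants per round, pair these numbers with the qualitative observations (wrong paths taken, think-aloud comments) rather than presenting them alone.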

If you're a red-blooded information architect, you'll find these user tests highly energizing. There are few things more motivating to a user-sensitive professional than watching real people struggle and suffer with an existing site. You see the pain, you see what doesn't work, and you inevitably start creating all sorts of better solutions in your head. Don't ignore these great ideas. Don't convince yourself that creativity belongs only in the strategy phase. Strike while the iron's hot. Jot down the ideas during the research sessions, talk with your colleagues and clients between sessions, and expand on the ideas as soon as you get a spare minute. You'll find these notes and discussions hugely valuable as you move into the strategy phase.

Information Architecture for the World Wide Web: Designing Large-Scale Web Sites
ISBN: 0596527349
Year: 2006