The Report


The written report is the primary deliverable of most user experience research. Its structure is the basis for how the presentation will flow and what other material, such as video, is necessary.

Pick a Format and Organize

Before the report is written, the report format should be discussed with its audience. When time is pressing, it's often fine to deliver the report in email. Other situations, such as a presentation to an executive board, may require a more formal paper report with illustrations and a fancy cover. Still others may be best served by HTML reports where observations are linked directly to problem areas. Show a sample report to its intended recipient and see if the format meets his or her needs.
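
As a rough illustration of the HTML option, a linked report can be generated from a list of findings. The following is a minimal Python sketch of one way to do it; the anchor names and output file are illustrative, and the findings paraphrase ones from the example report later in this chapter.

```python
# A sketch of an HTML report whose table of contents links each finding
# to an anchored problem-area section. The anchor ids and file name are
# illustrative; the findings paraphrase ones from the example report below.

from html import escape

findings = [
    ("Search page is hard to find", "search-page"),
    ("Left-hand front-door features go unnoticed", "front-door"),
]

details = {
    "search-page": "No one found the search page quickly; several said there wasn't one.",
    "front-door": "Attention focused on the main menu, so left-margin features went unread.",
}

# Table of contents: each finding links to its problem-area section.
toc = "\n".join(
    f'<li><a href="#{anchor}">{escape(title)}</a></li>' for title, anchor in findings
)
# Body: one anchored section per finding.
body = "\n".join(
    f'<h2 id="{anchor}">{escape(title)}</h2>\n<p>{escape(details[anchor])}</p>'
    for title, anchor in findings
)

with open("report.html", "w") as f:
    f.write(f"<html><body>\n<ul>\n{toc}\n</ul>\n{body}\n</body></html>")
```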

Once a general format has been decided upon, arrange the findings in the most effective way for the audience. Secondary findings are clustered with primary results and ordered to support "big" ideas. Your results can be prioritized according to the three classic priority levels. "Nice-to-know" information is included only in the most complete version of the report, "should know" is in the general report, while "must know" is the kind of stuff that's put in email to the project lead when time is critical. Once prioritized, quotations are pulled from transcripts to support or elaborate on the findings.
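
To illustrate the triage, the following Python sketch filters a single prioritized list of findings into the three report versions just described. The findings and priority labels are hypothetical stand-ins for real results.

```python
# A sketch of the three-level triage described above. The findings and
# their priority labels are hypothetical stand-ins for real results.

findings = [
    {"text": "Search is effectively invisible", "priority": "must"},
    {"text": "Left-margin features go unnoticed", "priority": "should"},
    {"text": "Two users enjoyed the card preview animation", "priority": "nice"},
]

# Each report version includes its own level plus everything above it.
versions = {
    "email to the project lead": {"must"},
    "general report": {"must", "should"},
    "complete report": {"must", "should", "nice"},
}

for name, levels in versions.items():
    print(f"{name}:")
    for finding in findings:
        if finding["priority"] in levels:
            print(f'  - {finding["text"]}')
```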

start sidebar
Newspaper Style

Regardless of the format, all reports should be structured like newspaper stories. They're written with the assumption that some people will only have time to read the first couple of paragraphs, some will read a page or two, some will skim the whole thing, and some will closely read every word. Each of these audiences needs to be satisfied by the report's contents.

In classic newspaper style, the first sentence tells the most important fact ("The stock market rose dramatically today in spite of signs of weak consumer confidence," for example). The first paragraph tells the basic facts, the next couple of paragraphs elaborate on several key elements mentioned in the first paragraph, and the rest of the story provides background, finishing with a historical summary of the story.

Thus, when writing a report, never "save the best for last." Save the least important for last.

end sidebar

Example Report: A Usability Test for an Online Greeting Card Company

start sidebar
Executive Summary

Six Webcard users with a range of experience were invited to look at the existing Webcard interface and a prototype of the Gift Bucks interface. In general, people found Webcard to be easy to navigate and to contain many desirable features. The Gift Bucks interface was quickly understood, and the feature was exciting to most of the participants. Most of the problems with the site had to do with feature emphasis and information organization rather than navigation or functionality.

end sidebar

In this case, the most important observation is that the site interaction was basically good. People could use it and found the features interesting. It's followed by a short summary of the problem findings. If someone reads only this paragraph, he or she will have an idea of the most important discovery. This is followed by a short, clear explanation of each of the general problems. Since the goal of the project was to uncover deficiencies in the interaction, rather than the reasons people like it, the problems with the product are elaborated.

start sidebar
  • People had little interest in finding specific cards. They were interested in categories and were willing to change their minds to accept what was available. Even when they could not find a specific card, they still had a good browsing experience. This is good. Although the participants consistently failed to find specific cards because of the organization of the categories, they had little trouble navigating between the categories themselves. Finding specific cards was further hampered by the search interface, which was difficult to understand and, contrary to all the participants' expectations, was not a full-text search.

  • Several desirable front door elements were ignored because people's attention was not drawn to them.

  • Most people didn't know what My Webcard was, or what benefits it held for them, even though they expressed interest in some of the benefits when presented individually.

  • The Gift Bucks interface was quickly understood and seen as straightforward although people wanted more information about the process. They often didn't see the links that would have led them to some of that information.

Desirable additional features included the ability to synchronize Microsoft Outlook with the Webcard address book, the ability to enter arbitrary Gift Buck amounts, and more options to organize cards.

end sidebar

The executive summary tells the whole story in broad strokes. The next section sets expectations and provides the necessary background to understand the subsequent observations. If the audience is well versed in the technique, it may only be necessary to sketch it out in a couple of short sentences. This audience had not been exposed to the technique, so a more thorough description was appropriate.

start sidebar
Procedure

We invited six people with electronic greeting card experience to evaluate the Webcard interface and comment on a prototype of Gift Bucks. They were selected from Webcard's user lists based on their recent usage of the site (they had to have used it in the past month), their online shopping activity (they had to have bought a present in the last two months), and their availability to come into the Webcard offices during working hours on January 20 and 21.

Each 90-minute interview began with a series of questions about the evaluators' Web usage, their experiences with online shopping, and their experiences with online greeting services. The moderator then showed them the current Webcard site and asked for their immediate impressions as they moved through it. After looking at it for a few minutes, the moderator asked them to search for an Easter greeting card for a friend. After several minutes of searching, the moderator asked them to return to the main page and go through the interface thoroughly, discussing every element on the front door and most of the catalog and personalization page elements. Their next task was to find a card with a picture of San Francisco. After spending several minutes on this task, they were shown a prototype of the card personalization interface with Gift Bucks attached and asked to discuss the new interface elements found therein.

The moderator concluded the interviews with an exercise to help the evaluators summarize their views of the product and brainstorm on additional features.

Throughout the process, they were asked to narrate their thoughts aloud and were occasionally prompted to elaborate on specific actions or comments. In addition, they were prompted to discuss feature desirability and additional functionality when appropriate.

All the interviews were videotaped. The tapes were examined in detail for trends in participants' behaviors, beliefs, and statements. Partial transcripts were made of interesting or illustrative quotations. The observations were then organized and grouped. These groups provided the material from which more general trends were distilled.

end sidebar

This type of explanation may be useful in any case since the specifics may vary from one project to another, but it'll be most useful when the report audience needs to understand how the results were obtained.

Next, a weakness in the process was called out.

start sidebar
Evaluator Profiles
Note

Because of the short recruitment time and the requirements for a weekday downtown San Francisco visit, the evaluator pool is biased toward professionals working in downtown businesses, with several in ecommerce-related positions.

end sidebar

Describing participants isn't critical, but it's effective for reinforcing the reality of the research to people who did not observe it, and it provides context for interpreting the participants' statements. Although everyone knows that real people were interviewed, details of the interviews bring home the participants' perspectives, their differences, and—most important—the realness of their existence. At minimum, there should be a table summarizing how the participants fit the recruiting profile, though a more extensive description, like the profile that follows, is even better.

start sidebar
Leah

Works in the marketing department of an ecommerce art site. Spends 10 or more hours on the Web per week, half of which is for personal surfing. Spends most of her online time investing, shopping, and finding out about events. Buys something online once a month, more often during holidays. Has bought wine, books, CDs, "health and beauty aids," about half of which are gifts (especially the wine). Doesn't often buy gift certificates. Likes electronic cards because they're spontaneous, they're good for casual occasions, and you can send one to a number of people at the same time. Would have balked at sending sympathy cards electronically, but recently got one that changed her mind. Sends electronic cards more based on the recipient than the sentiment: some people aren't online or (she feels) wouldn't have a positive reaction to electronic cards. Has only used Webcard and sends at least one card a month.

Etc.

end sidebar

Warning

To protect your participants' privacy and confidentiality, be extremely careful when revealing any personal information—especially names—in a report. When recruiting people from the general public who will likely never participate in research about this product again, it's usually OK to use their first names in the report. However, when reporting on people who are in-house (or who are friends and family of people in-house), it's often safest to remove all identifying information. Rename participants as "U1," "U2," and so forth.

Likewise, be wary when creating highlight tapes with people who may be known to the viewers. If someone is likely to be recognized, they should generally not be shown, and a transcript of their comments should be used instead.

Once the context has been established, the results are presented as directly and succinctly as possible, avoiding jargon.

The first two entries in this report describe user behavior that affects the product on a deep level and across a number of features. When the data support it, it's appropriate to generalize underlying causes, but overgeneralization or unsupported conclusions should be avoided.

start sidebar
Observations
Note

These observations are based on a videotape review of the interviews. They are organized by topic and importance. Severity should not be inferred from the length of an entry; some things just take more words to explain than others.

General

  1. People don't appear to look for specific cards or show any attachment to specific cards. They browse for cards in a category and either find something that adequately matches what they're looking for or change their strategy and look for something else. Their criteria are loose and often defined by what they're not looking for. When initial directed attempts fail, the evaluators start looking at all the cards in a given category to see what grabs them. When nothing fulfills their basic level of acceptability, they tend to adjust their expectations until something does. There seems to be some amount of acceptance for sending somewhat inappropriate cards, as long as they're "close."

    "If I wanted to do a greeting and thought 'wouldn't that be cool if it was a San Francisco greeting?' and I didn't find it, I'd come up with some other idea."

    "I usually don't go past page 1. I don't want to spend a lot of time looking. I find something I like, click on it, send it, I'm done."

    "If I didn't find what I was looking for, I'd probably change my mind about what I wanted to send her."

  2. The participants spent no more than ten minutes looking for a card, averaging about five minutes per search. During that time, most changed their strategies repeatedly, adjusting what they wanted to the available inventory.

end sidebar

Note

Attaching severity ratings to observations ("1 means it's a showstopper, 2 means that it greatly impacts the user experience," etc.) is common practice, but I prefer to let the audience decide the importance of observations. What may be a blip in the user experience can have a deep impact on the product and company. People scrolling a banner ad off the top of the page is a minor event in their experience of the product, but not if the ad represents the product's entire revenue stream and raising clicks on it by 0.5% represents a 20% increase in revenue (see the worked arithmetic after this note). It's possible that only people intimately familiar with the company's business model will appreciate the importance of observing that users behave this way, and it's presumptuous for an outside consultant to assume he or she knows better.

It's appropriate to describe the severity of problems, but don't equate severity with priority. Some problems may be quite severe yet play only a small role in the user experience if the features they affect are used infrequently. This should be taken into account when describing a problem.
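
To spell out the arithmetic in that banner-ad example: if revenue is proportional to ad clicks, the quoted figures imply a baseline click-through rate of 2.5 percent, since a half-point absolute gain is then a 20 percent relative increase. The baseline is inferred from the example's own numbers, not taken from any research.

```python
# Worked arithmetic for the banner-ad example in the note above. The 2.5%
# baseline click-through rate is not from any research; it's the value
# implied by the example's numbers if revenue is proportional to clicks.

baseline_ctr = 0.025   # assumed baseline click-through rate
absolute_gain = 0.005  # "raising clicks by 0.5%": half a percentage point

relative_increase = absolute_gain / baseline_ctr  # 0.005 / 0.025 = 0.20
print(f"Relative increase in clicks, and thus revenue: {relative_increase:.0%}")
# Prints: Relative increase in clicks, and thus revenue: 20%
```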

start sidebar
  3. No one found the search page in under a minute, by which point most would have given up looking for it. When asked whether there was a search feature, several people said that there wasn't one. When pressed, everyone scanned the entire interface, examining every element until they found it. Additionally, despite the participants' extensive Webcard experience, no one appeared to have ever used it. However, as described in observation 1, none of the participants expressed much interest in finding specific cards, and all were happy simply browsing through the categories.

    "If there is a search page, I don't know where it is."

    "If there was a search box, that's how I would find that card."

end sidebar

Figure 17.1: The Webcard homepage.

For the following observation, there are some obvious solutions that can be recommended. However, whether to suggest solutions alongside problems depends on the composition of the audience and the expertise of the analyst. This report's audience is the product's interaction designers and production staff, who know more about the task domain and the design constraints than I do. Since they're responsible for creating appropriate solutions, I chose to describe problems thoroughly and provide recommendations where warranted, but to leave the design of solutions to the design staff.

start sidebar
  4. Three of the evaluators said that some of the smaller text was too small. Specifically, the text in the expanded subcategories on the catalog page and the informational links on the Gift Bucks pages were said to be too small (and, in the case of the Gift Bucks links, often ignored when people were looking for assistance).

  5. Several of the people did not find the "home" link at the top of the page when asked to "go to the front door of the site" and preferred to navigate using the back arrows on their browser.

  6. When asked to "go to the front door of the site" without using the back button, more people chose to click on the "home" link rather than the logo to go to the homepage.

end sidebar

Sometimes it's as important to note what expected events do not occur as it is to note unexpected ones that do.

start sidebar
  7. Most of the left-hand side of the front door was ignored. After looking at the features in the top left-hand corner, most evaluators' attention quickly focused on the main menu, causing the features along the left-hand margin to go unread, even if they had been noticed in the initial scan of the page. The lower on the page the features were located, the less they were noticed, with the items near the bottom—"Make your own Valentine," "Sound Cards," and "What's new"—universally unnoticed and unread.

    "This is probably some of the most interesting stuff, but I didn't see it." [links at the bottom of the left-hand side]

    "This is a great a idea and it's lost!" [Make your own Valentine]

end sidebar

Once all the observations have been made, the report should wrap up with a conclusion and useful background information. The conclusion is a good place to discuss the larger issues raised by the research and to recommend broad-based solutions. It should help the audience pull back from the details and understand the product and research process as a whole.

start sidebar
Conclusion

The Webcard service is considered useful and generally well done by its audience. Although people's inability to find specific cards was ameliorated by their light attachment to any given card, this does not eliminate the problems with the search functionality and the information structure. As people's performance on the task of finding a San Francisco-themed card shows, whole sections of the site are inaccessible because the information structure doesn't match people's expectations. We recommend researching users' expectations and "natural" organizational schemes with card sorting and observational methods, and simplifying the search process with a full-text search interface and carefully chosen keywords.

The Gift Bucks addition was easy to use, but the actual functionality received a lukewarm response. We hypothesize that the denominations offered are too high. To the users, a Webcard is a quick, free way to express a lightly felt sentiment. A $25 or $50 gift is not seen in the same light and would likely not occur to most users. In fact, the larger denominations may put people off entirely. We recommend reducing the Gift Bucks denominations to $10 maximum.

Etc.

end sidebar

Any additional information should be included in support of the main points. It's tempting to include every scrap of information that you've collected just in case it'll be of use to the readers, but don't inundate the audience with information.

start sidebar
Interesting Quotations

(full transcripts available upon request)

Leah

"I would have said that I would not have sent a sympathy message [with Webcard], but recently someone sent something to me and it was a beautiful greeting, so now I'm more open to that."

"It's not occasion based, it's more recipient based. There are people who are really open to electronic communication, and other people who are not."

"I already have a place I buy CDs online, I already have a place I buy books online. I don't need another place to buy those. If I was going to buy a gift on Webcard, I would want it to be something else that I might not already have a place to buy online."

Etc.

end sidebar

Once the report has been written, it should be tested. Like any product, it needs to be checked to see if it fulfills the needs of its audience. Maybe the development staff wants screen shots with problem areas explicitly marked. Maybe they would like the problems broken out by department or by user market. Sounding out the audience before the report is complete can verify that it's useful to them.

A couple of people from the report's audience should be shown a "beta" version of the report. A 10–15 minute interview after they've had a day or two to look at the report reveals a lot about how well it addresses the audience's needs. Does it cover the topics that they had expected? Does it prioritize the results appropriately? Do they expect other staff members to have any issues with it?



