Justification


Without justification, there is no incentive to integrate. Improving the quality of the user experience is treated as a long-term investment. It's easy to forsake long-term strategies in a world driven by financial quarters and shaky valuations, and when budget compromises have to be made, user research is rarely considered relevant to the current crisis. It gets lumped in with other goody-goody endeavors such as preserving rain forests and adopting stray cats.

User-centered development needs to be positioned as a solution that makes immediate financial and corporate sense. It needs to be shown to be a "must have," not a "nice to have." Doing that requires arguments and methods that justify its existence.

Reasons for User-Centered Processes

It's often necessary to keep arguing and debating throughout the introduction of user-centered methods into a development process. Since these are usually new ideas and new ways of working, they meet resistance and their benefits require clarification.

Though the actual emphasis will vary from situation to situation, several basic arguments have met with success in the past.

  • Efficiency. Products that people actually want don't have to be remade. Products that are designed around the way people work don't need to be changed. When there's a model of how people use a product, there will be fewer disagreements, less ambiguity, less development delay, and you'll know just where to add functionality (and where not to).

    All in all, it's a process that uses your company's resources more efficiently: it creates a clearer road map before launch, and after launch it reduces the load on servers and support staff.

  • Reputation. Users who have a positive user experience are more likely to keep using your product and to tell others to use your product. Products that match people's needs, desires, and abilities will create a level of satisfaction that goes beyond mere complacency and acquires an aura that extends beyond functionality. People associate themselves emotionally with the product, creating a bond that's much stronger than one that's merely based on rational functional trade-offs.

  • Competitive advantage. The more detailed the user model, the more easily it's possible to know which of their needs your product satisfies. This makes it possible to identify needs that are unfulfilled by the competition and drive innovation not based on technological capabilities (which may or may not have practical applications) but on real needs (which certainly do). Rather than reacting to user behavior, you can anticipate it and drive it.

  • Trust. When a product behaves according to people's expectations and abilities, they trust it more. Trustworthiness, in turn, leads to loyalty, satisfaction, and patience.

  • Profit. Ultimately, when a product costs less to make, costs less to maintain, attracts more customers, and provides better value to business partners (all things that are explicit goals of user experience research and design), it makes more money. This is discussed in greater detail in the "Pricing Usability" section.

These general improvements in turn imply specific short-term advantages that directly affect development.

  • The number of prelaunch redesigns is reduced.

  • There are fewer arguments between departments and between individual members of the development team.

  • Schedules are more likely to be met because surprises are reduced and development time can be more accurately predicted.

  • It's easier to share a vision of the product with the developers and the company as a whole.

  • It is easier to communicate product advantages to the end users.

  • The customer service load is reduced since fewer questions have to be answered.

  • Equipment load is more predictable because feature popularity can be estimated.

  • Quality assurance needs are reduced since typical usage patterns determine where to focus attention.

All these things can come together to save the company money, extend the value of its brand, and make it more responsive to market conditions. They are all excellent reasons to warrant creating a user research pilot program, and provide an intermediate step between the CEO's proclamation that the company is "Dedicated to customer service!" and a rigorous, comprehensive program of user-centered development.

Warning

Many important aspects of user experience cannot easily be measured. It's tempting to quantify the effects of good user experience design with lots of metrics, but attaching numbers to the effects of design can cloud understanding: the numbers appear to be objective measurements while revealing little about the phenomena they're supposed to capture. Satisfaction and trust, for example, can only be hinted at in surveys, yet survey numbers are often presented as precise "scientific" measurements of them. Repeat visits, which are measurable in log files, are only one part of loyalty, though they're often used to stand in for it.

Measuring Effectiveness

Eventually, if the arguments are met favorably, these methods will begin to be integrated into the development process. Ideally, the results are obvious, and all the products and processes are so improved that it's unnecessary to convince anyone of the effectiveness of these ideas. Otherwise, it's necessary to be able to demonstrate the effectiveness of the processes. Even when change is obvious, some measurement is always required to determine how much change has occurred.

That's where metrics come in. Metrics abstract aspects of the user experience in a way that can be measured and compared. In an environment of speculation and opinion, they can clarify gray areas and verify hypotheses. They can play an important role in understanding the magnitude of problems and evaluating the effectiveness of solutions.

Choosing Metrics

It's like a finger pointing away to the moon. Don't concentrate on the finger, or you will miss all the heavenly glory.

—Bruce Lee, Enter the Dragon

A metric is something measurable that (it is hoped) is related to the phenomenon being studied. Don't focus all your energy on measuring and affecting the metric while ignoring what it represents. As they say, the map is not the territory.

Metrics begin with goals. The ultimate goal is to evaluate the quality of the user experience that the product is providing. A secondary goal is to understand how much of a change the new techniques are making. They should be based on the same goals that were defined when writing the research plan. Goals are rooted in problems with the product, and defining metrics is part of the process of understanding where the experience is failing. Just as some goals affect the end user's experience while others address the company's interests, certain metrics will be more important when tracking the effectiveness of changes, while others will convince stakeholders.

For example, one of the questions asked about shopping cart use in Chapter 6 is "What is the ratio of people who abandon the shopping cart to those who complete a transaction?" This is a metric. Although it could be measuring several things (how fickle people are, how much they're willing to experiment with the product, etc.), it primarily measures how frustrated they get.

start sidebar
SOME TYPICAL WEB SITE METRICS

  • The percentage of visits that end on a "leaf" page (a page with information)

  • The ratio of people who start a shopping cart to the number who reach the "thank you for shopping with us" page

  • The time spent on the site

  • The number of times the search feature is used in a single session

  • The number of times that top-level navigation pages are loaded (i.e., the number of times people back up)

  • The number of critical comments per day

  • The number of visits per month per visitor

end sidebar

All these, of course, have to be used with an understanding of the reasons for users' behavior. For example, an increase in the proportion of visits ending on leaf pages could represent more interest in the material or more difficulty in finding what people are looking for.
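For illustration, here is a minimal sketch of how a couple of the metrics above could be computed once server logs have been grouped into sessions. The session data and page names are hypothetical; they are not part of the book's example.

```python
# Minimal sketch: computing two of the sidebar metrics from sessionized data.
# Each session is the ordered list of pages one visitor viewed; all page
# names and data below are hypothetical.

sessions = [
    ["/", "/products/shoes", "/cart", "/checkout", "/thank-you"],
    ["/", "/search?q=jersey", "/products/jersey"],
    ["/", "/cart"],              # started a cart but never completed it
    ["/news/scores"],            # a single-page visit
]

cart_starts = sum(1 for s in sessions if "/cart" in s)
completions = sum(1 for s in sessions if "/thank-you" in s)
completion_ratio = completions / cart_starts if cart_starts else 0.0

avg_pages_per_session = sum(len(s) for s in sessions) / len(sessions)

print(f"cart starts: {cart_starts}, completions: {completions}, "
      f"ratio: {completion_ratio:.0%}")
print(f"average pages per session: {avg_pages_per_session:.2f}")
```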

A more systematic approach is to list the goals for the current project and cross them with general categories of the user experience, creating a metric at each intersection. This ensures that the major questions are examined from the important perspectives. So the Sport-i.com example given in Chapter 5 could look like this.

start sidebar
SOME SPORT-i.COM METRICS

Conversion of viewers to shoppers

  • Efficiency: the length of the clicktrace leading to a purchase
  • Effectiveness: the ratio of visitors to purchasers
  • Satisfaction: the ratio of positive comments to negative ones in surveys

Improved navigation

  • Efficiency: the frequency with which navigation pages are consulted
  • Effectiveness: the number of different pages consulted
  • Satisfaction: the ratio of positive comments to negative in surveys

Timeliness of information

  • Efficiency: the number of daily users
  • Effectiveness: the ratio of casual visitors to daily users
  • Satisfaction: the ratio of "timely" comments to "not timely" in surveys

end sidebar

As you can see, some of the metrics make a lot of sense in the context of the site and the data that can be collected, whereas others are more questionable. Regardless, doing the exercise with stakeholders forces everyone involved to think about the relationship of product goals to facets of the user experience, but that doesn't mean that you have to follow through and measure everything in the grid.

The exact metrics created will be an idiosyncratic mix of what needs to be tracked in order to understand the change's effectiveness and what's required to convince the company that the process is effective. Moreover, metrics don't have to be based on data collected in-house; it's possible to use external measurements. For example, because the price of Google's advertising keywords reflects how much demand there is for them, keyword prices can be used as a rough metric of brand name penetration.

Note

Metrics can have hidden problems. If page views represent success, but several pages are cut out of a typical task, then your metric may actually go down, even as your product becomes easier and more satisfying.

Collecting and Comparing Metrics

There are two primary ways of measuring performance with metrics, and both have been covered in other sections of this book. Clickstreams can reveal aggregate behavior among large numbers of current users. Obtaining the results of a new metric just means processing the log files a little differently.

Metrics collected through usability testing, however, are a different matter. Normally, the exact number of participants who react in a certain way doesn't matter in usability testing, which is concerned with broad trends. But when trying to answer questions about specific proportions ("the percentage of users who can find the new site map versus 47% before the redesign," for example), user tests have to be conducted differently.

Unless you interview hundreds of people, a user test is going to be statistically insignificant, and its results can't be projected to the entire population. They can, however, be compared to the results of other tests. Two carefully conducted user tests, one before a change is made and one after, can be compared quantitatively to see whether a change has occurred.

As with any experiment design, it's important to minimize the variables that change. Three main things can change between the tests.

  • The population. There will likely be a different set of people in the second test. They should be a comparable group, recruited in circumstances that closely match those of the first test, but they won't be identical (even if they were the same people, they would already have been exposed to the product once, so they wouldn't have the same reaction).

  • The test procedure. A change in the way that a prototype is presented creates a change in people's experience of it. Even small changes in wording or order can affect perceptions.

  • The prototype. Changes in the interface obviously change people's experience of the product. This is usually what's changed between comparative user tests.

Changing more than one of these factors usually introduces more change than can be effectively understood and managed in a test process, and renders any numbers collected highly dubious, if not meaningless. To maximize the validity of the process, it's also useful to test with more people than in qualitative usability testing. More people let you understand both the breadth of responses and their consistency. Jakob Nielsen recommends at least 20 people per test, as opposed to 6 to 8 for a test without a quantitative component, and that's probably a good base number.
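To make the before-and-after comparison concrete, here is a minimal sketch of one way to compare task completion rates from two such tests, assuming roughly the 20-participant samples mentioned above. The counts are hypothetical, and the pooled two-proportion z-test used here is only one reasonable approach; its normal approximation is rough at samples this small.

```python
import math

# Hypothetical counts: how many of 20 participants completed the task
# in the test before the redesign and in the test after it.
before_success, before_n = 9, 20
after_success, after_n = 16, 20

p_before = before_success / before_n
p_after = after_success / after_n

# Pooled two-proportion z-test (normal approximation; rough at n = 20).
p_pool = (before_success + after_success) / (before_n + after_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / before_n + 1 / after_n))
z = (p_after - p_before) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"completion rate: {p_before:.0%} before vs {p_after:.0%} after")
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```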

Compared to test management, measurements in user tests are pretty straightforward, if occasionally tedious. You count. Timing measurements used to understand the efficiency of an interface can be done with the tape counter on a video deck or a handheld stopwatch. Error counts ("how many times the back button was used") can be done with simple hash marks.

For log files or surveys, the procedures are identical to those given in Chapters 11 and 13. The metrics need to be defined ahead of time, and the exact date and time when the experience changed needs to be known. In the case of large products, where many changes can happen simultaneously, knowing what else changed at the same time helps identify whether observed changes are incidental or caused by the interface alterations under scrutiny. For example, if a front door is redesigned and launched with great fanfare, a measurement of traffic to the homepage may be as much a measure of the marketing as it is of usability. In such a situation, a measure of completed tasks may be a more accurate assessment of the effectiveness of changes to the interaction.

Pricing Usability

Using changes in metrics to calculate return on investment (ROI) is very convincing, but notoriously difficult. Many factors simultaneously affect the financial success of a product, and it's often impossible to tease out the effects of user experience changes from the accumulated effects of other changes.

Note

The ideas in this section are heavily influenced by Cost-Justifying Usability, edited by Randolph G. Bias and Deborah Mayhew, 1994, Academic Press. It is a much more thorough and subtle presentation of the issues involved in pricing user experience changes than can be included here.

That said, if you can make a solid case, bottom-line financial benefits are the most convincing argument for implementing and continuing a user-centered design process. On the Web, this is somewhat easier than for packaged software, where many changes may happen simultaneously with the release of a golden master. Ecommerce sites have it the easiest. They have metrics that can be quickly converted to revenue.

  • Visitor-to-buyer conversion directly measures how many visitors eventually purchase something (where "eventually" may mean within three months of their first visit, or some such time window).

  • Basket size is the average value of a single purchase.

  • Basket abandonment is a measure of how many people started the purchasing process and never completed it. Multiplied by basket size, this produces a measure of lost revenue.

Each of these measures is a reasonably straightforward way of showing that changes have happened in the site that make it more or less profitable.
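As a rough illustration of how these three measures turn into dollar figures, here is a minimal sketch; every count and value in it is hypothetical rather than taken from a real site.

```python
# Minimal sketch: turning the three e-commerce metrics into revenue figures.
# All counts and values below are hypothetical.

visitors = 40_000          # unique visitors in the measurement window
carts_started = 3_200      # visitors who began the purchasing process
orders_completed = 2_450   # visitors who reached the "thank you" page
basket_size = 62.40        # average value of a completed purchase, in dollars

conversion = orders_completed / visitors            # visitor-to-buyer conversion
abandonment = 1 - orders_completed / carts_started  # basket abandonment

# Abandoned baskets multiplied by the average basket size approximate lost revenue.
lost_revenue = (carts_started - orders_completed) * basket_size

print(f"conversion: {conversion:.2%}, abandonment: {abandonment:.1%}")
print(f"estimated lost revenue: ${lost_revenue:,.2f}")
```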

For other kinds of services, such as those whose function is to sell advertising or to disseminate information to employees, different measures need to be found. Since salaried staff answer customer support calls and email, reducing the number of support calls and emails can translate into direct savings through staff reduction. However, support is a relatively minor cost, and reducing it generally does little for the bottom line. What's important is to find ways of measuring increases in revenue that result from developing a product in a user-centered way.

For example, say a news site is redesigned to make it easier to find content. The average clickstream grows from 1.2 pages to 1.5 pages, which represents a 25% increase in page views, which translates into a proportional increase in advertising revenue. That seems pretty cut-and-dried, but say that a marketing campaign is launched at the same time. Both the usability and marketing groups can claim that their effort was responsible for the increased revenue. To justify the user experience perspective and separate it from marketing, the impact of usability can be estimated and ROI calculated. This creates a formula that can spark a discussion about the relative effects of advertising and usability, but at least the discussion can happen on a relatively even plane.

start sidebar

Recently our site underwent a redesign that resulted in increased page views and advertising revenue. This came at the same time as a marketing campaign encouraging people to visit the site.

Our analysis of the usage log shows that the average length of a session was 1.2 pages for the 8-week period before the redesign. This means that people were primarily looking at the front door, with roughly 20% of the people looking at two or more pages (very few people looked at more than four).

Usability testing showed that users had a lot of trouble finding content that was not on the front door. One of the goals of the redesign was to enable them to find such content more easily.

For the 4 weeks after the redesign, the average clickstream was 1.5 pages, a 25% increase in per-session pages and page views. The marketing campaign certainly contributed to this increase, but how much was due to increased usability of the site? If we suppose that 30% of the increase was due to the greater ease with which people could find content, this implies that a 7.5% increase in page views is a direct result of a more usable site.

Using the average number of monthly page views from the past year (1.5 million) and our standard "run of site" CPM of $10, this implies a monthly increase in revenue of $1125. If the marketing efforts were responsible for getting people to the site, but the new design was responsible for all the additional page views, the added accessibility would have been responsible for $3750 of the increase in monthly revenue.

However, deep use of the site is different from just visiting the front door and has an additional effect on revenue. The CPM for subsections is $16. At 30% effectiveness, this would imply an increase of $1800 in revenue per month, $21,600 per year, or roughly 10% of all revenue.

Our costs consisted of 80 hours of personnel time at approximately $50 per hour, or $4000 plus $1000 in incentive and equipment fees. Thus, the total cost to the company was approximately $5000. When compared to the annual return, this represents a 270% to 330% return on investment at the end of the first year.

end sidebar
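The arithmetic in the sidebar can be reproduced with a short script like the sketch below, which simply plugs in the figures quoted above (a 25% page view increase, a 30% usability share, 1.5 million monthly page views, $10 and $16 CPMs, and $5000 in research costs). The exact ROI percentage reported depends on which CPM assumption and which definition of return (gross versus net of costs) is used.

```python
# Rough reproduction of the sidebar's arithmetic; the inputs are the figures
# quoted in the example, not measurements.

baseline_monthly_views = 1_500_000   # average monthly page views over the past year
page_view_increase = 0.25            # 1.2 -> 1.5 pages per session
usability_share = 0.30               # portion of the increase credited to usability
research_cost = 5_000                # 80 hours at $50/hour plus $1000 in fees

extra_views = baseline_monthly_views * page_view_increase * usability_share
# 112,500 additional monthly page views attributed to the more usable design

for label, cpm in [("run of site", 10), ("subsection", 16)]:
    monthly_revenue = extra_views * cpm / 1000
    annual_revenue = 12 * monthly_revenue
    net_roi = (annual_revenue - research_cost) / research_cost
    print(f"{label}: ${monthly_revenue:,.0f}/month, ${annual_revenue:,.0f}/year, "
          f"net ROI {net_roi:.0%}")
```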

In some cases, the ROI may be purely internal. If good user research streamlines the development process by reducing the need for postlaunch revisions, it could be creating considerable savings for the company that are impossible to measure in terms of direct revenue. Comparing the cost of revisions or delays in a development cycle that used user-centered techniques to one that did not could yield a useful measurement of "internal ROI."



