All of our findings and guidelines are based on empirical evidence from two sources. First, we rely on our testing of 716 Web sites with 2,163 users around the world. Most of this research was conducted in the United States, but we also ran sessions in Australia, Belgium, Canada, Denmark, Finland, France, Germany, Israel, Italy, Japan, Hong Kong, Korea, Switzerland, and the United Kingdom. Many of these studies were done for our consulting clients, so the details are confidential. But this vast research also provides general insights into user behavior, particularly when we observe the same findings in highly diverse industries.
Other studies were conducted in the process of writing research reports about special issues. Most of the guidelines from these studies are only important if you are working on the exact problem we researched, but we have also abstracted valuable general insights from the thousands of specific observations in these studies. Thus most of what we say here is based on general lessons from a huge number of Web sites and users, whether they were tested for proprietary projects or for our own studies.
Our second source of information was a special study that we conducted for this book. When we talk about "the study" here, we're referring to this smaller set of data.
How We Did the Book Study
We tested 69 users, 57 in the United States and 12 in the United Kingdom, for this book. Slightly less than half (32) were male and slightly more than half (37) were female, in an even distribution of ages from 20 to 60. Each was paid $100 for participating. We didn't test teenagers or senior citizens for this book, though we occasionally offer insights on these special groups, based on the separate studies we have conducted with them.
The users had a broad range of job backgrounds and Web experience. We screened out anybody working in technology, marketing, Web design, or usability because they rarely represent mainstream users. People who work in the biz know too much and have difficulty engaging with a design as regular users. Instead, they tend to criticize the design based on their personal design philosophy, which is invalid as usage data. In fact, whenever you hear a user throw around insider terminology like "information architecture" in testing, you probably have to discard most of what they say.
All the users had at least one year's experience using the Web. We almost never test people who are completely new to the Web because all we would find is that the Web in general and browsers in particular are difficult user interfaces that take some time to learn. We wouldn't learn much that would help us design better Web sites because completely new users wouldn't get very far into the sites.
The experience of brand-new users is generally not that important for Web sites anyway because new users rarely visit any given site on their first Internet expedition. Of course, this is not true if you are Yahoo!, AOL, MSN, or a similar Internet service provider that makes its site the default homepage for its customers. However, we are not writing for these few exceptional sites. We are writing for mainstream corporate, e-commerce, news, and government sites, and others that are not on the Web's Top Ten most-visited list. It's actually good news if you're not on the Top Ten list: By the time users visit your site, they will already have learned the basics of how to use the Web elsewhere.
The standard rule for user testing is to employ the equipment that most users are likely to have. For this study, we tested on a Windows machine, running the latest version of Internet Explorer. The monitor had a screen resolution of 1024 by 768 pixels. For Internet connectivity, we used a broadband connection: Depending on the test location, it ranged from 1 to 3 megabits per second (Mbps).
The "Thinking Aloud" Method
In our studies, users are tested one at a time so that they don't bias each other. In each session, the test user sits at a computer while the test facilitator (and sometimes one or two additional observers) sits nearby. If there are many observers, it's better to test in a usability laboratory with two rooms separated by a one-way mirror so that the observers are hidden from view. But with a small number of observers, it works equally well to have them sit behind the user so that they are out of sight and therefore out of mind.
The book study was conducted with the "thinking aloud" method, which is our preferred approach for almost all usability tests. In this method, users are asked to think out loud as they work with the interface. Hearing a user's "thoughts" allows us to understand why they do what they do, and this information is invaluable. It's nice to know that users, say, clicked the wrong button and couldn't check out of an e-commerce site. But if you want to improve the checkout process and thus increase sales, you need to know why people click the wrong buttons.
We also made two video recordings of each test session: one of the computer monitor and one of the user's face and upper body. These recordings include a sound track with the user's comments. For most usability projects, you don't really need to review the recordings because the main design problems are obvious after the test sessions. For practicality, it is usually best to quickly fix the problems and increase a company's business as soon as possible. But for a research project like ours, it's good to be able to review them and make sure that we have an accurate record of everything the user did.
For part one of the book study, we systematically tested 25 Web sites that cover a range of genres, from industries like automobiles and financial services to entertainment sites and intellectually oriented medical and cultural sites.
You can see from their homepages that the sites also exhibited a wide variety of design styles, from somewhat primitive to overly glamorous. All in all, our basic goal was to test a good cross-section of current Web sites.
We didn't pick any site because we wanted to dig into that company's Web strategy. Similarly, our comments on these sites should not be construed as criticism of the companies or teams behind the sites. There are many reasons why a Web site might have a bad design, beginning with a lack of resources. For the purposes of this book, we don't really care why. No matter how understandable the reasons for a mistake, it's still a mistake that our readers should be warned against committing on their Web sites, and that's why we present it here.
Homepages for the 25 sites that were tested systematically in the usability study we conducted for this book. The homepages are shown as they would initially appear in a browser window on a 1024-by-768 monitor.
It should be noted that the book study was not funded by any of the organizations whose sites we tested. We covered all costs ourselves so that we could be free to speak the truth in reporting on them.
This part of the study was a scavenger hunt of sorts. We gave each user three or four specific tasks to do on each site. While these tasks would not uncover every last usability problem on a site (bigger sites especially offer much more functionality than a few tasks can cover), they were enough for our purposes: to assess how well sites support the most typical goals users have when visiting them.
Some of the tasks we asked users to perform were:
All of these tasks were eminently feasible to do on the Web sites in question. We almost never ask users to "do the impossible" on a particular site. We observe plenty of difficulties just from watching them try to do the tasks that a site is supposed to support, so that's all we test.
In the site-specific testing, users were told where to go and were expected to stay there while performing their tasks. This is the way most usability studies are conducted, and it is great if you want to find out how elements of a particular site's design work. Of course, that's not the way users work in real life. People have the entire Internet at their fingertips, and they'll often jump from one Web site to another to complete a task.
For this reason, in part two of our testing we gave users a range of tasks and told them to go anywhere they pleased. We call these "Web-wide tasks" because users had the entire Web to choose from. These tasks represent a wide range of activities, from highly commercial pursuits to curiosity-based inquiries, and all can realistically be done on the Web today.
The main downside of this approach is that users go to different sites even when working on the same problem, so we didn't get to systematically test those sites with a range of users. The upside is that we get to see how people construct their solutions across multiple sites, as they do when they are not in the lab.
For this section of the testing, we gave each user one of 12 tasks:
What if a Site Has Changed?
We include many screen shots in this book because it's much easier to absorb abstract principles when they are illustrated with specific examples. During the year we spent writing the book, however, several of the sites that appear in screen shots here have been redesigned. In some cases, the companies conducted their own usability studies. In others, site representatives attended seminars in which we show video clips of user behavior from recent studies. Thus, even though they didn't know about this book, they still saw previews of our tests of their sites, in effect getting a free usability study!
In either case, this means that if you check out a site after seeing it in this book, it is very likely to look different. Does this make the analysis in this book less relevant to your project? Not at all. The principles and guidelines that a screen shot illustrates are relevant long after a site has changed. In fact, many findings from our user testing in 1994 continue to be seen in studies in 2006 and will probably be found again by unlucky testers in 2020 and beyond.
An example of this is the Web site of the British Broadcasting Corporation (BBC). Jakob and Marie Tahir happened to use bbc.co.uk in Homepage Usability: 50 Websites Deconstructed to illustrate the issue of easy access to archives of previously featured homepage stories. The lack of archives was one of the BBC site's usability problems in 2001 and it continued to be a problem in 2005, as the screen shot shows.
The day before this screen shot was taken, the BBC site featured a great recipe for roast goose with sage and onion stuffing and applesauce. How can you find this recipe? An experienced user might be able to conjure up the correct page through a search, but the average user would be at a complete loss.
As the second screen shot shows, the BBC was working on an archive in late 2005. If you visit the site now, the homepage may well feature the archive. If so, the BBC will finally have done what we suggested in 2001. Does that change the importance of the comments in Homepage Usability? It doesn't, because our guideline to provide an archive of homepage feature stories continues to be relevant for millions of other Web sites. Only 41 percent of corporate homepages follow this guideline, so 59 percent could improve their usability today if they paid attention to the mistake the BBC made in 2001.
Beta release of BBC's service to archive all stories that have run on the homepage so that users can find them later: Let's say that you remember seeing a mouthwatering photo of Rick Stein's Christmas goose when you visited the site yesterday, but you didn't have time to read the recipe. Returning a day later, you discover that the BBC has moved on to a story about Franz Ferdinand's favorite music (see previous screen shot). Even if you like this Scottish rock band, they are not going to help you cook your goose. If you happen to know that the URL for BBC's beta-release archive is www.bbc.co.uk/homearchive, you can easily find the story by scrolling down the list of yesterday's features until you recognize the photo. Hopefully, by the time you read this, the BBC will have promoted the homepage archive from a beta test to a regular feature with a link on the homepage itself.