People hate to wait. You're the fourth person in a six-person line at the supermarket. You spot a clerk moving toward the closed register in the next lane. Is she going to open it? If you bail out too early and she's just looking for bags, it's the back of the line for you. Wait too long and the clerk could call over the next person in line. What do you do? On the Internet, this kind of choice is simple. If the page you're waiting for takes more than a few seconds to open, you just bail out to another site. No bodies to jostle, no icy stares from the slower crowd. Just exercise your freedom of choice with a twitch of a finger. To hell with the owners of the slower site you just left. Survival of the fittest, right? It's all rosy, unless, of course, you happen to be the owner of that slower site and it's a part of your business. In that case, it's a good thing you have this book. In survey after survey, the most common complaint of Internet users is lack of speed. After waiting past a certain "attention threshold," users bail out to look for a faster site. Exactly where that threshold lies depends on many factors. How compelling is the experience? Is there effective feedback? This chapter explores the psychology of delay to discover why we are so impatient, and how fast is fast enough.
The study of this psychology is called Human-Computer Interaction (HCI). This chapter focuses on the speed aspects of HCI: How does delay affect user satisfaction? Why do we become so frustrated when we have to wait? It distills the research into understandable language and web page design guidelines.
With the rapid expansion of the web and increasing bandwidth, you would think that the problem of slow system response would have gone away. As you learned in the Introduction, the opposite is true: Consumer sites are actually becoming slower. [1] In fact, Zona estimates that over $25 billion in potential sales is lost online due to web performance issues. HCI research is just as relevant today as it was a decade ago.
Speed: A Key Component of Usability
Speed is a key component of usability, which helps determine system acceptability. [2] How acceptable a system is determines its adoption rate. With over half of the IT projects deployed in the U.S. abandoned or underutilized, [3] it is important to make systems and sites (many of which are big IT projects themselves) that people actually use.
Shackel's Acceptability Paradigm
Part of our psyche, it seems, is devoted to understanding whether a particular system will have a big enough payoff to warrant the necessary expenditure of our time and energy. Brian Shackel characterized this paradigm as "system acceptability," which is a tradeoff between three dimensions: utility (does the system do what is needed?), usability (how easily can users accomplish their tasks?), and likability (how pleasant is the system to use?).
All of these factors are weighed against each other and the cost of using the system (see Figure 1.1). Seen through Shackel's lens, when users make decisions about using a web site, they weigh how useful it will be, its perceived ease of use, its suitability to the task, and how much it will cost them both financially and socially. That's why we are sometimes willing to put up with difficult sites if the reward for doing so is large enough.
Figure 1.1. Shackel's Acceptability Paradigm.
Traditionally, HCI research has focused on the quantification of Shackel's second dimension, usability. There is compelling evidence, however, that the utility of a technology should first be measured before any usability analysis occurs. [4], [5] If you can't accomplish a task, it doesn't matter how easy the system is to use. Likability, Shackel's third dimension of acceptability, is most closely associated with "flow," [6] or emotional appeal.
User Experience and Usability
The relative importance of usability changes over time. At first, usability has a strong effect on system use. As users gain more experience, they become more confident and believe they can accomplish more tasks with a desired level of performance (also known as self-efficacy [7]). As a result, ease of use fades in importance, while utility and likability increase in relative importance. Usability then indirectly influences usage through utility (usability -> utility -> usage).
Designers tend to favor ease of use over utility. Davis found, however, that utility has far more influence on usage than usability: "No amount of ease of use can compensate for a system that does not perform a useful function." [8]
Speed plays a key role in all of these dimensions, especially usability and likability, so it is an important determinant of system acceptability and usage. In other words, how responsive your site is will in large part determine its adoption rate, which in turn affects your bottom line.
A Brief History of Web Performance
Soon after the birth of the web, HCI researchers started studying online environments. Networked environments like the Internet add another dimension to the mix: network latency. Unlike the closed computing environments that HCI researchers studied in the past, on the Internet the delay between requesting a resource and receiving it is unpredictable. The more resources a page has (graphics, multimedia), the less predictable the response rate. Initially, researchers studied the effects of fixed response times on user satisfaction. Later studies simulated variable response rates for more real-world results. Their metrics changed from user satisfaction and performance to measures such as attunability, quality of service, quality of experience, and credibility. In the late 1990s and early 2000s, researchers started looking at Shackel's likability dimension by studying the effects of download delays on user perceptions of web sites, flow states, [9] and emotional appeal.
Users form negative impressions from web site delays. Users perceive fast-loading pages to be of high quality, while they perceive slow-loading pages to be of low quality and untrustworthy. A user's tolerance for delay also decreases with experience. These topics are covered in more depth later in this chapter. In fact, slow-loading web pages can cause users to believe that an error has occurred, because the computer has not responded in an appropriate amount of time. [10]
Affective Computing
Some researchers theorize that if a computer could respond "supportively" to delay-induced frustration, any negative emotional effects could be mitigated. According to researchers who have studied "affective computing," computers can respond to human emotions in order to lower frustration levels. [12]
Using galvanic skin response and blood volume pressure, Scheirer found that random delays can be a cause of frustration with computers. [13] Rather than ignoring their frustration (the most common condition) or letting them vent, a supportive approach gave users the most relief from frustration. [14] Perhaps we'll soon hear something like: "I'm sorry I'm so slow, Dave. Would you like me to speed up this web site?"
User-Rated Quality Models
More recently, researchers have been attempting to create a grand unified theory of web site quality from a user's perspective. How do users rate web sites? Why do they return to particular web sites and buy products? WebQual, an overall measure of web site quality, is composed of twelve distinct measures derived from existing research. WebQual can accurately assess the overall perceived quality of web sites. Response time and emotional appeal both play a major role in perceived web site quality. [15]
Automated Quality Testing
WebTango researchers have developed an automated web site quality rating tool. [16] Their system, which is empirically based, automatically measures web site structure and composition in order to predict how experts will rate sites. Based on web designs judged by experts (Webby Awards), their 157-factor model, which includes page performance, had an average accuracy of 94 percent when quantifying good, average, and poor pages. However, some of the measures of good design are counterintuitive (for example, more Bobby accessibility errors; see http://bobby.watchfire.com/).
Essentially a mining tool, WebTango analyzes existing web pages to create profiles of good and bad designs, and then applies this data to the design of new sites. This interactive "quality checker" is analogous to a grammar checker for web sites (see Figure 1.2).
Figure 1.2. Web site structure: From information to experience design.
Response Time and User Satisfaction
Shneiderman posed the question best: "How long will users wait for the computer to respond before they become annoyed?" [19] Researchers say "it depends." The delay users will tolerate depends on the perceived complexity of the task, user expertise, and feedback. Variability also plays an important role in delay tolerance. Users can tolerate moderate levels of delay variability, up to plus or minus 50 percent of the mean.
A number of studies have attempted to quantify computer response times versus user satisfaction. Robert Miller found three threshold levels of human attention: [20]
- 0.1 second: the limit for a response to feel instantaneous
- 1 second: the limit for the user's flow of thought to remain uninterrupted
- 10 seconds: the limit for keeping the user's attention focused on the dialog
Miller proposed that the ideal response time is around two seconds. Shneiderman agreed with Miller that a two-second limit is appropriate for simple tasks, as long as the cost is not excessive. Shneiderman found that users "pick up the pace" of computer systems, that they were more productive at shorter response rates, and that they "consistently prefer the faster pace," below 1 to 2 seconds. Although users prefer shorter response times, the optimum system response time (SRT) depends on task complexity. Overly fast SRTs can cause input errors, while longer response times tax short-term memory and frustrate users. Users also want consistency in response times. Because surfing the web is mainly a low-complexity activity, users prefer faster response rates. Usage studies empirically confirm this need for speed; most pages are viewed for less than a second and few for more than 10 seconds. [21]
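These attention thresholds can be sketched as a simple response-time classifier. The cutoff values below are the commonly cited approximations (0.1, 1, and 10 seconds), not exact figures from Miller's study:

```python
def rate_response(seconds):
    """Classify a system response time against the three commonly
    cited attention thresholds (approximate cutoff values)."""
    if seconds <= 0.1:
        return "instantaneous"  # feels like direct manipulation
    if seconds <= 1.0:
        return "flow"           # flow of thought stays uninterrupted
    if seconds <= 10.0:
        return "attention"      # waiting is felt, but focus is kept
    return "bailout"            # attention lost; users start to leave

for t in (0.05, 0.5, 5, 15):
    print(f"{t}s -> {rate_response(t)}")
```

A page responding in 5 seconds, for example, keeps attention but breaks flow; past 10 seconds it risks losing the user entirely.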
An Interview with Ben Shneiderman, Ph.D.
I talked to Dr. Ben Shneiderman, one of the leading experts on HCI, to find out more about the relationship between speed and user satisfaction on the web.
Andy King: How does speed relate to usability and success on the web?
Ben Shneiderman: Usability plays a key role in web success stories: design, graphics, navigation, organization, consistency, etc. all play important roles. Speed is also vital; it's hard to get users to like a slow interface, and satisfaction grows with speed. Google is a good example of an excellent service that is even more valuable and appreciated because it is fast. Speed is the strongest correlate of user satisfaction.
King: Why do we prefer shorter response times?
Shneiderman: Lively interaction keeps the engagement high. For most people, wasted time, especially while just waiting for something to happen, is annoying.
King: What happens when we exceed our attention threshold (8 to 12 seconds)?
Shneiderman: Users not only grow frustrated, but they forget their next step, and have to reconstruct their intentions, often making mistakes that only exacerbate their frustration.
King: What do you think of the flow construct for user satisfaction on the web?
Shneiderman: Rapid movement through complex sequences of actions that move users toward a desired goal contributes to the flow experience. Users should be working at just the right level of challenge, accomplishing something they desire. There is a great thrill in finding what you want, and getting it rapidly so you can move on to the next step. [22]
Negative Impressions and Perceived Quality
The speed at which your pages display can affect user perceptions of the quality, reliability, and credibility of your web site. Ramsay, Barabesi, and Preece studied the effects of slow-loading pages on user perceptions of web sites. [24] Using delays of two seconds to two minutes (at intervals of 19.5 seconds), they asked users to rate pages on eight criteria, including "interesting content" and scannability. They found that pages with delays of 41 seconds or longer were perceived to be significantly less interesting and harder to scan. Note that the pages in this study loaded incrementally.
Perceived Usability
Jacko, Sears, and Borella studied the effects of network delay and type of document on perceived usability. They found that perceived usability of web sites was dependent on the length of delay and on the media used in web documents. When delays are short, users prefer documents that include graphics. When delays lengthen, however, users prefer text-only documents because graphics are viewed as contributing to the delay. As users become more experienced, their sensitivity to delay increases, increasing the need for "delay reduction mechanisms." [25]
Perceived Quality of Experience
Morris and Turner found that perceived quality of experience (Shackel's utility dimension) affects the adoption rate of IT. [26] How users perceive the quality of a system can affect how much they will actually use it.
They found that interface "enhancements" (graphics, animation, sound, etc.) had little effect on quality of experience: "although these features may be aesthetically pleasing they do little to remove actual barriers to the users' goal attainment."
Perceived Quality of Service
The speed at which your pages display affects their perceived quality and reliability. Bouch, Kuchinsky, and Bhatti investigated the effects of delay on perceived QoS in order to find an acceptable QoS level for e-commerce transactions. They tested delays from 2 to 73 seconds for both non-incremental and incrementally loaded pages. [27] Users rated latency quality versus delay on a scale of high, average, or low (see Table 1.1).
Table 1.1. Web Page Quality Rating versus Delay
The results show a mapping between objective QoS and the users' subjective perception of QoS. Pages that displayed quickly (<= 5 seconds) were perceived to be of high quality, with high-quality products. Pages that displayed slowly (> 11 seconds) were perceived to be of low quality and untrustworthy. In fact, slower pages caused some users to feel that the security of their purchases may have been compromised, and they abandoned their transactions. Figure 1.3 shows the actual data behind Table 1.1 for the non-incremental display. This figure plots the number of low, average, and high ratings versus latency. The range where high ratings turn to low is between 8 and 10 seconds for non-incremental downloads, closely matching what Nielsen and others have found.
Figure 1.3. Latency quality ratings show a drop-off at around 8 to 10 seconds.
Users tolerated nearly six times the delay for pages that displayed incrementally, although this tolerance decreased with usage. Test subjects rated pages as "average" with delays up to 39 seconds, and "low" with delays over 56 seconds. The researchers also tested user requirements for speed by allowing them to click "increase quality" if they found the web page delay to be unacceptable. The average tolerance was 8.6 seconds, with a standard deviation of 5.9 seconds. They attribute this large deviation in acceptable download times to contextual factors like web experience and user expectations. The longer users interact with a site, the less they tolerate delays. Users will tolerate longer delays with tasks they perceive to be computationally complex. Users expect database access or complex calculations to take longer than displays of cached or static pages. Users form a conceptual model of system performance, which influences their tolerance for delay.
Credibility
Fogg et al. found that slow-loading pages reduce ease of use, which in turn reduces credibility (or trustworthiness and expertise). Only difficult navigation was found to hurt credibility more. [28]
Bailout Rates and Attention Thresholds
The bailout rate is the percentage of users who leave a page before it loads and start looking for a faster, more engaging site. In their first "Need for Speed" study of 1999, Zona Research found that pages larger than 40KB had bailout rates of 30 percent. [29] Once the designer reduced the same page to 34KB, the bailout rate fell to between 6 and 8 percent, a dramatic decrease for just a few kilobytes. When fat pages were reduced to the recommended maximum of 34KB, readership went up 25 percent. [30] These are averages; users with faster connections and processors will experience faster downloads, but they too can become frustrated.
Zona's second study, "Need for Speed II," took into account dynamic transactions in order to modify the so-called "8-second rule." [31] They recommend that web site designers of dynamic sites cut an additional 0.5 to 1.5 seconds off connection latency in order to stay at the same level of abandonment compared with static web pages. As the web moves from a "plumbing" (pipes delivering pages) to a "transaction" (a series of dynamically generated pages) model, they argue that "cumulative frustration" plays an important role in user satisfaction.
Cumulative Frustration and Time Budgets
Users can change the way they browse a site as they request and view additional pages. As they become more proficient, their learning "spills over," and users reduce their expected number of page views on returning visits. Clickstream-based analysis suggests that visitors trade off the number of pages requested against the time spent at each page. [32] Users may set "time budgets" for particular tasks, even though the tasks may take multiple pages to complete.
Provide Feedback for Longer Tasks
Without effective feedback, users will wait only so long for your pages to load. For longer delays, you can extend the time that users are willing to wait with realistic feedback. Displaying your pages incrementally, a crude form of feedback, can extend user tolerance for delays. Myers found that users prefer percent-done progress indicators for longer delays. [33] These linear progress bars lower stress by allowing experienced users to estimate completion times and plan more effectively. Such progress bars are commonly used in download managers.
Bickford found that with no feedback, half of his test subjects bailed out of applications after 8.5 seconds. Switching to a watch cursor delayed their departure to an average of 20 seconds. "An animated watch cursor was good for over a minute, and a progress bar would keep them waiting until the Second Coming." [34]
Dellaert and Kahn found that wait time negatively affects consumer evaluation of web sites, but that this effect could be mitigated by providing information about the delay. [35] Delay information reduces uncertainty between expected and actual delays. For longer delays, they found that countdown feedback, a form of percent-done indicator, was better than duration feedback.
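To illustrate why percent-done feedback works, here is a minimal sketch of a textual percent-done indicator. The bar format and width are arbitrary choices of this sketch, not anything prescribed by the studies above:

```python
import sys
import time

def render_bar(fraction, width=30):
    """Render a percent-done indicator for a completed fraction (0.0-1.0).
    A linear bar lets users extrapolate the remaining wait, which is the
    same information a countdown display provides."""
    filled = int(fraction * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {fraction:4.0%}"

# Redraw the bar in place as simulated work progresses
for step in range(0, 101, 10):
    sys.stdout.write("\r" + render_bar(step / 100))
    sys.stdout.flush()
    time.sleep(0.01)
sys.stdout.write("\n")
```

Because the bar advances linearly with actual progress, users can estimate the completion time; a bar that stalls or jumps destroys exactly that ability.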
They also found that delays before viewing pages are less frustrating than delays while viewing pages. In other words, any delay after a page has loaded (for example, a sluggish response while users are interacting with the page) is worse than a delay before the page has loaded. Response times below two seconds are ideal, but current bandwidths make this speed impractical, so we settle for 8 to 10 seconds. What does this mean for web page design?
Page Design Guidelines
Page size and complexity have a direct effect on page display speed. As you learned in the Introduction, the majority of current users are at 56Kbps or less. That trend will continue until at least 2004, with international users lagging behind until 2007. Table 1.2 shows the maximum allowable page size needed to meet three attention thresholds at different connection speeds (derived from Nielsen, Designing Web Usability, 2000).
Table 1.2. Maximum Page Size for Various Connection Speeds and Attention Thresholds
You can see that 34KB is about the limit of total page size to achieve the 10-second attention threshold at 56Kbps. Under 30KB would be an appropriate limit for 8.6 seconds at 56Kbps. This is total page size, which includes ads and graphics. Assuming a 10KB banner ad and some small graphics, your HTML should be at most around 20KB. Designers who violate the 30KB limit will pay for their largess with lost sales, increased bailout rates, and increased bandwidth costs.
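The arithmetic behind these limits can be approximated as follows. The 50 percent effective-throughput factor and the 0.5-second fixed latency are assumptions chosen to roughly reproduce the 34KB figure, not values taken from Nielsen's table:

```python
def max_page_size_kb(bandwidth_kbps, threshold_s, latency_s=0.5, efficiency=0.5):
    """Rough page-size budget for a given connection speed and attention
    threshold. Assumes a fixed startup latency and that only about half
    of the nominal bandwidth is usable (both are assumed values)."""
    throughput_kb_per_s = bandwidth_kbps * efficiency / 8  # kilobits/s -> KB/s
    return (threshold_s - latency_s) * throughput_kb_per_s

print(round(max_page_size_kb(56, 10)))   # ~33KB: near the 34KB limit
print(round(max_page_size_kb(56, 8.6)))  # ~28KB: under 30KB
```

Plugging in faster connections shows why broadband users are so much harder to frustrate with page weight alone, and why latency, not just size, dominates at higher speeds.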
So what have we learned from all this? Speed of response is certainly one factor in user satisfaction on the web. Consistency of response times is another. But some researchers say that modeling human behavior in real-time environments with fixed performance metrics (like response times below 10 seconds) is too simplified. What we need is a more holistic approach.
Attunability
Some HCI researchers say that it is not so simple: users "attune" to a particular system's response rate regardless of its duration. [42] Ritchie and Roast say that user satisfaction with web performance is more complex than simple numeric response times. Users form a mental model of the systems they are dealing with based on system response characteristics. To form this model, users perform a "selection and adjustment [of] subjective time bases, and adapting the rate at which the environment is monitored to meet its particular pace." [43] Attuning is the process of forming this mental model and adapting our expectations to a particular system's response rate.
Consistent response times and adequate feedback help users attune to a system's pace. Inconsistent response times and poor feedback reduce the "attunability" of a particular system, and "temporal interaction errors" ensue. Thus "the less variable the duration of a particular task, the more likely that users can attune to the environment" [44] and the more accurately users can distinguish tasks of differing duration.
Humans can attune to a remarkably varied range of response rates, anything from years to seconds. Everyone knows that postal mail takes a matter of days, that Domino's delivers pizza within minutes, and that traffic lights change in a matter of seconds. The web is different, however.
The Chaotic Web
On large decentralized networks like the web, the effects of latency can exceed the effects of improvements in performance. Conventional performance engineering and evaluation are not possible in this environment. Chaotic large-scale systems like the web can introduce non-deterministic delays. An external object in a web page can take anywhere from tens of milliseconds to what seems like an eternity (30 to 40 seconds or longer) to download. Rigid performance metrics such as response times under 10 seconds can be less important than consistent response rates. To meet the needs of users, you need to provide an environment with characteristics to which they can attune. Consistency of response times and feedback allows users to better "attune" to system delays.
Consistent Response Rates
The key to attunability is to minimize the variability of delays. Variability is the difference between the slowest and fastest possible response rates. "The larger this variation, the less well system delays can be associated with a task," and the lower the system's attunability. [45] By minimizing this range, you allow users to model your system more easily and adjust their performance expectations.
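The plus-or-minus 50 percent variability tolerance cited earlier suggests a simple consistency check for a set of measured response times. Treating that figure as a hard cutoff is a simplification of this sketch, not a rule from the research:

```python
def is_attunable(response_times, tolerance=0.5):
    """Heuristic check: response times are considered attunable if each
    one falls within +/- tolerance of the mean delay (default +/-50%,
    the tolerance level cited for moderate delay variability)."""
    mean = sum(response_times) / len(response_times)
    low, high = mean * (1 - tolerance), mean * (1 + tolerance)
    return all(low <= t <= high for t in response_times)

print(is_attunable([4.0, 5.0, 6.0]))   # consistent delays
print(is_attunable([1.0, 5.0, 12.0]))  # erratic delays
```

A site whose pages vary from 1 to 12 seconds fails this check even though its mean delay is respectable; it is the spread, not the average, that defeats attuning.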
Design for Attuning
Designing for attuning implies the adoption of transparency as an architectural principle. [46] By offering feedback mechanisms as pages and objects download, you can ensure that users will minimize the "temporal interaction errors" associated with inconsistent response times.
The idea is to offer feedback that matches user expectations. Linear progress bars, which match user expectations, can be used to give users real-time feedback. Server load, cache state, and file sizes can be displayed with server-side includes. All of these performance cues are designed to let the user know how the system is performing and help form a mental model. Here is an example SSI for a file-size cue:

<a href="thisfile.zip">download this file</a> (<!--#config sizefmt="abbrev" --> <!--#fsize file="thisfile.zip" -->)

This code automatically displays the size of the referenced file so the user can gauge how long it will take to download. The antithesis of this concept is the Windows file copy animation. The system portrays the activity as an animation of pages flying across at a constant rate, independent of the actual progress being made. This is like a spinning watch cursor, which has no relation to the progress bar. The non-linear progress bar stalls near the end of the scale while pages keep flying (see Figure 1.4). A better solution would be to create a linear progress bar, and either change the animation to one that fills up a page or remove it entirely.
Figure 1.4. The non-linear Windows file copy animation.
Users "attune" to the speed of the web's response. If your pages are slower than average or are inconsistent in size, users tend to tune out and go elsewhere. Optimizing the size of your pages and making them respond consistently can help users establish a rhythm as they surf through your site. Throw in a compelling experience, and some sites can attain the most elusive of web site goals, flow. You'll learn more about flow in Chapter 2, "Flow in Web Design."
Summary
The research suggests that without feedback, the length of time that users will wait for web pages to load is from 8 to 12 seconds. Nielsen recommends the average of 10 seconds. Bickford, Bouch, and Shneiderman found that most users will bail out of your page at around 8 to 8.6 seconds. Without feedback, that is the limit of people's ability to keep their attention focused while waiting. If you provide continuous feedback through percent-done indicators, users will tolerate longer delays, up to 20 to 39 seconds, although their tolerance decreases with experience. Users will be more forgiving if you manage their delay experience with performance information. They will also tolerate increased delays if they perceive the task to be computationally complex, like a database access. Try to minimize response time variability by keeping page response times uniform to maximize attunability. This research suggests the following web page design guidelines:
Web designers exceed these limits at their peril. Users associate slow-loading pages with inferior quality products and services, compromised security, and low credibility. Lower user satisfaction can lead to abandoned web sites and shopping carts.