Common Pitfalls


The success of the performance test hinges on the test scripts. As we've mentioned several times in this chapter, the scripts must represent a user's activities accurately, or the test has no bearing on the production web site. Test teams often create the scripts in haste, with little input from the marketing team or from data on how users actually use the existing web site. Test accuracy also frequently suffers from simple scripting mistakes. Use this list of common test script errors to avoid simple but expensive scripting mistakes.

Inaccuracies

Scripts frequently contain incorrect or extraneous URLs, or they specify the URLs in the wrong sequence. In one case, a script writer forgot to hit the pause button on the script recorder before visiting a popular external news site. For a while, the team inadvertently tested how quickly this external site returned a page! Always read the scripts after recording them, or use your test tool's "playback" feature to confirm what the script actually does.
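One way to catch stray requests like the external news site above is a quick automated scan of the recorded script. The sketch below assumes a hypothetical script format (an ordered list of request URLs); real test tools store scripts in their own formats, so treat this only as an illustration of the check.

```python
from urllib.parse import urlparse

# Hypothetical recorded script: an ordered list of request URLs.
recorded_script = [
    "http://www.example-shop.com/login",
    "http://www.example-shop.com/catalog?cat=books",
    "http://news.example-external.com/headlines",   # stray request
    "http://www.example-shop.com/checkout",
]

def find_stray_urls(script, allowed_host):
    """Return any URLs whose host is not the site under test."""
    return [u for u in script if urlparse(u).hostname != allowed_host]

for url in find_stray_urls(recorded_script, "www.example-shop.com"):
    print("stray request:", url)
```

A check like this supplements, but does not replace, reading the script and watching a playback.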

Hard-Coded Cookies

If the web site under test sets cookies, these cookies may appear in the recorded scripts. Some test tools require the test writer to prepare a variable to explicitly replace the cookie captured in the recording session. The variable allows the script to receive a different cookie value during the test, rather than using the recorded value. If the site uses HTTP session cookies from an application server, cookie substitution is a must. Many test tools now handle cookies automatically, but this is a very important function to verify.
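To make cookie substitution concrete, the sketch below replaces a hard-coded session cookie captured at record time with the value each virtual user actually receives at playback. The request text and the `JSESSIONID` cookie name are assumptions for illustration; your tool's script format and your application server's session cookie name may differ.

```python
import re

# Hypothetical recorded request with a hard-coded session cookie.
recorded_request = (
    "GET /account HTTP/1.1\r\n"
    "Host: www.example-shop.com\r\n"
    "Cookie: JSESSIONID=0000ABCDRECORDEDVALUE\r\n\r\n"
)

def substitute_session_cookie(request_text, live_session_id):
    # Swap the recorded JSESSIONID for the one this virtual user
    # received from the application server during the test run.
    return re.sub(r"JSESSIONID=[^;\r\n]+",
                  f"JSESSIONID={live_session_id}", request_text)

playback_request = substitute_session_cookie(recorded_request,
                                             "0000XYZLIVEVALUE")
```

If the recorded value were replayed verbatim, every virtual user would share one session, which is exactly the failure mode this substitution prevents.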

Unsuitable Think Times

If the person recording the script gets a phone call or goes for a cup of coffee while recording the script, expect to see a very long delay in the test script. Conversely, the person recording the script may be so familiar with the test scenario that she clicks along much faster than a normal user would. Always check the think times in the script.

As we discussed in a previous section, some of the better test tools allow you to remove the think times or to randomize their values via the test console without explicitly changing the think time values in the script. For example, Figure 7.6 shows the capabilities LoadRunner provides. At runtime, you choose whether to use or ignore the think times in the script. The tool also allows you to reduce or increase the recorded think times, or to set think time boundaries. Even if your tool provides this level of flexibility, you need to read your scripts carefully to understand the recorded think times. Of course, if you plan to ignore think times, this becomes less critical. However, few tests run exclusively without think times, because think times give a more accurate representation of a user's interaction with the web site.
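The console controls described above can be sketched as a small function. The mode names and knobs below are hypothetical stand-ins for whatever your tool exposes, not LoadRunner's actual settings; the point is the kinds of adjustment available at runtime.

```python
import random

def adjust_think_time(recorded_seconds, mode="use", scale=1.0,
                      floor=0.0, cap=None):
    """Hypothetical runtime think-time control:
    'ignore' drops the pause entirely; 'use' scales the recorded value;
    'random' picks a value within +/-50% of the recorded pause.
    Either way, the result is clamped between floor and cap."""
    if mode == "ignore":
        return 0.0
    if mode == "random":
        t = random.uniform(0.5 * recorded_seconds, 1.5 * recorded_seconds)
    else:
        t = recorded_seconds * scale
    if cap is not None:
        t = min(t, cap)       # e.g., cap a coffee-break pause at 30s
    return max(t, floor)
```

Note how a cap neutralizes the accidental five-minute pause described earlier without editing the script itself.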

Figure 7.6. Example of customizing think times. © 2002 Mercury Interactive Corporation.

graphics/07fig06.gif

No Parameterization

We still find examples where teams fail to parameterize their scripts to support dynamic data. Of course, when this happens, every simulated user exercises exactly the same path. This does not properly exercise the site, particularly its database interactions. As we discussed earlier, use dynamic data in your scripts to fully exercise your web site.
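A common parameterization pattern is a data pool: each virtual user draws a different value from a file instead of replaying the one value captured at record time. The sketch below is an assumption-laden illustration (the CSV contents, URL, and round-robin assignment are all invented), but the structure mirrors what most test tools provide.

```python
import csv
import io
import itertools

# Hypothetical data pool; in practice this would be a file of test data.
pool_csv = "term\nhiking boots\ngarden hose\ncoffee maker\n"
terms = [row["term"] for row in csv.DictReader(io.StringIO(pool_csv))]
next_term = itertools.cycle(terms)   # round-robin assignment to users

def build_search_url(base="http://www.example-shop.com/search"):
    # Each call simulates a different user's search request.
    return f"{base}?q={next(next_term).replace(' ', '+')}"
```

Because successive requests carry different search terms, the database sees varied queries and cache hit rates closer to production, rather than one endlessly repeated lookup.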

Idealized Users

Do not fall into the trap of assuming that users interact with the site just as the designers intended. As we mentioned before, users rarely log out of a web site; they just move on to the next site. However, almost every performance test we've ever seen includes the lowly logout function as part of the most heavily weighted script. Not only does this put improper emphasis on the performance of the logout function in your testing, but it may also lead to a gross underestimation of user pressure on your web site. Take the time to learn how users interact with your web site, and build your scripts to reflect this.
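One way to reflect real behavior is to weight your scripts by observed usage rather than by designer intent. The scenario names and weights below are purely hypothetical, but they illustrate giving the logout path the small share it actually deserves.

```python
import random

# Hypothetical scenario weights drawn from site logs: most sessions
# simply abandon the site rather than logging out.
scenarios = {
    "browse_and_abandon": 0.70,
    "search_and_buy":     0.25,
    "browse_and_logout":  0.05,   # explicit logout is rare
}

def pick_scenario(rng=random.random):
    """Choose a scenario for the next virtual user by weight."""
    r, cumulative = rng(), 0.0
    for name, weight in scenarios.items():
        cumulative += weight
        if r < cumulative:
            return name
    return name  # guard against floating-point rounding
```

With weights like these, logout no longer dominates the load mix, and abandoned sessions accumulate on the server just as they would in production.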

Oversimplified Scripts

You typically produce an oversimplified script when you lack real data from an existing web site, so you implement some of the use cases generated from your earliest application designs. Such scripts are easy to create, but they provide very limited performance value. For example, if you test only user interactions that execute simple reads against the underlying database, the test tends to run more quickly than one that includes updates and inserts, which skews your estimate of how the web application will perform in production. Likewise, if you execute only specific reads instead of commonly used searches returning several hundred database records, the site again appears to execute more quickly, and to handle more interactions, than will actually be possible under production conditions.

Use your simple scripts to build more complex scripts and scenarios, as we discussed earlier. These more complex interactions give you a better understanding of your web site's production behavior.

Myopic Scripts

Likewise, if you test only the complex portions of the application, or only the trivial portions, your performance test won't give you an accurate representation of production behavior. For instance, don't just test the easy stuff, such as static content; on the other hand, don't test only the hard stuff, such as dynamic content, either. Too much static content testing gives you an inflated sense of the web site's capacity. Conversely, testing only dynamic elements might lead you to build more capacity than the site needs.



Performance Analysis for Java Web Sites
ISBN: 0201844540
Year: 2001
Pages: 126
