Getting Started


Pet Store Overview

Important! We do not recommend Pet Store as a performance benchmark application for running under load conditions. See Sun's Java web site for suitable performance benchmarks, such as ECperf. [2] However, Pet Store contains classic e-Commerce functions such as Browse, Search, Sign-in, Add-to-Cart, Checkout, and Update Account, and it provides a good example for discussing test script concepts. The home page for Pet Store, shown in Figure 7.1, contains the following choices:

[2] ECperf is a performance benchmark developed under the Java Community Process. See <http://java.sun.com/j2ee/ecperf/>.

  1. Browse from one of five categories (Fish, Dogs, Reptiles, Cats, Birds)

  2. Search

  3. Sign-in

  4. Shopping cart

  5. ? (Help)

Figure 7.1. Java Pet Store demo home page. © 2002 Sun Microsystems. Reprinted by permission from Sun Microsystems.

graphics/07fig01.gif

If the user wants to browse and chooses one of the categories, a list of pets in this category returns. Similarly, if the user selects Search, a list of pets containing the search attribute returns. Figure 7.2 shows the high-level browse and search hierarchy for Pet Store. As shown in this diagram, the browse for Fish displays four fish, Angelfish, Goldfish, Koi, and Tiger Shark, whereas the browse for Reptiles only returns two reptiles, Iguana and Rattlesnake. Search returns a variable number of pets, depending on the search criteria entered.
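The browse and search behavior in Figure 7.2 can be pictured as a small lookup structure. The sketch below is purely illustrative: it fills in only the two categories whose contents the text names, and the function names are our own, not Pet Store's.

```python
# Illustrative model of the Figure 7.2 hierarchy; only the Fish and Reptiles
# categories named in the text are filled in. Not Pet Store's actual data model.
CATALOG = {
    "Fish": ["Angelfish", "Goldfish", "Koi", "Tiger Shark"],
    "Reptiles": ["Iguana", "Rattlesnake"],
}

def browse(category):
    """A category browse returns the fixed list of pets in that category."""
    return CATALOG.get(category, [])

def search(keyword):
    """A search returns a variable number of pets, depending on the criteria."""
    keyword = keyword.lower()
    return [pet for pets in CATALOG.values() for pet in pets
            if keyword in pet.lower()]
```

Note how `browse("Fish")` always returns the same four pets, while `search` may return any number of matches, mirroring the fixed-versus-variable behavior the figure shows.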

Figure 7.2. Java Pet Store hierarchy

graphics/07fig02.gif

Pet Store functions resemble those of many e-Commerce applications. In addition to the Browse and Search capabilities, the application provides the capability to make purchases, which requires a log-on (called Sign-in by Pet Store). A purchase requires standard information such as address, credit card, and number of items. With this background on how Pet Store works, let's start developing test scripts for a web site.

Determining User Behavior

As mentioned previously, the best starting point in developing test scripts is analysis of an existing web site's traffic patterns. Your HTTP server typically generates logs of all requests made through it. For example, Listing 7.1 shows a subset of an access log from IBM HTTP Server taken from a run of the Pet Store web application. The HTTP server logs URL requests, including any parameters. [3]

[3] Assuming your web site logs requests at this level. Not every web site does, making this level of log analysis impossible.

The first log entry references a query on the Fish category. A bit further down, the log contains an entry for a purchase of an item labeled EST-3. By examining the logs, we begin to see the activities the users typically perform against the web site, as well as the ratio of one type of activity compared to another. Listing 7.1 also shows some of the difficulty in examining logs. This log represents a very small snippet of activity, yet it is hard to identify particular users or to correlate static requests with dynamic requests. Analysis tools exist to automatically scan logs and organize their information into usage patterns. These tools often prove useful when trying to extract this information for large web sites.

Listing 7.1 Sample HTTP server log
127.0.0.1 - - [28/Dec/2001:17:26:30 -0500] "GET /estore/control/category?category_id=FISH HTTP/1.0" 200 6467
127.0.0.1 - - [28/Dec/2001:17:26:34 -0500] "GET /estore/control/white HTTP/1.0" 200 0
127.0.0.1 - - [28/Dec/2001:17:26:34 -0500] "GET /estore/control/product?product_id=FI-SW-02 HTTP/1.0" 200 6388
127.0.0.1 - - [28/Dec/2001:17:26:34 -0500] "GET /estore/images/button_cart-add.gif HTTP/1.0" 200 481
127.0.0.1 - - [28/Dec/2001:17:26:38 -0500] "GET /estore/control/white HTTP/1.0" 200 0
127.0.0.1 - - [28/Dec/2001:17:27:00 -0500] "GET /estore/control/productdetails?item_id=EST-3 HTTP/1.0" 200 6038
127.0.0.1 - - [28/Dec/2001:17:27:00 -0500] "GET /estore/images/fish4.gif HTTP/1.0" 200 7501
127.0.0.1 - - [28/Dec/2001:17:27:04 -0500] "GET /estore/control/white HTTP/1.0" 200 0
127.0.0.1 - - [28/Dec/2001:17:27:04 -0500] "GET /estore/control/cart?action=purchaseItem&itemId=EST-3 HTTP/1.0" 200 7760
127.0.0.1 - - [28/Dec/2001:17:27:04 -0500] "GET /estore/images/button_checkout.gif HTTP/1.0" 200 534
127.0.0.1 - - [28/Dec/2001:17:27:04 -0500] "GET /estore/images/button_remove.gif HTTP/1.0" 200 1189
127.0.0.1 - - [28/Dec/2001:17:27:04 -0500] "GET /estore/images/cart-update.gif HTTP/1.0" 200 489
127.0.0.1 - - [28/Dec/2001:17:27:16 -0500] "GET /estore/control/white HTTP/1.0" 200 0
127.0.0.1 - - [28/Dec/2001:17:27:16 -0500] "GET /estore/control/checkout HTTP/1.0" 200 6981
127.0.0.1 - - [28/Dec/2001:17:27:16 -0500] "GET /estore/images/button_cont.gif HTTP/1.0" 200 1275
127.0.0.1 - - [28/Dec/2001:17:27:18 -0500] "GET /estore/control/white HTTP/1.0" 200 0
127.0.0.1 - - [28/Dec/2001:17:27:18 -0500] "GET /estore/control/placeorder HTTP/1.0" 200 6617
127.0.0.1 - - [28/Dec/2001:17:27:19 -0500] "GET /estore/images/button_submit.gif HTTP/1.0" 200 455
127.0.0.1 - - [28/Dec/2001:17:27:24 -0500] "GET /estore/control/white HTTP/1.0" 200 0
127.0.0.1 - - [28/Dec/2001:17:27:24 -0500] "POST /estore/control/verifysignin HTTP/1.0" 200 10038
127.0.0.1 - - [28/Dec/2001:17:27:29 -0500] "GET /estore/control/white HTTP/1.0" 200 0
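As a rough illustration of the kind of scan such analysis tools perform, the sketch below tallies log lines in the format of Listing 7.1, separating dynamic servlet requests from static image fetches. The `/control/` test is an assumption based on the URLs shown; real analysis tools also correlate users and sessions, which this sketch does not attempt.

```python
import re
from collections import Counter

# Matches the request portion of a Common Log Format entry, as in Listing 7.1.
LOG_PATTERN = re.compile(
    r'"(?P<method>\w+) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d+) (?P<bytes>\d+)'
)

def tally_requests(log_lines):
    """Count dynamic requests separately from static files (images, etc.)."""
    dynamic, static = Counter(), Counter()
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match is None:
            continue  # skip malformed lines rather than abort the scan
        path = match.group("path").split("?")[0]  # drop query parameters
        # Assumption: Pet Store serves its dynamic pages under /estore/control/.
        (dynamic if "/control/" in path else static)[path] += 1
    return dynamic, static
```

The resulting counts give exactly the activity ratios discussed above: how often users browse a category versus check out, for instance, which in turn drives the script mix in the test scenario.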

The more realistic the scenario, the better the test scripts. Pulling data from logs generally gives you the best understanding of how users interact with an existing web site. (Of course, don't forget to account for new features or functions in your web site test scenarios.) Log analysis works well for existing web sites; it proves less useful for new or significantly modified web sites. In these cases, the results of usability testing often provide better scenario data.

Also keep in mind that the smallest details matter in building a scenario: Does the user enter the web site through the home page or bookmark another entry point? Does the user log out of the web site or just leave for another site? Accurately representing your users in your test scripts influences the throughput rate and user loading that your test achieves. This in turn corresponds to how accurately you portray your web site's production performance.

A Typical Test Script

Using the data on how you expect users to walk through your site, you create test scripts to simulate these paths. A test script represents an execution path through a set of web pages. During the test, this same script executes over and over again, simulating multiple users. To simulate many users following many different paths through a web site, you typically use a collection of scripts; in this chapter, we call this collection the test scenario. Typically, we create test scripts by "recording" a user's activities while performing a series of tasks at the web site.
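One way to picture a test scenario is as a weighted mix of scripts, with each virtual user drawing its next script according to ratios derived from log analysis. The script names and weights below are made up purely for illustration, not measured from Pet Store.

```python
import random

# Hypothetical script mix; the names and ratios are illustrative only.
SCENARIO = [("browse", 0.60), ("search", 0.25), ("purchase", 0.15)]

def pick_script(rng=random):
    """Draw the next script for a virtual user according to the scenario mix."""
    names, weights = zip(*SCENARIO)
    return rng.choices(list(names), weights=weights, k=1)[0]
```

Over a long run, roughly 60% of the simulated users browse, 25% search, and 15% purchase, which is how a scenario reproduces the activity ratios observed in the production logs.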

Many test tools provide script recording capability. In this chapter, we use two common products, LoadRunner, from Mercury Interactive Corporation, and SilkPerformer V, from Segue Software, Inc., to provide our examples. Chapter 8, Selecting the Right Test Tools, covers test tools and script generation capabilities in more detail, and Appendix C also contains a broader list of vendor tools.

A typical test script consists of a variety of things, including URLs, think times, and cookies.

  • URLs: The test script contains the actual URL requests it issues against the web site. Some of the URLs represent the simulated user's explicit requests, and some represent embedded elements (such as GIFs and JavaScript) in the HTML pages returned in response to the simulated user's requests.

  • Think Times: A user on your web site typically reads a returned web page prior to issuing another request. A test tool, by contrast, requires no time to analyze the returned page before making the next request; it can send it immediately.

    To accurately simulate the user environment, you want the test tool to wait just like a real user before sending the next request. Most tools allow you to specify a think time or range of think times between requests.

    Some tools actually capture the user's think time while recording the script. However, providing your own think time value usually yields better results. (Script recorders sometimes click through the script too quickly, or take coffee breaks between pages during the recording session.)

  • Cookies: If your web site uses cookies, your scripts must also utilize and process cookies correctly. Some web applications or functions require the cookies they generate in order to function properly. Failure to return a required cookie usually results in testing errors.

    Typically, the test tool records any cookies the user receives from the server during the script recording session. During script execution, the test tool retrieves cookies for each virtual user and returns them to the server on each request, as required.
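The think-time behavior described above can be sketched as a pause between scripted requests. The uniform 2-to-8-second range below is our own choice for illustration; tools such as LoadRunner let you replay recorded times, fix a constant, or randomize within a band.

```python
import random
import time

def think(min_seconds=2.0, max_seconds=8.0, sleep=time.sleep, rng=random):
    """Pause a virtual user for a randomized think time between requests.

    The sleep function is injectable so tests and dry runs can skip the
    actual wait; the 2-8 second range is illustrative only.
    """
    delay = rng.uniform(min_seconds, max_seconds)
    sleep(delay)
    return delay
```

A load driver would call `think()` between each pair of page requests, keeping request pacing realistic instead of hammering the server back-to-back.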
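Similarly, the per-virtual-user cookie bookkeeping can be sketched with a small cookie jar. The `JSESSIONID` value in the test is an assumption based on the session cookie a Java servlet container typically issues; a real test tool manages this automatically for every virtual user.

```python
from http.cookies import SimpleCookie

class VirtualUserCookies:
    """Minimal per-virtual-user cookie jar, sketching what a test tool does."""

    def __init__(self):
        self.jar = {}

    def store(self, set_cookie_header):
        """Record a Set-Cookie response header, overwriting older values."""
        cookie = SimpleCookie()
        cookie.load(set_cookie_header)
        for name, morsel in cookie.items():
            self.jar[name] = morsel.value

    def header(self):
        """Build the Cookie request header returned to the server."""
        return "; ".join(f"{name}={value}" for name, value in self.jar.items())
```

Each virtual user gets its own jar, so two simulated shoppers never share a session, just as two real browsers would not.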

Let's look at an example of a recorded test script. Listing 7.2 shows a subset of a LoadRunner test script for Pet Store. [4] The test script specifies a sequence of two URL requests and also a think time of two seconds between submitting the URL requests. The first URL retrieves the Pet Store home page, and the second URL requests a selection of Fish.

[4] Thanks to Leo Cai for his assistance with all the LoadRunner example scripts.

Listing 7.2 Example script. © 2002 Mercury Interactive Corporation.
//subset of Pet Store browse script
//first URL request to Pet Store home page
   web_url("language",
           "URL=http://ruthless/estore/control/language?language=English",
           "TargetFrame=",
           "Resource=0",
           "RecContentType=text/html",
           "Referer=",
           "Snapshot=t1.inf",
           "Mode=URL",
           LAST);

//think time
   lr_think_time( 2 );

//browse for fish
   web_url("nav-fish.gif",
           "URL=http://ruthless/estore/control/category?category_id=FISH",
           "TargetFrame=",
           "Resource=0",
           "RecContentType=text/html",
           "Referer=http://ruthless:80/estore/control/language?language=English",
           "Snapshot=t2.inf",
           "Mode=URL",
           LAST);

   return 0;
}
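The same two-step sequence can also be described in a tool-neutral way as ordered steps with an explicit think time. The structure below is our own illustration of what Listing 7.2 encodes; only the two URLs and the two-second pause come from the listing.

```python
# Tool-neutral rendering of Listing 7.2: an ordered request sequence with a
# think time between the two page requests. URLs are taken from the listing.
SCRIPT = [
    {"action": "get", "url": "http://ruthless/estore/control/language?language=English"},
    {"action": "think", "seconds": 2},
    {"action": "get", "url": "http://ruthless/estore/control/category?category_id=FISH"},
]

def dry_run(script):
    """Walk the script without touching the network; return the planned steps."""
    plan = []
    for step in script:
        if step["action"] == "get":
            plan.append(f"GET {step['url']}")
        elif step["action"] == "think":
            plan.append(f"WAIT {step['seconds']}s")
    return plan
```

Separating the script data from its executor like this is a common design choice: the same step list can drive a dry run for debugging or a real load driver during the test.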


Performance Analysis for Java Web Sites
ISBN: 0201844540
Year: 2001
Pages: 126
