Building Test Scenarios


After completing the individual test scripts, put them together to build the test scenarios. Building a test scenario usually involves two steps:

  1. Putting together shorter scripts into larger, representative user scripts

  2. Defining a set of larger user scripts to represent a typical workload mix

First, take the short scripts and string them together to represent meaningful single-user paths through your site. For example, in an e-Commerce site, not every user logs on, searches for one item, puts it in the shopping cart, and checks out. Instead, some users log on, do a few searches, and then leave the web site. Others make purchases after completing some searches, and still others check the shipping status of outstanding orders.

After establishing the various paths the users execute through the web site, you next establish how frequently each path executes relative to overall web site traffic. For example, if the test plan specifies that 95% of the users browse the web site but only 3% actually make purchases, use this weighting when you assign scripts to your virtual users. In this case, 95% of the virtual users execute the Browse script, but only 3% execute the Purchase script.

Putting Scripts Together

User execution paths typically consist of several smaller scripts bundled into larger scripts, which model typical usage patterns at your web site. These scripts often resemble programs themselves, with "initialization" steps executed just once, followed by programming logic (including loops and conditional statements), and a clean-up step at the end. Remember, a single script executes multiple times and represents many users; thus, the script requires variation to be realistic.

As an example, let's consider a Browse and Purchase script. This script needs variety in the number of browses, the number of items selected for purchase, and the browse-to-purchase ratio. In building a Browse and Purchase script, the first step might be a login, which occurs only once in the script's initialization section. The main body of the script contains the smaller, atomic Browse, Add-to-Cart, and Checkout scripts. Test tools differ in how they generate variation within these larger scripts. For example, you might create a simple loop around the small Browse script with the tool's scripting language and use a random number to control the loop's iterations. If the average user browses for five items, choose a random loop control variable between 3 and 7 to provide variation.

To simulate a higher percentage of browsers than purchasers, place the small Add-to-Cart script inside a conditional block, and use a random number to trigger the conditional. For example, if only 25% of browsing users actually add items to the cart, test for a match with a random number between 1 and 4.

Finally, execute the small, atomic Checkout script once at the end of the larger script. Of course, few customers actually make purchases, even if they've placed items in the cart. So, we place the small, atomic Checkout script inside a conditional and execute it randomly about 5% of the time if the user places items in the cart. The checkout occurs outside the browse loop because the user buys stuff just once per visit. Here's some pseudo-code for the logic behind a Browse and Purchase script consisting of several smaller atomic scripts.

  //initialization - do once
  Login;

  //body
  cartItems = false;
  numOfBrowses = random number(3..7);   //pick random # of times to execute Browse script
  For i = 1 to numOfBrowses Do
      Browse;
      toCart = random number(1..4);     //for placing in the cart 25% of the time
      If toCart = 1 Then                //place item in cart
          Add-to-Cart;
          cartItems = true;
  End;

  toPurchase = random number(1..20);    //for purchasing 5% of the time
  If toPurchase = 3 and cartItems = true Then   //buy the items in the cart
      Checkout;                         //purchase all items in cart

  //cleanup - do once
  Logout;   //note: you may want to leave this out, since customers frequently skip the logout
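
If your test tool lets you drive virtual users from Java, the same control flow might look like the sketch below. This is a minimal illustration rather than any particular tool's API; the login, browse, addToCart, checkout, and logout methods are hypothetical stand-ins for the atomic scripts.

  import java.util.Random;

  // A sketch of the Browse and Purchase logic above. The five
  // action methods are hypothetical stand-ins for atomic scripts.
  public class BrowseAndPurchaseScript {
      private static final Random random = new Random();

      public static void main(String[] args) {
          login();                                  // initialization - do once

          boolean cartItems = false;
          int numOfBrowses = 3 + random.nextInt(5); // random # of browses, 3..7
          for (int i = 0; i < numOfBrowses; i++) {
              browse();
              if (random.nextInt(4) == 0) {         // add to cart 25% of the time
                  addToCart();
                  cartItems = true;
              }
          }

          if (cartItems && random.nextInt(20) == 0) { // check out 5% of the time
              checkout();                             // purchase all items in cart
          }

          logout();  // cleanup - customers frequently skip this step
      }

      private static void login()     { System.out.println("Login"); }
      private static void browse()    { System.out.println("Browse"); }
      private static void addToCart() { System.out.println("Add-to-Cart"); }
      private static void checkout()  { System.out.println("Checkout"); }
      private static void logout()    { System.out.println("Logout"); }
  }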

Use Weighted Testing

We discussed weighted testing in the previous section, but given the importance of this topic, let's discuss it in a bit more detail here. Weighted testing tries to duplicate the normal usage patterns of web site visitors. This helps us properly weight the influence of each function's performance on overall web site performance.

As discussed in Chapter 6, use data from your existing web site or from your marketing and usability studies to define the appropriate weighting for your scenarios. Pull this information together during the test planning phase, and refer to it as you build your scripts and scenarios. For example, let's assume that studies show 80% of our e-Commerce site visitors browse the on-line catalog, and each of these browsing users visits an average of five pages. We need a performance test to simulate these users.

You also want a set of scripts to represent different types of users. While one set of users might browse through categories, another set might fast-track by using search. Represent each major user path as a separate major script. When you execute the test, run an appropriately weighted mix of these scripts. If you expect the majority of your users to browse, give this script a higher weighting by assigning more simulated users to execute it. For example, the home page for Pet Store allows you to browse from one of five main categories (Fish, Dogs, Reptiles, Cats, and Birds) or to initiate a search. If 80% of your users browse, while 20% use the search function, set up your test runs so that 80% of your virtual users run the Browse script, and 20% run the Search script.
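
Most load-testing tools let you assign user populations to scripts directly in the scenario definition. If yours does not, you can approximate the weighting inside a single driver, as in this minimal sketch; runBrowseScript and runSearchScript are hypothetical stand-ins for the atomic Browse and Search scripts.

  import java.util.Random;

  // Weights two scripts 80/20 inside one driver. Each virtual user
  // draws a random number to decide which path to execute.
  public class WeightedUserDriver {
      private static final Random random = new Random();

      static void runVirtualUser() {
          if (random.nextInt(100) < 80) {  // 80% of users browse
              runBrowseScript();
          } else {                         // the remaining 20% search
              runSearchScript();
          }
      }

      static void runBrowseScript() { System.out.println("Browse"); }
      static void runSearchScript() { System.out.println("Search"); }

      public static void main(String[] args) {
          for (int user = 0; user < 100; user++) {
              runVirtualUser();
          }
      }
  }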

But what about users who are actually making purchases? Don't they normally browse or search before buying something? To handle these cases, create two purchase scripts: Browse and Purchase and Search and Purchase. These scripts combine the smaller, atomic Sign-in and Purchase scripts with Browse and Search. Counting our new Purchase scripts, our web site performance test uses four test scripts to simulate user behavior. Based on the browse, search, and buy data provided in the test plan, you define the weightings for each script as shown in Table 7.1.

Since only 5% of our users actually make purchases, we pull these numbers out of the total browsing and searching users. This split is somewhat arbitrary in this example, but easy to change if we obtain more detailed purchase patterns. The test scenario proportions the virtual users in accordance with this weighting. For example, in a test with 100 virtual users, you specify 77 virtual users to perform the Browse script, three virtual users for the Browse and Purchase script, and so forth. Getting the scenario proportions correct is very important. Test teams occasionally build a test to stress only one function, such as the item purchase function. Such tests often result in a web site poorly tuned for its production usage patterns.

Table 7.1. Example Weighting for Pet Store Scripts

  Script                  Weighting
  Browse                  77%
  Browse and Purchase      3%
  Search                  18%
  Search and Purchase      2%
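
To see how these weightings translate into virtual-user counts, the short sketch below allocates a population according to Table 7.1. It is illustrative only; most test tools accept the percentages directly in a scenario file.

  // Allocates virtual users to scripts using the Table 7.1 weightings.
  public class ScriptAllocation {
      public static void main(String[] args) {
          String[] scripts = {"Browse", "Browse and Purchase",
                              "Search", "Search and Purchase"};
          int[] weights    = {77, 3, 18, 2};  // percentages from Table 7.1
          int totalUsers   = 100;

          for (int i = 0; i < scripts.length; i++) {
              int users = totalUsers * weights[i] / 100;
              System.out.println(scripts[i] + ": " + users + " virtual users");
          }
      }
  }

For populations that are not a multiple of 100, the integer division leaves a remainder; assign any leftover users to the most heavily weighted script.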

Exercise the Whole Web Site

Of course, time and money always limit the scope of your test, but the infrequently accessed features often dramatically impact the performance of the entire web site. Try to test computationally expensive paths, such as search functions. Your site's search function might not receive frequent visits, but when it does, it might bog down the whole site. In general, the more test coverage, the better. Start with your most frequently accessed web site functions, and work your way down to those seldom used. If you run out of time or money, at least your coverage extends to the most traveled areas of the site. However, over time, you probably want to accumulate tests to cover the entire web site. For example, in our examples, we discussed the Browse, Search, and Purchase paths through Pet Store. Other paths include the MyAccount, New User, and Help functions.


