Load Tests


Load tests are used to verify that your application will perform as expected while under the stress of multiple concurrent users. You configure the levels and types of load you wish to simulate and then execute the load test. A series of requests will be generated against the target application and Team System will monitor the system under test to determine how well it performs.

Load testing is most commonly used with web tests to conduct smoke, load, and stress testing of ASP.NET applications. However, you are certainly not limited to this. Load tests are essentially lists of pointers to other tests, and they can include any other test type except for manual tests.

For example, you could create a load test that includes a suite of unit tests. You could stress-test layers of business logic and database access code to determine how that code will behave when many users are accessing it concurrently, regardless of which application uses those layers.
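
As a sketch of the idea, the unit test below (the class names and the discount rule are hypothetical, invented only for illustration) could be added to a load test just as a web test would be, so that its business-logic call is exercised by many simulated users at once:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // A stand-in for a real business-logic class; in practice this type
    // would live in the application assembly under test.
    public class OrderCalculator
    {
        public decimal CalculateTotal(int quantity, decimal unitPrice)
        {
            decimal total = quantity * unitPrice;
            // Hypothetical rule: 10 percent discount for 50 items or more.
            return quantity >= 50 ? total * 0.9m : total;
        }
    }

    [TestClass]
    public class OrderLogicTests
    {
        // Once this test is added to a load test, the load test engine
        // invokes it repeatedly on behalf of the simulated users.
        [TestMethod]
        public void CalculateTotal_AppliesVolumeDiscount()
        {
            OrderCalculator calculator = new OrderCalculator();
            decimal total = calculator.CalculateTotal(100, 9.99m);
            Assert.AreEqual(100 * 9.99m * 0.9m, total);
        }
    }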

As another example, ordered tests can be used to group a number of tests and define a specific order in which they will run. Because tests added to a load test are executed in a randomly selected order, you may find it useful to first group them with an ordered test, and then include that ordered test in the load test. You can find more information on ordered tests in Chapter 13.

Creating and configuring load tests

In this section, we describe how to create a load test using the New Load Test Wizard. You'll examine many options that you can use to customize the behavior of your load tests.

As described in the web testing section of this chapter, tests are contained in a test project, and load tests are no exception. You can either use the New Test option on the Test menu and specify a new or existing test project, or right-click an existing test project and choose Add → Load Test.

Using the New Load Test Wizard

Whether from a test project or the Test menu, when you add a new load test, the New Load Test Wizard is started. This wizard will guide you through the many configuration options available for a load test.

Scenarios and think times

A load test is composed of one or more scenarios. A scenario is a grouping of web and/or unit tests, along with a variety of preferences for user, browser, network, and other settings. Scenarios are used to group similar tests or usage environments. For example, you may wish to create a scenario that simulates the creation and submission of an expense report by your employees, in which your users have LAN connectivity and all use Internet Explorer 6.0.

When the wizard is launched, the first screen describes the load test creation process. Click Next and you will be prompted, as shown in Figure 15-12, to assign a name to your load test's first scenario.

Figure 15-12

Note that the wizard only supports the creation of a single scenario in your load test, but you can easily add more scenarios with the Load Test Editor after you complete the wizard.

The second option on this page is to configure think times. You may recall from the "Web Tests" section earlier that think time is a delay between each request, which can be used to approximate how long a user will pause to read, consider options, and enter data on a particular page. These times are stored with each of a web test's requests. The think time profile enables you to turn these off or on.

If you enable think times, you can either use them as is or apply a normal distribution that uses the stored times as a basis. We recommend using the normal distribution if you want to simulate the most realistic user load, based on what you expect the average user to do.
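
The exact distribution Team System applies is internal to the load test engine, but the sketch below illustrates the general idea: each recorded think time becomes the center of a normal distribution, so simulated users pause for slightly different lengths of time on each run (the 20 percent standard deviation here is an arbitrary assumption):

    using System;

    class ThinkTimeSketch
    {
        static readonly Random random = new Random();

        // Sample a normally distributed pause (Box-Muller transform)
        // centered on the recorded think time.
        static double RandomizedThinkTime(double recordedSeconds)
        {
            double u1 = 1.0 - random.NextDouble();
            double u2 = random.NextDouble();
            double standardNormal =
                Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
            double sampled = recordedSeconds + (recordedSeconds * 0.2) * standardNormal;
            return Math.Max(0.0, sampled); // never pause for a negative duration
        }

        static void Main()
        {
            // Five simulated pauses around a recorded think time of 8 seconds.
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine("{0:F2} seconds", RandomizedThinkTime(8.0));
            }
        }
    }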

You can click any test step on the left-hand side to jump directly to that step, or click Next to advance through the steps in order. Click Finish when you are satisfied with the settings and wish to create the load test.

Load patterns

The next step is to define the load pattern for the scenario. The load pattern, shown in Figure 15-13, enables simulation of different types of user load.

Figure 15-13

In the wizard, you have two load pattern options: Constant and Step. Constant load enables you to define a number of users that will remain unchanged throughout the duration of the test. Use a constant load to analyze the performance of your application under a steady load of users. For example, you may specify a baseline test with 100 users. This load test could be executed prior to release to ensure that your established performance criteria remain satisfied.

A step load defines a starting and maximum user count. You also assign a step duration and a step user count. Each time the number of seconds specified in your step duration elapses, the user count is incremented by the step user count, until the maximum number of users is reached. Step loads are very useful for stress-testing your application, finding the maximum number of users your application will support before serious issues arise.
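
To make the arithmetic concrete, the sketch below prints the user count a step load would reach over time; the settings are hypothetical examples, not wizard defaults:

    using System;

    class StepLoadSketch
    {
        static void Main()
        {
            // Hypothetical step load: start at 10 users and add 10 more
            // every 30 seconds, never exceeding 100 users.
            int initialUsers = 10, stepUserCount = 10, maxUsers = 100;
            int stepDurationSeconds = 30;

            for (int elapsed = 0; elapsed <= 300; elapsed += stepDurationSeconds)
            {
                int stepsCompleted = elapsed / stepDurationSeconds;
                int users = Math.Min(maxUsers, initialUsers + stepsCompleted * stepUserCount);
                Console.WriteLine("{0,3} seconds: {1} users", elapsed, users);
            }
        }
    }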

Note

A third type of load profile pattern, called "Goal Based," is available only through the Load Test Editor. See the section "Editing load tests" for more details.

We recommend you begin with a load test that has a small constant user load and a relatively short execution time. Once you have verified that the load test is configured and working correctly, increase the load and duration as you require.

Test Mix

Now select the tests to include in your scenario, along with the relative frequency with which they should run. Click the Add button and you will be presented with the Add Tests dialog shown in Figure 15-14.

Figure 15-14

By default, all of the tests in your solution, except manual tests, will be displayed. You can constrain these to a specific test list with the "Select test list to view" drop-down. Select one or more tests and click OK. To keep this example simple, only add the web test you created earlier in this chapter.

Next, you will return to the Test Mix step to assign frequencies for each test, as shown in Figure 15-15.

Figure 15-15

Use the sliders to assign the chance, as a percentage, that a virtual user will select that test to execute. You may also type a number directly into the numeric fields. Use the lock checkbox to freeze a test at a specific percentage while you use the sliders to adjust the remaining "unlocked" tests. The Distribute button resets the percentages, dividing them evenly among all tests.

Note

Remember that distribution percentages do not indicate how much time will be devoted to each test. Whenever a new virtual user "runs" during the load test, one test is randomly selected from the mix according to these percentages.

For example, assume you have a few tests and distribute percentages evenly between them. If one of those tests takes considerably longer than the others, your load test will spend more time running that test even though the number of times each test runs will be similar. Keep the time that each test takes to run in mind as you assign distributions.
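
The sketch below shows the kind of weighted random selection this implies; it is an illustration of the concept only, not the load test engine's actual code, and the test names and percentages are made up:

    using System;

    class TestMixSketch
    {
        static void Main()
        {
            // A hypothetical mix of three tests and their percentages.
            string[] tests = { "BrowseCatalog", "SubmitExpenseReport", "AdminReports" };
            int[] percentages = { 60, 30, 10 };

            Random random = new Random();

            // Each time a virtual user needs a test, one is chosen at random
            // according to the percentages; how long each test takes to run
            // plays no part in the selection.
            for (int i = 0; i < 10; i++)
            {
                int roll = random.Next(100); // 0..99
                int cumulative = 0;
                for (int t = 0; t < tests.Length; t++)
                {
                    cumulative += percentages[t];
                    if (roll < cumulative)
                    {
                        Console.WriteLine(tests[t]);
                        break;
                    }
                }
            }
        }
    }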

Browser Mix

The next step, applicable only when web tests are part of the load test, is to define the distribution of browser types that you wish to simulate. Team System will then adjust the headers sent to the target application according to the selected browser for that user.

As shown in Figure 15-16, you may add one or more browser types and then assign a percent distribution for their use. Like the Test Mix step described earlier, you can use sliders to adjust the percentages, lock a particular percent, or click the Distribute button to reset to an even distribution.

Figure 15-16

As with the Test Mix settings, each virtual user will select a browser type at random according to the percentages you set. A new browser type is selected each time a test is chosen for execution. This also applies to the Network Mix described next.

Network Mix

After selecting the browser type(s) you wish to simulate, you then specify the kinds of network connectivity you expect your users to have. This step is shown in Figure 15-17.

Figure 15-17

Select one or more network types, such as LAN, T1, and Dial-up, and then assign a percentage distribution, as with the browser mix.

Performance counter sets

A vital part of load testing is the tracking of performance counters. You can configure your load test to observe and record the values of performance counters, even on remote machines. For example, your target application is probably hosted on a different machine from the one on which you're running the test. In addition, that machine may be calling to other machines for required services such as databases or Web services. Counters from all of these machines can be collected and stored by Team System.
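
If you have not worked with performance counters before, the snippet below reads a single counter directly through the .NET System.Diagnostics API, purely to show what one counter sample is; during a load test, Team System gathers samples like this for you from every machine you configure:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterSketch
    {
        static void Main()
        {
            // "% Processor Time" across all processors on the local machine.
            PerformanceCounter cpu =
                new PerformanceCounter("Processor", "% Processor Time", "_Total");

            // The first reading of this counter is always 0, so sample once,
            // wait a second, and sample again to get a meaningful value.
            cpu.NextValue();
            Thread.Sleep(1000);
            Console.WriteLine("CPU: {0:F1}%", cpu.NextValue());
        }
    }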

A counter set is a group of related performance counters. All of the contained performance counters will be collected and recorded on the target machine when the load test is executed.

Note

Once the wizard is complete, you can use the editor to create your own counter sets by right-clicking on Counter Sets and selecting Add Custom Counter Set. Right-click on the new counter set and choose Add Counters. Use the resulting dialog box to select the counters and instances you wish to include.

Select machines and counter sets using the wizard step shown in Figure 15-18. Note that this step is optional. By default, performance counters are automatically collected and recorded for the machine running the load test. If no other machines are involved, simply click Next.

Figure 15-18

To add a machine to the list, click Add Computer and enter the name of the target machine. Then, check any counter sets you wish to track to enable collection of the associated performance counters from the target machine.

Run settings

The final step in the New Load Test Wizard is to specify the test's run settings, as shown in Figure 15-19. A load test may have more than one run setting, but the New Load Test Wizard will only create one. In addition, run settings include more details than are visible through the wizard. We cover these aspects of run settings in the section "Editing load tests" later in this chapter.

Figure 15-19

First, select the timing details for the test. Warmup Duration specifies a window of time during which, although the test is running, no information from the test is tracked. This gives the target application a chance to complete actions such as just-in-time compilation or caching of resources. Once the warmup period ends, data collection begins and will continue until the Run Duration value has been reached.

The sampling rate determines how often performance counters will be collected and recorded. A higher frequency (lower number) will produce more detail, but at the cost of a larger test result set and slightly higher strain on the target machines.

Any description you enter will be stored for the current run setting. You can also use the Maximum Error Details field to specify how many identical errors will be stored before truncation occurs.

Finally, the Validation Level setting indicates which web test validation rules should be executed. This is important because executing validation rules comes at the expense of performance. In a stress test, you may be more interested in raw performance than in whether a given set of validation rules passes. There are three options for validation level:

  • Low: Only validation rules marked with Low level will be executed.

  • Medium: Validation rules marked Low or Medium level will be executed.

  • High: All validation rules will be executed.

Click Finish to complete the wizard and create the load test.

Editing load tests

After completing the New Load Test Wizard, or whenever you open an existing load test, you will see the Load Test Editor, as shown in Figure 15-20.

Figure 15-20

The Load Test Editor displays all of the settings you specified in the New Load Test Wizard. It allows access to more properties and options than the wizard, including the capability to add scenarios, create new run settings, configure SQL tracing, and much more.

Adding scenarios

As you've already seen, scenarios are groups of tests and user profiles. They are a good way to define a large load test composed of smaller specific testing objectives.

For example, you might create a load test with two scenarios. The first includes tests of the administrative functions of your site, including 10 users with the corporate-mandated Internet Explorer 6.0 on a LAN. The other scenario tests the core features of your site, running with 90 users who have a variety of browsers and connections. Running these scenarios together under one load test enables you to more effectively gauge the overall behavior of your site under realistic usage.

The New Load Test Wizard generates load tests with a single scenario, but you can easily add more using the Load Test Editor. Right-click on the Scenarios node and choose Add Scenario. You will then be walked through the Add Scenario Wizard, which is simply a subset of the New Load Test Wizard that you've already seen.

Run settings

Run settings, as shown on the right-hand side of Figure 15-20, specify such things as the duration of the test run, whether and where results data is stored, SQL tracing, and performance counter mappings.

A load test can have more than one run setting, but as with scenarios, the New Load Test Wizard only supports the creation of one. You might want multiple run settings to enable you to easily switch between different types of runs. For example, you could switch between a long-running test that runs all validation rules and another shorter test that runs only those marked as Low level.

To add a new run setting, right-click on the Run Settings node or the load test's root node and choose Add Run Setting. You can then modify any property or add counter set mappings to this new run setting node.

SQL Tracing

You can gather tracing information from a target SQL Server or SQL Express instance through SQL Tracing, which is enabled through the run settings of your load test. As shown in Figure 15-20, the SQL Tracing group has four settings.

First, set the SQL Tracing Enabled setting to True. Then click the SQL Tracing Connect String setting to make the ellipsis button appear. Click that button and configure the connection to the database you wish to trace.

Use the SQL Tracing Directory setting to specify the path or UNC to the directory in which you want the SQL Trace details stored.

Finally, you can specify a minimum threshold for logging of SQL operations. The Minimum Duration of Traced SQL Operations setting specifies the minimum time, in milliseconds, that an operation must take in order for it to be recorded in the tracing file.

Goal-based load profiles

As you saw in the New Load Test Wizard, there are two options for load profile patterns: Constant and Step. A third option, Goal Based, is available only through the Load Test Editor.

The goal-based pattern is used to raise or lower the user load over time until a specific performance counter range has been reached. This is an invaluable option when you want to determine the peak loads your application can withstand.

To access the load profile options, open your load test in the Load Test Editor and click on your current load profile, which will be either Constant Load Profile or Step Load Profile. If the Properties window is not already open, press F4 or right-click on the node and choose Properties.

In the Properties window, change the Pattern value to Goal Based. You should now see a window similar to what is shown in Figure 15-21.

Figure 15-21

First, notice the User Count Limits section. This is similar to the step pattern in that you specify an initial and maximum user count, but you also specify a minimum user count and the maximum increment and decrement by which the user count can change. The load test will dynamically adjust the current user count within these limits in order to reach the goal performance counter threshold.

By default, the pattern is configured against the % Processor Time performance counter. To change this, enter the category (e.g., Memory, System, etc.), the computer from which it will be collected (leave this blank for the current machine), and the counter name and instance. The instance matters only when more than one exists, such as on a machine with multiple processors.

You then need to tell the test about the performance counter you selected. First, identify the range you're trying to reach using the High End and Low End properties. Set the Lower Values Imply Higher Resource Utilization option to True if a lower counter value indicates greater system stress; for example, you would set this when tracking the Memory category's Available MBytes counter. Finally, you can tell the load test to remain at the current user load level once the goal is reached with the Stop Adjusting User Count When Goal Achieved option.
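
Conceptually, the engine behaves like the feedback loop sketched below. This is a simplification for illustration only, not Team System's actual algorithm; the comments map the variables to the editor settings, and the sampled counter values are faked with a random number:

    using System;

    class GoalBasedSketch
    {
        static void Main()
        {
            int userCount = 50;           // Initial User Count
            const int minUsers = 10;      // Minimum User Count
            const int maxUsers = 500;     // Maximum User Count
            const int maxIncrement = 10;  // Maximum User Count Increment
            const int maxDecrement = 10;  // Maximum User Count Decrement
            const double lowEnd = 70.0;   // Low End of the goal range
            const double highEnd = 80.0;  // High End of the goal range

            Random random = new Random();

            for (int interval = 0; interval < 10; interval++)
            {
                // Stand-in for sampling the goal counter (% Processor Time).
                double counterValue = random.Next(50, 100);

                if (counterValue < lowEnd)
                    userCount = Math.Min(maxUsers, userCount + maxIncrement);
                else if (counterValue > highEnd)
                    userCount = Math.Max(minUsers, userCount - maxDecrement);
                // Otherwise the counter is inside the goal range: hold steady.

                Console.WriteLine("CPU {0,3}% -> {1} users", counterValue, userCount);
            }
        }
    }

If Lower Values Imply Higher Resource Utilization were set to True, the two comparisons above would be reversed.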

Storing load test run data

A load test run can collect a large amount of data, including performance counter information from one or more machines, details about which tests passed or failed, and the durations of various actions. You may choose to store this information in a SQL Server or SQL Express instance or an XML file.

To select a results store, you need to modify the load test's run settings. Refer back to Figure 15-20. The local run settings have been selected in the Load Test Editor. In the Properties window is a setting called Storage Type. The valid settings for this are None, Database, and XML.

In order to use the Database option, you must first configure an instance of SQL Server or SQL Express using a database creation script. The script, LoadTestResultsRepository.sql, is found under the \Common7\IDE directory of your Visual Studio installation directory. You may run this script any way you choose, such as with Query Manager or SQL Server 2005's SQLCMD utility.
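
For example, from a command prompt you might run something along these lines against a local SQL Express instance; the server name and installation path shown here are assumptions, so adjust them to match your machine:

    SQLCMD -S .\SQLEXPRESS -E -i "C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\LoadTestResultsRepository.sql"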

Once created, the new LoadTest database can be used to store data from load tests running on the local machine or even remote machines. Running remote load tests is described later in the "Distributed Load Tests" section of this chapter.

Executing load tests

There are several ways to execute a load test. You can use various windows within Visual Studio Team System (the Load Test Editor, Test Manager, and Test View), or you can use command-line tools. For details on using the command line, see "Command-line Test Execution."
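
Before looking at the options within the IDE, here is a quick preview of the command-line route: a load test can typically be run from a Visual Studio command prompt by passing the .loadtest file to MSTest.exe, along these lines (the file name here is hypothetical):

    mstest /testcontainer:LoadTest1.loadtest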

In the Load Test Editor, you can click the Run button at the upper-left corner or right-click on any load test setting node and select Run Load Test.

From the Test Manager and Test View windows, check or select one or more load tests and click the Run Tests button. In Test View, you may also right-click on a test and select Run Selection. See Chapter 13 for more information on the Test Manager and Test View windows.

Viewing and interpreting load test results

If you ran your test from either Test Manager or Test View, you will see the status of your test in the Test Results window, as shown in Figure 15-22.

Figure 15-22

Once the status of your test is In Progress or Complete, you can double-click to see the Load Test Monitor window, shown in Figure 15-23. You may also right-click and choose View Test Results Details. When a load test is run from the Load Test Editor, the Test Results window is bypassed, immediately displaying the Load Test Monitor.

Figure 15-23

You can observe the progress of your test and then continue to use the same window to review results after the test has completed.

At the top of the screen, just under the file tabs, is a toolbar with several view options. First, you can select between Graphs and Tables view. We describe each of these views in a moment. The Show Counters Panel and Show Summary Panel buttons are used to toggle those panels on and off.

Graphs View

The most obvious feature of the Load Test Monitor is the graph, which is selected by default. This plots a number of selected performance counters over the duration of the test.

The tree in the left-hand pane, called the Counters pane, shows a list of all available performance counters, grouped into a variety of sets (for example, by machine). Expand the nodes to reveal the tracked performance counters. Hover over a counter to see a plot of its values in the graph. Double-click a counter to add it to the graph and legend.

Note

Selecting performance counters and knowing what they represent can require experience. With so many available counters, it can be a daunting task to know when your application isn't performing at its best. Fortunately, Microsoft has applied its own practices and recommendations to predefine threshold values for each performance counter, helping to indicate when something might be wrong.

As the load test runs, the graph is updated at each snapshot interval. In addition, you may notice that some of the nodes in the Counters pane are marked with a red error or yellow warning icon. This indicates that the value of a performance counter has exceeded a predefined threshold and should be reviewed. For example, Figure 15-23 indicates threshold violations for the % Processor Time counter. In fact, you can see small warning icons in the graph itself at the points where the violations occurred. We'll use the Thresholds view to review these in a moment.

The list at the bottom of the screen, called the legend, shows details of the selected counters. Those that are checked appear in the graph with the indicated color. If you select a counter, it will be displayed with a bold line.

The right-hand table, called the Plot Points pane, shows the value of the currently selected counter at each sampling interval during the test. Remember, you can adjust this interval using the Load Test Editor by changing the Sample Rate property under Run Settings.

Finally, the bottom-left view, or Summary pane, shows the overall results of the load test.

Tables View

When you click the Tables button, the main panel of the load test results window changes to show a drop-down list with a table. Use the drop-down list to view each of the available tables for the load test run. Each of these tables is described in the following sections.

Tests table

This table goes beyond the detail of the Summary pane, listing all tests in your load test and providing summary statistics for each. Tests are listed by name and containing scenario for easy identification. You will see the total count of runs, pass/fail details, as well as tests per second and seconds per test metrics.

Pages table

The Pages table shows all of the pages accessed during the load test. Included with each page are details of the containing scenario and web test along with performance metrics. The Total column shows the number of times that page was rendered during the test. The Page Time column reflects the average response time for each page. Page Time Goal and % Meeting Goal are used when a target response time was specified for that page. Finally, the Last Page Time shows the response time from the most recent request to that page.

Transactions table

A transaction is a defined subset of steps in a web test that are tracked together. For example, you can wrap the requests from the start to the end of your checkout process in a transaction named Checkout for easy tracking. For more details, see the section titled "Adding Transactions" under "Editing a web test" earlier in this chapter.
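
For reference, this is roughly what a transaction looks like in a coded web test; the class name and URLs below are hypothetical, and a declarative web test defines the same thing through the Web Test Editor, as described in the section mentioned above:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    public class CheckoutWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Everything issued between BeginTransaction and EndTransaction
            // is timed and reported as the "Checkout" transaction.
            this.BeginTransaction("Checkout");

            yield return new WebTestRequest("http://localhost/SampleSite/Cart.aspx");
            yield return new WebTestRequest("http://localhost/SampleSite/PlaceOrder.aspx");

            this.EndTransaction("Checkout");
        }
    }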

In this table, you will see any defined transactions listed, along with the names of the containing scenario and web test. Details include the count, response time, and elapsed time for each transaction.

SQL Trace table

The SQL Trace table will only be enabled if you previously configured SQL Tracing for your load test. Details for doing that can be found in the "SQL Tracing" subsection of "Editing load tests" earlier in this chapter.

This table shows the slowest SQL operations that occurred on the machine specified in your SQL Tracing settings. Note that only those operations that take longer than the Minimum Duration of Traced SQL Operations will appear.

By default, the operations are sorted with the slowest at the top of the list. You can view many details for each operation, including duration, start and end time, CPU, login name, and others.

Thresholds table

The top of Figure 15-23 indicates that there are "6 threshold violations." You may either click on that text or select the Thresholds table to see the details. You will see a list similar to the one shown in Figure 15-24.

Figure 15-24

Each violation is listed according to the sampling time at which it occurred. You can see which counter on which machine violated its threshold, along with the violating and threshold values.

Errors table

As with threshold violations, if your test encountered any errors, you will see a message such as "4 errors." Click on this text or the Errors table button to see a summary list of the errors. This will include the error type, such as Total or Exception, and the error's subtype. SubType will contain the specific Exception type encountered — for example, FileNotFoundException. Also shown are the count of each particular error and the message returned from the last occurrence of that error.

If you configured a database to store the load test results data, you can right-click on any entry and choose Errors. This will show the Load Test Errors window, as shown in Figure 15-25.

Figure 15-25

This table displays each instance of the error, including stack trace and details when available, ordered by the time at which it occurred. Other information, such as the containing test, scenario, and web request, is displayed when available. Hover over any error to see its full text.


