Performance Testing Your Application

As the movement toward Web services drives an ever-increasing amount of traffic, the key challenge that emerges is ensuring optimal application performance. This book will offer solutions to meet this challenge by presenting an in-depth approach to determining the following key performance-related factors:

  • Calculating maximum scalability

  • Quantifying average client response times under load

  • Identifying bottlenecks that prevent performance gains

  • Addressing these bottlenecks to tune for optimum performance

Additionally, this book will present an alternative approach for estimating Web application capacity using a methodology developed by Microsoft, dubbed Transaction Cost Analysis (TCA). The TCA methodology assists with capacity planning for Web applications by associating server resource costs, such as CPU, with typical user operations. In this manner, one can estimate and prepare for site capacity needs prior to the large traffic spikes that can occur as a direct result of major marketing or news events.
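To make the TCA idea concrete, the following is a minimal sketch of the arithmetic in Python. Every number in it (the CPU utilization, clock speeds, operation mix, and 75 percent CPU budget) is an illustrative assumption rather than a measurement from this book, and the helper names are invented for the example; the methodology itself is presented later in the book.

    # Illustrative Transaction Cost Analysis (TCA) arithmetic. All values are
    # made-up examples; real numbers come from measuring each user operation
    # in isolation under a steady load.

    def cost_per_operation_mcycles(cpu_utilization, num_cpus, cpu_mhz, ops_per_sec):
        """CPU cost of one operation, in megacycles per operation."""
        return (cpu_utilization * num_cpus * cpu_mhz) / ops_per_sec

    # Example: a "search" operation measured at 70% CPU on a dual 1000 MHz
    # server while it sustained 40 searches per second.
    search_cost = cost_per_operation_mcycles(0.70, 2, 1000, 40)   # 35 Mcycles/op

    # Weight each operation by how often a typical user performs it (ops/sec),
    # then estimate how many concurrent users one server can sustain.
    profile = {                     # operation: (cost in Mcycles, ops per user per second)
        "browse":   (10.0, 0.050),
        "search":   (search_cost, 0.010),
        "checkout": (80.0, 0.002),
    }
    cost_per_user = sum(cost * rate for cost, rate in profile.values())

    available = 0.75 * 2 * 1000     # budget 75% of a dual 1000 MHz server, in Mcycles/sec
    print(f"Estimated capacity: {available / cost_per_user:.0f} concurrent users per server")

The point of the exercise is that once the per-operation costs are known, capacity questions reduce to simple arithmetic against a projected usage profile.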

At the 10,000-foot level, the performance testing life cycle presented in this book consists of:

  • Planning Performance Analysis

  • Creating Effective Stress Scripts

  • Executing Stress Tests

  • Analyzing performance data to identify and address performance bottlenecks

Each of these steps will be discussed in the chapters that follow. To reiterate, performance analysis requires an extremely in-depth approach, coupled with experience and knowledge regarding the technologies utilized.

Figure 1-2 shows the performance analysis methodology this book will discuss.

Figure 1-2. Our performance analysis methodology cycle

Planning Performance Analysis

This step involves gathering key preliminary information that will structure and focus the testing approach. Data collected in the planning phase should, at a minimum, provide two things: 1) the details necessary to duplicate the production application environment as closely as possible, and 2) an understanding of how the application is used, including indicators of critical performance issues. Useful performance information sources can include marketing forecasts, production IIS logs, production performance logs, and functional specifications for the application. The quality of the performance data collected in advance of the actual performance testing is critical. It will help determine the requirements for the test environment and will be used in all phases of the analysis, from staging the environment to interpreting performance test results. We present a detailed approach to planning in Chapter 2.
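As one concrete illustration of mining those sources, a short script can tally which pages in a production IIS log receive the most traffic; those counts feed directly into the scenario weights used when scripting. This is a hedged sketch: the log file name is a placeholder, and W3C extended logs contain only the fields selected in the IIS logging configuration, so the cs-uri-stem field must actually be enabled.

    # Hedged sketch: count requests per page in a W3C extended IIS log.
    # "ex031015.log" is a placeholder file name; check the #Fields: directive
    # in your own logs to see which columns are present.
    from collections import Counter

    def page_counts(log_path):
        fields, counts = [], Counter()
        with open(log_path) as log:
            for line in log:
                if line.startswith("#Fields:"):
                    fields = line.split()[1:]    # e.g. date time cs-method cs-uri-stem ...
                elif line.startswith("#") or not line.strip():
                    continue                     # skip other directives and blank lines
                elif fields:
                    row = dict(zip(fields, line.split()))
                    counts[row.get("cs-uri-stem", "?")] += 1
        return counts

    for page, hits in page_counts("ex031015.log").most_common(10):
        print(f"{hits:8d}  {page}")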

Creating Effective Stress Scripts

After gathering the required information and preparing your test environment, the next step is to create stress scripts that accurately simulate the expected production traffic. This is most effectively accomplished by combining historical data from the production site with projections from the marketing or business analysts. Creating bulletproof stress scripts using Microsoft's Application Center Test (ACT) tool will be detailed in Chapter 3.
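The scripts in this book are ACT scripts, and Chapter 3 walks through building them. Purely as a language-neutral illustration of what such a script encodes, the sketch below replays a weighted page mix with randomized think time; the URLs, weights, and delays are invented, and this Python loop is a conceptual stand-in, not the ACT object model.

    # Conceptual stand-in for a stress script: replay a weighted request mix with
    # think time. Real scripts in this book use ACT (Chapter 3); the URLs,
    # weights, and delays below are invented for illustration.
    import random, time, urllib.request

    SCENARIO = [    # (relative weight taken from the production usage profile, URL)
        (50, "http://testserver/default.aspx"),
        (30, "http://testserver/search.aspx?q=widgets"),
        (20, "http://testserver/cart/checkout.aspx"),
    ]

    def one_virtual_user(iterations=100):
        weights = [weight for weight, _ in SCENARIO]
        for _ in range(iterations):
            _, url = random.choices(SCENARIO, weights=weights)[0]
            started = time.perf_counter()
            with urllib.request.urlopen(url) as response:
                response.read()                      # drain the body, as a browser would
            print(f"{time.perf_counter() - started:6.3f}s  {url}")
            time.sleep(random.uniform(1, 5))         # simulated user think time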

Executing Stress Tests

After bulletproof scripts have been created to simulate peak client load, stress testing begins. At this point, it is critical to have verified script functionality to ensure the scripts simulate production site traffic as closely as possible, because the quality of the stress test is directly tied to the quality of the scripts. In addition, using a methodology dubbed smoke testing, the optimal load should be identified prior to running the actual stress tests, which generate the performance data that will aid in pinpointing the bottlenecks. Details for executing stress tests, including key focal elements while smoke testing, are presented in Chapter 3.
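Conceptually, smoke testing means stepping the simulated load upward and watching whether throughput keeps improving; the load level just before throughput flattens or drops is a sensible setting for the full runs. The sketch below captures only that bookkeeping: run_test is a placeholder for launching a short test at a given user level and returning the requests per second it achieved, and the step sizes and threshold are arbitrary assumptions.

    # Hedged sketch of the smoke-testing idea: raise the load in steps and stop
    # once throughput no longer improves meaningfully. run_test(users) is a
    # placeholder that runs a short test and returns observed requests/sec.
    def find_optimal_load(run_test, start=5, step=5, max_users=200, min_gain=0.05):
        best_users, best_rps = start, run_test(start)
        for users in range(start + step, max_users + 1, step):
            rps = run_test(users)
            if rps < best_rps * (1 + min_gain):      # throughput flattened or dropped
                break
            best_users, best_rps = users, rps
        return best_users, best_rps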

Analyzing Performance Results

After the stress tests have been run and the data generated, the analysis phase begins. The first concern is to verify that the stress test ran through the simulation successfully, because the quality of the data is only as good as the quality of the test. The analysis phase is the most technically in-depth step in an effective performance analysis methodology, so starting with high-quality data is critical to deriving high-quality results and conclusions. The majority of the time budgeted for performance analysis should be concentrated in this phase. It is for this reason that three chapters of this book are dedicated to analysis concepts. Chapter 6 offers an in-depth approach to analyzing the Web tier, Chapter 7 to profiling managed code, and Chapter 8 to identifying bottlenecks on the data or SQL tier. Experience has shown that the SQL tier can be a common place for bottlenecks if the code at this tier is not designed and tuned properly. Bottlenecks on the SQL tier are pivotal because it is more difficult to scale out databases through clustering than it is to scale out the Web tier. Of course, there are entire books written on these technologies alone. The testing methodology in this book will focus on efficiently identifying performance bottlenecks and strategically offering tuning approaches aimed at achieving better performance.
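A quick first pass over the collected counter data often points to the tier worth digging into. As a hedged sketch (the file name is a placeholder, and counter column names depend on how the Performance Monitor log was configured and exported to CSV), the script below averages the processor counters from each monitored machine so the busiest tier stands out.

    # Hedged sketch: average the "% Processor Time" counters in a Performance
    # Monitor log exported to CSV. "stress_run1.csv" is a placeholder; real
    # column headers look like "\\WEBSRV01\Processor(_Total)\% Processor Time".
    import csv
    from statistics import mean

    def average_cpu_counters(csv_path):
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))
        averages = {}
        for column in rows[0]:
            if "% Processor Time" not in column:
                continue
            values = [float(r[column]) for r in rows if r[column].strip()]
            if values:
                averages[column] = mean(values)
        return averages

    for counter, avg in sorted(average_cpu_counters("stress_run1.csv").items(),
                               key=lambda item: item[1], reverse=True):
        print(f"{avg:6.1f}%  {counter}")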

Identifying Performance Bottlenecks

Bottlenecks that can affect end-user response times include application and server throughput, end-to-end Internet connection speed, and Internet congestion. Server throughput (the rate at which the server can process client requests) should not be a problem, given that high-performance hardware is a readily available commodity that is relatively inexpensive when compared with site development costs. Like server hardware, network bandwidth is a readily available commodity, and with adequate network saturation monitoring, additional capacity can easily be purchased ahead of growth in traffic. In the same sense, the user's connection to the Internet is a commodity; however, average connection speeds remain very low for the majority of the user community and will stay that way until broadband connections become more affordable.

Despite the fact that bandwidth, servers, and Internet connectivity are commodities, it only makes good business sense to apply these commodities efficiently and effectively after improving application code performance, not as a prelude to or substitute for performance testing and tuning. Only after the application has been fully tuned to utilize these existing commodity resources in the most efficient fashion does it make sense to invest more money in hardware and bandwidth.

Given this criterion of optimizing before expanding resource usage, where in the application can the greatest performance gains be achieved using proper performance testing and tuning techniques? The greatest impact is derived from improving the performance of the application code itself. Why? The initial costs of the development team, managers, and testers are significant enough to mandate the most efficient use of this resource time. Creating optimal, efficient code is most effectively achieved by budgeting time and resources for performance testing and tuning during the traditional development life cycle, rather than by upgrading hardware and software after release to react to production problems. Building the code correctly the first time can save hard dollars in support costs and soft dollars in user acceptance given an unexpected spike in traffic. In the .NET world, the focus centers on the ability of an application to quickly process a client request and return the results while simultaneously processing millions of other requests. This is the key area where attention to application performance can make the most impact on end-user response times, regardless of the hardware platform and available bandwidth. With adequate performance testing techniques, application throughput can be accurately predicted, allowing site administrators to prepare for worst-case scenarios.

Verifying Performance Tuning Results

The final step in a performance/stress analysis is to clearly communicate your stress results to the application stakeholders. This should be done in a manner that effectively enables them to understand and improve performance based on the information presented in the analysis. Proof-of-concept testing (re-running the stress tests after analysis and tuning and comparing the performance results side by side) is the most effective way to communicate performance improvement results. It is not sufficient to speculate that your tuning efforts have improved performance; it must be objectively and conclusively proven! Have response times decreased and scalability increased? Have the server resources required to serve the same level of client requests decreased significantly? These are the questions that a thorough performance analysis will address.
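A side-by-side listing of the key metrics before and after tuning makes the case plainly. The sketch below shows that comparison with invented placeholder numbers; the real values come from the baseline and post-tuning stress runs.

    # Hedged sketch of a proof-of-concept comparison; every number is an invented
    # placeholder for metrics from the baseline and post-tuning stress runs.
    baseline = {"avg response time (s)": 2.40, "requests/sec": 180.0, "web-tier CPU %": 92.0}
    tuned    = {"avg response time (s)": 0.95, "requests/sec": 310.0, "web-tier CPU %": 71.0}

    for metric in baseline:
        before, after = baseline[metric], tuned[metric]
        change = (after - before) / before * 100
        print(f"{metric:22s} {before:8.2f} -> {after:8.2f}  ({change:+.1f}%)")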


