Performance Testing


Performance testing is a critical step in the development of any large-scale system. As applications are built, component upon component, performance problems abound because these components can interact in unintended ways. This might be surprising given that you might have already functionally tested the system. Unfortunately, simple functional testing will not give you a complete picture of the interactions of the components in your application. Only by performance testing your application can you complete this picture. The additional stress that performance testing places on your application's infrastructure will expose most of the component interaction problems that simple functional testing would miss.

You can easily generate performance numbers for individual operations, and you can also look at the performance of individual methods in your application. You'll get raw performance numbers that you can use to optimize parts of an application. This information is valuable, but it doesn't tell the full story. It will not tell you how these components or methods interact when simultaneous calls are made. This is where stress testing comes into play.

A stress test simulates the activity of multiple simultaneous requests on a system. For example, if you want to test a Web application to determine the peak requests per second, you can do this only by simulating multiple clients all hitting the application at the same time. This will produce stress on your application and give you an indicator of the scalability of the system.
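To make the idea concrete, here is a minimal sketch in Visual Basic .NET of the kind of work a stress tool performs: it starts a number of worker threads, each of which issues a series of HTTP requests against the application and records how many complete. The URL, thread count, and request count here are placeholder values, and real tools (discussed below) do far more, such as ramping load and measuring response times.

Imports System
Imports System.Net
Imports System.Threading

Module SimpleStress
    ' Placeholder values -- point these at your own application.
    Private Const TargetUrl As String = "http://localhost/myapp/default.aspx"
    Private Const ClientCount As Integer = 10
    Private Const RequestsPerClient As Integer = 100
    Private completedRequests As Integer = 0

    Sub Main()
        ' By default System.Net allows only two concurrent connections per
        ' host; raise the limit so all simulated clients run in parallel.
        ServicePointManager.DefaultConnectionLimit = ClientCount

        Dim threads(ClientCount - 1) As Thread
        Dim i As Integer

        ' Start one worker thread per simulated client.
        For i = 0 To ClientCount - 1
            threads(i) = New Thread(AddressOf ClientLoop)
            threads(i).Start()
        Next

        ' Wait for all simulated clients to finish.
        For i = 0 To ClientCount - 1
            threads(i).Join()
        Next

        Console.WriteLine("Completed requests: " & completedRequests)
    End Sub

    Private Sub ClientLoop()
        Dim i As Integer
        For i = 1 To RequestsPerClient
            Try
                Dim request As WebRequest = WebRequest.Create(TargetUrl)
                Dim response As WebResponse = request.GetResponse()
                response.Close()
                Interlocked.Increment(completedRequests)
            Catch ex As WebException
                ' A failed request under load is itself useful data.
            End Try
        Next
    End Sub
End Module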

Tools of the Trade

Many tools are available for testing and analyzing an application, including built-in tools and third-party development tools. I'll introduce a number of useful and often critical tools for performance testing. Most are available on any Windows platform, but I'll occasionally mention third-party tools as well. I'll cover the following major sets of tools:

  • Windows Task Manager

  • Performance monitor

  • Debuggers

  • Profilers

  • Performance and stress tools

Windows Task Manager

Windows Task Manager is a sort of one-stop shop for information about the health of your machine. It is also a great first-tier performance analysis tool. Windows Task Manager comes with every copy of Windows, it's always a click away, it doesn't interfere greatly with the system, and it provides a wealth of useful information in a compact and simple way.

You can bring up Windows Task Manager in three ways:

  • Press Ctrl+Shift+Esc

  • Right-click on the taskbar and choose Task Manager

  • Press Ctrl+Alt+Delete and click the Task Manager button

Figure 12-1 shows Windows Task Manager with the Performance tab selected. Quite a lot of information is available, including CPU usage and memory statistics. The green lines on the CPU graph indicate the amount of processor time actively being used; the red lines tell you how much of the processor's time is spent in the kernel (usually doing system tasks or I/O operations). If your system has more than one processor, you'll see a separate graph for each processor, which is handy for figuring out how well an application behaves on a multiprocessor system.

Figure 12-1. The Performance tab of Windows Task Manager.


The memory indicators can also help you understand how an application is using its memory. The graph shows a running history of the system's memory usage (admittedly for a narrow window of time, but it's still quite handy).

Another useful feature comes in the form of the Networking tab, which is shown in Figure 12-2. This tab was added to Windows Task Manager in Windows XP. It offers a simple and concise view of how much of your network bandwidth is currently being consumed.

Figure 12-2. The Networking tab of Windows Task Manager.


I'll go into further detail about Windows Task Manager information later in the chapter.

Performance Monitor

Performance monitor is one of the most valuable monitoring tools available. The Performance monitor console, shown in Figure 12-3, allows you to view and log performance counters. You can use these counters to monitor the health of any machine on your network: you get information about the operating system, services, applications, network usage, and so forth.

The Performance monitor console is easily accessible. You can open it in three ways:

  • Type perfmon at the command line or in the Run dialog box.

  • From the Start menu, choose All Programs, Administrative Tools, and then click on the Performance icon.

  • Open a saved Performance monitor MSC file.

Figure 12-3. The Performance monitor.


The Performance monitor lets you do the following:

  • Monitor many performance counters, both on the local machine and on other machines on your network, all within the same console.

  • Control the sampling rate for the counters you're monitoring.

  • Adjust the display scale of any of the counters to make them easier to read.

  • Set up performance counter logs. You can then sample a large series of data to disk over a period of time and analyze the results at your convenience. This is a more powerful option than the live display, which shows only the most recent 100 samples.

  • Designate performance counter alerts. This allows you to monitor performance counters and cause some action to occur if any of your criteria are exceeded.

Every system comes with a number of performance counters already defined. Applications and services can also define their own counters, enabling anyone (with the appropriate permissions) to monitor various performance metrics of a system. Figure 12-4 shows the Add Counters dialog box from Performance monitor. You can select performance counters and also designate the name of the machine you want to monitor.

Figure 12-4. The Add Counters dialog box.


Popular performance counters include those for processor usage, memory, connections, throughput, response time, SQL Server, and ASP.NET. Appendix C and Appendix D describe many of the common built-in performance counters that you should be aware of.
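You can also read counters programmatically through the System.Diagnostics.PerformanceCounter class, which is handy when you want a test harness to log counter values alongside its own measurements. Here is a minimal sketch, assuming the standard Processor and Memory counter categories are present on the machine (the sampling loop length and interval are arbitrary):

Imports System
Imports System.Diagnostics
Imports System.Threading

Module CounterSample
    Sub Main()
        ' "% Processor Time" on the _Total instance of the Processor object.
        Dim cpu As New PerformanceCounter("Processor", "% Processor Time", "_Total")
        ' Available physical memory, in megabytes (no instance name needed).
        Dim mem As New PerformanceCounter("Memory", "Available MBytes")

        Dim i As Integer
        For i = 1 To 10
            ' The first NextValue call on a rate counter returns 0;
            ' subsequent calls return the value over the sample interval.
            Console.WriteLine("CPU: {0,6:F2}%   Available: {1} MB", _
                cpu.NextValue(), mem.NextValue())
            Thread.Sleep(1000)
        Next
    End Sub
End Module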

Debuggers

Debuggers are designed to find functional problems in an application, but they can also serve other purposes. When you consider a debugger for performance testing, your first requirement should be that it be lightweight. My preference is a command-line debugger such as cordbg because it requires little overhead and tends not to interfere too severely with the running application. Attaching a debugger will change the performance characteristics of your application, usually slowing it down. Using a debugger that requires as few resources as possible (both memory and processor) will minimize the effect the debugger has on the system under test.

It might seem counterintuitive, but attaching a debugger can occasionally result in a performance increase. How can this be? The answer is pretty simple: thread synchronization and shared resources. You might have a race condition in your application, and the timing changes introduced by the debugger might give the system enough time to avoid the problem. For example, say your WaitHandle.WaitOne calls specify too long a timeout; shortening (or lengthening) the timeout period might likewise shift the timing enough for your application to avoid the race condition.
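For illustration, here is a minimal sketch of such a timeout-based wait, assuming a hypothetical worker thread that signals a ManualResetEvent when it finishes. Adjusting the timeout shifts the timing of everything that follows the wait, which is the same kind of shift an attached debugger can introduce:

Imports System
Imports System.Threading

Module WaitTimingSample
    ' Hypothetical shared event that the worker signals when done.
    Private signal As New ManualResetEvent(False)

    Sub Main()
        Dim worker As New Thread(AddressOf DoWork)
        worker.Start()

        ' Wait up to 5 seconds instead of blocking indefinitely.
        If signal.WaitOne(5000, False) Then
            Console.WriteLine("Worker signaled in time.")
        Else
            Console.WriteLine("Timed out; continuing without the worker.")
        End If
    End Sub

    Private Sub DoWork()
        Thread.Sleep(1000)   ' simulate work
        signal.Set()
    End Sub
End Module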

Note

Performance testing and tuning is an art. Strange things can happen when you apply familiar tools and techniques to performance analysis. Everything that happens is a potential clue, even if the result is counterintuitive.


Debuggers can tell you a lot of other things. If your application is throwing exceptions under stress, this can indicate that something is wrong. You can then catch one of the exceptions and discover where it's coming from in your code. In some situations, however, exceptions are perfectly normal behavior. For example, a normal call to the Response.Redirect method in ASP.NET generates a ThreadAbortException. This turns out to be the most efficient way to end the execution of an ASP.NET page, so it should not be considered an error. Unfortunately, this can be confusing to someone who is not used to seeing it.
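A minimal sketch of what this looks like in a page's code-behind (the page and method names are hypothetical) shows why the exception should not count as a failure in your stress results:

Imports System.Threading
Imports System.Web.UI

' Code-behind fragment for a hypothetical ASP.NET page.
Public Class SamplePage
    Inherits Page

    Private Sub RedirectToLogin()
        Try
            Response.Redirect("login.aspx")
        Catch ex As ThreadAbortException
            ' Expected: Redirect aborts the request thread to halt page
            ' processing. The runtime re-raises this exception when the
            ' Catch block ends, so there is nothing to clean up here.
        End Try
    End Sub

    ' Alternatively, pass False for endResponse to let the page finish
    ' normally and avoid the exception altogether.
    Private Sub RedirectToLoginQuietly()
        Response.Redirect("login.aspx", False)
    End Sub
End Class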

A good debugger can also provide information about an application just by breaking into the code. Say your application is periodically hanging for 20 seconds and then resuming processing of requests. Using a debugger, you can break into the application during this down period and generate a list of the threads and their stack traces. This might provide some insight into where the application is getting stuck.

Profilers

Profilers show you, in depth, where your code is spending its time. They locate the slowest or most time-consuming parts of your system and show you how often methods are called. Profiling tools are not a panacea, however; they will not tell you everything you need to know. When I start with an application that is not performing well, I typically go through the list of usual suspects (exceptions and database connection leaks) before I even think of bringing up the profiler.

You must also be deliberate about how you use your profiler. Depending on the features it supports, you might need to verify that the numbers you're getting reflect what you think they do. I've seen situations in which application startup time is so significant that it drowns out all other information, skewing the profiling results. If you're aware of this possibility, however, you can adjust your profiling strategy to omit bad or unhelpful data.

You can take advantage of the built-in .NET profiling features, but this is not for the faint-hearted. It requires a good knowledge of COM, stacks, and performance analysis. You must create a COM object that implements the ICorProfilerCallback COM interface, and then you must register the object with the common language runtime (CLR). If you're interested in pursuing this further, you can look at the two profiling samples included with the .NET Framework SDK. They can be found under the Program Files\Microsoft Visual Studio .NET\FrameworkSDK\Tool Developers Guide\Samples\profiler directory.

Note

If you're not in the mood to build your own profiling tool (and you're not alone on this, trust me), you can use a third-party tool to do the job. Look in stores that cater to developers, or search online for companies that develop .NET profiling tools.


Performance and Stress Tools

You have a lot of options when it comes to third-party performance and stress tools. You might be familiar with names such as Rational Software and other companies that provide testing suites. These tools vary dramatically in price, so you'll need to find the best fit for your budget.

These tools generally fall into two categories: user interface testing tools and Web testing tools. Microsoft does not provide a user interface testing tool. One popular choice is the Rational Visual Test tool, which also supports .NET Windows Forms.

Plenty of Web testing tools are available. Microsoft provides a free Web testing tool called the Web Application Stress Tool (WAS). You can use it to generate both performance and stress tests against a Web site. WAS suffers from some limitations, but these have been addressed in the Web testing tool provided with Visual Studio .NET Enterprise Edition, which is called Application Center Test (ACT). ACT provides a more feature-laden architecture that allows you to customize your tests and develop sophisticated testing that includes complex Web Forms and XML Web Services. Appendix B provides an overview of the ACT application.

Note

WAS is available through the Windows 2000 Internet Information Services Resource Kit. Alternatively, you can download it from the MSDN Web site at http://webtool.rte.microsoft.com.


Performance Test Planning

Planning your performance testing is extremely important. Going through a formal process helps you document the performance requirements for your application and sets general expectations. I suggest that you start with the following three steps:

  1. Draw up a list of basic scenarios. These scenarios should be specific user operations that can be performed. (They might more appropriately be considered usage scenarios if your application does not have a user interface.) Show this list to the various people involved, and try to reach agreement on the nature of the scenarios. You can also get input on what other scenarios people might deem important. Organize this list based on the importance of these scenarios to the success of the product.

  2. Build a testing framework to test and simulate various user loads on your application. You need a tool, custom or otherwise, that can give you accurate and consistent performance results. Ideally, you'll be able to use your simulation tool to implement your user scenarios so you can ultimately provide a constant level of stress on your application that accurately approximates real-world traffic.

  3. Using your testing framework, get performance numbers for a single-user scenario as a benchmark. If your performance is acceptable for a single user, you'll have a reference to determine where your worst performance degradation occurs. These benchmark numbers will also allow you to compare different builds of an application. As the development team does more and more work, having a baseline to compare against will enable you to identify problems quickly and to validate fixes or optimizations in new builds. (A minimal sketch of such a benchmark harness follows this list.)
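Here is that sketch: a minimal single-user benchmark harness, assuming a hypothetical RunScenario routine that stands in for one of your usage scenarios. It warms up once so that just-in-time compilation doesn't skew the numbers, then reports an average over a fixed number of iterations:

Imports System

Module Benchmark
    ' Hypothetical stand-in for one of your usage scenarios.
    Sub RunScenario()
        ' ... perform the single-user operation under test ...
    End Sub

    Sub Main()
        Const Iterations As Integer = 100
        Dim i As Integer

        ' Warm up once so JIT compilation doesn't skew the numbers.
        RunScenario()

        Dim startTicks As Long = DateTime.Now.Ticks
        For i = 1 To Iterations
            RunScenario()
        Next
        Dim elapsedMs As Double = _
            New TimeSpan(DateTime.Now.Ticks - startTicks).TotalMilliseconds

        Console.WriteLine("Average: {0:F1} ms per operation", _
            elapsedMs / Iterations)
    End Sub
End Module

Run the same harness against each new build, and any regression from the baseline stands out immediately.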

You reduce the problem of testing a large system to a manageable size by breaking down all of the possible user actions into a set of well-defined and well-understood usage scenarios. This is a simple way to prioritize performance or scalability problems. If you simply walk through your classes and interfaces looking for performance issues, you're unlikely to improve the overall user experience. By focusing on specific user scenarios, you can directly affect the perceived performance of your system.

Furthermore, once you define these scenarios, you can break them down into suboperations. This process requires you to think through all of the parts of the application that are affected by your user scenarios. Once you have the set of low-level operations, you will have narrowed down where any performance problems can lie. You can use this as a checklist for your investigations.

Note

When you break down your scenarios, you'll typically identify common problem areas. If you're seeing similar performance degradation in these scenarios, common components are the likely source of the problem. On the other hand, if you have two scenarios with some common components but one has a performance problem and the other doesn't, it is quite likely that the problem does not reside in the common components.


Deciding on Acceptable Results

Generating performance numbers is all well and good, but you need a set of performance and scalability goals to compare them against. This is easier said than done. Anyone can tell you that more is always better. The more requests or users that a single system can handle, the lower the cost to the potential customer.

You first have to know what is unacceptable before you can tackle the problem. Consider the development of a Web-based online store. If the site must be able to handle, say, 1000 concurrent users and your application will support only 100 per server, you must put together a server farm to handle the load. Because most server farms are designed to peak at about 80 percent usage (you always need a safety margin to handle the freak loads that happen from time to time), you'll need around 13 servers, an expensive proposition when a single server can run into the tens of thousands of dollars. If, on the other hand, you increase the application's capacity by 20 percent, to 120 users per server, the company will need to buy only 11 machines (again with each server running at 80 percent of capacity).
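The arithmetic behind those server counts is simple enough to sanity-check in a few lines (the figures here are the hypothetical ones from the example above, rounded up because you can't buy a fraction of a server):

Imports System

Module CapacityMath
    Sub Main()
        Dim peakUsers As Integer = 1000      ' required concurrent users
        Dim usersPerServer As Integer = 100  ' measured per-server capacity
        Dim safetyFactor As Double = 0.8     ' run servers at 80% of capacity

        ' ceil(1000 / (100 * 0.8)) = 13 servers
        Console.WriteLine(Math.Ceiling(peakUsers / (usersPerServer * safetyFactor)))

        ' With a 20 percent capacity improvement: ceil(1000 / (120 * 0.8)) = 11
        Console.WriteLine(Math.Ceiling(peakUsers / (120 * safetyFactor)))
    End Sub
End Module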

The value of supporting a higher user load is obvious, but where do you set the bar? I think it's reasonable for your design team to set some goals ahead of time: goals that are reasonable, conservative, and subject to modification. You ultimately cannot know what is possible until you have a working version of the application available to test. Then you can get real performance numbers and determine where your application needs to be. Granted, if you're lucky enough to have existing competitors or previous versions of your product, you can produce a reasonable set of performance goals. In all other cases, you'll have to wing it.

If your application is not meeting your performance objectives, you can spend the time to go through the necessary performance optimization steps, but you might reach a point at which further optimization is impractical or infeasible. It is not unreasonable to apply the 80-20 rule (also known as the Pareto principle: 20 percent of the effort produces 80 percent of the results) to performance tuning. Tune performance for your critical scenarios first, and then draw up a further list of items in order of priority. Performance testing will not only give you a look at your application's performance (response times), but it will also provide insight into how well your application will scale as the user load increases.

Building a Performance Profile

I generally prefer to generate performance results under a wide variety of conditions. Figure 12-5 shows a scalability chart of simulated users versus processors. You can see how you might expect an application's performance to increase as more processors are added to the system. This theoretical result is not universally achievable, but you should attempt to characterize your application's performance as a function of processor, memory, or storage usage.

Figure 12-5. A theoretical scalability profile of a Web application.


Building such graphs helps you determine how your application is performing. An application should not generally exhibit erratic scaling behavior. You should target smooth scaling curves with a gentle performance degradation after the peak has been achieved.
