Performance Tuning


Here's a cardinal rule of development: regardless of the amount of effort you put into the design and function testing, you'll always encounter surprises when it comes to performance. This is just a fact of life. No design is without its flaws, and performance problems are often due to mistaken assumptions made quite early in the development process.

Whenever I start a performance investigation, I do a "sanity check" to gauge potential problem areas. From there, I proceed to a more detailed investigation using various tools. If that doesn't get me what I need, I can go even further, using lower-level tools to debug and profile critical areas of the application. The general idea is to start at the top and work your way into more and more detailed checks. This allows you to avoid digging into the wrong places and wasting your time and energy on red herrings.

The Sanity Check

The sanity check is all about getting a feel for the application. This usually entails running the application under stress and seeing what effect this has on the system. In my experience, Windows Task Manager is usually sufficient for this. I look at processor usage, memory behavior, and network usage. All of these simple variables can give you insight into what's happening in the application.

Processor Usage

The processor usage profile gives you an indication of how the application is executing. The profile generated by an application under stress can be very telling. I break down this profile into several categories.

Pegged processor

In this situation, the processor is completely maxed out at 100 percent usage or close to 100 percent. This might or might not be a bad thing. It depends on what your user load is. If I were running a Web site and simulating 10 simultaneous users, I'd probably say this is a bad thing. However, if the load or tasks being accomplished are known to be computationally intensive (graphics rendering or something similar), this percentage could be unremarkable.

Single processor pegged (multiprocessor machine)

In a machine with more than one processor, Windows Task Manager will display the activity of all the active processors. If your application is not fully utilizing the processors, you might see what looks like a high load that constantly switches from one processor to another. In my experience, in this situation the total processor usage will not greatly exceed what you'd expect from a single processor. (For example, on a two-processor system, the usage will not greatly exceed 50 percent, and on a four-processor system it will not greatly exceed 25 percent.) If you notice this pattern, you might want to look closely at how (if at all) your application implements threading.
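To make the remedy concrete, here is a minimal sketch (in Python rather than the chapter's Visual Basic .NET, purely for illustration; `count_primes` and the chunking scheme are my own hypothetical example) of splitting a CPU-bound job across workers so it can occupy more than one processor:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """CPU-bound work: count the primes in [lo, hi)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def count_primes_parallel(limit, workers=4):
    # Split the range into one chunk per worker so the load is
    # spread across processors instead of pinning a single core.
    step = limit // workers
    chunks = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(count_primes_parallel(1000))  # 168 primes below 1000
```

The partitioning step is the important part: a single sequential loop over the whole range would show exactly the "one busy core migrating between processors" signature described above.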

Cyclic processor

If the processor usage profile is undulating, almost like a wave, the performance tests are probably not providing a sufficiently uniform stress load. However, this might also indicate a threading synchronization or shared resource issue in the application.

Erratic processor

Abrupt transitions in the profile can indicate locked resources or shared resource issues such as a hot spot in a database. For example, SQL Server supports both page-level and row-level locking. If a single row is being updated, it is locked by the server; if multiple clients try to access that same record, you might experience a slowdown because of the locking behavior.

Stalled processor

If hits are coming in but the machine doesn't seem to be doing anything, you might have a resource constraint or something might be blocking your threads. The most common culprit is a connection leak in your SqlConnection object pool: somewhere in the application code, the Close method is not being called. You might also have a deadlock condition among your threads. If a shared resource wasn't released for some reason (perhaps a thread terminated prematurely without releasing a lock), you can easily cause your application to grind to a halt.
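The discipline that prevents the connection leak is making the close unconditional. A minimal sketch (Python with sqlite3 standing in for SqlConnection, purely for illustration; `query_rows` is a hypothetical helper):

```python
import sqlite3
from contextlib import closing

def query_rows(db_path, sql):
    # `closing` guarantees conn.close() runs even if the query
    # raises, the moral equivalent of wrapping SqlConnection.Close
    # in a try/finally. Skipping this is exactly the kind of leak
    # that quietly drains a connection pool under load.
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute(sql).fetchall()

print(query_rows(":memory:", "SELECT 1, 2"))  # [(1, 2)]
```

The point is structural: the close lives in one place that every code path, including the exceptional ones, must pass through.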

Memory Behavior

The behavior of an application's memory usage can indicate several things, but primarily resource usage: is your application releasing resources properly, is it releasing resources too soon, and is there a possibility of reusing resources instead of constantly creating and disposing of them?

Stable memory usage

A stable memory size can be good or bad. If your application's memory commitment doesn't scale with increasing activity, your application might be starved for resources. (Perhaps you have a pool or several pools whose size needs to be increased.) On the other hand, the memory usage might be stable but excessive in size (hundreds of megabytes), which might point to inefficient resource usage (unnecessary allocation of large structures).

Undulating memory usage

If your memory usage keeps moving up and down, this might suggest an inefficient use of resources. An application might be constantly creating and disposing of objects that can be shared. Creating and destroying objects is expensive. Reusing objects rather than re-creating them can be a big performance and efficiency win. Consider whether the application makes appropriate use of object pooling.
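A pool can be as small as a queue of pre-built objects. Here is a minimal sketch (Python for illustration; `ObjectPool` and its factory are hypothetical, not an API from the book):

```python
import queue

class ObjectPool:
    """Minimal fixed-size pool: reuse costly objects instead of
    re-creating them on every request."""
    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks until an object is free; raises queue.Empty on timeout.
        return self._pool.get(timeout=timeout)

    def release(self, obj):
        # Return the object for the next caller to reuse.
        self._pool.put(obj)

pool = ObjectPool(factory=lambda: bytearray(1024), size=2)
buf = pool.acquire()
buf[:5] = b"hello"
pool.release(buf)
reused = pool.acquire()
print(reused is buf)  # True: the same buffer came back, no new allocation
```

With a pool in place, the memory profile flattens out: allocation happens once at startup instead of rising and falling with every request.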

Another possibility is that the application is performing unnecessary or overly deep recursion. Recursion (a method invoking itself repeatedly) is a common programming practice that can enable simple and elegant solutions. But some problems might require too many nested calls (levels of recursion), and each call consumes stack space and time. If this is happening, you might need to go back and reimplement the offending code to use an iterative solution (a loop structure) instead.
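As a language-neutral illustration (Python here, though the chapter's context is Visual Basic .NET), the same computation written both ways shows the trade-off:

```python
def total_recursive(n):
    # One stack frame per level: a large n raises RecursionError
    # long before the arithmetic itself becomes expensive.
    if n == 0:
        return 0
    return n + total_recursive(n - 1)

def total_iterative(n):
    # Same result in constant stack space: the loop carries the
    # running total instead of the call stack.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(total_recursive(100) == total_iterative(100))  # True
print(total_iterative(1_000_000))  # fine; the recursive version would overflow the stack
```

The results are identical for small inputs; only the iterative version survives large ones, which is exactly the rewrite the paragraph above recommends.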

Ever-increasing memory usage

This trend usually means that some resources are never being released. Remember that the runtime makes no guarantees about when garbage collection will run. And if your application is allocating unmanaged memory through P/Invoke or COM Interop that is never freed, you have a leak. Generally speaking, your application's memory should not continuously increase under a constant simulated user load. After the inevitable shakeout period at startup (where you'll see memory increasing as objects are allocated and pools are filled), the application should reach a point where everything stays more or less constant.

Network Usage

The network usage of the system is an important indicator of an application's performance. When you're testing a system, you can generally ignore any potential network issues because the environment is closely controlled.

Low

If you're not seeing a lot of network usage, your application might not be very chatty. On the other hand, if you'd expect to see more usage based on the user load, the problem might be due to resource sharing in the application itself.

High

Are you sending too much information across the wire? If your application uses Remoting or XML Web services, you might want to check to see what is being sent. Serialization of objects can lead to unintentionally sending an excessive amount of information.
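One quick way to check is to measure the serialized payload directly. A hedged sketch (Python and JSON standing in for .NET serialization; the `customer` shape is a made-up example):

```python
import json

# A server-side object with one large field the client never uses.
customer = {
    "name": "Ada",
    "history": ["order-%d" % i for i in range(10_000)],  # heavyweight field
}

full = json.dumps(customer)                       # naive: serialize everything
trimmed = json.dumps({"name": customer["name"]})  # send only what's needed

print(len(full) // len(trimmed))  # the full payload is thousands of times larger
```

The fix is usually the same regardless of the serialization technology: define a lightweight shape for what actually crosses the wire instead of serializing the whole object graph.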

That's All, Folks

Remember that any of these checks can be informative or not. Every application is different. Sometimes indicators that look suspect might just be part of your application's performance signature. You should not assume that there is a perfect profile for an application: there isn't. We all want applications that make the best use of the resources available, but sometimes the problem has less to do with the application than with the system it's running on. Also, results are often subject to interpretation, and investigations often proceed on hunches rather than solid data.

Attaching a Debugger

Now you have a qualitative feel for what an application is doing. But what you really need is something quantitative. In my experience, nothing says quantitative quite like a debugger. By attaching a debugger to a running process, you can quickly see if any unexpected exceptions are being generated. I usually start by running through each user scenario individually with the debugger attached. I try to pick up on any exceptions that shouldn't be happening. When the application is "clean" of any unwanted exceptions, I try running a set of user scenarios at a low-stress level. If you have major threading or resource sharing issues, they'll typically show up almost immediately when you apply any real stress to an application. Keeping the stress level low tends to keep things understandable without deluging the user with an avalanche of potentially duplicate exceptions.

Once you can reasonably say that the application is running fine at a low-stress level (I'd give it some time to make sure), it's time to ratchet things up. Increasing the stress level can help you discover any additional threading or resource sharing issues. Watch out for timeout exceptions, which can suggest that object requests waiting in queues are timing out before they can be processed. This is not uncommon with SqlConnection objects in the Open method, but you could also see timeout exceptions with your own pooled objects.
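The mechanics of that failure mode are easy to reproduce in miniature. A sketch (Python's queue module for illustration; this is an analogy for pool exhaustion, not the .NET pool implementation):

```python
import queue

# A pool of size 1 that is never replenished: the second caller
# waits in the queue until its timeout expires, the same failure
# mode as SqlConnection.Open timing out on an exhausted pool.
pool = queue.Queue(maxsize=1)
pool.put("connection")

first = pool.get(timeout=0.1)   # succeeds immediately
try:
    pool.get(timeout=0.1)       # nothing was released: times out
    timed_out = False
except queue.Empty:
    timed_out = True
print(timed_out)  # True
```

When you see these timeouts under stress, the question to ask is the one from the sketch: is something holding pooled objects longer than it should, or failing to release them at all?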

Performance Counters

Performance counters are the developer's best friend. You can access the same information in Windows Task Manager, but the Performance monitor console offers a greater wealth of information. You can monitor the size of your SQL connection pools, application response times, and so forth. You can also monitor any custom performance counters that your application makes available.

Monitoring the performance counters in conjunction with using the debugger can allow you to peek into your application when strange things appear to be happening. I already mentioned the example of a stalled application (breaking in and viewing the state of all of the application's threads). There are many other conditions where this is useful. Remember to see Appendix C and Appendix D for performance counters that might be of use to you.
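A custom counter doesn't have to be elaborate. As a hedged, illustrative sketch (Python; `ResponseTimeCounter` is a made-up stand-in for a custom performance counter, not a framework API):

```python
from collections import deque

class ResponseTimeCounter:
    """Tiny stand-in for a custom performance counter: tracks the
    average of the last N response times."""
    def __init__(self, window=100):
        self._samples = deque(maxlen=window)

    def record(self, seconds):
        # Old samples fall off automatically once the window is full.
        self._samples.append(seconds)

    def average(self):
        if not self._samples:
            return 0.0
        return sum(self._samples) / len(self._samples)

counter = ResponseTimeCounter(window=3)
for s in (0.10, 0.20, 0.30, 0.40):   # the oldest sample (0.10) falls out
    counter.record(s)
print(round(counter.average(), 2))   # 0.3
```

Publishing a number like this per operation gives you the same kind of trend line the Performance console shows for the built-in counters, but for the parts of the application only you can see.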

Low-Level Analysis

Low-level performance analysis is usually my last resort, for several reasons, not least of which is that it's the most involved part of all performance tuning. When you get down to the low-level analysis, you're talking about using application profiling and trace and debug statements to track down performance issues. You might also use specialized custom performance counters provided by your application. And you might also need to use a low-level debugger such as WinDbg to view inactive threads.

Profiling is probably one of the best low-level tools you can use. I've found it to be invaluable in finding hot spots in an application. If an application is not doing what you expect, a profiling tool can tell you a great deal about it, possibly more than you would ever want to know.
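What "finding a hot spot" looks like in practice: you profile a scenario and the offending function rises to the top of the report. A minimal sketch (Python's built-in cProfile for illustration, in place of whatever profiler your platform provides; `slow_concat` is a deliberately inefficient example):

```python
import cProfile
import io
import pstats

def slow_concat(n):
    # Quadratic string building: a classic, easy-to-find hot spot.
    s = ""
    for i in range(n):
        s += str(i)
    return s

profiler = cProfile.Profile()
profiler.enable()
slow_concat(10_000)
profiler.disable()

# Dump the top entries sorted by cumulative time; the hot spot
# shows up by name in the report.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print("slow_concat" in out.getvalue())  # True
```

The workflow is the same with any profiler: run a realistic scenario under the profiler, sort by time, and read the names at the top of the list.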



Designing Enterprise Applications with Microsoft Visual Basic .NET (Pro-Developer)
ISBN: 073561721X
Year: 2002
Pages: 103
