Quantifying Performance


What's the definition of performance and an optimized system, and why are they so important?

Is high performance all about a warm-and-fuzzy feeling we get when our environments and platforms are operating in an ultra-tuned fashion, or is it more about the direct or indirect financial implications of a well-tuned platform? Maybe it's a bit of both, but the quantifiable, tangible reasons performance is so important are primarily financial.

Consider the following example: You're operating a 1,000-user environment on a pair of Sun F4800 servers, and the average response time for each user transaction is around five seconds. If your business stakeholders informed you they were going to heavily market the application you manage (with a suggested 100-percent increase in customers), then you'd probably initially think of simply doubling the Central Processing Unit (CPU), memory, and physical number of systems. Or you'd simply look at how the application operates and is tuned.

From personal experience, most people focus on increasing the hardware, because system load from customer usage and computing power typically correlate with one another. The problem with this scenario, apart from the de facto upgrade approach being costly, is that it doesn't promote operational best practices. Poor operational practices (whether in operations architecture or operations engineering) are among the worst offenders in spiraling Information Technology (IT) costs today. Operations engineering and performance go hand in hand.

On one side is an operations methodology: the processes and methodologies incorporated into the scope of your operations architecture. These processes and methodologies are what drive proactive performance management and optimization programs, all of which I'll discuss later in this chapter.

On the other side is a performance methodology: the approach you take to implement and drive an increase in application, system, or platform performance.

Figure 1-1 highlights the two methodologies, operations and performance. Although they're both the key drivers in achieving a well-performing system, too much or too little of either can incur additional costs.

Figure 1-1: The operations and performance methodologies intersect

Figure 1-1 shows a fairly proportioned amount of both operations and performance methodologies. Note the cost scale underneath each methodology.

Let's say that Figure 1-1 represents a model that works to achieve a 25-percent improvement in performance each quarter (every three months). If you wanted to increase this to a 75-percent improvement in performance in a quarter, both operations and performance methodologies would come closer to one another (as depicted in Figure 1-2), and the costs associated with conducting performance management and operational process changes would increase.

Figure 1-2: The operations and performance methodologies intersect, with an increase in performance

Although there's no problem with attempting to achieve this type of performance increase, costs for both methodologies increase. The question that needs to be answered is, "How much cost is acceptable to achieve a 75-percent increase in performance?"

Would it mean that because of the performance, the volume of transactions could increase, amounting to more customers or users utilizing the application and system? Would this increase potential sales, orders, or basic value to your end users? If this is the case, then the sky is the limit!

Take the following example: Suppose each user transaction netted $100. If your system were able to facilitate 500 transactions per day, then a 75-percent increase in performance (using Figure 1-2 as the example) would increase throughput from 500 to 875 transactions a day. This would equate to an additional $37,500 per day. However, to achieve that additional 75 percent of transactions, what does it cost you in additional resources (such as staff hours and staff numbers) and overhead (such as additional time spent analyzing performance logs and models)?
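The arithmetic above can be sketched as a small method. Note that the figures ($100 per transaction, 500 transactions per day, a 75-percent gain) are the hypothetical values from the example, not measurements, and the class and method names are illustrative:

```java
// Sketch of the worked example: revenue gained from a throughput improvement.
// All figures are the hypothetical values used in the text.
public class ThroughputValue {

    static double additionalDailyRevenue(double revenuePerTxn,
                                         int baseTxnsPerDay,
                                         double improvement) {
        // Assumes a 75% performance gain translates directly into
        // 75% more completed transactions per day.
        long newTxnsPerDay = Math.round(baseTxnsPerDay * (1 + improvement));
        return (newTxnsPerDay - baseTxnsPerDay) * revenuePerTxn;
    }

    public static void main(String[] args) {
        // 500 -> 875 transactions/day: 375 extra transactions at $100 each
        System.out.println(additionalDailyRevenue(100.0, 500, 0.75)); // 37500.0
    }
}
```

The real question, as the paragraph notes, is whether that $37,500 exceeds what the extra throughput costs you to achieve.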

This is the dilemma you face in trying to achieve application and system optimization: How much is too much?

One of the key messages I try to convey in this book is that performance management isn't simply about turbo-charging your environment (applications, systems, or platforms); it's about smart performance management. Sometimes a ten-second response time is acceptable to customers and end users; it may cost a further 10 percent of your operations budget to reach six-second response times, but it may cost 90 percent of your operations budget to reach five-second response times.
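To make the diminishing returns concrete, here's a minimal sketch comparing the marginal cost of each second gained. The budget percentages come from the paragraph above; the assumption that the 90 percent buys only the final second (six down to five) is mine, and the names are illustrative:

```java
// Marginal cost of response-time improvement, expressed as a fraction of
// the operations budget per second of response time gained.
public class MarginalCost {

    static double budgetSharePerSecond(double budgetFraction,
                                       double fromSecs, double toSecs) {
        return budgetFraction / (fromSecs - toSecs);
    }

    public static void main(String[] args) {
        // 10% of budget to go from 10s to 6s: 2.5% of budget per second
        double tenToSix = budgetSharePerSecond(0.10, 10.0, 6.0);
        // 90% of budget to go from 6s to 5s: 90% of budget per second
        double sixToFive = budgetSharePerSecond(0.90, 6.0, 5.0);
        System.out.printf("%.3f vs %.3f of budget per second%n",
                          tenToSix, sixToFive);
    }
}
```

Under these assumptions, the last second costs 36 times more per second gained than the previous four did, which is exactly the "when is enough, enough?" question.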

Therefore, performance management is about understanding the requirements of performance as well as understanding when enough performance is enough.

Note  

This question of "When is enough, enough?" is somewhat analogous to disaster recovery realization analysis. For example, suppose a company is considering disaster recovery (in its true sense) as part of its operational effectiveness and availability program. There's little value in protecting a particular application or system that is used once a month and takes only several hours to rebuild from scratch. A disaster recovery site may cost a little less than $1 million, but the financial impact of a "disastrous event" occurring would be only $25,000. In this case, there's little reason to implement disaster recovery planning. Likewise, tuning your platform to shave an additional second off response time may cost more than the value that the additional performance provides or creates.
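The note's break-even reasoning reduces to a one-line comparison. The dollar figures are taken from the note; the class and method names are illustrative only:

```java
// Does a mitigation (a DR site, or an extra round of tuning) pay for
// itself? Only when the loss it prevents, or the value it creates,
// exceeds what the mitigation costs.
public class BreakEven {

    static boolean justified(double mitigationCost, double valueProtected) {
        return valueProtected > mitigationCost;
    }

    public static void main(String[] args) {
        // A DR site costing near $1,000,000 vs. a $25,000 event impact
        System.out.println(justified(1_000_000.0, 25_000.0)); // false
    }
}
```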

Getting back to the earlier thread, not looking at the WebSphere or application platform in order to scale and increase performance opens you up to myriad potential disasters down the track.

Another problem with this kind of de facto upgrade approach is that your developers may fall into the dangerous trap of expecting unlimited processing power, particularly if they're used to working in an environment where simply coding to functional specification, rather than coding for optimization, is the norm.

Note  

That said, if you've already read later chapters or have previously conducted an optimization and tuning review of your WebSphere environment, upgrading hardware to achieve scalability or boost performance could very well be the right choice. Just remember, any upgrade to CPU and memory will require changes to your WebSphere application server configuration. As soon as you add either of these two components into a system, the overall system load characteristics will change.

As I've hinted at earlier, poor system and application performance doesn't just have a negative effect on your customers or users; it also hurts your IT department's budgets and your manager's perception of your effectiveness.

The bottom line is that poorly performing applications are expensive. The number-one rule in operational effectiveness for managing performance is all about proactive management of your system resources.

Many of you reading this book will be able to tune a Unix or a Microsoft server, but you likely purchased this book because you want (or need) to also be able to tune, optimize, and ultimately scale your WebSphere environment.

Now that you have a shared view of what performance really is, you'll look at the art of managing performance.




Maximizing Performance and Scalability with IBM WebSphere
ISBN: 1590591305
Year: 2003
Pages: 111
Authors: Adam G. Neat