History of Capacity Planning


In the early years of multiuser computers, the concepts of capacity planning and performance were not widely understood or developed. By the early 1970s, a sizing project simply involved finding customers who were running an application that "ran like" the target customer's application. Finding these customers was difficult, and matching companies or organizations and their application use was even more challenging.

In the mid-1970s, customers and application suppliers developed an analysis methodology: running a specific benchmark or workload to estimate the optimal initial size of a machine. They built an application similar to that of the customer in question and ran it on similar hardware to gather performance statistics. These statistics were then used to determine the best-size machine to meet the customer's needs. This process also enabled "what if" scenarios to be run with the benchmark to determine what size machine would be required if more users, application processes, or data were added to the system. The one drawback of this process was that it was expensive. These early benchmarks, originally developed to simulate customers' usage patterns, began to be used mostly by system vendors as marketing tools, to sell systems and to compare the relative performance of competing hardware offerings.

During this period, analysts were developing methods of predicting usage on an existing system. On the surface, this process seemed less challenging, but it proved to be just as difficult because tested methodologies did not exist, nor were there tools available to collect the necessary data. Computer scientists such as Dr. Jeffrey Buzen, the father of capacity planning, were still developing theories on usage and determining how to perform these calculations.

By the 1980s, the early benchmark simulations had evolved into standard benchmark loads, such as the ST1 benchmark, the TP1 benchmark, and the Debit/Credit benchmark, but the emphasis was on finding the fastest-performing hardware for promotional purposes rather than on developing a standard application workload that could be used to size and maintain systems. Customers still could not use these benchmark offerings to compare system hardware because each customer's situation was different. Customer demand led to the formation of a computer industry consortium, the Transaction Processing Performance Council, which specified standardized transaction loads for over 45 hardware and software manufacturers. These benchmarks could often show the relative capabilities of hardware and database software; unfortunately, they were not useful for sizing an application workload.

NOTE


The council benchmarks were not useful for sizing because they did not reflect a real workload; rather, they were designed to showcase performance, such as how many transactions could pass through the system in a given time. The transactions were of short duration and did very little work, so very large quantities of them could be processed. These high transaction counts gave the impression that the systems running the benchmarks were very powerful, when in reality they only seemed that way because of the workload design.

At the same time, client/server computing and relational database technology were maturing, and the need for predicting the initial size of a system and for capacity planning was growing. Most modern applications are now written based on client/server architecture. Servers are usually used as central data storage devices, and the user interface is usually run locally on a desktop machine or on a remote Web site. This cost-effective strategy for using expensive server processing power takes advantage of the GUIs to which users are already accustomed. Because servers running database applications are heavily utilized, these servers are now the focus of most sizing projects and capacity planning studies.

To date, the application simulation benchmark remains the most common method used for sizing servers, and collecting historical performance data and applying capacity planning techniques to that data is still the most accurate way of predicting the future capacity of a machine. Although the process is expensive and time-consuming, customers can achieve a fairly high degree of accuracy if they simulate the exact usage of the server. However, because large projects may require a multimillion-dollar investment on the part of the customer or the vendor, usually only the largest customers can gain access to systems for this kind of testing. Clearly, a method is needed to perform in-depth, accurate system sizing and capacity planning for small to average-size systems. For such systems, some simple calculations and a general knowledge of system usage are all you need to size the system and predict usage to within 90 percent accuracy.


