Storage Planning and Capacity Planning


Planning and installing applications continues to require a big effort within IT, especially when it comes to applications that are considered enterprise-level (a term which has never seemed appropriate, by the way). Nevertheless, the term was employed by large system players (read: mainframes) to differentiate their solutions from those of the small system players (read: PCs) in the early days of the user revolution. Commonly known as the PC wars, it was a time in which many business users attempted to buy their own IT infrastructures for pennies on the dollar compared to the mainframe.

It was about this time that sophisticated capacity planning and workload management started to go out the door, out the window, and out of IT standards and practices. All in all, though, those tools and activities provided a certain amount of pride and gratification when the systems (read: mainframes again) were upgraded and everything not only worked, but worked the same as it had before, with improved user response time, balanced utilization, and a batch window that completed two hours earlier.

Well, we traded those simple days for ubiquitous and prolific computing resources on everyone's desktop, job, travel agenda, and meeting room. We traded the security of centralization, which made capacity planning a manageable and gratifying activity, for a Wild West distributed environment that reflected the scalability of the box we were operating on, be it UNIX or Windows. The mantra was: if the ones we're running don't work, are too slow, or can't support the network, we can always get new ones. What could be simpler? A capacity plan based at the box level. If it doesn't work, just get another one. They don't cost much. We don't need to consider no stinking workloads; we can process anything, and if it grows, we'll just add more boxes. The same goes for the network.

A wonderful idea, and to some degree it worked, until the applications started to take advantage of the distributed network features of the boxes, as well as the increasing sophistication and power of the boxes themselves. Yes, the little boxes grew up to be big boxes. Others grew to be giants the size of mainframes. It should be pointed out, however, that many boxes had genes that stunted their growth, keeping them from becoming enormous mainframe-like boxes. Even then, the box applications grew in sophistication, resource utilization, and environmental requirements.

Note 

The gene mentioned here was referred to as the Gatesonian gene, which kept a box from scaling beyond the confines of a desktop.

Interestingly enough, we seem to have come full circle. Although nothing essentially duplicates itself, our circle of activities has landed us back near the realm of the (gasp!) workload. It's not that workloads ever really went away; we just haven't had to recognize them for a while, given that we all got caught up in box-level capacity planning, known as the BLCP practice.

BLCP is not unlike the IT practice of going into installation frenzy during certain popular trends: for example, ERP, relational databases, e-mail, office automation, web sites, intranets, and now the dreaded CRM. Such things cause us to be driven by current application trends, leaving us to distinguish between CRM performance, ERP utilization, and e-mail service levels. To a great degree, we handled these installation frenzies with the BLCP; we can implement anything by using the right box, as well as the right number of boxes.

Several things within the applications industry have rendered the BLCP practice obsolete. First and foremost is the sophistication of the application. Today, applications are distributed, data-centric, hetero-data enabled, and, increasingly, de-coupled. Second, the infrastructure has become specialized: there is the network, the server, the desktop (that is, the client), and the storage. Third, the infrastructure itself is becoming further de-coupled. More specifically, storage is taking on its own infrastructure, with storage networks freeing its processing from the bounds and restrictions of application logic, as well as from the network overhead of the client/server model, thereby becoming immune to the Gatesonian gene.

Note 

Hetero-data is the characteristic of an application that requires multiple data types to perform its services.

Finally, the common services that support the application logic infrastructure are quickly becoming both commodity-oriented and public in nature. We are moving into a future where applications are programmed by end users to employ web services. These services access lower-level infrastructures and, through them, the complex and proprietary support products operating within a particular information technology structure. Users will likely take it for granted that adequate resources exist.

These conditions will further establish storage as an entity with its own operating infrastructure. That, however, will require storage to evolve into a more complex structure, supporting an even more complicated set of common services that includes application logic, network processing, and common I/O facilities. Each of these entities will have to understand and configure itself to operate effectively with existing and future workloads, which will dictate the processing and physical requirements necessary for even minimal performance.

That's why workloads are important.


