Modeling Performance and Capacity Requirements


One objective of capacity planning is to estimate, as accurately as possible, the future needs of users. This is generally accomplished by modeling potential workloads and configurations to assess adequate fit. In general-purpose capacity planning terms, there are three levels of modeling that fit the needs of most data centers. These range from expensive, specialized software tools that model workloads and predict performance based upon hardware configurations, to vendor benchmarking that generally uses subsets of the actual data-center workloads, to the official performance councils that provide third-party benchmarks of vendor solutions.

The first set is the capacity planning modeling tools. Although these provide the greatest detail, their accuracy depends on the data supplied to the model and on the expertise applied in analyzing the benchmark results. Unfortunately, such tools have yet to be developed for storage networks, though with creative effort they could probably be built given sufficient time and money. Even then, the model would have to be an integrated simulation of workloads that share common storage arrays or, in the case of NAS workloads, that incur additional latencies as remote network drives.

The second set is the vendor benchmarks, although these by their very nature are suspect given their inability to replicate the specifics of an individual data center. These simulations don't always include the disparate facilities that make up production data centers, and as a result the benchmark may be skewed toward the vendor's solution. Wouldn't that be a surprise? However, vendor benchmarks provide valuable insight into the potential capacity and performance of an expensive storage infrastructure installation. In addition, many first-tier vendors operate customer benchmark centers where they test potential customer solutions as well as conduct their own interoperability testing.

The third set is the third-party benchmarks run by non-profit corporations that sponsor testing and performance benchmarking of real-life configurations. These organizations are akin to the insurance safety councils that perform crash tests. The performance councils take off-the-shelf equipment from vendors and build a real-life configuration in order to run a simulated workload based upon end-user applications, such as OLTP and data warehouse transactions. In other words, they exercise the configuration in real-life scenarios to validate all the factors a data center would consider when purchasing the configuration. Two are relevant to the storage industry: the Transaction Processing Performance Council (TPC) and the Storage Performance Council (SPC).

The TPC provides benchmark testing of computer configurations using standard transactional sets. These benchmarks execute transactions that characterize database queries, simulating everything from simple queries to complex data warehouse queries that access multiple databases. The tests are run on vendor-supplied hardware and software configurations that range from homogeneous hardware systems to heterogeneous software operating environments and database systems. The test results are generally published and available for purchase through the council. This allows data centers to evaluate different levels of potential configurations at arm's length while obtaining information about potential cost-to-operating-environment requirements. It also provides an evaluation of storage from an integrated view, as storage configurations become part of the system's overall configuration.

The SPC is specific to storage and is the new kid on the block when it comes to evaluating vendor storage configurations. It offers the most specific and productive modeling available to date for storage networking and capacity planning. Its job is to be the insurance safety council for the storage industry, protecting the data center from products that continue to be problematic in real-life implementations while providing an effective feedback mechanism for vendors who strive for better products.

The SPC's specific objectives are to provide both the data center and systems integrators with an accurate database of performance and price/performance results spanning manufacturers, configurations, and products. The council also uses this experience to build tools that help data centers analyze and effectively configure storage networks.

The SPC does this through a series of configuration requirements, performance metrics, and tests. Its services can analyze anything from small subsets of storage, such as JBOD and RAID storage arrays, to large-scale SAN configurations. However, all configurations must meet the following criteria prior to testing:

  • Data Persistence Storage used in an SPC test must demonstrate the ability to preserve data without corruption or loss. Equipment sponsors are required to complete audited tests that verify this capability.

  • Sustainability A benchmark configuration must easily demonstrate that results can be consistently maintained over long periods of time, as would be expected in system environments with demanding long-term I/O request throughput requirements.

  • Equal Access to Host Systems All host systems used to impose benchmark-related I/O load on the tested storage configuration must have equal access to all storage resources.

  • Support for General Purpose Applications SPC benchmarks provide objective and verifiable performance data. Specifically prohibited are benchmark systems whose primary purpose is the performance optimization of the SPC benchmark results without corresponding applicability to real-world applications and environments.

Vendors who submit their products to these benchmarks must have their systems available to ship to customers within 60 days of reporting the SPC benchmark tests.

Probably the most valuable aspect of the SPC benchmarks is the actual test. The SPC has developed two environments that depict many of the workload demands we have previously discussed (see Chapter 17). The following describes these two test scenarios.

  • SPC-1 IOPS (I/Os per Second) Metric An environment composed of application systems that have many users and simultaneous application transactions, which can saturate the total I/O operations capacity of a storage subsystem. An OLTP application model makes up the benchmark, where the success of the system rests on the ability of the storage subsystem to process large numbers of I/O requests while maintaining acceptable response times for end users.

  • SPC-1 LRT (Least Response Time) Metric This environment depicts a batch type of operation where applications are dependent on elapsed time requirements to complete. These applications issue multiple I/O requests, which are often serial in nature; in other words, they must complete in a predefined order. The success of the storage system in these processing environments depends on its ability to minimize the response time for each I/O request and thereby limit the elapsed time necessary. A simple sketch contrasting these two metrics follows this list.
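
To make the distinction between the two metrics concrete, the following is a minimal Python sketch, not the SPC's actual test kit, that contrasts a throughput-oriented measurement (requests completed per second under sustained load) with a response-time-oriented measurement (average latency of serially issued requests). The function simulated_io and its latency model are illustrative assumptions only.

```python
import random
import time

def simulated_io():
    """Pretend to service one I/O request; the latency model is invented
    purely for illustration (0.5-5 ms per request)."""
    latency = random.uniform(0.0005, 0.005)
    time.sleep(latency)
    return latency

def measure_iops(duration_s=2.0):
    """Throughput view (SPC-1 IOPS style): count how many requests
    complete within a fixed window while the device is kept busy."""
    completed = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        simulated_io()
        completed += 1
    return completed / duration_s

def measure_lrt(request_count=200):
    """Response-time view (SPC-1 LRT style): issue requests one at a
    time at light load and report the average time each takes."""
    latencies = [simulated_io() for _ in range(request_count)]
    return sum(latencies) / len(latencies)

if __name__ == "__main__":
    print(f"Sustained throughput: {measure_iops():.0f} IOPS")
    print(f"Average response time: {measure_lrt() * 1000:.2f} ms")
```

The point of the contrast is that a subsystem can post impressive IOPS numbers under heavy parallel load yet still serve an individual, serially dependent request slowly; the two metrics answer different capacity planning questions.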

The SPC carefully audits and validates the results of benchmarks. Configurations and testing criteria are audited and validated either onsite or remotely through an audit protocol. This provides the vendor with audit certification that the tests and configurations meet the SPC standards and testing criteria. A peer review is conducted upon the completion of benchmark results. Results are considered validated and become official upon completion of the 60-day peer review process if no compliance challenges have been brought forward. Official results are available to SPC members on the SPC web site and are subject to certain publication rights.

 