Summary

This chapter started with microbenchmarks, worked through component benchmarks, looked at some relatively simple application benchmarks, and discussed complex workload modeling benchmarks. If every performance analyst were expected to use every one of these benchmarks when setting up a workload, no one would ever have time to set up a workload. So how can you use all this knowledge without getting bogged down by all the options available?

First, it is most useful to consult some of the more rigorously maintained benchmarking sources, such as http://www.tpc.org/ or http://www.spec.org/, to identify benchmarks and workloads that correspond well to the workload you intend to run. By comparing the various configurations, performance metrics, costs, and applications, you can prescreen candidate systems and use the published results to inform your purchase and initial configuration decisions.

Next, when you choose a component for your environment that is not directly part of a published benchmark, you can use a microbenchmark to see how that component compares to one that is. For instance, suppose you have evaluated web-serving results and chosen a particular configuration and web server for your environment. As a performance analyst familiar with your proposed workload, you may conclude that the new 2.5 gigabit Ethernet over fiber is the interconnect you need to meet your web-serving goals, but you are curious how much benefit it offers over the published numbers for 100 megabit Ethernet. Because your configuration is otherwise very similar to a published result, you can use a benchmark such as NetPerf to compare the throughput and latency of 100 megabit and 2.5 gigabit Ethernet. Although the raw link speeds suggest that 2.5 gigabit is 25 times faster, protocol overhead and latency will consume some of that bandwidth, and your hardware may not be able to drive the link at its maximum rate. A benchmark therefore gives you a relative comparison that sets an upper bound on the improvement your environment might see from changing the networking interconnect. Of course, this provides only guidance; as you've seen, a more complex environment may keep you from reaching even that upper bound. You may therefore choose to work with your vendor to run the full web-serving test on your own hardware configuration as part of your acceptance test.
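As a rough sketch of such a comparison, the NetPerf invocations below use its standard TCP_STREAM (bulk throughput) and TCP_RR (request/response) tests. The host names and the 60-second test length are placeholders for your own systems, and each target is assumed to be running the companion netserver daemon:

    # Bulk throughput over each interface (host names are illustrative)
    netperf -H server-100mbit -t TCP_STREAM -l 60
    netperf -H server-2500mbit -t TCP_STREAM -l 60

    # Round-trip latency, measured as request/response transactions per second
    netperf -H server-100mbit -t TCP_RR -l 60
    netperf -H server-2500mbit -t TCP_RR -l 60

TCP_STREAM reports throughput in megabits per second, and TCP_RR reports transactions per second (the inverse of the average round-trip time), so together the two tests show how much of the theoretical 25x improvement the real hardware and protocol stack actually deliver.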

This is a rather simple example, but the intention is that with some knowledge of the underlying constraints, you can use simpler benchmarks to model aspects of more complex workloads or benchmarks. With experience, you should be able to build a repertoire of benchmarks that help you analyze or project the performance of future workloads as well.

One caveat: Even in published benchmarks, vendors use many advanced techniques to show the performance and price/performance of their products in the best light. A performance analyst should carefully analyze the benchmark configuration of any publication and compare that configuration to their own. Results may not be directly comparable if the database schemas are used differently, the hardware is configured differently, or a particular software stack differs. The performance analyst may also want to develop a relationship with the vendor to understand how the benchmark results are likely to compare to the local configuration and situation.
