Benchmarking to Improve Your Workload

Benchmarking can be as simple as using a stopwatch to measure the elapsed time of a single command, or as complex as running multitier workloads that span multiple machines and require hard data consistency, multiple measurement points, complex derived metrics, and rigorous evaluation and publication standards. As a result, some benchmarks can be run on your home laptop; others require highly trained staff and multimillion-dollar hardware configurations. If your goal is to use benchmarking to help improve your own workload, you need to decide whether it makes sense to create your own benchmark or whether a standard benchmark would serve.
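
At the simple end of that spectrum, the stopwatch can be replaced with a few lines of code. The following sketch is offered purely as an illustration (the command and run count are placeholders); it times a command over several runs and reports the minimum, mean, and maximum elapsed time:

    import statistics
    import subprocess
    import time

    def time_command(argv, runs=5):
        """Run a command repeatedly, returning the elapsed wall-clock times."""
        elapsed = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(argv, check=True, capture_output=True)
            elapsed.append(time.perf_counter() - start)
        return elapsed

    # Placeholder command; substitute the command you actually care about.
    samples = time_command(["ls", "-lR", "/usr/include"], runs=5)
    print(f"min {min(samples):.3f}s  mean {statistics.mean(samples):.3f}s  "
          f"max {max(samples):.3f}s")

Repeating the run and reporting a range, rather than trusting a single measurement, is the first habit that separates a benchmark from a one-off timing.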

The best approach to evaluating your workload is to run your own applications on your own system and measure the key metrics for your workload. However, most end-user workloads do not include standardized measurement points for bandwidth, latency, or overall user response times. Ideally, workloads would be designed with these measurement points in place, but that is rarely the case. In some workloads, an underlying component such as a database may offer measurement points of its own, such as the ability to report transactions per second for various tables. However, these measurement points may cover only a small subset of the overall workload and may not provide a realistic view of the response times your end users actually see.
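
If you control the application source, measurement points of this kind can be retrofitted. The sketch below is a hypothetical illustration (the class and operation names are invented, and the measured work is a stand-in) showing one way to collect latency samples around a user-visible operation:

    import statistics
    import time

    class MeasurementPoint:
        """Collects latency samples for one named operation."""

        def __init__(self, name):
            self.name = name
            self.samples = []

        def record(self, func, *args, **kwargs):
            # Time one invocation and keep the sample for later reporting.
            start = time.perf_counter()
            result = func(*args, **kwargs)
            self.samples.append(time.perf_counter() - start)
            return result

        def report(self):
            ms = [s * 1000 for s in self.samples]
            print(f"{self.name}: n={len(ms)} "
                  f"mean={statistics.mean(ms):.2f}ms max={max(ms):.2f}ms")

    # Hypothetical usage: wrap each user-visible operation you care about.
    lookup = MeasurementPoint("customer_lookup")
    for _ in range(100):
        lookup.record(lambda: sum(range(10000)))  # stand-in for real work
    lookup.report()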

So, if your workload doesn't have good measurement points at hand, what other options are available to measure its throughput? One option is to create a benchmark based on your workload. However, the difficulty of creating your own benchmark is roughly proportional to the complexity of the workload you want to model. Benchmark creation can be expensive, complex, difficult to validate, and hard to keep up-to-date as your workload changes. Maintaining such a benchmark requires a significant ongoing investment and is not for the faint of heart.
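
To give a concrete sense of what even a trivial custom benchmark involves, the following sketch (illustrative only; the transaction body is a placeholder) drives a fixed number of simulated transactions and reports throughput:

    import time

    def transaction():
        # Placeholder for one unit of your real workload,
        # such as a database update or a request/response cycle.
        sum(range(5000))

    def run_benchmark(transactions=10000, warmup=1000):
        for _ in range(warmup):        # warm caches, connections, and so on
            transaction()
        start = time.perf_counter()
        for _ in range(transactions):
            transaction()
        elapsed = time.perf_counter() - start
        print(f"{transactions} transactions in {elapsed:.2f}s "
              f"({transactions / elapsed:.0f} tx/s)")

    run_benchmark()

Note that even this toy version needs a warm-up phase. A realistic benchmark would also need representative data, think times, multiple clients, and result validation, which is where most of the ongoing cost lies.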

Another option is to identify an existing benchmark that models a workload similar to your own. Although a number of benchmarks model complex but common real-world workloads, not every workload has a preexisting benchmark. In those cases, you can turn to benchmarks that model common components of larger workloads: file system benchmarks, file-serving benchmarks, mail server benchmarks, networking benchmarks, and so on. Some of these benchmarks are available from the open source community; others come from commercial vendors or independent nonprofit organizations. These benchmarks typically provide standardized baseline measurements under a wide variety of operating conditions, including different processor types, operating systems, disk configurations, tuning parameters, and so on. Component benchmarks are often referred to as microbenchmarks; larger benchmarks are referred to as application benchmarks or enterprise benchmarks, depending on their focus.

Running a benchmark a single time is not very interesting. The real value of a benchmark lies in archiving the relevant operating parameters, such as the hardware configuration, software configuration, and tuning variables, and then comparing successive runs against past results. Comparing and contrasting machine configurations, cost, and key performance parameters allows you to make informed decisions about potential upgrades, tuning opportunities, or additional software that might improve the overall performance and user response times of your workload. Established benchmarks typically go to great pains to ensure portability, consistent results from run to run, comparable results across similar hardware and system environments, and informative metrics about common aspects of traditional workloads. This standardization and consistency makes benchmarks powerful tools for evaluating potential improvements to your workload, or for evaluating hardware and software solutions in light of a proposed workload.
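
A lightweight way to put this into practice is to record each run's configuration alongside its results in a structured file, so later runs can be compared mechanically. The sketch below is a minimal illustration with invented field names; it appends one JSON record per run and reports the change in a chosen metric between the last two runs:

    import json
    import platform
    import time

    def archive_run(result_file, metrics, tuning):
        """Append one benchmark run, with its configuration, to a JSON-lines file."""
        record = {
            "timestamp": time.time(),
            "kernel": platform.release(),    # software configuration (kernel version)
            "machine": platform.machine(),   # hardware configuration (coarse)
            "tuning": tuning,                # e.g. {"vm.swappiness": 10}
            "metrics": metrics,              # e.g. {"tx_per_sec": 1250.0}
        }
        with open(result_file, "a") as f:
            f.write(json.dumps(record) + "\n")

    def compare_latest(result_file, metric):
        """Print the change in one metric between the last two archived runs."""
        with open(result_file) as f:
            runs = [json.loads(line) for line in f]
        if len(runs) >= 2:
            prev = runs[-2]["metrics"][metric]
            curr = runs[-1]["metrics"][metric]
            print(f"{metric}: {prev} -> {curr} ({100 * (curr - prev) / prev:+.1f}%)")

    archive_run("runs.jsonl", {"tx_per_sec": 1250.0}, {"vm.swappiness": 10})
    archive_run("runs.jsonl", {"tx_per_sec": 1310.0}, {"vm.swappiness": 60})
    compare_latest("runs.jsonl", "tx_per_sec")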
