Summary

The benchmarks conducted on varying hardware configurations reveal a strong, setup-dependent correlation among the I/O scheduler, the workload profile, the file system, and, ultimately, I/O performance. The empirical data shows that most tuning efforts merely reshuffle the schedulers' performance ranking. The implication is that the choice of an I/O scheduler has to be based on the workload pattern, the hardware setup, and the file system used. To reemphasize the importance of this approach, an additional benchmark is executed on a Linux 2.6 SMP system, using the JFS file system and a large RAID-0 configuration consisting of 84 RAID-0 systems (five disks each). The SPECsfs benchmark is used as the workload generator, the goal being to determine the highest throughput achievable in this setup by changing only the I/O scheduler between SPECsfs runs. The results reveal that the noop scheduler can outperform both the CFQ and the AS schedulers. This reverses the ranking established for the smaller RAID-5 and RAID-0 environments benchmarked in this study, where the noop scheduler could not outperform the CFQ implementation in any random I/O test. In the large RAID-0 environment, the 84 rb-tree data structures that CFQ has to maintain represent a substantial overhead factor, from both a memory and a CPU perspective.
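Switching the scheduler between runs does not require rebuilding the kernel. As an illustrative sketch (the device name sda and the boot entry are placeholders), the elevator can be selected for all devices via the elevator= boot parameter, and later 2.6 kernels also allow per-device switching at run time through sysfs:

    # Boot-time selection (kernel command line, e.g., in the GRUB entry):
    kernel /vmlinuz-2.6 ro root=/dev/sda1 elevator=noop

    # Run-time selection on 2.6 kernels that support it:
    cat /sys/block/sda/queue/scheduler         # active scheduler shown in brackets
    echo noop > /sys/block/sda/queue/scheduler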

The results show that no single I/O scheduler consistently provides the best I/O performance. Although the AS scheduler excels on small configurations in a sequential read scenario, the untuned deadline solution provides acceptable performance on smaller RAID systems. The CFQ scheduler exhibits the most potential from a tuning perspective on smaller RAID-5 systems, because increasing the nr_requests parameter yields the lowest response time. Because the noop scheduler represents a rather lightweight solution, large RAID systems that consist of many individual logical devices may benefit from its reduced memory and CPU overhead; on such systems, the other three implementations have to maintain complex data structures as part of their operating framework. Further, the study reveals that the proposed PID-based, tunable CFQ implementation represents a valuable alternative to the standard CFQ implementation because it introduces true fairness on a per-thread basis; the empirical data collected on a RAID-5 system supports that claim. Future work items include analyzing the erratic performance behavior encountered by the AS scheduler on XFS while processing a metadata-intensive workload profile, and an in-depth analysis, across different hardware setups, of the inconsistent nr_requests behavior observed on large RAID-0 systems. The anticipatory heuristics of the AS code in Linux 2.6.5 are the target of another study, aimed at making the current implementation more adaptive to varying workload conditions. Additional research on the proposed PID-based CFQ implementation, as well as extending the I/O performance study to larger I/O subsystems, represents further work to be addressed in the near future.
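As a point of reference, the nr_requests tunable mentioned above is exposed per block device on 2.6 kernels; the device name sda and the value 512 below are illustrative only:

    cat /sys/block/sda/queue/nr_requests       # defaults to 128 on most 2.6 kernels
    echo 512 > /sys/block/sda/queue/nr_requests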
