The goal of this chapter is to quantify I/O performance in Linux 2.6 under varying workload conditions, in environments ranging from single-CPU, single-disk setups to SMP systems equipped with large, sophisticated RAID configurations. Using different I/O workload patterns, the focus is on quantifying both the baseline and the optimized performance behavior under different Linux 2.6 I/O scheduler and file system configurations. The analysis establishes performance metrics that characterize I/O performance behavior and guide the configuration, setup, and fine-tuning process. The analysis focuses on the entire I/O stack, incorporating the major software and hardware I/O optimization features present in the I/O path.

The I/O stack has become considerably more complex over the last few years. Contemporary I/O solutions include hardware, firmware, and software support for features such as request coalescing, adaptive prefetching, automated invocation of direct I/O, and asynchronous write-behind policies. From a hardware perspective, large cache subsystems at the memory, RAID controller, and physical disk layers allow for very aggressive use of these I/O optimization techniques. However, the interaction of the optimization methods incorporated in the different layers of the I/O stack is neither well understood nor quantified to the extent necessary to make a rational statement about I/O performance.

A rather interesting component of the Linux operating system is the I/O scheduler. Unlike the CPU scheduler, an I/O scheduler is not a strictly necessary component of an operating system, and it is therefore not an actual building block in some commercial UNIX systems. Before looking at the Linux 2.6 I/O schedulers and their performance, we first discuss the benchmark environment and workload profiles used throughout this chapter.
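As one concrete illustration of the optimization features mentioned above (this sketch is not drawn from the chapter's own benchmarks), an application can hint the kernel's adaptive prefetching (readahead) logic through `posix_fadvise(2)`; the temporary file and the 4 KB buffer size below are arbitrary choices for the example:

```python
import os
import tempfile

# Create a small scratch file to read back; sizes are arbitrary.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 4096)
    path = tmp.name

fd = os.open(path, os.O_RDONLY)
try:
    # POSIX_FADV_SEQUENTIAL asks the kernel for aggressive readahead on
    # this file; POSIX_FADV_RANDOM would disable readahead instead.
    if hasattr(os, "posix_fadvise"):  # available on Linux, Python 3.3+
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    data = os.read(fd, 4096)
finally:
    os.close(fd)
    os.unlink(path)
```

The hint is advisory: the kernel may adjust its readahead window, but the read itself behaves identically either way, which is why such tuning only shows up in throughput measurements, not in program semantics.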
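Linux 2.6 selects the I/O scheduler per block device at runtime through sysfs. As a minimal sketch (the `/sys` mount point and the device name in the comment are assumptions about the local system), the available and active schedulers can be listed like this:

```python
import glob

# Each block device exposes its scheduler list under sysfs (Linux 2.6+);
# the currently active scheduler is shown in square brackets.
scheduler_files = glob.glob("/sys/block/*/queue/scheduler")
for path in scheduler_files:
    with open(path) as f:
        print(path, "->", f.read().strip())

# Switching a device to, e.g., the deadline scheduler requires root:
#   echo deadline > /sys/block/sda/queue/scheduler
```

Because the setting is per device, a system can run different schedulers on different disks, which is the mechanism the scheduler comparisons in this chapter rely on.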