"Linux swapping" is not true swapping; the term refers to a time in the past when Linux used to swap out an entire process when it was not in use so that another running process could have its memory. This was a very expensive way of managing memory, because there was a big hit on the amount of context switching that occurred. The current swap algorithm uses a paging mechanism. This means that only those pages in memory that are no longer in use will be swapped out to the swap device, rather than the entire process.
The kswapd daemon takes pages that have been marked dirty and writes them out to the swap device. The daemon wakes up in two circumstances: every 5 minutes, to check the dirty list and free up memory, and on demand, whenever a process pushes memory use into the last 20%. When memory use crosses that line, kswapd runs, writing dirty pages out to the swap device until memory use drops back below 80%.
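The 80% threshold described above can be illustrated with a short calculation. This is a sketch only, using hypothetical memory figures for a small guest; it mimics the on-demand wakeup condition, not the kernel's actual code path.

```shell
# Hypothetical figures for illustration: a 256 MB guest with 40 MB free.
mem_total_kb=262144
mem_free_kb=40960

# Percentage of memory in use (integer arithmetic).
used_pct=$(( (mem_total_kb - mem_free_kb) * 100 / mem_total_kb ))
echo "memory in use: ${used_pct}%"

if [ "$used_pct" -gt 80 ]; then
    echo "above the 80% line: kswapd wakes and swaps dirty pages out"
else
    echo "below the 80% line: kswapd waits for its periodic wakeup"
fi
```

With these numbers, memory use is 84%, so the on-demand wakeup would fire.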
On VM, we have the advantage of a very fast swap device: a VM virtual disk, which is essentially a RAM disk defined for the Linux guest. Defining the virtual disk reserves the amount of swap space the Linux guest might require, but the virtual disk does not consume central storage until it is actually used.
This means that the virtual disks are ready to go but do not consume central storage until Linux starts swapping. If the amount of swapping varies, it is best to define multiple virtual disks as swap devices, because only the virtual disks that are actually used take up central storage. Any swap space that is not used does not tie up central storage (except for the small amount that the virtual disk itself uses).
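From the Linux side, enabling multiple virtual disks as swap devices might look like the following sketch. The device names are hypothetical (they depend on how the virtual disks were defined to the guest), and giving the devices different swap priorities is one way to ensure the second disk is touched, and therefore consumes central storage, only after the first fills.

```shell
# Hypothetical device names: two VM virtual disks visible to the guest.
mkswap /dev/dasdb1
mkswap /dev/dasdc1

# Different priorities: Linux fills the higher-priority device first,
# so the lower-priority virtual disk stays untouched until it is needed.
swapon -p 10 /dev/dasdb1
swapon -p 5  /dev/dasdc1

# Verify the active swap devices and their priorities.
cat /proc/swaps
```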
For additional discussion on this topic and recommendations on using Linux swapping and virtual disk swapping, see 11.11, "Linux swapping" on page 288.
As we lowered the available virtual and overall storage, the processor requirements to support swapping to the virtual disk increased. This has a cost: in our measurement of the 256 MB virtual machine experiment with a 1500-user load, swapping to virtual disk ran at about 1000 operations per second; see Figure 12-8. The number of virtual disk I/Os is the measure of swapping to virtual disk. Over the four 15-minute intervals reported, there was an average of 850 K virtual disk I/Os per interval, or almost 1000 per second over each 900-second reporting interval.
Figure 12-8: ESAUSR3 report showing virtual disk I/O for VM guest for 196 MB run
The cost of performing this activity is charged to the kswapd daemon. The Host Application report in Figure 12-9 on page 308 shows a cost of about 9% of an engine to perform nearly 1000 swap I/Os per second. In addition to this Linux swapping cost, the CPU usage of VM itself should be added in.
Figure 12-9: ESAHSTA report for 196 MB run showing kswapd CPU requirement
The goals of the follow-on experiments were to reduce the machine size, first to 196 MB and then to 128 MB. The 196 MB run provided equivalent response time, with the swap rate averaging about 10% higher and the kswapd CPU cost also about 10% higher.
The 128 MB experiment demonstrated that the swap rate is linearly proportional to the CPU required by the kswapd daemon. In the one complete interval shown, swapping reached 4 million operations for the interval, or over 4000 per second. Processor utilization was close to 200% of the two processors, compared to about 130% of the two processors for the previous run.
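The same interval arithmetic applies to the 128 MB run, and comparing the two rates shows how much harder kswapd was working:

```shell
# 128 MB run: 4 million swap I/Os in one 900-second interval.
rate_128=$(( 4000000 / 900 ))
echo "128 MB run: ${rate_128} swap I/Os per second"

# The earlier run swapped at roughly 1000 per second.
rate_prev=1000
echo "roughly $(( rate_128 / rate_prev ))x the earlier swap rate"
```

The 128 MB run swaps at about 4444 I/Os per second, roughly four times the rate of the earlier run, which is consistent with the proportionally higher CPU charged to kswapd.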
Figure 12-10: ESAUSR3 report showing virtual disk I/O for 128 MB run
Figure 12-11: ESAHST1 Process Analysis showing kswapd CPU for 128 MB run
There is a trade-off between reducing storage and the CPU required for swapping. Too much swapping increases CPU usage, while defining too much storage for a virtual machine increases the overall real storage requirement. This will be a constant area of analysis and planning in order to use your current resources most effectively.