Page File Fragmentation




Paging files present a problem for defragmentation software. Before going any further, though, let us take a look at what the page file is and what it does. Windows NT, for example, supports up to 16 paging files on a system. These files are used for virtual memory: as Windows NT and its applications use more memory than the physical RAM can hold, the Virtual Memory Manager writes the least recently used pages of memory to the paging files to free RAM. If a program later accesses those pages, the Virtual Memory Manager reads them from the paging file back into RAM, where the program can use them. The paging file, then, is used to swap pages out of and back into physical RAM, supplementing the RAM that is installed; access to this disk-based "memory," however, is quite slow compared with physical RAM: paging files operate in the millisecond range, while physical RAM runs in nanoseconds.
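
To make the configuration concrete, the short sketch below (Python, run on Windows) lists the paging files a system is configured to use by reading the PagingFiles value from the Memory Management registry key, which is where the Memory Manager looks at boot. The registry path and value name are standard, but the entry format shown in the comment (path followed by initial and maximum sizes in megabytes) should be treated as an assumption, since it can vary between Windows versions.

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

# PagingFiles is a REG_MULTI_SZ value: one string per configured paging file.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    paging_files, _value_type = winreg.QueryValueEx(key, "PagingFiles")

for entry in paging_files:
    # e.g. "C:\pagefile.sys 2048 4096" (assumed format: path, initial MB, maximum MB)
    print(entry)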

Page-file fragmentation can impact system performance in a couple of ways, depending on system configuration and use. First, it can prevent other files from being created contiguously: the page-file fragments, scattered all over the disk, break the free space into smaller pieces, leaving less contiguous room for larger files, so new files take longer to write and existing ones take longer to read. Second, paging activity itself can slow down, depending on the degree of page-file fragmentation. If each page-file fragment is larger than 64 KB, fragmentation is not much of a problem, because only 64 KB (the system limit) is transferred per I/O anyway; even a 100-MB page file in one large contiguous location still takes roughly 1600 I/Os to read in full. But if the fragments are smaller than 64 KB, multiple I/Os are required just to read one chunk of data. Additionally, the data itself is not necessarily contiguous; for example, if a program has 60 KB stored in the page file, the data may reside in three widely separated 20-KB pieces. With an extremely fragmented page file (say, 100 MB in size with 10,000 fragments), each fragment averages only about 10 KB, and there is a noticeable lag every time the file is accessed. Normally, however, a page file is fragmented into somewhere between 1000 and 4000 fragments, so the performance drag is not as drastic on an average system. The worked arithmetic below restates these figures.
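
A small sketch of that arithmetic, using the hypothetical 100-MB page file and the 64-KB transfer limit from the paragraph above:

KB = 1024
MB = 1024 * KB

transfer_limit = 64 * KB            # maximum data moved per paging I/O
pagefile_size = 100 * MB            # hypothetical page file from the text

# Contiguous page file: the transfer limit alone dictates the I/O count.
contiguous_ios = pagefile_size // transfer_limit
print(contiguous_ios)               # 1600 sequential reads to walk the whole file

# Heavily fragmented page file: 10,000 fragments averaging ~10 KB each, so each
# 64-KB transfer spans several fragments and every fragment costs a seek.
fragments = 10_000
avg_fragment_kb = pagefile_size // fragments // KB
print(avg_fragment_kb)              # ~10 KB per fragment
print(fragments)                    # ~10,000 seeks instead of 1600 sequential reads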

Defragmenting the paging file into a single location not only speeds up paging performance but also provides more consolidated free space for defragmenting the rest of the system; however, there is a problem. Once the system starts up, these files are always open and cannot be moved or deleted. At startup, the Windows NT system process duplicates the file handles for the paging files so that they remain open at all times, and the operating system prevents any other process from deleting or moving them. For this reason, paging files are a challenge for defragmentation software. To defragment the paging file safely, a defragmenter must do so at system boot time, before the Virtual Memory Manager gets a chance to lock it down. While boot-time defragmentation is a desirable feature, regularly rebooting a system just to defragment it is not a desirable routine, so the best approach is to keep the rest of the file system defragmented and thereby mitigate any fragmentation problems caused by the paging files.
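
As an illustration of where boot-time tools hook in, the sketch below (Python, Windows only) reads the BootExecute value of the Session Manager registry key; native-mode programs listed there run during startup before the paging files are opened, which is the window a boot-time defragmenter (Sysinternals PageDefrag, for example) uses. This only shows what is registered; actually scheduling a boot-time defragmentation is left to the defragmentation product, and the PageDefrag mention is an illustrative assumption rather than something described in this chapter.

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager"

# BootExecute is a REG_MULTI_SZ listing native programs run at startup,
# before the paging files (and most other files) are held open.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    boot_execute, _ = winreg.QueryValueEx(key, "BootExecute")

for entry in boot_execute:
    print(entry)    # the default entry is "autocheck autochk *"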

As an active paging file is always held open by the NT operating system, it is impossible for online defragmenters to access it. Paging file fragmentation can be addressed either offline or by using Diskeeper's Frag Guard feature, which functions by monitoring the area on the disk at the end of the paging file and ensuring that enough space is available for it to expand.


