Chapter 7: Disk Performance and Fragmentation

Many IT organizations turn to hardware upgrades to improve the speed and responsiveness of Windows NT/2000. From the latest hard drives and CPUs to additional RAM, IT executives are constantly seeking upgrades that will open the door to increased performance. More recently, defragmentation has gained popularity as an effective means of boosting performance at a fraction of the cost of hardware upgrades. Fragmentation is also a significant factor in system instability: users who keep their servers and workstations fragment free experience far fewer crashes and hangs.

This chapter examines what fragmentation is, its impact, safety considerations, and the best ways to defragment a network. It also covers recent reports on the total cost of ownership (TCO) benefits of regularly defragmenting a network, how these findings may affect future hardware upgrade decisions, and the relationship between fragmentation and system stability. Finally, the chapter addresses manual defragmenters, such as the one built into Windows 2000, explains how to use them, and compares them to third-party products.

History and Origins of Fragmentation

During the early days of Windows NT, it was believed that the design of NTFS would effectively eliminate the fragmentation concerns that had plagued earlier operating systems. A look at the history and origins of fragmentation provides a clue to where this belief came from. Fragmentation first appeared about thirty years ago, just after the dark age when computers existed without disks or operating systems. By the late 1960s, disks appeared that could store thousands of bytes, a revolutionary concept at the time. One early computer, the PDP-11, ran an operating system called RT-11 that introduced the concept of storing files in a formal file structure. The downside was that every file had to be contiguous. A disk with plenty of space, but no single free region large enough to accommodate a new file, was "full." With frequent file deletions, it was not unusual for a disk to reach the point where no more files could be created even though it was little more than half full. The solution was SQUEEZE, a command that compacted all files toward the beginning of the disk, coalescing the free space into one contiguous region.
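To make the mechanism concrete, here is a minimal sketch in C of the compaction idea behind SQUEEZE. The names, layout, and block counts are invented for illustration; this is not RT-11's actual code, and a real tool would also move the data blocks, not just the bookkeeping.

    /* Sketch of contiguous-allocation compaction: sliding every file
     * toward the front of the disk merges all free space into one
     * region at the end. */
    #include <stdio.h>

    #define DISK_BLOCKS 16

    typedef struct {
        const char *name;   /* hypothetical file name */
        int start;          /* first block of the file */
        int length;         /* size in blocks (files are contiguous) */
    } File;

    /* Compact files to the start of the disk, preserving their order. */
    static void squeeze(File *files, int count) {
        int next_free = 0;                /* first block not yet claimed */
        for (int i = 0; i < count; i++) {
            files[i].start = next_free;   /* real code would copy the data too */
            next_free += files[i].length;
        }
    }

    int main(void) {
        /* Two 3-block files with gaps around them: no single gap could
         * hold a new 5-block file, even though 10 of 16 blocks are free. */
        File files[] = { {"A.DAT", 2, 3}, {"B.DAT", 9, 3} };
        squeeze(files, 2);
        for (int i = 0; i < 2; i++)
            printf("%s -> blocks %d..%d\n", files[i].name,
                   files[i].start, files[i].start + files[i].length - 1);
        return 0;   /* blocks 6..15 now form one 10-block free region */
    }

The drawback the text describes falls directly out of this design: while the data blocks are being moved, no file's location is stable, so every user of the disk must wait for the compaction to finish.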

When a new operating system (RSX-11) arrived that allowed multiple simultaneous users on the same PDP-11, SQUEEZE became a problem: running it meant that all users had to stop working. This drawback led to a file structure that could locate parts of a file in different places on the disk. Each file had a header giving the location and size of each section of the file, so the file itself could be scattered around the disk in pieces. Thus, fragmentation became a feature, not a bug, in these early systems.
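The idea is easy to see in code. The following is a minimal sketch, again in C, of how such a header (an extent list) lets a logically contiguous file live in scattered pieces on disk. The structure and function names here are invented for illustration and do not come from RSX-11 or any real file system.

    #include <stdio.h>

    typedef struct {
        unsigned start_block;   /* where this piece begins on disk */
        unsigned block_count;   /* contiguous blocks it spans */
    } Extent;

    typedef struct {
        const char *name;       /* hypothetical file name */
        int extent_count;       /* a file in one piece has one extent */
        Extent extents[8];      /* fixed-size table for the sketch */
    } FileHeader;

    /* Map a logical block within the file to its physical disk block. */
    static long logical_to_physical(const FileHeader *f, unsigned logical) {
        for (int i = 0; i < f->extent_count; i++) {
            if (logical < f->extents[i].block_count)
                return f->extents[i].start_block + logical;
            logical -= f->extents[i].block_count;  /* skip this piece */
        }
        return -1;  /* past end of file */
    }

    int main(void) {
        /* A fragmented file: three pieces scattered across the disk. */
        FileHeader f = { "REPORT.TXT", 3,
                         { {100, 4}, {530, 2}, {88, 3} } };
        for (unsigned lb = 0; lb < 9; lb++)
            printf("logical block %u -> physical block %ld\n",
                   lb, logical_to_physical(&f, lb));
        return 0;
    }

Note what the design trades away: files can now be created in any pattern of free space without stopping other users, but every piece added to the extent list is another disk seek when the file is read sequentially.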

The fragmentation approach of RSX-11 was carried over into the OpenVMS operating system, and when its principal designers moved to Microsoft, they built the Windows NT file systems on this same fragmentation model. The problem is that, while hard drive capacities and CPU speeds have grown exponentially, disk speeds have not kept pace. In today's client/server world, where thousands of files are written to and deleted from disks repeatedly, the result is files split into thousands of pieces, exacting a significant toll on system I/O.


