10.3 Windows Server 2003

Windows Server 2003 continues the trend of each successive Windows NT release building on the strengths of the previous release and adding more storage-related features of its own. To appreciate the amount of work put in by Microsoft, consider Figure 10.5.

Figure 10.5. Windows Storage Features for Windows Server 2003 and Beyond

graphics/10fig05.gif

The unshaded boxes in Figure 10.5 represent storage features that are expected to ship with Windows Server 2003 and have already been described in this chapter. The shaded boxes show components shipping after Windows Server 2003. The release vehicle for these components is undecided. Judging by the past practices adopted by Microsoft, the possibilities include the following:

  • Ship in the next major release cycle of Windows NT (currently code-named Windows Longhorn).

  • Ship via Windows Resource Kit or a service pack.

  • Ship via release to the Web; for example, download from the Microsoft Web site.

  • Ship via ISVs and IHVs; for example, Microsoft makes the software available to partners who bundle it with their offerings.

10.3.1 Storport Driver Model

As explained in Chapter 1, the Windows NT operating system provides for a layer of device drivers to achieve efficient and scalable I/O operations.

For a complete description of the Windows storage I/O stack and all the layers, please refer to Chapter 1. For now, suffice it to say that one of these layers is the port driver layer. The port driver is responsible for receiving I/O requests from upper layers, preparing a SCSI command data block (CDB), and passing the request to the device. Port drivers are responsible for maintaining information that allows communication with the device. While preparing CDBs and interpreting the command results, port drivers use the services of a miniport driver that is expected to be written by the device vendor.
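To make the port driver's CDB-building role concrete, here is a minimal sketch in C++. It simply fills in a SCSI READ(10) command block according to the SCSI command format; it is an illustration of the data structure involved, not code from the Windows port driver itself.

    #include <cstdint>
    #include <cstring>

    // Minimal sketch: fill a 10-byte SCSI READ(10) CDB for a given logical
    // block address and transfer length. A real port driver builds the CDB
    // inside a SCSI request block and hands the request to the miniport.
    void BuildRead10Cdb(uint8_t cdb[10], uint32_t lba, uint16_t blocks)
    {
        std::memset(cdb, 0, 10);
        cdb[0] = 0x28;                                // READ(10) operation code
        cdb[2] = static_cast<uint8_t>(lba >> 24);     // logical block address,
        cdb[3] = static_cast<uint8_t>(lba >> 16);     // big-endian
        cdb[4] = static_cast<uint8_t>(lba >> 8);
        cdb[5] = static_cast<uint8_t>(lba);
        cdb[7] = static_cast<uint8_t>(blocks >> 8);   // transfer length in blocks
        cdb[8] = static_cast<uint8_t>(blocks);
    }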

Prior to Windows Server 2003, Microsoft provided a SCSIPort driver and expected vendors to write SCSIPort miniport drivers that handled their SCSI and Fibre Channel devices. A particular Windows installation may have multiple miniport drivers corresponding to devices from multiple vendors. The SCSIPort driver routes the request to the appropriate vendor-written SCSI miniport driver.

There are several problems with this situation:

  • The model assumes that Fibre Channel devices have capabilities similar to SCSI devices, which is just not true! Further, the model assumes that newer SCSI-3 devices are similar to older SCSI devices, which again is patently untrue.

  • In the interest of simplifying the task of writing a miniport driver, the SCSIPort driver follows a single threading model without any support for full duplex communication. This may prevent the system from attaining the desired and achievable I/O throughput.

  • The port driver has some information that it does not pass to the miniport, instead requiring the miniport to collect this information laboriously via multiple calls. In particular, this is true for scatter/gather lists.

  • Out of sheer frustration, Fibre Channel device vendors resorted to either writing a monolithic driver that encompassed the functionality of the port and miniport drivers or replacing the port driver with their own port driver. Because this process required that the functionality of the port driver be reverse-engineered, the attempts have met with varying degrees of success. Of course, things become rather interesting when a different vendor comes along and tries to make its miniport run with a port driver written by another vendor.

With Windows Server 2003, Microsoft has introduced a new driver model with a Storport driver. HBA vendors are now expected to write miniports that link with the Storport driver rather than the SCSIPort driver. To keep this effort to a minimum, Microsoft has kept the Storport model backward compatible with the SCSIPort model. So vendors who want to put in minimal work may easily reap some (but not all) of the advantages of the new model. To take complete advantage of the new model, these vendors will have to do some more work beyond a simple recompile and relink.

The new architecture has the following major advantages:

  • Storport enables higher performance by allowing full duplex I/O. The drawback is that the miniport driver now needs to worry about serializing execution, where appropriate. Thus the higher performance is achieved at the cost of higher complexity.

  • Storport optimizes the interface between the port and miniport drivers. For example, the new interface allows a miniport to collect scatter/gather I/O information with a single callback rather than using multiple callbacks in a loop. Scatter/gather list is a generic term for a situation in which a single I/O is carried out using multiple separate (and disjoint) buffers simultaneously; a simplified sketch of such a list appears after this list of advantages.

  • Storport improves the interface to meet requirements of high-end storage vendors, particularly Fibre Channel and RAID vendors. For example, the old SCSIPort model allowed for very little in terms of queue management. Newer devices need sophisticated management. The Storport model allows for 254 outstanding requests per logical unit. The maximum number of outstanding requests per adapter is thus limited only by the number of logical units on the adapter.


  • Storport allows for a tiered hierarchy of resets to recover from error conditions. Whereas the older model did a highly disruptive (bus) reset, resets in this architecture are done in a minimally disruptive fashion (LUN, then target, and then if all else fails, bus reset).

  • The new model allows for an enhanced interface dedicated to manageability.

  • The new model provides an interface that removes the requirement to create a "ghost device." The SCSIPort model does not allow an application to query capabilities if no unit is mounted. So vendors created a ghost device, simply to be able to make certain queries. The Storport model removes the requirement to create a ghost device by supporting query capabilities even when no miniport unit is yet mounted or attached.

  • All of this is provided in as minimally disruptive a fashion as possible because Storport is literally backward compatible with SCSIPort. Vendors can choose to recompile and relink their existing code to work with Storport (instead of SCSIPort) with very little effort. They will benefit from the new model but will not reap all the advantages if they adopt this minimal-effort path. Legacy SCSI devices can continue to run with the existing SCSIPort driver.
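To picture the scatter/gather list mentioned in the list above, here is a simplified C++ sketch. It is modeled loosely on the scatter/gather structures declared in the Windows DDK headers; the field names are illustrative rather than the exact Storport definitions.

    #include <cstdint>

    // One contiguous fragment of an I/O buffer, described by its physical
    // address and length. An I/O request that spans disjoint buffers is
    // described by an array of these elements.
    struct SgElement {
        uint64_t PhysicalAddress;
        uint32_t Length;            // length of this fragment in bytes
    };

    struct SgList {
        uint32_t  NumberOfElements;
        SgElement Elements[1];      // variable-length array in practice
    };

    // Total bytes covered by the list: the sum of the fragment lengths.
    inline uint64_t SgListTotalBytes(const SgList& list)
    {
        uint64_t total = 0;
        for (uint32_t i = 0; i < list.NumberOfElements; ++i)
            total += list.Elements[i].Length;
        return total;
    }

With Storport, a miniport can obtain such a list for a request with a single call instead of looping over per-element callbacks.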

For further details, see Chapter 2.

10.3.2 Volume Shadow Copy Service

Windows Server 2003 introduces the volume shadow copy service and related infrastructure. Volume shadows are also popularly referred to as snapshots. The different terminology is intended to respect intellectual property rights. The volume shadow copy architecture, shown in Figure 10.6, consists of four types of components, three of which typically have multiple entities present.

Figure 10.6. Volume Shadow Copy Service

graphics/10fig06.gif

The four components are

  1. Volume shadow copy service

  2. Shadow copy writers

  3. Shadow copy requestors

  4. Snapshot providers

This service enables consistent, point-in-time copies of data to be created and managed. The highlights of the service include the following:

  • The volume shadow copy service is written by Microsoft and provides APIs for backup applications to request the creation of snapshots. This service provides the coordination necessary to ensure that all I/O operations are properly held, caches have been flushed, and both system I/O and application I/O are frozen to allow a point-in-time copy to be accomplished.

  • Shadow copy writers are applications, such as SQL Server or Microsoft Exchange, that integrate some volume shadow copy service code. It is desirable for all applications, including databases and enterprise resource planning applications, to have a shadow copy writer for integration with the shadow copy service. Microsoft is expected to make available Active Directory, SQL Server, and Microsoft Exchange writers.

  • Shadow copy requestors are backup applications or other applications that cause a volume copy to be created; for example, an application might create a copy of data for testing with beta software. Some of these will be written by Microsoft, and some are expected to be written by other ISVs. The backup application that ships natively with Windows Server 2003 is also a shadow copy requestor.

  • Snapshot providers are the entities that actually create the snapshot or volume copy. Microsoft provides a default software provider that creates a volume copy using a copy-on-write technique. Storage unit vendors are expected to write more providers. The architecture allows for a variety of schemes to be used for creation of the snapshot, including breaking a mirror in hardware.

Note that the infrastructure provided by Microsoft creates only a single snapshot at a time. However, the infrastructure does cater to an ISV's requirements for creating snapshots one at a time, for organizing and managing multiple snapshots, and for allowing read-only mounting of any given snapshot.
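To give a feel for what a requestor does, here is a minimal, hedged sketch of the snapshot-creation sequence using the published VSS requestor interfaces (IVssBackupComponents from the shadow copy SDK). Error handling and the writer-metadata steps are omitted, and the volume name is an assumption; treat the calls as illustrative rather than as a complete requestor.

    #include <windows.h>
    #include <cguid.h>
    #include <vss.h>
    #include <vswriter.h>
    #include <vsbackup.h>

    // Minimal requestor sketch: create a shadow copy (snapshot) of one volume.
    int wmain()
    {
        CoInitialize(NULL);

        IVssBackupComponents* backup = NULL;
        if (FAILED(CreateVssBackupComponents(&backup)))
            return 1;

        backup->InitializeForBackup(NULL);
        backup->SetBackupState(false, true, VSS_BT_FULL, false);

        VSS_ID snapshotSetId, snapshotId;
        backup->StartSnapshotSet(&snapshotSetId);

        WCHAR volume[] = L"C:\\";                  // assumed volume
        backup->AddToSnapshotSet(volume, GUID_NULL, &snapshotId);

        IVssAsync* async = NULL;
        backup->PrepareForBackup(&async);          // writers freeze application I/O
        async->Wait();
        async->Release();

        backup->DoSnapshotSet(&async);             // providers create the copy
        async->Wait();
        async->Release();

        backup->Release();
        CoUninitialize();
        return 0;
    }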

More details about the volume shadow copy service are available in Chapter 5. The volume shadow copy service SDK is available from Microsoft on a nondisclosure basis only.

10.3.3 Virtual Disk Service

The virtual disk service (VDS) is a management interface that ships with Windows Server 2003 and is meant to provide an abstraction for disk virtualization, no matter where the virtualization is accomplished.

Before describing some architectural details of the virtual disk service, it is worthwhile to step back and consider the motive behind this service. The grand vision is to allow a storage administrator, whether programmatically, through a batch file, or through a management GUI, to specify functionality such as the following:

  • Obtain a storage volume, make it RAID 5, and make it at least 10GB in size

  • Obtain a storage volume, make it 10GB in size, and make it a simple volume with no RAID features

The idea is that storage administrators routinely allocate storage for making a snapshot and backing up the snapshot, and later release the snapshot storage volume back into the free pool. VDS provides a way to accomplish such tasks, irrespective of where the virtualization is done, and works with all kinds of storage hardware, as well as all kinds of storage interconnects.

Figure 10.7 shows the architecture of the virtual disk service. The shaded boxes represent components written by Microsoft that ship with the server operating system. IHVs are expected to write hardware providers. VDS provides an abstraction so that management applications can be written via a single interface, no matter what the characteristics of the underlying storage hardware happen to be. VDS also allows for a management application to remain unmodified, yet useful, even when new storage hardware ships after the management application has shipped. Storage hardware vendors can innovate and be assured that their new hardware will be discovered and managed, thanks to the integration with VDS.
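As a hedged illustration of how a management application attaches to VDS, the following sketch uses the VDS loader interface (CLSID_VdsLoader and IVdsServiceLoader, declared in vds.h). It stops at obtaining the service interface; enumeration of providers and LUNs is only hinted at in the comments.

    #include <windows.h>
    #include <vds.h>

    // Minimal sketch: connect to the Virtual Disk Service on the local machine.
    int wmain()
    {
        CoInitialize(NULL);

        IVdsServiceLoader* loader = NULL;
        HRESULT hr = CoCreateInstance(CLSID_VdsLoader, NULL, CLSCTX_LOCAL_SERVER,
                                      IID_IVdsServiceLoader,
                                      reinterpret_cast<void**>(&loader));
        if (SUCCEEDED(hr))
        {
            IVdsService* service = NULL;
            hr = loader->LoadService(NULL, &service);   // NULL means local machine
            loader->Release();

            if (SUCCEEDED(hr))
            {
                service->WaitForServiceReady();
                // A real management application would now enumerate the software
                // and hardware providers, packs, and LUNs exposed by the service.
                service->Release();
            }
        }

        CoUninitialize();
        return 0;
    }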

Figure 10.7. Virtual Disk Service

graphics/10fig07.gif

Microsoft will ship two VDS providers that cater to basic disks and dynamic disks. (For a description of basic disks and dynamic disks, see Chapter 6.)

Storage hardware vendors are expected to write VDS providers. Each VDS provider is simply a COM server invoked by the VDS service. Hardware providers are expected to make available the following kinds of functionality:

  • LUN discovery

  • LUN creation, destruction, and other LUN management

  • COM objects for LUNs, the storage controller, the drive, and even the provider itself

  • Remote access so that a management application can run on a workstation and communicate with the VDS service running on a server

The interfaces to write a provider are currently (as of this writing) available from Microsoft on a nondisclosure agreement basis. See Chapter 7 for more details about the virtual disk service.

10.3.4 Multipath I/O

Multipath I/O is a high-availability solution for the Windows NT server family that Microsoft is introducing with Windows Server 2003 and also making available for Windows 2000 Service Pack 2 and higher. Microsoft provides a multipath development kit to OEMs, IHVs, and ISVs that they use to develop and distribute their solution to end users. The highlights of the solution are as follows:

  • It works on both Windows 2000 and Windows Server 2003.

  • It is a fairly complicated architecture that involves three device drivers written by Microsoft and a minidriver written by the vendor.

  • It provides for failover, failback, and load balancing. It also allows for up to 32 alternate paths to the storage unit.

  • It is based on PnP notifications and needs no static predefined configuration definitions.

  • It is compatible with Microsoft Cluster Server.

The vendor-written minidriver (called a device-specific module, or DSM) is responsible for the following tasks (a simplified sketch of the failover decision logic appears after this list):

  • Identifying multiple paths to the same storage unit

  • Assigning an initial path (using load balancing or a preferred path or other algorithm)

  • Upon an I/O error, deciding whether the error is permanent or whether the I/O operation is worth retrying

  • Deciding if the error requires a failover operation and which alternate path should be used

  • Detecting conditions that warrant a failback operation

  • Performing device-specific initialization

  • Handling select commands such as Reserve and Release, and deciding if the commands should be sent down all I/O paths or just certain select paths
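The error-classification and failover decisions in the list above can be pictured with the following simplified C++ sketch. The types and function names are hypothetical stand-ins for illustration; they are not the actual DSM interfaces defined by the multipath development kit.

    #include <vector>
    #include <cstdint>

    // Hypothetical types standing in for the real DSM data structures.
    struct Path { uint32_t id; bool healthy; };

    struct DeviceState {
        std::vector<Path> paths;   // all discovered paths to the storage unit
        uint32_t activePath;       // index of the path currently in use
    };

    enum class ErrorAction { Retry, Failover, Fail };

    // Decide whether an I/O error is worth retrying on the same path, calls
    // for a failover to another path, or is a permanent (fatal) error.
    ErrorAction ClassifyError(bool transportError, bool mediaError)
    {
        if (mediaError)     return ErrorAction::Fail;      // no path will help
        if (transportError) return ErrorAction::Failover;  // path problem
        return ErrorAction::Retry;
    }

    // Pick an alternate healthy path for failover (simple first-fit policy).
    bool SelectAlternatePath(DeviceState& dev)
    {
        for (uint32_t i = 0; i < dev.paths.size(); ++i) {
            if (i != dev.activePath && dev.paths[i].healthy) {
                dev.activePath = i;
                return true;
            }
        }
        return false;   // no surviving path: surface the error to upper layers
    }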

Multipath I/O is described in more detail in Chapter 9.

10.3.5 Improved Manageability

Windows 2000 introduced a trend to have both GUI and command-line tools for systems management. Windows Server 2003 continues that theme with command-line tools available to do the following:

  • Manage file system features, including defragmentation

  • Manage volume shadow copy service

  • Manage volumes

  • Manage Remote Storage Services (RSS)

Windows Server 2003 also accelerates the trend introduced in Windows 2000 to provide performance and management information using WMI. More parts of the operating system have been modified to provide management information using the WMI architecture. Storport, the volume shadow copy service, and Dfs are some examples.

10.3.6 SAN-Aware Volume Management

Readers familiar with UNIX will readily understand that Windows has no equivalent to the UNIX mount table. Thus, Windows attempts to mount any volume it happens to see. If the file system on a volume is not recognized, the raw file system claims ownership of the volume. Before a Windows server joined a SAN, then, the administrator had to carefully use LUN masking, zoning, and other management techniques to ensure that the Windows server would be able to see only a limited amount of storage (that belonged to it). Windows Server 2003 changes this situation.

Microsoft has changed the Mount Manager driver (described in Chapter 6) to be more SAN friendly. Specifically, the Mount Manager can be configured to mount only volumes that it has previously seen and to ignore any new volumes it sees. The easiest way of managing the configuration settings is to use the mountvol command-line utility.
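For example, at the time of this writing the mountvol utility accepts a /N switch to disable automatic mounting of newly discovered volumes and an /E switch to re-enable it; consult the utility's built-in help for the exact switches supported by a given build.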

10.3.7 SAN Application Enabling

Windows Server 2003 introduces several features that enable ISVs to write powerful storage management applications for a SAN environment. These include the following:

  • Volumes can now be mounted in a read-only state.

  • Applications can now use a new API that, when used in conjunction with the volume shadow copy service, allows an application to perform a read from a specified volume shadow (popularly also referred to as a snapshot). ISVs thus are able to create versatile applications that can deal with N-way mirrors and can check data integrity.

  • A new API allows an application to set the valid length of a file (a hedged sketch of this call appears after this list). ISVs can thus write a distributed file system, as well as backup and restore applications, that perform streaming block-level copies to disk and then set the valid length of the file.

  • Administrators can now manage volumes yet have lower-level security privileges.

  • New APIs have been added to allow file system filter driver writers better stream management functionality. File system filter driver writers can also benefit from better heap management and device object management routines, as well as new security management routines. New volume management APIs are present as well.
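The valid-data-length call referred to in the list above is, to the best of my knowledge, the Win32 SetFileValidData API. The following hedged sketch shows the typical sequence of pre-extending a file and then declaring how much of it will contain valid data; the file name and size are assumptions, and the caller must hold the SeManageVolumePrivilege privilege.

    #include <windows.h>

    // Enable a named privilege on the current process token.
    static BOOL EnablePrivilege(LPCWSTR name)
    {
        HANDLE token;
        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token))
            return FALSE;

        TOKEN_PRIVILEGES tp = {};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        LookupPrivilegeValueW(NULL, name, &tp.Privileges[0].Luid);

        BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
        CloseHandle(token);
        return ok;
    }

    int wmain()
    {
        EnablePrivilege(L"SeManageVolumePrivilege");

        HANDLE file = CreateFileW(L"restored.dat", GENERIC_WRITE, 0, NULL,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        // Pre-extend the file to its final size...
        LARGE_INTEGER size;
        size.QuadPart = 64 * 1024 * 1024;          // 64MB, for illustration
        SetFilePointerEx(file, size, NULL, FILE_BEGIN);
        SetEndOfFile(file);

        // ...and declare that the whole range will be filled with valid data,
        // so the file system need not zero-fill it before block-level copies.
        SetFileValidData(file, size.QuadPart);

        CloseHandle(file);
        return 0;
    }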

10.3.8 NTFS Improvements

Windows Server 2003 has made some significant improvements in NTFS, including the following:

  • It provides 10 to 15 percent improvement in NTFS performance.

  • The defragmentation APIs have improved remarkably, and the limitations on what they can accomplish have been reduced.

  • NTFS can now mount volumes in a read-only state.

  • Default ACLs on NTFS volumes have been strengthened to improve security.

10.3.9 Defragmentation Improvements

Windows Server 2003 builds on the defragmentation APIs supported by the Windows I/O subsystem in general and NTFS in particular. The defragmentation improvements include the following (a sketch of the underlying move-file control code appears after this list):

  • The NTFS master file table (MFT) can now be defragmented. Entries in the MFT can be moved around, even when the file represented by an entry in the MFT is opened by an application.

  • Encrypted files can be defragmented without the file having to be opened and read, so security and performance are improved.

  • Noncompressed files are now defragmented at the disk cluster boundary rather than at the memory page boundary.

  • NTFS can defragment files even when the disk cluster size is greater than 4K.

  • NTFS can defragment not just the unnamed data stream, but also reparse points and file attributes.
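A hedged sketch of the move-file control code that underlies these defragmentation APIs appears below. FSCTL_MOVE_FILE, issued through DeviceIoControl with a MOVE_FILE_DATA structure, asks the file system to relocate a run of a file's clusters; the handle values and cluster numbers here are placeholders, and a real defragmenter first reads the volume bitmap and the file's retrieval pointers to choose them.

    #include <windows.h>
    #include <winioctl.h>

    // Sketch: ask the file system to move 'clusterCount' clusters of 'file',
    // starting at virtual cluster 'startVcn' within the file, to logical
    // cluster 'targetLcn' on the volume. 'volume' is an open handle to the
    // volume (for example, \\.\C:); 'file' is an open handle to the file.
    BOOL MoveClusters(HANDLE volume, HANDLE file,
                      LONGLONG startVcn, LONGLONG targetLcn, DWORD clusterCount)
    {
        MOVE_FILE_DATA move = {};
        move.FileHandle           = file;
        move.StartingVcn.QuadPart = startVcn;   // where the run begins in the file
        move.StartingLcn.QuadPart = targetLcn;  // where it should land on the volume
        move.ClusterCount         = clusterCount;

        DWORD bytesReturned = 0;
        return DeviceIoControl(volume, FSCTL_MOVE_FILE,
                               &move, sizeof(move),
                               NULL, 0, &bytesReturned, NULL);
    }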

10.3.10 EFS Improvements

The encrypting file system (EFS) was first introduced in Windows 2000. Windows Server 2003 adds refinements to the EFS to improve security. Specifically, the additional functionality provided includes the following:

  • The encrypting file system now allows multiple users to access an encrypted file (a brief sketch of the relevant Win32 calls appears after this list). Recall that EFS encrypts a file using a symmetric key algorithm (the same key is used to encrypt or decrypt), and the symmetric key itself is encrypted via an asymmetric key algorithm. Specifically, the symmetric key is encrypted with a user's public key and stored in the same file. Windows Server 2003 simply allows the symmetric key to be encrypted and stored multiple times, each time using a different user's public key.

  • EFS now supports full revocation list checking for a user's certificate, thus preventing a user who is no longer authorized from accessing a file that he or she may have been able to access in the past.

  • EFS now supports more encryption algorithms by providing support for the Microsoft cryptographic service provider.

  • End-to-end encryption over WebDAV can now be done. The Windows 2000 version decrypted files before transferring content over the network using WebDAV. The content transmitted over WebDAV is now encrypted and decrypted locally at the client.

  • Offline files can now be stored in an encrypted form. Windows 2000 EFS support did not include encrypting the offline file cache.

  • EFS encrypted files can now be stored in a Web folder.
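To illustrate how an application touches EFS, here is a minimal, hedged sketch using the Win32 EncryptFile and FileEncryptionStatus calls; the path is an assumption. Granting additional users access to the file is done with AddUsersToEncryptedFile, which takes those users' EFS certificates and is deliberately omitted here.

    #include <windows.h>

    // Minimal sketch: encrypt a file with EFS and confirm its status.
    int wmain()
    {
        if (!EncryptFileW(L"C:\\data\\secret.txt"))      // path is illustrative
            return 1;

        DWORD status = 0;
        if (FileEncryptionStatusW(L"C:\\data\\secret.txt", &status) &&
            status == FILE_IS_ENCRYPTED)
        {
            // The file is now encrypted on disk; additional users could be
            // granted access with AddUsersToEncryptedFile.
        }
        return 0;
    }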

10.3.11 Remote Storage Services

The Hierarchical Storage Management (HSM) provided by Remote Storage Services in Windows 2000 supported only tape as the secondary media. (RSM in Windows 2000 supported other media such as optical drives, changers, and jukeboxes, but the HSM solution did not.) Windows Server 2003 provides HSM support for other secondary media besides tape.

10.3.12 Boot Improvements

Windows XP and Windows Server 2003 include optimizations that minimize the boot time of a system. The loader (ntldr) uses a single I/O operation per file while reading the required files off the disk. The operating system also overlaps disk I/O with device initializations, and it delays the initialization of system processes and services that are not essential for a boot.

10.3.13 CHKDSK Improvements

Windows 2000 improved availability by significantly reducing the number of situations in which CHKDSK needed to be run fully, as well as the amount of time taken by CHKDSK when it did need to run. Windows Server 2003 continues this trend. Here is some quantitative data that may or may not hold true in the final shipping product: A test case with 3 million files completed CHKDSK on Windows Server 2003 in approximately one-twelfth the time that the same test case took on Windows 2000.

10.3.14 Caching Behavior Improvements

Windows XP I/O benchmarks reported a much lower throughput with SCSI disks as compared to the performance benchmarked with Windows 2000. And therein lies an interesting tale.

Operating system and storage disk vendors have provided caching features, in an attempt to boost performance and throughput. Windows NT has a Cache Manager that provides caching features for all file systems. Some storage disks also provide a high-speed cache memory within the storage subsystem. Although caching can improve performance, the tradeoff is that when data is written to a cache instead of to the storage media, there is a potential loss of data if the data is never transferred from the cache to the disk media. To allow an application writer to control caching behavior, Windows NT provides the following facilities:

  • An application can specify the FILE_FLAG_WRITE_THROUGH parameter in the CreateFile API to indicate that a device may not complete a write request until the data is committed to media. The Windows NT drivers are expected to communicate this behavior to the storage device using the SCSI Force Unit Access (FUA) flag. The FUA flag is specified by the SCSI standards and can be used to disable caching within a storage device on a per-I/O basis.

  • An application can specify the FILE_FLAG_NO_BUFFERING parameter in the CreateFile API to indicate that no caching should be performed in the file system layer.

  • An application can use the FlushFileBuffers API to force all data for an open file handle to be flushed from the system cache; the API also sends a command to the disk to flush its cache. Note that, contrary to its name, this API affects all data stored in the device cache. (A brief usage sketch of the first two flags follows this list.)
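As an illustration of the first two flags, the following hedged sketch opens a file for unbuffered, write-through I/O. FILE_FLAG_NO_BUFFERING obliges the caller to align buffers and transfer sizes to the volume's sector size; the sketch assumes a 512-byte sector and uses a page-aligned buffer from VirtualAlloc, and the file name is illustrative.

    #include <windows.h>

    // Sketch: write one sector-aligned buffer so that the data bypasses the
    // system cache (FILE_FLAG_NO_BUFFERING) and is committed to media before
    // the write completes (FILE_FLAG_WRITE_THROUGH, conveyed via FUA).
    int wmain()
    {
        HANDLE file = CreateFileW(L"C:\\data\\journal.dat",
                                  GENERIC_WRITE, 0, NULL, OPEN_ALWAYS,
                                  FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                                  NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        const DWORD sectorSize = 512;   // assumed; real code queries the volume
        void* buffer = VirtualAlloc(NULL, sectorSize, MEM_COMMIT, PAGE_READWRITE);
        if (buffer != NULL)
        {
            DWORD written = 0;
            WriteFile(file, buffer, sectorSize, &written, NULL);
            VirtualFree(buffer, 0, MEM_RELEASE);
        }

        CloseHandle(file);
        return 0;
    }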

The problem is that the Windows platform has been mishandling application requests to refrain from caching, and Windows Server 2003 is the first Windows platform that correctly handles write-through requests. This meant that performance benchmarks were artificially high on the Windows NT 4.0 and Windows 2000 platforms. In addition, some operating systems competing with Windows NT have a similar bug. Microsoft recommends that administrators use the configuration utility supplied by the storage vendor to disable the storage drive cache. However, the matter has some other complications, such as the following:

  • Windows XP fixed the bug for basic disks, but not for dynamic disks. Thus it appeared for a while that Microsoft heavily favored dynamic disks. The concept of basic and dynamic disks, which pertains to how information about the logical partitioning of the disk is written on the disk, is explained in Chapter 6.

  • Microsoft literature has been incorrectly requiring application writers that favor higher performance to also use the FILE_FLAG_WRITE_THROUGH parameter in the CreateFile API. Microsoft has indicated that it will be fixing all applications, tools, and utilities it owns to remove the FILE_FLAG_WRITE_THROUGH parameter when the emphasis is on performance. Microsoft has also indicated that it will make available an application compatibility layer to take care of applications that have not yet been modified to remove the FILE_FLAG_WRITE_THROUGH parameter when the emphasis is on performance.

10.3.15 Automated System Recovery

Windows Server 2003 improves system reliability by providing a mechanism to perform a disaster recovery operation. In particular, a server can be recovered from a disk failure via a one-step restore process that will restore all operating system information and system state. Automated System Recovery is based on use of the volume shadow copy service, and it requires the presence and use of a floppy drive.

10.3.16 Dfs Improvements

Dfs was first introduced in Windows NT 4.0 and enhanced in Windows 2000. Windows Server 2003 provides some more enhancements to Dfs, including the following:

  • Dfs now supports multiple roots. This means that an enterprise can enjoy the advantages of consolidating views into a namespace, yet also have multiple namespaces for security and administrative purposes. This is extremely useful for corporations that have multiple divisions; for example, a corporation that provides consumer products and also homeland security products. Another case where this would be useful is when a corporation acquires another corporation and wishes to administer the two separately.

  • File replication has been improved.

  • Load balancing has been improved.

  • Users can select servers closest to their location, improving performance.

10.3.17 WebDAV Redirector

Windows 2000 servers and clients provided a wide range of connectivity, offering support for CIFS, NFS, NetWare, and Macintosh client or server connectivity. Windows Server 2003 adds a WebDAV client. WebDAV stands for "Web Distributed Authoring and Versioning," an Internet standard protocol that allows file transfer using HTTP as the transport protocol. Just as a person downloading an Excel file from a server is unaware that the CIFS protocol is being used, users are not aware that a file may be retrieved from a server via WebDAV.

10.3.18 Driver Infrastructure Improvements

Windows Server 2003 builds on the momentum generated by Windows XP to make more tools available to driver writers, and it significantly toughens the testing and logo certification requirements in order to make the drivers more reliable. Driver writers not only benefit from more tools and education to help them write their drivers, but they also get more feedback and information that allows them to debug their existing drivers and make an updated version available.

10.3.19 HBA API Support

The Storage Networking Industry Association (SNIA) defined a C library API to allow storage management applications to manage Fibre Channel HBAs. The APIs defined include support for querying and setting HBA configuration, as well as for measuring HBA performance statistics.
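For a flavor of the library, the following hedged sketch enumerates adapters using a few calls from the SNIA HBA API; the header name and the exact set of attributes reported vary by vendor package, so treat the details as illustrative.

    #include <cstdio>
    #include "hbaapi.h"   // SNIA HBA API header; name and location vary by package

    // Sketch: list the Fibre Channel HBAs visible through the SNIA HBA API.
    int main()
    {
        if (HBA_LoadLibrary() != HBA_STATUS_OK)
            return 1;

        HBA_UINT32 count = HBA_GetNumberOfAdapters();
        for (HBA_UINT32 i = 0; i < count; ++i)
        {
            char name[256] = {};
            if (HBA_GetAdapterName(i, name) != HBA_STATUS_OK)
                continue;

            HBA_HANDLE handle = HBA_OpenAdapter(name);
            if (handle == 0)
                continue;

            // Adapter attributes (manufacturer, model, firmware version, and
            // so on) come back in an HBA_ADAPTERATTRIBUTES structure.
            HBA_ADAPTERATTRIBUTES attrs = {};
            if (HBA_GetAdapterAttributes(handle, &attrs) == HBA_STATUS_OK)
                std::printf("Adapter %u: %s\n", i, name);

            HBA_CloseAdapter(handle);
        }

        HBA_FreeLibrary();
        return 0;
    }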

Although Microsoft and SNIA members both support the HBA API, the approaches differ a little. The SNIA approach, illustrated in Figure 10.8, requires three components:

  1. A generic HBA API DLL owned and maintained by SNIA. This DLL exposes a standard interface for the benefit of management applications. At the bottom edge, the DLL interfaces with multiple vendor-written DLLs.

  2. An HBA vendor-written DLL that plugs into the generic HBA API DLL. This vendor-written DLL makes available management information and interacts with the vendor-written driver using proprietary IOCTLs.

  3. A vendor-written device driver for the HBA.

Figure 10.8. SNIA HBA API

graphics/10fig08.gif

Although this standardization effort has its merits, Microsoft appears to see some problems with this approach, including the following:

  • There is no clear way to manage the distribution and versioning of the proposed dynamic link libraries. This is one more example of a potential DLL hell in which various applications install versions of the libraries and potentially overwrite libraries that other applications have installed.

  • The HBA vendor needs to write not only the device driver and private IOCTL interface to the driver, but also the vendor-specific DLL, and it must potentially modify the wrapper HBA library to handle vendor-specific interfaces to the vendor DLL.

  • It will be extremely hard to test and certify vendor drivers that implement private IOCTLs. For example, how does one verify that the driver code will not result in a buffer overrun situation when bad parameters are passed on the IOCTL call?

  • The architecture appears to be extensible at first sight, but closer inspection reveals that the HBA vendors will be forever chasing the management application vendors to add code that deals with vendor-specific enhancements.

  • The solution does not cater to kernel mode-to-kernel mode communication and management. A management device driver may want to accomplish functionality such as LUN masking before the system completely boots up. With the SNIA solution, the HBA API works only after the system is completely booted up.

Microsoft advocates a slightly different approach, illustrated in Figure 10.9, that consists of the following components:

  • A generic HBA API DLL owned and maintained by SNIA. This DLL exposes a standard interface for the benefit of management applications. At the bottom edge, the DLL interfaces with Windows Management Instrumentation (WMI), the Microsoft implementation of the Common Information Model (CIM), an object-oriented systems management model adopted by both SNIA and the DMTF.

  • A vendor-written device driver for the HBA. This driver implements WMI and makes available management and configuration interfaces in the WMI repository. Because WMI is a two-way interface, the driver also implements WMI IRP functionality that allows a management application to set configuration parameters for the driver.

  • A mapping DLL written by Microsoft that translates between WMI and the SNIA HBA API interface.

Figure 10.9. Microsoft HBA API

graphics/10fig09.gif

The advantages with the Microsoft approach are as follows:

  • All interfaces are standardized, whereas in the SNIA approach, the interface between the generic HBA API DLL and the vendor-written DLL is proprietary for each vendor. The Microsoft approach is consistent with the SNIA adoption of the DMTF Common Information Model.

  • A vendor can easily extend an existing WMI class or define a new one and populate management information into that class. Again, this feature just emphasizes the extensibility of the SNIA-adopted CIM model.

  • The biggest advantage is that management applications can use either the SNIA HBA API or the SNIA CIM model. Applications that use the SNIA HBA API still work unaltered, thanks to the WMI code in the driver and the Microsoft WMI-to-HBA mapping DLL.

  • The architecture allows a kernel mode component to interrogate the vendor-written driver and take some management action.

Note that the WMI interfaces needed to code the HBA driver shipped with Windows 2000, so device vendors can easily add the required WMI code in their drivers.

10.3.20 GUID Partition Table Disks

Windows Server 2003 has a 64-bit version that supports the industry standard Extensible Firmware Interface (EFI). EFI is a replacement for the old legacy BIOS that has been a hallmark of the PC industry.

EFI defines a GUID partition table (GPT). GUID is short for "globally unique identifier." The exact layout of a GUID table is specified in Chapter 16 of the EFI specification, which is available at http://developer.intel.com/technology/efi/download.htm.

A GPT disk can have 2^64 logical blocks. The EFI specification uses the term logical block for what is commonly termed a disk sector, the smallest addressable unit of storage on the disk. Because EFI specifies a typical logical block size of 512 bytes, this equates to a disk size of approximately 18EB. A GPT disk can have any number of partitions, just like a dynamic disk. And also just like dynamic disks, GPT disks are self-describing, meaning that all information about how the disk is logically structured is present on the disk itself. And just like Windows 2000 dynamic disks, GPT disks store partition information redundantly to provide fault tolerance. Whereas GPT disks are an industry standard, Windows 2000 dynamic disks technically use a proprietary format. However, there are not too many EFI-based PCs, compared to BIOS-based PCs.
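As a rough picture of what self-describing means on disk, the following sketch lays out the GPT header and partition entry fields as given in the EFI specification. The structures are for illustration; a real tool would also validate the CRCs and handle the backup header at the end of the disk.

    #include <cstdint>

    #pragma pack(push, 1)
    // GPT header, stored at LBA 1 with a backup copy at the last LBA of the
    // disk. All multi-byte fields are little-endian.
    struct GptHeader {
        char     Signature[8];              // "EFI PART"
        uint32_t Revision;
        uint32_t HeaderSize;
        uint32_t HeaderCrc32;
        uint32_t Reserved;
        uint64_t MyLba;                     // LBA of this copy of the header
        uint64_t AlternateLba;              // LBA of the other (backup) copy
        uint64_t FirstUsableLba;
        uint64_t LastUsableLba;
        uint8_t  DiskGuid[16];
        uint64_t PartitionEntryLba;         // start of the partition entry array
        uint32_t NumberOfPartitionEntries;
        uint32_t SizeOfPartitionEntry;      // typically 128 bytes
        uint32_t PartitionEntryArrayCrc32;
        // The remainder of the block is reserved and must be zero.
    };

    // One entry in the partition entry array.
    struct GptPartitionEntry {
        uint8_t  PartitionTypeGuid[16];     // identifies, e.g., the ESP or MSR
        uint8_t  UniquePartitionGuid[16];
        uint64_t StartingLba;
        uint64_t EndingLba;
        uint64_t Attributes;
        uint16_t PartitionName[36];         // UTF-16, 72 bytes
    };
    #pragma pack(pop)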

To guard against data corruption when a GPT disk is accessed by a legacy system, GPT disks also have a Master Boot Record (MBR) defined that encompasses the whole disk. Thus a legacy system sees the disk as already fully partitioned, rather than as unpartitioned space that it might try to claim.

GPT boot disks have a new partition defined, called the EFI system partition, or ESP. This partition contains the files needed for booting the system, such as ntldr, hal.dll, boot.ini, and drivers. The ESP can be present on a GPT disk or an MBR disk, as defined by the EFI specification. The 64-bit Windows Server 2003 requires the ESP to be on a GPT disk.

Another partition of interest is the Microsoft Reserved partition (MSR). GPT disks prohibit any hidden sectors, and the MSR is used by components that previously used a hidden sector. The MSR is created when the disk is first partitioned, either by the OEM or when a Windows Server 2003 64-bit version is installed. On disks smaller than 16GB, the MSR is 32MB in size. For disks larger than 16GB, the MSR is 128MB in size.

While on this subject, it is worthwhile mentioning that EFI is available from Intel in only a 64-bit version. Although the standard does not prohibit a 32-bit EFI version, no such implementation is on the horizon. Thus the 64-bit and 32-bit Windows versions will have significant differences in their low-level code, as well as boot sequence code.

The 64-bit version of Windows Server 2003 must boot from a GPT disk. However, it can access older legacy disks that are not GPT disks (but not boot from them). For the 32-bit version of Windows Server 2003, MBR disks continue to be the preferred disk format over GPT disks.


   