Network Load Balancing

On the Windows NT Server platform, Network Load Balancing was known as the Windows NT Load Balancing Service (WLBS) and performed the same basic function it does today: providing higher availability for applications and faster response times from servers. Network Load Balancing works by distributing incoming IP traffic across a cluster of servers (up to 32 Windows Server 2003 systems) that share a single virtual IP address. It is available on all versions of Windows Server 2003. The service enables you to distribute incoming Transmission Control Protocol/Internet Protocol (TCP/IP) traffic across multiple servers so that applications, and the servers that host them, can handle traffic better and deliver a higher level of availability and throughput to users.

Component Load Balancing is available under Microsoft Application Center 2000. This cluster design allows growth and system availability by enabling COM+ applications to be distributed across up to 12 nodes. The Cluster service, by contrast, is available on only the Enterprise and Datacenter Editions of Windows Server 2003. Under Windows 2000 Advanced Server, the maximum supported cluster size was two nodes; in Windows Server 2003 Enterprise Edition, it's eight nodes. Windows 2000 Datacenter Server supported four-node clusters; Windows Server 2003 Datacenter Edition supports eight.

Note

Component Load Balancing is a feature of Microsoft Application Center 2000. It is a standalone product not found in the Windows Server 2003 family of server operating systems. Server clusters cannot be made up of nodes running both Enterprise and Datacenter Editions of Windows Server 2003. All nodes must run either Datacenter Edition or Enterprise Edition. There cannot be a mix of both types.


When you're considering clustering services for a high-availability solution in your enterprise, think about whether the main focus should be on load balancing or fault tolerance when building clusters for your environment. Windows Server 2003 has two main types of cluster configurations: active/active clusters and active/passive clusters.

An active/active cluster is an established pair in which either system responds to client requests. This configuration allows for load balancing because the system with more available resources can respond better. To a degree, some fault tolerance is built into this design because a single node failure does not cause all the cluster's services to be lost.

In an active/passive pair, only one server responds to client requests; the job of the remaining server is to monitor the online system. If the active system stops responding, the idle node comes fully online and responds to user service requests. This design is fault tolerant because, unlike the active/active configuration, there is no shared load.

Implementing Network Load Balancing

After you decide to use Network Load Balancing, follow these steps to install and configure the clustering technology for your environment:

  1. Log on as an administrator (or have this level of access on the system).

  2. Open the Network Load Balancing Manager by typing nlbmgr at the command line.

  3. Right-click Network Load Balancing Clusters, and choose New Cluster.

  4. Enter the IP address and subnet mask of the cluster. Click Next.

  5. Click the Add button to add virtual IP addresses used by the cluster. Click OK and then Next.

  6. Enter the virtual IP address and subnet mask information. You can add any required port rules. Click Next.

  7. Enter the name of a host that will be a member of the cluster, and click Connect. Available network adapters on the entered host are listed at the bottom of the dialog box. Click Next.

  8. Choose the network adapter(s) you want to use for this Network Load Balancing configuration. Click Finish.

Now that the cluster has been created, you can add hosts in the future by opening the Network Load Balancing Manager, connecting to the existing cluster (if it is not shown), right-clicking the cluster, and choosing Add Host To Cluster. The remainder of the steps are the same as those for creating a new cluster: Enter a hostname, click Connect, and choose from the network adapters available on that host, which are listed at the bottom of the dialog box. After you complete these steps, the host will be part of the Network Load Balancing cluster.

If you need to drop a single node from a cluster, right-click the node in the Network Load Balancing Manager, and choose Delete Host. If you need to delete the entire Network Load Balancing cluster, right-click the cluster in the Network Load Balancing Manager, and choose Delete Cluster.
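
You can also inspect and control cluster membership from the command line with nlb.exe, the Windows Server 2003 successor to the wlbs.exe utility. The following is a minimal sketch, assuming a hypothetical cluster virtual IP of 192.168.1.200 and a host with priority ID 2; verify the exact syntax on your system with nlb help:

 rem Query the current state of every host in the cluster
 nlb query 192.168.1.200
 rem Drain host 2: finish existing connections, then take it offline
 nlb drainstop 192.168.1.200:2
 rem Bring host 2 back online
 nlb start 192.168.1.200:2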

After you've finished setting up the entire Network Load Balancing cluster with the nodes you want to use, you need to configure additional settings, such as IP address, subnet mask, full Internet name, cluster operation mode, remote control, and password information, in the Cluster Parameters tab of the Cluster Properties dialog box.

Notes from the Field

There are many other things you need to know about Network Load Balancing that go beyond the scope of this book, but in this sidebar I've supplied some highlights you should focus on.

The Network Load Balancing Manager is the recommended way to configure Network Load Balancing. You also have the option to set up TCP/IP for Network Load Balancing by configuring the Network Load Balancing Properties dialog box through Network Connections.

You can connect to existing clusters in the Network Load Balancing Manager by choosing File, Load Host List from the menu, selecting any available host list text file, and clicking Open. You can also do this from the command line by entering the following:

 
 nlbmgr /hostlist host-list

This command force-loads the hosts specified in the file into the Network Load Balancing Manager. Another point to remember about network configuration is that cluster adapters can operate in unicast mode or in multicast mode, including Internet Group Management Protocol (IGMP) multicast.
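
The host list file itself is plain text. Here is a hypothetical sketch, assuming the common convention of one machine name per line (check the expected format against your own environment):

 rem Build a two-host list file, then force-load it into the manager
 md C:\nlb
 echo SERVER01> C:\nlb\hosts.txt
 echo SERVER02>> C:\nlb\hosts.txt
 nlbmgr /hostlist C:\nlb\hosts.txt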


Considering Systems for Network Load Balancing

Network Load Balancing is designed to work only on 10Mbps, 100Mbps, and gigabit Ethernet network adapters. It is not compatible with Asynchronous Transfer Mode (ATM), ATM local area network (LAN) emulation, or Token Ring networks.

On x86-based 32-bit Ethernet network configurations, Network Load Balancing uses from 750KB to 2MB of RAM per network adapter in a default configuration, which can vary as high as 27MB, depending on the network load. Configuration settings can be modified to allow using up to 84MB of memory.

On Itanium-based 64-bit Ethernet network configurations, Network Load Balancing uses from 825KB to 2.5MB of RAM per network adapter in a default configuration, which can vary as high as 32MB, depending on the network load. Configuration settings can be modified to allow using up to 102MB of memory.

Network Load Balancing can be configured using only a single network adapter on a node. For the best possible cluster performance, however, you should install at least one additional network adapter on each Network Load Balancing node so that the first network adapter can handle the cluster traffic addressed to the server, and the second network adapter can be used for communication between nodes in the cluster.

The primary rationale for installing Network Load Balancing is to mitigate possible single points of failure that can interrupt your network services. Single points of failure can be hardware based, such as when a single router or a single server goes offline. They can also be caused by single external dependencies, such as the loss of public utilities. There are a number of ways to mitigate some single points of failure, depending on what they are.

When power for a server system is a concern, you can have redundant power supplies installed so that if one supply fails, another is running and can assume the full load for the system. You can also attach power to these redundant power supplies from separate electrical circuits in your building in an effort to prevent a circuit trip from becoming another single failure point.

Implementing a RAID Solution

For hard drive issues, there are hardware and software implementations of a redundant array of independent (or inexpensive; both terms are correct) disks (RAID). On most Windows Server 2003 systems, you deal with RAID 1 or RAID 5 most of the time. RAID 0, often referred to as a "stripe" or "striping configuration," is available on Windows Server 2003 and considered a RAID build, but it is just a stripe with no parity and therefore provides no fault tolerance.

Hardware RAID is any type of RAID that is controlled at the hardware level, independent of the operating system; the RAID configuration is in place before any operating system is installed. Hardware RAID is designed and deployed so that when errors or configuration problems cause the loss of the operating system, recovering data is easier because the disk configuration is held on the hardware controller. (Sometimes the disk configuration is also stored on certain reserved sectors of the hard drives, depending on the controller's manufacturer and design.) The downside of a hardware RAID configuration is that loss of the hardware device makes data retrieval much more difficult. Because the controller presents the combined space of all the drives as one logical structure, the actual drive layout is unknown to the operating system (and to any standard data recovery tools), which increases the difficulty of retrieving data.

In a software-based RAID solution (software-based here means provided by the operating system), the operating system creates and stores the logical structure of the drives in the array. Direct access, such as booting from a floppy disk, an NTFS boot disk, or an installation CD, bypasses the operating system. In most cases, this bypassing does not allow access to data on the array because the operating system is not available to initialize and access the logical drive array it has created.

In a software-based RAID configuration, you don't need to be concerned about RAID-based hardware failure (such as a controller card); however, if you lose the operating system to the extent that its repair function cannot fix the problem, all the RAID data created by the operating system is usually lost.

A RAID 0 configuration makes it possible to use the total combined space of all drives without the loss of any available total space. That is, if five 20GB hard drives (totaling 100GB of space) are committed to a RAID 0 array, the total usable space is 100GB.

RAID 1 is deployed using a total of two drives. This configuration can sustain the loss of one drive and still allow the system's full operation. It is referred to as disk mirroring when two different hard drives are used on the same IDE, SCSI, or RAID controller. It's called disk duplexing when two different hard drives are used on two different IDE, SCSI, or RAID controllers.

There is no striping in this configuration (as you would see in a RAID 0 configuration); however, all data written to one disk is duplicated on the other. RAID 1's fault tolerance is based on this duplicate data, which serves as the parity information used to maintain the system in the event of a drive failure.

A RAID 1 configuration effectively causes the "loss" of 50% of the total usable disk space because the second drive is committed to the parity writing of the array. When two 50GB hard drives totaling 100GB are installed in a system and then configured in a RAID 1 configuration, the total amount of disk space available for use is 50GB. The "lost" space is allotted for parity storage, which in this configuration is the total duplication of the first drive. A RAID 1 configuration can sustain the loss of either drive and still allow the system's full operation.

RAID 5, referred to as striping with parity, has some similarities to RAID 0; the main difference is that RAID 5 includes fault tolerance, and RAID 0 doesn't. RAID 5 data is divided into blocks ranging from 512 bytes to 64KB. The data is distributed across all disks in the array, with parity information being spread out and written to each drive. RAID 5 requires a minimum of three disks in its standard configuration.

A RAID 5 configuration makes it possible to use the total combined space of all drives, minus the total space of a single drive. That is, if five 20GB hard drives (totaling 100GB of space) are committed to a RAID 5 array, the total usable space is 80GB. The "lost" space is allotted for parity storage. A RAID 5 configuration can sustain the loss of one drive and still allow the system's full operation.
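
If you want to build this configuration in software, the diskpart utility included with Windows Server 2003 can create a striped-with-parity volume on dynamic disks. The following is a minimal sketch, assuming three empty data disks numbered 1 through 3; the disk numbers and drive letter are hypothetical, so confirm yours with the list disk command first:

 rem raid5.txt -- run with: diskpart /s raid5.txt
 select disk 1
 convert dynamic
 select disk 2
 convert dynamic
 select disk 3
 convert dynamic
 rem Create the RAID 5 volume across the three dynamic disks
 create volume raid disk=1,2,3
 assign letter=R

After diskpart finishes, format the new volume (for example, format R: /fs:ntfs from the command line, or in Disk Management); as described above, its usable capacity is the combined space minus one drive.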

Clustering can help mitigate possible single points of failure in your environment. Although it does protect data availability, it cannot protect the data itself. Therefore, having a backup strategy is still important.

Backing Up Data

Windows Server 2003 gives you more options for backing up data from within the operating system than previous versions did. The NTBACKUP utility is still available for administrators to set up and configure backups and offers five backup types to choose from. You can run NTBACKUP from the graphical user interface (GUI) or the command line. Note that even though you can perform a data backup from the command line, you cannot perform a restore. These are the available command-line switches for NTBACKUP (a combined example follows the list):

  • systemstate Enables you to perform a normal or copy backup of the System State data.

  • @bks file name Enables you to set the backup selection filename (.bks file) to be used for this backup operation. The at sign (@) must precede the name. An example of this usage is ntbackup backup @c:\monday.bks.

  • /J {"job name"} Identifies the job name to be used in the log file.

  • /P {"pool name"} Identifies the media pool from which you want to use media. If you select this command-line option, you cannot use the /A, /G, /F, or /T command-line switches.

  • /G {"guid name"} Overwrites or appends data to the specified media. You should not use this switch with the /P option.

  • /T {"tape name"} Overwrites or appends data to the specified media. You should not use this switch with the /P option.

  • /N {"media name"} Sets a new tape name. You should not use this switch with the /A option.

  • /F {"file name"} Enables you to enter the logical disk path and filename. You should not use this switch with the /P, /G, or /T options.

  • /D {"set description"} Enables you to identify the label for each backup set.

  • /DS {"server name"} Enables you to back up the directory service file for the specified Microsoft Exchange server.

  • /IS {"server name"} Enables you to back up the Information Store file for the specified Microsoft Exchange server.

  • /A Enables you to run an append operation on another backup. Either /G or /T must be used in combination with this switch. The /P option should not be used.

  • /V:{yes|no} Verifies data after the backup is completed.

  • /R:{yes|no} Restricts access to data on the backup media to the backup owner or members of the Administrators group.

  • /L:{f|s|n} Identifies the type of log file to use: f=full, s=summary, or n=none.

  • /RS:{yes|no} Used to back up migrated data files located in Remote Storage. This option is not required to back up the local Removable Storage database (which contains the Remote Storage placeholder files). When you back up the %systemroot% folder, Backup automatically backs up the Removable Storage database as well.

  • /HC:{on|off} Sets the use of hardware compression on backup media if it is available.

  • /SNAP:{on|off} Identifies whether the backup is a volume shadow copy.

  • /M {backup type} Sets the backup type to normal, copy, differential, incremental, or daily.
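
Putting several of these switches together, the following is a sketch of two typical jobs; the selection file, job names, and paths are hypothetical:

 rem Normal (full) backup driven by a selection file, verified and logged in summary form
 ntbackup backup @C:\jobs\monday.bks /J "Monday Full" /F "D:\backups\monday.bkf" /M normal /V:yes /L:s
 rem Back up the local System State to its own file
 ntbackup backup systemstate /J "System State" /F "D:\backups\sysstate.bkf"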

Normal backups, also called full backups, are configured to back up all selected files and folders. A normal backup does not rely on backup markers, referred to as the archive bit, to determine which files to back up. The backup process simply backs up everything that's selected, regardless of the archive setting. A normal backup clears any existing archive bits it finds and marks all the backed up files as having been backed up. Normal backups are the most efficient in the restoration process because the backed up files are the most current, and you do not need to restore multiple backup jobs. Their main drawback is that the backup itself takes the most time to perform.

Copy backups are used to back up all selected files and folders. Like a normal or full backup, this backup process does not rely on the archive bit to determine which files to back up; it backs up everything selected. The copy backup, however, doesn't reset the archive bit as a normal backup does. If you need to back up files and folders and do not want to affect other backup types by resetting the archive bit, a copy backup is the best option. It is useful when you want a current backup but don't want to disrupt your backup rotation, such as before performing an update, an upgrade, or system maintenance.

Daily backups are used to back up all selected files and folders that have changed during that particular day. This backup procedure also does not rely on or reset the archive bit.

Incremental backups back up only selected files and folders that have an archive bit set. That means if you select an entire partition, only the data that has changed is backed up. During an incremental backup, the archive bit is reset (turned off).
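
You can inspect the archive bit yourself with the attrib command; an A in the listing means the file is flagged for the next incremental backup. The file path here is hypothetical:

 rem Show attributes; "A" in the left column means the archive bit is set
 attrib C:\data\report.doc
 rem Set or clear the archive bit by hand (useful for testing backup selection)
 attrib +A C:\data\report.doc
 attrib -A C:\data\report.doc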

Note

When data is edited or changed after being backed up, the archive bit is turned on. Therefore, if a document was backed up during a normal backup on Sunday evening, the archive bit is turned off. If you made no changes to the document on Monday or Tuesday, there's no need for it to be backed up again by backup processes that focus on the archive bit, such as incremental backups.

If you open the file on Wednesday, edit it, and then save it, the archive bit is turned back on, which flags the incremental backup process to back up the data during the Wednesday night incremental backup. If you do not edit this file on Thursday, the incremental backup run on Thursday night does not back up this file again.


Incremental backups are normally used on a daily basis in between normal backups. For example, if you perform a normal backup Sunday night on an entire partition, you're backing up all the data on the entire partition, and the process backs up all available data and resets any archive bits it finds. On Monday night's incremental backup of the entire partition, the backup process backs up only those files that have changed since the Sunday normal backup, and as the data is backed up, the archive bits are turned off. Tuesday's incremental backup of the entire partition backs up only those files that have changed since the Monday incremental backup. During the Tuesday night backup, the data that has changed is backed up, and the archive bits are turned off. This process continues all week until another normal backup is run again on Sunday.

This process allows for quicker nightly backups of incrementally changed data from the previous day, but it does tend to lengthen the restoration process. If you need a restoration for all the backed up data from that partition on a Friday morning, you have to use the Sunday normal backup to restore the main bulk of the data and use each incremental tape, from Monday through Thursday, to restore all the required data.
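
From the command line, the weekly rotation just described might look like the following sketch; the drive letters, job names, and file paths are hypothetical, and in practice you would schedule these jobs with Task Scheduler rather than run them by hand:

 rem Sunday night: normal (full) backup of the data partition
 ntbackup backup D:\ /J "Sunday Full" /F "E:\backups\full.bkf" /M normal
 rem Monday through Thursday nights: incremental backup of the same partition
 ntbackup backup D:\ /J "Weekday Incremental" /F "E:\backups\incremental.bkf" /M incremental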

Differential backups are normally used on a daily basis between normal backups. They back up only selected files and folders that have an archive bit set. That means if you select an entire partition, only the data that has changed is backed up. During a differential backup, the archive bit is not reset (is not turned off).

For example, if you perform a normal backup Sunday night on an entire partition, you are backing up all the data on the entire partition, and the process backs up all available data and resets any archive bits it finds. On Monday night's differential backup of the entire partition, the backup process backs up only those files that have changed since the Sunday normal backup. The archive bits are not reset (turned off). During Tuesday's differential backup of the entire partition, all the changed files from Monday and Tuesday are backed up, even though Monday's changed data is already on the Monday tape. The Tuesday backup has all the differences in data from both Monday and Tuesday. Again, the differential backup process does not reset the archive bit. This process continues all week until another normal backup is run again on Sunday.

This process allows for quicker nightly backups of differentially changed data from Sunday's normal backup, but as each day passes, the backup process takes longer. However, it shortens any required restorations. If you need to restore all the backed up data from that partition on a Friday morning, you have to use the Sunday normal backup to restore the main bulk of the data and use only the last differential tape or media (in this case, the one from Thursday night) to restore all required data.
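
The differential rotation differs from the incremental sketch above only in the backup type switch (again, the paths are hypothetical):

 rem Monday through Thursday nights: differential backup; archive bits are left set
 ntbackup backup D:\ /J "Weekday Differential" /F "E:\backups\differential.bkf" /M differential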

Using Automated System Recovery (ASR)

Another option in NTBACKUP is creating an Automated System Recovery (ASR) backup of your system, which creates a backup set of the System State data, system services, and all the disk configuration information on both basic and dynamic volumes.

ASR creates a startup disk, the successor to the Emergency Repair Disk (ERD, created with RDISK in the NT 4 days), that contains information about the backup and how to accomplish a restore. New ASR startup disks should be created after any major change to the system so that the information is up to date.

Note

ASR backs up only the system files that are necessary to restart a failed system. You still need to have a separate, regular backup plan for the data on the system itself.

For clusters, you need to run the Automated System Recovery Preparation Wizard on all nodes of the cluster and make sure the Cluster Service is running when you start each ASR backup. One of the nodes must be listed as the owner of the quorum resource while the wizard is running for this part of the ASR backup to be successful.


To create an ASR set, start NTBACKUP (see Figure 5.1) by clicking Start, Run and entering NTBACKUP or choosing All Programs, Accessories, System Tools, Backup.

Figure 5.1. In the opening window of NTBACKUP, Automated System Recovery is the icon at the bottom.


The Backup Wizard or Restore Wizard should start by default unless they have been disabled. You can use the Backup Wizard to create an ASR set by selecting the All Information on This Computer option in the What Do You Want to Back Up? section. You can also create the ASR set in advanced mode by clicking the Advanced Mode link in the Backup or Restore Wizard. After the ASR process starts, it creates a backup file that can be stored locally or remotely, as well as recovery information that's stored on a floppy disk for emergency recovery (see Figure 5.2).

Figure 5.2. In this stage of creating an ASR set, the information required to restart a downed system is written to a floppy disk.


To recover from a system failure using Automated System Recovery, you need the ASR floppy disk, the most recent backup, and the operating system installation CD (or the network location of the operating system files used to build the system). From here, restart the failed system and press F2 when prompted at the beginning of the text-only mode section of Setup. When prompted, insert the ASR floppy disk to recover the system.

Using Shadow Copies

Windows Server 2003 includes the capability to create a shadow copy, which can be used on volumes and shared folders to help prevent unintentional loss of data caused by accidental deletions. By creating previous versions of the data at predetermined time intervals, shadow copies enable users to return to an earlier point in time to retrieve a document. For example, suppose a user edits a PowerPoint presentation, deletes a few slides and some images in the remaining slides, and then saves and closes the presentation, only to suddenly realize she still needs those slides and images. She would have to manually redo all the work in the presentation or have a previously backed up copy restored. With Shadow Copy, users can go back to a previously stored version of their data on their own, retrieve it (in this example, from a saved point before the deletions were made), and restore it for current use. Shadow copies are especially useful when deletions occur over the network and there is no Recycle Bin for users to recover the file.

Note

The name of the Shadow Copies of Shared Folders option is slightly misleading; it implies that you can apply shadow copies to specific folders on a volume. This option is enabled at the volume level only, however, so every shared folder on that volume is configured to use this feature after it is set. It also means that shadow copy settings are configured for the entire volume.


A shadow copy makes a copy of any changes in files that have occurred since the last shadow copy. Only the changes are copied, not the entire file, which is helpful because shadow copies don't normally take up as much disk space as the original file.

Using this feature does not eliminate the need to perform regular backups on your Windows Server 2003 systems; a full failure of your server, such as the loss of the local hard drive, still causes a loss of data. The other key point to remember is that older shadow copies are periodically removed from the system when the maximum limit of shadow copies per volume is reached.

When using and configuring the Shadow Copy service, you need to determine the following settings:

  • First, choose the volume that needs to be configured to use this service.

  • Figure out the allocation of disk space needed for shadow copies and decide whether to use separate hard drives.

  • Decide how often shadow copies should be created.

  • Configure the maximum number of shadow copies per volume.

By default, the Shadow Copy service is configured to create shadow copies at 0700 and 1200, Monday through Friday, but administrators can reset this schedule to fit their needs. The default setting for volume space reserved for shadow copy use is 10% of the total volume size (not 10% of the volume's free space). Administrators can change this setting, too, but keep in mind that setting the limit too low causes the oldest shadow copies to be deleted regularly, often well before the maximum number of shadow copies per volume is reached, because the volume's reserved space is being used up. Regardless of the amount of free space, the maximum number of shadow copies that can be created per volume is 64.
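
Windows Server 2003 also includes a command-line interface to the service, vssadmin, that covers the same settings. The following is a minimal sketch, assuming volume C: and a 10GB storage cap (both hypothetical):

 rem Reserve shadow copy storage for C: on the same volume, capped at 10GB
 vssadmin add shadowstorage /for=C: /on=C: /maxsize=10GB
 rem Create a one-time shadow copy, the equivalent of the Create Now button
 vssadmin create shadow /for=C:
 rem List the shadow copies that exist on the system
 vssadmin list shadows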

Note

Shadow copies can be configured on a volume only if you are logged in as an administrator. If you are logged in with an account that has a different level of access on the system, you can't see the Shadow Copies tab in the volume's Properties dialog box. To use the Shadow Copy service, volumes must be formatted with NTFS.


The Shadow Copy service is disabled by default; to enable it, go to the Shadow Copies tab in the applicable volume's Properties dialog box (see Figure 5.3).

Figure 5.3. The Shadow Copies tab of the volume's Properties dialog box, shown in its default state.


If you want to make just a one-time shadow copy, click the Create Now button, which creates a single copy that appears in the Shadow Copies of Selected Volume list box at the bottom.

To enable default settings for the Shadow Copy service, click the Enable button. As shown in Figure 5.3, three shares are enabled for the Shadow Copy service. To change the default settings (see Figure 5.4), click the Settings button to open the Settings dialog box, where you can change parameters such as the default amount of drive space. To change the schedule for making shadow copies, click the Schedule button.

Figure 5.4. Changing the default settings for shadow copies in the Settings and Schedule dialog boxes.



