Making Network Load Balancing Part of Your High-Availability Plan


EXAM 70-293 OBJECTIVE 4.1.2

The other high-availability tool included in Windows Server 2003 is Network Load Balancing (NLB). A primary use for NLB is increasing the scalability and availability of Internet applications (Web, FTP, VPN, firewall, proxy servers, and so on) by having multiple machines simultaneously answering and serving client requests. NLB is included in all versions of Windows Server 2003 and is installed automatically, although it must be configured and activated before it is usable.

Microsoft also considers NLB a clustering technology. The two clustering technologies are very different and serve different purposes. A server cluster requires specialized hardware, and there is typically a single installed copy of each application, which moves between the server cluster nodes. Only the node actively hosting the application responds to client requests. An NLB cluster does not require any specialized or additional hardware. Every host runs a separate and independent copy of the application and actively responds to client requests. Server clusters are used mainly for database-type applications, while NLB clusters are used for traffic- or communication-oriented applications.

Exam Warning

Make sure you thoroughly understand the difference between the two clustering technologies and where each is primarily used. Exam questions may attempt to mix or confuse the two.

NLB has been available since Windows NT 4.0 when it was an add-in component called Windows Load Balancing Service (WLBS). You will still see NLB called this in some utilities and documentation. Unless specifically referred to in a historical context, the terms WLBS and NLB should be considered interchangeable.

Terminology and Concepts

NLB introduces some new terms for dealing with this form of clustering. Some terms are similar to those used with server clusters, but they have different meanings.

Hosts/Default Host

When referring to NLB, a host is a server running any edition of Windows Server 2003 that has been configured to respond to client requests via the NLB driver. Since NLB is automatically installed, any Windows Server 2003 server has the potential to be an NLB host.

The default host in an NLB cluster is the host with the highest currently active priority (that is, the lowest numerical priority value). The priority is a unique identifying number assigned to each host in an NLB cluster. An NLB cluster can have up to 32 hosts, so the priorities range from 1 to 32. Two hosts cannot be configured with the same priority.

Load Weight

As previously mentioned, an NLB cluster can consist of up to 32 hosts. The hosts do not need to be identical in hardware or configuration. The load weight is a mechanism for distributing the traffic load within an NLB cluster to the hosts that are most suited to handle the load. Lighter loads can be configured for hosts with less capacity and heavier loads for more robust hosts.

The load weight is applicable only if specifically configured; otherwise, all hosts are treated as having equal load weights. When used, each host is assigned a load weight from 0 (lowest weight) to 100 (highest weight). The weights of all active hosts in the cluster are totaled, and each host handles a share of the traffic proportional to its own weight relative to that total. In this way, the load weight is a relative value within the NLB cluster.
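
For example, if three active hosts in a cluster are assigned load weights of 20, 40, and 40, the total is 100, so the first host handles roughly 20 percent of the matching traffic and each of the other two handles roughly 40 percent. (These figures are purely illustrative; the actual distribution also depends on the port rules in effect.)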

Traffic Distribution

The way requests from clients are spread out among the hosts in an NLB cluster is referred to as traffic distribution. Each host in an NLB cluster is configured with at least two IP addresses. One address is reserved for the nonclustered traffic directed to the host, and the second IP address is shared among all nodes in the cluster and is called the cluster IP address. It is to this second IP address that clients direct their requests.

When a request is sent to the cluster IP address, all hosts in the cluster receive the request. The NLB driver passes the incoming traffic through the defined port rules. The host that the port rules designate to receive the request services it, while all other hosts discard the request. Port rules are the mechanism used to direct incoming traffic on specific TCP/IP ports to specific hosts or groups of hosts. All hosts in an NLB cluster must have the same number of port rules, configured identically. A port rule can apply to a specific cluster IP address or to all cluster IP addresses, to all port numbers or to a specific range of port numbers, and to TCP, UDP, or both protocols.

In addition, each port rule contains a filtering mode for that rule. The filtering mode defines how the hosts in a cluster handle inbound traffic. The options for the filtering mode are as follows:

  • Disabled All traffic matching the associated cluster IP address, port range, and protocol will be blocked. Applications on the NLB cluster will never see this traffic.

  • Single Host All traffic matching the associated cluster IP address, port range, and protocol will be handled by one specific host in an NLB cluster. For example, this filtering mode could be used to direct all FTP traffic inbound to an NLB cluster to host 2 of that cluster, while Web traffic is served from all nodes.

  • Multiple Host All traffic matching the associated cluster IP address, port range, and protocol will be distributed to multiple hosts in the NLB cluster. When using the multiple host filtering mode, you must also select an affinity. Affinity describes how multiple requests from the same client are directed among the multiple hosts. There are three affinity options:

    • None Any NLB host matching the port rule can service requests from a given client. This is the most efficient affinity setting in terms of evenly distributing the workload, but it should not be used with the UDP or Both protocol settings, because fragmented packets may not be handled properly.

    • Single This is the default setting. Single affinity ensures that only one NLB host will handle traffic requests for the same client session. This setting is necessary if session state must be preserved (for example, for Web servers using server-side cookies). This setting reliably supports the UDP or Both protocol setting.

    • Class C This affinity setting specifies that all client requests originating from the same class C IP subnet will be directed to the same NLB host. This setting is useful in large NLB clusters handling traffic inbound from the Internet. This setting also reliably supports the UDP or Both protocol setting.
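
Once a cluster is running, the state of an individual port rule can also be inspected or adjusted from the command line with the NLB.exe utility (covered later in this chapter). The commands below are a minimal sketch only; the cluster IP address (10.20.200.100), host priority ID (2), and port (80) are placeholder values used for illustration:

  rem Show the current state of the rule whose port range contains port 80
  nlb queryport 10.20.200.100:80

  rem Stop accepting NEW connections for that rule on host 2, letting existing sessions finish
  nlb drain 10.20.200.100:80 10.20.200.100:2

  rem Resume normal traffic handling for the rule on host 2
  nlb enable 10.20.200.100:80 10.20.200.100:2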

Convergence and Heartbeats

An NLB cluster can be a fluid environment. By design, a host can be added to or removed from the operational cluster without affecting the services provided by the NLB cluster. However, each time a host is added to or removed from the NLB cluster, the cluster must reconfigure itself to account for the increased or decreased capacity and recalculate the traffic distribution accordingly. This process is called convergence. During convergence, the new stable state of the cluster is determined, along with the default host (the host with the highest priority).

Convergence normally occurs within 10 seconds, and client requests to operational hosts are unaffected. Requests to hosts that have failed or exited the cluster are redistributed to working hosts after convergence is completed.

NLB cluster hosts determine the status of each other by exchanging heartbeat messages. Heartbeats in an NLB cluster differ from those used in a server cluster but serve a similar purpose. In essence, the heartbeat messages generated by an NLB host are a way for the host to tell the other members of the cluster “I’m alive.” By default, if a host does not send a heartbeat message to the other NLB cluster hosts within five seconds, it will be considered failed and a convergence will be initiated.

How NLB Works

NLB requires between 2 and 32 host systems to be effective. Each host has its own copy of the applications being supported by the cluster. The hosts share one or more IP addresses. When the cluster is started, the hosts perform a convergence. Once convergence is complete, the hosts begin responding to client requests. Client systems issue requests directed to one of the cluster IP addresses, and all of the cluster hosts receive each request. The host that the port rules and distribution algorithm designate to service the request does so, while the other hosts discard it.

Once per second, each host issues a heartbeat message to the other hosts in the NLB cluster. If a host is added or leaves the cluster, another convergence is performed.

Relationship of NLB to Clustering

Server clustering and NLB clustering differ greatly. You cannot combine NLB and server clustering on the same hosts, but the two technologies can sometimes be used together to increase overall reliability and performance.

Server clustering is used primarily for database-type applications (such as SQL Server, Exchange Server, and Oracle) that run as a single instance, where parallel or concurrent execution is impossible or impractical. Server-clustered databases often operate behind an NLB cluster. For this reason, a server cluster is sometimes referred to as the back-end.

NLB is used for applications whose primary resource is TCP/IP communication—such as Internet Information Services (IIS), ISA Server, virtual private network (VPN) servers, and terminal servers—and that can run in multiple instances or in a parallel fashion. By adding hosts to an NLB cluster, more requests can be serviced simultaneously, increasing responsiveness and performance. The applications on the NLB hosts issue requests to the back-end on the client’s behalf, process the returned results, and then fulfill the original client request. Since the NLB cluster logically resides between the clients and the server cluster, or in “front” of the server cluster, the NLB cluster is usually referred to as the front-end. The combination of these two high-availability technologies can be very powerful and reliable. Figure 9.43 illustrates this front-end/back-end structure.

Figure 9.43: Combining Network Load Balancing and Server Clustering into a Front-end/Back-end Architecture

Managing NLB Clusters

EXAM 70-293 OBJECTIVE 4.4

Windows Server 2003 includes some useful tools for creating and managing NLB clusters. The NLB Manager (new to Windows Server 2003) is provided to centrally create and manage NLB clusters from a graphical interface. For performing administrative tasks from the command-line interface, the NLB.exe utility is provided.

Using the NLB Manager Tool

Microsoft made many improvements and added many tools to Windows Server 2003, but NLB Manager deserves special mention. This tool is extremely powerful. It takes what used to be a difficult manual process and simplifies it with a point-and-click interface. With NLB Manager, you can perform the following tasks:

  • Create a new NLB cluster.

  • Add and automatically configure a new host.

  • Remove a host from an NLB cluster, automatically disabling NLB on the removed host.

  • Configure all NLB-related properties on the cluster.

  • Configure all hosts in the cluster.

  • Replicate the NLB cluster configuration (but not applications) to other NLB hosts.

  • Troubleshoot NLB clusters.

To run NLB Manager, you must be a member of the local Administrators group on the host you are adding, configuring, or removing from the cluster. You do not need to have elevated privileges for the system on which you are running NLB Manager.

The NLB Manager utility is a part of the Windows Server 2003 Administration Tools Pack, which can be found in %systemroot%\System32\Adminpak.msi. The Administration Tools Pack can be installed on a Windows XP workstation to allow remote administration.
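
For example, assuming the Adminpak.msi file has been copied from a Windows Server 2003 system to a local folder on the XP workstation, it can be installed from a command prompt with Windows Installer (an illustrative sketch; the path is a placeholder):

  rem Install the Administration Tools Pack from a local copy of the package
  msiexec /i C:\Temp\adminpak.msi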

To access NLB Manager, select Start | Administrative Tools | Network Load Balancing Manager. When the utility starts for the first time, you are presented with an empty session, as shown in Figure 9.44. From here, you can begin the process of creating or managing an NLB cluster.

Figure 9.44: Starting NLB Manager for the First Time

Remote Management

A few prerequisites must be in place before you can remotely manage an NLB cluster or host with NLB Manager. NLB Manager uses Windows Management Instrumentation (WMI) interfaces, and WMI requires Remote Procedure Call (RPC) and Distributed Component Object Model (DCOM) availability. You can verify that the necessary services are running by selecting Start | Administrative Tools | Services and viewing the list of services.
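
As a quick alternative to the Services console, the state of the RPC service can be checked from a command prompt. This is an illustrative check only; RpcSs is the service name for Remote Procedure Call:

  rem Display the current status of the Remote Procedure Call service
  sc query RpcSs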

If you are attempting to manage an NLB cluster that is on the other side of a firewall from your location, you will need to make sure that your firewall is configured to allow DCOM to pass. Microsoft has a white paper available that describes how to do this at www.microsoft.com/com/wpaper/dcomfw.asp.

Command-Line Tools

Before Windows Server 2003, the only way to manage a load-balanced cluster was with command-line tools. In some situations, this approach still makes sense, because command-line tools can be scripted and scheduled.

Microsoft includes the NLB.exe utility for this purpose. NLB.exe can perform many of the same functions as NLB Manager, but it uses a different mechanism that is disabled by default. NLB.exe uses the remote-control feature of NLB instead of RPC and DCOM. This may be advantageous in certain circumstances, but enabling the remote-control feature exposes the cluster to possible security risks. Microsoft recommends that remote control be disabled and suggests that you perform all NLB administration through NLB Manager.

If you need to use NLB.exe, make sure that you enforce strong passwords on the NLB cluster and keep your NLB cluster behind a firewall. The default UDP ports used by NLB.exe for remote control are 1717 and 2504.

Figure 9.45 shows the command-line parameters that can be used with NLB.exe.


Usage: NLB <command> [/PASSW [<password>]] [/PORT <port>]

<command>
  help                        - displays this help
  ip2mac    <cluster>         - displays the MAC address for the specified
                                cluster
  reload    [<cluster> | ALL] - reloads the driver's parameters from the
                                registry for the specified cluster (local
                                only). Same as ALL if parameter is not
                                specified.
  display   [<cluster> | ALL] - displays configuration parameters, current
                                status, and last several event log messages
                                for the specified cluster (local only). Same
                                as ALL if parameter is not specified.
  query     [<cluster_spec>]  - displays the current cluster state for the
                                current members of the specified cluster. If
                                not specified, a local query is performed
                                for all instances.
  suspend   [<cluster_spec>]  - suspends cluster operations (start, stop,
                                etc.) for the specified cluster until the
                                resume command is issued. If cluster is not
                                specified, applies to all instances on the
                                local host.
  resume    [<cluster_spec>]  - resumes cluster operations after a previous
                                suspend command for the specified cluster.
                                If cluster is not specified, applies to all
                                instances on the local host.
  start     [<cluster_spec>]  - starts cluster operations on the specified
                                hosts. Applies to local host if cluster is
                                not specified.
  stop      [<cluster_spec>]  - stops cluster operations on the specified
                                hosts. Applies to local host if cluster is
                                not specified.
  drainstop [<cluster_spec>]  - disables all new traffic handling on the
                                specified hosts and stops cluster
                                operations. Applies to local host if cluster
                                is not specified.
  enable    <port_spec> <cluster_spec>
                              - enables traffic handling on the specified
                                cluster for the rule whose port range
                                contains the specified port
  disable   <port_spec> <cluster_spec>
                              - disables ALL traffic handling on the
                                specified cluster for the rule whose port
                                range contains the specified port
  drain     <port_spec> <cluster_spec>
                              - disables NEW traffic handling on the
                                specified cluster for the rule whose port
                                range contains the specified port
  queryport [<vip>:]<port> [<cluster_spec>]
                              - retrieves the current state of the port
                                rule. If the rule is handling traffic,
                                packet handling statistics are also
                                returned.
  params    [<cluster> | ALL] - retrieves the current parameters from the
                                NLB driver for the specified cluster on the
                                local host.

<port_spec>
  [<vip>: | ALL:](<port> | ALL)
                              - every virtual IP address (neither <vip> nor
                                ALL), a specific <vip>, or the "All" vip, on
                                a specific <port> rule or ALL ports

<cluster_spec>
  <cluster>:<host> | ((<cluster> | ALL) (LOCAL | GLOBAL))
                              - a specific <cluster> on a specific <host>,
                                OR a specific <cluster> or ALL clusters, on
                                the LOCAL machine or all (GLOBAL) machines
                                that are a part of the cluster
  <cluster>                   - cluster name | cluster primary IP address
  <host>                      - host within the cluster (default - ALL
                                hosts): dedicated name | IP address | host
                                priority ID (1..32) | 0 for current DEFAULT
                                host
  <vip>                       - virtual IP address in the port rule
  <port>                      - TCP/UDP port number

Remote options:
  /PASSW <password>           - remote control password (default - NONE);
                                blank <password> for console prompt
  /PORT <port>                - cluster's remote control UDP port


Figure 9.45: Output of the NLB.exe /? Command
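
To put a few of these parameters in context, the following is a small illustrative sketch of common local status commands, run from a command prompt on a cluster host (the cluster IP address 10.20.200.100 is a placeholder):

  rem Show the MAC address generated for the cluster IP address
  nlb ip2mac 10.20.200.100

  rem Show the local host's NLB configuration, status, and recent event log messages
  nlb display

  rem Show the current cluster state for all cluster members (local query)
  nlb query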

Exam Warning

One particularly useful function of both NLB.exe and NLB Manager is the drainstop option. This feature allows you to plan the shutdown of an NLB host without affecting sessions already in progress (think transparent to the user). This function works by setting the host to not allow any new connections to it. As existing connections complete their conversations, their sessions are closed. If that same client starts a new connection, the NLB cluster directs the connection to another available host. It is likely that the exam will present questions regarding this scenario or function.
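
To make the drainstop behavior concrete, here is a hedged sketch of how it might be issued from a management station against one host, using placeholder values (cluster IP 10.20.200.100, host priority ID 2). Issuing the command against a remote host requires the remote-control feature to be enabled; run locally on the host itself, the /PASSW switch can be omitted:

  rem Stop accepting new connections on host 2 and stop cluster operations
  rem on that host once existing sessions have completed
  nlb drainstop 10.20.200.100:2 /PASSW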

NLB Error Detection and Handling

The objective of NLB is increased availability. Consequently, Microsoft has included mechanisms in NLB to handle and manage error situations without affecting the reliability of the NLB cluster. If an error is encountered, details about the error are recorded in the Windows event log, and NLB isolates the host having the problem by preventing it from joining the cluster and servicing requests.

As previously stated, an NLB cluster performs a convergence when a host joins or leaves the cluster. When a host attempts to join, it notifies the other cluster hosts of its configuration. Likewise, the other hosts notify the joining host of their configurations. A check for consistency in operating parameters (host priority, port rules, and so on) is performed. If the host that is attempting to join does not have a configuration consistent with the hosts already in the cluster, the new host will not be allowed to join, and convergence will not occur. This process ensures that a misconfigured host does not compromise cluster operations.

Monitoring NLB

Events encountered by NLB (convergence, communication errors, and so on) are recorded in the System event log. You can use Event Viewer to examine these events.

NLB Manager does not use the Windows event logs. Instead, it includes its own logging function that records actions performed by the utility. This log file allows you to see what administrative activity has occurred on your NLB cluster. The log function must be activated before it can be used. To activate the log, start NLB Manager and select Options | Log Settings…, as shown in Figure 9.46.

Figure 9.46: Starting an NLB Manager Log

When the Log Settings dialog box appears, as shown in Figure 9.47, check Enable logging and enter a path and filename for the log. If no path is given, the log file is stored in the profile of the logged-on user account.

Figure 9.47: Enabling the NLB Manager Log

This log file contains sensitive information about your NLB cluster. You should secure it by restricting access to it with NTFS permissions. Be aware, however, that the account under which NLB Manager runs will require Full Control permissions to the log file.
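
One possible way to restrict access from the command line is with the cacls utility. This is an illustrative sketch only; the path and group are placeholders you would adjust for your environment (and for the account that actually runs NLB Manager):

  rem Replace the file's ACL so that only the Administrators group has Full Control
  cacls C:\NLBLogs\nlbmgr.log /G Administrators:F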

Using the WLBS Cluster Control Utility

If you have enabled the remote-control feature of NLB, you can use the NLB.exe command-line utility to get status information from an NLB cluster. The NLB display command shows the current configuration parameters, status, and recent event log messages for a host, and the NLB query command displays the current state and membership of the NLB cluster.
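
A hedged example of querying the state of every host in a cluster from a remote management station follows; the cluster IP address (10.20.200.100) is a placeholder, and leaving the /PASSW value blank causes a console prompt for the remote-control password:

  rem Query the cluster state on all (GLOBAL) machines that are part of the cluster
  nlb query 10.20.200.100 GLOBAL /PASSW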

NLB Best Practices

As with all technologies, there are certain ways to implement and operate NLB that are better than others. Microsoft publishes a number of items that fall into the best practices category for NLB.

Multiple Network Adapters

NLB can be implemented with a single network interface adapter in each host, but multiple adapters are recommended. A single network interface generates additional communications overhead for the NLB cluster, because all hosts see the network traffic destined for a specific host.

You are also limited in how you can perform administrative tasks. With a single network adapter and the default unicast mode, a host cannot carry on regular (non-NLB) communications with the other cluster hosts. This means that you cannot run the NLB administrative tools on one NLB host against the others in this configuration. To avoid this situation, you must enable multicast mode or use multiple network adapters. When multiple network adapters are installed in each host, one adapter can be configured for NLB and the other for regular traffic. When using multiple adapters, you should configure only one adapter for use by NLB.

Protocols and IP Addressing

NLB supports only TCP and UDP communications. Do not attempt to attach any other protocols (IPX/SPX, AppleTalk, ATM, and so on) to the adapter. Only static IP addresses are allowed on an NLB cluster node. DHCP is not supported. This is true for the cluster IP address and the dedicated host IP address.

Each node in an NLB cluster must be on the same TCP/IP subnet. NLB does not support hosts residing on multiple subnets. When configuring the IP addresses for your hosts, keep in mind that multiple IP addresses can be assigned to an adapter, and all of those IP addresses will be load-balanced, except for the address configured as the host’s dedicated address (the one that handles non-NLB traffic). The host’s dedicated IP address must be first on the list of IP addresses assigned to a network interface, so that any outbound traffic from the host is sent from this IP address. Figure 9.48 shows a network adapter with multiple IP addresses configured in the Advanced TCP/IP Settings dialog box (to open this dialog box, click Advanced in the Internet Protocol (TCP/IP) Properties dialog box for the network interface properties).

Figure 9.48: Configuring a Network Adapter with Multiple IP Addresses

You will notice from the example in Figure 9.48 that the IP address 10.20.200.5 is listed first and is therefore the node’s dedicated IP address. This configuration is not complete, however, until the properties of the NLB driver are also configured with this IP address, as shown in Figure 9.49 (check Network Load Balancing in the property pages of the network interface, and then click the Properties button to open this dialog box).

Figure 9.49: NLB Dedicated IP Address Configuration
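
If you prefer to set up this addressing from the command line rather than through the dialog boxes, the netsh utility can do it. The sketch below is illustrative only; the connection name, dedicated address (10.20.200.5, as in Figure 9.48), cluster address (10.20.200.100), mask, gateway, and metric are placeholders for your own values:

  rem Set the dedicated (non-NLB) address as the first, primary static address
  netsh interface ip set address "Local Area Connection" static 10.20.200.5 255.255.255.0 10.20.200.1 1

  rem Add the shared cluster IP address as an additional address on the same adapter
  netsh interface ip add address "Local Area Connection" 10.20.200.100 255.255.255.0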

Security

Security is of greater concern in an NLB cluster than it is with a stand-alone server. NLB has no inherent security features, and it cannot be used as a firewall or in any other intrusion-prevention role. When improperly configured, NLB can open security holes into your environment. It is critical that you take proper security precautions when using NLB.

Host Security

Consider tightening the security of the operating system. Limit the number of users permitted to access the hosts. Place a secured PC in front of the NLB cluster and behind a firewall. Use this PC to run NLB Manager and administer the cluster.

Application Security

Because NLB provides no additional security functions, it is imperative to use any security features available in your load-balanced applications. If you are using IIS on an NLB cluster, follow the documented procedures and guidelines for securing IIS.

Physical Security

Like any server, an NLB host should be locked behind closed doors for protection, and so should the network equipment that the NLB cluster depends on. It is theoretically possible to cause a service disruption by forging cluster heartbeats.

Host List

If you are using the host list feature of NLB Manager, you should secure the host list file on your administrative system. Restrict access to appropriate users.

Remote Control Option

The remote-control feature of NLB is a known security risk. You should avoid using this feature. If you must enable remote control, ensure that strong passwords are used. It is also advisable to place the cluster behind a firewall and filter the port traffic going to the remote-control ports.

Exercise 9.02: Creating a Network Load Balancing Cluster


This exercise will walk you through the process of creating a new NLB cluster using the NLB Manager administrative tool. Where appropriate, use your own TCP/IP addresses in this exercise.

  1. Start NLB Manager by selecting Start | Administrative Tools | Network Load Balancing Manager.

  2. Select Cluster | New, as shown in Figure 9.50.

    Figure 9.50: Create a New NLB Cluster

  3. You will be presented with the Cluster Parameters window. Enter the IP address, Subnet mask, and Full Internet name (this is the fully qualified domain name) of the cluster in the Cluster IP configuration section, as shown in Figure 9.51.

    Figure 9.51: Configure Cluster Parameters

  4. Click the Multicast option in the Cluster operation mode section, and notice how the Network address entry changes, as shown in Figure 9.52. The network (media access control, or MAC) address changes to fit the correct mode based on the communication mechanism you select. (We will leave Multicast selected for the exercise.)

    Figure 9.52: Select Multicast Cluster Operation Mode

  5. Select the check box next to IGMP multicast, as shown in Figure 9.53.

    Figure 9.53: Select IGMP Multicast with the Cluster Operation Mode

  6. You will be presented with the warning message shown in Figure 9.54. This message is intended to remind you that additional configuration of your switches and NIC may be required if you select IGMP support. Click OK to close the Warning dialog box.

    Figure 9.54: IGMP Warning Message

  7. You will be presented with the Cluster IP Addresses window, as shown in Figure 9.55. If you want to load-balance multiple IP addresses, you can click the Add… button and add them to the cluster at this point. For this exercise, we will work with only one address. Click Next to continue.

    Figure 9.55: Cluster IP Addresses Window

  8. In the Port Rules window, you see the default port rule, as shown in Figure 9.56. This rule evenly distributes arriving traffic among all cluster hosts. Select the default port rule and click Edit….

    Figure 9.56: The Port Rules Window

  9. The Add/Edit Port Rule dialog box appears, as shown in Figure 9.57. As you can see, the default port rule applies to all cluster IP addresses on all ports and protocols. It also directs all requests from a given client to the same cluster host (Multiple host filtering with Single affinity). Click Cancel to avoid modifying the default port rule.

    Figure 9.57: The Add/Edit Port Rule Dialog Box

  10. Click Next in the Port Rules window to advance to the Connect window.

  11. Enter the name of a host in the Host field and click the Connect button. When the host is identified, select the network interface to load-balance, as shown in Figure 9.58. Then click Next.

    Figure 9.58: Connect to an NLB Node

    At this point, you may receive the warning message, as shown in Figure 9.59. If you receive this message, you are using DHCP to assign an IP address to your network interface. You must use static IP addresses on your network interfaces when using NLB. You must cancel the configuration, change from DHCP to static IP addresses, and begin this process again.

    Figure 9.59: DHCP Warning Message

  12. You are now presented with the Host Parameters window, as shown in Figure 9.60. Enter the Priority, Dedicated IP address, and Subnet mask for the cluster host. Set the Default state of the host to Started. (This setting will make the host automatically attempt to join the NLB cluster on startup). Click Finish.

    Figure 9.60: Configure Host Parameters

  13. You are now taken back to the main window of the NLB Manager utility, which will look similar to Figure 9.61.

    Figure 9.61: The Configured NLB Cluster

  14. The bottom pane of the window is the log of activities performed by the NLB Manager. Double-click an entry. Figure 9.62 shows an example of the details that appear when Log Entry 0004 is double-clicked. When you are finished viewing the log entry’s details, click OK.

    Figure 9.62: View NLB Manager Log Entry Details

  15. Click the NLB cluster you just created. You will see current details about your cluster, similar to those shown in Figure 9.63.

    Figure 9.63: Configured NLB Cluster Details

  16. Click the host you just configured. You will see the port rules, as shown in Figure 9.64.

    Figure 9.64: Configured Port Rules on Cluster Node
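
As an optional final check (this step is not shown in the figures), you can open a command prompt on the host you just configured and confirm the cluster's state with the NLB.exe utility. Run without parameters, both commands report on the local host only:

  nlb query
  nlb display

The query output should show that the host has converged and list the current cluster members.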




