Planning and Implementing NLB Clusters
Replicating information across NLB clusters
Although beyond the capabilities of the normal NLB service, Application Center 2000 can be used to create NLB clusters that replicate from a master node to all other member nodes, thus ensuring that changes made to the master node are kept current on all other member nodes.
Plan services for high availability.
Implement a cluster server.
As discussed previously, the NLB cluster is most often used to create distributed fault-tolerant solutions for applications such as Web sites, VPN servers, and Terminal Services servers. NLB clusters are composed of between 2 and 32 nodes, each of which must contain the exact same applications and content. Because NLB clusters do not replicate content among the member nodes, using applications that require users to save data locally on the node is not a good idea. In this instance, you would need to implement a clustered server environment on the back end, such as an SQL Server cluster.
The most critical part of deploying an NLB cluster is determining the operational mode that is to be used and also the port rules that will be required. To plan for these items, you must know and understand what types of applications or services will be running on the NLB cluster. Certain applications, such as e-commerce applications, make use of cookies to maintain session state. If requests made during the same session are sent to more than one server, the client may experience application failures due to the absence of the expected cookie on the other NLB cluster nodes. We discuss port rules, filtering mode, affinity, and cluster operation modes in the following sections. When you have a good understanding of these key NLB concepts, you will be ready to start implementing an NLB clustered solution in your organization.
Port Rules
When a network load balancing cluster is created, port rules are used to determine what types of traffic are to be load-balanced across the cluster nodes. Within the port rule is the additional option to configure port rule filtering, which determines how the traffic will be load-balanced across each of the cluster nodes.
In an NLB cluster, every cluster node can answer for the cluster's IP address; thus, every cluster node receives all inbound traffic by default. When each node receives the inbound request, it either responds to the requesting client or drops the packet if the client has an existing session in progress with another node. Should no port rule be configured to define how traffic on the specific port is to be handled, the request is passed off to the cluster node having the highest configured priority (the lowest host priority number). This may result in decreased performance by the NLB cluster as a whole if the traffic is not intended to be, or cannot be, load-balanced.
Port rules allow you to change this behavior in a precise and controlled fashion. Think of port rules as the network load balancing equivalent of a firewall rule set. When you configure port rules to allow traffic on the specific ports you require to reach the NLB cluster and configure an additional rule to drop all packets not meeting any other port rules, you can greatly improve the performance of the NLB cluster by allowing it to drop all packets that are not allowed to be load-balanced. From an administrative and security standpoint, port rules also allow for easier monitoring of the server due to the limited number of ports that must be watched.
Filtering Mode and Affinity
As discussed in the previous section, you can configure how NLB clusters load-balance traffic across cluster nodes; this action is referred to as filtering. By configuring filtering, you can specify whether only one node or multiple nodes within the NLB cluster are allowed to respond to multiple requests from the same client during a single session (connection).
The three filtering modes are as follows:
Single Host
When this filtering mode is configured, all traffic that meets the port rule criteria is sent to a specific cluster node. The Single Host filter might be used in a Web site that has only one SSL server; thus, the port rule for TCP port 443 would specify that all traffic on this port must be directed to that one node.
Disable Port Range
This filtering mode instructs the cluster nodes to ignore and drop all traffic on the configured ports without any further action. This type of filtering can be used to prevent ports and port ranges from being load-balanced.
Multiple Host
The default filtering mode, Multiple Host, specifies that all active nodes in the cluster are allowed to handle traffic. When Multiple Host filtering is enabled, the host affinity must be configured.
Affinity determines how clients interact with the cluster nodes and varies depending on the requirements of the applications that the cluster is providing. Three types of affinity can be configured, as follows:
None
This affinity type allows an inbound client request to be handled by any node within the cluster. This type of affinity results in increased speed, but is suitable only for providing static content to clients, such as static Web sites and FTP downloads. Typically, no cookies are generated by the applications running on clusters that are configured for this type of affinity.
Class C
This affinity type causes all inbound client requests from a particular Class C address space to be sent to a specific cluster node. This type of affinity allows a client's session state to be maintained on one node, but it can be overloaded or fooled if all client requests are passed through a single firewall or proxy server.
Single
This affinity type maintains all client requests on the same node for the duration of the session (connection). This type of affinity provides the best support for maintaining user state data and is often used when applications that generate cookies are running on the cluster.
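One way to picture the three affinity settings is as a change in which parts of the client request feed the node-selection hash. The following Python sketch is a simplification of NLB's actual distribution algorithm, using invented client addresses:

```python
# Illustrative sketch of how affinity changes node selection. NLB's real
# algorithm differs; this only shows which request fields form the hash key.

NODES = 4  # number of active cluster nodes

def pick_node(src_ip: str, src_port: int, affinity: str) -> int:
    if affinity == "none":
        key = (src_ip, src_port)               # any connection may land anywhere
    elif affinity == "single":
        key = src_ip                           # one client always hits one node
    elif affinity == "class_c":
        key = ".".join(src_ip.split(".")[:3])  # the whole /24 hits one node
    else:
        raise ValueError(affinity)
    return hash(key) % NODES

# Two connections from the same client stay on one node with Single affinity:
a = pick_node("203.0.113.7", 50000, "single")
b = pick_node("203.0.113.7", 51234, "single")
assert a == b

# Two clients behind the same /24 (for example, the same proxy network)
# land on the same node with Class C affinity:
c = pick_node("203.0.113.7", 50000, "class_c")
d = pick_node("203.0.113.99", 40000, "class_c")
assert c == d
```

The Class C case also shows the overload risk mentioned above: if every client arrives through one proxy subnet, a single node receives all of that traffic.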
NLB Cluster Operation Mode
The mathematical algorithm used by network load balancing sends inbound traffic to every host in the NLB cluster. The inbound client requests can be distributed to the NLB cluster nodes through one of two methods: unicast or multicast. Although both methods send the inbound client requests to all cluster nodes by sending them to the media access control (MAC) address of the cluster, they go about it in different ways.
When you use the unicast method, all cluster nodes share an identical unicast MAC address. To do so, NLB overwrites the original MAC address of the cluster network adapter with the unicast MAC address that is assigned to all the cluster nodes. When you use the multicast method, each cluster node retains the original MAC address of its cluster network adapter. The cluster network adapter is then assigned an additional multicast MAC address, which is shared by all the nodes in the cluster. Inbound client requests can then be sent to all cluster nodes by using the multicast MAC address.
The unicast method is usually preferred for NLB clusters unless each cluster node has only one network adapter installed in it. Recall that in any clustering arrangement, all nodes must be able to communicate not only with the clients, but also among themselves. Because NLB modifies the MAC address of the cluster network adapter when unicast is used, the cluster nodes cannot communicate among themselves over that adapter. If only one network adapter is installed in each cluster node, you need to use the multicast method.
SWITCH PORT FLOODING
As discussed previously, the mathematical algorithm used by network load balancing sends inbound traffic to every host in the NLB cluster. It does so by preventing the switch that the NLB cluster nodes are attached to from ever associating the NLB cluster's MAC address with a specific port on the switch. This, however, leads to the unwanted side effect of switch port flooding, where the switch floods all its ports with all packets inbound to the NLB cluster.
Switch port flooding is both a waste of
network resources and a nuisance to you when implementing NLB clusters. In Windows 2000, you either need to place all nodes of an NLB cluster on a dedicated switch or on a dedicated VLAN within the switch to get around the problems of switch port flooding. A new feature in Windows Server 2003, however,
switch port flooding from occurring. For a good introduction to Virtual LANs (VLANs), see the article at www.2000trainers.com/printarticle.aspx?articleID=65.
Internet Group Management Protocol (IGMP) support has been provided in Windows Server 2003 network load balancing to prevent flooding from occurring on those switch ports that do not have an NLB cluster node attached to them. With this feature, non-NLB cluster nodes never see inbound traffic that is intended for the NLB cluster. At the same time, all NLB cluster nodes continue to receive all inbound traffic, thus meeting the requirements of the NLB algorithm. IGMP support is available only when multicast mode is configured for the NLB cluster, which has its own set of benefits and drawbacks. As an alternative, you can utilize the dedicated switch or VLAN methods to eliminate switch port flooding with the NLB cluster in unicast mode; unicast mode does not present the drawbacks associated with multicast mode.
At this point, you are ready to move forward and create an NLB cluster. After creating your NLB cluster, you can then join additional cluster nodes to it and begin managing the NLB cluster.
Creating an NLB Cluster
By now, you've received a good introduction to network load balancing and some of its key concepts. However, you may still not have a clear picture of exactly what an NLB cluster solution might look like. Figure 5.2 shows a four-node NLB cluster arrangement.
Figure 5.2. This four-node NLB cluster provides highly available Web sites.
Any good implementation needs a good plan. To successfully implement your NLB cluster, you need to identify the key parameters for which you must have information ready ahead of time. They are broken down into two major parts: cluster parameters and cluster host parameters. We examine each in detail in the following paragraphs.
The following parameters are of interest when planning for the entire cluster:
Cluster virtual IP address
The virtual IP (VIP) address that will be assigned to represent the entire cluster must be determined. This IP address must be in the same IP subnet as the IP addresses assigned to the cluster network adapters on all cluster hosts. Also, these IP addresses should be in a different IP subnet than the IP addresses chosen for the administrative IP addresses. In the example shown in Figure 5.2, the cluster VIP is 10.0.0.1/24, whereas the individual cluster network adapter IP addresses for the four nodes are 10.0.0.10/24 through 10.0.0.13/24.
Cluster FQDN
A fully qualified domain name (FQDN) must be assigned to the cluster, just the same as with any host on the network. This FQDN will be registered in DNS and will allow clients to access the cluster as one unit. You also need to determine an FQDN for each application and service that you are running on the cluster because clients will access them by their FQDNs.
Cluster operation mode
You need to choose between using unicast or multicast mode for distributing inbound client requests, as discussed previously in the "NLB Cluster Operation Mode" section.
Cluster remote control settings
By default, remote administration of the cluster by using remote control commands is disabled. To maintain the highest level of security for your NLB cluster, you should specify that all remote administration is to be performed using the Network Load Balancing Manager. Also, you should specify that all cluster administration be done only from specified computers that are within your trusted and secured internal network to prevent compromise of cluster administrative control.
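The addressing rules in this list can be checked mechanically. The following Python sketch uses the standard `ipaddress` module and the cluster addresses from Figure 5.2; the administrative subnet (192.168.0.0/24) is an assumed example, not taken from the figure:

```python
# Planning check for the NLB addressing guidelines: the VIP and all cluster
# adapter addresses share one subnet, while administrative addresses live in
# a different one. The 192.168.0.0/24 admin subnet is an assumed example.

import ipaddress

vip = ipaddress.ip_interface("10.0.0.1/24")
cluster_ips = [ipaddress.ip_interface(f"10.0.0.{n}/24") for n in range(10, 14)]
admin_ips = [ipaddress.ip_interface(f"192.168.0.{n}/24") for n in range(10, 14)]

# Every cluster adapter must sit in the VIP's subnet...
assert all(ip.network == vip.network for ip in cluster_ips)

# ...while administrative adapters must sit in a different subnet.
assert all(ip.network != vip.network for ip in admin_ips)

print("Addressing plan is consistent with the NLB guidelines.")
```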
The following parameters are of interest when planning for each of the cluster nodes:
Cluster host priority
Each cluster node is identified by a unique host priority number ranging from 1 to 32. During cluster convergence, the remaining cluster node with the lowest numeric host priority triggers the end of convergence and becomes the default host. No two cluster nodes can have the same host priority assignment. It's worth remembering that you can have a maximum of 32 nodes in your cluster and that the priority of each node acts as a "ranking" system, indicating the order in which you want the cluster nodes to become the default host during failure situations.
Administrative IP address
These IP addresses are assigned to each nonload-balanced network adapter and should all be in the same IP subnet. These IP addresses should also be in a different IP subnet than the IP addresses chosen for the cluster IP addresses so that load-balanced and administrative traffic are completely separated, thus providing increased security for the administrative traffic.
Cluster IP address
These IP addresses are assigned to each cluster network adapter and must be in the same IP subnet as the cluster VIP. These IP addresses should also be in a different IP subnet than the IP addresses chosen for the administrative IP addresses.
Initial Host State
A Windows Server 2003 server configured to be an NLB cluster node starts the NLB service very early in the operating system startup process and joins the NLB cluster. If this occurs before clustered services and applications are started and available on the cluster node, clients may experience service disruptions. You need to specify whether cluster nodes will automatically start the NLB service and join the NLB cluster upon operating system load or whether the NLB service will be manually started at some later time.
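The host priority ranking described in this list can be sketched as a simple selection function; the node names here are invented:

```python
# Sketch of default-host selection during convergence: the surviving node
# with the lowest host priority number becomes the default host, which
# handles traffic not covered by any port rule. Node names are invented.

def default_host(priorities: dict[str, int]) -> str:
    """Given surviving nodes mapped to their unique 1-32 priorities,
    return the node that becomes the default host."""
    return min(priorities, key=priorities.get)

nodes = {"NLB1": 1, "NLB2": 2, "NLB3": 3, "NLB4": 4}
print(default_host(nodes))   # NLB1

# If NLB1 fails, convergence re-runs over the survivors and the next
# priority in the ranking takes over as default host:
survivors = {k: v for k, v in nodes.items() if k != "NLB1"}
print(default_host(survivors))   # NLB2
```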
With the required information in hand, you are now ready to create an NLB cluster and join additional nodes to the cluster. Step by Step 5.1 outlines the steps required to create a new NLB cluster.
Using two network adapters is best
When implementing either an NLB solution or a cluster, you should have two network adapters installed in each cluster node. All the discussion and examples that follow assume that you have two network adapters installed in your cluster nodes. As well, all network adapters that are in use should be (preferably) identical in make and model; this results in easier maintenance and upkeep in that you have only one set of drivers to keep up to date. If you cannot use identical network adapters across the cluster, you should use the same speed adapters (such as 10/100 or 100/1000) to minimize potential bottlenecks.
STEP BY STEP
5.1 Creating a New NLB Cluster
Open the Network Connections window by selecting Start, Settings, Network Connections. The Network Connections window opens, displaying all configured connections on the computer.
Double-click the network adapter that you will be using as the administrative network adapter to open the network adapter Status dialog box.
Click the Properties button to open the Administration Properties dialog box, as shown in Figure 5.3.
Figure 5.3. The Administration Properties dialog box allows you to configure network connection properties.
On the General tab, select Internet Protocol (TCP/IP) and then click Properties. The Internet Protocol (TCP/IP) Properties dialog box opens, as shown in Figure 5.4.
Figure 5.4. The Internet Protocol (TCP/IP) Properties dialog box allows you to configure TCP/IP settings for a network connection.
Enter the IP address and subnet mask to be used for the administrative interface. In most cases, these interfaces are connected only to each other, as in Figure 5.2, and thus do not require a default gateway or DNS server. You can configure this information if required, however.
Click OK, and then click Close to close the Administration Properties dialog box.
Configuring TCP/IP properties
If you need a refresher on configuring TCP/IP properties, check out
MCSE TG 70-291: Implementing, Managing, and Maintaining a Microsoft Windows Server 2003 Network Infrastructure
(2003, Que Publishing); ISBN 0789729482.
Configure the load balancing adapter, if not already done, by repeating the preceding TCP/IP configuration steps for the load balancing adapter.
Open the Network Load Balancing Manager, shown in Figure 5.5, by selecting Start, Programs, Administrative Tools, Network Load Balancing Manager.
Figure 5.5. The Network Load Balancing Manager enables you to perform all administrative actions on NLB clusters.
Right-click on Network Load Balancing Clusters and select New Cluster from the context menu. The Cluster Parameters dialog box appears, as shown in Figure 5.6.
Figure 5.6. The Cluster Parameters dialog box allows you to create a new NLB cluster.
Enter the cluster's IP address, subnet mask, and cluster domain name. The IP address configured here is the cluster's virtual IP (VIP) address. Configure the cluster for unicast or multicast as desired. You can also configure IGMP multicast and remote control as desired. Click Next to continue.
On the Cluster IP Addresses dialog box, shown in Figure 5.7, enter any additional virtual IP addresses using the Add button. Click Next when you are ready to continue.
Figure 5.7. You can enter multiple virtual IP addresses as required by your services and applications.
On the Port Rules dialog box, shown in Figure 5.8, configure any port rules that are appropriate for your NLB cluster installation. Clicking the Add button opens the Add/Edit Port Rule dialog box, as shown in Figure 5.9.
Figure 5.8. Port rules are used to quickly control which traffic is load-balanced by the NLB cluster.
Figure 5.9. You can use port rules to allow or disallow traffic types and configure how the traffic is to be load-balanced.
Configure your port rules as discussed previously in the "Port Rules" and "NLB Cluster Operation Mode" sections of this chapter. Click OK to accept the new port rule. Click Next to continue creating the NLB cluster.
On the Connect dialog box, shown in Figure 5.10, type the name of the first cluster node and click the Connect button. After a brief period, all available network adapters are displayed in the bottom half of the dialog box. Select the network adapter that is to be used for load balancing and click Next to continue.
Figure 5.10. You need to select the proper network adapter to use as the load balancing adapter.
On the Host Parameters dialog box, shown in Figure 5.11, configure the host priority, cluster node dedicated IP address and subnet mask, and the initial state of the cluster node. The dedicated IP address is that of the cluster network adapter itself, and must be unique and in the same subnet as the cluster VIP. After entering all required information, click Finish to complete the NLB cluster creation process.
Figure 5.11. The Host Parameters dialog box contains critical configuration items that identify the specific cluster node.
After a brief period of time, you can see the newly created and fully converged cluster displayed in the Network Load Balancing Manager, as shown in Figure 5.12.
Figure 5.12. After a brief delay, the newly created NLB cluster shows up and is fully converged.
Of course, after you have created the NLB cluster, you should add at least one more cluster node to it. Step by Step 5.2 outlines the required steps to add additional nodes to the NLB cluster.
STEP BY STEP
5.2 Adding Cluster Nodes to the NLB Cluster
On the server that is to be added to the NLB cluster, select Start, Settings, Network Connections. The Network Connections window opens, displaying all configured connections on the computer.
Double-click the network adapter that you want to use as the cluster network adapter to open the network adapter Status dialog box.
Click the Properties button to open the network adapter Properties dialog box.
On the General tab, click the Install button to open the Select Network Component Type dialog box. Double-click on Service to open the Select Network Service dialog box, as shown in Figure 5.13.
Figure 5.13. You need to ensure that Network Load Balancing is enabled for the clustering adapter.
Select Network Load Balancing and click OK. Verify that the network adapter Properties dialog box now shows that Network Load Balancing is available for the clustering adapter, as shown in Figure 5.14.
Figure 5.14. After Network Load Balancing has been made available for the clustering adapter, you can quickly add the new cluster node.
From the cluster node where the NLB cluster was created, open the Network Load Balancing Manager.
If the Network Load Balancing Manager does not display the cluster, connect to it by right-clicking on Network Load Balancing Clusters and select Connect to Existing from the context menu.
Right-click on the NLB cluster and select Add Host To Cluster from the context menu.
On the Connect dialog box, shown in Figure 5.15, type the name of the additional cluster node and click the Connect button. After a brief period, all available network adapters are displayed in the bottom half of the dialog box. Select the network adapter that is to be used for load balancing and click Next to continue.
Figure 5.15. Ensure that you select the proper network adapter to use as the load balancing adapter.
On the Host Parameters dialog box, shown in Figure 5.16, configure the host priority, cluster node dedicated IP address and subnet mask, and the initial state of the cluster node. The dedicated IP address is that of the cluster network adapter itself, and must be unique and in the same subnet as the cluster VIP. After entering all required information, click Finish to complete the process of adding the node to the NLB cluster.
Figure 5.16. The Host Parameters dialog box contains critical configuration items that identify the specific cluster node.
After a brief period of time, you can see the updated and fully converged cluster displayed in the Network Load Balancing Manager, as shown in Figure 5.17.
Figure 5.17. After a brief delay, the updated NLB cluster is fully converged.
GUIDED PRACTICE EXERCISE 5.1
Using the Network Load Balancing Manager
Although it is possible to configure NLB settings directly on a network adapter using its Properties dialog box, as shown in Figure 5.14, you should not do so. The advantages to using the NLB Manager include not having to manually duplicate settings among all hosts in the cluster. You also avoid the potential for problems and unpredictable results that often occur when attempting to manage NLB settings manually.
In this exercise, you create a new NLB cluster. This Guided Practice helps reinforce the procedures discussed in this section. You should try completing this exercise on your own first. If you get stuck, or you would like to see one possible solution, follow these steps:
For the cluster host administrative network adapter, configure the TCP/IP properties by entering the IP address and subnet mask you have chosen. If the administrative network adapters are connected to each other only through a switch or hub, they do not need a DNS server IP address or default gateway IP address.
For the cluster host load balancing network adapter, configure the TCP/IP properties by entering the IP address, subnet mask, DNS server IP address, and default gateway IP address.
Open the Network Load Balancing Manager.
Right-click Network Load Balancing Clusters and select New Cluster from the context menu.
Enter the cluster IP address (cluster Virtual IP), subnet mask, and cluster domain name.
Configure the cluster for unicast or multicast as desired.
Configure IGMP multicast and cluster remote control as desired.
Add any additional cluster Virtual IP addresses as required using the Add button.
Configure port rules that are appropriate for your NLB cluster installation.
On the Connect dialog box, enter the name of the first cluster node and click the Connect button. From the displayed list, select the network adapter that is to be used for load balancing.
On the Host Parameters dialog box, configure the host priority, cluster node dedicated IP address and subnet mask, and the initial state of the cluster node. The dedicated IP address is that of the cluster network adapter itself, and must be unique and in the same subnet as the cluster VIP.
After entering all required information, click Finish to complete the NLB cluster creation process.
With the discussion of creating NLB clusters behind us, we now move forward and examine MSCS clusters, commonly just referred to as server clusters.