Planning and Implementing MSCS Clusters


Plan services for high availability.

  • Plan a high availability solution that uses clustering services.

Implement a cluster server.

I made the statement earlier in this chapter that NLB clusters were the "simpler of the two types to understand, deploy, and support." This is very much a true statement. As you saw, NLB clusters require no special hardware. In fact, the only real additional requirement beyond those for installing Windows Server 2003 is that each NLB cluster node should have two network adapters installed. The services and applications installed on the NLB cluster may have additional requirements, but for the most part NLB clusters are less expensive and easier to implement and maintain.

Clustering, however, has its advantages, especially in applications where uninterrupted access to data and services is a must. Typical situations in which you can expect to deploy clusters are in support of Exchange Server, SQL Server, file shares, and printer shares: all services to which businesses, clients, and users demand 24/7 access.

So, what's the difference between clustering and network load balancing? As you saw previously in Figure 5.2, NLB uses a group of between 2 and 32 servers and distributes inbound requests among them in a fashion that permits the maximum load handling with the minimum amount of downtime. Each NLB cluster node contains an exact copy of the static and dynamic content that every other NLB cluster node has; in this way, it doesn't matter which NLB cluster node receives the inbound request, except in the case of host affinity where cookies are involved. The NLB cluster nodes use heartbeats to keep track of the status of all nodes.

Clustering, on the other hand, uses a group of between 2 and 8 servers that all share a common storage device. Recall that a cluster resource is an application, service, or hardware device that is defined and managed by the cluster service. The cluster service (MSCS) monitors these cluster resources to ensure that they are operating properly. When a problem occurs with a cluster resource, MSCS attempts to correct the problem on the same cluster node. If the problem cannot be corrected, such as a service that cannot be successfully restarted, the cluster service fails the resource, takes the cluster group offline, moves it to another cluster node, and restarts the cluster group. MSCS clusters also use heartbeats to determine the operational status of other nodes in the cluster.
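The restart-then-failover sequence just described can be summarized in a short sketch. The following Python snippet is purely illustrative: the class, node names, and restart threshold are hypothetical, and MSCS itself is not driven this way; it simply models the decision of retrying a resource in place before moving its group to another node.

class ResourceGroup:
    def __init__(self, name, owner, nodes, restart_threshold=3):
        self.name = name
        self.owner = owner                  # node currently hosting the group
        self.nodes = nodes                  # possible owner nodes
        self.restart_threshold = restart_threshold

    def handle_failure(self, restart_attempt):
        # restart_attempt is a callable that returns True if a restart succeeds.
        for attempt in range(1, self.restart_threshold + 1):
            if restart_attempt():
                return "restarted on %s (attempt %d)" % (self.owner, attempt)
        # Restart attempts exhausted: fail the resource and move the group.
        new_owner = next(n for n in self.nodes if n != self.owner)
        self.owner = new_owner
        return "failed over to %s" % new_owner

group = ResourceGroup("SQL Group", owner="NODE1", nodes=["NODE1", "NODE2"])
print(group.handle_failure(restart_attempt=lambda: False))   # failed over to NODE2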

Two clustering modes exist:

  • Active/Passive One node in the cluster is online providing services. The other nodes in the cluster are online but do not provide any services or applications to clients. If the active node fails, the cluster groups that were running on that node are failed over to the passive node. The passive node then changes its state to active and begins to service client requests. The passive nodes cannot be used for any other purpose during normal operations because they must remain available for a failover situation. All nodes should be configured identically to ensure that, when failover occurs, no performance loss is experienced.

  • Active/Active One instance of the clustered service or application runs on each node in the cluster. If a node fails, that instance is transferred to one of the remaining running nodes. Although this clustering mode allows you to make use of all cluster nodes to service client requests, it can cause significant performance degradation if the cluster was already operating at a very high load at the time of the failure, as the quick calculation below illustrates.
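The performance concern with Active/Active comes down to simple arithmetic: when a node fails, its work lands on a surviving node in addition to that node's own work. A rough illustration in Python, with made-up utilization figures:

# Two-node Active/Active cluster; the utilization figures are hypothetical.
node_a_load = 0.45   # 45% utilization under normal operation
node_b_load = 0.70   # 70% utilization under normal operation

# If node A fails, node B must absorb A's workload on top of its own.
post_failover_load = node_a_load + node_b_load
print("Node B after failover: {:.0%}".format(post_failover_load))   # 115%, overloaded

This is why Active/Active nodes are typically sized so that any one node can carry the combined load of the instances that could fail over to it.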

You must choose from three cluster models when planning for your new cluster. They are discussed in the next section.

Cluster Models

Three distinctly different cluster models exist for configuring your new cluster. You must choose one of the three models at the beginning of your cluster planning because the chosen model dictates the storage requirements of your new cluster. The three models are presented in the following sections in order of increasing complexity and cost.

Single Node Cluster

The single node cluster model, as shown in Figure 5.18, has only one cluster node. The cluster node can make use of local storage or an external cluster storage device. If local storage is used, the local disk is configured as the cluster storage device. This storage device is known as a Local Quorum resource. A Local Quorum resource does not make use of failover and is most commonly used as a way to organize network resources in a single network location for administrative and user convenience. This model is also useful for developing and testing cluster-aware applications.

Figure 5.18. The single node cluster can be used to increase service reliability and also to prestage cluster resource groups.

Despite its limited capabilities, this model does offer the administrator some advantages at a relatively low entry cost:

  • The cluster service can automatically restart services and applications that might not be able to automatically restart themselves after a failure. This capability can be used to increase the reliability of network services and applications.

  • The single node can be clustered with additional nodes in the future, preserving the resource groups that you have already created. You only need to join the additional nodes to the cluster and configure the failover and move policies for the resource groups to ready the newly added nodes.

EXAM TIP

Creating single node clusters The New Server Cluster Wizard creates the single node cluster using a Local Quorum resource by default if the cluster node is not connected to a cluster storage device.


Single Quorum Cluster

The single quorum cluster model, as shown in Figure 5.19, has two or more cluster nodes that are configured such that each node is attached to the cluster storage device. All cluster configuration data is stored on a single cluster storage device. All cluster nodes have access to the quorum data, but only one cluster node runs the quorum disk resource at any given time.

Figure 5.19. The single quorum cluster shares one cluster storage device among all cluster nodes.

Majority Node Set Cluster

The majority node set cluster model, as shown in Figure 5.20, has two or more cluster nodes that are configured such that the nodes may or may not be attached to one or more cluster storage devices. Cluster configuration data is stored on multiple disks across the entire cluster, and the cluster service is responsible for ensuring that this data is kept consistent across all the disks. All quorum traffic travels in an unencrypted form over the network using server message block (SMB) file shares. This model provides the advantage of being able to locate cluster nodes in two geographically different locations because they do not all need to be physically attached to the shared cluster storage device.

Figure 5.20. The majority node set cluster model is a high-level clustering solution that allows for geographically dispersed cluster nodes.

Even if all cluster nodes are not located in the same physical location, they all appear as a single entity to clients. The majority node set cluster model provides the following advantages over the other clustering models:

  • Clusters can be created without cluster disks. This capability is useful in situations in which you need to make available applications that can fail over, but you have another means to replicate data among the storage devices.

  • Should a local quorum disk become unavailable for some reason, it can be taken offline and the rest of the cluster remains available to service client requests.

EXAM TIP

Don't look for Majority Node Quorums Although this is a new technology in Windows Server 2003, you should not expect to see questions dealing with this technology on the exam. Majority Node Quorums are high-level hardware-dependent solutions that will be provided as a complete package from an OEM.


There are, however, some requirements that you must abide by when implementing majority node set clusters to ensure they are successful:

  • A maximum of two sites can be used.

  • The cluster nodes at either site must be able to communicate with each other with less than a 500 millisecond response time in order for the heartbeat messages to accurately indicate the correct status of the cluster nodes.

  • A high-speed, high-quality WAN or VPN link must be established between sites so that the cluster's IP address appears the same to all clients, regardless of their location on the network.

  • Only the cluster quorum information is replicated between the cluster storage devices. You must provide a proven effective means to replicate other data between the cluster storage devices.

The primary disadvantage to this clustering model is that if a certain number of nodes fail, the cluster loses its quorum and it then fails. Table 5.1 shows the maximum number of cluster nodes that can fail before the cluster itself fails.

Table 5.1. Maximum Number of Node Failures a Majority Node Set Cluster Can Sustain

Number of Nodes in the Cluster    Maximum Number of Nodes That Can Fail
1                                 0
2                                 0
3                                 1
4                                 1
5                                 2
6                                 2
7                                 3
8                                 3

As shown in Table 5.1, the majority node set cluster will remain operational as long as a majority (more than half) of the initial cluster nodes remain available.
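The figures in Table 5.1 follow directly from this majority rule: the cluster needs more than half of its original nodes online, so it can survive the loss of at most n - (n // 2 + 1) nodes. A quick Python sketch that reproduces the table:

# Reproduce Table 5.1 from the majority rule: a majority node set cluster
# stays up only while more than half of its original nodes remain available.
def max_tolerable_failures(total_nodes):
    majority = total_nodes // 2 + 1   # smallest node count that is "more than half"
    return total_nodes - majority

for n in range(1, 9):
    print(n, max_tolerable_failures(n))
# Output pairs: 1 0, 2 0, 3 1, 4 1, 5 2, 6 2, 7 3, 8 3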

WARNING

Using majority node set clusters Majority node set clusters are most likely going to be the clustering solution of the future due to their capability to geographically separate cluster nodes, thus further increasing the reliability and redundancy of your clustering solution. Microsoft, however, at the present time recommends that you implement majority node set clustering only in very specific instances and only with close support provided by your Original Equipment Manufacturer (OEM), Independent Software Vendor (ISV), or Independent Hardware Vendor (IHV).


Now that you have seen the three cluster models available in Windows Server 2003, you should next consider the operation mode your cluster will use if it is a single quorum cluster or a majority node set cluster. Cluster operation modes are discussed in the next section.

Cluster Operation Modes

You can choose from four basic cluster operation modes when using a single quorum cluster or a majority node set cluster. These operation modes are specified by defining the cluster failover policies accordingly, as discussed in the next section. Following are the four basic cluster operation modes:

  • Failover Pair This mode of operation is configured by allowing applications to fail over between only two specific cluster nodes. Only the two desired nodes should be placed in the possible owner list for the service of concern.

  • Hot-standby (N+I) This mode of operation allows you to reduce the expense and overhead associated with dedicated failover pairs by consolidating the spare node for each failover pair into a single node, thus providing a single cluster node that is capable of taking over the applications from any active node in the event of a failover. Hot-standby is often referred to as active/passive, as discussed previously in this chapter. Hot-standby is achieved through a combination of the preferred owners list and the possible owners list: the node that will run the application or service under normal conditions is configured in the preferred owners list, and the spare (hot-standby) node is configured in the possible owners list.

  • Failover Ring This mode of operation has each node in the cluster running an instance of the application or service. In the event a node fails, the application or service on the failed node is moved to the next node in the sequence. The failover ring mode is achieved by using the preferred owner list to define the order of failover for a given resource group. The preferred owner list for each resource group should start on a different node so that the failover order is staggered around the cluster.

  • Random This mode of operation lets the cluster randomly determine the node to which a resource group fails over. You define the random failover mode by configuring an empty preferred owner list for each resource group. The sketch following this list shows how each of these modes maps onto the owner lists.
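The following Python sketch is illustrative only: the node names are hypothetical, and the dictionaries simply model how each operation mode translates into preferred and possible owner lists for a resource group, along with one simple way of picking a failover target from those lists.

configs = {
    # Failover pair: only two specific nodes may ever own the group.
    "failover_pair": {"preferred": ["NODE1"], "possible": ["NODE1", "NODE2"]},
    # Hot-standby (N+I): the group prefers its normal node; the shared spare
    # is listed as a possible owner.
    "hot_standby": {"preferred": ["NODE1"], "possible": ["NODE1", "SPARE"]},
    # Failover ring: all active nodes are preferred owners; each group's list
    # is rotated so its failover order starts on a different node.
    "failover_ring": {"preferred": ["NODE2", "NODE3", "NODE4", "NODE1"],
                      "possible": ["NODE1", "NODE2", "NODE3", "NODE4"]},
    # Random: an empty preferred owner list leaves the choice to the cluster.
    "random": {"preferred": [], "possible": ["NODE1", "NODE2", "NODE3", "NODE4"]},
}

def failover_target(config, failed_node, online_nodes):
    # Walk the preferred list first, then any remaining possible owners.
    candidates = config["preferred"] + [n for n in config["possible"]
                                        if n not in config["preferred"]]
    return next(n for n in candidates if n != failed_node and n in online_nodes)

print(failover_target(configs["hot_standby"], "NODE1", {"NODE2", "SPARE"}))   # SPARE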

Now that you've been introduced to failover, let's examine cluster failover policies; this is the topic of the next section.

Cluster Failover Policies

Although the actual configuration of failover and failback policies is discussed later in this chapter, it is important at this time to discuss them briefly so as to properly acquaint you with their use and function. Each resource group within the cluster has a prioritized listing of the nodes that are supposed to act as its host.

You can configure failover policies for each resource group to define exactly how each group will behave when a failover occurs. You must configure these three settings:

  • Preferred nodes An internal prioritized list of available nodes for resource group failovers and failbacks. Ideally, all nodes in the cluster are in this list, in the order of priority you designate.

  • Failover timing The resource can be configured for immediate failover if the resource fails, or the cluster service may be configured to try to restart the resource a specified number of times before failover actually occurs. The failover threshold value should be equal to or less than the number of nodes in the cluster.

  • Failback timing Failback can be configured to occur as soon as the preferred node is available or only during a specified period of time, such as when the load is at its lowest, to minimize service disruptions. A brief sketch of these three settings follows this list.
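A minimal sketch of the three settings as a data structure, assuming hypothetical names (this is not a cluster API, just an illustration of how the preferred-node order, restart threshold, and failback window relate):

from dataclasses import dataclass
from datetime import time
from typing import List, Optional, Tuple

@dataclass
class FailoverPolicy:
    preferred_nodes: List[str]                             # prioritized failover/failback order
    restart_threshold: int = 3                             # restarts to attempt before failing over
    failback_window: Optional[Tuple[time, time]] = None    # None means fail back immediately

    def may_fail_back(self, now):
        # Allow failback only inside the configured low-load window.
        if self.failback_window is None:
            return True
        start, end = self.failback_window
        return start <= now <= end

policy = FailoverPolicy(
    preferred_nodes=["NODE1", "NODE2", "NODE3"],
    restart_threshold=3,
    failback_window=(time(1, 0), time(4, 0)),              # fail back between 01:00 and 04:00
)
print(policy.may_fail_back(time(2, 30)))                   # True
print(policy.may_fail_back(time(14, 0)))                   # False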

Creating a Cluster

Now that you have a good introduction into what clustering is and how it works, you are ready to create the cluster and install the first node in the cluster. As with the NLB cluster, you should do a bit of preparation before actually starting the configuration process to ensure that your cluster is created successfully.

Any good implementation needs a good plan. To successfully implement your MSCS cluster, you need to determine and document the following pieces of information:

  • All services and applications that will be deployed on the cluster.

  • Failover and failback policies for each service or application that is to be deployed.

  • The quorum model to be used.

  • The configuration and operating procedures for the shared storage devices to be used.

  • All hardware to ensure that it is listed on the Hardware Compatibility List (HCL). MSCS clusters have higher hardware requirements than NLB clusters.

  • The clustering and administrative IP address and subnet information, including the cluster IP address itself.

  • The cluster name, no more than 15 characters long so that it complies with NetBIOS naming requirements.

After you've configured and prepared your servers and shared storage device, you are ready to move forward with the creation of the MSCS cluster. Any installation and configuration required for the shared storage device must be completed in accordance with the manufacturer's or vendor's specifications to ensure successful deployment.

Step by Step 5.3 shows how to create a new MSCS cluster.

STEP BY STEP

5.3 Creating a New MSCS Cluster

  1. Open the Active Directory Users and Computers console and create a domain user account to be used by the MSCS service. Configure the password to never expire. Later during the cluster creation process, this user account will be given Local Administrator privileges on all cluster nodes and will also be delegated cluster-related user rights in the domain, including the Add Computer Accounts to the Domain user right. Figure 5.21 shows an example of what your summary page might look like after creating the domain user account.

    Figure 5.21. You need to ensure the cluster service domain user account's password is set to never expire.

    NOTE

    Creating user accounts If you need a refresher on creating user accounts, check out MCSE TG 70-290: Managing and Maintaining a Microsoft Windows Server 2003 Environment (2003, Que Publishing; ISBN: 0789729350).

  2. Ensure that the load balancing and administrative network adapters on the first cluster node are configured correctly, as discussed previously and in Step by Step 5.1.

  3. Open the Cluster Administrator by selecting Start, Programs, Administrative Tools, Cluster Administrator. You should be prompted with the Open Connection to Cluster dialog box, as shown in Figure 5.22. If not, click File, Open Connection. Select Create New Cluster and click OK to continue.

    Figure 5.22. You need to create a new cluster because you don't already have an existing one to open.

  4. Click Next to dismiss the opening dialog box of the New Server Cluster Wizard.

  5. On the Cluster Name and Domain dialog box, shown in Figure 5.23, select the cluster domain from the drop-down list. Enter the cluster name in the space provided. Click Next to continue.

    Figure 5.23. Only computers that are members of the selected domain can join the cluster.

  6. On the Select Computer dialog box, shown in Figure 5.24, select the computer that will be the first node in the new cluster. Click Next to continue.

    Figure 5.24. Enter or browse to the name of the first node of the cluster.

  7. The Analyzing Configuration dialog box, shown in Figure 5.25, appears and runs for a short period of time. You can continue as long as no errors or warnings occur. You can examine the log file by clicking the View Log button. The log file is shown in Figure 5.26; notice that a local quorum is being created in this cluster. Click Next to continue after you are done viewing the output.

    Figure 5.25. The Analyzing Cluster process alerts you to any show-stoppers encountered with your selected node.

    Figure 5.26. The log file, because it is very detailed, can yield some useful information.

  8. On the IP Address dialog box, shown in Figure 5.27, enter the IP address that is being assigned to the cluster. Click Next to continue.

    Figure 5.27. Ensure the IP address entered here is correct and in the same IP subnet as the IP addresses configured for the load balancing network adapter.

  9. On the Cluster Service Account dialog box, shown in Figure 5.28, enter the proper credentials for the cluster domain user account you created previously. Click Next to continue.

    Figure 5.28. You need to supply the cluster domain user account name and password to continue the cluster creation process.

  10. On the Proposed Cluster Configuration dialog box, shown in Figure 5.29, you can review the cluster configuration before continuing. Clicking the Quorum button allows you to change the type of quorum being used, as shown in Figure 5.30. When done, click Next to continue.

    Figure 5.29. You can review the selected cluster configuration before creating it.

    Figure 5.30. You can change the quorum type by selecting one of the available options if desired.

  11. If all goes well (and, of course, it will), you should see results on the Creating the Cluster dialog box like those shown in Figure 5.31. Click Next to continue.

    Figure 5.31. The Creating the Cluster dialog box informs you of the status of cluster creation.

  12. Click Finish to complete the New Server Cluster Wizard.

  13. Your new cluster appears in the Cluster Administrator, as shown in Figure 5.32.

    Figure 5.32. The Cluster Administrator shows your new cluster now.


Congratulations, you just created your first cluster! That wasn't so difficult after you got all the preliminaries out of the way, was it? One thing you should change immediately, however, is the operational mode of the cluster node network adapters. By default, both the cluster and administrative network adapters are configured to pass both types of traffic; this is undesirable and should be corrected as soon as possible. To correct this setting, locate the Networks node of the Cluster Administrator, as shown in Figure 5.33. Right-click on each adapter to open its properties dialog box, as shown in Figure 5.34. Configure the adapter according to its role in the cluster.

Figure 5.33. You should change the network adapter operational mode as soon as possible.

Figure 5.34. Select Client Access Only for the cluster network adapter.

You are now ready to add a second node to your new cluster. Step by Step 5.4 outlines this procedure.

STEP BY STEP

5.4 Adding a Node to an MSCS Cluster

  1. Ensure that the load balancing and administrative network adapters on the new cluster node are configured correctly, as discussed previously and in Step by Step 5.1.

  2. Open the Cluster Administrator. If the cluster does not appear in the Cluster Administrator, click File, Open Connection and supply the required information to connect to the cluster.

  3. Right-click on the cluster name in the Cluster Administrator and select New, Node from the context menu.

  4. Click Next to dismiss the opening dialog box of the Add Nodes Wizard.

  5. On the Select Computers dialog box, shown in Figure 5.35, enter the computer names that are to be joined to the cluster.

    Figure 5.35. You can have up to eight nodes in a Windows Server 2003 cluster.

  6. The Analyzing Configuration dialog box appears for the new node(s) providing information about their suitability to join the cluster. Click Next to continue.

  7. On the Cluster Service Account dialog box, enter the correct password for the cluster service account. Click Next to continue.

  8. On the Proposed Cluster Configuration dialog box, you can review the cluster configuration before continuing. Click Next to continue.

  9. The Adding Nodes to the Cluster dialog box appears, detailing the status of the node addition. Click Next to continue.

  10. Click Finish to complete the Add Nodes Wizard.

  11. Your new cluster node appears in the Cluster Administrator, as shown in Figure 5.36.

    Figure 5.36. The Cluster Administrator displays the newly added cluster node.


EXAM TIP

Hardcore hardware Although this topic is way beyond the scope of this exam, you need to know how to set up and configure the storage devices required for MSCS clustering implementations. For more information, be sure to see Server+ Certification Training Guide (2001, Que Publishing; ISBN: 0735710872).


Now that you know how to create MSCS clusters, let's move forward and examine monitoring and managing your high availability solutions.

GUIDED PRACTICE EXERCISE 5.2

In this exercise, you create a new MSCS cluster. This Guided Practice helps reinforce the preceding discussion.

You should try completing this exercise on your own first. If you get stuck, or you would like to see one possible solution, follow these steps:

  1. From the Active Directory Users and Computers console, create a domain user account to be used by the MSCS service. Configure the password to never expire.

  2. For the cluster host administrative network adapter, configure the TCP/IP properties by entering the IP address and subnet mask you have chosen. If the administrative network adapters are connected to each other only through a switch or hub, they do not need a DNS server IP address or default gateway IP address.

  3. For the cluster host load balancing network adapter, configure the TCP/IP properties by entering the IP address, subnet mask, DNS server IP address, and default gateway IP address.

  4. Open the Cluster Administrator and create a new cluster.

  5. On the Cluster Name and Domain dialog box, select the cluster domain from the drop-down list and enter the cluster name.

  6. On the Select Computer dialog box, select the computer that will be the first node in the new cluster.

  7. View the results presented in the Analyzing Configuration dialog box.

  8. On the IP Address dialog box, enter the cluster Virtual IP address.

  9. On the Cluster Service Account dialog box, enter the account name and password for the cluster service account you created previously.

  10. On the Proposed Cluster Configuration dialog box, either accept the proposed configuration or change the quorum type being used.



