Server Clusters


Server clusters are the pure availability form of Windows Clustering, and they form the basis for a SQL Server 2000 failover cluster. If you do not configure it properly, your failover cluster installation could be built on a bed of quicksand.

More Info

For more information on server clusters, you can go to the Cluster Technologies Community Center, located at http://www.microsoft.com/windowsserver2003/community/centers/clustering/default.asp. This site covers Microsoft Windows NT 4.0, Windows 2000 Server, and Windows Server 2003. If you want direct access to white papers in the Community Center, they can be found at http://www.microsoft.com/windowsserver2003/community/centers/clustering/more_resources.asp#MSCS. There are many best practices papers, such as one on security, that cover topics not mentioned in this chapter.

Planning a Server Cluster

Planning your server cluster is in many ways more important than the implementation itself: most issues that eventually crop up with a server cluster configuration stem from a missed configuration point.

On the CD

To assist you in your planning, use the document Server_Cluster_Configuration_Worksheet.doc.

Types of Server Clusters

Starting with Windows Server 2003, there are two types of server clusters: a standard server cluster, which is the same technology that can be found in Windows 2000 Server, and a new type called a majority node set (MNS) cluster. Both utilize many of the same semantics behind the scenes and behave similarly, but there are a few main differences.

The first difference is the quorum resource. The quorum not only contains the definitive and up-to-date server cluster configuration, but it is also used in the event a split-brain scenario occurs. A split-brain scenario can happen if two or more nodes in your server cluster lose all public and private network connectivity. At that point, you might have different partitions of your server cluster. The node owning the quorum resource gains ownership of all clustered resources, and the nodes that are not visible to the partition owning the quorum are evicted.

Both types require a quorum, but the mechanism is different. For a standard server cluster, the quorum is a disk on the shared disk array that is accessed by one server at a time. Under an MNS cluster, the quorum is not a disk at all; nothing is shared. This configuration is only found on an MNS cluster under Windows Server 2003. The quorum is actually located on each node's system disk in %SystemRoot%\Cluster\QoN.%ResourceGUID%$\%ResourceGUID%$\MSCS. This directory and its contents should not be modified in any way. The other nodes access the quorum through a share named \\%NodeName%\%ResourceGUID%$ created with local access. Again, because all nodes of the cluster use this share, do not modify the permissions for the share, the Administrators group, or the Cluster Service account itself.

Tip

If you are implementing an MNS cluster, you should use RAID on your system disks to ensure the availability of your quorum. Do not use Integrated Device Electronics (IDE) disks.

Note

If the node owning the resources is still up and it is, say, the first node in the cluster, the cluster might appear to function until a reboot of that node. If all of the other nodes are still unavailable, the MNS cluster will not start and you might have to force the quorum. When nodes go offline, you might see a message pop up, alerting you that a delayed write to Crs.log failed.

SQL Server supports both types of server clusters. If you use an MNS cluster, you have the immediate benefit of not worrying about one more shared disk taking up a drive letter. It also gives you another possible geographic solution, assuming your hardware vendor certifies the solution. You are also protecting yourself from physical quorum disk failures bringing down the cluster. However, because losing the wrong number of nodes causes the entire solution to go down, it might not be a good choice. Table 5-2 shows the number of node failures tolerated. If your vendor builds a geographic solution based on MNS clusters, it might be a good thing, but for a local SQL Server cluster, you might be better off implementing a standard server cluster. You must weigh your options.

Table 5-2: Numbers of Nodes and Failure Tolerance in a Majority Node Set Cluster

Nodes    Maximum Node Failures Tolerated
1        0
2        0
3        1
4        1
5        2
6        2
7        3
8        3
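
The tolerances in Table 5-2 follow from a simple majority rule: an MNS cluster continues to run only while more than half of its configured nodes are available. As a quick way to derive the column yourself, the arithmetic works out as follows:

    majority needed     = (number of nodes / 2, rounded down) + 1
    failures tolerated  = number of nodes - majority needed

    Example for a five-node MNS cluster:
    majority needed     = (5 / 2, rounded down) + 1 = 3
    failures tolerated  = 5 - 3 = 2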

More Info

For more information on MNS clusters, see the white paper Server Clusters: Majority Node Set Quorum at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/windowsserver2003/deploy/confeat/majnode.asp.

Disk Subsystem

For up to a two-node Windows 2000 Advanced Server and Windows Server 2003 Enterprise Edition 32-bit server cluster, the shared disk subsystem can be either SCSI or fibre-based. For anything more than two nodes, all Datacenter editions, and all 64-bit editions, you must use fibre.

Cluster Service Account

The service account used to administer your server cluster does not need to be a domain administrator. In fact, you should not make the cluster administrator a domain administrator because that is an escalation of privileges. Because the cluster administrator account needs to be able to log into SQL Server, you could expose yourself if someone maliciously impersonated that account. The Cluster Service account must be a domain account that is a member of the local Administrators group on each node. During the installation of the server cluster, the account is configured with the proper rights, but if you ever need to manually re-create the account on each node, here are the privileges required:

  • Act As Part Of The Operating System

  • Back Up Files And Directories

  • Increase Quotas

  • Increase Scheduling Priority

  • Load And Unload Device Drivers

  • Lock Pages In Memory

  • Log On As A Service

  • Restore Files And Directories

Of this list, only Lock Pages In Memory, Log On As A Service, and Act As Part Of The Operating System are not granted when you place an account in the Administrators group on a server. As part of that group, the account also inherits the rights of Manage Auditing And Security Log, Debug Programs, and Impersonate A Client After Authentication (Windows Server 2003 only). Even if you restrict these privileges to other administrators, these rights must be granted to the Cluster Service account.
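
If you ever need to re-create these rights from a command prompt rather than through Local Security Policy, the Ntrights.exe tool from the Windows 2000 Server Resource Kit can grant them on each node. The following is a sketch only: it assumes the Resource Kit tool is installed, DOMAIN\ClusterSvc is a placeholder for your Cluster Service account, and the privilege constants correspond to the rights listed above (for example, SeTcbPrivilege is Act As Part Of The Operating System, SeServiceLogonRight is Log On As A Service, and SeLockMemoryPrivilege is Lock Pages In Memory).

    REM Grant the required rights to the Cluster Service account (run on each node)
    ntrights +r SeTcbPrivilege -u DOMAIN\ClusterSvc
    ntrights +r SeBackupPrivilege -u DOMAIN\ClusterSvc
    ntrights +r SeRestorePrivilege -u DOMAIN\ClusterSvc
    ntrights +r SeIncreaseQuotaPrivilege -u DOMAIN\ClusterSvc
    ntrights +r SeIncreaseBasePriorityPrivilege -u DOMAIN\ClusterSvc
    ntrights +r SeLoadDriverPrivilege -u DOMAIN\ClusterSvc
    ntrights +r SeLockMemoryPrivilege -u DOMAIN\ClusterSvc
    ntrights +r SeServiceLogonRight -u DOMAIN\ClusterSvc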

Tip

If you have multiple clusters, you might want to use the same service account for each to ease administration.

Tip

In some companies, security administrators lock down the rights that an account can use on a server. Without the listed privileges, your server clusters will not work. Work with your security or network administrators to ensure that the service accounts used for the server cluster and for SQL Server have the rights they need.

Server Clusters, the Hardware Compatibility List, and the Windows Catalog

Whether you are implementing a standard server cluster or an MNS cluster under Windows Server 2003, your solution must be on the Hardware Compatibility List (HCL) or, going forward, in the Windows Catalog. The HCL can be found at http://www.microsoft.com/hwdq/hcl, and the Windows Catalog for server products can be found at http://www.microsoft.com/windows/catalog/server2/default.aspx?subID=22. Remember that the entire solution must be on the HCL: server nodes, network cards, the SAN or direct attached storage (DAS) device, driver versions, and so on.

In the Windows Catalog, there are two subcategories that you need to check under the main Cluster Solutions category: Cluster Solution and Geographically Dispersed Cluster Solution. In the HCL, the categories that you need to check are Cluster, Cluster/DataCenter 2-Node, Cluster/DataCenter 4-Node, Cluster/Geographic (2-Node Advanced Server), Cluster/Geographic (2-Node Datacenter Server), Cluster/Geographic (4-Node Datacenter Server), and Cluster/Multi-Cluster Device. There are other cluster-related categories on the HCL, but they are for vendors only.

What is the purpose of these lists? The goal is to ensure that you have a known good platform for your server cluster solutions. Because you are combining multiple products (including drivers) to work with Windows, they need to work well together. The validation process involves low-level device tests of both the storage and the network as well as higher-level stress tests of the cluster under extreme conditions, such as many failures or heavy load. The Windows operating system has specific requirements of the hardware, particularly in a server cluster environment where data disks are physically connected to and visible to all the nodes in the cluster. Configurations that have not passed the server cluster test process are not guaranteed to operate correctly. Because disks are visible to multiple nodes, this can lead to corruption of application data or instability where the connection to the disks is not reliable.

Note

Vendors must submit their solutions for testing; Microsoft does not dictate which components they can use. With Windows Server 2003, there is a new qualification process called the Enterprise Qualification Process. The difference you will notice is that the lists displayed in the HCL or the Windows Catalog point back to the vendors' own Web sites, so it is up to the vendor to make sure the information is accurate.

Should you choose to ignore the HCL or Windows Catalog and put a noncompliant cluster into your production environment, your solution will technically be considered unsupported. Microsoft Product Support Services (PSS) will assist you as best they can, but they might not be able to resolve your issues because your solution is nonstandard. If you do buy a supported solution, what does supported mean? Because Microsoft cannot debug or support software from third-party vendors, or support and troubleshoot complex hardware issues alone, who supports what?

Microsoft supports qualified configurations, along with the vendors where appropriate, to provide root cause analysis. There are some deviations allowed, which are detailed in Table 5-3. When necessary, Microsoft escalates any issues found in the Windows operating system through the Windows escalation process. This process is used to ensure that any required hotfixes for the Windows operating system are provided to the customer. Microsoft supports and troubleshoots the Windows operating system and the server cluster components to determine the root cause. Part of that process might involve disabling non-Microsoft products in an effort to isolate issues to specific components or reduce the number of variables, but only if it does not impact the environment. For example, it would be possible to disable quota management software, but disabling storage multipath software would require infrastructure changes in the SAN; even worse, disabling volume management software might mean that the data is no longer available to applications.

If the analysis points toward non-Microsoft components in the configuration, the customer must work with Microsoft to engage the appropriate vendor within the context of agreements between the customer and the vendor. Microsoft does not provide a support path to vendors, so it is important that all components in the system are covered by appropriate service agreements between the customer and the third-party vendors.

Table 5-3: Supported Deviations from Windows Catalog

Component: Server model
Item: Model number
Deviations allowed: Number of processors; memory size (unless it is lower than what is required for Windows); number of network cards (but there must be at least two physically separate network cards); processor speed (can be different by up to 500 MHz only)

Component: Host bus adapter (Fibre Channel only)
Items: Driver version (miniport or full port); firmware version
Deviations allowed: See note

Component: Multipath software (MPIO)
Item: Driver version
Deviations allowed: See note

Component: Storage controller
Item: Firmware version
Deviations allowed: See note

Component: Multisite interconnect for geographically dispersed clusters
Items: Network technology (Dense Wavelength Division Multiplexing [DWDM], Asynchronous Transfer Mode [ATM], and so on); switch type and firmware version
Deviations allowed: No deviations allowed; latency between sites must be less than 500 ms round trip

Component: Geographically dispersed cluster software components
Items: Driver version; software version
Deviations allowed: No deviations allowed

Note

Third-party vendors might update versions of the drivers and firmware that deviate from the Windows Catalog and HCL listings. These can be qualified for your server clusters, but a major version change for any component requires the vendor to submit the solution for requalification, which will subsequently be listed separately. A minor version change does not require a resubmission of the clustered solution, but your vendor must provide a statement listing what combinations are supported. A vendor typically tests a specific combination of driver version, HBA firmware, multipath software version, and controller version to make up the solution.

Because complete vendor-qualified combinations are the only supported solutions, you cannot mix and match versions. The same rules apply for hotfixes and service packs released by Microsoft. A qualification for Windows 2000 Server implies all service packs and hotfixes, but vendors might not have tested their components against the latest service packs or with the hotfixes. Vendors should provide a statement indicating whether different combinations are supported and tested.

If you plan on connecting multiple servers (either other cluster nodes or stand-alone servers) on a single SAN, you also need to consult the Cluster/Multi-Cluster Device list of the HCL. Although there are many SANs out there, not all are certified for use with a server cluster, and not all can support multiple devices without interrupting others in the chain.

Under no circumstances can you mix and match components from various clusters or lists to make up your cluster solution. Consider the following examples:

  • The following would not be a supported solution: Node 1 from vendor A listed under one of the System/Server categories on the HCL (or even listed as a node under the Cluster category), node 2 from vendor B listed under one of the System/Server categories on the HCL (or even listed as a node under the Cluster category), a Fibre Channel card from the Storage/FibreChannel Adapter (Large Memory Category), and a SAN from the Cluster/Multi-Cluster Device.

  • If you use two servers that are on the Cluster list, but you take a different fibre controller from the Cluster/FibreChannel Adapter list (that is, change the base configuration of the cluster solution listed on the Cluster list), and use a SAN that is not on the HCL at all, this would be an unsupported cluster.

  • You implement a Unisys ES-7000 as a one-node cluster because you want to use 16 processors for one Windows installation and plan on adding in the other node later. You then could not, say, go get another type of server and throw it into the cluster as a second node. Assuming your ES-7000 and disk array are part of what would be on the HCL as one of the Cluster categories, you could either add more processors to your ES-7000 and then carve out another server that way because it is a partitionable server, or buy another ES-7000.

    More Info

    The following are helpful Knowledge Base articles, located at http://support.microsoft.com/, about the Microsoft support policy for server clusters:

    • 309395: The Microsoft Support Policy for Server Clusters and the Hardware Compatibility List

    • 304415: Support for Multiple Clusters Attached to the Same SAN Device

    • 327831: Support for a Single Server Cluster That Is Attached to Multiple Storage Area Networks (SANs)

    • 280743: Windows Clustering and Geographically Separate Sites

    • 327518: The Microsoft Support Policy for a SQL Server Failover Cluster

Certified Cluster Applications

If you use Windows 2000 Advanced Server or Windows Server 2003 Enterprise Edition, you are not required to check that the applications you are running on your cluster are certified to work in a cluster. With the Datacenter editions, the opposite is true: the application must be certified for use with Windows Datacenter. Applications that earn the Certified for Windows logo are listed in the Windows Catalog, and you can also consult Veritest's Web site (http://cert.veritest.com/CfWreports/server/).

Ports, Firewalls, Remote Procedure Calls, and Server Clusters

Server clusters can work with firewalls, but you need to understand which ports to open. A server cluster uses User Datagram Protocol (UDP) port 3343 for intracluster, or heartbeat, communication. Because this is a well-known port registered with the Internet Assigned Numbers Authority (IANA), you need to ensure that it cannot be targeted by a denial of service attack that could interfere with, and potentially stop, your server cluster.

A server cluster is dependent on remote procedure calls (RPCs) and the services that support them. You must ensure that the RPC service is always up and running on your cluster nodes.
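
As a quick sanity check from a command prompt, you can confirm that the heartbeat port is in use and that the RPC service is running. This is a sketch using standard tools: netstat and findstr are built in, and sc.exe is built into Windows Server 2003 (on Windows 2000 it ships with the Support Tools).

    REM Confirm that the cluster heartbeat port (UDP 3343) is in use
    netstat -an | findstr 3343

    REM Confirm that the Remote Procedure Call (RPC) service is running
    sc query RpcSs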

More Info

For more information on ports, firewalls, and RPCs, reference the following resources:

  • Knowledge Base article 154596: Configure RPC Dynamic Port Allocation to Work with Firewall

  • Knowledge Base article 258469: Cluster Service May Not Start After Restricting IP Ports for RPC

  • Knowledge Base article 300083: Restrict TCP/IP Ports on Windows 2000

  • Knowledge Base article 318432: BUG: Cannot Connect to a Clustered Named Instance Through a Firewall

  • TCP and User Datagram Protocol Port Assignments (from the Windows 2000 Server Resource Kit)

  • On Microsoft TechNet, search for TCP and UDP Port Assignments.

Geographically Dispersed Clusters

If you want to physically separate your cluster nodes to provide protection from site failure, you need a certified geographic cluster solution on the HCL or in the Windows Catalog. You cannot take two or more nodes, separate them, and implement a server cluster. That configuration is completely unsupported.

Antivirus Programs, Server Clusters, and SQL Server

In the past few years, use of antivirus software on servers has increased dramatically. In many environments, this software is installed as a de facto standard on all production servers. In a clustered environment and with SQL Server, evaluate whether antivirus software is really needed, because antivirus programs are typically not cluster-aware: they do not understand how disks are handled in a clustered environment and might interfere with cluster operations. The server running SQL Server itself, unless it is also hosting, say, a Web server or files for a solution, should technically have no need for virus scanning, because the database-related .mdf, .ndf, and .ldf files are always managed by SQL Server; they are not like a Microsoft Word or other text file that could be handled by an arbitrary application or that could contain executable code or macros.

If you do need to place antivirus software on your server clusters, use the program's exclusion filters to exclude the \Mscs directory on the quorum disk for a standard server cluster. For SQL Server, exclude all data and log directories on the shared disk array. If you do not filter out the SQL Server files, you might have problems on failover. You might also want to exclude the \Msdtc directory used by MS DTC.

For example, consider a situation in which a failover occurs and no exclusion filter is in place. When the shared disks are recognized by another node, the virus scanner scans them, preventing the failover from completing and SQL Server from recovering. The larger your databases, the worse this problem becomes.

More Info

See Knowledge Base article 250355, Antivirus Software May Cause Problems with Cluster Services, at http://support.microsoft.com .

Server Clusters, Domains, and Networking

Network requirements are a source of contention among some people considering a server cluster. These requirements are often misunderstood. No clustered solution can work without domain connectivity. If you cannot guarantee that the nodes will have domain access, do not attempt to implement a server cluster.

Network Configuration

The following are the requirements for configuring your network cards for use with a server cluster:

  • To configure a server cluster you will need the following dedicated IP addresses: one IP address for each node on the public network, one IP address for each node on the private network, one IP address for the server cluster itself, and at least one IP address for each SQL Server 2000 instance that will be installed. A sample addressing plan appears after this list.

  • All cluster nodes must have domain connectivity. There is no way to implement a server cluster without it. If your cluster is going to be in a demilitarized zone (DMZ), you must have a domain controller in the DMZ or open a hole to your corporate network for the clusters.

  • All cluster nodes must be members of the same domain.

  • The domain the nodes belong to must meet the following standards:

    • There must be redundant domain controllers.

    • There must be at least two domain controllers configured as global catalog servers.

    • If you are using Domain Name System (DNS), there must be redundant DNS servers.

    • DNS servers must support dynamic updates.

    • If the domain controllers are the DNS servers, they should each point to themselves for primary DNS resolution and to others for secondary resolution.

  • You should not configure your cluster nodes as domain controllers; instead, you should have dedicated domain controllers. If you do configure your nodes in any combination of a primary and backup domain controller, it could have direct implications for any SQL Server environment, as noted in Knowledge Base articles 298570, BUG: Virtual SQL Server 2000 Installations May Fail if Installed to Windows 2000 Domain Controllers, and 281662, Windows 2000 and Windows Server 2003 Cluster Nodes As Domain Controllers. In general, you should not install SQL Server 2000 on a domain controller.

  • Windows Server 2003 does not require NetBIOS, so you can disable NetBIOS support. However, you need to know the implications of doing this and ensure that nothing else you have running on your cluster needs NetBIOS name resolution. This includes Cluster Administrator, which uses NetBIOS to enumerate the clusters in the domain, meaning you cannot use the browse functionality. By default, NetBIOS is enabled, but you can disable it on each IP Address resource's properties.

  • Use two (or more) completely independent networks that connect the two servers and can fail independently of each other to ensure that you have no single points of failure. This means that the public and private networks must have separate paths (including switches, routers, and hubs) and physically independent hardware. If you are using a multiport network card to serve both the private and public networks, it does not meet the stated requirement.

  • Each individual network used for a server cluster must be configured as a subnet that is distinct and different from the other cluster networks. For example, you could use 172.10.x.x and 172.20.x.x, both of which have subnet masks of 255.255.0.0. You should use separate subnets for the networks due to the implications of multiple adapters on the same network, as noted in Knowledge Base article 175767, Expected Behavior of Multiple Adapters on Same Network.

  • If desired, you can use a crossover cable to connect your cluster nodes for intracluster communications. Using a regular network is recommended.

  • If you are implementing a geographically dispersed cluster, the nodes can be on different physical networks. The private and public networks, on the other hand, must appear to the cluster as a single, nonrouted LAN using something like a VLAN.

  • Round-trip time between cluster nodes for the heartbeat must be less than 500 ms for all types of server clusters, local or geographic.
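
The following is a sample addressing plan for a two-node cluster that will host a single SQL Server 2000 instance, illustrating the IP address requirements in the first bullet above. All names and addresses are placeholders; substitute values appropriate for your network, and remember to add one more public address if you later give MS DTC its own group.

    Public network (172.20.x.x, subnet mask 255.255.0.0):
      Node 1 (CLUSNODE1)           172.20.1.11
      Node 2 (CLUSNODE2)           172.20.1.12
      Server cluster (CLUSTER1)    172.20.1.10
      SQL Server virtual server    172.20.1.20

    Private network (10.10.10.x, subnet mask 255.255.255.0):
      Node 1 heartbeat             10.10.10.1
      Node 2 heartbeat             10.10.10.2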

Network Card Configuration

To configure your network cards for use in a server cluster, address the following public network and private network configuration points.

Public Network Configuration

Consider the following points when configuring the network cards on your public network:

  • The network card should be set to the maximum speed supported by both the card and its underlying network, and the setting must be the same on each public adapter. Do not use autodetect.

  • You cannot enable Network Load Balancing on the same server or network card used for a server cluster.

  • Both a primary and secondary DNS must be configured.

  • Static IP addresses are strongly recommended. Do not use Dynamic Host Configuration Protocol (DHCP).

  • The public network should be configured to allow all cluster communications (mixed traffic) so that it provides redundancy for your private network. Because private network redundancy is a requirement of a server cluster, this is the recommended implementation method.

Private Network (Heartbeat) Configuration

Consider the following when configuring your private network dedicated for cluster communications:

  • The network card should be set to the maximum speed supported by both the card and its underlying network, and the setting must be the same on each private adapter. Do not use autodetect.

  • Do not set a default gateway.

  • Disable NetBIOS. This can be found on the WINS tab of the Advanced Properties dialog box for the network card.

  • Only TCP/IP should be enabled. No other protocol or service (such as Network Load Balancing or sharing) should be checked in the properties of the network card.

  • Although teamed network cards are supported in a server cluster, they are not supported for the private network that is used exclusively for internal cluster communication.

  • For private network redundancy, configure the public-facing network to handle both private and public traffic.

  • For a private network, the valid blocks of IP addresses are as follows:

    • 10.0.0.0

    • 172.16.0.0

    • 192.168.0.0

  • You can use a crossover cable, but a regular network is recommended. If you use a crossover cable between the cluster nodes, you must still use static IP addresses.

  • If you are using a crossover cable with Windows 2000, add the following registry key to each node:

     HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
     Value Name: DisableDHCPMediaSense
     Data Type: REG_DWORD
     Data: 1

    This disables the TCP/IP stack destruction feature of Media Sense.
    Warning

    Do not modify the registry entry for Media Sense on Windows Server 2003 clusters.

  • Disable autosensing or automatic detection of the network speed, because it can cause problems on the private network.

  • As noted above, the speed setting must be the same on each private adapter; setting the private network to 10 Mbps and half duplex provides sufficient bandwidth for heartbeat traffic.

Implementing a Server Cluster

Whether you are implementing a standard server cluster or an MNS cluster, you will have to handle some preconfiguration and postconfiguration tasks in addition to installing the cluster itself.

Preconfiguration Tasks

Before you install your server cluster, you must perform some tasks to make sure you are prepared.

On the CD

Use the document Server_Cluster_Pre-Installation_ Checklist.doc to ensure that you are ready to install your server cluster. Also take this time to fill out the worksheet Node_Configuration_Worksheet.doc for each node s configuration.

Configuring Network Cards

Configure your network cards per the recommendations given earlier.

Network Cards Used on the Public Network

To configure a network card for use on a public network, follow these steps:

  1. From your desktop, right-click My Network Places, and select Properties. Under Windows Server 2003, you might need to enable the Classic Start menu view from the Properties menu on the taskbar to see this.

  2. In the Network And Dial-up Connections (Windows 2000 Server) or Network Connections (Windows Server 2003) window, select the network card. Rename this to something recognizable and usable, such as Public Network, by selecting it, right-clicking, and selecting Rename. This is the same value on all nodes.

  3. Select the Public Network network card, right-click, and select Properties.

  4. Select Internet Protocol (TCP/IP) and click Properties. Set the static IP address of the card to a valid IP address on the externally facing network. This address is different for each node of the server cluster. These addresses will all be on the same subnet, but a different subnet from the private network. Click OK.

  5. Make sure the Subnet mask is correct.

  6. Enter your default gateway.

  7. Enter your primary and secondary DNS servers.

  8. Click OK to return to the Public Network Properties dialog box. Click Configure.

  9. In the Properties dialog box for the network card, select the Advanced tab, shown in Figure 5-12.

  10. For the External PHY property, set the value for the correct network speed. You set this to be the same on each node. Click OK.

    Figure 5-12: The Advanced tab of the Properties dialog box for a network card.

  11. Click OK to close the Public Network properties dialog box.
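
If you prefer to script the TCP/IP portion of this configuration (steps 4 through 7), the netsh commands below accomplish the same thing. This is a sketch only: the connection name Public Network must match the name you assigned in step 2, and the addresses, mask, gateway, and DNS servers are placeholders.

    REM Static IP address, subnet mask, and default gateway (metric 1)
    netsh interface ip set address name="Public Network" source=static addr=172.20.1.11 mask=255.255.0.0 gateway=172.20.1.1 gwmetric=1

    REM Primary and secondary DNS servers
    netsh interface ip set dns name="Public Network" source=static addr=172.20.1.5
    netsh interface ip add dns name="Public Network" addr=172.20.1.6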

Network Cards Used on the Private Network

To configure a network card for use on a private network, follow these steps:

  1. From your desktop, right-click My Network Places, and select Properties. Under Windows Server 2003, you might need to enable the Classic Start menu view from the Properties menu of the taskbar to see this.

  2. In the Network And Dial-up Connections (Windows 2000 Server) or Network Connections (Windows Server 2003) window, select the network card. This network card is located only on the private network on the approved subnets. Rename this to something recognizable and usable, such as Private Network, by selecting it, right-clicking, and selecting Rename. This is the same value on all nodes.

  3. Select the Private Network network card, right-click, and select Properties.

  4. Make sure that Client For Microsoft Networks, Network Load Balancing, File And Printer Sharing For Microsoft Networks, and any other options are not selected, as shown in Figure 5-13.

    Figure 5-13: The General tab of the Properties dialog box for a network card.

  5. Select Internet Protocol (TCP/IP) and click Properties. Set the static IP address of the card to a valid IP address on the private network, within one of the approved private address blocks. This address is different for each node of the server cluster. These addresses will all be on the same subnet, but a different subnet from the public network. Click OK.

  6. Make sure the subnet mask is correct.

  7. Do not enter a default gateway.

  8. Do not enter any DNS servers.

  9. Click Advanced.

  10. Select the WINS tab of the Advanced TCP/IP Settings dialog box, shown in Figure 5-14, and select Disable NetBIOS Over TCP/IP if you are not on an MNS cluster. Click OK.

    Figure 5-14: The WINS tab of the Advanced TCP/IP Settings dialog box.

  11. Click OK to return to the Private Network Properties dialog box. Click Configure.

  12. Click Advanced. In the Properties dialog box for the network card, select the Advanced tab.

  13. For the External PHY property, as shown in Figure 5-12, set the value for the correct network speed. Set this to be the same on each node. Click OK.

  14. Click OK to close the Private Network Properties dialog box.

  15. If you are on a Windows 2000 server, add the following registry key and its associated values only if you are using a crossover cable:

     HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
     Value Name: DisableDHCPMediaSense
     Data Type: REG_DWORD
     Data: 1
    Warning

    Do not perform this step if you are using Windows Server 2003.

    Repeat this procedure for each node in the server cluster and for each private network.
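
If the Reg.exe tool is available (it is built into Windows Server 2003 and ships with the Windows 2000 Support Tools), the registry entry from step 15 can also be added from a command prompt instead of through Registry Editor. As with step 15, this applies only to Windows 2000 nodes that use a crossover cable; do not run it on Windows Server 2003.

    REM Windows 2000 with a crossover cable only: disable Media Sense for TCP/IP
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v DisableDHCPMediaSense /t REG_DWORD /d 1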

Changing Network Priority

You need to configure your networks so that they have the right priority and one will not impede the other. At the server level, the public networks should have the priority. You can configure the network by following these steps:

  1. On your desktop, right-click My Network Places and select Properties. If you are using Windows Server 2003, you might need to enable the Classic Start menu view from the Properties menu on the taskbar to see this.

  2. From the Advanced menu, select Advanced Settings to open the Advanced Settings dialog box, shown in Figure 5-15.

  3. All public, or externally faced, networks should have priority over the private ones. If this is not the case, set the proper order and click OK. If you have multiple networks, set them in the proper order. This order is the same on all nodes.

    Figure 5-15: Advanced Settings dialog box.

  4. Close the Network Connections (Windows Server 2003) or Network And Dial-Up Connections (Windows 2000 Server) window.

  5. Repeat this procedure for each node of the cluster.

Verifying Your Network Connectivity

To verify that the private and public networks are communicating properly prior to installing your server cluster, perform the following steps. It is imperative to know the IP address for each network adapter in the cluster, as well as for the IP cluster itself.

Verifying Connectivity and Name Resolution from a Server Node

This method shows how to check both IP connectivity and name resolution at the server level.

  1. On a node, from the Start menu, click Run, and then type cmd in the text box. Click OK.

  2. Type ping ipaddress where ipaddress is the IP address of another node in your server cluster configuration. This must be done for both the public and private networks. Repeat for every other node, including the node you are on.

  3. Type ping servername where servername is the name of another node in your server cluster configuration. Repeat for every other node, including the node you are on.

  4. Repeat Steps 1 through 3 on each node.
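
For example, on a two-node cluster the checks from node 1 might look like the following. The node names and addresses are placeholders taken from the sample addressing plan earlier in this chapter.

    REM Public address, private address, and name resolution for node 2
    ping 172.20.1.12
    ping 10.10.10.2
    ping CLUSNODE2

    REM Do not forget the local node as well
    ping 172.20.1.11
    ping 10.10.10.1
    ping CLUSNODE1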

Verifying Connectivity and Name Resolution from a Client or Other Server

This method shows how to check both IP connectivity and name resolution at the client or other servers that will access the server cluster.

  1. On a client computer, from the Start menu, click Run, and then type cmd in the text box. Click OK.

  2. Type ping ipaddress where ipaddress is the public IP address of one of the nodes in your server cluster configuration. If there is more than one public IP address, you must repeat this step.

  3. Type ping servername where servername is the name of one of the nodes in your server cluster configuration.

  4. Repeat Steps 2 and 3 for each node.

Creating the Shared Disks

Prior to installing your server cluster, you should also configure all disks that will be used in the cluster up front to minimize downtime later should you need to add a disk. Mountpoints do not get you around the main drive letter limitation, but they give you the ability to add a disk without a drive letter to an existing disk without interrupting the availability of a resource such as SQL Server.

Important

When configuring your server clusters, always configure the first node and the shared disks before you power on the other nodes and allow the operating system to start. You do not want more than one node to access the shared disk array prior to the first node being configured.

Also, for 64-bit editions of Windows Server 2003, the shared cluster disks must not only be basic disks, but they must be partitioned as master boot record (MBR) and not GUID partition table (GPT) disks.

More Info

For more information on using mountpoints with SQL Server 2000, see Chapter 6, Microsoft SQL Server 2000 Failover Clustering.

Creating Basic Disks

To create a basic disk, follow these steps:

  1. Start Computer Management from the Administrative Tools menu.

  2. Select Disk Management.

  3. Select the disk you want to use, right-click, and select New Partition.

  4. Click Next in the Welcome To The New Partition Wizard page.

  5. In the Select Partition Type wizard page, classify this as either a primary partition or an extended partition. Click Next.

  6. In the Specify Partition Size wizard page, enter the size (in megabytes) of the partition. You do not have to use the entire disk, but if you do not, all partitions created are presented as one disk to a server cluster. Click Next.

  7. In the Assign Drive Letter Or Path wizard page, select a drive letter from the Assign The Following Drive Letter drop-down list. Click Next.

  8. In the Format Partition wizard page, select NTFS from the File System drop-down list, define the Allocation Unit Size (which should be 64K for SQL Server disks that will contain data), and specify a name for the Volume. Click Next.

  9. If the disk is not a basic disk, convert it to a basic disk (basic is the default disk type under Windows Server 2003).
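
The same partition can be created and formatted from the command line with Diskpart.exe and Format.com, which is handy when you are building several shared disks. This is a sketch only: disk 2, drive letter S, and the volume label SQLData are placeholders, and Diskpart is built into Windows Server 2003 (for Windows 2000, use Disk Management or the downloadable Diskpart tool).

    REM Inside diskpart (run "diskpart" first):
    REM   list disk
    REM   select disk 2
    REM   create partition primary
    REM   assign letter=S
    REM   exit

    REM Format with a 64-KB allocation unit for SQL Server data disks
    format S: /FS:NTFS /A:64K /V:SQLData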

Creating a Volume Mountpoint

To create a volume mountpoint, follow these steps:

  1. Start Cluster Administrator and pause all nodes of the cluster.

    Important

    You will experience a brief bit of downtime when configuring the mountpoint, but it will not be as bad as adding a new disk.

  2. Start Computer Management from the Administrative Tools menu.

  3. Select Disk Management.

  4. Select the disk you want to use, right-click, and select New Partition.

  5. Click Next in the Welcome To The New Partition Wizard page.

  6. In the Select Partition Type wizard page, classify this as either a primary partition or an extended partition. Click Next.

  7. In the Specify Partition Size wizard page, enter the size (in megabytes) of the partition. You do not have to use the entire disk, but if you do not, all partitions created are presented as one disk to a server cluster. Click Next.

  8. In the Assign Drive Letter Or Path wizard page, do not assign a drive letter. Click Next.

  9. In the Format Partition wizard page, select NTFS from the File System drop-down list, define the Allocation Unit Size (which should be 64K for SQL Server disks that will contain data), and specify a name for the Volume. Click Next.

  10. If the disk is not a basic disk, convert it to a basic disk (basic is the default disk type under Windows Server 2003).

  11. On the disk you want to use as the root of your mountpoint, create a blank folder.

  12. In the Disk Management window, select the new volume that you created, right-click, and select Change Drive Letter And Paths. Click Add.

  13. In the Add Drive Letter Or Path dialog box, select the Mount In The Following Empty NTFS Folder option, and select the folder created in Step 11 (or you can create the folder here). Click OK. The change will be reflected in the Change Drive Letter And Path dialog box.

  14. Click OK. Close the Computer Management console.

    Note

    Steps 15 and 16 refer to specific cluster steps to be done after your server cluster is configured, and they are listed here for completeness.

  15. Using Cluster Administrator, in the resource group with the disk that has the root folder, create a new disk resource (see the section Adding a New Disk later in this chapter). During the process, you must add the dependency of the root disk to the disk that will serve as a mountpoint.

  16. Unpause all nodes in Cluster Administrator and test failover for the new disk resource.

    Important

    Do not create a mountpoint on a drive that is not part of the shared disk array. Also, do not use the quorum disk or the disk for MS DTC. You can use mountpoints to expand these drives, however.
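
Steps 11 through 13 can also be performed from the command line with Mountvol.exe. The sketch below assumes S is the root drive, S:\SQLMount\Data1 is the empty folder, and the volume GUID is taken from the output of running mountvol with no arguments; all of these are placeholders.

    REM List volume GUIDs and current mount points
    mountvol

    REM Create the empty folder on the root (shared) disk, then mount the new volume there
    md S:\SQLMount\Data1
    mountvol S:\SQLMount\Data1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\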

Installing the Server Cluster

Once you have completed the preconfiguration tasks, you are ready to install the server cluster. The process under Windows 2000 Server differs from the process under Windows Server 2003.

Under Windows 2000 Server, you have two options to configure the server cluster: using the Cluster Service Configuration Wizard or the command line. Under Windows Server 2003, you have three options to configure your server cluster: through the GUI, using the command line, and an unattended installation.

On the CD

For instructions on configuring a server cluster, see the document Server_Cluster_Installation_Instructions.doc.

Important

When configuring your server clusters, always configure the first node before powering on the other nodes and allowing the operating system to start. A Windows 2000 server cluster is dependent on the IIS Common Files being installed on each node, either during the operating system installation or later by adding the components through Add/Remove Windows Components. Do not attempt to copy these DLLs from another server into the right locations, as having the DLLs alone is not enough. For informational purposes only, here is a list of the DLLs installed as part of the IIS Common Files:

%systemroot%\Admwprox.dll
%systemroot%\Adsiis.dll
%systemroot%\Exstrace.dll
%systemroot%\Iisclex4.dll
%systemroot%\Iisext.dll
%systemroot%\Iismap.dll
%systemroot%\IisRtl.dll
%systemroot%\Inetsloc.dll
%systemroot%\Infoadmn.dll
%systemroot%\Wamregps.dll
%systemroot%\System32\Inetsrv\Coadmin.dll
%systemroot%\System32\Inetsrv\Isatq.dll
%systemroot%\System32\Inetsrv\Iisui.dll
%systemroot%\System32\Inetsrv\Logui.ocx

Postconfiguration Tasks

Once you have installed your server cluster, there are a few things that you must do prior to installing any applications like SQL Server.

On the CD

For a useful checklist to use when performing these tasks and for auditing capabilities, use the document Server_Cluster_Post-Installation_Checklist.doc.

Important

There are some post-Windows 2000 Service Pack 3 hotfixes that you should apply to your nodes that are important for server clusters. As of the writing of this book, the following list is accurate. Please check to see if there are any additional hotfixes for Windows 2000 or any for Windows Server 2003 before you configure your server clusters.

Information about the hotfixes can be found in the following Knowledge Base articles:

  • 325040: Windows 2000: Drive Letter Changes After You Restart Your Computer

  • 323233: Clusres.dll Does Not Make File System Dismount When IsAlive Fails

  • 307939: Disks Discovered Without the Cluster Service Running Are Not Protected

  • 815616: Clustered Disk Drive Letter Unexpectedly Changes

  • 326891: The Clusdisk.sys Driver Does Not Permit Disks to Be Removed by Plug and Play

There is one other fix that should not be applied unless you have spoken with the storage hardware vendor for your SAN or shared disk array and he or she indicates that the fix is valid and required. You can find more information about this in the following Knowledge Base article:

  • 332023: Slow Disk Performance When Write Caching Is Enabled

Configuring Network Priorities

Besides setting the network priorities at the server level, you need to set them in Cluster Administrator for the cluster itself. Within the cluster, the private network should have the highest priority. Follow these steps to configure network priorities:

  1. Start Cluster Administrator. Select the name of the cluster, right-click, and select Properties, or you can select Properties from the File menu.

  2. Select the Network Priority tab, shown in Figure 5-16. Make sure that the private heartbeat network has priority over any public network. If you have multiple private and public networks, set the appropriate order.

    Figure 5-16: Network Priority tab.

  3. Click OK. Close Cluster Administrator.

Enabling Kerberos

If you are going to be using Kerberos with your cluster, ensure that the Cluster Service account has the appropriate permissions. Then perform the following steps:

  1. Start Cluster Administrator.

  2. Select the Groups tab, and select the Cluster Group resource group. In the right pane, select Cluster Name, right-click, and select Take Offline.

  3. Once Cluster Name is offline, right-click it and select Properties.

  4. On the Parameters tab, select the Enable Kerberos Authentication check box, and click OK.

  5. Bring the Cluster Name resource back online.
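
The same change can be made with Cluster.exe. On Windows Server 2003 the Enable Kerberos Authentication check box corresponds to a private property of the Network Name resource, which I am assuming here is named RequireKerberos; confirm the property name and exact syntax by listing the private properties first.

    REM List the private properties of the cluster Network Name resource
    cluster res "Cluster Name" /priv

    REM Take the resource offline, enable Kerberos, and bring it back online
    cluster res "Cluster Name" /offline
    cluster res "Cluster Name" /priv RequireKerberos=1
    cluster res "Cluster Name" /online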

Changing the Size of the Cluster Log

The cluster log size defaults to 8 MB, which is not very big. If the cluster log becomes full, the Cluster Service overwrites the first half of the log with the second half, so you can only guarantee that half of your cluster log is valid. To prevent this situation, create a system environment variable named ClusterlogSize by following these steps. Make the value large enough to ensure the validity of the cluster log in your environment.

More Info

For more information, consult Knowledge Base article 168801, How to Turn On Cluster Logging in Microsoft Cluster Server.

  1. From Control Panel, select System (or right-click My Computer).

  2. Select the Advanced tab, and click Environment Variables.

  3. In the Environment Variables dialog box under System Variables, click New.

  4. In the New System Variable dialog box, type ClusterlogSize for the Variable Name, and for Variable Value, enter the desired log size in MB.

  5. Click OK. The new environment variable will be displayed.

  6. Click OK two more times to exit.

  7. Reboot the cluster nodes one by one so that the change takes effect. Verify that the Cluster.log file is the appropriate size after the final reboot.

Configuring MS DTC

Microsoft Distributed Transaction Coordinator (MS DTC) is used by SQL Server and other applications. For a SQL virtual server to use MS DTC, it must also be clustered. Its configuration will vary depending on your needs. You cannot use a remote MS DTC; you must configure an MS DTC resource for your server cluster.

Note

MS DTC is shared for all resources in the cluster. If you are implementing multiple SQL Server instances, they will all use the same MS DTC resource.

MS DTC does require disk space; in general, the size used should be roughly 500 MB. However, some applications might have specific requirements. For example, Microsoft Operations Manager recommends a minimum MS DTC size of 512 MB. You obviously need to consider this in your overall disk subsystem planning.

Creating MS DTC on Windows 2000 Server

Under Windows 2000 Server, there are two schools of thought when it comes to designing a server cluster when MS DTC is used:

  • Use the default configuration, which configures MS DTC to use the quorum drive. This is the most popular and most often recommended solution for use with SQL Server. If MS DTC is going to utilize the quorum, be sure there is enough room for the cluster files. You should also set the MS DTC resource so that it does not affect the group in the event of a failure.

  • Plan in advance and create a separate cluster disk dedicated to MS DTC. This might reduce contention on the quorum drive if MS DTC is being used in the cluster, but it might also mean that not enough drives will be available for the SQL Server instances. It also involves a few more steps in configuring the cluster. For example, a clustered Microsoft BizTalk Server configuration requires that MS DTC be placed on a separate drive and in a separate cluster group.

For a default installation of MS DTC using the quorum, perform the following steps:

  1. From the Start menu, choose Run, and type comclust.exe.

  2. Repeat this on each cluster node.

If you want to create MS DTC in its own group, or you need to move it after creating it in the default location, follow the steps given next. Do not place it in the group with SQL Server or make it dependent on any of its resources. You should create a new group (or use an existing one with an unused disk) with its own dedicated IP address, network name, and disk for MS DTC, move the DTC resource into the new group, and add the network name and the disk resource as dependencies of the MS DTC resource.

Important

Make sure no users or applications are connected to your cluster while you are performing this procedure.

  1. Start Cluster Administrator. Select the group that has the dedicated disk you will use for MS DTC and rename the group appropriately.

  2. If you already have a clustered MS DTC in a group (such as the one containing the quorum), delete the existing MS DTC resource. If you have not yet configured MS DTC, you can skip this step.

  3. From the File menu, select New, and then Resource (or right-click the group and select the same options). In the New Resource dialog box, in the Name box, type an appropriate name, such as MS DTC IP Address; in the Resource Type drop-down list, select IP Address. In the Group drop-down list, make sure the right group is selected. Click Next.

  4. In the Possible Owner dialog box, all nodes of the cluster should appear as possible owners. If they do not, add the nodes and then click Next.

  5. In the Dependencies dialog box, select the disk resource in the group you selected from the Available Resources, and then click Add. The disk resource appears in the Resource Dependencies list. Click Next.

  6. In the TCP/IP Address Parameters dialog box, enter the TCP/IP information. In the Address text box, enter the static IP address that will be used with MS DTC. In the Subnet Mask text box, enter the IP subnet if it is not automatically chosen for you. In the Network To Use text box, select the public cluster network you want to use. Click Finish.

  7. You will see a message confirming that the IP address is successfully configured.

  8. In the Cluster Administrator window, the newly created resource appears in the right pane.

  9. From the File menu, select New, and then Resource (or right-click the group and select the same options). In the New Resource dialog box, in the Name text box, type an appropriate name such as MS DTC Network Name. In the Resource Type drop-down list, select Network Name. In the Group drop-down list, make sure the proper group is selected. Click Next.

  10. In the Possible Owner dialog box, all nodes of the cluster should appear as possible owners. If they do not, add the nodes, and click Next.

  11. In the Dependencies dialog box, the MS DTC IP address resource you configured previously appears in the Available Resources list. Select the resource, and then click Add. The resource appears in the Resource Dependencies list. Click Next.

  12. In the Network Name Parameters dialog box, type MSDTC, and then click Finish.

  13. You will see a message confirming that the Network Name resource is successfully configured.

  14. In the Cluster Administrator window, the newly created resource appears in the right pane.

  15. From the File menu, select New, and then Resource (or right-click the group and select the same options). In the New Resource dialog box, in the Name text box, type an appropriate name such as MS DTC. In the Resource Type drop-down list, select Distributed Transaction Coordinator. In the Group drop-down list, make sure the proper group is selected. Click Next.

  16. In the Possible Owner dialog box, all nodes of the cluster should appear as possible owners. If they do not, add the nodes, and click Next.

  17. In the Dependencies dialog box, the MS DTC IP address and network name resources you configured previously appear in the Available Resources list. Select both, and then click Add. The resource appears in the Resource Dependencies list. Click Next.

  18. Click Finish to complete the creation of the Distributed Transaction Coordinator resource.

  19. You will see a message confirming that the Distributed Transaction Coordinator resource is successfully configured.

  20. In the Cluster Administrator window, the newly created resource appears in the right pane.

  21. On each node, rerun Comclust.exe.

  22. On the node that currently owns the MS DTC disk resource, you must reset the log. At a command prompt, type msdtc -resetlog.

  23. To start the new resources, which are all offline, right-click each one, and then click Bring Online.

Creating MS DTC on Windows Server 2003

With Windows Server 2003, the process is completely different: you can no longer run Comclust.exe. Instead, it resembles the second procedure detailed under Windows 2000 Server. You must manually create an IP address, network name, and Distributed Transaction Coordinator resource, following these steps:

Important

When configuring MS DTC, do not use the group containing a disk with the quorum or any of the ones planned for use with SQL Server.

  1. Start Cluster Administrator. Select the group that has the dedicated disk you will use for MS DTC and rename the group appropriately.

  2. From the File menu, select New, and then Resource (or right-click the group and select the same options). In the New Resource dialog box, in the Name text box, type an appropriate name, such as MS DTC IP Address. In the Resource Type drop-down list, select IP Address. In the Group drop-down list, make sure the right group is selected. Click Next.

  3. In the Possible Owner dialog box, all nodes of the cluster should appear as possible owners. If they do not, add the nodes, and click Next.

  4. In the Dependencies dialog box, select the disk resource in the group you selected from the Available Resources, and then click Add. The disk resource appears in the Resource Dependencies list. Click Next.

  5. In the TCP/IP Address Parameters dialog box, enter the TCP/IP information. In the Address text box, enter the static IP address that will be used with MS DTC. In the Subnet Mask text box, enter the IP subnet if it is not automatically chosen for you. In the Network To Use list box, select the public cluster network you want to use. Click Finish.

  6. You will see a message confirming that the IP address is successfully configured.

  7. In the Cluster Administrator window, the newly created resource appears in the right pane.

  8. From the File menu, select New, and then Resource (or right-click the group and select the same options). In the New Resource dialog box, in the Name text box, type an appropriate name such as MS DTC Network Name. In the Resource Type drop-down list, select Network Name. In the Group drop-down list, make sure the proper group is selected. Click Next.

  9. In the Possible Owner dialog box, all nodes of the cluster should appear as possible owners. If they do not, add the nodes, and click Next.

  10. In the Dependencies dialog box, the MS DTC IP address resource you configured previously appears in the Available Resources list. Select the resource, and then click Add. The resource appears in the Resource Dependencies list. Click Next.

  11. In the Network Name Parameters dialog box, type MSDTC, and then click Finish.

  12. You will see a message confirming that the Network Name resource is successfully configured.

  13. In the Cluster Administrator window, the newly created resource appears in the right pane.

  14. From the File menu, select New, and then Resource (or right-click the group and select the same options). In the New Resource dialog box, in the Name text box, type an appropriate name such as MS DTC. In the Resource Type drop-down list, select Distributed Transaction Coordinator. In the Group drop-down list, make sure the proper group is selected. Click Next.

  15. In the Possible Owner dialog box, all nodes of the cluster should appear as possible owners. If they do not, add the nodes, and click Next.

  16. In the Dependencies dialog box, the MS DTC IP address and network name resources you configured previously appear in the Available Resources list. Select both, and then click Add. The resource appears in the Resource Dependencies list. Click Next.

  17. Click Finish to complete the creation of the Distributed Transaction Coordinator resource.

  18. You will see a message confirming that the Distributed Transaction Coordinator resource is successfully configured.

  19. In the Cluster Administrator window, the newly created resource appears in the right pane.

  20. To start the new resources, which are all offline, right-click each one, and then click Bring Online.
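
If you prefer to script the Windows Server 2003 procedure, Cluster.exe can create the same resources. The sketch below assumes the group is named MS DTC Group, the dedicated disk resource is named Disk S:, and 172.20.1.21 is the MS DTC IP address; all of these are placeholders, and you should verify the exact option syntax with cluster res /? on your build before running it.

    REM IP Address resource, dependent on the dedicated disk
    cluster res "MS DTC IP Address" /create /group:"MS DTC Group" /type:"IP Address"
    cluster res "MS DTC IP Address" /priv Address=172.20.1.21
    cluster res "MS DTC IP Address" /priv SubnetMask=255.255.0.0
    cluster res "MS DTC IP Address" /priv Network="Public Network"
    cluster res "MS DTC IP Address" /adddep:"Disk S:"

    REM Network Name resource, dependent on the IP address
    cluster res "MS DTC Network Name" /create /group:"MS DTC Group" /type:"Network Name"
    cluster res "MS DTC Network Name" /priv Name=MSDTC
    cluster res "MS DTC Network Name" /adddep:"MS DTC IP Address"

    REM Distributed Transaction Coordinator resource, dependent on the IP address and network name
    cluster res "MS DTC" /create /group:"MS DTC Group" /type:"Distributed Transaction Coordinator"
    cluster res "MS DTC" /adddep:"MS DTC IP Address"
    cluster res "MS DTC" /adddep:"MS DTC Network Name"

    REM Bring the group online
    cluster group "MS DTC Group" /online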

Verifying Your Server Cluster Installation

Once you have configured your base server cluster, you need to test it to ensure that it is configured and working properly.

Verifying Connectivity and Name Resolution

To verify that the private and public networks are communicating properly, perform the following steps. It is imperative to know the IP address for each network adapter in the cluster, as well as for the IP cluster itself.

  1. On a node, from the Start menu, click Run, and then type cmd in the text box. Click OK.

  2. Type ping serverclusteripaddress where serverclusteripaddress is the IP address for the server cluster you just configured.

  3. Type ping serverclustername where serverclustername is the network name for the server cluster you just configured.

  4. Repeat Steps 2 and 3 for each node.

  5. Repeat Steps 2 and 3 for representative servers or client machines that will need access to the server cluster.

Failover Validation

You need to ensure that all nodes can own the cluster resource groups that were created. To do this, follow these steps:

  1. Start Cluster Administrator.

  2. Verify that all nodes configured for the failover cluster appear in the bottom of the left pane of Cluster Administrator.

  3. For each cluster group, make sure it can be failed over and back from all nodes in the server cluster. To do this, right-click the group, and select Move. If the server cluster has more than two nodes, you must also select the destination node. This change will be reflected in Cluster Administrator.
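
The same validation can be scripted with Cluster.exe; the group and node names below are placeholders for your own.

    REM Show all groups and their current owners
    cluster group

    REM Move a group to another node and then back again
    cluster group "Cluster Group" /moveto:CLUSNODE2
    cluster group "Cluster Group" /moveto:CLUSNODE1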



