Getting Prepared for Clustering


When it comes to SQL Server 2005 clustering, the devil is in the details. If you are not familiar with this phrase, you will be by the time you complete your first SQL Server 2005 cluster installation. If you take the time to ensure that every step is done correctly and in the right order, your cluster installation will be smooth and relatively quick. But if you don't like to read instructions, preferring the trial-and-error approach to computer administration, expect a lot of frustration and a lot of time spent installing and reinstalling your SQL Server 2005 cluster; not paying attention to the details will bite you over and over again.

The best way, we have found, to ensure a smooth cluster installation is to create a very detailed, step-by-step plan for the installation, down to the screen level. Yes, this is boring and tedious, but doing so will force you to think through every option and how it will affect your installation and your organization (once it is in production). In addition, such a plan will come in handy the next time you build a cluster, and will also be great documentation for your disaster recovery plan. You do have a disaster recovery plan, don't you?

Preparing the Infrastructure

Before you even begin building a SQL Server 2005 cluster, you must ensure that your network infrastructure is in place. Here's a checklist of everything required before you begin installing a SQL Server 2005 cluster. In many cases, these items are the responsibility of others on your IT staff, but it is your responsibility to ensure that all of these are in place before you begin building your SQL Server 2005 cluster.

  • Your network must have at least one Active Directory server and ideally two for redundancy.

  • Your network must have at least one DNS server and ideally two for redundancy.

  • Your network must have available switch ports for the public network cards used by the nodes of the cluster. Be sure the switch ports are manually configured to match the manually configured speed and duplex settings of the network cards in the cluster nodes. In addition, all the nodes of a cluster must be on the same subnet.

  • You will need to secure IP addresses for all the public network cards.

  • You must decide how you will configure the private heartbeat network. Will you use a direct network card to network card connection, or use a hub or switch?

  • You will need to secure IP addresses for the private network cards. Generally, use a private network subnet such as 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, or 192.168.0.0 - 192.168.255.255. Remember, this is a private network seen only by the nodes of the cluster. (A sample configuration appears after this list.)

  • Ensure that you have proper electrical power for the new cluster servers and shared array (assuming they are being newly added to your data center).

  • Ensure that there is battery backup power available for all the nodes in your cluster and your shared array.

  • If you don't already have one, create a SQL Server service account to be used by the SQL Server services running on the cluster. This must be a domain account, with the password set to never expire.

  • If you don't already have one, create a cluster service account to be used by the Windows Clustering service. This must be a domain account, with the password set to never expire.

  • Create three domain global groups, one each for the SQL Server service, the SQL Server Agent service, and the Full-Text Search service. You will need these when you install SQL Server 2005 on a cluster. (Sample commands for creating the service accounts and these groups appear after this list.)

  • Determine a name for your virtual cluster (the name used by the Windows Clustering service itself) and secure a virtual IP address for it.

  • Determine a name for your virtual SQL Server 2005 cluster and secure a virtual IP address for it.

  • If you are using a Smart UPS for any node of the cluster, remove it before installing Cluster Services; then re-add it.

  • If your server nodes have APM/ACPI power-saving features, turn them off. This includes power-saving settings for network cards, drives, and other components. If these features kick in, they can cause an unexpected failover.
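As an illustration of the private heartbeat setup described earlier in this list, here is a minimal command-line sketch. It assumes the heartbeat connection has already been renamed "private" (as suggested later in the hardware preparation steps) and uses example addresses from the 10.0.0.0 range; substitute your own connection name and addresses.

    rem On node 1, assign a static address to the private heartbeat card.
    rem No default gateway is set on the heartbeat network.
    netsh interface ip set address name="private" source=static addr=10.0.0.1 mask=255.0.0.0

    rem On node 2, repeat with the next address in the same subnet, for example:
    rem netsh interface ip set address name="private" source=static addr=10.0.0.2 mask=255.0.0.0

    rem From node 1, confirm that the heartbeat path is working.
    ping -n 4 10.0.0.2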

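The service accounts and global groups described in this list can also be created from the command line on a machine with the Windows Server 2003 Active Directory tools installed. The following is only a sketch: the account names (sqlsvc, clussvc), the group names, and the DC=yourdomain,DC=com path are placeholders, and your organization's naming and password policies take precedence.

    rem Create the SQL Server and cluster service accounts (-pwd * prompts for each password).
    dsadd user "CN=sqlsvc,CN=Users,DC=yourdomain,DC=com" -samid sqlsvc -pwd * -pwdneverexpires yes -disabled no
    dsadd user "CN=clussvc,CN=Users,DC=yourdomain,DC=com" -samid clussvc -pwd * -pwdneverexpires yes -disabled no

    rem Create the three global security groups used during SQL Server 2005 setup.
    dsadd group "CN=SQLServerService,CN=Users,DC=yourdomain,DC=com" -secgrp yes -scope g
    dsadd group "CN=SQLServerAgentService,CN=Users,DC=yourdomain,DC=com" -secgrp yes -scope g
    dsadd group "CN=SQLFullTextService,CN=Users,DC=yourdomain,DC=com" -secgrp yes -scope g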
We will talk more about most of these items as we go over the installation process. We have included the list here so that you understand which steps need to be completed before you actually begin a cluster install.

Preparing the Hardware

Based on our experience building clusters, the hardware presents the thorniest problems, often taking the most time to research and configure. Part of the reason for this is that there are many hardware options, some of which work, and others that don't. Unfortunately, there is no complete resource you can use to help you sort through this. Each vendor offers different hardware, and the available hardware is always changing, along with new and updated hardware drivers, making this entire subject a moving target with no easy answers. In spite of all this, here is what you need to know to get started on selecting the proper hardware for your SQL Server 2005 cluster.

Finding Your Way Through the Hardware Jungle

Essentially, here's the hardware you need for a SQL Server cluster. To keep things simple, we will refer only to a two-node active/passive cluster, although these same recommendations apply to multinode clusters. The following are my personal minimum recommendations; if you check Microsoft's minimum hardware requirements for a SQL Server 2005 cluster, you will find they are somewhat lower. I also highly suggest that each node in your cluster be identical, which can save a lot of installation and administrative headaches.

The specifications for the Server Nodes should be the following:

  • Dual CPUs, 2 GHz or higher, 2MB L2 Cache (32-bit or 64-bit)

  • 1GB or more RAM

  • Local mirrored SCSI drive (C:), 9GB or larger

  • SCSI DVD drive

  • SCSI connection for local SCSI drive and DVD drive

  • SCSI or Fiber connection to shared array or SAN

  • Redundant power supplies

  • Private network card

  • Public network card

  • Mouse, keyboard, and monitor (can be shared)

For the shared array, you have three options: a SCSI-attached RAID 5 or RAID 10 array with an appropriate high-speed SCSI connection; a fiber-attached RAID 5 or RAID 10 array with an appropriate high-speed connection; or a fiber-attached SAN storage array with an appropriate high-speed connection (generally a fiber switch). Keep in mind that Microsoft Clustering supports SCSI-attached storage only for two-node clusters; if you want to cluster more than two nodes, you must use a fiber-attached disk array or SAN.

Because this chapter is on SQL Server 2005 clustering, not hardware, we won't spend much time on hardware specifics. If you are new to clustering, I would suggest you contact your hardware vendor for specific hardware recommendations. Keep in mind that you will be running SQL Server 2005 on this cluster, so ensure that whatever hardware you select meets the needs of your predicted production load.

The Hardware Compatibility List

Whether you select your own hardware or get recommendations from a vendor, it is critical that the hardware you select appear in the Cluster Solutions section of the Microsoft Hardware Compatibility List (HCL), which can be found at http://www.microsoft.com/whdc/hcl/default.mspx.

As you probably already know, Microsoft lists in the HCL all of the hardware that is certified to run its products. If you are not building a cluster, you can pick and choose almost any combination of certified hardware from multiple vendors and know that it will work with Windows Server 2003. This is not the case with clustering. If you look at the Cluster Solutions section of the HCL, you will notice that entire systems, not individual components, are certified. In other words, you can't just pick and choose individually certified components and know that they will work together. Instead, you must select from approved cluster systems, which include the nodes and the shared array. In some ways, this reduces the variety of hardware you can choose from; on the other hand, by selecting only approved cluster systems, you can be assured the hardware will work in your cluster. And if you need another reason to select only an approved cluster system, Microsoft will not support a cluster that does not run on one.

In most cases, you will find your preferred hardware listed as an approved system. But, as you can imagine, the HCL is always a little behind, and newly released systems may not be on the list yet. So what do you do if the system you want is not currently on the HCL? Do you select an older, but tested and approved, system, or do you take a risk and purchase a system that has not yet been tested and officially approved? This is a tough call. What we have done in the past, when confronted with this situation, is require the vendor to guarantee that the hardware will be certified by Microsoft at some point in the future, and that if the hardware is not approved as promised, the vendor will correct the problem by replacing the unapproved hardware with approved hardware at its own cost. We have done this several times, and it has worked out fine so far.

Preparing the Hardware

As a DBA, you may or may not be the one who installs the hardware. In any case, here are the general steps most people follow when building cluster hardware:

  1. Install and configure the hardware for each node in the cluster as if it will be running as a stand-alone server. This includes installing the latest approved drivers.

  2. Once the hardware is installed, install the operating system and latest service pack, along with any additional required drivers.

  3. Connect the node to the public network. To make things easy, name the network connection used for public traffic "network." (Sample commands for renaming both the public and private connections appear after these steps.)

  4. Install the private heartbeat network. To make things easy, name the private heartbeat network "private."

  5. Install and configure the shared array or SAN.

  6. Install and configure the SCSI or fiber cards in each of the nodes and install the latest drivers.

  7. One at a time, connect each node to the shared array or SAN following the instructions for your specific hardware. It is critical that you do this one node at a time: only one node should be physically turned on, connected to the shared array or SAN, and configured at any given moment. Once that node is configured, turn it off, turn the next node on, and configure it, and so on, one node at a time. If you do not follow this procedure, you risk corrupting the disk configuration on your nodes, requiring you to start over again.

  8. After connecting each node to the shared array or SAN, use Disk Administrator to configure and format the drives on the shared array. You will need at least two logical drives on the shared array: one for storing your SQL Server databases, and the other for the Quorum drive. The data drive must be big enough to store all the required data, and the Quorum drive must be at least 500MB (the smallest size at which an NTFS volume operates efficiently). When configuring the shared drives with Disk Administrator, each node of the cluster must use the same drive letter when referring to a given drive on the shared array or SAN. For example, you might assign the data drive the letter "F:" on all the nodes and the Quorum drive the letter "Q:" on all the nodes. (A command-line alternative is sketched after these steps.)

  9. Once all of the hardware is put together, it is critical that it be functioning properly. This means that you need to test, test, and test again, ensuring that there are no problems before you begin installing clustering services. While you may be able to do some diagnostic hardware testing before you install the operating system, you will have to wait until after installing the operating system before you can fully test the hardware.
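If you prefer to rename the network connections from the command line (steps 3 and 4) rather than through the Network Connections folder, a sketch along these lines should work on Windows Server 2003. The default connection names ("Local Area Connection" and "Local Area Connection 2") are assumptions; check which connection is which with ipconfig /all before renaming.

    rem Rename the connection attached to the public LAN.
    netsh interface set interface name="Local Area Connection" newname="network"

    rem Rename the connection used for the private heartbeat.
    netsh interface set interface name="Local Area Connection 2" newname="private"

    rem Confirm the new names and their IP settings.
    ipconfig /all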

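For step 8, here is one possible command-line alternative to Disk Administrator, offered only as a sketch. It assumes the shared array presents the data disk as disk 1 and the Quorum disk as disk 2 (confirm this first with diskpart's list disk command, run only on the single node currently attached to the array) and uses the example drive letters F: and Q: from the step above.

    rem Contents of shareddisks.txt, executed with:  diskpart /s shareddisks.txt
    select disk 1
    create partition primary
    assign letter=F
    select disk 2
    create partition primary
    assign letter=Q

    rem Then format both volumes as NTFS from the command prompt.
    format F: /FS:NTFS /V:SQLData
    format Q: /FS:NTFS /V:Quorum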
Once all of the hardware has been configured and tested, you are ready to install Windows 2003 Clustering, which is our next topic.
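Before you move on to installing Clustering Services, a short scripted sanity check in the spirit of step 9 can save a lot of time later. The names and addresses below (node2, 10.0.0.2, F:, and Q:) are placeholders for your own environment.

    rem Verify the public network path and DNS name resolution to the other node.
    ping -n 4 node2
    nslookup node2

    rem Verify the private heartbeat path.
    ping -n 4 10.0.0.2

    rem On the node currently attached to the shared array, verify the agreed drive letters.
    dir F:\
    dir Q:\

If any of these checks fail, track the problem down now; hardware and network issues are far easier to fix before Clustering Services and SQL Server 2005 are layered on top of them.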


