Using Server Clusters

Minimizing the number of single points of failure in your network environment, as with Network Load Balancing, improves reliability and availability because several servers run in a distributed fashion, performing the same core work. This is also the main goal of a true server cluster, in which multiple interconnected systems function as a single virtual system to maximize reliability and availability.

A server cluster running under Windows Server 2003 Enterprise or Datacenter Edition can be formed from two to eight nodes and can be configured in one of the following ways:

  • Single node server cluster: A cluster configuration made up of a single node, configured with or without external cluster storage devices. When no external cluster storage device is used, the local drive is used as the cluster storage device.

  • Single quorum device server cluster: A cluster configuration made up of two or more nodes, with each node attached to one or more cluster storage devices. The cluster configuration data is stored on a single cluster storage device.

  • Majority node set server cluster: A cluster configuration with two or more nodes, in which each node may or may not be attached to one or more cluster storage devices. The cluster configuration data is stored on multiple disks across the cluster, and the Cluster Service keeps this data consistent across those disks.

Nodes that are part of a cluster configuration are assigned one of the following five states (a simple model of these states appears after the list):

  • DOWN: Assigned when the node is not actively participating in cluster operations.

  • JOINING: Assigned when a node is in the process of becoming an active participant in cluster operations.

  • PAUSED: Assigned when a node is actively engaged in some part of the cluster process but cannot take, or has not taken, ownership of any resource groups.

  • UP: Assigned when a node is actively participating in all cluster operations.

  • UNKNOWN: Assigned when the node's state cannot be determined.
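
The sketch below (illustrative Python, not Cluster Service code) models these five states and a simplified set of transitions between them; the actual Cluster Service state machine has more intermediate steps than this.

from enum import Enum

class NodeState(Enum):
    DOWN = "down"        # not actively participating in cluster operations
    JOINING = "joining"  # becoming an active participant
    PAUSED = "paused"    # active member, but owns no resource groups
    UP = "up"            # actively participating in all operations
    UNKNOWN = "unknown"  # state cannot be determined

# Simplified transitions for illustration only; the real Cluster
# Service state machine has more intermediate steps than this.
ALLOWED = {
    NodeState.DOWN: {NodeState.JOINING},
    NodeState.JOINING: {NodeState.UP, NodeState.DOWN},
    NodeState.UP: {NodeState.PAUSED, NodeState.DOWN, NodeState.UNKNOWN},
    NodeState.PAUSED: {NodeState.UP, NodeState.DOWN},
    NodeState.UNKNOWN: set(NodeState) - {NodeState.UNKNOWN},
}

def transition(current: NodeState, target: NodeState) -> NodeState:
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target

print(transition(NodeState.JOINING, NodeState.UP).name)  # UP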

After you have decided which type of cluster to use, you need to assign an IP address and a network name for the cluster. You must also allocate physical disk space for the local system and for the clustered data (the clustered data often resides on a drive array).

You also need to consider the applications or services being hosted because some applications are cluster aware, meaning they can take full advantage of a clustered environment; others are not and must be specifically configured for use with clusters. In some situations, you might not be able to use a particular application in this type of configuration at all. You therefore need to determine which resources can fail over with the cluster and which will become unavailable after a node failure. You also need to list the service dependencies for every application on the cluster so that the Cluster Service can bring the dependent services online successfully; that way, the application or service can run after a failover.
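
The following sketch shows why that dependency list matters: given a dependency map, a topological sort yields a start order in which every resource comes online after everything it depends on. The resource names here are hypothetical examples, not taken from this text.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map: each resource lists the resources it
# depends on, which must be brought online first after a failover.
dependencies = {
    "SQLServerAgent": {"SQLServer"},
    "SQLServer": {"PhysicalDisk", "NetworkName"},
    "NetworkName": {"IPAddress"},
    "PhysicalDisk": set(),
    "IPAddress": set(),
}

# A topological sort yields a valid start order: every resource comes
# online only after everything it depends on is already online.
print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['PhysicalDisk', 'IPAddress', 'NetworkName', 'SQLServer', 'SQLServerAgent']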

There are also considerations for the server hardware and the network infrastructure supporting the cluster. Microsoft supports only server cluster systems deployed with hardware listed in the Windows Catalog, including hardware components such as hard drives, network interface cards (NICs), and the like. Best practices dictate that the cluster configuration consist of identical storage hardware (drives, arrays, controllers, and so on) on all nodes in the cluster to help prevent compatibility problems and performance differences between nodes.

Implementing Clusters

All the hard drives allocated to the cluster need to be partitioned and formatted before you add the first node to the cluster. All partitions on the cluster drives must be formatted with NTFS, and all partitions on one physical disk are managed as a single resource; this rule applies to the quorum resource as well. That means it doesn't matter how many drive letters are configured for the physical disk; when the resource fails over, all of its drive letters fail over with it.
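
A minimal sketch of that behavior, as a simple Python model rather than actual cluster code: all the drive letters on one physical disk resource change hands together during a failover.

# Illustrative model, not cluster code: all partitions on one physical
# disk form a single resource, so its drive letters fail over together.
class PhysicalDiskResource:
    def __init__(self, name, drive_letters, owner):
        self.name = name
        self.drive_letters = list(drive_letters)  # e.g. ["Q:", "R:"]
        self.owner = owner                        # node currently hosting it

    def fail_over(self, new_owner):
        # Individual drive letters cannot move separately; the whole
        # set changes hands in one operation.
        self.owner = new_owner
        return self.drive_letters

disk = PhysicalDiskResource("Disk1", ["Q:", "R:"], owner="NODE1")
print(disk.fail_over("NODE2"))  # ['Q:', 'R:'] -- all letters moved to NODE2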

Also, you need to be sure the hardware requirements for your hosted applications and services are scaled correctly. The hardware in place must provide the type of system response you require, such as fast disk access from high-RPM drives on a capable controller. You should also optimize the configuration by making sure the paging file is sized correctly (not too large or too small) and resides on a separate local drive in each node to improve overall system performance.
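
As a rough illustration, the sketch below checks a paging file against the common 1.5-times-RAM rule of thumb for servers of this era; the rule and thresholds are assumptions, not guidance from this text, so adjust them to your workload.

# Rough paging-file check using the common 1.5 x RAM rule of thumb
# (an assumption for illustration, not guidance from this text).
def paging_file_ok(ram_mb: int, pagefile_mb: int) -> bool:
    target = int(ram_mb * 1.5)
    # Flag files that are far larger or smaller than the target.
    return 0.75 * target <= pagefile_mb <= 1.5 * target

print(paging_file_ok(ram_mb=4096, pagefile_mb=6144))  # True: exactly 1.5 x RAM
print(paging_file_ok(ram_mb=4096, pagefile_mb=1024))  # False: too small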

Other considerations include the proper type and number of CPUs installed in the system and a sufficient amount of physical RAM.

The quorum resource is a single resource per cluster that performs these tasks (a conceptual sketch follows the list):

  • A quorum resource allows the cluster to maintain its state and configuration independently of individual node failures by storing a constantly updated version of the cluster database and making it available to all nodes in the cluster.

  • Cluster implementations can use one of two types of quorum resources; the type is chosen at the initial build and cannot be changed afterward.
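
A conceptual sketch of the first task, under a deliberately simplified model: the quorum holds the authoritative, versioned copy of the cluster configuration, and a node that was down resynchronizes from it when it rejoins.

# Conceptual model only: the quorum holds the authoritative, versioned
# copy of the cluster configuration that nodes resynchronize from.
class Quorum:
    def __init__(self):
        self.version = 0
        self.config = {}

    def update(self, config):
        self.version += 1
        self.config = dict(config)

class Node:
    def __init__(self, name):
        self.name, self.version, self.config = name, 0, {}

    def resync(self, quorum):
        # A node that was down pulls the constantly updated cluster
        # database when it rejoins, independent of other node failures.
        self.version, self.config = quorum.version, dict(quorum.config)

q = Quorum()
q.update({"cluster_name": "CLUSTER1", "nodes": ["NODE1", "NODE2"]})
rejoining = Node("NODE2")
rejoining.resync(q)
print(rejoining.version, rejoining.config["cluster_name"])  # 1 CLUSTER1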

A quorum-device cluster requires a storage-class resource on a shared SCSI bus or a Fibre Channel solution. The Windows Server 2003 Cluster Service allows only physical disk resources to operate as quorum devices by default, but third-party solutions support other types of storage devices as quorum devices. The quorum disk should be at least 500MB, but you might need to increase this amount, depending on your environment's needs.

A majority node set server cluster, by contrast, does not depend on a single quorum device: as described earlier, each node may or may not be attached to cluster storage, the configuration data is stored on multiple disks across the cluster, and the Cluster Service keeps that data consistent.
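
Because the configuration lives on the nodes themselves, a majority node set cluster stays running only while a majority of its configured nodes are up (the standard majority rule for this cluster type; this text does not spell it out). The quick calculation below shows how many node failures each cluster size can survive under that rule.

# Majority rule: a majority node set cluster keeps running only while
# a majority of its configured nodes remain up.
def cluster_has_quorum(total_nodes: int, nodes_up: int) -> bool:
    return nodes_up >= total_nodes // 2 + 1

for total in (2, 3, 4, 5):
    survivable = total - (total // 2 + 1)  # failures the cluster tolerates
    print(f"{total} nodes: survives {survivable} failure(s)")
# 2 nodes: 0, 3 nodes: 1, 4 nodes: 1, 5 nodes: 2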

Other concerns that need to be addressed for your cluster setup involve the design of the network where your cluster resides. All network adapters in each cluster node need to be bound to TCP/IP. Best practices also recommend that client-side network adapters, called public network adapters, have NetBIOS over TCP/IP enabled so that clients can browse to a virtual server by name. (NetBIOS should be disabled on the cluster-only side.) You should also configure NICs with specific settings rather than leaving them on "auto-configure": if the network is half-duplex, force-set the NICs to half-duplex, and set flow control and media type to the same values on each adapter.
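
A small illustrative check of that practice, using hypothetical adapter data: flag any NIC setting left on "auto" and any adapter whose settings differ from its peers on the same network.

# Illustrative check with hypothetical adapter data: flag settings left
# on "auto" and adapters that differ from their peers on the same network.
nics = {
    "NODE1-public": {"duplex": "half", "flow_control": "on", "media": "100TX"},
    "NODE2-public": {"duplex": "auto", "flow_control": "on", "media": "100TX"},
}

for name, cfg in nics.items():
    for setting, value in cfg.items():
        if value == "auto":
            print(f"{name}: set {setting} explicitly (currently auto)")

reference = next(iter(nics.values()))
mismatched = [n for n, c in nics.items() if c != reference]
print("adapters differing from the first one:", mismatched)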

Note

A private network is set up for the sole use of internal cluster communication. In this setup, there is no client communication on the wire.

A public network is set up and enabled so that client systems have access to the cluster's applications and services. In this setup, there is no internal cluster communication on the wire.

A mixed (public and private) network allows the network connection to carry both internal cluster communication and client communication.


The Cluster Service doesn't support NWLink, AppleTalk, or NetBEUI. Clustering should be enabled on nodes that are member servers in the same domain with access to a domain controller. If any node needs to be configured as a domain controller, all the nodes should be configured as domain controllers.

If you want nodes to be domain controllers of their own domain, it is better to configure a domainlet, which is a small domain that contains no user accounts and no Global Catalog servers. Domain controllers in a domainlet do not have to authenticate users or provide Global Catalog lookup services to users or computers.

A domainlet includes only the well-known policies and groups defined for all domains, such as Administrators, Domain Administrators, and the service accounts required by the clusters it supports. When configuring an account to run the Cluster Service, you should select a single domain account that is used consistently across all nodes in the cluster. You must also verify that each NIC used by the cluster nodes has a static TCP/IP configuration for the best performance.

You should follow some standard best practices for securing server clusters in your enterprise. As with all server systems deployed in your enterprise, you should restrict physical access to the server cluster to only trusted personnel. Physical access restrictions include the physical location of the server hardware and any associated networking infrastructure.

On the network side, you should use firewalling to protect your cluster from unauthorized access. Make sure the internal cluster communication is segmented from other networks. If the heartbeat messages exchanged between systems are disrupted, either intentionally by a denial-of-service (DoS) attack or accidentally through traffic overload of that network circuit, the failure might cause another node in the cluster to believe it needs to take over the active resources. This takeover can potentially bring down the cluster if different nodes begin to fight over shared resources.
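
The toy monitor below illustrates that failure mode: a node that misses several consecutive heartbeats is presumed down, which is what triggers a takeover. The interval and miss limit here are assumptions for illustration, not the real Cluster Service values.

import time

# Toy heartbeat monitor; the interval and miss limit are assumptions
# for illustration, not the real Cluster Service values. A node that
# misses several consecutive heartbeats is presumed down, which is why
# a DoS or an overloaded private network can trigger a false takeover.
HEARTBEAT_INTERVAL = 1.2  # seconds between heartbeats (assumption)
MISSED_LIMIT = 5          # consecutive misses before a node is presumed down

def presumed_down(last_heartbeat: float, now: float) -> bool:
    return now - last_heartbeat > HEARTBEAT_INTERVAL * MISSED_LIMIT

now = time.time()
print(presumed_down(last_heartbeat=now - 2.0, now=now))   # False: still healthy
print(presumed_down(last_heartbeat=now - 10.0, now=now))  # True: takeover begins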

Any remote administration should be done only from trusted, secure computers. Additionally, the Cluster Service account should not be a member of the Domain Administrators group, and it should never be used to administer the cluster. To track events on the cluster's nodes, enable full auditing of security-related events in the cluster.

Understanding Cluster Management

On Windows Server 2003 cluster systems, you can use several different resource types to manage cluster resources. For example, the Physical Disk resource type manages disks on a cluster storage device and enables you to assign drive letters to disks or create mounted drives. When drive letter designations are used, they must be constant across all cluster nodes.

No more than one node at a time should use a cluster disk. Normally, the Cluster Service handles this, but if you install a new disk to a cluster or remove a disk from the cluster's control, the Cluster Service can't maintain control of the resource, and nodes might fight over the newly available resource.
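
A minimal sketch of that single-owner rule, with a lock standing in for the disk reservation the Cluster Service normally holds; this is illustrative Python, not cluster code.

import threading

# Sketch of the single-owner rule: a lock stands in for the disk
# reservation the Cluster Service normally holds on each cluster disk.
class ClusterDisk:
    def __init__(self, name):
        self.name = name
        self._owner = None
        self._lock = threading.Lock()

    def claim(self, node: str) -> bool:
        with self._lock:
            if self._owner is None or self._owner == node:
                self._owner = node
                return True
            return False  # refused: another node already owns the disk

disk = ClusterDisk("Disk Q:")
print(disk.claim("NODE1"))  # True: NODE1 now owns the disk
print(disk.claim("NODE2"))  # False: refused while NODE1 holds it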

Dynamic Host Configuration Protocol (DHCP) and Windows Internet Naming Service (WINS) provide network services to clients. Their resources can be configured to fail over if their databases are kept on a disk that is part of the cluster storage.

The Print Spooler resource is used to cluster print services that are available over the network and allows print jobs to be handed off to another node during system failover. The main exception is a printer connected to a local port on a node, such as LPT1: there is no way to fail over control of that printer because the system it's physically connected to has gone offline.

The File Share resource can be used to provide basic file sharing via the cluster for high availability. You can also use the File Share resource type to create a resource that manages a standalone distributed file system (DFS) root, but the File Share resource type cannot manage fault-tolerant DFS roots.

The Internet Protocol Address resource type manages IP addresses as cluster resources, allowing network clients to access groups as virtual servers. In a similar fashion, the Network Name resource type provides an alternate computer name on the network; when this resource is included in a group with an IP Address resource, network clients can access the group as a virtual server by using the alternate computer name.
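
The sketch below models that virtual-server idea: clients see only the IP address and network name pair, regardless of which node currently hosts the group. The names and address are hypothetical.

# Model of the virtual-server idea (hypothetical names and address):
# clients see only the IP address and network name pair, regardless of
# which node currently hosts the group.
group = {
    "IPAddress": "192.168.10.50",
    "NetworkName": "SQLVS1",
    "other_resources": ["Physical Disk Q:", "SQL Server"],
}

def client_view(group):
    # After a failover, the same name and address answer from the
    # surviving node, so clients reconnect without reconfiguration.
    return f"\\\\{group['NetworkName']} ({group['IPAddress']})"

print(client_view(group))  # \\SQLVS1 (192.168.10.50)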

The Volume Shadow Copy Service Task resource type can be used to create jobs in the Scheduled Task folder that must run on the node currently hosting a particular resource group. This enables you to configure the scheduled task so that it can fail over from one cluster node to another.



MCSE 70-293 Exam Cram. Planning and Maintaining a Windows Server 2003 Network Infrastructure
MCSE 70-293 Exam Cram: Planning and Maintaining a Windows Server 2003 Network Infrastructure (2nd Edition)
ISBN: 0789736195
EAN: 2147483647
Year: 2004
Pages: 123

flylib.com © 2008-2017.
If you may any questions please contact us: flylib@qtcs.net