Lesson 2: Designing Clustered Exchange 2000 Servers

Clustering technology allows you to design high-availability solutions for enterprise-level applications, which are especially desirable for mailbox and public folder servers. Do not confuse clusters based on the Windows 2000 Cluster service with load-balancing clusters, however. Load-balancing clusters, discussed in Chapter 8, "Designing Hosted Services with Microsoft Exchange 2000 Server," are suitable for front-end (FE) servers that do not store any user data. Clusters based on the Windows 2000 Cluster service, on the other hand, rely on completely different mechanisms. They are a good choice for back-end (BE) servers, which host the data of potentially thousands of users. The Cluster service is available only in Windows 2000 Advanced Server and Datacenter Server. It is important to note that only the Enterprise Edition of Exchange 2000 Server supports this form of clustering, referred to in the following as a server cluster. The Standard Edition does not benefit from this complex, advanced technology.

This lesson discusses the design of a server cluster of up to four nodes for Exchange 2000 Server. You can read about the clustering technology in general and the differences between Windows 2000 Advanced Server and Datacenter Server. You can also learn about important concepts such as active/active and active/passive clustering.

After this lesson, you will be able to

  • Describe the advantages and limitations of server clusters that rely on the Windows 2000 Cluster service
  • Design a server cluster for Exchange 2000 Server that consists of up to four nodes

Estimated time to complete this lesson: 60 minutes

Understanding the Clustering Technology of Windows 2000 Server

A server cluster is a good choice if you want to build an Exchange 2000 Server system that must support a very large user base. For example, you could design a system of dedicated mailbox servers clustered using four eight-processor computers, each with 64 GB of RAM, multiple fault-tolerant, super-fast network cards, a reliable UPS, and each connected to a shared mass storage solution. This cluster could handle four individual storage groups, each containing five mailbox stores of 5000 mailboxes. That means that, theoretically, you could support 100,000 users with one clustered system.
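The arithmetic behind this sizing claim can be checked with a short script. The storage group and store counts come from the text above; the figure of 5000 mailboxes per store is the design assumption used in the example, not a hard product limit:

```python
# Theoretical user capacity of the four-node cluster described above.
STORAGE_GROUPS = 4        # one storage group per cluster node
STORES_PER_GROUP = 5      # mailbox stores in each storage group
MAILBOXES_PER_STORE = 5000  # design assumption from the example

capacity = STORAGE_GROUPS * STORES_PER_GROUP * MAILBOXES_PER_STORE
print(capacity)  # 100000 users on one clustered system
```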

Note


To take advantage of four-node clustering and 64-GB memory support, you must install Windows 2000 Datacenter Server.

The Principle of a Server Cluster

A cluster is basically a group of servers interconnected by means of a public and private network and a common Small Computer System Interface (SCSI) or Fibre Channel bus to an external storage system (see Figure 10.7). Together, the servers can act as one or many virtual servers. A virtual server corresponds to a generic Internet Protocol (IP) address and a network name and owns a disk resource in the cluster. Any of the cluster nodes can then host the virtual servers, and users can access all the resources in the cluster, including Exchange 2000 services, without knowing the actual name of the node that currently hosts the virtual server. When configuring a virtual server for Exchange 2000 Server, you must place the mailbox and public folder stores on the shared disk system.

Figure 10.7 - The basic architecture of a Windows 2000 server cluster

The main advantage of the server cluster is that a second node can automatically assume the workload if another node fails (see Figure 10.8). Your users lose their sessions temporarily, but can reconnect after a relatively short time using the generic virtual server name, which completely hides the complexity of the cluster from the users. By grouping two or more computers together in a cluster, you can minimize system downtime caused by software, network, and hardware failures. You can configure multiple virtual servers on one cluster.

Figure 10.8 - Accessing a virtual Exchange 2000 server after a node failure

Windows 2000 server clusters consist of the following components:

  • Shared storage bus - Connects all nodes to the disks (or RAID storage systems) where all clustered data must reside. A cluster can support more than one shared storage bus if multiple adapters are installed in each node.
  • Public network connection - Connects client computers to the nodes in the cluster. The public local area network (LAN) is the primary interface of the cluster. You should use fast, reliable network cards, such as Fast Ethernet or Fiber Distributed Data Interface (FDDI) cards.
  • Private network connection - Connects the nodes in a cluster and ensures that the nodes are able to communicate with each other even if the public LAN is down. This private LAN is optional but highly recommended. Low-cost Ethernet cards are sufficient to accommodate the minimal traffic of the cluster communication.

It is advisable to purchase complete cluster sets from a hardware vendor instead of configuring cluster hardware manually. All hardware must be on Microsoft’s Hardware Compatibility List for the Cluster service. Furthermore, all host bus adapters should be of the same kind and should have the same firmware revision. If you are using a traditional SCSI-based cluster, connect only disks and SCSI adapters to it—no tape devices, CD-ROMs, scanners, and so forth—and ensure that the bus is terminated properly on both ends. Carefully test the cluster configuration. A faulty hardware component or improperly terminated SCSI bus can lead to corruption of the cluster disks and serious server problems.

Windows 2000 Clustering Architecture

The Cluster service consists of several internal elements and relies on additional external components to handle the required tasks of managing cluster memberships, failover procedures, and failbacks. The Node Manager, for instance, is an internal module that maintains a list of nodes that belong to the cluster and monitors their system state. This component periodically sends heartbeat messages over the network to its counterparts running on other nodes in the cluster to recognize node faults.

The health of the cluster resources is monitored by the Resource Monitor, which is implemented in a separate process communicating with the Cluster service via remote procedure calls (RPCs). Resources are any physical or logical components that the Resource Manager, an internal component of the Cluster service, can manage, such as disks, IP addresses, network names, Exchange 2000 Server services, and so forth. The Resource Manager receives system information from Resource Monitor and Node Manager to manage resources and resource groups and initiate actions, such as startup, restart, and failover. The Resource Manager works in conjunction with the Failover Manager to carry out a failover.

The Windows 2000 clustering architecture consists of the following main components:

  • Node Manager - Maintains a list of nodes that belong to the cluster, monitors their system state, and sends heartbeat messages to the Node Managers running on other nodes in the cluster.
  • Communications Manager - Manages communication between all nodes of the cluster through the cluster network driver.
  • Resource Monitor - Uses resource dynamic-link libraries (DLLs) to monitor the health of each cluster resource.
  • Resource Manager - Receives system information from Resource Monitor and Node Manager to manage resources and resource groups and initiate actions such as startup, restart, and failover. The Resource Manager works in conjunction with the Failover Manager to carry out a failover.
  • Failover Manager - Moves resource groups between the nodes of a cluster. The Failover Manager communicates with its counterparts on the other nodes to arbitrate the ownership of a failed resource group. This arbitration relies on the node preference list that you can specify when creating resources in the Cluster Administrator console. The arbitration can also take into account other factors such as the capabilities of each node, the current load, and application information. After a new node is determined for the resource group and the group is moved, all nodes update their cluster databases to track which node owns the resource group.
  • Configuration Database Manager - Maintains the cluster configuration database, also known as the cluster registry, which is separate from the local Windows 2000 registry, although a copy of the cluster registry is also kept in the Windows 2000 registry. The configuration database maintains updates on members, resources, restart parameters, and other configuration information.
  • Checkpoint Manager - Saves the configuration data in a log file on the quorum disk, which holds the configuration data log files and recovery logs. The quorum disk is a cluster-specific resource used to communicate configuration changes to all nodes in the cluster. There can only be one quorum disk in a particular Windows 2000 cluster.
  • Global Update Manager - Provides the update service that transfers configuration changes from the configuration data log files on the quorum disk into the configuration database of each node.
  • Log Manager - Writes the recovery logs on the quorum disk.
  • Event Processor - Manages the node state information and controls the initialization of the Cluster service. A node may be in one of three states: offline, online, or paused. A node is offline when it is not an active member of the cluster. A node is online when it is fully active, is able to own and manage resource groups, accepts cluster database updates, and contributes votes to the quorum algorithm. The quorum algorithm determines which node can own the quorum disk. A paused node is an online node unable to own or run resource groups. You may pause a node for maintenance reasons, for instance.

Failover and Failback

Failover and failback are cluster-specific procedures to move resource groups with all their associated resources between nodes. Failover means transferring resource groups from a decommissioned or failed node to another available node in the cluster. Failback, on the other hand, describes the process of moving the resource groups back when the node that was offline is online again (see Figure 10.9). Failover can occur in two situations: It can be triggered manually for maintenance reasons, or the Cluster service can initiate it automatically in the case of a resource failure on the node owning the resource. By default, resource groups continue to run on the alternate node even if the failed node comes back online. To activate failback, you need to specify a preferred owner for the resource group. When this node comes back online again, the Failover Manager fails back the resource group to the recovered or restarted node.
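The failover and preferred-owner failback behavior described above can be sketched as a small state model. All names here are illustrative only, not the Cluster service API, and the real Failover Manager arbitration also weighs node capabilities and current load:

```python
# Minimal model of failover and preferred-owner failback for one
# resource group. Hypothetical names; not the Cluster service API.
class ResourceGroup:
    def __init__(self, name, preferred_owner=None):
        self.name = name
        self.preferred_owner = preferred_owner
        self.owner = None

def failover(group, surviving_nodes):
    """Move the group to an available node after its owner fails."""
    candidates = [n for n in surviving_nodes if n != group.owner]
    group.owner = candidates[0]  # real arbitration also considers load

def node_recovered(group, node):
    """Failback occurs only if the recovered node is the preferred owner."""
    if group.preferred_owner == node:
        group.owner = node

grp = ResourceGroup("EVS1", preferred_owner="NODE-A")
grp.owner = "NODE-A"
failover(grp, ["NODE-B", "NODE-C"])
print(grp.owner)           # NODE-B: another node assumed the workload
node_recovered(grp, "NODE-A")
print(grp.owner)           # NODE-A: failback to the preferred owner
```

Without a preferred owner set, `node_recovered` changes nothing, which models the default behavior of the group staying on the alternate node.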

Figure 10.9 - Failover and failback of a cluster resource

Partitioned Data Access

It is important to remember that the nodes in a Windows 2000 cluster cannot access the shared disks concurrently. Concurrent access to mass storage, also known as "shared-everything" data access, is more common in mainframe-oriented environments. The Windows 2000 Cluster service supports only the "shared-nothing" partitioned data access model to avoid the overhead of synchronizing concurrent disk access.

The shared-nothing model greatly influences the design of the cluster’s shared disk system because only one node can access a particular disk at any given time. For this reason, you need to provide every resource group with its own disk set. It doesn’t make much sense to configure a cluster with just one shared physical disk, for example. The node owning the resources, such as an Exchange 2000 resource group, would request exclusive access to the disk by issuing the SCSI Reserve command. No other node can then access the physical device unless a SCSI Release command is issued. The remaining nodes would not be able to run any additional virtual servers and would have to remain idle until the first node fails.

Note


The most significant disadvantage of the shared-nothing model is that you cannot achieve dynamic load balancing. Resource groups can be moved from a failed node to another node in the cluster, but it is impossible to run the same virtual server on multiple nodes at the same time.

Implementing Load Balancing Through Active/Active Clustering

To best utilize the hardware resources available in a cluster, you may want to implement combined application servers that provide more than one kind of client/server service to their users. For instance, in a two-node cluster, you can run an instance of Microsoft SQL Server on one node and an instance of Exchange 2000 Server on another node. If one node fails, its virtual server resource groups (that is, the virtual SQL Server or Exchange 2000 Server with all its dependent services) are moved to the remaining node. This may reduce the performance of this node, but the cluster quickly continues to provide the complete set of application services, which is usually more important than a temporary performance decrease.

Exchange 2000 Server supports a load-balancing mechanism known as active/active clustering, which allows you to create a two-node or four-node cluster for Exchange 2000 Server only. In this case, you need to configure multiple Exchange 2000 virtual servers and distribute them across the nodes, thus providing static load balancing. Keep in mind that each virtual server requires access to its own disk resources, meaning one or more dedicated sets of physical disks. In other words, if you want to configure four virtual servers in a four-node cluster, you need four separate physical disks plus one disk for the quorum resource, which you should not utilize for any other purposes (see Figure 10.10).

Note


Microsoft does not recommend adding Exchange 2000 services to the virtual server that owns the quorum disk. Defining dedicated virtual servers for Exchange 2000 simplifies service maintenance, such as taking a virtual Exchange 2000 system offline.

Storage Area Network Solutions and Server Clusters

Storage area networks (SANs) are an emerging technology for data storage and storage management. SANs rely on Fibre Channel switching, which connects the storage systems on which data is stored and protected to the computer systems running arbitrary operating systems, such as Windows 2000 Datacenter Server. Fibre Channel switching technology is fast and reliable, and it allows hardware vendors to create storage solutions of up to several terabytes of storage capacity. Complete SAN packages include the hardware, as well as the necessary storage management software. In a reliable SAN environment, multiple paths exist to the stored data.

As shown in Figure 10.10, Windows 2000 server clusters are a perfect solution for SAN-based storage systems that host the mailbox stores and public folder stores of your messaging organization. Even if you need to perform maintenance on a particular server computer, users can continue to work with all resources. When you bring the server back online, failback ensures that all nodes are running virtual servers, as explained earlier in this lesson.

Figure 10.10 - Four Exchange 2000 virtual servers in a four-node cluster

Exchange 2000 Server in a Clustered Environment

The installation of Exchange 2000 Server in a cluster is a straightforward process. Make sure you have installed Windows 2000 Advanced Server or Datacenter Server and the Cluster service on all nodes in the cluster and properly configured the cluster environment. You cannot install Exchange 2000 Server on a nonclustered server and integrate this installation into a cluster afterward. When you launch the Setup executable on the cluster, the installation follows the usual path, with the exception of a single dialog box, which is displayed during the installation to inform you that Setup has detected the cluster environment. Behind the scenes, Setup copies and configures cluster-aware Exchange 2000 components and resource DLLs and sets the Exchange 2000 services to start manually. This prevents the services from starting automatically during the reboot that is required at the end of the installation. At minimum, install Microsoft Exchange Messaging and Collaboration and Microsoft Exchange System Management Tools on all nodes.

Installing Exchange 2000 Server on the Remaining Nodes

Keep in mind that you must set up Exchange 2000 Server on all nodes with exactly the same parameters. It is important that you install only one node at a time using the same account you used to install the Cluster service. This account must have the required permissions to set up Exchange 2000 Server in a forest as outlined in Chapter 5, "Designing a Basic Messaging Infrastructure with Microsoft Exchange 2000 Server."

During installation, you need to place the binary files on the local system drive of each node. The binary files are not shared between the nodes, but it is important to specify the same drive letters and directories on all the nodes in the cluster. Remember to make sure that drive M is not in use on any node because, when you configure virtual servers later on, the Web Storage System (WSS) will use the M drive by default. If this drive is not available on one node, the WSS uses the next drive letter automatically, which may cause problems because the drive letter would differ from the remaining nodes.

Controlling Exchange 2000 Server Services

Before your users can access messaging resources, you need to configure and start virtual servers in the Cluster Administrator console, which is the primary administration utility. Do not start or stop clustered Exchange 2000 services in the Services snap-in because this undermines the cluster environment. You must always use the Cluster Administrator console to bring clustered services online or offline.

Note


Do not configure restart settings in the Services snap-in for services that have been installed in a cluster. Restarting services outside the Cluster Administrator console interferes with the cluster management software.

Configuring Resource Groups and Virtual Servers

Following the installation of Exchange 2000 Server on all cluster nodes, you need to configure resource groups, which correspond to virtual servers. As mentioned earlier in this lesson, every virtual server (that is, resource group) that you plan to run in a cluster requires an IP address, a network name, and one or more shared disk resources where the messaging databases must be placed. The network name is an important parameter because users must specify this name in their client settings to connect to their mailboxes. To complete the configuration, include the Exchange System Attendant resource. The remaining Exchange 2000 components are added to the virtual server automatically. The last step is to bring the server online, which is accomplished quickly in the Cluster Administrator console. Right-click the virtual server resource group and select Bring Online.

Note


For each virtual server, you can specify preferred owner information to distribute the resources equally across the cluster nodes.

Deleting the Default Public Folder Store

When configuring more than one virtual server, keep in mind that you cannot configure more than one MAPI-based public folder store on the cluster. After adding additional virtual servers, you must delete their default public folder stores before bringing the resource groups online. The configuration of MAPI-based and alternate public folder hierarchies and databases is covered in Chapter 5, "Designing a Basic Messaging Infrastructure with Microsoft Exchange 2000 Server."

Configuring Virtual Protocol Servers

Clustered Exchange 2000 Servers, due to their reliable support of large end user communities, are ideal candidates for BE servers in a front-end/back-end (FE/BE) configuration. As outlined in Chapter 8, "Designing Hosted Services with Microsoft Exchange 2000 Server," you may have to create additional virtual servers for Internet access protocols on the cluster to control access to mailbox and public folder resources. Use the Exchange System Manager snap-in to create the protocol virtual servers as usual, but do not use this tool or the Services snap-in to bring the virtual servers online.

You must use the Cluster Administrator console to complete the configuration of virtual protocol servers. Right-click your virtual server, point to New, and then select Resource to add a new resource that corresponds to the Internet virtual protocol server that you created with the Exchange System Manager snap-in. On the Possible Owners wizard screen, make sure that the nodes on which you installed Exchange 2000 Server appear in the Possible Owners list, and then click Next. On the Dependencies wizard screen, in the Resource Dependencies list, add the Exchange Information Store, and then, on the Virtual Server Instance wizard screen, select your virtual protocol server. Click Finish to complete the configuration, and bring the virtual protocol server online by right-clicking it and selecting Bring Online.

Full-Text Indexing an Information Store of a Virtual Server

Exchange 2000 Server supports full-text indexing in active/active cluster configurations through the Exchange MS Search Instance resource, which is added to each virtual server automatically when you add the Exchange System Attendant resource to the resource group. To enable full-text indexing on a cluster, use the Exchange System Manager snap-in as usual. Right-click the desired store and select Create Full-Text Index. You must make sure the catalog is created on the shared disk resource.

If you don’t plan to use the full-text indexing feature of Exchange 2000 Server, you may delete the Exchange MS Search Instance from your virtual server. However, keep in mind that it is impossible to add this resource again without deleting and re-creating the information store of the virtual server, so it might be a better idea to keep it in the resource group. It doesn’t affect system performance—as long as you don’t create a full-text index for a mailbox or public folder store.

Limitations of Windows 2000 Server Clusters

Clustering Windows 2000 reduces the number of potential single points of failure and thereby significantly improves the availability of system resources in the network. Clustered servers also show improved scalability through the grouping of server resources. However, this does not imply that server clusters offer fault tolerance in all areas. The information store databases, for example, remain single points of failure. To address this issue, use RAID technology to implement a sophisticated hard disk configuration according to the recommendations in Lesson 1.

To put it plainly, the Cluster service of Windows 2000 cannot guarantee service availability 100 percent of the time, although it comes very close. In the event of a system failure, the Cluster service takes the virtual server offline on the first node before it puts it online again on another node; hence communication processes are interrupted and users need to reconnect after the failover. Usually, your users can reconnect almost immediately, but if the first node shut down due to problems with the Information Store service, it may take measurably longer before the virtual server is operational again on the second node. The Information Store service on the second node attempts to fix database inconsistencies in a process known as soft recovery, which may take a while. If the system cannot fix the inconsistencies, the affected store remains dismounted. Clusters—no matter how many nodes you add—don’t protect and don’t repair information store databases. You still have to develop a sound backup and disaster recovery strategy. You can read more about database recovery mechanisms in Chapter 11, "Designing a Disaster Recovery Plan for Microsoft Exchange 2000 Server."

Limitations of Exchange 2000 Server in Clustered Environments

Several limitations also apply to Exchange 2000. Recall that only one public folder store on the cluster can be associated with the default MAPI-based public folder hierarchy. Several Exchange 2000 components are not supported in a cluster at all, such as Instant Messaging, the NNTP service, the Key Management Service (KMS), or connectors to other mail systems. Others, such as the Message Transfer Agent (MTA) Stacks service, can only run in an active/passive configuration, meaning they can only be active on one node. Server clustering is a perfect choice for mailbox and public folder servers, but it is not appropriate for FE servers or messaging bridgeheads. This is not a problem because clustered servers can coexist seamlessly with standard servers in the Exchange 2000 organization. Table 10.3 lists the Exchange 2000 components that support cluster configurations.

Table 10.3 Exchange 2000 Components Supported in Cluster Configurations

Component                                     Cluster Support

Chat                                          Active/passive
Full-Text Indexing                            Active/active
Hypertext Transfer Protocol (HTTP)            Active/active
Internet Message Access Protocol (IMAP)       Active/active
Information Store                             Active/active
MTA                                           Active/passive
Post Office Protocol (POP3)                   Active/active
Simple Mail Transfer Protocol (SMTP)          Active/active
System Attendant                              Active/active

Designing Clusters for Exchange 2000 Server

The decision to implement a server cluster depends primarily on the importance of the system to your company. Servers with a large number of mailboxes are usually business-critical facilities and therefore good candidates for clustering. To best utilize the hardware resources available in a cluster, consider running multiple virtual servers in an active/active configuration. Active/active configurations better utilize hardware while all nodes are functioning, but software license costs are higher because Exchange licenses must be purchased for each active node. If you only need one virtual server to support all of your users, however, you may want to implement a combined application server that runs more than one kind of client/server service on its nodes.

Fully loaded application clusters allow you to maximize resource utilization, but their disadvantage is that performance drops if there is a node failure. With an unavailable cluster node, one of the remaining nodes must assume the extra workload in addition to supporting its own virtual servers. To avoid reduced performance, consider implementing a fully loaded system with one hot spare node. When all nodes are online, the hot spare does not own a virtual server and is idle (see Figure 10.11). A single node failure does not affect the system performance. The hot spare configuration has the disadvantage of an idle server system when every node is operational.

Each cluster node must be capable of assuming the workload for any virtual server that it provides failover services for.
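This sizing rule can be expressed as a quick comparison. In a fully loaded cluster, a surviving node must absorb a failed node's users on top of its own, whereas a hot-spare design keeps the per-node load constant after one failure. The 3500-users-per-virtual-server figure is illustrative, borrowed from the Proseware scenario later in this lesson:

```python
USERS_PER_VS = 3500  # illustrative load per virtual server

# Fully loaded 4-node cluster: 4 virtual servers, no spare.
# After one node fails, some surviving node hosts two virtual servers.
fully_loaded_worst = 2 * USERS_PER_VS
print(fully_loaded_worst)   # 7000 users on the busiest node

# 4-node cluster with one hot spare: only 3 virtual servers.
# The idle spare picks up the failed node's virtual server unchanged.
hot_spare_worst = 1 * USERS_PER_VS
print(hot_spare_worst)      # 3500 users, no performance drop
```

The trade-off is visible in the numbers: the hot spare halves the worst-case load per node at the cost of one idle server during normal operation.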

Figure 10.11 - A fully loaded system with one hot spare node

Designing Clustered Servers for Proseware, Inc.

Proseware, Inc., a modern application service provider (ASP) introduced in Chapter 8, has deployed a large number of Exchange 2000 servers as FE servers to support its 350,000 private OWA users over the Internet. "At Proseware, we expect our customer base to grow continuously," says Guy Gilbert, Head of IT Operations. "We have developed a front-end/back-end configuration that perfectly integrates with our firewall topology and arrangements of perimeter networks. To best support our users and accommodate growth, we now want to change our back-end server strategy and replace the standard servers with four-node clusters. Each node will run a virtual server with a structured arrangement of mailbox stores. To keep the number of clusters within reasonable limits, we intend to place up to 3500 users on each virtual server and will invest in eight-processor machines with 2 GB of RAM. We have a storage area network in place with an overall capacity of 240 TB."

Gilbert has developed the following design for the BE server clusters:

  1. To support 350,000 users with just 25 clusters, we need to install Windows 2000 Datacenter Server, because only this version allows us to configure four-node clusters that front-end our SAN.
  2. We cannot afford to configure one node in each cluster as a hot spare. Every node must run a virtual server and must service 3500 users. In the case of a node failure or system maintenance, performance may drop significantly, but these temporary situations should not occur often.
  3. To avoid gigantic mailbox stores, we will configure each node’s storage group with five mailbox stores, with the exception of the first node, which will have four mailbox stores and the default public folder store. Our SAN has to provide one RAID-0+1 volume to each node to store all the databases.
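Gilbert's figures can be cross-checked against the stated user base. All numbers come from the scenario; the per-store average ignores the first node, which trades one mailbox store for the default public folder store:

```python
# Cross-check of Proseware's back-end sizing (numbers from the scenario).
CLUSTERS = 25
NODES_PER_CLUSTER = 4            # Windows 2000 Datacenter Server clusters
USERS_PER_VIRTUAL_SERVER = 3500
STORES_PER_STORAGE_GROUP = 5     # mailbox stores per node's storage group

total_users = CLUSTERS * NODES_PER_CLUSTER * USERS_PER_VIRTUAL_SERVER
print(total_users)               # 350000, matching the stated user base

# Average mailboxes per store when users spread evenly over five stores
users_per_store = USERS_PER_VIRTUAL_SERVER // STORES_PER_STORAGE_GROUP
print(users_per_store)           # 700, well below gigantic-store territory
```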

Activity: Designing Clustered Environments

In this activity, you will design clustered environments for two companies that plan to utilize powerful mailbox servers in their environments: Woodgrove Bank and Humongous Insurance.

Tip


You can use Figure B.28 in Appendix B as a guideline to accomplish this activity.

Scenario: Woodgrove Bank

Woodgrove Bank, a Swiss bank introduced in Chapter 3, has successfully deployed Exchange 2000 Server in Switzerland and placed all 600 users in Zurich, Bern, and Basel on a single server in Zurich named ZUR-01-EX. "Windows 2000 server clustering is the answer to my problems," says Luis Bonifaz, Chief Information Officer. "I was worried that a simple system failure might put the entire messaging environment in Switzerland out of order. We already designed the server’s information store so that single points of disk failures are eliminated, but disk redundancy is not the answer to all possible system failures. Processors and motherboards are likewise not invincible and standard servers do not protect us in this respect. With a server cluster, we can provide system redundancy in all critical areas. We have the budget to acquire any necessary hardware."

It is your task to design an appropriate cluster solution for Woodgrove Bank:

  1. Woodgrove Bank does not have a SAN in place. Which technology would you recommend, taking into consideration that Woodgrove Bank is only planning to implement a server cluster for Exchange 2000 Server?
  2. Which Windows 2000 software package should Woodgrove Bank use to install the server cluster?
  3. How many Exchange 2000 virtual servers does Woodgrove Bank need to configure to adequately support the users with acceptable response times?
  4. Single-node failures or system maintenance must not lead to a performance reduction. How should Woodgrove Bank configure the cluster to achieve this goal?

Scenario: Humongous Insurance

Humongous Insurance, introduced in Chapter 8, is a national financial institution with headquarters in New York and customer service centers in all major U.S. cities. "At Humongous Insurance, we use Exchange 2000 Server for more than simple messaging," explains Stephanie Bourne, Director of Information Technology. "Among other things, we have implemented a business-critical workflow application that our licensed agents can use to report and track insurance claims. This is only one example of how we integrate our business processes with the messaging infrastructure. Our business depends to a great extent on the reliability of our servers. Fault tolerance is very important to us and for this reason, we are currently thinking about implementing clustered Exchange 2000 servers in all locations."

It is your task to design an appropriate cluster solution for Humongous Insurance:

  1. On an average, fewer than 250 users work in the various offices of Humongous Insurance. SANs do not exist. How many nodes and which disk technology should Humongous Insurance use for the server clusters in each location?
  2. Which Windows 2000 software package should Humongous Insurance install on the cluster nodes?
  3. How many virtual servers should Stephanie Bourne configure in each cluster to adequately support the Exchange 2000 users in each location?
  4. Humongous Insurance wants to maximize hardware utilization by running Microsoft SQL Server on the cluster in addition to Exchange 2000 Server. How should Stephanie Bourne distribute the workload?

Lesson Summary

Using Windows 2000 Advanced Server or Datacenter Server and Exchange 2000 Enterprise Server, you can design high-availability solutions for mailbox and public folder servers that store the data of a potentially very large number of users. A cluster is basically a group of servers that can act as one or many virtual servers. Any of the cluster nodes can host the virtual servers, and the users can access all the resources in the cluster without knowing the actual name of the node that currently hosts a virtual server. With Windows 2000 Advanced Server, you can configure two-node clusters. Windows 2000 Datacenter Server supports clusters of up to four nodes.

To allow access to the resources through any of the cluster nodes, the data must reside on a shared storage media. Typically, clusters use a SCSI or Fibre Channel bus to connect to an external storage system. However, the nodes in a server cluster cannot access the shared disks concurrently. Every virtual server must therefore be assigned its own set of shared disks.

To best utilize the hardware resources available in a cluster, run a virtual server on each cluster node. Exchange 2000 Server supports active/active clustering, which allows you to run multiple Exchange 2000 virtual servers concurrently on the same cluster. If your organization only requires one Exchange 2000 virtual server, consider implementing a combined application server that provides more than one kind of client/server service to users, such as a SQL Server 2000 virtual server on one node and an Exchange 2000 virtual server on a second node. If one node fails, the remaining node runs both virtual servers, which may degrade system performance. However, the services remain available, which is usually more important.

Installation of Exchange 2000 Server in a cluster is a straightforward process. Setup detects the cluster environment automatically and installs the cluster-aware components and services on the node. It is important to install Exchange 2000 on all nodes with the same parameters and installation directories. The services are configured to start manually. Before you can start the services, you need to configure a resource group for Exchange 2000 Server. This group must be assigned an IP address, a network name, and a disk resource, as usual, plus an Exchange System Attendant resource. All other resources are created automatically for you. Remember to start, stop, or pause services only in the Cluster Administrator console.



MCSE Training Kit (Exam 70-225): Microsoft Exchange 2000 Server Design and Deployment (Pro-Certification)
ISBN: 0735612579
Year: 2001