With numerous cluster software packages on the market, each offering a variety of configurations, there are many ways to configure a high availability (HA) cluster. We cannot cover every possible scenario, so in this redbook we focus on two scenarios that we believe apply to many sites: a mutual takeover scenario for IBM Tivoli Workload Scheduler, and a hot standby scenario for IBM Tivoli Management Framework. We discuss these scenarios in detail in the following sections.
In our scenario, we assume a customer who plans to manage jobs for two mission-critical business applications. The customer plans to run the two business applications on separate nodes, and would like to install a separate IBM Tivoli Workload Scheduler Master Domain Manager on each node to control the jobs of each application. The customer is looking for a cost-effective high availability solution that minimizes the downtime of business application processing if a system component fails. Possible solutions for this customer would be the following:
Create a separate HA cluster for each node by adding two hot standby nodes and two sets of external disks.
Create one HA cluster by adding an additional node and a set of external disks. Designate the additional node as a hot standby node for the two application servers.
Create one HA cluster by adding a set of external disks. Each node is designated as a standby for the other node.
The first two solutions require the additional machines to sit idle until a fallover occurs, while the third solution uses all of the machines in the cluster and leaves no node idle. Here we assume that the customer chose the third solution. This type of configuration is called a mutual takeover, as discussed in Chapter 2, "High level design and architecture" on page 31.
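As a rough sketch of how such a mutual takeover is typically implemented with HACMP, each IBM Tivoli Workload Scheduler instance is defined as an HACMP application server with a start script and a stop script, and both scripts must be present on both nodes so that either node can run either instance. The user name tws1, the mount point /tws1, and the script name below are our own assumptions, not part of the customer configuration:

    #!/bin/ksh
    # start_tws1 - sketch of an HACMP application server start script for TWS1.
    # Assumes the resource group has already mounted the shared file system /tws1,
    # and that the tws1 user's profile sets up the TWS environment (PATH and so on).
    su - tws1 -c "/tws1/StartUp"        # start the netman listener
    su - tws1 -c "conman start"         # start batchman, mailman, jobman
    exit 0

A matching stop script would typically unlink the workstations and stop the instance (for example, conman "unlink @;noask", conman "stop;wait", and conman "shut;wait") under the same user before HACMP releases the shared volume group.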
Note that this type of cluster configuration is only possible if the two business applications in question and IBM Tivoli Workload Scheduler itself have no software or hardware restrictions that prevent them from running on the same physical machine. Figure 3-1 on page 65 shows a diagram of our cluster.
Figure 3-1: Overview of our HA cluster scenario
In Figure 3-1, node Node1 controls TWS1 and the application APP1, and node Node2 controls TWS2 and the application APP2. TWS1 and TWS2 are installed on the shared external disk so that each instance of IBM Tivoli Workload Scheduler can fall over to the other node.
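Because both instances may run side by side on the surviving node after a fallover, each instance needs its own netman port. The nm port option is set in each instance's localopts file; the values below are only an illustration:

    # /tws1/localopts (TWS1) -- port values are assumptions
    nm port =31111
    # /tws2/localopts (TWS2)
    nm port =31112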
We assume that system administrators want to use the Job Scheduling Console (JSC) to manage the scheduling objects and production plans. To enable the use of JSC, Tivoli Management Framework (TMF) and the IBM Tivoli Workload Scheduler Connector must be installed.
Because each IBM Tivoli Workload Scheduler instance requires a running Tivoli Management Framework server or Managed Node, we need two Tivoli Management Region (TMR) servers. Keep in mind that in our scenario, when a node fails, everything installed on the external disk falls over to the other node.
Note that running two TMR servers or two Managed Nodes on one node is not officially supported. Therefore, the possible TMF configuration in this scenario is to install a TMR server on the local disk of each node.
The IBM Tivoli Workload Scheduler Connector is also installed on the local disks. To enable JSC access to both IBM Tivoli Workload Scheduler instances during a fallover, each IBM Tivoli Workload Scheduler Connector needs two connector instances defined: Instance1 to control TWS1, and Instance2 to control TWS2.
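A sketch of how these connector instances might be created on each TMR server is shown below. The instance names and TWS home directories match our scenario or are assumed, and the wtwsconn.sh options can differ between IBM Tivoli Workload Scheduler versions, so treat this only as an outline:

    # Run in a Tivoli shell on Node1's TMR server (repeat on Node2 with -h Node2)
    . /etc/Tivoli/setup_env.sh                          # load the Framework environment
    wtwsconn.sh -create -h Node1 -n Instance1 -t /tws1  # connector instance for TWS1
    wtwsconn.sh -create -h Node1 -n Instance2 -t /tws2  # connector instance for TWS2
    wlookup -ar MaestroEngine                           # verify that both instances exist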
The mutual takeover scenario covers high availability for IBM Tivoli Workload Scheduler. Here, we cover a simple hot standby scenario for IBM Tivoli Management Framework (TMF). Because running multiple instances of the Tivoli Management Region server (TMR server) on one node is not supported, a possible configuration to provide high availability is a cluster with a primary node, a hot standby node, and a shared disk subsystem.
Figure 3-2 shows a simple hot standby HA cluster with two nodes and a shared external disk. IBM Tivoli Management Framework is installed on the shared disk, and normally resides on Node1. When Node1 fails, TMF will fall over to Node2.
Figure 3-2: A hot standby cluster for a TMR server
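As a sketch, the HACMP application server start script for the TMR server could look like the following. This assumes that the resource group has already mounted the shared file systems and brought up the service IP address used by the object dispatcher, and that /etc/Tivoli is available on both nodes (for example, replicated or linked to the shared disk):

    #!/bin/ksh
    # start_tmf - sketch of an HACMP application server start script for the TMR server
    . /etc/Tivoli/setup_env.sh      # load the Framework environment
    /etc/Tivoli/oserv.rc start      # start the object dispatcher (oserv)
    exit 0

A matching stop script would shut the object dispatcher down (for example, with odadmin shutdown all) before HACMP releases the shared volume group. In HA configurations the oserv is also usually forced to bind to the service IP address (odadmin set_force_bind TRUE) so that the TMR server keeps the same address on whichever node it runs.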