Figure 1-1: Main architectural elements of IBM Tivoli Workload Scheduler relevant to high availability
Figure 1-2: Typical site configuration of all Tivoli products that can be integrated with IBM Tivoli Workload Scheduler out of the box
Figure 1-3: Relationship between major components of IBM Tivoli Workload Scheduler and IBM Tivoli Management Framework
Figure 1-4: Cost and benefits of availability technologies
Figure 1-5: Example disaster recovery incident where no job recovery is required
Figure 1-6: Example disaster recovery incident where job recovery not related to IBM Tivoli Workload Scheduler is required
Figure 1-7: Example disaster recovery incident requiring multiple, different job recovery actions
Figure 1-8: Standby configuration in normal operation
Figure 1-9: Standby configuration in fallover operation
Figure 1-10: Takeover configuration in normal operation
Figure 1-11: Takeover configuration in fallover operation
Figure 1-12: Example HACMP Cluster
Figure 1-13: Introductory high availability road map
Figure 1-14: Road map for implementing highly available IBM Tivoli Workload Scheduler (no IBM Tivoli Management Framework, no Job Scheduling Console access through cluster nodes)
Figure 1-15: Road map for implementing IBM Tivoli Workload Scheduler in a highly available configuration, with IBM Tivoli Management Framework
Figure 1-16: Road map for implementing IBM Tivoli Management Framework by itself
Chapter 2: High Level Design and Architecture
Figure 2-1: A typical HA cluster configuration
Figure 2-2: Resource groups in a cluster
Figure 2-3: Fallover of a resource group
Figure 2-4: Two-node cluster
Figure 2-5: Multi-node cluster
Figure 2-6: Symbol of IBM Tivoli Workload Scheduler engine availability
Figure 2-7: z/OS access method
Figure 2-8: Three-instance configuration
Figure 2-9: IBM Tivoli Workload Scheduler job flow
Chapter 3: High Availability Cluster Implementation
Figure 3-1: Overview of our HA cluster scenario
Figure 3-2: A hot standby cluster for a TMR server
Figure 3-3: Cluster node plan
Figure 3-4: Cluster diagram with applications added
Figure 3-5: Cluster diagram with network topology added
Figure 3-6: Cluster diagram with disks added
Figure 3-7: Cluster diagram with shared LVM added
Figure 3-8: Cluster diagram with resource group added
Figure 3-9: SSA Cabling for high availability scenario
Figure 3-10: Volume Group SMIT menu
Figure 3-11: Defining a volume group
Figure 3-12: Add a Journaled File System menu
Figure 3-13: Defining a journaled file system
Figure 3-14: Changing a Logical Volume menu
Figure 3-15: Renaming a logical volume
Figure 3-16: Import a Volume Group
Figure 3-17: Changing a Volume Group screen
Figure 3-18: Changing the properties of a volume group
Figure 3-19: The TCP/IP SMIT menu
Figure 3-20: Configuring network adapters
Figure 3-21: IBM Fix Central Web page for downloading AIX 5.2 maintenance packages
Figure 3-22: Using the IBM Fix Central Web page for downloading HACMP 5.1 patches
Figure 3-23: Select fixes page of IBM Fix Central Web site
Figure 3-24: Confirmation dialog presented in IBM Fix Central Select fixes page
Figure 3-25: Select fixes page showing fixes found that match APAR IY45695
Figure 3-26: Packaging options page for packaging fixes for APAR IY45695
Figure 3-27: Download fixes page for fixes related to APAR IY45695
Figure 3-28: Screen displayed after running command smitty install
Figure 3-29: Filling out the INPUT device/directory for software field in the Install Software SMIT panel
Figure 3-30: Install Software SMIT panel with all installation options
Figure 3-31: Installation confirmation dialog for SMIT
Figure 3-32: COMMAND STATUS SMIT panel showing successful installation of HACMP 5.1
Figure 3-33: Update Software by Fix (APAR) SMIT panel displayed by running command smitty update
Figure 3-34: Entering directory of APAR IY45695 fixes into Update Software by Fix (APAR) SMIT panel
Figure 3-35: Preparing to select fixes for APAR IY45695 in Update Software by Fix (APAR) SMIT panel
Figure 3-36: Selecting fixes for APAR IY45695 in FIXES to install SMIT dialog
Figure 3-37: Selecting all fixes of APAR IY45695 in FIXES to install SMIT dialog
Figure 3-38: Applying all fixes of APAR IY45695 in Update Software by Fix (APAR) SMIT panel
Figure 3-39: COMMAND STATUS SMIT panel showing all fixes of APAR IY45695 successfully applied
Figure 3-40: How to specify removal of HACMP in Remove Installed Software SMIT panel
Figure 3-41: Set options for removal of HACMP in the Remove Installed Software SMIT panel
Figure 3-42: Successful removal of HACMP as shown by COMMAND STATUS SMIT panel
Figure 3-43: Microsoft Cluster environment
Figure 3-44: Windows Components Wizard
Figure 3-45: Windows Components Wizard
Figure 3-46: Welcome screen
Figure 3-47: Hardware Configuration
Figure 3-48: Create or Join a Cluster
Figure 3-49: Cluster Name
Figure 3-50: Select an Account
Figure 3-51: Add or Remove Managed Disks
Figure 3-52: Cluster File Storage
Figure 3-53: Warning window
Figure 3-54: Network Connections - All communications
Figure 4-40: Add a Persistent Node IP Label/Address SMIT screen for tivaix1
Figure 4-41: Network Name SMIT dialog
Figure 4-42: Add a Persistent Node IP Label/Address SMIT screen for tivaix2
Figure 4-43: Select Add a Pre-defined Communication Interface to HACMP Cluster configuration
Figure 4-44: Select the Pre-Defined Communication type SMIT selector screen
Figure 4-45: Select a Network SMIT selector screen
Figure 4-46: Add a Communication Interface SMIT screen
Figure 4-47: COMMAND STATUS SMIT screen for successful verification of an HACMP Cluster configuration
Figure 4-48: COMMAND STATUS SMIT screen for our environment's configuration
Figure 4-49: Start Cluster Services SMIT screen
Figure 4-50: COMMAND STATUS SMIT screen displaying successful start of cluster services
Figure 4-51: Current status of all HACMP subsystems on a cluster node
Figure 4-52: Select a Resource Group SMIT dialog
Figure 4-53: Select a Destination Node SMIT dialog
Figure 4-54: Move a Resource Group SMIT screen
Figure 4-55: COMMAND STATUS SMIT screen for moving a resource group
Figure 4-56: How to start HACMP on system restart
Figure 4-57: Relationship of IBM Tivoli Workload Scheduler, IBM Tivoli Management Framework, Connectors, and Job Scheduling Consoles during normal operation of an HACMP Cluster
Figure 4-58: Viewing multiple instances of IBM Tivoli Workload Scheduler on separate cluster nodes on a single display
Figure 4-59: Available scheduling engines when logged into tivaix1 during normal operation
Figure 4-60: Available scheduling engines when logged into tivaix2 during normal operation
Figure 4-61: How multiple instances of the Connector work during normal operation
Figure 4-62: Multiple instances of Connectors after tivaix2 falls over to tivaix1
Figure 4-63: Sample error dialog box in Job Scheduling Console indicating possible fallover of cluster node
Figure 4-64: Available scheduling engines on tivaix1 after tivaix2 falls over to it
Figure 4-65: Available Connectors in interconnected Framework environment after tivaix2 falls over to tivaix1
Figure 4-66: Available Connectors in interconnected Framework environment during normal cluster operation
Figure 4-67: IBM Tivoli Framework 4.1.0 application and patch sequence and dependencies as of December 2, 2003
Figure 4-68: Available scheduling engines after interconnection of Framework servers
Figure 4-69: Log into TWS Engine1
Figure 4-70: Log into TWS Engine2
Figure 4-71: Network diagram of the Microsoft Cluster
Figure 4-72: Cluster Administrator
Figure 4-73: Installation - Select Language
Figure 4-74: Installation - Welcome Information
Figure 4-75: Installation - License agreement
Figure 4-76: Installation - Install new Tivoli Workload Scheduler
Figure 4-77: Installation - User information
Figure 4-78: Installation - Install directory
Figure 4-79: Type of Installation
Figure 4-80: Type of IBM Tivoli Workload Scheduler workstation
Figure 4-81: Workstation information
Figure 4-82: Extra optional features
Figure 4-83: Installation of Additional Languages
Figure 4-84: Review the installation
Figure 4-85: IBM Tivoli Workload Scheduler Installation progress window
Figure 4-86: Completion of a successful install
Figure 4-87: Cluster Administrator
Figure 4-88: IBM Tivoli Workload Scheduler Workstation definition
Figure 4-89: New Cluster resource
Figure 4-90: Resource values
Figure 4-91: Node selection for resource
Figure 4-92: Dependencies for this resource
Figure 4-93: Resource parameters
Figure 4-94: Registry Replication
Figure 4-95: Cluster resource created successfully
Figure 4-96: Dependencies for IBM Tivoli Workload Scheduler Netman service
Figure 4-97: Cluster Administrator
Figure 4-98: The Advanced tab
Figure 4-99: General tab for Group Properties
Figure 4-100: Failover tab for Group Properties
Figure 4-101: Failover tab for Group Properties
Figure 4-102: Network diagram of the Microsoft Cluster
Figure 4-103: IBM Tivoli Workload Scheduler user authentication flow
Figure 4-104: Installation location for TMRs
Chapter 5: Implement IBM Tivoli Management Framework in a Cluster
Figure 5-1: IBM Tivoli Management Framework in normal operation on tivaix1
Figure 5-2: State of cluster after IBM Tivoli Management Framework falls over to tivaix2
Figure 5-3: Start SSA link verification on tivaix1
Figure 5-4: Results of link verification test on SSA adapter ssa0 in tivaix1
Figure 5-5: Results of SSA link verification test on SSA adapter ssa0 in tivaix2
Figure 5-6: Select an SSA disk from the SSA Physical Disk SMIT selection screen
Figure 5-7: Identify the connection address of an SSA disk
Figure 5-8: Select physical volumes for volume group itmf_vg
Figure 5-9: Configure settings to add volume group itmf_vg
Figure 5-10: Select a volume group using the Volume Group Name SMIT selection screen
Figure 5-11: Create a standard Journaled File System on volume group itmf_vg in tivaix1
Figure 5-12: Successful creation of JFS file system /opt/hativoli on tivaix1
Figure 5-13: Rename a logical volume
Figure 5-14: Rename the logical log volume
Figure 5-15: Export a Volume Group SMIT screen
Figure 5-16: Import a volume group
Figure 5-17: Import volume group itmf_vg on tivaix2
Figure 5-18: IBM Tivoli Framework 4.1.0 application and patch sequence and dependencies as of December 2, 2003
Figure 5-19: Normal operation of highly available Endpoint
Figure 5-20: Fallover operation of highly available Endpoint
Figure 5-21: Normal operation of local and highly available Endpoints
Figure 5-22: Cluster state after moving highly available Endpoint to tivaix2
Figure 5-23: Cluster state after falling over tivaix1 to tivaix2
Figure 5-24: Configure Resources to Make Highly Available SMIT screen