5.2 Implementing Tivoli Framework in a Microsoft Cluster

In this section we cover the installation of Tivoli on a Microsoft Cluster, which includes the following topics:

  • Installation of a TMR server on a Microsoft Cluster

  • Installation of a Managed Node on a Microsoft Cluster

  • Installation of an Endpoint on a Microsoft Cluster

5.2.1 TMR server

In the following sections, we walk you through the installation of Tivoli Framework in an MSCS environment.

  • Installation overview - provides an overview of cluster installation procedures. It also provides a reference for administrators who are already familiar with configuring cluster resources and might not need detailed installation instructions.

  • Framework installation on node 1 - provides installation instructions for installing and configuring Tivoli Framework on the first node in the cluster. In this section of the install, node 1 will own the cluster resources required for the installation.

  • Framework installation on node 2 - provides installation instructions for installing and configuring Tivoli Framework on the second node in the cluster. The majority of the configuration takes place in this section. The second node is required to own the cluster resources in this section.

  • Cluster resource configuration - describes how the Tivoli Framework services are configured as cluster resources. After the cluster resources are configured, the Framework services can be moved between the nodes.

Installation overview

In this section we walk through the installation and configuration of the Framework. The sections following provide greater detail.

Node 1 installation
  1. Make sure node 1 is the owner of the cluster group that contains the drive where the Framework will be installed (X:, in our example).

  2. Insert the Tivoli Framework disc 1 in the CD-ROM drive and execute the following command: setup.exe advanced

  3. Click Next past the welcome screen.

  4. Click Yes at the license screen.

  5. Click Next at the accounts and permissions page.

  6. Enter the name of the cluster name resource in the advanced screen (tivw2kv1, in our example). Make sure that the start services automatically box is left unchecked.

  7. Specify an installation password if you would like. Click Next.

  8. Specify a remote administration account and password if applicable. Click Next.

  9. Select Typical installation option. Click Browse and specify a location on the shared drive as the installation location (X:\tivoli, in our example).

  10. Enter IBMTIVOLIMANAGEMENTREGIONLICENSEKEY41 as the license key. Click Next.

  11. Click Next to start copying files.

  12. Press any key after the oserv service has been installed.

  13. Click Finish to end the installation on node 1.

Node 2 installation
  1. Copy tivoliap.dll from node 1 to node 2.

  2. Copy the %SystemRoot%\system32\drivers\etc\Tivoli directory from node 1 to node 2.

  3. Move the cluster group from node 1 to node 2.

  4. Source the Tivoli environment.

  5. Create the tivoli account by running %BINDIR%\TAS\Install\ntconfig -e.

  6. Load the tivoliap.dll with the LSA by executing wsettap -a.

  7. Set up the TRAA account using wsettap.

  8. Install TRIP using "trip -install -auto".

  9. Install the Autotrace service using %BINDIR%\bin\atinstall --quietcopy %BINDIR%\bin.

  10. Install the object dispatcher using oinstall -install %DBDIR%\oserv.exe.
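
For reference, after the cluster group has been moved to node 2 and the Tivoli environment has been sourced, the remaining configuration steps can be run from a single command prompt on node 2. The following is a minimal sketch using only the commands listed above; the TRAA setup with wsettap is omitted because its arguments depend on your environment, and each command is covered in detail later in this section.

        rem Create the tmersrvd account and the Tivoli_Admin_Privileges group
        %BINDIR%\TAS\Install\ntconfig -e
        rem Register tivoliap.dll with the LSA (a reboot is required to complete this)
        wsettap -a
        rem Install the TRIP service
        trip -install -auto
        rem Install the Autotrace service
        %BINDIR%\bin\atinstall --quietcopy %BINDIR%\bin
        rem Install the object dispatcher (oserv) service
        oinstall -install %DBDIR%\oserv.exe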

Cluster resource configuration
  1. Open the Microsoft Cluster administrator.

  2. Create a new resource for the TRIP service.

    1. Name the TRIP resource (TIVW2KV1 - Trip, in our example). Set the resource type to Generic Service.

    2. Select both nodes as possible owners.

    3. Select the cluster disk, cluster name and cluster IP as dependencies.

    4. Set the service name to "trip" and check the box Use network name for computer name.

    5. There is no registry setting required for the TRIP service.

  3. Create a new resource for the oserv service.

    1. Name the oserv resource (TIVW2KV1 - Oserv, in our example). Set the resource type to Generic Service.

    2. Select both nodes as possible owners.

    3. Select the cluster disk, cluster name, cluster IP and TRIP as dependencies.

    4. Set the service name to "oserv" and check the box Use network name for computer name.

    5. Set the registry key "SOFTWARE\Tivoli" as the key to replicate across nodes.

  4. Bring the cluster group online.
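
If you prefer to script this configuration, the same resources can be created with the cluster.exe command-line utility instead of the Cluster Administrator GUI. The following is a hedged sketch only: the group name TIVOLI GROUP and the resource names Disk X:, Cluster Name, and Cluster IP Address are assumptions based on our naming convention, and the exact cluster.exe switches can vary between Windows versions, so verify them with cluster res /? before use.

        rem Create the TRIP resource as a Generic Service and set its dependencies
        cluster res "TIVW2KV1 - Trip" /create /group:"TIVOLI GROUP" /type:"Generic Service"
        cluster res "TIVW2KV1 - Trip" /priv ServiceName=trip
        cluster res "TIVW2KV1 - Trip" /priv UseNetworkName=1
        cluster res "TIVW2KV1 - Trip" /adddep:"Disk X:"
        cluster res "TIVW2KV1 - Trip" /adddep:"Cluster Name"
        cluster res "TIVW2KV1 - Trip" /adddep:"Cluster IP Address"

        rem Create the oserv resource, which also depends on TRIP, and replicate the Tivoli registry key
        cluster res "TIVW2KV1 - Oserv" /create /group:"TIVOLI GROUP" /type:"Generic Service"
        cluster res "TIVW2KV1 - Oserv" /priv ServiceName=oserv
        cluster res "TIVW2KV1 - Oserv" /priv UseNetworkName=1
        cluster res "TIVW2KV1 - Oserv" /adddep:"Disk X:"
        cluster res "TIVW2KV1 - Oserv" /adddep:"Cluster Name"
        cluster res "TIVW2KV1 - Oserv" /adddep:"Cluster IP Address"
        cluster res "TIVW2KV1 - Oserv" /adddep:"TIVW2KV1 - Trip"
        cluster res "TIVW2KV1 - Oserv" /addcheckpoints:"SOFTWARE\Tivoli"

        rem Bring the whole group online
        cluster group "TIVOLI GROUP" /online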

TMR installation on node 1

The installation of a TMR server on an MSCS is very similar to a normal Tivoli Framework installation. In order to perform the installation, make sure that the Framework 4.1 Disk 1 is in the CD-ROM drive or has been copied locally.

  1. Start the installation by executing setup.exe advanced. Figure 5-33 illustrates how to initiate the setup using the Windows Run window.


    Figure 5-33: Start the installation using setup.exe

  2. After the installation is started, [advanced] is displayed after the word Welcome to confirm that you are in advanced mode. Click Next to continue (Figure 5-34 on page 502).


    Figure 5-34: Framework [advanced] installation screen

  3. The license agreement will be displayed; click Yes to accept and continue (Figure 5-35 on page 503).


    Figure 5-35: Tivoli License Agreement

  4. The next setup screen (Figure 5-36 on page 504) informs you that the tmersrvd account and the Tivoli_Admin_Privileges group will be created. If an Endpoint has already been installed on the machine, the account and group will already exist. Click Next to continue.


    Figure 5-36: Accounts and file permissions screen

  5. Now you need to enter the hostname of the virtual server where you want the TMR to be installed. The hostname that you enter here will override the default value of the local hostname.

    Make sure that the Services start automatically box remains unchecked; you will handle the services via the Cluster Administrator. Click Next to continue (Figure 5-37 on page 505).


    Figure 5-37: Framework hostname configuration

  6. You can now enter an installation password, if desired. An installation password must be entered to install Managed Nodes, create interregion connections, or install software using Tivoli Software Installation Service.

    An installation password is not required in this configuration. Click Next to continue (Figure 5-38 on page 506).


    Figure 5-38: Framework installation password

  7. Next you can specify a Tivoli Remote Access Account (TRAA). The TRAA is the user name and password that Tivoli will use to access remote file systems.

    This is not a required field and can be left blank. Click Next to continue (Figure 5-39 on page 507).


    Figure 5-39: Tivoli Remote Access Account (TRAA) setup

  8. You can now select from the different installation types. In our example, we show a Typical installation. For information about the other types of installations, refer to the Framework 4.1 documentation.

    You will want to change the location where the Tivoli Framework is installed. The installation defaults to C:\Program Files\Tivoli, so it needs to be changed to X:\Tivoli. To change the installation directory, click Browse.

    Use the Windows browser to select the correct location for the installation directory. In our example, the drive shared by the cluster is the X: drive.

    Make sure you select the shared cluster drive as the installation location on your system. After the installation directory has been set, click Next to move to the next step (Figure 5-40 on page 508).


    Figure 5-40: Framework installation type

  9. In the License key dialog (Figure 5-41), enter the following:

     IBMTIVOLIMANAGEMENTREGIONLICENSEKEY41 

Click Next to continue.


Figure 5-41: Framework license key setup

The setup program will ask you to review the settings that you have specified (Figure 5-42 on page 510).


Figure 5-42: Framework setting review

If settings need to be changed, you can modify them by clicking Back. After you are satisfied with the settings, click Next to continue.

  10. After the files have been copied, the oserv will be installed (see Figure 5-43). You will have to select the DOS window and press any key to continue the installation.


    Figure 5-43: Tivoli oserv service installation window

  11. The Framework installation is now complete on the first node. Click Finish to exit the installation wizard (Figure 5-44). If the installation prompts you to restart the computer, select the option to restart later. You will need to copy some files off node 1 prior to rebooting.


    Figure 5-44: Framework installation completion

TMR installation on node 2

The Tivoli Framework installation on the second node is not as straightforward as the installation on the first node. It consists of the following manual steps.

  1. Before you fail over the X: drive and start the installation on node 2, you need to copy %SystemRoot%\system32\drivers\etc\Tivoli and %SystemRoot%\system32\tivoliap.dll files from node 1.

    The easiest way to do this is to copy the files to the shared drive and simply move the drive. However, you can also copy the files from one machine to another. One way to copy the files is to open a DOS window and copy the files using the DOS commands; see Figure 5-45 on page 512.


    Figure 5-45: File copy output

    The commands are as follows:

        x:
        mkdir tmp
        xcopy /E c:\winnt\system32\drivers\etc\tivoli x:\tmp
        copy c:\winnt\system32\tivoliap.dll x:\

Figure 5-45 shows the output.

  2. After the files are copied, you can fail over the X: drive to node 2. You could do this manually by using the Cluster Administrator, but because node 1 must be restarted anyway to register tivoliap.dll, you can simply restart node 1 and the drive will fail over automatically.

    After node 1 has started to reboot, the X: drive should fail over to node 2. To continue the Framework installation on the node, you will need to open a DOS window on node 2.

    Create the c:\winnt\system32\drivers\etc\tivoli directory on node 2:

        mkdir c:\winnt\system32\drivers\etc\tivoli 

    This is shown in Figure 5-46 on page 513.


    Figure 5-46: Create the etc\tivoli directory on node 2

  3. Now you need to copy the Tivoli environment files from the X:\tmp directory to the c:\winnt\system32\drivers\etc\tivoli directory just created on node 2. To do this, execute:

        xcopy /E x:\tmp\* c:\winnt\system32\drivers\etc\tivoli 

    Figure 5-47 shows the output of this command.


    Figure 5-47: Copy the Tivoli environment files

  4. Source the Tivoli environment:

        c:\winnt\system32\drivers\etc\tivoli\setup_env.cmd

    Figure 5-48 on page 514 shows the output of this command.


    Figure 5-48: Source the Tivoli environment

  5. Now that the Tivoli environment is sourced, you can start configuring node 2 of the TMR. First you need to create the tmersrvd account and the Tivoli_Admin_Privileges group. To do this, execute the ntconfig.exe executable:

        %BINDIR%\TAS\Install\ntconfig -e 

    See Figure 5-49.


    Figure 5-49: Add the Tivoli account

  6. Copy tivoliap.dll from the X: drive to c:\winnt\system32:

        copy x:\tivoliap.dll c:\winnt\system32 

    The output is shown in Figure 5-50 on page 516.


    Figure 5-50: Copy the tivoliap.dll

  7. After tivoliap.dll has been copied, you can load it with the wsettap.exe utility:

        wsettap -a 

    A reboot will be required before the tivoliap.dll is completely loaded.


    Figure 5-51: Register the tivoliap.dll

  8. Install the Autotrace service. Framework 4.1 includes a new embedded Autotrace service for use by IBM Support. Autotrace uses shared memory segments for logging purposes.

    To install Autotrace:

        %BINDIR%\bin\atinstall --quietcopy %BINDIR%\bin


    Figure 5-52: Installing Autotrace

  9. Finally, you need to install and start the oserv service. To install the oserv service:

        oinstall -install %DBDIR%\oserv.exe 

    Figure 5-53 on page 519 shows the output of the command, indicating that oserv service has been installed.


    Figure 5-53: Create the oserv service

After the oserv service is installed, your setup of node 2 is complete. Now you need to restart node 2 to load tivoliap.dll.
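
Before moving on to the cluster resource configuration, it can be useful to confirm that the two services the cluster will manage are now registered on node 2. This is a quick, optional check, assuming TRIP was installed on node 2 as shown in the overview (trip -install -auto) and that sc.exe is available (it ships with Windows XP/2003 and with the Windows 2000 Resource Kit):

        rem Both services should be listed; leave them stopped - the cluster will start them
        sc query oserv
        sc query trip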

Setting up cluster resources

Now that the binaries are installed on both nodes of the cluster, you need to create the cluster resources: one for the oserv service and one for the TRIP service. Because the oserv service depends on the TRIP service, you need to create the TRIP resource first.

Create the resources using the Cluster Administrator.

  1. Open the Cluster Administrator by selecting Start -> Programs -> Administrative Tools -> Cluster Administrator.

  2. After the Cluster Administrator is open, you can create a new resource by right-clicking your cluster group and selecting New -> Resource, as shown in Figure 5-54 on page 520.


    Figure 5-54: Create a new resource

  3. Select the type of resource and add a name. You can name the resource however you would like. In our example, we chose TIVW2KV1 - TRIP, in order to adhere to our naming convention (see Figure 5-55 on page 521).


    Figure 5-55: Resource name and type setup

    The Description field is optional. Make sure that you change the resource type to a generic service, and that the resource belongs to the cluster group that contains the drive where the Framework was installed. Click Next to continue.

  4. Define which nodes can own the resource. Since you are configuring your TMR for a hot standby scenario, you need to ensure that both nodes are added as possible owners (see Figure 5-56 on page 522). Click Next to continue.


    Figure 5-56: Configure possible resource owners

  5. Define the dependencies for the TRIP service. On an MSCS, dependencies are defined as resources that must be active in order for another resource to run properly. If a dependency is not running, the cluster will fail over and attempt to start on the secondary node.

    To configure TRIP, you need to select the shared disk, the cluster IP, and the cluster name resources as dependencies, as shown in Figure 5-57 on page 523. Click Next to continue.


    Figure 5-57: TRIP dependencies

  6. Define which service is associated with your resource. The name of the Tivoli Remote Execution Service is "trip", so enter that in the Service name field. There are no start parameters.

    Make sure that the Use Network Name for computer name check box is selected (see Figure 5-58 on page 524). Click Next to continue.


    Figure 5-58: TRIP service name

  7. One of the options available with MSCS is to replicate registry keys between the nodes of a cluster. This option is not required for the TRIP service, but you will use it later when you create the oserv service.

    Click Finish to continue (see Figure 5-59 on page 525).


    Figure 5-59: Registry replication

    The resource has now been created. You will notice that when a resource is created, it is offline. This is normal. You will start the resources after the configuration is complete.

    Next, create the oserv cluster resource. You do this by using the same process used to create the TRIP resource.

  8. Open the Cluster Administrator, right-click your cluster group, and select New -> Resource, as shown in Figure 5-60 on page 526.


    Figure 5-60: Create a new resource

  9. Select a name for the resource. We used oserv in our example, as seen in Figure 5-61 on page 527. Add a description if desired.


    Figure 5-61: Resource name and type setup

    Make sure you specify the resource type to be a Generic Service. Click Next to continue.

  10. Select both nodes as owners for the oserv resource, as shown in Figure 5-62 on page 528. Click Next to continue.


    Figure 5-62: Select owners of the resource

  11. Select all the cluster resources in the cluster group as dependencies for the oserv resource, as seen in Figure 5-63 on page 529. Click Next to continue.


    Figure 5-63: Select resource dependencies

  12. Specify "oserv" as the service name. Make sure to check the box Use Network Name for computer name (see Figure 5-64 on page 530). Click Next to continue.


      Figure 5-64: Service and parameter setup

  13. Click Add and specify the registry key "SOFTWARE\Tivoli" as the key to replicate (see Figure 5-65 on page 531). Click Finish to complete the cluster setup.


      Figure 5-65: Registry replication

  14. At this point, the installation of Framework on an MSCS is almost complete. Now you have to bring the cluster resources online.

    To do this, right-click the cluster group and select Bring Online, as seen in Figure 5-66 on page 532.


    Figure 5-66: Bringing cluster resources online

The Framework service should now fail over whenever the cluster or one of the nodes fails.
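
To verify this behavior, you can move the Tivoli group to the other node and check that the oserv answers on the virtual name. The following is a minimal sketch; the group name TIVOLI GROUP is an assumption from our naming convention, and the /moveto switch of cluster.exe may differ slightly between Windows versions (check cluster group /?).

        rem Move the Tivoli group to the other node
        cluster group "TIVOLI GROUP" /moveto:"<other node name>"
        rem On the node that now owns the group, source the Tivoli environment and query the object dispatcher
        c:\winnt\system32\drivers\etc\tivoli\setup_env.cmd
        odadmin odlist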

5.2.2 Tivoli Managed Node

In this section, we cover the Managed Node Framework installation process on an MSCS. The Managed Node installation method we have chosen is via the Tivoli Desktop. However, the same concepts should apply for a Managed Node installed using Tivoli Software Installation Service (SIS), or using the wclient command. The following topics are covered in this section:

  • Installation overview - provides a brief overview of the steps required to install Tivoli Framework on an MSCS Managed Node

  • TRIP installation - describes the installation of the Tivoli Remote Execution Protocol (TRIP), which is a required prerequisite for Managed Node installation

  • Managed Node installation - covers the steps to install a Managed Node on an MSCS from the Tivoli Desktop

  • Managed Node configuration - covers the setup process on the second node, as well as configuring the oserv to bind to the cluster IP address

  • Cluster resource configuration - covers the cluster configuration, which consists of the setup of the oserv and TRIP resources

The Managed Node installation process has many installation steps in common with the installation of the TMR server. For these steps, we refer you back to the previous section for the installation details.

Installation overview

Here we give a brief outline of the Managed Node installation process on an MSCS system. The sections following describe the steps listed here in greater detail.

Figure 5-67 on page 534 illustrates the configuration we use in our example.


Figure 5-67: Tivoli setup

TRIP installation

To install TRIP, follow these steps:

  1. Insert Framework CD 2 in the CD-ROM drive and run setup.exe.

  2. Click Next at the welcome screen.

  3. Click Yes at the license agreement.

  4. Select a local installation directory to install TRIP (c:\tivoli\trip, in our example).

  5. Click Next to start copying files.

  6. Press any key after the TRIP service has been installed.

  7. Click Finish to complete the installation.

  8. Follow steps 1-7 again on node 2 so TRIP is installed on both nodes of the cluster.

Managed Node installation on node 1
  1. Open the Tivoli desktop and log in to the TMR that will own the Managed Node.

  2. Open a policy region where the Managed Node should reside and select Create -> ManagedNode.

  3. Click Add Clients and enter the name associated with the cluster group where the Managed Node will be installed (tivw2kv1, in our example).

  4. Click Select Media and browse to the location of Framework disc 1.

  5. Click Install Options and make sure that the installation directories are all located on the cluster's shared drive (X:\tivoli, in our example). Verify that Arrange for start of the Tivoli daemon at system (re)boot time is unchecked.

  6. Select Account as the default access method, and specify an account and password with administrator access to the Managed Node you are installing.

  7. Click Install & Close to start the installation.

  8. Click Continue Install at the Client Install screen.

  9. Specify a Tivoli Remote Access Account if necessary (in our example, we used the default access method option).

  10. Click Close at the reboot screen. You do not want to reboot at this time.

  11. Click Close after the Client Install window states that it has finished the client install.

Managed Node installation on node 2
  1. Copy tivoliap.dll from node 1 to node 2.

  2. Copy the %SystemRoot%\system32\drivers\etc\Tivoli directory from node 1 to node 2.

  3. Move the cluster group from node 1 to node 2.

  4. Source the Tivoli environment.

  5. Create the tivoli account by running %BINDIR%\TAS\Install\ntconfig -e.

  6. Load tivoliap.dll with the LSA by executing wsettap -a.

  7. Set up the TRAA account using wsettap.

  8. Install the Autotrace service: %BINDIR%\bin\atinstall --quietcopy %BINDIR%\bin.

  9. Install the object dispatcher: oinstall -install %DBDIR%\oserv.exe.

  10. Start the oserv service:

        net start oserv /-Nali /-k%DBDIR% /-b%BINDIR%\.. 

  11. Change the IP address of the Managed Node from the physical IP to the cluster IP address:

        odadmin odlist change_ip <dispatcher> <cluster ip> TRUE 

  12. Set the oserv to bind to a single IP:

        odadmin set_force_bind TRUE <dispatcher> 

Cluster resource configuration
  1. Open the Microsoft Cluster administrator.

  2. Create a new resource for the TRIP service.

    1. Name the TRIP resource (TIVW2KV1 - Trip, in our example). Set the resource type to Generic Service.

    2. Select both nodes as possible owners.

    3. Select the cluster disk, cluster name and cluster IP as dependencies.

    4. Set the service name to "trip" and check the box Use network name for computer name.

    5. There are no registry settings required for the TRIP service.

  3. Create a new resource for the oserv service.

    1. Name the oserv resource (TIVW2KV1 - Oserv, in our example). Set the resource type to Generic Service.

    2. Select both nodes as possible owners.

    3. Select the cluster disk, cluster name and cluster IP as dependencies.

    4. Set the service name to "oserv" and check the box Use network name for computer name.

    5. Set the registry key "SOFTWARE\Tivoli" as the key to replicate across nodes.

  4. Bring the cluster group online.

TRIP installation

Tivoli Remote Execution Service (TRIP) must be installed before installing a Tivoli Managed Node. Install TRIP as follows:

  1. Insert Tivoli Framework CD 2 in the CD-ROM drive of node 1 and execute the setup.exe found in the TRIP directory (see Figure 5-68 on page 537).


    Figure 5-68: Start TRIP installation

  2. Click Next past the installation Welcome screen (Figure 5-69).


    Figure 5-69: TRIP Welcome screen

  3. Click Yes at the License agreement (see Figure 5-70 on page 538).


    Figure 5-70: The TRIP license agreement

  4. Select the desired installation directory. We used the local directory c:\tivoli, as shown in Figure 5-71 on page 539. Click Next to continue.


    Figure 5-71: Installation directory configuration

  5. Click Next to start the installation (see Figure 5-72 on page 540).


    Figure 5-72: Installation confirmation

  6. Press any key after the TRIP service has been installed and started (Figure 5-73).


    Figure 5-73: TRIP installation screen

  7. Click Finish to complete the installation (see Figure 5-74 on page 541).


    Figure 5-74: TRIP installation completion

  8. Repeat the TRIP installation steps 1-7 on node 2.

Managed Node installation on node 1

In this section we describe the steps needed to install the Managed Node software on node 1 of the cluster. The Managed Node software will be installed on the cluster's shared drive X:, so you need to make sure that node 1 is the owner of the resource group that contains the X: drive.

We will be initiating the installation from the Tivoli Desktop, so log in to the TMR (edinburgh).

  1. After you are logged in to the TMR, navigate to a policy region where the Managed Node will reside and click Create -> ManagedNode (see Figure 5-75 on page 542).


    Figure 5-75: ManagedNode installation

  2. Click the Add Clients button and enter the virtual name of the cluster group; in our case, it is tivw2kv1. Click Add & Close (Figure 5-76).


    Figure 5-76: Add Clients dialog

  3. Insert the Tivoli Framework CD 1 in the CD-ROM drive on the TMR server and click Select Media....

    Navigate to the directory where the Tivoli Framework binaries are located on the CD-ROM. Click Set Media & Close (Figure 5-77).


    Figure 5-77: Tivoli Framework installation media

  4. Click Install Options.... Set all installation directories to the shared disk (X:). Make sure you check the boxes When installing, create "Specified Directories" if missing and Configure remote start capability of the Tivoli daemon.

    Do not check the box Arrange for start of the Tivoli daemon at system (re)boot time. Let the cluster service handle the oserv service. Click Set to continue (see Figure 5-78 on page 544).


    Figure 5-78: Tivoli Framework installation options

  5. You need to specify the account that Tivoli will use to perform the installation on the cluster. Since you are only installing one Managed Node at this time, use the default access method.

    Make sure the Account radio button is selected, then enter the userid and password of an account on node 1 with administrative rights on the machine. If a TMR installation password is used on your TMR, enter it now. Click Install & Close (see Figure 5-79 on page 545).


    Figure 5-79: Specify a Tivoli access account

  6. Now the Tivoli installation program will attempt to contact the Managed Node and query it to see what needs to be installed. You should see output similar to Figure 5-80 on page 546.


    Figure 5-80: Client installation screen

  7. If there are no errors, then click Continue Install to begin the installation; see Figure 5-80 on page 546.

  8. If your environment requires the use of a Tivoli Remote Access Account (TRAA), then specify the account here. In our example we selected Use Installation 'Access Method' Account for our TRAA account.

    Click Continue (see Figure 5-81 on page 547).


    Figure 5-81: Tivoli Remote Access Account (TRAA) setup

  9. Select Close at the client reboot window (Figure 5-82). You do not want your servers to reboot until after you have configured them.


    Figure 5-82: Managed Node reboot screen

  10. The binaries will now start to copy from the TMR server to the Managed Node. The installation may take a while, depending on the speed of your network and the type of machines where you are installing the Managed Node software.

    After the installation is complete, you should see the following message at the bottom of the scrolling installation window: Finished client install.

    Click Close to complete the installation (Figure 5-83).


    Figure 5-83: Managed Node installation window
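
At this point it is worth confirming from the TMR that the new Managed Node object was created under the virtual name. A quick check using standard Framework commands, run in a sourced Tivoli environment on the TMR:

        rem The virtual name (tivw2kv1, in our example) should appear in the list of Managed Nodes
        wlookup -ar ManagedNode
        rem The dispatcher list shows the dispatcher number and the IP address currently registered for it
        odadmin odlist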

Managed Node installation on node 2

Now you need to replicate manually on node 2 what the Tivoli installation performed on node 1. Because steps 1 to 9 of the Managed Node configuration are the same as the TMR installation of node 2 (see 5.2.1, "TMR server" on page 499), we do not cover those steps in great detail here.

  1. Copy the tivoliap.dll from node 1 to node 2.

  2. Copy the %SystemRoot%\system32\drivers\etc\Tivoli directory from node 1 to node 2.

  3. Move the cluster group from node 1 to node 2.

  4. Source the Tivoli environment on node 2.

  5. Create the tivoli account by running %BINDIR%\TAS\Install\ntconfig -e.

  6. Load the tivoliap.dll with the LSA by executing wsettap -a.

  7. Set up the TRAA account by using wsettap.

  8. Install the Autotrace service: %BINDIR%\bin\atinstall --quietcopy %BINDIR%\bin.

  9. Install the object dispatcher: oinstall -install %DBDIR%\oserv.exe.

  10. Start the oserv service:

        net start oserv /-Nali /-k%DBDIR% /-b%BINDIR%\..


    Figure 5-84: Starting the oserv service

  11. Change the IP address of the Managed Node from the physical IP to the cluster IP address:

        odadmin odlist change_ip <dispatcher> <cluster ip> TRUE 

  12. Set the oserv to bind to a single IP address:

     odadmin set_force_bind TRUE <dispatcher> 


    Figure 5-85: Configure Managed Node IP address

  13. Restart both systems to register tivoliap.dll.
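
Before restarting, you may want to confirm that the changes from steps 11 and 12 took effect. A minimal check using standard Framework commands, run in the sourced Tivoli environment on the Managed Node:

        rem The entry for this dispatcher should now show the cluster IP address
        odadmin odlist
        rem odinfo reports the local oserv settings, including the bind configuration
        odadmin odinfo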

Cluster resource configuration

The steps needed for cluster resource configuration here are the same as for the cluster resource configuration of a TMR as discussed in 5.2.1, "TMR server" on page 499, so refer to that section for detailed information. In this section, we simply guide you through the overall process.

  1. Open the Microsoft Cluster administrator.

  2. Create a new resource for the TRIP service.

    1. Name the TRIP resource (TIVW2KV1 - Trip, in our example). Set the resource type to Generic Service.

    2. Select both nodes as possible owners.

    3. Select the cluster disk, cluster name and cluster IP as dependencies.

    4. Set the service name to "trip" and check the box Use network name for computer name.

    5. There are no registry settings required for the TRIP service.

  3. Create a new resource for the oserv service.

    1. Name the oserv resource (TIVW2KV1 - Oserv, in our example). Set the resource type to Generic Service.

    2. Select both nodes as possible owners.

    3. Select the cluster disk, cluster name and cluster IP as dependencies.

    4. Set the service name to "oserv" and check the box Use network name for computer name.

    5. Set the registry key "SOFTWARE\Tivoli" as the key to replicate across nodes.

  4. Bring the cluster group online.

5.2.3 Tivoli Endpoints

In this section we provide a detailed overview describing how to install multiple Tivoli Endpoints (TMAs) on a Microsoft Cluster Service (MSCS). The general requirements for this delivery are as follows:

  • Install a Tivoli Endpoint on each physical server in the cluster.

  • Install a Tivoli Endpoint on a resource group in the cluster ("Logical Endpoint"). This Endpoint will have the hostname and IP address of the virtual server.

  • The Endpoint resource will roam with the cluster resources. During a failover, the cluster services will control the startup and shutdown of the Endpoint.

The purpose of this section is to clearly document what has been put in place (or implemented) by IBM/Tivoli Services, and to provide detailed documentation of custom configurations, installation procedures, and information that is generally not provided in user manuals. This information is intended as a starting point for troubleshooting, extending the current implementation, and documenting further work.

Points to consider

Note the following points regarding IBM's current solution for managing HA cluster environments for Endpoints.

  • The Endpoint for the physical nodes, representing the physical characteristics ("Physical Endpoint"):

    • Always stays at the local system

    • Does not fail over to the alternate node in the cluster

    • Monitors only the underlying infrastructure

  • The Endpoint for every cluster resource group representing the logical characteristics ("Logical Endpoint"):

    • Moves together with the cluster group

    • Stops and starts under control of HA

    • Monitors only the application components within the resource group

  • Several limitations apply (for instance, Endpoints have different labels and listen on different ports)

  • Platforms

    • Solaris, AIX, HP-UX, Windows NT, Windows 2000

    • Platform versions as supported by our products today

Installation and configuration

The complete solution for managing and monitoring the MSCS involves installing three Tivoli Endpoints across the two physical servers. One "Physical Endpoint" will reside on each server, while the third Endpoint will run wherever the cluster resource group is running. For example, if node 1 is the active node (that is, it owns the cluster group), it will also be running the "Logical Endpoint" alongside its own Endpoint (see Figure 5-86).


Figure 5-86: Endpoint overview

An Endpoint is installed on each node to manage the physical components, and we call this the "Physical Endpoint". This Endpoint is installed on the local disk of the system using the standard Tivoli mechanism. This Endpoint is installed first, so its instance id is "1" on both physical servers (for example, \Tivoli\lcf\dat\1).

A second Endpoint instance (its instance id is "2") is installed on the shared file system. This Endpoint represents the application that runs on the cluster, and we call it the "Logical Endpoint". The Endpoints do not share any path, cache, or content; their disk layouts are completely separate.

The Logical Endpoint will have an Endpoint label that is different from the physical Endpoint and will be configured to listen on a different port than the physical Endpoint.
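
To make the separation concrete, the two Endpoint instances on the active node live in completely separate directory trees. A quick way to see this from a command prompt (the paths are from our example, assuming the default local installation directory):

        rem Physical Endpoint - instance id 1, on the local disk of each node
        dir "c:\Program Files\Tivoli\lcf\dat\1"
        rem Logical Endpoint - instance id 2, on the shared cluster drive, moving with the cluster group
        dir X:\Tivoli\lcf\dat\2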

The general steps to implementing this configuration are as follows:

  1. Install the Tivoli Endpoint on node 1, local disk.

  2. Install the Tivoli Endpoint on node 2, local disk.

  3. Manually install the Tivoli Endpoint on the logical server, shared drive X: (while logged onto the currently active cluster node).

  4. Configure the new LCFD service as a "generic service" in the cluster group (using the Cluster Administrator).

  5. Move the cluster group to node 2 and register the new LCFD service on this node by using the lcfd.exe -i command (along with other options).

Environment preparation and configuration

Before beginning the installation, make sure there are no references to "lcfd" in the Windows Registry. Remove any references to previously installed Endpoints, or you may run into problems during the installation.

Note 

This is very important to the success of the installation. If there are any references (typically legacy_lcfd), you will need to delete them using regedt32.exe.

Verify that you have two-way communication between the cluster server and the Tivoli Gateways, by both hostname and IP address. Do this by updating your name resolution system (DNS, hosts files, and so on). We strongly recommend that you enter the hostname and IP address of the logical node in the hosts file of each physical node. This locally resolves the logical server's hostname when you issue the ping -a command.
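
For example, the hosts file entry and check might look like the following sketch; the address and hostname are placeholders, so substitute the values for your virtual server.

        rem Add a line like this to %SystemRoot%\system32\drivers\etc\hosts on each physical node:
        rem     <virtual ip address>    <virtual hostname>
        rem Then verify that the address resolves locally:
        ping -a <virtual ip address>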

Finally, note that this solution works only with version 96 and higher of the Tivoli Endpoint.

Install the Tivoli Endpoint on node 1

To install the Tivoli Endpoint on node 1, follow these steps:

  1. Install the Tivoli Endpoint using the standard CD InstallShield setup program on one of the physical nodes in the cluster.

  2. In our case, we leave the ports at their defaults, but enter optional configuration arguments to configure the Endpoint and ensure that it logs in properly.


    Figure 5-87: Endpoint advanced configuration

    The configuration arguments in the Other field are:

     -n <ep label> -g <preferred gw> -d3 -D local_ip_interface=<node primary IP> -D bcast_disable=1 

  3. The Endpoint should install successfully and log in to the preferred Gateway. You can verify the installation and login by issuing the following commands on the TMR or Gateway (Figure 5-88 on page 555).


    Figure 5-88: Endpoint login verification
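
    The exact commands shown in Figure 5-88 are not reproduced here; the following is a minimal sketch of the standard Framework commands typically used for this kind of check, run from the TMR server or the Gateway:

        rem List the Endpoints known to the region - the new Endpoint label should appear
        wep ls
        rem Query the status of the new Endpoint (use the label that was given with -n)
        wep <ep label> status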

Install the Tivoli Endpoint on node 2

To install the Tivoli Endpoint on node 2, follow these steps:

  1. Install the Tivoli Endpoint on the physical node 2 in the cluster. Follow the same steps and options as in node 1 (refer to "Install the Tivoli Endpoint on node 1" on page 554).

  2. Verify that the installation is successful and that the Endpoint logs in, as described for node 1.

Manually install the Tivoli Endpoint on the virtual node

To install the Tivoli Endpoint on the virtual node, follow these steps.

Note 

You will only be able to do this from the active cluster node, because the inactive node will not have access to the shared X: drive.

  1. On the active node, copy only the Tivoli installation directory (c:\Program Files\Tivoli) to the root of the X: drive. Rename X:\Tivoli\lcf\dat\1 to X:\Tivoli\lcf\dat\2.

    Note 

    Do not use the "Program Files" naming convention on the X: drive.

  2. Edit the X:\Tivoli\lcf\dat\2\last.cfg file, changing all of the references of c:\Program Files\Tivoli\lcf\dat\1 to X:\Tivoli\lcf\dat\2.

  3. On both physical node 1 and physical node 2, copy the c:\winnt\Tivoli\lcf\1 directory to c:\winnt\Tivoli\lcf\2.

  4. On both physical node 1 and physical node 2, edit the c:\winnt\Tivoli\lcf\2\lcf_env.cmd and lcf_env.sh files, replacing all references to c:\Program Files\Tivoli\lcf\dat\1 with X:\Tivoli\lcf\dat\2.

  5. Remove the lcfd.id, lcfd.sh, lcfd.log, lcfd.bk and lcf.dat files from the X:\Tivoli\lcf\dat\2 directory.

  6. Add or change the entries listed in Example 5-50 to the X:\Tivoli\lcf\dat\2\last.cfg file.

    Example 5-50: X:\Tivoli\lcf\dat\2\last.cfg file

     lcfd_port=9497
     lcfd_preferred_port=9497
     lcfd_alternate_port=9498
     local_ip_interface=<IP of the virtual cluster>
     lcs.login_interfaces=<gw hostname or IP>
     lcs.machine_name=<hostname of virtual cluster>
     UDP_interval=30
     UDP_attempts=3
     login_interval=120

    The complete last.cfg file should resemble the output shown in Figure 5-89 on page 557.


    Figure 5-89: Sample last.cfg file

  7. Execute the following command:

     X:\Tivoli\lcf\bin\w32-ix86\mrt\lcfd.exe -i -n <virtual_name> -C X:\Tivoli\lcf\dat\2 -P 9497 -g <gateway_label> -D local_ip_interface=<virtual_ip_address> 

    Note 

    The IP address and name are largely irrelevant as long as a unique label is specified with -n. Every time the Endpoint logs in, the Gateway registers the IP address that contacted it, and it uses that address from that point forward for downcalls.

    A single interface cannot be bound on machines with multiple interfaces, so the routing must be correct; otherwise, with every upcall generated, or every time the Endpoint starts, the registered IP address will change if it differs from the one the Gateway sees.

    However, if the Endpoint is routing out of an interface that is not reachable by the Gateway, then all downcalls will fail, even though the Endpoint logged in successfully. This will obviously cause some problems with the Endpoint.

  8. Set the Endpoint Manager login_interval to a smaller value; the default is 270, and we use 20. Run the following command on the TMR:

     wepmgr set login_interval 20 

Set up physical node 2 to run the Logical Endpoint

To set up the physical node 2 to run the Logical Endpoint, follow these steps:

  1. Move the cluster group containing the X: drive to node 2, using the Cluster Administrator.

  2. On node 2, which is now the active node (the node on which you have not yet registered the logical Endpoint), open a command prompt window and run the following command again to create and register the lcfd-2 service on this machine:

     X:\Tivoli\lcf\bin\w32-ix86\mrt\lcfd.exe -i -n <virtual_name> -C X:\Tivoli\lcf\dat\2 -P 9497 -g <gateway_label> -D local_ip_interface=<virtual_ip_address> 

    The output listed in Figure 5-90 is similar to what you should see.


    Figure 5-90: Registering the lcfd service

  3. Verify that the new service was installed correctly by viewing the services list (use the net start command or Control Panel -> Services). Also view the new registry entries using the Registry Editor. You will see two entries for the lcfd service, "lcfd" and "lcfd-2", as shown in Figure 5-91 on page 559.


    Figure 5-91: lcfd and lcfd-2 services in the registry

  4. Verify that the Endpoint successfully started and logged into the Gateway/TMR and that it is reachable (Figure 5-92).


    Figure 5-92: Endpoint login verification

Configure the cluster resources for the failover

To configure the cluster resources for the failover, follow these steps:

  1. Add a new resource to the cluster.

  2. Log on to the active cluster node and start the Cluster Administrator, using the virtual IP address or hostname.

  3. Click Resource, then right-click in the right-pane and select New -> Resource (Figure 5-93).


    Figure 5-93: Add a new cluster resource

  4. Fill in the information as shown in the next dialog (see Figure 5-94 on page 561).


    Figure 5-94: Name and resource type configuration

  5. Select both TIVW2KV1 and TIVW2KV2 as possible owners of the cluster Endpoint resource (see Figure 5-95 on page 562).


    Figure 5-95: Possible Owners

  6. Move all available resources to the "Resource dependencies" box (see Figure 5-96 on page 563).


    Figure 5-96: Dependency configuration

  7. Enter the new service name of the Endpoint just installed (see Figure 5-97 on page 564).


    Figure 5-97: Add lcfd-2 service name

  8. Click Next past the registry replication screen (see Figure 5-98 on page 565). No registry replication is required.


    Figure 5-98: Registry replication

  9. Click Next at the completion dialog (Figure 5-99).


    Figure 5-99: Completion dialog

  10. Bring the new service resource online by right-clicking the resource and selecting Bring Online (Figure 5-100 on page 566). You will see the resource icon first change to show a clock, and then it will come online and display the standard icon indicating that it is online.


    Figure 5-100: Bring resource group online

  11. Test the failover of the Cluster Endpoint service as follows (a command-line sketch follows these steps):

    1. Move the resource group from one server to the other, using the Cluster Administrator.

    2. After the resource group has been moved, log into the new active server and verify that the Endpoint service "Tivoli Endpoint-1" is running alongside the physical server's Endpoint "Tivoli Endpoint".

    3. Fail over again and repeat the verification.
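
    A minimal command-line sketch of this test follows. The group name TIVOLI GROUP is an assumption from our naming convention, the /moveto switch of cluster.exe may vary slightly between Windows versions, and the wep check is run from the TMR or a Gateway.

        rem Move the resource group that contains the logical Endpoint to the other node
        cluster group "TIVOLI GROUP" /moveto:TIVW2KV2
        rem On the new active node, confirm that both Endpoint services are running
        net start | find "Tivoli Endpoint"
        rem From the TMR or a Gateway, confirm that the logical Endpoint is still reachable
        wep <virtual ep label> status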


