4.2 Implementing IBM Tivoli Workload Scheduler in a Microsoft Cluster


In this section, we describe how to implement a Tivoli Workload Scheduler engine in a Microsoft Cluster using Microsoft Cluster Service. We cover both a single installation of Tivoli Workload Scheduler, and two copies of Tivoli Workload Scheduler in a mutual takeover scenario. We do not cover how to perform patch upgrades.

For more detailed information about installing IBM Tivoli Workload Scheduler on a Windows platform, refer to IBM Tivoli Workload Scheduler Planning and Installation Guide Version 8.2, SC32-1273.

4.2.1 Single instance of IBM Tivoli Workload Scheduler

Figure 4-71 on page 348 shows two Windows 2000 systems in a Microsoft Cluster. In the center of this cluster is a shared disk volume, configured in the cluster as volume X, where we intend to install the Tivoli Workload Scheduler engine.


Figure 4-71: Network diagram of the Microsoft Cluster

Once the cluster is set up and configured properly, as described in 3.3, "Implementing a Microsoft Cluster" on page 138, you can install the IBM Tivoli Workload Scheduler software in the shared disk volume X:.

The following steps will guide you through a full installation.

  1. Ensure you are logged on as the local Administrator.

  2. Ensure that the shared disk volume X: is owned by System 1 (tivw2k1) and that it is online. To verify this, open the Cluster Administrator, as shown in Figure 4-72 on page 349.


    Figure 4-72: Cluster Administrator

  3. Insert the IBM Tivoli Workload Scheduler Installation Disk 1 into the CD-ROM drive.

  4. Change directory to the Windows folder and run the setup program, which is the SETUP.exe file.

  5. Select the language in which you want the wizard to be displayed, and click OK as seen in Figure 4-73.


    Figure 4-73: Installation-Select Language

  6. Read the welcome information and click Next, as seen in Figure 4-74.


    Figure 4-74: Installation-Welcome Information

  7. Read the license agreement, select the acceptance radio button, and click Next, as seen in Figure 4-75.


    Figure 4-75: Installation-License agreement

  8. The Install a new Tivoli Workload Scheduler Agent option is selected by default. Click Next, as seen in Figure 4-76.


    Figure 4-76: Installation-Install new Tivoli Workload Scheduler

  9. Specify the IBM Tivoli Workload Scheduler user name. Spaces are not permitted.

    On Windows systems, if this user account does not already exist, it is automatically created by the installation program.

    Note the following:

    • If you specify a domain user, specify the name as domain_name\user_name.

    • If you specify a local user with the same name as a domain user, the local user must first be created manually by an Administrator and then specified as system_name\user_name.

    Also, type and confirm the password.

    Click Next, as seen in Figure 4-77.


    Figure 4-77: Installation user information

  10. If you specified a user name that does not already exist, an information panel is displayed about extra rights that need to be applied. Review the information and click Next.

  11. Specify the installation directory under which the product will be installed.

    The directory cannot contain spaces. On Windows systems only, the directory must be located on an NTFS file system. If desired, click Browse to select a different destination directory, and click Next as shown in Figure 4-78.


    Figure 4-78: Installation install directory

  12. Select the Custom install option and click Next, as shown in Figure 4-79.


    Figure 4-79: Type of Installation

    This option will allow the custom installation of just the engine and not the Framework or any other features.

  13. Select the type of IBM Tivoli Workload Scheduler workstation you would like to install (Master Domain Manager, Backup Master, Fault Tolerant Agent or a Standard Agent), as this installation will only install the parts of the code needed for each configuration.

    If needed, you are able to promote the workstation to a different type of IBM Tivoli Workload Scheduler workstation using this installation program.

    Select Master Domain Manager and click Next, as shown in Figure 4-80.


    Figure 4-80: Type of IBM Tivoli Workload Scheduler workstation

  14. Type in the following information and then click Next, as shown in Figure 4-81:

    1. Company Name as you would like it to appear in program headers and reports. This name can contain spaces provided that the name is not enclosed in double quotation marks (").

    2. The IBM Tivoli Workload Scheduler 8.2 name for this workstation. This name cannot exceed 16 characters, cannot contain spaces, and it is not case sensitive.

    3. The TCP port number used by the instance being installed. It must be a value in the range 1–65535. The default is 31111.


    Figure 4-81: Workstation information

  15. In this dialog box you are allowed to select the Tivoli Plus Module and/or the Connector. In this case we do not need these options, so leave them blank and click Next, as shown in Figure 4-82.


    Figure 4-82: Extra optional features

  16. In this dialog box, as shown in Figure 4-83, you have the option of installing additional languages.


    Figure 4-83: Installation of Additional Languages

    We did not select any additional languages to install at this stage, since this requires that the Tivoli Management Framework 4.1 Language CD-ROM be available, in addition to the Tivoli Framework 4.1 Installation CD-ROM, during the installation phase.

  17. Review the installation settings and then click Next, as shown in Figure 4-84.


    Figure 4-84: Review the installation

  18. A progress bar indicates that the installation has started, as shown in Figure 4-85.


    Figure 4-85: IBM Tivoli Workload Scheduler Installation progress window

  19. After the installation is complete a final summary panel will be displayed, as shown in Figure 4-86. Click Finish to exit the setup program.


    Figure 4-86: Completion of a successful install

  20. Now that the installation is completed on one side of the cluster (System 1), you have to make sure the registry entries are updated on the other side of the cluster pair. The easiest way to do this is to remove the software just installed on System 1 (tivw2k1), as follows:

    1. Make sure that all the services are stopped by opening the Services screen. Go to Start -> Settings -> Control Panel, then open Administrative Tools -> Services. Verify that the Tivoli Netman, Tivoli Token Service, and Tivoli Workload Scheduler services are not running.

    2. Using Windows Explorer, go to the IBM Tivoli Workload Scheduler installation directory x:\win32app\TWS\TWS82 and remove all files and directories in this directory.
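
    As an alternative to the GUI, the same cleanup (stopping the services and removing the files) can be done from a command prompt. The following is only a sketch: it assumes the service names shown later in these steps (tws_maestro_tws8_2, tws_netman_tws8_2, and tws_tokensrv_tws8_2) and the x:\win32app\TWS\TWS82 installation path used in this example.

     rem Stop the three Tivoli Workload Scheduler services (ignore errors if they are already stopped)
     net stop tws_maestro_tws8_2
     net stop tws_netman_tws8_2
     net stop tws_tokensrv_tws8_2
     rem Remove the files installed on the shared volume
     rmdir /s /q x:\win32app\TWS\TWS82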

    3. Use the Cluster Administrator to verify that the shared disk volume X: is owned by System 2 (tivw2k2), and is online. Open Cluster Administrator, as shown in Figure 4-87.


      Figure 4-87: Cluster Administrator

  21. Now install IBM Tivoli Workload Scheduler on the second system by repeating steps 3 through 18.

  22. To complete the IBM Tivoli Workload Scheduler installation, you will need to add an IBM Tivoli Workload Scheduler user to the database. The installation process should have created one for you, but we suggest that you verify that the user exists by running the composer program, as shown in Example 4-87.

    Example 4-87: Check the user creation

    start example
     C:\win32app\TWS\maestro82\bin>composer
     TWS for WINDOWS NT/COMPOSER 8.2 (1.18.2.1)
     Licensed Materials Property of IBM
     5698-WKB (C) Copyright IBM Corp 1998,2001
     US Government User Restricted Rights
     Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
     Installed for user ''.
     Locale LANG set to "en"
     -display users tws82#@
     CPU id.           User Name
     ----------------  ---------------------------------------------
     TWS82             gb033984
     USERNAME TWS82#gb033984
     PASSWORD "***************"
     END
     AWSBIA251I Found 1 users in @.
     -
    end example

    If the user exists in the database, then you will not have to do anything.
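
    If the user is not in the database, you can add it with composer. The following is only a sketch; the definition file name (user.txt) is arbitrary, and the Windows account is the one shown in Example 4-87, so substitute your own installation account and password.

     rem user.txt contains a user definition in the following form:
     rem   USERNAME TWS82#gb033984
     rem   PASSWORD "account_password"
     rem   END
     composer add user.txt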

  23. Next you need to modify the workstation definition. You can modify this by running the composer modify cpu=TWS82 command. This will display the workstation definition that was created during the IBM Tivoli Workload Scheduler installation in an editor.

    The only parameter you will have to change is the argument Node; it will have to be changed to the IP address of the cluster. Table 4-5 lists and describes the arguments.

    Table 4-5: IBM Tivoli Workload Scheduler workstation definition

    cpuname (TWS82): Type in a workstation name that is appropriate for this workstation. Workstation names must be unique, and cannot be the same as workstation class and domain names.

    Description (Master CPU): Type in a description that is appropriate for this workstation.

    OS (WNT): Specifies the operating system of the workstation. Valid values include UNIX, WNT, and OTHER.

    Node (9.3.4.199): This field is the address of the cluster. This address can be a fully-qualified domain name or an IP address.

    Domain (Masterdm): Specify a domain name for this workstation. The default name is MASTERDM.

    TCPaddr (31111): Specifies the TCP port number that is used for communications. The default is 31111. If you have two copies of TWS running on the same system, then the port numbers must be different.

    For Maestro (no value): This is a key word that starts the extra options for the workstation.

    Autolink (On): When set to ON, this specifies whether to open the link between workstations at the beginning of each day during startup.

    Resolvedep (On): With this set to ON, this workstation will track dependencies for all jobs and job streams, including those running on other workstations.

    Fullstatus (On): With this set to ON, this workstation will be updated with the status of jobs and job streams running on all other workstations in its domain and in subordinate domains, but not on peer or parent domains.

    End (no value): This is a key word that ends the workstation definition.

    Figure 4-88 illustrates the workstation definition.


    Figure 4-88: IBM Tivoli Workload Scheduler Workstation definition
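
    For reference, after the Node argument has been changed, the definition should look roughly like the following. This is only a sketch built from the values in Table 4-5; the exact layout produced by your installation may differ slightly.

     CPUNAME TWS82
       DESCRIPTION "Master CPU"
       OS WNT
       NODE 9.3.4.199
       TCPADDR 31111
       DOMAIN MASTERDM
       FOR MAESTRO
         AUTOLINK ON
         FULLSTATUS ON
         RESOLVEDEP ON
     END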

  24. After the workstation definition has been modified, you can add the FINAL job stream definition to the database; this job stream runs the script that creates the next day's production day file. To do this, log in as the IBM Tivoli Workload Scheduler installation user and run this command:

     Maestrohome\bin\composer add Sfinal 

    This will add the job and job stream definitions to the database.

  25. While still logged in as the IBM Tivoli Workload Scheduler installation user, run the batch file Jnextday:

     Maestrohome\Jnextday 

    Verify that Jnextday has worked correctly by running the conman program:

     Maestrohome\bin\conman 

    In the output, shown in Example 4-88, you should see in the conman header "Batchman Lives", which indicates that IBM Tivoli Workload Scheduler is installed correctly and is up and running.

    Example 4-88: Header output for conman

    start example
     x:\win32app\TWS\TWS82\bin>conman
     TWS for WINDOWS NT/CONMAN 8.2 (1.36.1.7)
     Licensed Materials Property of IBM
     5698-WKB (C) Copyright IBM Corp 1998,2001
     US Government User Restricted Rights
     Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
     Installed for user ''.
     Locale LANG set to "en"
     Schedule (Exp) 06/11/03 (#1) on TWS82.  Batchman LIVES.  Limit: 10, Fence: 0, Audit Level: 0
     %
    end example

  26. When a new workstation is created in an IBM Tivoli Workload Scheduler distributed environment, you need to set the workstation limit of concurrent jobs because the default value is set to 0, which means no jobs will run. To change the workstation limit from 0 to 10, enter the following command:

     Maestrohome\bin\conman limit cpu=tws82;10 

    Verify that the command has worked correctly by running the conman show cpus command:

     Maestrohome\bin\conman sc=tws82 

    The conman output, shown in Example 4-89, contains the number 10 in the LIMIT column, indicating that the command has worked correctly.

    Example 4-89: conman output

    start example
     C:\win32app\TWS\maestro82\bin>conman sc=tws82
     TWS for WINDOWS NT/CONMAN 8.2 (1.36.1.7)
     Licensed Materials Property of IBM
     5698-WKB (C) Copyright IBM Corp 1998,2001
     US Government User Restricted Rights
     Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
     Installed for user ''.
     Locale LANG set to "en"
     Schedule (Exp) 06/11/03 (#1) on TWS82.  Batchman LIVES.  Limit: 10, Fence: 0, Audit Level: 0
     sc=tws82
     CPUID      RUN  NODE          LIMIT FENCE    DATE    TIME  STATE  METHOD  DOMAIN
     TWS82        1  *WNT  MASTER     10     0  06/11/03 12:08  I J            MASTERDM
    end example

  27. Before you configure IBM Tivoli Workload Scheduler in the cluster services, you need to set the three IBM Tivoli Workload Scheduler services to manual startup. Do this by opening the Services screen.

    Go to Start -> Settings -> Control Panel and open Administrative Tools -> Services. Change Tivoli Netman, Tivoli Token Service, and Tivoli Workload Scheduler to manual startup.
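
    If you prefer the command line, the startup type can also be changed with the sc utility, assuming sc.exe is available on the node (it is included with the Windows 2000 Resource Kit and with later versions of Windows). Note that sc requires a space after start=.

     sc config tws_tokensrv_tws8_2 start= demand
     sc config tws_netman_tws8_2 start= demand
     sc config tws_maestro_tws8_2 start= demand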

  28. Now you can configure IBM Tivoli Workload Scheduler in the cluster services by creating a new resource for each of the three IBM Tivoli Workload Scheduler services: Tivoli Netman, Tivoli Token Service, and Tivoli Workload Scheduler. These three new resources have to be created in the same Cluster Services Group as the IBM Tivoli Workload Scheduler installation drive. In this case we used the X: drive, which belongs to cluster group Disk Group1.

  29. First create the new resource Tivoli Token Service, as shown in Figure 4-89.


    Figure 4-89: New Cluster resource

  30. Fill in the first screen (Figure 4-90) as follows, and then click Next:

    Name

    Enter the name you want to use for this resource, such as "Tivoli Token Service".

    Description

    Enter a description of this resource, such as "Tivoli Token Service".

    Resource type

    The resource type of service for "Tivoli Token Service". Select Generic Service.

    Group

    Select the group in which you want to create this resource. It must be created in the same group as any dependencies (such as the installation disk drive or network).


    Figure 4-90: Resource values

  31. Now you need to select the possible nodes that this resource can run on. In this case, select both nodes as shown in Figure 4-91. Then click Next.


    Figure 4-91: Node selection for resource

  32. Select all the dependencies that you would like this resource (Tivoli Token Service) to be dependent on.

    In this case, you need the disk, network and IP address to be online before you can start the Tivoli Token Service as shown in Figure 4-92. Then click Next.


    Figure 4-92: Dependencies for this resource

  33. Add in the service parameters for the service "Tivoli Token Service", then click Next, as shown in Figure 4-93.


    Figure 4-93: Resource parameters

    Service name

    To find the service name, open the Windows services panel; go to Start -> Settings -> Control Panel, then open Administrative Tools -> Services.

    Highlight the service, then click Action -> Properties. Under the General tab on the first line you can see the service name, which in this case is tws_tokensrv_tws8_2.

    Start parameters

    Enter any start parameters needed for this service (Tivoli Token Service).

    In this case, there are no start parameters, so leave this field blank.
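
    If sc.exe is available on the node, the same service (key) name can also be retrieved from the command line by passing the display name shown in the Services panel:

     sc getkeyname "Tivoli Token Service"
     rem The output includes a line such as: Name = tws_tokensrv_tws8_2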

  34. This screen (Figure 4-94) allows you to replicate registry data to all nodes in the cluster.


    Figure 4-94: Registry Replication

    In the case of this service, "Tivoli Token Service", this is not needed, so leave it blank. Then click Finish.

  35. Figure 4-95 should then be displayed, indicating that the resource has been created successfully. Click OK.


    Figure 4-95: Cluster resource created successfully
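
    As an alternative to the wizard, the same generic service resource can be created and wired up with the cluster.exe command line. The following is only a sketch: the group name (Disk Group1) matches this example, but the dependency names (Disk X:, Cluster IP Address, Network Name) are placeholders that must be replaced with the actual resource names shown in your cluster group.

     cluster resource "Tivoli Token Service" /create /group:"Disk Group1" /type:"Generic Service"
     cluster resource "Tivoli Token Service" /priv ServiceName=tws_tokensrv_tws8_2
     cluster resource "Tivoli Token Service" /adddep:"Disk X:"
     cluster resource "Tivoli Token Service" /adddep:"Cluster IP Address"
     cluster resource "Tivoli Token Service" /adddep:"Network Name"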

  36. Now create a new resource for the Tivoli Netman service by repeating step 29 (shown in Figure 4-89 on page 369).

  37. Fill in the resource values in the following way, then click Next.

    Name

    Enter the name you want to use for this resource, such as "Tivoli Netman Service".

    Description

    Enter a description of this resource, such as "Tivoli Netman Service".

    Resource type

    Select the resource type for the Tivoli Netman Service; in this case, select Generic Service.

    Group

    Select the group in which you want to create this resource. It must be created in the same group as any dependencies (such as the installation disk drive or network).

  38. Select the possible nodes that this resource can run on.

    In this case select both nodes, then click Next.

  39. Select all the dependencies that you would like this resource (Tivoli Netman Service) to be dependent on.

    In this case we only need the Tivoli Token Service to be online before we can start the Tivoli Netman Service, because Tivoli Token Service will not start until the disk, network and IP address are available, as shown in Figure 4-96.


    Figure 4-96: Dependencies for IBM Tivoli Workload Scheduler Netman service

    Then click Next.

  40. Add in the service parameters for the service "Tivoli Netman Service" with the following parameters, then click Next.

    Service name

    To find the service name, open the Windows services panel. Go to Start -> Settings -> Control Panel, then open Administrative Tools -> Services. Highlight the service, then click Action -> Properties. Under the General tab on the first line you can see the service name, which in this case is tws_netman_tws8_2.

    Start parameters

    Enter start parameters needed for the service "Tivoli Netman Service". In this case, there are no start parameters so leave this field blank.

  41. Repeat steps 34 and 35: leave the registry replication list blank and click Finish; a window indicating that the resource was created successfully is displayed. Then click OK.

  42. Now create a new resource for the IBM Tivoli Workload Scheduler by repeating step 29, as shown in Figure 4-89 on page 369.

  43. Fill out the resource values in the following way; when you finish, click Next:

    Name

    Enter the name you want to use for this resource, such as "TWS Workload Scheduler".

    Description

    Enter a description of this resource, such as "TWS Workload Scheduler".

    Resource type

    Select the resource type for "TWS Workload Scheduler"; in this case, select Generic Service.

    Group

    Select the group in which you want to create this resource. It must be created in the same group as any dependencies, such as the installation disk drive or network.

  44. Select the possible nodes that this resource can run on.

    In this case, select both nodes. Then click Next.

  45. Select all dependencies that you would like this resource, "TWS Workload Scheduler", to be dependent on.

    In this case we only need the Tivoli Netman Service to be online before we can start the TWS Workload Scheduler, because Tivoli Netman Service will not start until the Tivoli Token Service is started, and Tivoli Token Service will not start until the disk, network and IP address are available.

    When you finish, click Next.

  46. Add in the service parameters for this service, "TWS Workload Scheduler", with the following parameters, then click Next.

    Service name

    To find the service name, open the Windows services panel. Go to Start -> Settings -> Control Panel, then open Administrative Tools -> Services. Highlight the service, then click Action -> Properties. Under the General tab on the first line you can see the service name, which in this case is tws_maestro_tws8_2.

    Start parameters

    Enter any start parameters needed for this service ("TWS Workload Scheduler"). In this case there are no start parameters, so leave this field blank.

  47. Repeat steps 34 and 35: leave the registry replication list blank and click Finish; a screen indicating that the resource was created successfully is displayed. Then click OK.

  48. At this point all three resources have been created in the cluster. Now you need to change some of the advanced parameters—but only in the TWS Workload Scheduler resource.

    To do this, open the Cluster Administrator tool. Click the Group that you have defined the TWS Workload Scheduler resource in. Highlight the resource and click Action -> Properties, as shown in Figure 4-97.


    Figure 4-97: Cluster Administrator

  49. Now click the Advanced tab, as shown in Figure 4-98, and change the Restart setting to Do not restart. Then click OK.


    Figure 4-98: The Advanced tab
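
    The same change can also be made from the command line through the resource's RestartAction property, where a value of 0 corresponds to Do not restart; a sketch, assuming the resource name used in this example:

     cluster resource "TWS Workload Scheduler" /prop RestartAction=0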

4.2.2 Configuring the cluster group

Each cluster group has a set of settings that affect the way the group fails over and fails back. In this section we cover the different options and how they affect Tivoli Workload Scheduler. We describe the three main tabs used when dealing with the properties of the cluster group.

To modify any of these options:

  1. Open Cluster Administrator.

  2. In the console tree (usually the left pane), click the Groups folder.

  3. In the details pane (usually the right pane), click the appropriate group.

  4. On the File menu, click Properties.

  5. On the General tab, next to Preferred owners, click Modify.

The General tab is shown in Figure 4-99. Using this tab, you can define the following:

Name

Enter the name of the cluster group.

Description

Enter a description of this cluster group.

Preferred owner

Select the preferred owner of this cluster group. If no preferred owners are specified, failback does not occur. If more than one node is listed under preferred owners, priority is determined by the order of the list; the group always tries to fail back to the highest priority node that is available.


Figure 4-99: General tab for Group Properties

The Failover tab is shown in Figure 4-100. Using this tab, you can define the following:

Threshold

Enter the maximum number of times the group is allowed to fail over within the set time period. To set an accurate number, consider how long it takes for all products in this group to come back online. Also consider that if a service is not available on both sides of the cluster, the cluster software will continue to move the group from side to side until the service becomes available or the threshold is reached.

Period

Enter the period of time over which failovers are counted. If the group fails over more than the threshold number of times within this period, it is not moved again.


Figure 4-100: Failover tab for Group Properties

The Failback tab is shown in Figure 4-101 on page 383. This tab gives you two options that control whether the group fails back, as follows:

Prevent failback

If Prevent failback is set, and provided that all dependencies of the group are met, the group continues to run on this side of the cluster until there is a problem, at which point the group will move again. The only other way the group can be moved is manually, through Cluster Administrator.

Allow failback

If Allow failback is set, then you have two further options: Immediately, and Failback between.

If Immediately is set, the group tries to fail back immediately.

If Failback between is set, which is the preferred option, you can define a time range (from and to) during which the group is allowed to fail back. We recommend a window some time before Jnextday, while still allowing enough time for the group to come back online before Jnextday has to start. Note that if no preferred owners are specified for the group, failback does not occur.


Figure 4-101: Failback tab for Group Properties

4.2.3 Two instances of IBM Tivoli Workload Scheduler in a cluster

In this section, we describe how to install two instances of IBM Tivoli Workload Scheduler 8.2 Engine (Master Domain Manager) in a Microsoft Cluster. The configuration will be in a mutual takeover mode, which means that when one side of the cluster is down, you will have two copies of IBM Tivoli Workload Scheduler running on the same node. This configuration is shown in Figure 4-102 on page 384.


Figure 4-102: Network diagram of the Microsoft Cluster

  1. Before starting the installation, some careful planning must take place. To plan most efficiently, you need the following information.

    Workstation type

    You need to know the type of each workstation to be installed in the cluster, as this may bring additional dependencies (such as JSC and Framework connectivity) as well as installation requirements.

    In this configuration we are installing two Master Domain Managers (MDMs).

    Location of the code

    This code should be installed on a file system that is external to both nodes in the cluster, but also accessible by both nodes. The location should also be in the same part of the file system (or at least the same drive) as the application that the IBM Tivoli Workload Scheduler engine is going to manage.

    You also need to look at the way the two instances of IBM Tivoli Workload Scheduler will work together, so you need to make sure that the directory structure does not overlap.

    Finally, you need sufficient disk space to install IBM Tivoli Workload Scheduler. Refer to IBM Tivoli Workload Scheduler Release Notes Version 8.2, SC32-1277, for information about these requirements.

    In this configuration, we will install one copy of IBM Tivoli Workload Scheduler 8.2 in the X drive and the other in the Y drive.

    Installation user

    Each instance of IBM Tivoli Workload Scheduler needs an individual installation user name, because this user is used to start the services for this instance of IBM Tivoli Workload Scheduler. This installation user must exist on both sides of the cluster, because the IBM Tivoli Workload Scheduler instance can run on both sides of the cluster.

    It also needs its own home directory to run in, and this home directory must be in the same location (on the shared disk volume), for the same reasons described under "Location of the code".

    In our case, we will use the same names as the Cluster group names. For the first installation, we will use TIVW2KV1; for the second installation, we will use TIVW2KV2.

    Naming convention

    Plan your naming convention carefully, because it is difficult to change some of these objects after installing IBM Tivoli Workload Scheduler (in fact, it is easier to reinstall rather than change some objects).

    The naming convention that you need to consider will be used for installation user names, workstation names, cluster group names, and the different resource names in each of the cluster groups. Use a naming convention that makes it easy to understand and identify what is running where, and that also conforms to the allowed maximum characters for that object.

    Netman port

    This port is used for listening for incoming requests. Because we have a configuration where two instances of IBM Tivoli Workload Scheduler can be running on the same node (mutual takeover scenario), we need to set a different port number for each listening instance of IBM Tivoli Workload Scheduler.

    The two port numbers that are chosen must not conflict with any other network products installed on these two nodes.

    In this installation, we use port number 31111 for the first installation (TIVW2KV1), and port 31112 for the second installation (TIVW2KV2).
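
    The port chosen at installation time is also recorded in each instance's localopts file as the nm port option. The following is only a sketch of what the two entries would look like, assuming the installation directories used in this chapter:

     # In X:\win32app\tws\tivw2kv1\localopts (first instance)
     nm port =31111

     # In Y:\win32app\tws\tivw2kv2\localopts (second instance)
     nm port =31112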

    IP address

    The IP address that you define in the workstation definition for each IBM Tivoli Workload Scheduler instance should not be an address that is bound to a particular node, but the one that is bound to the cluster group. This IP address should be addressable from the network. If the two IBM Tivoli Workload Scheduler instances are to move separately, then you will need two IP addresses, one for each cluster group.

    In this installation, we use 9.3.4.199 for cluster group TIVW2KV1, and 9.3.4.175 for cluster group TIVW2KV2.

  2. After gathering all the information in step 1 and deciding on a naming convention, you can install the first IBM Tivoli Workload Scheduler engine in the cluster. To do this, repeat steps 1 through 20 in 4.2.1, "Single instance of IBM Tivoli Workload Scheduler" on page 347, but use the parameters listed in Table 4-6.

    Table 4-6: IBM Tivoli Workload Scheduler workstation definition

    Installation User Name (TIVW2KV1): In our case, we used the name of the cluster group as the installation user name.

    Password (TIVW2KV1): To keep the installation simple, we used the same password as the installation user name. However, in a real customer installation, you would use the password provided by the customer.

    Destination Directory (X:\win32app\tws\tivw2kv1): This has to be installed on the disk that is associated with cluster group TIVW2KV1. In our case, that is the X drive.

    Company Name (IBM ITSO): This is used for the heading of reports, so enter the name of the company that this installation is for. In our case, we used IBM ITSO.

    Master CPU name (TIVW2KV1): Because we are installing a Master Domain Manager, the Master CPU name is the same as This CPU name.

    TCP port Number (31111): This specifies the TCP port number that is used for communications. The default is 31111. If you have two copies of IBM Tivoli Workload Scheduler running on the same system, then the port numbers must be different.

  3. When you get to step 20, replace the Installation Arguments with the values listed in Table 4-6 on page 386.

  4. When you get to step 22, replace the workstation definition with the arguments listed in Table 4-7.

    Table 4-7: IBM Tivoli Workload Scheduler workstation definition

    cpuname (TIVW2KV1): Verify that the workstation name is TIVW2KV1, as this should be filled in during the installation.

    Description (Master CPU for the first cluster group): Enter a description that is appropriate for this workstation.

    OS (WNT): Specifies the operating system of the workstation. Valid values include UNIX, WNT, and OTHER.

    Node (9.3.4.199): This field is the address that is associated with the first cluster group. This address can be a fully-qualified domain name or an IP address.

    Domain (Masterdm): Specify a domain name for this workstation. The default name is MASTERDM.

    TCPaddr (31111): Specifies the TCP port number that is used for communication. The default is 31111. If you have two copies of IBM Tivoli Workload Scheduler running on the same system, then the port numbers must be different.

    For Maestro (no value): This is a key word that starts the extra options for the workstation.

    Autolink (On): When set to ON, this specifies whether to open the link between workstations at the beginning of each day during startup.

    Resolvedep (On): When set to ON, this workstation will track dependencies for all jobs and job streams, including those running on other workstations.

    Fullstatus (On): With this set to ON, this workstation will be updated with the status of jobs and job streams running on all other workstations in its domain and in subordinate domains, but not on peer or parent domains.

    End (no value): This is a key word that ends the workstation definition.

  5. Now finish off the first installation by repeating steps 23 through to 27.

    However, at step 25, use the following command:

     Maestrohome\bin\conman limit cpu=tivw2kv1;10 

    To verify that this command has worked correctly, run the conman show cpus command:

     Maestrohome\bin\conman sc=tivw2kv1 

    The conman output, shown in Example 4-90, contains the number 10 in the LIMIT column, indicating that the command has worked correctly.

    Example 4-90: conman output

    start example
     X:\win32app\TWS\tivw2kv1\bin>conman sc=tivw2kv1
     TWS for WINDOWS NT/CONMAN 8.2 (1.36.1.7)
     Licensed Materials Property of IBM
     5698-WKB (C) Copyright IBM Corp 1998,2001
     US Government User Restricted Rights
     Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
     Installed for user ''.
     Locale LANG set to "en"
     Schedule (Exp) 06/11/03 (#1) on TIVW2KV1.  Batchman LIVES.  Limit: 10, Fence: 0, Audit Level: 0
     sc=tivw2kv1
     CPUID       RUN  NODE          LIMIT FENCE    DATE    TIME  STATE  METHOD  DOMAIN
     TIVW2KV1      1  *WNT  MASTER     10     0  06/11/03 12:08  I J            MASTERDM
    end example

  6. After installing the first IBM Tivoli Workload Scheduler instance in the cluster, you can now install the second IBM Tivoli Workload Scheduler engine in the cluster by repeating steps 1 through 20 in 4.2.1, "Single instance of IBM Tivoli Workload Scheduler" on page 347, using the parameters listed in Table 4-8.

    Table 4-8: IBM Tivoli Workload Scheduler workstation definition

    Installation User Name (TIVW2KV2): In this case, we used the name of the cluster group as the installation user name.

    Password (TIVW2KV2): To keep this installation simple, we used the same password as the installation user name, but in a real customer installation you would use the password provided by the customer.

    Destination Directory (Y:\win32app\tws\tivw2kv2): This has to be installed on the disk that is associated with cluster group TIVW2KV2; in this case, that is the Y drive.

    Company Name (IBM ITSO): This is used for the heading of reports, so enter the name of the company that this installation is for. In our case, we used "IBM ITSO".

    Master CPU name (TIVW2KV2): Because we are installing a Master Domain Manager, the Master CPU name is the same as This CPU name.

    TCP Port Number (31112): Specifies the TCP port number that is used for communication. The default is 31111. If you have two copies of IBM Tivoli Workload Scheduler running on the same system, then the port numbers must be different.

  7. When you get to step 20, replace the Installation Arguments with the values in Table 4-8.

  8. When you get to step 22, replace the workstation definition with the arguments listed in Table 4-9.

    Table 4-9: IBM Tivoli Workload Scheduler workstation definition

    cpuname (TIVW2KV2): Check that the workstation name is TIVW2KV2, as this should be filled in during the installation.

    Description (Master CPU for the second cluster group): Type in a description that is appropriate for this workstation.

    OS (WNT): Specifies the operating system of the workstation. Valid values include UNIX, WNT, and OTHER.

    Node (9.3.4.175): This field is the address that is associated with the second cluster group. This address can be a fully-qualified domain name or an IP address.

    Domain (Masterdm): Specify a domain name for this workstation. The default name is MASTERDM.

    TCPaddr (31112): Specifies the TCP port number that is used for communication. The default is 31111. If you have two copies of IBM Tivoli Workload Scheduler running on the same system, then the port numbers must be different.

    For Maestro (no value): This is a key word that starts the extra options for the workstation.

    Autolink (On): When set to ON, it specifies whether to open the link between workstations at the beginning of each day during startup.

    Resolvedep (On): With this set to ON, this workstation will track dependencies for all jobs and job streams, including those running on other workstations.

    Fullstatus (On): With this set to ON, this workstation will be updated with the status of jobs and job streams running on all other workstations in its domain and in subordinate domains, but not on peer or parent domains.

    End (no value): This is a key word that ends the workstation definition.

  9. Now finish the second installation by repeating steps 23 through 27.

    However, when you reach step 25, use the following command:

     Maestrohome\bin\conman limit cpu=tivw2kv2;10 

    Run the conman show cpus command to verify that the command has worked correctly:

     Maestrohome\bin\conman sc=tivw2kv2 

    The conman output, shown in Example 4-91, contains the number 10 in the LIMIT column, indicating that the command has worked correctly.

    Example 4-91: conman output

    start example
     Y:\win32app\TWS\tivw2kv2\bin>conman sc=tivw2kv2
     TWS for WINDOWS NT/CONMAN 8.2 (1.36.1.7)
     Licensed Materials Property of IBM
     5698-WKB (C) Copyright IBM Corp 1998,2001
     US Government User Restricted Rights
     Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
     Installed for user ''.
     Locale LANG set to "en"
     Schedule (Exp) 06/11/03 (#1) on TIVW2KV2.  Batchman LIVES.  Limit: 10, Fence: 0, Audit Level: 0
     sc=tivw2kv2
     CPUID       RUN  NODE          LIMIT FENCE    DATE    TIME  STATE  METHOD  DOMAIN
     TIVW2KV2      1  *WNT  MASTER     10     0  06/11/03 12:08  I J            MASTERDM
    end example

  10. The two instances of IBM Tivoli Workload Scheduler are installed in the cluster. Now you need to configure the cluster software so that the two copies of IBM Tivoli Workload Scheduler will work in a mutual takeover.

  11. You can configure the two instances of IBM Tivoli Workload Scheduler in the cluster services by creating two sets of new resources for each of the three IBM Tivoli Workload Scheduler services: Tivoli Netman, Tivoli Token Service and Tivoli Workload Scheduler.

    These two sets of three new resources have to be created in the same cluster group as the IBM Tivoli Workload Scheduler installation drive. The first set (TIVW2KV1) was installed in the X drive, so this drive is associated with cluster group "TIVW2KV1". The second set (TIVW2KV2) was installed in the Y drive, so this drive is associated with cluster group "TIVW2KV2".

  12. Create the new resource "Tivoli Token Service" for the two IBM Tivoli Workload Scheduler engines by repeating steps 28 through to 34 in 4.2.1, "Single instance of IBM Tivoli Workload Scheduler" on page 347. Use the parameters in Table 4-10 on page 392 for the first set (TIVW2KV1), and use the parameters in Table 4-11 on page 392 for the second set (TIVW2KV2).

    Table 4-10: Tivoli Token Service definition for first instance

    Name (Figure 4-90): ITIVW2KV1 - Token Service. Enter the name of the new resource. In our case, we used the cluster group name followed by the service.

    Description (Figure 4-90): Tivoli Token Service for the first instance. Enter a description of this resource.

    Resource type (Figure 4-90): Generic Service. Select the resource type for "ITIVW2KV1 - Token Service"; in this case, Generic Service.

    Group (Figure 4-90): ITIVW2KV1. Select the group in which you want to create this resource.

    Service name (Figure 4-93): tws_tokensrv_TIVW2KV1. Enter the service name; this can be found in the Services panel.

    Start parameters (Figure 4-93): No value. This service does not need any start parameters, so leave this blank.

    Table 4-11: Tivoli Token Service definition for second instance

    Name (Figure 4-90): ITIVW2KV2 - Token Service. Enter the name of the new resource. In our case, we used the cluster group name followed by the service.

    Description (Figure 4-90): Tivoli Token Service for the second instance. Enter a description of this resource.

    Resource type (Figure 4-90): Generic Service. Select the resource type for "ITIVW2KV2 - Token Service"; in this case, Generic Service.

    Group (Figure 4-90): ITIVW2KV2. Select the group in which you want to create this resource.

    Service name (Figure 4-93): tws_tokensrv_TIVW2KV2. Enter the service name; this can be found in the Services panel.

    Start parameters (Figure 4-93): No value. This service does not need any start parameters, so leave this blank.

  13. Create the new resource "Tivoli Netman Service" for the two IBM Tivoli Workload Scheduler engines by repeating steps 35 through to 40 in 4.2.1, "Single instance of IBM Tivoli Workload Scheduler" on page 347.

    Use the parameters in Table 4-12 for the first set (TIVW2KV1) and use the parameters in Table 4-13 for the second set (TIVW2KV2) below.

    Table 4-12: Tivoli Netman Service definition for first instance

    Name (Figure 4-90): ITIVW2KV1 - Netman Service. Enter the name of the new resource. In this case, we used the cluster group name followed by the service.

    Description (Figure 4-90): Tivoli Netman Service for the first instance. Enter a description of this resource.

    Resource type (Figure 4-90): Generic Service. Select the resource type for "ITIVW2KV1 - Netman Service"; in this case, Generic Service.

    Group (Figure 4-90): ITIVW2KV1. Select the group in which you want to create this resource.

    Service name (Figure 4-93): tws_netman_TIVW2KV1. Type in the service name; this can be found in the Services panel.

    Start parameters (Figure 4-93): No value. This service does not need any start parameters, so leave this blank.

    Resource Dependencies (Figure 4-96): ITIVW2KV1 - Token Service. The only resource dependency is the ITIVW2KV1 - Token Service.

    Table 4-13: Tivoli Netman Service definition for second instance

    Name (Figure 4-90): ITIVW2KV2 - Netman Service. Enter the name of the new resource. In our case, we used the cluster group name followed by the service.

    Description (Figure 4-90): Tivoli Netman Service for the second instance. Enter a description of this resource.

    Resource type (Figure 4-90): Generic Service. Select the resource type for "ITIVW2KV2 - Netman Service"; in this case, Generic Service.

    Group (Figure 4-90): ITIVW2KV2. Select the group in which you want to create this resource.

    Service name (Figure 4-93): tws_netman_TIVW2KV2. Type in the service name; this can be found in the Services panel.

    Start parameters (Figure 4-93): No value. This service does not need any start parameters, so leave this blank.

    Resource Dependencies (Figure 4-96): ITIVW2KV2 - Token Service. The only resource dependency is the ITIVW2KV2 - Token Service.

  14. Create the new resource "Tivoli Workload Scheduler" for the two IBM Tivoli Workload Scheduler engines by repeating steps 41 through to 48 in 4.2.1, "Single instance of IBM Tivoli Workload Scheduler" on page 347.

    Use the parameters in Table 4-14 for the first set (TIVW2KV1) and use the parameters in Table 4-15 on page 395 for the second set (TIVW2KV2).

    Table 4-14: Tivoli Workload Scheduler definition for first instance

    Name (Figure 4-90): ITIVW2KV1 - Tivoli Workload Scheduler. Enter the name of the new resource. In our case, we used the cluster group name followed by the service.

    Description (Figure 4-90): Tivoli Workload Scheduler for the first instance. Enter a description of this resource.

    Resource type (Figure 4-90): Generic Service. Select the resource type for "ITIVW2KV1 - Tivoli Workload Scheduler"; in this case, Generic Service.

    Group (Figure 4-90): ITIVW2KV1. Select the group in which you want to create this resource.

    Service name (Figure 4-93): tws_maestro_TIVW2KV1. Enter the service name; this can be found in the Services panel.

    Start parameters (Figure 4-93): No value. This service does not need any start parameters, so leave this blank.

    Resource Dependencies (Figure 4-96): ITIVW2KV1 - Netman Service. The only resource dependency is the ITIVW2KV1 - Netman Service.

    Table 4-15: Tivoli Workload Scheduler definition for second instance

    Name (Figure 4-90): ITIVW2KV2 - Tivoli Workload Scheduler. Enter the name of the new resource. In our case, we used the cluster group name followed by the service.

    Description (Figure 4-90): Tivoli Workload Scheduler for the second instance. Enter a description of this resource.

    Resource type (Figure 4-90): Generic Service. Select the resource type for "ITIVW2KV2 - Tivoli Workload Scheduler"; in this case, Generic Service.

    Group (Figure 4-90): ITIVW2KV2. Select the group in which you want to create this resource.

    Service name (Figure 4-93): tws_maestro_TIVW2KV2. Enter the service name; this can be found in the Services panel.

    Start parameters (Figure 4-93): No value. This service does not need any start parameters, so leave this blank.

    Resource Dependencies (Figure 4-96): ITIVW2KV2 - Netman Service. The only resource dependency is the ITIVW2KV2 - Netman Service.

  15. All resources are set up and configured correctly. Now configure the cluster groups by going through the steps in 4.2.2, "Configuring the cluster group" on page 379.

    Use the parameters in Table 4-16 on page 396 for the first set (TIVW2KV1), and use the corresponding values (substituting TIVW2KV2) for the second set (TIVW2KV2).

    Table 4-16: Cluster group settings for first instance

    Name (Figure 4-99, General tab): ITIVW2KV1 Group. This name should be there by default. If it is not, verify that the correct group is selected.

    Description (Figure 4-99, General tab): This group is for the first instance of IBM Tivoli Workload Scheduler. Enter a description of this group.

    Preferred owner (Figure 4-99, General tab): TIVW2KV1. Select the preferred owner for this group. We selected TIVW2KV1.

    Threshold (Figure 4-100, Failover tab): 10. Enter the number of times this group is allowed to fail over within the set period.

    Period (Figure 4-100, Failover tab): 6. Enter the period over which failovers are counted. We selected 6 hours.

    Allow failback (Figure 4-101, Failback tab): Check Allow Failback. This enables the group to fail back to the preferred owner.

    Failback between (Figure 4-101, Failback tab): 4 and 6. Enter the time range during which you would like the group to fail back.
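
    The same group settings can also be applied from the command line with cluster.exe. The following is only a sketch: it assumes the cluster group is named TIVW2KV1 (substitute the group name shown in Cluster Administrator) and that the failback window values are hours in 24-hour format.

     cluster group "TIVW2KV1" /prop FailoverThreshold=10
     cluster group "TIVW2KV1" /prop FailoverPeriod=6
     rem AutoFailbackType: 0 = prevent failback, 1 = allow failback
     cluster group "TIVW2KV1" /prop AutoFailbackType=1
     cluster group "TIVW2KV1" /prop FailbackWindowStart=4
     cluster group "TIVW2KV1" /prop FailbackWindowEnd=6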

  16. You now have the two instances of the IBM Tivoli Workload Scheduler engine installed and configured within the cluster, and the cluster configured in the way that best suits IBM Tivoli Workload Scheduler.

  17. To test this installation, open the Cluster Administrator and expand the Groups folder to show the two groups. Highlight one of them (for example, TIVW2KV1) and go to File -> Move Group. All resources should go offline, the owner should change from TIVW2K1 to TIVW2K2, and then all resources should come back online under the new owner.

4.2.4 Installation of the IBM Tivoli Management Framework

The IBM Tivoli Management Framework (Tivoli Framework) is used as an authenticating layer for any user that is using the Job Scheduling Console to connect to the IBM Tivoli Workload Scheduler engine. There are two products that get installed in the Framework: Job Scheduling Services (JSS) and the IBM Tivoli Workload Scheduler Connector. Together they make up the connection between the Job Scheduling Console and the IBM Tivoli Workload Scheduler engine, as shown in Figure 4-103.


Figure 4-103: IBM Tivoli Workload Scheduler user authentication flow

There are a number of ways to install the Tivoli Framework. You can install the Tivoli Framework separately from the IBM Tivoli Workload Scheduler engine. In this case, install the Tivoli Framework before installing IBM Tivoli Workload Scheduler.

If there is no Tivoli Framework installed on the system, you can use the Full install option when installing IBM Tivoli Workload Scheduler. This will install Tivoli Management Framework 4.1, Job Scheduling Services (JSS), and the IBM Tivoli Workload Scheduler Connector, and add the Tivoli Job Scheduling administration user.

In this section, we describe how to install the IBM Tivoli Management Framework separately. Either before or after IBM Tivoli Workload Scheduler is configured for Microsoft Cluster and made highly available, you can add IBM Tivoli Management Framework so that the Job Scheduling Console component of IBM Tivoli Workload Scheduler can be used.

Note 

IBM Tivoli Management Framework should be installed prior to IBM Tivoli Workload Scheduler Connector installation. For instructions on installing a TMR server, refer to Chapter 5 of Tivoli Enterprise Installation Guide Version 4.1, GC32-0804.

Here, we assume that you have already installed Tivoli Management Framework, and have applied the latest set of fix packs.

Because the IBM Tivoli Management Framework is not officially supported in a mutual takeover mode, we will install on the local disk on each side of the cluster, as shown in Figure 4-104.


Figure 4-104: Installation location for TMRs

The following instructions are only a guide to installing the Tivoli Framework. For more detailed information, refer to Tivoli Enterprise Installation Guide Version 4.1, GC32-0804.

To install Tivoli Framework, follow these steps:

  1. Select node 1 to install the Tivoli Framework on. In our configuration, node 1 is called TIVW2K1.

  2. Insert the Tivoli Management Framework (1 of 2) CD into the CD-ROM drive, or map the CD from a drive on a remote system.

  3. From the taskbar, click Start, and then select Run to display the Run window.

  4. In the Open field, type x:\setup, where x is the CD-ROM drive or the mapped drive. The Welcome window is displayed.

  5. Click Next. The License Agreement window is displayed.

  6. Read the license agreement and click Yes to accept the agreement. The Accounts and File Permissions window is displayed.

  7. Click Next. The Installation Password window is displayed.

  8. In the Installation Password window, perform the following steps:

    1. In the Password field, type an installation password, if desired. If you specify a password, this password must be used to install Managed Nodes, to create interregional connections, and to perform any installation using Tivoli Software Installation Service.

      Note 

      During installation the specified password becomes the installation and the region password. To change the installation password, use the odadmin region set_install_pw command. To change the region password, use the odadmin region set_region_pw command.

      Note that if you change one of these passwords, the other password is not automatically changed.

    2. Click Next. The Remote Access Account window is displayed.

  9. In the Remote Access Account window, perform the following steps:

    1. Type the Tivoli remote access account name and password through which Tivoli programs will access remote file systems. If you do not specify an account name and password and you use remote file systems, Tivoli programs will not be able to access these remote file systems.

      Note 

      If you are using remote file systems, the password must be at least one character. If the password is null, the object database is created, but you cannot start the object dispatcher (the oserv service).

    2. Click Next. The Setup Type window is displayed.

  10. In the Setup Type window, do the following:

    1. Select one of the following setup types:

      • Typical - Installs the IBM Tivoli Management Framework product and its documentation library.

      • Compact - Installs only the IBM Tivoli Management Framework product.

      • Custom - Installs the IBM Tivoli Management Framework components that you select.

    2. Accept the default destination directory or click Browse to select a path to another directory on the local system.

      Note 

      Do not install on remote file systems or share Tivoli Framework files among systems in a Tivoli environment.

    3. Click Next. If you selected the Custom option, the Select Components window is displayed. If you selected Compact or Typical, go to step 12.

  11. (Custom setup only) In the Select Components window, do the following:

    1. Select the components to install. From this window you can preview the disk space required by each component, as well as change the destination directory.

    2. If desired, click Browse to change the destination directory.

    3. Click Next. The Choose Database Directory window is displayed.

  12. In the Choose Database Directory window, do the following:

    1. Accept the default destination directory, or click Browse to select a path to another directory on the local system.

    2. Click Next. The Enter License Key window is displayed.

  13. In the Enter License Key window, do the following:

    1. In the Key field, type: "IBMTIVOLIMANAGEMENTREGIONLICENSEKEY41".

    2. Click Next. The Start Copying Files window is displayed.

  14. Click Next. The Setup Status window is displayed.

  15. After installing the IBM Tivoli Management Framework files, the setup program initializes the Tivoli object dispatcher server database. When the initialization is complete, you are prompted to press any key to continue.

  16. If this is the first time you installed IBM Tivoli Management Framework on this system, you are prompted to restart the machine.

    Tip 

    Rebooting the system loads the TivoliAP.dll file.

  17. After the installation completes, configure the Windows operating system for SMTP e-mail. From a command line prompt, enter the following commands:

     %SystemRoot%\system32\drivers\etc\Tivoli\setup_env.cmd
     bash
     wmailhost hostname

  18. Tivoli Management Framework is installed on node 1, so now install it on node 2. In our configuration, node 2 is called TIVW2K2.

  19. Log into node 2 (TIVW2K2) and repeat steps 2 through to 17.

4.2.5 Installation of Job Scheduling Services

To install IBM Workload Scheduler Job Scheduling Services 8.2, you must have the following component installed within your IBM Tivoli Workload Scheduler 8.2 network:

  • Tivoli Framework 3.7.1 or 4.1

You must install the Job Scheduling Services on the Tivoli Management Region server or on a Managed Node on the same workstation where the Tivoli Workload Scheduler engine code is installed.

Note 

You only have to install this component if you wish to monitor or access the local data on the Tivoli Workload Scheduler engine through the Job Scheduling Console.

You can install and upgrade the components of the Job Scheduling Services using any of the following installation mechanisms:

  • By using an installation program, which creates a new Tivoli Management Region server and automatically installs or upgrades the IBM Workload Scheduler Connector and Job Scheduling Services

  • By using the Tivoli desktop, where you select which product and patches to install on which machine

  • By using the winstall command provided by Tivoli Management Framework, where you specify which products and patches to install on which machine
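
For example, the winstall route would look something like the following sketch, run in the Tivoli environment (after setup_env.cmd and bash). The CD-ROM path (Z:) and the product index file name (JSS.IND) are placeholders; check the CONTENTS.LST file on the installation CD for the actual index file name, and substitute your own Managed Node name.

     winstall -c Z: -i JSS.IND -y tivw2k1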

Here we provide an example of installing the Job Scheduling Services using the Tivoli Desktop. Ensure you have set the Tivoli environment by issuing the command %SystemRoot%\system32\drivers\etc\Tivoli\setup_env.cmd, then follow these steps:

Note 

Before installing any new product into the Tivoli Management Region server, make a backup of the Tivoli database.

  1. First select node1 to install the Tivoli Job Scheduling Services on. In our configuration, node 1 is called TIVW2K1.

  2. Open the Tivoli Desktop on TIVW2K1.

  3. From the Desktop menu choose Install, then Install Product. The Install Product window is displayed.

  4. Click Select Media to select the installation directory. The File Browser window is displayed.

  5. Type or select the installation path. This path includes the directory containing the CONTENTS.LST file.

  6. Click Set Media & Close. You return to the Install Product window.

  7. In the Select Product to Install list, select Tivoli Job Scheduling Services v. 1.2.

  8. In the Available Clients list, select the nodes to install on and move them to the Clients to Install On list.

  9. In the Install Product window, click Install. The Product Install window is displayed, which shows the operations to be performed by the installation program.

  10. Click Continue Install to continue the installation, or click Cancel to cancel the installation.

  11. The installation program copies the files and configures the Tivoli database with the new classes. When the installation is complete, the message Finished product installation appears. Click Close.

  12. Now select node 2 to install the Tivoli Job Scheduling Services on. In our configuration, node 2 is called TIVW2K2.

  13. Repeat steps 2 through 11.

4.2.6 Installation of Job Scheduling Connector

To install IBM Workload Scheduler Connector 8.2, you must have the following components installed within your Tivoli Workload Scheduler 8.2 network:

  • Tivoli Framework 3.7.1 or 4.1

  • Tivoli Job Scheduling Services 1.3

You must install IBM Tivoli Workload Scheduler Connector on the Tivoli Management Region server or on a Managed Node on the same workstation where the Tivoli Workload Scheduler engine code is installed.

Note 

You only have to install this component if you wish to monitor or access the local data of the Tivoli Workload Scheduler engine through the Job Scheduling Console.

You can install and upgrade the components of IBM Tivoli Workload Scheduler Connector using any of the following installation mechanisms:

  • By using an installation program, which creates a new Tivoli Management Region server and automatically installs or upgrades IBM Workload Scheduler Connector and Job Scheduling Services

  • By using the Tivoli Desktop, where you select which product and patches to install on which machine

  • By using the winstall command provided by Tivoli Management Framework, where you specify which products and patches to install on which machine

Connector installation and customization vary, depending on whether your Tivoli Workload Scheduler master is on a Tivoli server or a Managed Node.

  • When the Workload Scheduler master is on a Tivoli server, you must install both Job Scheduling Services and the Connector on the Tivoli server of your environment. You must also create a Connector instance for the Tivoli server. You can do this during installation by using the Create Instance check box and completing the required fields. In this example, we are installing the connector in this type of configuration.

  • When the Workload Scheduler master is on a Managed Node, you must install Job Scheduling Services on the Tivoli Server and on the Managed Node where the master is located. You must then install the Connector on the Tivoli server and on the same nodes where you installed Job Scheduling Services. Ensure that you do not select the Create Instance check box.

  • If you have more than one node where you want to install the Connector (for example, if you want to access the local data of a fault-tolerant agent through the Job Scheduling Console), you can install Job Scheduling Services and the connector on multiple machines. However, in this case you should deselect the Create Instance check box.

Following is an example of how to install the Connector using the Tivoli Desktop. Ensure you have installed Job Scheduling Services and have set the Tivoli environment. Then follow these steps:

Note 

Before installing any new product into the Tivoli Management Region server, make a backup of the Tivoli database.

  1. Select node 1 to install Tivoli Job Scheduling Connector on. In our configuration, node 1 is called TIVW2K1.

  2. Open the Tivoli Desktop on TIVW2K1.

  3. From the Desktop menu choose Install, then Install Product. The Install Product window is displayed.

  4. Click Select Media to select the installation directory. The File Browser window is displayed.

  5. Type or select the installation path. This is the path to the directory containing the CONTENTS.LST file.

  6. Click Set Media & Close. You will return to the Install Product window.

  7. In the Select Product to Install list, select Tivoli TWS Connector v. 8.2. The Install Options window is displayed.

  8. This window enables you to:

    • Install the Connector only.

    • Install the Connector and create a Connector instance.

  9. To install the Connector without creating a Connector instance, leave the Create Instance check box blank and leave the General Installation Options fields blank. These fields are used only during the creation of the Connector Instance.

  10. To install the Connector and create a Connector Instance:

    1. Select the Create Instance check box.

    2. In the TWS directory field, specify the directory where IBM Tivoli Workload Scheduler is installed.

    3. In the TWS instance name field, specify a name for the IBM Tivoli Workload Scheduler instance on the Managed Node. This name must be unique in the network. It is preferable to use the name of the scheduler agent as the instance name.

  11. Click Set to close the Install Options window and return to the Install Product window.

  12. In the Available Clients list, select the nodes to install on and move them to the Clients to Install On list.

  13. In the Install Product window, click Install. The Product Install window is displayed, which shows you the progress of the installation.

  14. Click Continue Install to continue the installation, or click Cancel to cancel the installation.

  15. The installation program copies the files and configures the Tivoli database with the new classes. When the installation is complete, the message Finished product installation appears. Click Close.

  16. Now select node 2 to install the Tivoli Job Scheduling Connector on. In our configuration, node 2 is called TIVW2K2.

  17. Repeat steps 2 through 15.

4.2.7 Creating Connector instances

You need to create one Connector instance on each Framework server (one on each side of the cluster) for each engine that you want to access with the Job Scheduling Console. If you selected the Create Instance check box when running the installation program or installing from the Tivoli Desktop, you do not need to perform the following procedure; in our environment, however, we did need to do this.

To create Connector instances from the command line, ensure you set the Tivoli environment, then enter the following command on the Tivoli server or Managed Node where you installed the Connector that you need to access through the Job Scheduling Console:

 wtwsconn.sh -create -h node -n instance_name -t TWS_directory 

So in our case we need to run this command four times: twice on one Framework server and twice on the other, using these parameters:

First, on node TIVW2K1:

 wtwsconn.sh -create -n TIVW2K1_rg1 -t X:\win32app\TWS\TWS82 
 wtwsconn.sh -create -n TIVW2K2_rg1 -t Y:\win32app\TWS\TWS82 

Then, on node TIVW2K2:

 wtwsconn.sh -create -n TIVW2K1_rg2 -t X:\win32app\TWS\TWS82 
 wtwsconn.sh -create -n TIVW2K2_rg2 -t Y:\win32app\TWS\TWS82 
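To confirm that the instances have been created, you can list the MaestroEngine resources registered with each node's Framework server (a quick check; at this point each node should list only the two instances created locally, and all four appear after the Framework resources are exchanged in 4.2.8):

 wlookup -Lar MaestroEngine 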

4.2.8 Interconnecting the two Tivoli Framework Servers

At this point we have successfully installed and configured the two instances of the IBM Tivoli Workload Scheduler engine on the shared disk system in the Microsoft Cluster (4.2.3, "Two instances of IBM Tivoli Workload Scheduler in a cluster" on page 383), and the two Tivoli Management Frameworks, one on the local disk of each workstation in the cluster (4.2.4, "Installation of the IBM Tivoli Management Framework" on page 396).

We have also successfully installed the Job Scheduling Services (4.2.5, "Installation of Job Scheduling Services" on page 401) and the Job Scheduling Connectors in both Tivoli Management Frameworks.

We now need to share the IBM Tivoli Management Framework resources so that if one side of the cluster is down, then the operator can log into the other Tivoli Management Framework and see both IBM Tivoli Workload Scheduler engines through the connectors. To achieve this we need to share the resources between the two Tivoli Framework servers; this is called interconnection.

Framework interconnection is a complex subject. We show how to interconnect the Framework servers for our environment, but you should plan your interconnection carefully if your installation of IBM Tivoli Workload Scheduler is part of a larger Tivoli Enterprise environment.

To interconnect the Framework servers for IBM Tivoli Workload Scheduler in the environment used in this redbook, first ensure you have set the Tivoli environment by issuing %SystemRoot%\system32\drivers\etc\Tivoli\setup_env.cmd.

Then follow these steps:

  1. Before starting, make a backup of the IBM Tivoli Management Framework object database using the wbkupdb command. Log onto each node as the Windows Administrator, and run a backup of the object database on each node.
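    For example, after setting the Tivoli environment, the backup can be taken with a single command (a minimal sketch; with no arguments, wbkupdb writes the backup to the Framework's default backup directory):

     %SystemRoot%\system32\drivers\etc\Tivoli\setup_env.cmd 
     wbkupdb 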

  2. Run the following wlookup commands on cluster node 1 to verify that the Framework objects exist before interconnecting the servers. The syntax of the commands is:

     wlookup -Lar ManagedNode 

    and

     wlookup -Lar MaestroEngine 

  3. Run the same wlookup commands on the other node in the cluster to see if the objects exist.

  4. Interconnect the Framework servers in a two-way interconnection using the wconnect command. For a full description of how to use this command, refer to Tivoli Management Framework Reference Manual Version 4.1, SC32-0806. While logged on to node TIVW2K1, enter the following command:

     wconnect -c none -l administrator -m Two-way -r none tivw2k2 

    Note 

    A two-way interconnection needs to be established from only one side of the connection. If you have two cluster nodes, you only need to run the wconnect command on one of them.

  5. Use the wlsconn and odadmin commands to verify that the interconnection between the two Framework servers has worked. Look at the output of the wlsconn command; it will contain the primary IP hostname of the node that was interconnected in the preceding step.

    In our environment, the primary IP hostname of cluster node TIVW2K2 is found under the SERVER column in the output of the wlsconn command. The same value is found under the Hostname(s) column in the output of the odadmin command, on the row that shows the Tivoli region ID of the cluster node.
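    For example, the following commands (run from a Tivoli command prompt) show this information; wlsconn lists the interconnected regions, and odadmin odlist lists the object dispatchers of all connected regions together with their region IDs and host names:

     wlsconn 
     odadmin odlist 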

  6. Interconnecting Framework servers only establishes a communication path. The Framework resources that need to be shared between Framework servers have to be pulled across the servers by using an explicit updating command.

    Sharing a Framework resource shares all the objects that the resource defines. This enables Tivoli administrators to securely control which Framework objects are shared between Framework servers, and to control the performance of the Tivoli Enterprise environment by leaving out unnecessary resources from the exchange of resources between Framework servers. Exchange all relevant Framework resources among cluster nodes by using the wupdate command.

    In our environment we exchanged the following Framework resources:

    • ManagedNode

    • MaestroEngine

    • MaestroDatabase

    • MaestroPlan

    • SchedulerEngine

    • SchedulerDatabase

    • SchedulerPlan

    Important: 

    The wupdate command must be run on all cluster nodes, even on two-way interconnected Framework servers.

    The SchedulerEngine Framework resource enables the interconnected scheduling engines to present themselves in the Job Scheduling Console. The MaestroEngine Framework resource enables the wmaeutil command to manage running instances of Connectors.
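    In our environment, the exchange takes the following general form (a hedged sketch: region_name stands for the name of the interconnected region as reported by wlsconn, only some of the resources listed above are shown, and the command is repeated for each resource on each cluster node):

     wupdate -r ManagedNode region_name 
     wupdate -r MaestroEngine region_name 
     wupdate -r SchedulerEngine region_name 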

  7. Now verify that the exchange of the Framework resources has worked. You can use the wlookup command with the following parameters:

     wlookup -Lar ManagedNode 

    and

     wlookup -Lar MaestroEngine 

    When you use the wlookup command with the parameter "ManagedNode", you will see the two nodes in this cluster. When you use the same command with the parameter "MaestroEngine", you should see four names, corresponding to the four Connector instances created in 4.2.7, "Creating Connector instances" (in our environment, TIVW2K1_rg1, TIVW2K2_rg1, TIVW2K1_rg2, and TIVW2K2_rg2).

  8. Run the same sequence of wlookup commands, but on the cluster node on the opposite side of the interconnection. The output from the commands should be identical to the same commands run on the cluster node in the preceding step.

  9. Log into both cluster nodes through the Job Scheduling Console, using the service IP labels of the cluster nodes and the Administrator user account. All scheduling engines (corresponding to the configured Connectors) on all cluster nodes appear. Scheduling engines are marked inactive when the resource group is not running on that cluster node.

  10. Set up a periodic job to exchange Framework resources by using the wupdate command shown in the preceding steps; a sketch of such a job follows this procedure. How frequently the job should run depends on how often changes are made to the Connector objects. For most sites, best practice is a daily update about an hour before Jnextday. Timing the update before Jnextday keeps it consistent with any changes to the installation location of IBM Tivoli Workload Scheduler, because such changes are often timed to occur right before Jnextday is run.
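    A minimal sketch of such a job follows: a Windows command script (hypothetically named upd_frmwk.cmd) that refreshes the shared resources, mirroring step 6 above. The region name and resource list are illustrative for our environment; define an IBM Tivoli Workload Scheduler job that runs this script on each cluster node, and add that job to a schedule that runs daily before Jnextday.

     @echo off 
     rem upd_frmwk.cmd - refresh shared Framework resources (hypothetical example) 
     rem region_name is the interconnected region as reported by wlsconn. 
     rem Depending on the Framework level, the w-commands may need to be run from the Tivoli bash shell. 
     call %SystemRoot%\system32\drivers\etc\Tivoli\setup_env.cmd 
     wupdate -r MaestroEngine region_name 
     wupdate -r SchedulerEngine region_name 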

4.2.9 Installing the Job Scheduling Console

The Job Scheduling Console can be installed on any workstation that has a TCP/IP connection. However, to use the Job Scheduling Console Version 1.3 you should have the following components installed within your IBM Tivoli Workload Scheduler 8.2 network:

  • Tivoli Framework 3.7.1 or 4.1

  • Tivoli Job Scheduling Services 1.3

  • IBM Tivoli Workload Scheduler Connector 8.2

For a full description of the installation, refer to IBM Tivoli Workload Scheduler Job Scheduling Console User's Guide Feature Level 1.3, SC32-1257, and to IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices, SG24-6628.

For the most current information about supported platforms and system requirements, refer to IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, SC32-1258.

An installation program is available for installing the Job Scheduling Console. You can install directly from the CDs. Alternatively, copy the CD to a network drive and map that network drive. You can install the Job Scheduling Console using any of the following installation mechanisms:

  • By using an installation wizard that guides the user through the installation steps

  • By using a response file that provides input to the installation program without user intervention

  • By using Software Distribution to distribute the Job Scheduling Console files
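As an illustration of the second mechanism, a response-file (silent) installation is typically launched from the command line as shown here (a hedged sketch: the response file name and path are hypothetical, the option names follow the InstallShield conventions used by the installation program, and the definitive syntax is documented in the Job Scheduling Console User's Guide referenced above):

 setup.exe -options "C:\temp\jsc_install.rsp" -silent 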

Here we provide an example of the first method, using the installation wizard interactively. The installation program can perform a number of actions:

  • Fresh install

  • Add new languages to an existing installation

  • Repair an existing installation

Here we assume that you are performing a fresh install. The installation is exactly the same for a non-cluster installation as for a clustered environment.

  1. Insert the IBM Tivoli Workload Scheduler Job Scheduling Console CD 1 in the CD-ROM drive.

  2. Navigate to the JSC directory.

  3. Locate the directory of the platform on which you want to install the Job Scheduling Console, and run the setup program for the operating system on which you are installing:

    • Windows: setup.exe

    • UNIX: setup.bin

  4. The installation program is launched. Select the language in which you want the program to be displayed, and click OK.

  5. Read the welcome information and click Next.

  6. Read the license agreement, select the acceptance radio button, and click Next.

  7. Select the location for the installation, or click Browse to install to a different directory. Click Next.

    Note 

    The Job Scheduling Console installation directory inherits the access rights of the directory where the installation is performed. Because the Job Scheduling Console requires user settings to be saved, it is important to select a directory in which users are granted access rights.

  8. On the dialog displayed, you can select the type of installation you want to perform:

    • Typical. English and the language of the locale are installed. Click Next.

    • Custom. Select the languages you want to install and click Next.

    • Full. All languages are automatically selected for installation. Click Next.

  9. A panel is displayed where you can select the locations for the program icons. Click Next.

  10. Review the installation settings and click Next. The installation is started.

  11. When the installation completes, a panel either reports a successful installation or lists the items that failed to install, together with the location of the log file that contains details of the errors.

  12. Click Finish.


