10.4 Create the Cluster

Now comes the "fun" part – creating a single-node cluster. To create a cluster, we use the clu_create(8) command. The clu_create command performs the following:

  • It prompts the System Administrator for the configuration information needed to create a single-node cluster.

  • It labels disks; creates the clusterwide root (/), /usr, and /var file systems and the first member's boot disk; copies data to the newly created file systems; creates CDSLs; and updates system configuration information.

  • It optionally configures a quorum disk.

  • It builds the kernel for the first cluster member, giving the System Administrator the opportunity to modify the kernel configuration file.

  • It sets the boot-related SRM console variables (bootdef_dev, boot_reset, and boot_dev) so that the cluster can be booted from the first member's boot_partition.

Now without further ado, let's build a cluster!

10.4.1 The clu_create(8) Command

Let's start the process by issuing the clu_create command. A nice feature of the clu_create command is that it tells you exactly what it will do and then prompts you to confirm that this is indeed what you want to do.

 # /usr/sbin/clu_create

 This is the TruCluster Creation Program

 You will need the following information in order to create a cluster:

     - Cluster name (a hostname which is also used as the
       default cluster alias)
     - Cluster alias IP address
     - Clusterwide root disk and partition (for example, dsk4b)
     - Clusterwide usr  disk and partition (for example, dsk4g)
     - Clusterwide var  disk and partition (for example, dsk4h)
     - Quorum disk device (for example, dsk4)
     - Number of votes assigned to the quorum disk
     - Member ID
     - Number of votes assigned to this member
     - First member's boot disk (for example, dsk5)
     - First member's virtual cluster interconnect IP name
     - First member's virtual cluster interconnect IP address
     - First member's physical cluster interconnect devices
     - First member's NetRAIN device name
     - First member's physical cluster interconnect IP address

 The program will prompt for this information, offering a default
 value when one is available. To accept the default value, press Return.

 If you need help responding to a prompt, either type the word 'help'
 or type a question mark (?) at the prompt.

 The program does not begin to create a cluster until you answer all
 the prompts, and confirm that the answers are correct.

 Cluster creation involves the following steps:

    Labeling disks (when required)
    Creating AdvFS domains
    Copying the files on the current root, usr, and var
      partitions to the clusterwide partitions
    Creating additional CDSLs
    Updating configuration files
    Building a kernel and copying it to the first member's boot disk

 After the kernel is built and copied, you will halt the system and
 boot it using the first member's boot disk.

 Do you want to continue creating the cluster? [yes]:

10.4.1.1 Enter the Cluster Alias Name

The first thing that we are prompted for is the cluster alias name. Here we check the Cluster Preparation Checklist and Worksheet for the cluster alias name and IP address that we had planned to use.

 Each cluster has a unique cluster name, which is a hostname used to
 identify the entire cluster.

 Enter a fully-qualified cluster name []:babylon5.dec.com

 Checking cluster name: babylon5.dec.com

 You entered 'babylon5.dec.com' as your cluster name.
 Is this correct? [yes]:

This will become the value of the clubase:cluster_name attribute, also known as the Default Cluster Alias (see Chapter 16 for more information).
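Once the member is up, the stored value can be read back; a minimal sketch, assuming the clubase subsystem is loaded (as it is on a running cluster member), querying the running kernel and then the on-disk /etc/sysconfigtab:

 # /sbin/sysconfig -q clubase cluster_name
 # grep cluster_name /etc/sysconfigtab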

10.4.1.2 Enter the Cluster Alias IP Address

 The cluster alias IP address is the IP address associated with the
 default cluster alias. (192.168.168.1 is an example of an IP address.)

 Enter the cluster alias IP address []:192.168.0.70

 Checking cluster alias IP address: 192.168.0.70

 You entered '192.168.0.70' as the IP address for the default cluster alias.
 Is this correct? [yes]:

The cluster alias IP address is placed in the hosts file in the /etc directory.
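After the cluster is created, a quick sanity check is to look for the entry directly; the line shown is what we would expect from our example values, not captured output, and the exact formatting may differ:

 # grep babylon5 /etc/hosts
 192.168.0.70    babylon5.dec.com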

10.4.1.3 Enter the cluster_root, cluster_usr, and cluster_var Partitions

Using the disk and partition planning information from the Cluster Preparation Checklist and Worksheet, we enter the partitions set aside for cluster_root, cluster_usr, and cluster_var.

 The cluster root partition is the disk partition (for example, dsk4b)
 that will hold the clusterwide root (/) file system.

     Note: The default 'a' partition on most disks is not large
     enough to hold the clusterwide root AdvFS domain.

 Enter the device name of the cluster root partition []:dsk6a

 Checking the cluster root partition: dsk6a

 You entered 'dsk6a' as the device name of the cluster root partition.
 Is this correct? [yes]:

 The cluster usr partition is the disk partition (for example, dsk4g)
 that will contain the clusterwide usr (/usr) file system.

     Note: The default 'g' partition on most disks is usually
     large enough to hold the clusterwide usr AdvFS domain.

 Enter the device name of the cluster usr partition []:dsk6g

 Checking the cluster usr partition: dsk6g

 You entered 'dsk6g' as the device name of the cluster usr partition.
 Is this correct? [yes]:

 The cluster var device is the disk partition (for example, dsk4h)
 that will hold the clusterwide var (/var) file system.

     Note: The default 'h' partition on most disks is usually
     large enough to hold the clusterwide var AdvFS domain.

 Enter the device name of the cluster var partition []:dsk6h

 Checking the cluster var partition: dsk6h

 You entered 'dsk6h' as the device name of the cluster var partition.
 Is this correct? [yes]:
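If there is any doubt about which partitions were set aside on the shared disk, re-reading the disk label is harmless; a minimal sketch using disklabel(8), assuming dsk6 is the shared disk from our worksheet:

 # /sbin/disklabel -r dsk6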

10.4.1.4 Enter the Quorum Disk

In planning this cluster, we set aside a quorum disk as we plan to have an even number of members in the cluster. In this section, we are prompted for the name of the quorum disk that was set aside.

 Do you want to define a quorum disk device at this time? [yes]:

 The quorum disk device is the name of the disk (for example, 'dsk5')
 that will be used as this cluster quorum disk.

 Enter the device name of the quorum disk []:dsk5

 Checking the quorum disk device: dsk5

 The device you have selected for the quorum disk must be re-labeled
 with new cnx partition data. Performing this operation may cause data
 contained on this device to be destroyed.

 Do you want to use this device anyway? [yes]:

 You entered 'dsk5' as the device name of the quorum disk device.
 Is this correct? [yes]:

Next, we are prompted for the number of votes the quorum disk will have. We choose to assign the quorum disk 1 vote; however, if we were to operate the cluster as a single-member cluster for a while, this should be set to zero votes.

 By default the quorum disk is assigned '1' vote(s). To use this
 default value, press Return at the prompt.

 The number of votes for the quorum disk is an integer usually 0 or 1.
 If you select 0 votes then the quorum disk will not contribute votes
 to the cluster. If you select 1 vote then the quorum disk must be
 accessible to boot and run a single member cluster.

 Enter the number of votes for the quorum disk [1]:

 Checking number of votes for the quorum disk: 1

 You entered '1' as the number votes for the quorum disk.
 Is this correct? [yes]:
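To see why the quorum disk's vote matters, it helps to work the arithmetic by hand. Assuming the usual TruCluster formula, quorum = trunc((expected votes + 2) / 2): with one 1-vote member plus a 1-vote quorum disk, expected votes are 2 and quorum is 2, which is why the prompt warns that the quorum disk must be accessible to run a single-member cluster; once a second 1-vote member joins, expected votes become 3 and quorum stays at 2, so the cluster can survive the loss of any one vote. A quick check from the shell (the numbers are our example's, not output from any cluster command):

 # echo $(( (2 + 2) / 2 ))    # one 1-vote member plus the quorum disk
 2
 # echo $(( (3 + 2) / 2 ))    # after a second 1-vote member joins
 2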

10.4.1.5 Enter the Member ID

Here we are prompted for the cluster member ID. As this is the first member of the cluster, we select the default of "1"; however, it can be any integer from 1 to 63.

By default, regardless of which member ID is chosen, the first cluster member is assigned 1 vote.

 The default member ID for the first cluster member is '1'. To use
 this default value, press Return at the prompt.

 A member ID is used to identify each member in a cluster. Each member
 must have a unique member ID, which is an integer in the range 1-63,
 inclusive.

 Enter a cluster member ID [1]:

 Checking cluster member ID: 1

 You entered '1' as the member ID.
 Is this correct? [yes]:

 By default the 1st member of a cluster is assigned '1' vote(s).

 Checking number of votes for this member: 1

This will become the value of the generic:memberid attribute.
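Once the member is running, the stored member ID can be read back the same way as the cluster name; a minimal sketch querying the running kernel and then the on-disk /etc/sysconfigtab:

 # /sbin/sysconfig -q generic memberid
 # grep memberid /etc/sysconfigtab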

10.4.1.6 Enter the Member Boot Disk

Using the planning information, we enter the disk that will be used as this member's boot disk. Because we pre-configured the disk's partitions with sizes different from the defaults (and marked the partitions "in-use"), note the dialogue about the partition sizes. This is how we were able to use an appropriately sized swap partition.

 Each member has its own boot disk, which has an associated device
 name; for example, 'dsk5'.

 Enter the device name of the member boot disk []:dsk1

 Checking the member boot disk: dsk1

 The specified disk contains the required 'a', 'b', and 'h' partitions.
 The current partition sizes are acceptable for a member's boot disk.

 You can either keep the current disk partition layout or have the
 installation program relabel the disk. If the program relabels the
 disk, the new label will contain the following partitions and sizes
 (in blocks):

    Current                 New
    -------                 ---
    a: 524288               a: 524288
    b: 1572864              b: 16777216
    h: 2048                 h: 2048

 Do you want to use the current disk partitions? [yes]:

 You entered 'dsk1' as the device name of this member's boot disk.
 Is this correct? [yes]:
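Disk label partition sizes are given in 512-byte blocks, so it is worth converting the 'b' (swap) partition sizes above before deciding whether to keep the current label; a quick conversion, plain shell arithmetic rather than output from any cluster tool:

 # echo $(( 1572864 * 512 / 1024 / 1024 ))     # current 'b' partition, in MB
 768
 # echo $(( 16777216 * 512 / 1024 / 1024 ))    # proposed relabeled 'b' partition, in MB
 8192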

10.4.1.7 Enter the Cluster Interconnect IP Name

Prior to TruCluster Server version 5.1A, there was only one type of cluster interconnect supported – the Memory Channel card. As such, at this point in the creation of the cluster, instead of "ics0" as the virtual cluster interconnect device, "mc0" would be presented as the cluster interconnect interface device. If you are building either a version 5.0A or a version 5.1 cluster, please substitute "mc0" for "ics0".

By default, the cluster interconnect IP name is the individual system's name with "-ics0" appended to it.[3] Prior to TruCluster Server version 5.1A, the default interconnect IP name is the individual system's name appended with "-mc0". For more information on this subject, please see section 4.4.1.5.

As this cluster interconnect IP name will be used for the internode communications among the cluster members, we recommend using this default value.

 Device 'ics0' is the default virtual cluster interconnect device

 Checking virtual cluster interconnect device: ics0

 The virtual cluster interconnect IP name 'molari-ics0' was formed by
 appending '-ics0' to the system's hostname. To use this default value,
 press Return at the prompt.

 Each virtual cluster interconnect interface has a unique IP name
 (a hostname) associated with it.

 Enter the IP name for the virtual cluster interconnect [molari-ics0]:

 Checking virtual cluster interconnect IP name: molari-ics0

 You entered 'molari-ics0' as the IP name for the virtual cluster interconnect.
 Is this name correct? [yes]:

10.4.1.8 Enter the Cluster Interconnect IP Address

By default, the subnet 10.0.0 is used as the private network for internode communications between the cluster members. Any subnet that is or will be a private network and that is not in your DNS configuration should also work. For more information, please see sections 4.4.1.5 and 4.4.2.3.

Also by default, the member ID replaces the last byte of the 10.0.0.0 subnet address to form the default cluster interconnect IP address (10.0.0.1 for member 1). For simplicity, and because the address is reserved per RFC 1918 (private and non-routable), using this default as the cluster interconnect IP address makes good sense.

 The virtual cluster interconnect IP address '10.0.0.1' was created by
 replacing the last byte of the default virtual cluster interconnect
 network address '10.0.0.0' with the previously chosen member ID '1'.
 To use this default value, press Return at the prompt.

 The virtual cluster interconnect IP address is the IP address
 associated with the virtual cluster interconnect IP name.
 (192.168.168.1 is an example of an IP address.)

 Enter the IP address for the virtual cluster interconnect [10.0.0.1]:

 Checking virtual cluster interconnect IP address: 10.0.0.1

 You entered '10.0.0.1' as the IP address for the virtual cluster interconnect.
 Is this address correct? [yes]:

10.4.1.9 Enter the Cluster Interconnect Type

This section of the clu_create prompts is new and not found in TruCluster Server version 5.0A or 5.1. With the addition of the Ethernet Local Area Network (LAN) as a cluster interconnect in TruCluster Server version 5.1A, there had to be a way to choose between Memory Channel and a LAN as the cluster interconnect. In this section, we select which cluster interconnect type to use for our cluster. For more detailed information on configuring the different types of cluster interconnect hardware, please review section 4.4.

Here we will provide two separate examples using the different cluster interconnect types. The first example uses the Memory Channel, and the second uses the Ethernet LAN.

  • Using Memory Channel as a cluster interconnect is very straightforward. We simply select Memory Channel as the type of interconnect, verify that this is what we want, and we are done.

     What type of cluster interconnect will you be using?

     Selection      Type of Interconnect
     ----------------------------------------------------------------------
         1          Memory Channel
         2          Local Area Network
         3          None of the above
         4          Help
         5          Display all options again
     ----------------------------------------------------------------------
     Enter your choice [1]:1

     You selected option '1' for the cluster interconnect
     Is that correct? (y/n) [y]:y

     Device 'mc0' is the default physical cluster interconnect interface device

     Checking physical cluster interconnect interface device name(s): mc0

  • Using an Ethernet LAN as a cluster interconnect is a bit more complex than using a Memory Channel card. Instead of selecting Memory Channel, we select Local Area Network as the type of interconnect and then verify that this is indeed the interconnect that we want. However, we are not finished with our selections.

     What type of cluster interconnect will you be using?

     Selection      Type of Interconnect
     ----------------------------------------------------------------------
         1          Memory Channel
         2          Local Area Network
         3          None of the above
         4          Help
         5          Display all options again
     ----------------------------------------------------------------------
     Enter your choice [1]:2

     You selected option '2' for the cluster interconnect
     Is that correct? (y/n) [y]:

    In this example, we use a gigabit Ethernet LAN card as a cluster interconnect, and we must now tell the software the type of device that it is. In this case, the device is "alt0". For the purposes of our example and as we are only using one Ethernet device as a cluster interconnect, we choose not to put this Ethernet device into a NetRAIN set. For more information on NetRAIN and the benefits of using NetRAIN, please refer to Chapters 9 and 12.

     The physical cluster interconnect interface device is the name of
     the physical device(s) which will be used for low level cluster
     node communications. Examples of the physical cluster interconnect
     interface device name are: tu0, ee0, and nr0.

     Enter the physical cluster interconnect device name(s) []:alt0

     Would you like to place this Ethernet device into a NetRAIN set? [yes]:no

     Checking physical cluster interconnect interface device name(s): alt0

     You entered 'alt0' as your physical cluster interconnect interface
     device name(s).
     Is this correct? [yes]:

    Now that the software knows what the physical cluster interconnect device is, we need the physical cluster interconnect IP name and IP address. The physical cluster interconnect IP name is formed by appending the member ID and "-icstcp0" to the word "member"; in our example, this is member1-icstcp0. This is done automatically for us.

  • The default physical cluster interconnect IP address is formed by replacing the last byte of the default private subnet 10.1.0.0 with the member ID "1", yielding 10.1.0.1. Again, we accept this default for simplicity: it is a reserved address per RFC 1918 (private and non-routable) and meets our needs in creating a cluster.

     The physical cluster interconnect IP name 'member1-icstcp0' was
     formed by appending '-icstcp0' to the word 'member' and the member ID.

     Checking physical cluster interconnect IP name: member1-icstcp0

     The physical cluster interconnect IP address '10.1.0.1' was created
     by replacing the last byte of the default cluster interconnect
     network address '10.1.0.0' with the previously chosen member ID '1'.
     To use this default value, press Return at the prompt.

     The cluster physical interconnect IP address is the IP address
     associated with the physical cluster interconnect IP name.
     (192.168.168.1 is an example of an IP address.)

     Enter the IP address for the physical cluster interconnect [10.1.0.1]:

     Checking physical cluster interconnect IP address: 10.1.0.1

     You entered '10.1.0.1' as the IP address for the physical cluster interconnect.
     Is this address correct? [yes]:

    For more information on configuring an Ethernet LAN card as a cluster interconnect, please see section 4.4.2.
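Once the member is booted, a quick way to confirm that the LAN interconnect came up is to look at the interface and the interconnect host entries; a minimal sketch, assuming the alt0 device and the member1-icstcp0 name from our example:

 # /sbin/ifconfig alt0
 # netstat -i
 # grep icstcp0 /etc/hosts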

10.4.1.10 Input Summary

Now that we have completed all the inputs required to create a single-node cluster, we are given a chance to review everything we have entered. Because we have two separate examples using the different cluster interconnect types, we also have two different configuration summaries.

  • This first configuration summary is when we used Memory Channel as a cluster interconnect. If all the entries look good, we just answer "yes" when prompted if we want to create the cluster.

     You entered the following information:

     Cluster name:                                             babylon5.dec.com
     Cluster alias IP Address:                                 192.168.0.70
     Clusterwide root partition:                               dsk6a
     Clusterwide usr partition:                                dsk6g
     Clusterwide var partition:                                dsk6h
     Clusterwide i18n partition:                               Not-Applicable
     Quorum disk device:                                       dsk5
     Number of votes assigned to the quorum disk:              1
     First member's member ID:                                 1
     Number of votes assigned to this member:                  1
     First member's boot disk:                                 dsk1
     First member's virtual cluster interconnect device name:  ics0
     First member's virtual cluster interconnect IP name:      molari-ics0
     First member's virtual cluster interconnect IP address:   10.0.0.1
     First member's physical cluster interconnect devices      mc0
     First member's NetRAIN device name                        Not-Applicable
     First member's physical cluster interconnect IP address   Not-Applicable

     If you want to change any of the above information, answer 'n' to
     the following prompt. You will then be given an opportunity to
     change your selections.

     Do you want to continue to create the cluster? [yes]:

  • This last configuration summary is for when we used the Ethernet LAN as a cluster interconnect. Again, if we agree that all the entries look good, we just answer "yes" when prompted if we want to create the cluster.

    Notice that there is a subtle difference between using the Memory Channel and the Ethernet LAN as a cluster interconnect. Aside from the physical cluster interconnect device being different, can you tell if anything else is different? What about the physical cluster interconnect IP address? When we use the Ethernet LAN as a cluster interconnect, we are actually asked for this as an input.

     You entered the following information:

     Cluster name:                                             babylon5.dec.com
     Cluster alias IP Address:                                 192.168.0.70
     Clusterwide root partition:                               dsk6a
     Clusterwide usr partition:                                dsk6g
     Clusterwide var partition:                                dsk6h
     Clusterwide i18n partition:                               Not-Applicable
     Quorum disk device:                                       dsk5
     Number of votes assigned to the quorum disk:              1
     First member's member ID:                                 1
     Number of votes assigned to this member:                  1
     First member's boot disk:                                 dsk1
     First member's virtual cluster interconnect device name:  ics0
     First member's virtual cluster interconnect IP name:      molari-ics0
     First member's virtual cluster interconnect IP address:   10.0.0.1
     First member's physical cluster interconnect devices      alt0
     First member's NetRAIN device name                        Not-Applicable
     First member's physical cluster interconnect IP address   10.1.0.1

     If you want to change any of the above information, answer 'n' to
     the following prompt. You will then be given an opportunity to
     change your selections.

     Do you want to continue to create the cluster? [yes]:

Now that we are satisfied that all our entries match our plan for the cluster, as recorded on the Cluster Preparation Checklist and Worksheet, we are ready to configure the cluster.

10.4.1.11 Configuring the Cluster

As we have already provided the clu_create software with all the inputs required to create a single-node cluster, let's see what happens during the actual configuration and creation of the cluster.

First, the member disk and the quorum disk are labeled, and the cnx partition on each is initialized.

 Creating required disk labels.
   Creating disk label on member disk : dsk1
   Initializing cnx partition on member disk : dsk1h
   Creating disk label on quorum disk : dsk5
   Initializing cnx partition on quorum disk : dsk5h

Next, the AdvFS domains for all our system level file systems are created for the cluster and for the first cluster member. These AdvFS domains are cluster_root, cluster_usr, cluster_var, and finally, the first cluster member's boot_partition (root1_domain).

 Creating AdvFS domains:
   Creating AdvFS domain 'root1_domain#root' on partition '/dev/disk/dsk1a'.
   Creating AdvFS domain 'cluster_root#root' on partition '/dev/disk/dsk6a'.
   Creating AdvFS domain 'cluster_usr#usr' on partition '/dev/disk/dsk6g'.
   Creating AdvFS domain 'cluster_var#var' on partition '/dev/disk/dsk6h'.
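At this point the new domains can be examined with the standard AdvFS tools; a minimal sketch (showfdmn reports a domain's attributes and volumes, showfsets its filesets):

 # showfdmn cluster_root
 # showfsets cluster_root
 # showfdmn root1_domain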

Now that the cluster-common file systems are created, we start to populate these file systems with data from the original Tru64 UNIX system disk. As we have stated previously, Tru64 UNIX provides the foundation for TruCluster Server. And you thought we were just kidding!

 Populating clusterwide root, usr, and var file systems:
   Copying root file system to 'cluster_root#root'.
   Copying usr file system to 'cluster_usr#usr'.
   Copying var file system to 'cluster_var#var'.

Once all the data is copied from the original Tru64 UNIX system disk, Context Dependent Symbolic Links (CDSLs) are created in all the new system level file systems. Now we start to see a bit of the magic that we call the "Cluster Hooks." We will expand on our discussion of the "Cluster Hooks" in Chapter 12.

 Creating Context Dependent Symbolic Links (CDSLs) for file systems:
   Creating CDSLs in root file system.
   Creating CDSLs in usr file system.
   Creating CDSLs in var file system.
   Creating links between clusterwide file systems
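A CDSL is simply a symbolic link whose target contains the {memb} token, which is resolved at run time to the booting member's member-specific directory (for example, /etc/rc.config typically points to ../cluster/members/{memb}/etc/rc.config; the exact target shown is our assumption, not captured output). New CDSLs are created with the mkcdsl(8) command. To look at one once the cluster is up:

 # ls -l /etc/rc.config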

In the next stage, the first cluster member's root file system is populated.

 Populating member's root file system. 

Finally, in this next section, system level files are either created or updated based upon the configuration entries that we previously provided. As we provided separate configurations for the two examples using the different types of cluster interconnects, we will continue to use separate examples for the two different cluster interconnect configurations.

  • This is the output from the configuration using Memory Channel as a cluster interconnect.

     Modifying configuration files required for cluster operation:
       Creating /etc/fstab file.
       Configuring cluster alias.
       Updating /etc/hosts - adding IP address '192.168.0.70' and hostname 'babylon5.dec.com'
       Updating member-specific /etc/inittab file with 'cms' entry.
       Updating /etc/hosts - adding IP address '10.0.0.1' and hostname 'molari-ics0'
       Updating /etc/rc.config file.
       Updating /etc/sysconfigtab file.
       Retrieving cluster_root major and minor device numbers.
       Creating cluster device file CDSLs.
       Updating /.rhosts - adding hostname 'babylon5.dec.com'.
       Updating /etc/hosts.equiv - adding hostname 'babylon5.dec.com'
       Updating /.rhosts - adding hostname 'molari-ics0'.
       Updating /etc/hosts.equiv - adding hostname 'molari-ics0'
       Updating /etc/ifaccess.conf - adding deny entry for 'ee0'
       Updating /etc/ifaccess.conf - adding deny entry for 'sl0'
       Finished updating member1-specific area.

  • This is the output from a configuration using Ethernet LAN as the cluster interconnect.

     Modifying configuration files required for cluster operation:
       Creating /etc/fstab file.
       Configuring cluster alias.
       Updating /etc/hosts - adding IP address '192.168.0.70' and hostname 'babylon5.dec.com'
       Updating member-specific /etc/inittab file with 'cms' entry.
       Updating /etc/hosts - adding IP address '10.0.0.1' and hostname 'molari-ics0'
       Updating /etc/hosts - adding IP address '10.1.0.1' and hostname 'member1-icstcp0'
       Updating /etc/rc.config file.
       Updating /etc/sysconfigtab file.
       Retrieving cluster_root major and minor device numbers.
       Creating cluster device file CDSLs.
       Updating /.rhosts - adding hostname 'babylon5.dec.com'.
       Updating /etc/hosts.equiv - adding hostname 'babylon5.dec.com'
       Updating /.rhosts - adding hostname 'molari-ics0'.
       Updating /etc/hosts.equiv - adding hostname 'molari-ics0'
       Updating /.rhosts - adding hostname 'member1-icstcp0'.
       Updating /etc/hosts.equiv - adding hostname 'member1-icstcp0'
       Updating /etc/ifaccess.conf - adding deny entry for 'ee0'
       Updating /etc/ifaccess.conf - adding deny entry for 'sl0'
       Finished updating member1-specific area.

  • Next, the new kernel for the first cluster member is built and copied into place.

     Building a kernel for this member.
       Saving kernel build configuration.
       The kernel will now be configured using the doconfig program.

     *** KERNEL CONFIGURATION AND BUILD PROCEDURE ***

     Saving /sys/conf/MOLARI as /sys/conf/MOLARI.bck

     *** PERFORMING KERNEL BUILD ***
             Working....Mon Mar 25 12:03:40 PST 2002

     The new kernel is /sys/MOLARI/vmunix

       Finished running the doconfig program.
       The kernel build was successful and the new kernel
       has been copied to this member's boot disk.
       Restoring kernel build configuration.
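If the member's kernel configuration file later needs changes, the same doconfig mechanism can be run by hand; a minimal sketch, assuming the MOLARI configuration name from the transcript above. The resulting /sys/MOLARI/vmunix must then be copied to the member's boot partition, as clu_create does here.

 # doconfig -c MOLARI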

10.4.1.12 Updating the SRM Console Variables

In this next section, the boot-dependent SRM console variables are either created or updated to allow for the boot from the newly created cluster member's boot disk. It is important to note that these boot-dependent SRM console variables are not being set to root (/) but to the cluster member's boot disk.

 Updating console variables
   Setting console variable 'bootdef_dev' to dsk1
   Setting console variable 'boot_dev' to dsk1
   Setting console variable 'boot_reset' to ON
   Saving console variables to non-volatile storage

10.4.1.13 Completing the Creation of the New Single-Node Cluster

Finally, the clu_create command completes and asks if we want to reboot the system to bring up the newly created single-node cluster.

 clu_create: Cluster created successfully.

 To run this system as a single member cluster it must be rebooted.
 If you answer yes to the following question clu_create will reboot
 the system for you now. If you answer no, you must manually reboot
 the system after clu_create exits.

 Would you like clu_create to reboot this system now? [yes]

10.4.2 Check the Cluster Configuration

After the system is rebooted, it comes back up as a new single-node cluster. Let's check and see what has changed.

10.4.2.1 Checking the File Systems

With regard to the file systems, things look a little different. Using the mount command, we see three new cluster file systems used for root (/), /usr, and /var. We also see that the new cluster member's boot_partition is mounted separately.

 # /sbin/mount
 cluster_root#root on / type advfs (rw)
 cluster_var#var on /var type advfs (rw)
 root1_domain#root on /cluster/members/member1/boot_partition type advfs (rw)
 cluster_usr#usr on /usr type advfs (rw)
 /proc on /proc type procfs (rw)

Taking a closer look, we see that four new AdvFS domains have been created for these file systems. We also see that the AdvFS domains for the original Tru64 UNIX system level file systems are still available but not mounted.

 # ls /etc/fdmns
 .advfslock_cluster_root          cluster_root/
 .advfslock_cluster_usr           cluster_usr/
 .advfslock_cluster_var           cluster_var/
 .advfslock_root1_domain          root1_domain/
 .advfslock_root_domain           root_domain/
 .advfslock_usr_domain            usr_domain/
 .advfslock_var_domain            var_domain/

By looking at the directories for the domains, you can verify that the devices we chose were set. Here is one possible approach:

 # for i in $(ls -l /etc/fdmns | awk '/^d/ { print $9 }')
 > do
 >   cd /etc/fdmns/$i
 >   print "\n[$PWD]"
 >   ls -l | awk '{ print "\t",$9,$10,$11 }'
 > done

 [/etc/fdmns/cluster_root]
         dsk6a -> /dev/disk/dsk6a

 [/etc/fdmns/cluster_usr]
         dsk6g -> /dev/disk/dsk6g

 [/etc/fdmns/cluster_var]
         dsk6h -> /dev/disk/dsk6h

 [/etc/fdmns/root1_domain]
         dsk1a -> /dev/disk/dsk1a

 [/etc/fdmns/root_domain]
         dsk0a -> /dev/disk/dsk0a

 [/etc/fdmns/usr_domain]
         dsk0g -> /dev/disk/dsk0g

 [/etc/fdmns/var_domain]
         dsk0h -> /dev/disk/dsk0h

10.4.2.2 Checking the SRM Console Variables

We also see that the boot-related SRM console variables are still set from the creation of the cluster.

 # /sbin/consvar -v -l | grep boot
 boot_dev = dsk1
 bootdef_dev = dsk1
 booted_dev = dsk1
 boot_file =
 booted_file =
 boot_osflags = A
 booted_osflags = A
 boot_reset = ON

10.4.2.3 Checking the clu_create Command Logs

To review what occurred during the creation of the single-node cluster, we encourage you to review the log created by the clu_create command. This log file, /cluster/admin/clu_create.log, is created or updated every time the clu_create command is executed.
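A quick way to scan the log for anything that went wrong (the search patterns are just examples):

 # egrep -i 'error|warn|fail' /cluster/admin/clu_create.log
 # tail -30 /cluster/admin/clu_create.log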

10.4.2.4 Checking the Cluster Using the hwmgr(8) Command

Using the hwmgr command, you can also obtain information about the new cluster.

 # /sbin/hwmgr -view cluster
 Member ID     State    Member HostName
 ---------     -----    ---------------
     1         UP       molari.dec.com (localhost)
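Two other TruCluster commands worth running at this point are clu_get_info(8), which summarizes cluster and member state, and clu_quorum(8), which reports the vote and quorum configuration we just set up; a minimal sketch (output omitted, and the exact fields vary by version):

 # /usr/sbin/clu_get_info
 # /usr/sbin/clu_get_info -full
 # /usr/sbin/clu_quorum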

[3] This is for TruCluster Server version 5.1A or higher.



