There is an old saying, "Trust but verify." In this next section, we will verify that the single-node cluster was created properly and is operational. It is important to verify that the cluster configuration actually works before adding any additional cluster nodes. We do this by checking each of the cluster's software subsystems in turn.
The clu_check_config(8) command allows us to check the overall configuration of the new cluster and its subsystems. This command executes other commands that check the individual subsystems.
# /usr/sbin/clu_check_config
Starting Cluster Configuration Check...
***************** Log Start *****************
Sun Apr 21 00:59:06 PDT 2002
*****
***** Output from running clu_get_info -full
*****
Cluster information for cluster babylon5
    Number of members configured in this cluster = 1
    memberid for this member = 1
    Cluster incarnation = 0x15512
    Cluster expected votes = 2
    Current votes = 2
    Votes required for quorum = 1
    Quorum disk = dsk5h
    Quorum disk votes = 1

Information on each cluster member

Cluster memberid = 1
    Hostname = molari.dec.com
    Cluster interconnect IP name = molari-ics0
    Cluster interconnect IP address = 10.0.0.1
    Member state = UP
    Member base O/S version = Compaq Tru64 UNIX V5.1A (Rev. 1885)
    Member cluster version = TruCluster Server V5.1A (Rev. 1312)
    Member running version = INSTALLED
    Member name = molari
    Member votes = 1
    csid = 0x20001
*****
***** Output from running cfsmgr -v
*****
Domain or filesystem name = cluster_root#root
    Mounted On = /
    Server Name = molari
    Server Status : OK

Domain or filesystem name = root1_domain#root
    Mounted On = /cluster/members/member1/boot_partition
    Server Name = molari
    Server Status : OK

Domain or filesystem name = cluster_var#var
    Mounted On = /var
    Server Name = molari
    Server Status : OK

Domain or filesystem name = cluster_usr#usr
    Mounted On = /usr
    Server Name = molari
    Server Status : OK
*****
***** Output from running cluamgr -s all
*****
***** Running cluamgr on member molari-ics0
Status of Cluster Alias: babylon5.dec.com
    netmask: 0
    aliasid: 1
    flags: 7<ENABLED,DEFAULT,IP_V4>
    connections rcvd from net: 29
    connections forwarded: 22
    connections rcvd within cluster: 14
    data packets received from network: 8364
    data packets forwarded within cluster: 943
    datagrams received from network: 2190
    datagrams forwarded within cluster: 37
    datagrams received within cluster: 3063
    fragments received from network: 0
    fragments forwarded within cluster: 0
    fragments received within cluster: 0
Member Attributes:
    memberid: 1, selw=3, selp=1, rpri=1 flags=11<JOINED,ENABLED>
*****
***** Checking daemons on members
*****
***** Checking member molari-ics0 daemons
aliasd is RUNNING
aliasd_niff is RUNNING
/sbin/kloadsrv is RUNNING
/usr/sbin/evmd is RUNNING
/usr/sbin/niffd is RUNNING
/usr/sbin/portmap is RUNNING
/usr/sbin/caad is RUNNING
/usr/sbin/clu_wall is RUNNING
/usr/sbin/xntpd is RUNNING
/usr/sbin/gated is RUNNING
/usr/sbin/clu_mibs is RUNNING
/usr/sbin/rdginit is RUNNING
/usr/sbin/snmpd is RUNNING
/usr/sbin/syslogd is RUNNING
/usr/sbin/binlogd is RUNNING
/usr/sbin/mountd is RUNNING
/usr/sbin/nfsd is RUNNING
/usr/sbin/smsd is RUNNING
*****
***** Checking time synchronization between members
*****
molari-ics0: delay:0.000976 offset:0.000488
Sun Apr 21 00:59:10 2002
check_cdsl_config :Checking installed CDSLs
check_cdsl_config :Successfully verified CDSLs configuration
clu_check_config : no configuration errors or warnings were detected
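If you want to script this verification, one simple approach is to capture the clu_check_config output to a file and test for its success message. The following is a minimal sketch, not part of the TruCluster toolset; the function name and the log fragment are purely illustrative:

```python
def config_check_passed(log_text: str) -> bool:
    """Return True if a captured clu_check_config log reports success.

    When no problems are found, clu_check_config prints:
        clu_check_config : no configuration errors or warnings were detected
    """
    return "no configuration errors or warnings were detected" in log_text

# Example, using the tail end of the log shown above:
log = """check_cdsl_config :Successfully verified CDSLs configuration
clu_check_config : no configuration errors or warnings were detected"""
print(config_check_passed(log))  # True
```

A check like this can be dropped into a nightly cron job so that a configuration regression is noticed before a second member is added.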
Let's now check the Connection Manager subsystem. The clu_get_info(8) command provides detailed information about the cluster and all of its members. This command is also one of those executed by clu_check_config.
# /usr/sbin/clu_get_info -full
Cluster information for cluster babylon5
    Number of members configured in this cluster = 1
    memberid for this member = 1
    Cluster incarnation = 0xb63fd
    Cluster expected votes = 2
    Current votes = 2
    Votes required for quorum = 1
    Quorum disk = dsk5h
    Quorum disk votes = 1

Information on each cluster member

Cluster memberid = 1
    Hostname = molari.dec.com
    Cluster interconnect IP name = molari-ics0
    Cluster interconnect IP address = 10.0.0.1
    Member state = UP
    Member base O/S version = Compaq Tru64 UNIX V5.1A (Rev. 1885)
    Member cluster version = TruCluster Server V5.1A (Rev. 1312)
    Member running version = INSTALLED
    Member name = molari
    Member votes = 1
    csid = 0x10001
The clu_quorum(8) command is used for configuring or deleting a quorum disk and for adjusting the quorum disk votes, individual member node votes, and overall expected votes. This command can also be used to display the current quorum configuration of a cluster. For more detailed information concerning the clu_quorum command, please read Chapter 17 on the Connection Manager or the clu_quorum(8) reference page.
In any event, let's look at quorum in our new single-node cluster using the clu_quorum command.
# /usr/sbin/clu_quorum
Cluster Quorum Data for: babylon5 as of Sun Apr 21 01:27:16 PDT 2002

Cluster Common Quorum Data
Quorum disk: dsk5h

    File: /etc/sysconfigtab.cluster
    Attribute           File Value
    expected votes      2

Member 1 Quorum Data
Host name: molari.dec.com
Status:    UP

    File: /cluster/members/member1/boot_partition/etc/sysconfigtab
    Attribute           Running Value    File Value
    current votes       2                N/A
    quorum votes        1                N/A
    expected votes      2                2
    node votes          1                1
    qdisk votes         1                1
    qdisk major         19               19
    qdisk minor         384              384
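The basic quorum rule is that a cluster can operate only while its current vote total meets or exceeds the votes required for quorum. As a simplified illustration of that rule (the function is not part of any Tru64 tool, and the vote values are taken from the clu_quorum output above):

```python
def has_quorum(current_votes: int, required_votes: int) -> bool:
    """Simplified model of the Connection Manager's quorum rule:
    the cluster operates only while current votes >= required votes."""
    return current_votes >= required_votes

# Values from the output above: member (1 vote) + quorum disk (1 vote)
# give 2 current votes, with 1 vote required for quorum.
print(has_quorum(2, 1))  # True
print(has_quorum(0, 1))  # False
```

In this single-node cluster, the quorum disk's vote means quorum survives a reboot of the lone member's partner once a second node is added later.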
The cluamgr(8) command is used to manage and report information and statistics about cluster aliases. As this cluster was just created, there should only be information about one cluster alias – the default cluster alias. Let's see what information is provided.
# /usr/sbin/cluamgr -s all
Status of Cluster Alias: babylon5.dec.com
    netmask: 0
    aliasid: 1
    flags: 7<ENABLED,DEFAULT,IP_V4>
    connections rcvd from net: 30
    connections forwarded: 22
    connections rcvd within cluster: 17
    data packets received from network: 8796
    data packets forwarded within cluster: 1361
    datagrams received from network: 2190
    datagrams forwarded within cluster: 37
    datagrams received within cluster: 3068
    fragments received from network: 0
    fragments forwarded within cluster: 0
    fragments received within cluster: 0
Member Attributes:
    memberid: 1, selw=3, selp=1, rpri=1 flags=11<JOINED,ENABLED>
As we can see, the cluster alias is "babylon5.dec.com" and the cluster has only one member. This is evident from the Member Attributes section, which lists only memberid 1.
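The flags fields in the cluamgr output pair a numeric value with the decoded flag names, as in "7&lt;ENABLED,DEFAULT,IP_V4&gt;". If you ever need to pick those fields apart in a monitoring script, a small parser is enough. This is an illustrative sketch only; the function name is an assumption:

```python
import re

def parse_alias_flags(field: str):
    """Split a cluamgr-style flags field such as '7<ENABLED,DEFAULT,IP_V4>'
    into its numeric value and the list of decoded flag names."""
    m = re.fullmatch(r"(\d+)<([^>]*)>", field.strip())
    if m is None:
        raise ValueError(f"unrecognized flags field: {field!r}")
    return int(m.group(1)), m.group(2).split(",")

value, names = parse_alias_flags("7<ENABLED,DEFAULT,IP_V4>")
print(value, names)  # 7 ['ENABLED', 'DEFAULT', 'IP_V4']
```

The same parser handles the member-attribute flags, such as "11&lt;JOINED,ENABLED&gt;".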
More information on cluster aliases can be found in Chapter 16 on the Cluster Alias Subsystem.
The command used to configure and report on the DRD subsystem is the drdmgr(8) command. Let's use the drdmgr command to look at the properties of the disk containing the cluster_root file system.
First, let's find out what disk is used for the cluster_root domain.
# /sbin/showfdmn cluster_root

               Id              Date Created  LogPgs  Version  Domain Name
3d013c89.0000fc70  Thu Apr 21 16:06:49 2002     512        4  cluster_root

  Vol    512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L     1048576   769360     27%     on    256    256   /dev/disk/dsk6a
Now let's use the drdmgr command to examine dsk6.
# /sbin/drdmgr dsk6
View of Data from member molari as of 2002-04-21:01:24:17

                 Device Name: dsk6
                 Device Type: Direct Access IO Disk
               Device Status: OK
           Number of Servers: 1
                 Server Name: molari
                Server State: Server
          Access Member Name: molari
         Open Partition Mask: 0xc1 < a g h >
Statistics for Client Member: molari
   Number of Read Operations: 8189
  Number of Write Operations: 910513
        Number of Bytes Read: 145510400
     Number of Bytes Written: 8068251648
Notice that molari is both the server and the client (Access Member Name) for this disk. Also notice that three partitions are open on this disk (from the Open Partition Mask) – partitions a, g, and h. These partitions correspond to the AdvFS domains for cluster_root, cluster_usr, and cluster_var.
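The Open Partition Mask is simply a bitmask over the partition letters. The following sketch decodes it, assuming bit 0 corresponds to partition "a", bit 1 to "b", and so on, which matches the 0xc1 &lt; a g h &gt; output above (0xc1 is binary 11000001, i.e., bits 0, 6, and 7):

```python
def open_partitions(mask: int) -> list:
    """Decode a drdmgr Open Partition Mask into partition letters,
    assuming bit 0 = partition 'a', bit 1 = 'b', ... bit 7 = 'h'."""
    return [chr(ord('a') + bit) for bit in range(8) if mask & (1 << bit)]

print(open_partitions(0xc1))  # ['a', 'g', 'h']
```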
Now let's look at the quorum disk.
# /sbin/drdmgr dsk5
View of Data from member molari as of 2002-04-21:01:29:39

                 Device Name: dsk5
                 Device Type: Direct Access IO Disk
               Device Status: OK
           Number of Servers: 1
                 Server Name: molari
                Server State: Server
          Access Member Name: molari
         Open Partition Mask: 0
Statistics for Client Member: molari
   Number of Read Operations: 200979
  Number of Write Operations: 200977
        Number of Bytes Read: 205805568
     Number of Bytes Written: 102903808
As you can see, there appears to be quite a bit of I/O activity to this disk, yet there do not appear to be any open partitions. As this is the quorum disk, the Connection Manager accesses it without keeping any partitions open.
For more detailed information on the DRD subsystem, please review Chapter 15 on the Device Request Dispatcher.
The cfsmgr(8) command is used to manage and gather information on the mounted file systems in a cluster.
With the creation of a single-node cluster, let's see what information the cfsmgr command provides.
# /sbin/cfsmgr
Domain or filesystem name = cluster_root#root
    Mounted On = /
    Server Name = molari
    Server Status : OK

Domain or filesystem name = root1_domain#root
    Mounted On = /cluster/members/member1/boot_partition
    Server Name = molari
    Server Status : OK

Domain or filesystem name = cluster_var#var
    Mounted On = /var
    Server Name = molari
    Server Status : OK

Domain or filesystem name = cluster_usr#usr
    Mounted On = /usr
    Server Name = molari
    Server Status : OK
Since this is a single-node cluster, it is reasonable to expect that the server for all the file systems is the only member in the cluster. For more detailed information on the Cluster File System subsystem, please refer to Chapter 13.
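If you want to confirm programmatically that every file system is served and healthy, captured cfsmgr output is easy to parse: each record starts with a "Domain or filesystem name" line, and attribute lines use either "=" or ":" as the separator. A minimal sketch, with an illustrative function name and a sample taken from the output above:

```python
import re

def parse_cfsmgr(output: str) -> dict:
    """Build a {domain#fileset: {attribute: value}} map from captured
    cfsmgr output. Attribute lines use '=' or ':' as the separator."""
    filesystems, current = {}, None
    for line in output.splitlines():
        m = re.match(r"\s*([^=:]+?)\s*[=:]\s*(.*)", line)
        if not m:
            continue                      # blank or non-attribute line
        key, value = m.group(1), m.group(2)
        if key == "Domain or filesystem name":
            current = value               # start a new record
            filesystems[current] = {}
        elif current is not None:
            filesystems[current][key] = value
    return filesystems

sample = """Domain or filesystem name = cluster_root#root
Mounted On = /
Server Name = molari
Server Status : OK"""
info = parse_cfsmgr(sample)
print(info["cluster_root#root"]["Server Status"])  # OK
```

From there, a script can flag any entry whose Server Status is not OK.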
The caa_stat(8) command is used to obtain the status of applications under CAA subsystem control. Applications under CAA control can usually run on only one cluster node at a time. CAA is the high-availability mechanism that fails an application over to another member if the cluster node where it was running becomes unavailable.
By default, cluster_lockd is usually configured as a CAA application on a new cluster. The caa_stat command reports on its state.
# /usr/bin/caa_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
cluster_lockd  application    ONLINE    ONLINE    molari
dhcp           application    OFFLINE   OFFLINE
named          application    OFFLINE   OFFLINE
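The tabular output above is also convenient to post-process, for example to alert when a resource whose target is ONLINE is not actually ONLINE. A minimal parsing sketch (illustrative only; note that the Host column is empty for OFFLINE resources, so those rows have one fewer field):

```python
def parse_caa_stat(output: str) -> dict:
    """Turn captured 'caa_stat -t' output into {name: (target, state, host)}.
    Host is None for resources that are not running anywhere."""
    resources = {}
    for line in output.splitlines()[2:]:   # skip header line and ruler
        fields = line.split()
        if len(fields) >= 4:
            name, _rtype, target, state = fields[:4]
            host = fields[4] if len(fields) > 4 else None
            resources[name] = (target, state, host)
    return resources

sample = """Name           Type           Target    State     Host
------------------------------------------------------------
cluster_lockd  application    ONLINE    ONLINE    molari
dhcp           application    OFFLINE   OFFLINE
named          application    OFFLINE   OFFLINE"""
print(parse_caa_stat(sample)["cluster_lockd"])  # ('ONLINE', 'ONLINE', 'molari')
```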
For more information on CAA, please review Chapters 23 and 24 on Cluster Application Availability.
From the following output, we can see that we have a network up and available on the Memory Channel interconnect.
# netstat -I ics0
Name  Mtu   Network  Address            Ipkts    Ierrs  Opkts    Oerrs  Coll
ics0  7000  <Link>   ics0:42.0.0.0.0.1  82417    0      82791    0      0
ics0  7000  10.0.0   molari-ics0        82417    0      82791    0      0
Notice that the alias for our cluster, babylon5, does not appear on the interface associated with the subnet that we are using.
# netstat -I ee0
Name  Mtu   Network     Address           Ipkts    Ierrs  Opkts   Oerrs  Coll
ee0   1500  <Link>      0:50:8b:ae:fe:dc  1172451  0      115869  0      0
ee0   1500  DLI         none              1172451  0      115869  0      0
ee0   1500  138.127.89  molari            1172451  0      115869  0      0