If you have LSM configured on your standalone system prior to creating the cluster, then LSM will be configured automatically. Once LSM is configured in a cluster, every member subsequently added to the cluster will have LSM configured.
If you did not configure LSM prior to creating the cluster, do not despair – LSM can be configured on a running cluster, and doing so is fairly straightforward. Configuring a running cluster to use LSM is only slightly different from configuring LSM on a standalone system.
1. Locate an unused disk (or partition) on a shared bus for use as the first disk in the rootdg.
You can use the hwmgr(8) command to determine which devices are on a shared bus. Once you have found a potential device, make sure that it is not in use. There are a few things you should check to verify that you are not about to lose important data.
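Since the device listings can be long, one way to narrow the search is to intersect the device lists seen by each member: a device on a shared bus appears in every member's view. The sketch below illustrates the idea with hypothetical device lists standing in for output you would gather with hwmgr on each member (the file names and device names are made up for the example):

```shell
# A device on a shared bus shows up in every member's device view, so
# the intersection of the per-member lists yields shared-bus candidates.
# These lists are hypothetical stand-ins for output collected on each
# member; comm -12 prints only the lines common to both sorted files.
cat > /tmp/molari_devs <<'EOF'
dsk0
dsk1
dsk5
dsk6
EOF
cat > /tmp/sheridan_devs <<'EOF'
dsk1
dsk5
dsk7
EOF
comm -12 /tmp/molari_devs /tmp/sheridan_devs
```

Here, dsk1 and dsk5 would be the shared-bus candidates worth checking further.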
Use the cfsmgr(8) command or our homegrown cfs script to see the mounted file systems and their devices. This will alert you to the devices currently in use.
# cfs -s | grep dsk
/ [cluster_root#root] (dsk1a):
/usr [cluster_usr#usr] (dsk1g):
/var [cluster_var#var] (dsk1h):
/kits [extra#kits] (dsk8h):
/u1 [home#u1] (dsk7h):
/cluster/members/member1/boot_partition [root1_domain#root] (dsk2a):
/cluster/members/member2/boot_partition [root2_domain#root] (dsk3a):
Use the clu_quorum(8) or clu_get_info(1) command to see which device is being used as the cluster quorum disk.
# clu_quorum | grep dsk
Quorum disk: dsk4h
# clu_get_info | grep dsk
Quorum disk = dsk4h
Check the device's disk label using the disklabel(8) command to see which partitions are unused and whether the partition you have in mind overlaps an "in use" partition.
# disklabel dsk5 | grep -p "8 part"
8 partitions:
#        size    offset    fstype   fsize   bsize   cpg  # ~Cyl values
  a:  1293637         0    4.2BSD    1024    8192    16  #     0 -  385*
  b:  3940694   1293637    unused       0       0        #  385*- 1557*
  c: 17773524         0    unused       0       0        #     0 - 5289*
  d:  3389144   5234331    unused       0       0        # 1557*- 2566*
  e:  4180140   8623475    unused       0       0        # 2566*- 3810*
  f:  4969909  12803615    unused       0       0        # 3810*- 5289*
  g:  5959156   5234331    unused       0       0        # 1557*- 3331*
  h:  6580037  11193487    unused       0       0        # 3331*- 5289*
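Scanning the label by eye works, but the overlap check can also be scripted. The sketch below is our own awk, run against a shortened, hypothetical label standing in for real disklabel output: it lists the partitions marked unused (skipping c, which conventionally spans the whole disk) and flags any whose offset range overlaps a partition with a real fstype.

```shell
# Hypothetical, abbreviated label standing in for `disklabel dsk5` output.
cat > /tmp/label <<'EOF'
a: 1293637 0 4.2BSD 1024 8192 16
c: 17773524 0 unused 0 0
d: 500000 1000000 unused 0 0
h: 6580037 11193487 unused 0 0
EOF
# For each partition marked unused (other than c), print its size and
# offset, and flag it if its offset range overlaps an in-use partition.
awk '
$1 !~ /:$/ { next }                      # skip header lines
$1 == "c:" { next }                      # c spans the whole disk
$4 != "unused" { k = $1; sub(/:$/, "", k); start[k] = $3; end[k] = $3 + $2; next }
{
    flag = ""
    for (p in start)
        if ($3 < end[p] && $3 + $2 > start[p])
            flag = flag " (overlaps " p ")"
    print $1 " size " $2 " offset " $3 flag
}' /tmp/label
```

With this sample label, partition d would be flagged as overlapping the in-use a partition, while h would be reported clean.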
Check the /etc/fdmns subdirectories. These subdirectories contain symbolic links to the devices used by the AdvFS domains on the cluster. Do you remember the fln Korn shell function we created in chapter 6? If not, you can use an ls(1) command piped into the awk(1) command for this example or turn to chapter 6 and check it out.
# fln /etc/fdmns/* | sort | uniq
dsk0a -> /dev/disk/dsk0a
dsk0g -> /dev/disk/dsk0g
dsk1a -> /dev/disk/dsk1a
dsk1g -> /dev/disk/dsk1g
dsk1h -> /dev/disk/dsk1h
dsk2a -> /dev/disk/dsk2a
dsk3a -> /dev/disk/dsk3a
dsk6c -> /dev/disk/dsk6c
dsk7a -> /dev/disk/dsk7a
dsk7g -> /dev/disk/dsk7g
dsk7h -> /dev/disk/dsk7h
dsk8h -> /dev/disk/dsk8h
The equivalent ls and awk commands are:
# ls -l /etc/fdmns/* | awk '{ print $9,$10,$11 }' | sort | uniq
or:
# ls -lR /etc/fdmns | awk '/->/ { print $9,$10,$11 }' | sort
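To tie the AdvFS check together, you can wrap the search in a small helper. The fdmns_uses function below is our own invention (not a system command), shown against a mock directory tree so the example is self-contained; on the cluster you would point it at /etc/fdmns. Empty output means no domain links to the device:

```shell
# Report which AdvFS domain directories contain a link to a given device.
# $1 = device name (e.g. dsk5h), $2 = fdmns directory (/etc/fdmns on a
# real system; a mock tree here so the example stands alone).
fdmns_uses() {
    for link in "$2"/*/"$1"; do
        [ -L "$link" ] && echo "$(basename "$(dirname "$link")") uses $1"
    done
    return 0
}

# Build a mock fdmns tree with two domains.
mkdir -p /tmp/fdmns/usr_domain /tmp/fdmns/home
ln -sf /dev/disk/dsk1g /tmp/fdmns/usr_domain/dsk1g
ln -sf /dev/disk/dsk7h /tmp/fdmns/home/dsk7h

fdmns_uses dsk7h /tmp/fdmns   # a domain links to this device
fdmns_uses dsk5h /tmp/fdmns   # no output: no domain uses dsk5h
```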
2. On one member in the cluster, run the volsetup(8) command.
[molari] # volsetup dsk5h
LSM: Creating Logical Storage Manager device special files.
Checking for an existing LSM configuration
Initialize vold and the root disk group:
  Add disk dsk5h to the root disk group as dsk5h:
  Addition of disk dsk5h as dsk5h succeeded.
Initialization of vold and the root disk group was successful.
volwatch daemon started - mail only

You must run 'volsetup -s' on each additional node in the cluster to
initially set up LSM, unless using clu_add_member to add additional
nodes now.  The clu_add_member utility will automatically start LSM
on the new node.
3. On all other existing members, run the volsetup command with the "-s" switch.
[sheridan] # volsetup -s
Starting LSM...
LSM: Creating Logical Storage Manager device special files.
LSM volwatch Service started - mail only
The "-s" switch updates the member's /etc/inittab file, creates the member-specific LSM device special files, and starts the LSM daemons on that member.
Note | If we had not already added the second member to the cluster with the clu_add_member(8) command, then step 3 would have been unnecessary: now that LSM is configured, any new member added to the cluster will automatically be configured to use LSM. You will not need to use the "volsetup -s" command for newly added members, only for existing members. |