14.7 To Migrate or Encapsulate? That is the Question

In this section, we discuss when you may want or need to migrate versus when you may want or need to encapsulate.

  • When you migrate an AdvFS domain to an LSM volume, you are actually moving your data from the storage currently associated with the domain to an LSM volume composed of different underlying storage. To migrate an AdvFS domain to LSM, use the volmigrate command. Conversely, to migrate from LSM to an AdvFS domain, use the volunmigrate command.

    Prior to TruCluster Server version 5.1A, the volmigrate and volunmigrate commands did not exist, so an AdvFS domain had to be migrated manually using one of two methods: either backing up and restoring the data from each file system in the domain to an LSM volume, or using the addvol(8) command to add an LSM volume to the AdvFS domain and then removing the non-LSM volume with the rmvol(8) command. Even these methods, however, were not supported for the cluster_root file system.

    Notice that we mention only AdvFS domains with regard to migration because the volmigrate command only supports AdvFS domains.

    You can migrate a domain while the cluster is up and running – no reboot is required!

  • When you encapsulate a disk, a disk partition, or an AdvFS domain, the underlying storage becomes LSM volumes. Unlike migration, encapsulation does not require additional storage because the current storage is simply brought under LSM control. To encapsulate a disk, a disk partition, or an AdvFS domain, use the volencap command.

    If you encapsulate a disk, a disk partition, or an AdvFS domain that is currently in use, encapsulation will not be completed until a reboot is performed. If, however, the device(s) are dormant, the encapsulation can be completed without a reboot.

Table 14-8 is a comparison table we put together to help you decide which method to use, given a particular disk, disk partition, or AdvFS domain.

Table 14-8: LSM Migration or Encapsulation

To Migrate or Encapsulate?

Scenario: You want to use LSM to mirror, stripe, or manage a disk, disk partition, or AdvFS domain that already contains data you want to keep.

                                                 ------ V5.1A ------    V5.1          V5.0A
Disk, Disk Partition, or AdvFS Domain            Migrate  Encapsulate   Encapsulate   Encapsulate
----------------------------------------------   -------  -----------   -----------   -----------
The cluster_root domain                            [√]        [x]           [x]           [x]
The cluster_usr and/or cluster_var domain(s)       [√]        [√]           [√]           [√]
A member's swap partition                          [x]        [√]           [x]           [x]
A member's boot_partition                          [x]        [x]           [x]           [x]
A member's cnx partition                           [x]        [x]           [x]           [x]
The cluster's quorum disk                          [x]        [x]           [x]           [x]
Your application data                              [√]        [√]           [√]           [√]

[√] = Supported, [x] = Unsupported

If you are using V5.1A and the table indicates that you have a choice between migration and encapsulation, then your decision should be based on your site policy and/or configuration restrictions. Table 14-9 can be used to help you decide between migration and encapsulation on V5.1A.

Table 14-9: LSM Migration or Encapsulation Decision Tree for V5.1A

Migration or Encapsulation Decision Tree

     Question                                          Yes           No
 ---------------------------------------------------   -----------   -----------
  1. Is the data in an AdvFS domain?                    goto 2        encapsulate
  2. Is the domain cluster_root?                        goto 3        goto 4
  3. Do you have additional storage?                    migrate       [x]
  4. Is the domain cluster_usr or cluster_var?          goto 5        goto 8
  5. Do you have additional storage?                    goto 6        goto 7
  6. Do you want to reuse the existing storage?         goto 7        migrate
  7. Can you reboot the cluster?                        encapsulate   [x]
  8. Is the domain in use?                              goto 9        goto 10
  9. Can you unmount the file system?                   goto 10       goto 5
 10. Do you have additional storage?                    goto 11       encapsulate
 11. Do you want to reuse the existing storage?         encapsulate   migrate

[x] = can't get there from here

Note

The cluster-common domains (cluster_root, cluster_usr, and cluster_var) must be brought into the root disk group (rootdg).

14.7.1 Migrate Using the volmigrate(8) Command

If you have an AdvFS domain and additional storage, you can migrate your data to LSM volumes using the volmigrate command. In fact, this is the only way to get cluster_root into LSM.

The advantage to performing a migration rather than an encapsulation is that a domain that is currently in use can be migrated to LSM volumes without having to reboot the cluster. The disadvantage is that you need additional unused storage and the data that you want to migrate must be in an AdvFS domain.

Note

The volmigrate command uses the AdvFS addvol(8) command. You will need the ADVFS-UTILITIES license PAK in order to use the addvol command.
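If you are not sure whether that license is registered, a quick check with the license management facility should tell you; this is only a sketch, and the grep pattern assumes the PAK name appears as shown.

 # lmf list | grep ADVFS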

To migrate an AdvFS domain to one or more LSM volumes, use the following procedure:

  1. Identify unused storage that is large enough to hold the AdvFS domain.

    • Find out the size of the domain with the showfdmn(8) command.

       # showfdmn cluster_root

                      Id              Date Created   LogPgs  Version  Domain Name
       3acde49b.0004ec69  Fri Apr  6 11:45:31 2001       512        4  cluster_root

         Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
          1L    1048576   644544     39%     on    256    256  /dev/disk/dsk1a

    • Locate a disk large enough to hold the domain.

      In our case, since we are looking to migrate cluster_root, we also want to find a device on the shared bus. Since we know that cluster_root is currently on a shared bus, and dsk1 is the disk that we are using for cluster_root, then let's find the bus where dsk1 is located.

       # hwmgr -view device -dsf dsk1
       HWID:  Device Name      Mfg     Model       Location
       ------------------------------------------------------------------------
          50: /dev/disk/dsk1c  COMPAQ  BD009635C3  bus-3-targ-0-lun-0

       # hwmgr -show scsi -bus 3

              SCSI                DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
       HWID:  DEVICEID  HOSTNAME  TYPE    SUBTYPE  OWNER   PATH  FILE    VALID PATH
       -----------------------------------------------------------------------------
         50:  2         molari    disk    none     2       1     dsk1    [3/0/0]
         51:  3         molari    disk    none     2       1     dsk2    [3/1/0]
         52:  4         molari    disk    none     2       1     dsk3    [3/2/0]
         53:  5         molari    disk    none     2       1     dsk4    [3/3/0]
         54:  6         molari    disk    none     2       1     dsk5    [3/4/0]
         55:  7         molari    disk    none     2       1     dsk6    [3/5/0]

      From our discussion back in section 14.4, we determined that dsk5 was the only disk we had on our cluster that was available. In fact, we already used it to configure LSM. Since we only used a partition on the disk, we will use another available partition on dsk5 to migrate our cluster_root domain. We can find an unused partition by using the disklabel command.

       # disklabel -r dsk5 | grep -p "8 part"
       8 partitions:
       #          size    offset   fstype  fsize  bsize  cpg  # ~Cyl values
         a:    1293637         0   unused   1024   8192       #     0 - 385*
         b:    3940694   1293637   unused      0      0       #   385*- 1557*
         c:   17773524         0   unused      0      0       #     0 - 5289*
         d:    3389144   5234331   unused      0      0       #  1557*- 2566*
         e:    4180140   8623475   unused      0      0       #  2566*- 3810*
         f:    4969909  12803615   unused      0      0       #  3810*- 5289*
         g:    5959156   5234331   unused      0      0       #  1557*- 3331*
         h:    6580037  11193487  LSMsimp                     #  3331*- 5289*

      Partition "a" looks large enough, so we will use it.

  2. Backup the data in the domain – just in case.
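    For example (purely as a sketch), a level-0 vdump of the domain's fileset to a scratch area or tape would do; the target path here is hypothetical and should not live in the domain being backed up.

     # vdump -0 -f /backup/cluster_root.vdump /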

  3. Define the disk within LSM.

     # voldisksetup -i dsk5a 
  4. Add the disk to the rootdg.

     # voldg adddisk clu_root=dsk5a 
  5. Migrate!

     # volmigrate cluster_root clu_root
     volassist -Ucluroot make cluster_rootvol 1048576 clu_root init=active nlog=0
     addvol /dev/vol/rootdg/cluster_rootvol cluster_root
     rmvol /dev/disk/dsk1a cluster_root
     rmvol: Removing volume '/dev/disk/dsk1a' from domain 'cluster_root'
     rmvol: Removed volume '/dev/disk/dsk1a' from domain 'cluster_root'

  6. Verify that the migration succeeded.

    • Check the cluster_root domain.

       # showfdmn cluster_root

                      Id              Date Created   LogPgs  Version  Domain Name
       3acde49b.0004ec69  Fri Apr  6 11:45:31 2001       512        4  cluster_root

         Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
          2L    1048576   644608     39%     on  65536  65536  /dev/vol/rootdg/cluster_rootvol

      The cluster_root domain is now using the LSM volume cluster_rootvol, whose device special file is located in the /dev/vol/rootdg directory.

    • Check LSM.

       # volprint -Aht cluster_rootvol
       Disk group: rootdg

       V  NAME            USETYPE  KSTATE   STATE    LENGTH    READPOL   PREFPLEX
       PL NAME            VOLUME   KSTATE   STATE    LENGTH    LAYOUT    NCOL/WID  MODE
       SD NAME            PLEX     DISK     DISKOFFS LENGTH    [COL/]OFF DEVICE    MODE

       v  cluster_rootvol     cluroot             ENABLED       ACTIVE   1048576  SELECT  -
       pl cluster_rootvol-01  cluster_rootvol     ENABLED       ACTIVE   1048576  CONCAT  -      RW
       sd clu_root-01         cluster_rootvol-01  cluster_root  0        1048576  0       dsk5a  ENA

Note

If for some reason you want to reverse your migration decision, you can use the volunmigrate command. In the following example, we will move the cluster_root domain back to the original partition.

 # volunmigrate cluster_root dsk1a
 addvol /dev/disk/dsk1a cluster_root
 rmvol /dev/vol/rootdg/cluster_rootvol cluster_root
 rmvol: Removing volume '/dev/vol/rootdg/cluster_rootvol' from domain 'cluster_root'
 rmvol: Removed volume '/dev/vol/rootdg/cluster_rootvol' from domain 'cluster_root'
 voledit -g rootdg -rf rm cluster_rootvol

 # voldg rmdisk clu_root 
 # voldisk rm dsk5a 

The cluster_root domain is back to its original device.

14.7.1.1 Migrate and Mirror the Cluster-Common File Systems

One of the main reasons for the volmigrate command is to provide an online process to migrate the cluster_root domain, as we illustrated earlier in this section. Why is this important? To avoid having to shut down the entire cluster.

Adding support for cluster_root (and swap) in V5.1A allows for lower-cost cluster configurations by using LSM to mirror the cluster-common domains across buses for a no-single-point-of-failure (NSPOF) solution.

So, we thought, if you have a cluster without a multiple-bus, dual-redundant hardware RAID controller, you might like to see an example of mirroring your cluster-common devices using LSM.

Here are the steps you will need to take:

  1. Locate a disk (or disks) large enough to migrate cluster_root, cluster_usr, and cluster_var domains.

    Ideally, you will want the disk(s) to be on a shared bus other than the bus where the cluster-common domains are currently located. Additionally, choose disk(s) at least as large as the current cluster-common disk(s) so that you can reuse the original cluster-common disk(s) as the mirror.

    Our cluster-common domains are on the same disk, so we will be using another disk of the same size for the remainder of this example.

  2. Save the disk label in case you want to go back to the original (non-LSM) configuration.

     # disklabel -r dsk1 > /dsk1_noLSM.lbl
  3. Add the disk(s) to LSM that you will be using as your destination disks.

    In this case, we will be using dsk6. We are going to use the entire disk and let LSM create the volumes it needs instead of splitting the disk into separate partitions for each domain.

     # voldisksetup -i dsk6
  4. Add the disk(s) to the root disk group.

     # voldg adddisk dsk6 
  5. Migrate the cluster-common domains (this may take a while – be patient).

     # volmigrate cluster_root dsk6
     volassist -Ucluroot make cluster_rootvol 1048576 dsk6 init=active nlog=0
     addvol /dev/vol/rootdg/cluster_rootvol cluster_root
     rmvol /dev/disk/dsk1b cluster_root
     rmvol: Removing volume '/dev/disk/dsk1b' from domain 'cluster_root'
     rmvol: Removed volume '/dev/disk/dsk1b' from domain 'cluster_root'

     # volmigrate cluster_usr dsk6
     volassist -Ufsgen -g rootdg make cluster_usrvol 8389344 dsk6 init=active
     addvol /dev/vol/rootdg/cluster_usrvol cluster_usr
     rmvol /dev/disk/dsk1g cluster_usr
     rmvol: Removing volume '/dev/disk/dsk1g' from domain 'cluster_usr'
     rmvol: Removed volume '/dev/disk/dsk1g' from domain 'cluster_usr'

     # volmigrate cluster_var dsk6
     volassist -Ufsgen -g rootdg make cluster_varvol 8323280 dsk6 init=active
     addvol /dev/vol/rootdg/cluster_varvol cluster_var
     rmvol /dev/disk/dsk1h cluster_var
     rmvol: Removing volume '/dev/disk/dsk1h' from domain 'cluster_var'
     rmvol: Removed volume '/dev/disk/dsk1h' from domain 'cluster_var'

  6. Verify that the migration was successful.

    You can use the volprint and showfdmn commands. Additionally, verify the disk labels with the disklabel command. Of course, the fact that the cluster is running is a good indication that everything went as planned.
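    For example, commands along these lines (output omitted here) will show whether the domains now reference LSM volumes and whether the volumes and disk labels look the way you expect:

     # showfdmn cluster_usr
     # showfdmn cluster_var
     # volprint cluster_rootvol cluster_usrvol cluster_varvol
     # disklabel -r dsk6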

  7. Add the original cluster-common disk(s) to LSM.

     # voldisksetup -i dsk1 
  8. Add the original cluster-common disk(s) to the root disk group.

     # voldg adddisk dsk1 
  9. Mirror the volumes (this may take a while – be patient).

     # volmirror dsk6 dsk1 
  10. Add a log plex for the mirrored volumes.

    We will use dsk5h, since it is a different physical disk within the root disk group. The volassist(8) reference page recommends not placing a log plex on the same physical disk as the volume data.

     # volassist addlog cluster_usrvol dsk5h
     # volassist addlog cluster_varvol dsk5h
  11. Verify that the mirror completed successfully.

     # volprint cluster_rootvol cluster_usrvol cluster_varvol
     Disk group: rootdg

     TY  NAME                ASSOC               KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
     v   cluster_rootvol     cluroot             ENABLED  1048576  -       ACTIVE  -       -
     pl  cluster_rootvol-02  cluster_rootvol     ENABLED  1048576  -       ACTIVE  -       -
     sd  dsk1-01             cluster_rootvol-02  ENABLED  1048576  0       -       -       -
     pl  cluster_rootvol-01  cluster_rootvol     ENABLED  1048576  -       ACTIVE  -       -
     sd  dsk6-01             cluster_rootvol-01  ENABLED  1048576  0       -       -       -
     v   cluster_usrvol      fsgen               ENABLED  8389344  -       ACTIVE  -       -
     pl  cluster_usrvol-02   cluster_usrvol      ENABLED  8389344  -       ACTIVE  -       -
     sd  dsk1-02             cluster_usrvol-02   ENABLED  8389344  0       -       -       -
     pl  cluster_usrvol-03   cluster_usrvol      ENABLED  LOGONLY  -       ACTIVE  -       -
     sd  dsk5-01             cluster_usrvol-03   ENABLED  195      LOG     -       -       -
     pl  cluster_usrvol-01   cluster_usrvol      ENABLED  8389344  -       ACTIVE  -       -
     sd  dsk6-03             cluster_usrvol-01   ENABLED  8389344  0       -       -       -
     v   cluster_varvol      fsgen               ENABLED  8323280  -       ACTIVE  -       -
     pl  cluster_varvol-02   cluster_varvol      ENABLED  8323280  -       ACTIVE  -       -
     sd  dsk1-03             cluster_varvol-02   ENABLED  8323280  0       -       -       -
     pl  cluster_varvol-03   cluster_varvol      ENABLED  LOGONLY  -       ACTIVE  -       -
     sd  dsk5-02             cluster_varvol-03   ENABLED  195      LOG     -       -       -
     pl  cluster_varvol-01   cluster_varvol      ENABLED  8323280  -       ACTIVE  -       -
     sd  dsk6-02             cluster_varvol-01   ENABLED  8323280  0       -       -       -

Note

We could have reduced the number of steps in the previous example if we had had an extra disk, because we would not have had to reuse the original cluster-common disk as the mirror. With an extra disk, the approach becomes simpler: repeat the first two steps of the previous procedure, then continue as follows.

  1. Add the disks to LSM that you will be using as your destination disks.

     # voldisksetup -i dsk6 dsk7 
  2. Add the disks to the root disk group.

     # voldg adddisk dsk6
     # voldg adddisk dsk7
  3. Migrate and mirror the cluster-common domains (this may take a while – be patient).

     # volmigrate -m 2 cluster_root dsk6 dsk7
     volassist -Ucluroot make cluster_rootvol 1048576 nmirror=2 dsk6 dsk7 init=active nlog=0
     addvol /dev/vol/rootdg/cluster_rootvol cluster_root
     rmvol /dev/disk/dsk1b cluster_root
     rmvol: Removing volume '/dev/disk/dsk1b' from domain 'cluster_root'
     rmvol: Removed volume '/dev/disk/dsk1b' from domain 'cluster_root'

     # volmigrate -m 2 cluster_usr dsk6 dsk7
     volassist -Ufsgen -g rootdg make cluster_usrvol 8389344 nmirror=2 dsk6 dsk7 init=active
     addvol /dev/vol/rootdg/cluster_usrvol cluster_usr
     rmvol /dev/disk/dsk1g cluster_usr
     rmvol: Removing volume '/dev/disk/dsk1g' from domain 'cluster_usr'
     rmvol: Removed volume '/dev/disk/dsk1g' from domain 'cluster_usr'

     # volmigrate -m 2 cluster_var dsk6 dsk7
     volassist -Ufsgen -g rootdg make cluster_varvol 8323280 nmirror=2 dsk6 dsk7 init=active
     addvol /dev/vol/rootdg/cluster_varvol cluster_var
     rmvol /dev/disk/dsk1h cluster_var
     rmvol: Removing volume '/dev/disk/dsk1h' from domain 'cluster_var'
     rmvol: Removed volume '/dev/disk/dsk1h' from domain 'cluster_var'

  4. Verify that the migration and mirroring were successful.

    You can use the volprint and showfdmn commands. Additionally, verify the disk labels with the disklabel command. Of course, the fact that the cluster is running is a good indication that everything went as planned.

  5. Remove the log plexes from cluster_usrvol and cluster_varvol because they are on a disk that also contains volume data, then add new log plexes from another disk.

    Locate the log plexes.

     # volprint | grep LOG
     pl cluster_usrvol-03  cluster_usrvol     ENABLED  LOGONLY  -    ACTIVE  -  -
     sd dsk6-03            cluster_usrvol-03  ENABLED  195      LOG  -       -  -
     pl cluster_varvol-03  cluster_varvol     ENABLED  LOGONLY  -    ACTIVE  -  -
     sd dsk6-05            cluster_varvol-03  ENABLED  195      LOG  -       -  -

    Remove the log plexes.

     # volplex -v cluster_usrvol dis cluster_usrvol-03
     # voledit -rf rm cluster_usrvol-03
     # volplex -v cluster_varvol dis cluster_varvol-03
     # voledit -rf rm cluster_varvol-03

    Add the new log plexes from a different disk within the root disk group.

     # volassist addlog cluster_usrvol dsk5h
     # volassist addlog cluster_varvol dsk5h
  6. Verify that everything completed successfully.

     # volprint cluster_rootvol cluster_usrvol cluster_varvol
     Disk group: rootdg

     TY  NAME                ASSOC               KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
     v   cluster_rootvol     cluroot             ENABLED  1048576  -       ACTIVE  -       -
     pl  cluster_rootvol-01  cluster_rootvol     ENABLED  1048576  -       ACTIVE  -       -
     sd  dsk6-01             cluster_rootvol-01  ENABLED  1048576  0       -       -       -
     pl  cluster_rootvol-02  cluster_rootvol     ENABLED  1048576  -       ACTIVE  -       -
     sd  dsk7-01             cluster_rootvol-02  ENABLED  1048576  0       -       -       -
     v   cluster_usrvol      fsgen               ENABLED  8389344  -       ACTIVE  -       -
     pl  cluster_usrvol-01   cluster_usrvol      ENABLED  8389344  -       ACTIVE  -       -
     sd  dsk6-02             cluster_usrvol-01   ENABLED  8389344  0       -       -       -
     pl  cluster_usrvol-02   cluster_usrvol      ENABLED  8389344  -       ACTIVE  -       -
     sd  dsk7-02             cluster_usrvol-02   ENABLED  8389344  0       -       -       -
     pl  cluster_usrvol-03   cluster_usrvol      ENABLED  LOGONLY  -       ACTIVE  -       -
     sd  dsk5h-01            cluster_usrvol-03   ENABLED  195      LOG     -       -       -
     v   cluster_varvol      fsgen               ENABLED  8323280  -       ACTIVE  -       -
     pl  cluster_varvol-01   cluster_varvol      ENABLED  8323280  -       ACTIVE  -       -
     sd  dsk7-03             cluster_varvol-01   ENABLED  8323280  0       -       -       -
     pl  cluster_varvol-02   cluster_varvol      ENABLED  8323280  -       ACTIVE  -       -
     sd  dsk6-04             cluster_varvol-02   ENABLED  8323280  0       -       -       -
     pl  cluster_varvol-03   cluster_varvol      ENABLED  LOGONLY  -       ACTIVE  -       -
     sd  dsk5h-02            cluster_varvol-03   ENABLED  195      LOG     -       -       -

14.7.2 Encapsulate Using the volencap(8) Command

When you use the volencap command to encapsulate an AdvFS domain, disk, or disk partition, the data that exists on the device is left unchanged. This is a good thing because if you decide to remove LSM you will not have to recreate or restore your data.

The volencap command expects as input a partition, disk, or domain:

  • If you choose to have an individual disk partition encapsulated, a nopriv disk and an LSM volume are created from the disk partition.

  • If you choose to have an entire disk encapsulated, a nopriv disk and an LSM volume are created for each partition on the disk that is not marked unused.

  • If you choose to have an AdvFS domain encapsulated, all AdvFS volumes in the domain become LSM volumes.

The volencap command creates files in the /etc/vol/reconfig.d directory that are used by the volreconfig(8) command to create the LSM volumes, plexes, and subdisks from the devices that were input to the volencap command.
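If you are curious about what has been queued, a simple listing of that directory is all it takes; the contents will vary with what you have queued.

 # ls -l /etc/vol/reconfig.d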

14.7.2.1 Encapsulating an AdvFS Domain

Any domain except for the cluster_root domain can be encapsulated. To place the cluster_root domain under LSM control, you must use the volmigrate command (see section 14.7.1 earlier in this chapter).

As long as the domain is not in use, encapsulation does not require a reboot. If you can, unmount the file systems in the domain you want to encapsulate so that you can avoid having to reboot the cluster.

To encapsulate a domain, do the following:

 # volencap tcrhb
 Setting up encapsulation for tcrhb.
     For AdvFS domain tcrhb:
       - Creating nopriv disk dsk6c.

 The following disks are queued up for encapsulation or use by LSM:
   dsk6c

 Please consult the Cluster Administration Guide for steps that you will need to
 follow to complete the encapsulation.

You can see which devices are scheduled for encapsulation by issuing the volencap command with the "-s" switch:

 # volencap -s
 The following disks are queued up for encapsulation or use by LSM:
   dsk6c

In order for the encapsulation process to complete, we need to run the volreconfig command.

Note

If you decide not to finish the encapsulation process, you can cancel the encapsulation for one device by using the volencap command with the "-k <diskname|partition>" switch, or cancel all queued encapsulation requests with the "-k -a" switch:

 # volencap -k dsk6c 

-OR-

 # volencap -k -a

The volreconfig command completes the encapsulation process provided that the devices that are queued for encapsulation are not in use.

 # volreconfig
 Encapsulating dsk6c.
 The following disks were encapsulated successfully:
         dsk6c

You can verify that the encapsulation succeeded by using the volprint command.

 # volprint | grep dsk6
 dm  dsk6c-AdvFS   dsk6c         -        17773524  -  -       -  -
 v   vol-dsk6c     fsgen         ENABLED  17773524  -  ACTIVE  -  -
 pl  vol-dsk6c-01  vol-dsk6c     ENABLED  17773524  -  ACTIVE  -  -
 sd  dsk6c-01      vol-dsk6c-01  ENABLED  17773524  0  -       -  -

You can also verify that the domain volume links were updated with the fln function (or you can use an "ls -l" command).

 # fln /etc/fdmns/tcrhb
 rootdg.vol-dsk6c -> /dev/vol/rootdg/vol-dsk6c

14.7.2.1.1 Encapsulating cluster_usr or cluster_var

The major difference between encapsulating the cluster_usr or cluster_var domains and encapsulating any other domain is simply that /usr and /var cannot be unmounted. In other words, in order to encapsulate cluster_usr or cluster_var, you must shut down and reboot the entire cluster!

Therefore, if you plan to bring these domains under LSM control, we recommend either running the volencap command after the cluster is first created and before any members have been added, or using the volmigrate command to migrate the domains to LSM volumes instead of encapsulating them.
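If you do go the encapsulation route anyway, the command sequence is the same as for any other domain. The following is only a sketch; it assumes volencap accepts the cluster-common domain names just as it did for tcrhb, and it still requires shutting down and rebooting the entire cluster to complete.

 # volencap cluster_usr cluster_var
 # volreconfig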

14.7.2.2 Encapsulating a Disk Partition

If a partition has a UFS file system or is a raw partition (i.e., used by an application like a database), you can encapsulate one or more disk partitions as follows:

 # volencap dsk5a dsk8h dsk12b 
 # volreconfig 

Make sure that the partitions are not active or you will need to reboot before the encapsulation will be completed.
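A quick way to check is to look for mounts or active swap on those partitions before running volreconfig; the disk names below are simply the ones from the example above, and swapon -s is assumed to be available to list the active swap devices.

 # mount | grep -E 'dsk5a|dsk8h|dsk12b'
 # swapon -s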

14.7.2.3 Encapsulating a Disk

Encapsulating an entire disk is similar to encapsulating a partition except that all partitions currently in use on the disk will be queued for encapsulation. To encapsulate a disk, use the disk's base name (i.e., omit the partition letter).

 # volencap dsk5 
 # volreconfig 

Make sure that the partitions are not active or you will need to reboot before the encapsulation will be completed.

14.7.2.4 Encapsulating a Swap Partition

The volencap command has a special keyword, "swap", that can be used to encapsulate a cluster member's swap devices, that is, the devices listed in the swapdevice attribute in the member's /etc/sysconfigtab file. The "swap" keyword is member-specific.

 # volencap swap
 Setting up encapsulation for dsk2b.
     - Creating simple disk dsk2f for config area (privlen=4096).
       Warning: space taken from -> dsk2b dsk2f
     - Creating nopriv disk dsk2b for molari-swap.

 The following disks are queued up for encapsulation or use by LSM:
   dsk2f dsk2b

 Please consult the Cluster Administration Guide for steps that you will need to
 follow to complete the encapsulation.

 # volreconfig
 EXEC:  voledit -rf rm molari-swap01
 EXEC:  voldg rmdisk dsk2b
 EXEC:  voldisk rm dsk2b
 lsm:voldisk: ERROR: Failed to obtain locks:
         dsk2b: no such object in the configuration
 The system will need to be rebooted in order to continue with LSM volume
 encapsulation of:  dsk2f dsk2b

At this point the volreconfig command will prompt you to reboot the system. We will give the users five minutes.

 Would you like to either quit and defer encapsulation until later or commence
 system shutdown now? Enter either 'quit' or time to be used with the
 shutdown(8) command (e.g., quit, now, 1, 5): [quit] 5

 Shutdown at 13:45 (in 5 minutes) [pid 525876]
 #
           *** System shutdown message from root@molari.tcrhb.com ***

 System going down in 5 minutes
         ...
 Place selected disk partitions under LSM control.
 ...
 System going down in 2 minutes
 ...
 System going down in 60 seconds
 ...
 System going down in 30 seconds
 ...

You can encapsulate all cluster members' swap partitions by specifying all the devices as parameters to the volencap command and then issuing the volreconfig command on every member, but we think it is easier and less risky to simply issue the volencap and volreconfig commands on each member as follows:

 [molari] # /usr/sbin/volencap swap ; /sbin/volreconfig 

 [molari] # rsh sheridan-ics0 "/usr/sbin/volencap swap ; /sbin/volreconfig" 

Finish the encapsulation process by rebooting each member one at a time.

For additional information, see the TruCluster Server Cluster Administration Guide and the volencap(8) reference page.

14.7.3 Unencapsulation

There is no volunencap command, so the only way to unencapsulate a previously encapsulated disk, disk partition, or AdvFS domain is the hard way.

14.7.3.1 Unencapsulating an AdvFS Domain

In section 14.7.2.1 we encapsulated our tcrhb domain. In this section we'll remove the domain from LSM.

To remove an encapsulated AdvFS domain from LSM, follow these steps:

  1. Unmount the file systems.
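    In our example, the tcrhb domain has a single fileset mounted on /fafrak, so that means:

     # umount /fafrak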

  2. Stop the volume(s).

     # volume stop vol-dsk6c 
  3. Remove the volume(s).

     # voledit -rf rm vol-dsk6c 
  4. Remove the disk from the disk group.

     # voldisk list | grep dsk6
     dsk6c        nopriv       dsk6c-AdvFS      rootdg      online

     # voldg rmdisk dsk6c-AdvFS 
  5. Save the disk's label.

     # disklabel -r dsk6 > /tmp/dsk6.lbl 
  6. Remove the disk from LSM.

     # voldisk rm dsk6c 
  7. Edit the saved label, changing the "fstype" from "LSMnoprv" back to "AdvFS".

     # grep -E "LSM|AdvFS" /tmp/dsk6.lbl
       c:     17773524            0    LSMnoprv                 #       0 - 5289*

     # perl -i.orig -pe 's/LSMnoprv/   AdvFS/g' /tmp/dsk6.lbl

    There are three spaces before "AdvFS" in the previous example.

     # grep -E "LSM|AdvFS" /tmp/dsk6.lbl
       c:     17773524            0       AdvFS                 #       0 - 5289*

  8. Replace the disk's label.

     # disklabel -rR dsk6 /tmp/dsk6.lbl 
  9. Fix the AdvFS domain's volume link(s).

     # find /etc/fdmns -name '*dsk6*'
     /etc/fdmns/tcrhb/rootdg.vol-dsk6c

     # cd /etc/fdmns/tcrhb ; ln -s /dev/disk/dsk6c ; rm rootdg.vol-dsk6c ; fln
     dsk6c -> /dev/disk/dsk6c

  10. Mount the file system(s).

     # mount tcrhb#fafrak /fafrak 

     # df /fafrak
     Filesystem      512-blocks     Used   Available  Capacity  Mounted on
     tcrhb#fafrak      17773520   157782    17604000        1%  /fafrak

14.7.3.2 Unencapsulating a Disk or Disk Partition

Unencapsulating a disk or disk partition is similar to unencapsulating an AdvFS domain detailed in the previous section. Follow steps 1-8 replacing "AdvFS" where appropriate. For example, if the disk partition contains a UFS file system, you would replace the "fstype" with "4.2BSD".
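For example, if the encapsulated partition had been dsk8b with a UFS file system on it, the label edit in step 7 might look like the following sketch; the disk name and saved label file here are hypothetical.

 # perl -i.orig -pe 's/LSMnoprv/  4.2BSD/g' /tmp/dsk8.lbl
 # disklabel -rR dsk8 /tmp/dsk8.lbl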

14.7.3.3 Unencapsulating a Swap Partition

The following steps are needed to unencapsulate the swap partition from LSM. This procedure assumes that the swap partition that was previously encapsulated was on the member's boot disk.

  1. Remove the LSM volume from the vm:swapdevice attribute in the member's sysconfigtab file.

    Create a stanza file for the vm:swapdevice attribute.

     # cat > swap.stanza
     vm:
     ^D

     # sysconfigdb -l vm | grep swapdevice >> swap.stanza ; cat swap.stanza
     vm:
           swapdevice = /dev/vol/rootdg/molari-swap01

    If you are going to remove the swap device from the member's root disk, then you will also need to pull in the lsm:lsm_rootdev_is_volume attribute and set its value to zero.

     # sysconfigdb -l lsm >> swap.stanza ; cat swap.stanza
     vm:
           swapdevice = /dev/vol/rootdg/molari-swap01
     lsm:
           lsm_rootdev_is_volume = 2

     # perl -i -pe 's/_volume = 2/_volume = 0/' swap.stanza ; cat swap.stanza
     vm:
           swapdevice = /dev/vol/rootdg/molari-swap01
     lsm:
           lsm_rootdev_is_volume = 0

    Now that we have created the stanza file that we need, we can update the member's sysconfigtab file.

     # sysconfigdb -r -f swap.stanza vm 

     # sysconfigdb -m -f swap.stanza lsm
     Warning: duplicate attribute in lsm: was lsm_rootdev_is_volume = 2,
     now lsm_rootdev_is_volume = 0

  2. Locate the swap volume's subdisk and determine whether there is a private region partition on the physical disk where the subdisk is located.

     # volprint molari-swap01
     Disk group: rootdg

     TY  NAME              ASSOC             KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
     v   molari-swap01     swap              ENABLED  1568768  -       ACTIVE  -       -
     pl  molari-swap01-01  molari-swap01     ENABLED  1568768  -       ACTIVE  -       -
     sd  dsk2b-01          molari-swap01-01  ENABLED  1568768  0       -       -       -

    It appears that the swap volume is using /dev/disk/dsk2b. Verify this.

     # voldisk list | grep -E "dsk2[a-h]|^DEV"
     DEVICE       TYPE       DISK        GROUP        STATUS
     dsk2b        nopriv     dsk2b       rootdg       online
     dsk2f        simple     dsk2f       rootdg       online

    The output indicates that dsk2f is likely a private region on the disk. Before we jump to any conclusions, we can verify this as well.

     # disklabel -r dsk2 | grep LSM
       b:       1568768       524288      LSMnoprv               #   156*- 622*
       f:          4096      2093056       LSMsimp               #   622*- 624*

    The fact that the f partition is only 4096 sectors and it is the only other LSM partition on the disk besides the swap partition is a good indicator that dsk2f is a private region.

  3. Remove the swap disk's private region partition.

     # voldg rmdisk dsk2f 
     # voldisk rm dsk2f 
  4. Reboot the member.

     # shutdown -sr +5 "Unencapsulating swap from LSM" 
  5. Once the member reboots, login and remove the member's swap volume from LSM.

     # voledit -rf rm molari-swap01 
  6. Remove the swap partition from LSM.

     # voldg rmdisk dsk2b 
     # voldisk rm dsk2b 
  7. Edit the member's root disk's disklabel to merge the private region (partition f) back into the swap partition. In this example, we will be using the perl command to quickly edit the label – you can just as easily use any editor you feel comfortable using in lieu of the three perl commands.

    Save the label.

     # disklabel -r dsk2 | tee boot_partition.lbl | grep -p "8 part"
     8 partitions:
     #          size    offset   fstype  fsize  bsize  cpg  # ~Cyl values
       a:     524288         0    AdvFS                     #     0 - 156*
       b:    1568768    524288   unused      0      0       #   156*- 622*
       c:   17773524         0   unused      0      0       #     0 - 5289*
       d:          0         0   unused      0      0       #     0 - 0
       e:          0         0   unused      0      0       #     0 - 0
       f:       4096   2093056   unused      0      0       #   622*- 624*
       g:   15674324   2097152    AdvFS                     #   624*- 5289*
       h:       2048  17771476      cnx                     #  5289*- 5289*

    Edit the label.

     # perl -i -pe 's/1568768/1572864/' boot_partition.lbl
     # perl -i -pe 's/4096/ 0/' boot_partition.lbl
     # perl -i -pe 's/2093056/ 0/' boot_partition.lbl

     # grep -p "8 part" boot_partition.lbl
     8 partitions:
     #          size    offset   fstype  fsize  bsize  cpg  # ~Cyl values
       a:     524288         0    AdvFS                     #     0 - 156*
       b:    1572864    524288   unused      0      0       #   156*- 622*
       c:   17773524         0   unused      0      0       #     0 - 5289*
       d:          0         0   unused      0      0       #     0 - 0
       e:          0         0   unused      0      0       #     0 - 0
       f:          0         0   unused      0      0       #   622*- 624*
       g:   15674324   2097152    AdvFS                     #   624*- 5289*
       h:       2048  17771476      cnx                     #  5289*- 5289*

    Put the label back on the member's root disk.

     # disklabel -rRt advfs dsk2 boot_partition.lbl 
  8. Allow the member to start using the swap partition.

     # swapon -a /dev/disk/dsk2b 
  9. Add the vm:swapdevice attribute to the member's sysconfigtab file.

     # cat > swap_noLSM.stanza
     vm:
           swapdevice = /dev/disk/dsk2b
     ^D
     # sysconfigdb -m -f swap_noLSM.stanza vm 



