7.11 Other VxVM Tasks

     


VxVM is a huge topic in itself. Before we go on to something else, we should mention some other aspects of the software that may prove useful.

7.11.1 Deport and import of a disk group

The concept of deporting and importing disk groups is not new to HP-UX administrators; in LVM, it is known as vgexport and vgimport. The ideas surrounding VxVM deporting and importing are exactly the same: you deport a disk group so that your system disassociates itself from the entire disk group, while another node can then import the disk group to gain access to the data held on its disks. A classic example of this is a High Availability cluster such as Serviceguard, where one node (usually) will have exclusive access to the disk group. Multi-system, simultaneous access to a disk group goes beyond this and requires the shared flag to be set for the disk group (see vxdg set). There is also the Cluster Volume Manager product to consider for such a solution.

Before deporting a disk group, we should unmount all active filesystems and ideally stop all volumes. This requires stopping all user processes accessing these volumes. In this example, I am deporting the ora1 disk group from node hpeos003:

 

 root@hpeos003[] vxdg list
 NAME         STATE           ID
 rootdg       enabled  1067611334.1025.hpeos003
 ora1         enabled  1067622419.1110.hpeos003
 root@hpeos003[]
 root@hpeos003[] vxvol -g ora1 stopall
 root@hpeos003[] vxdg deport ora1
 root@hpeos003[] vxdg list
 NAME         STATE           ID
 rootdg       enabled  1067611334.1025.hpeos003
 root@hpeos003[]
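The "unmount first, then stop the volumes" precondition lends itself to a small guard script. The sketch below is illustrative only: the dg_busy helper is my own invention, not a VxVM command, and it assumes that filesystems from a disk group always appear in the mount table with device paths of the form /dev/vx/dsk/<dg>/<volume>.

```shell
# Hypothetical pre-deport guard. Assumption: mounted VxVM filesystems show
# device paths of the form /dev/vx/dsk/<dg>/<volume> in the mount table.
dg_busy() {
    mount 2>/dev/null | grep -q "/dev/vx/dsk/$1/"
}

dg=ora1
if dg_busy "$dg"; then
    echo "filesystems in $dg still mounted; unmount them before deporting" >&2
else
    echo "ok to deport $dg"
    # vxvol -g "$dg" stopall && vxdg deport "$dg"
fi
```

On a node with no ora1 filesystems mounted, this prints "ok to deport ora1", and the commented-out stop/deport sequence could then be run for real.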

The deporting host can optionally specify a new hostid: that of the machine that will be doing the importing. The hostid is stored on the disk to avoid two nodes trying to access the disk group simultaneously. I will now import this disk group on node hpeos004, clearing any outstanding import locks (-C) to ensure that there is no confusion that this is now my disk group.

 

 root@hpeos004[] vxdg list
 NAME         STATE           ID
 rootdg       enabled  1068612545.1025.hpeos004
 root@hpeos004[]
 root@hpeos004[] vxdg -C import ora1
 root@hpeos004[] vxdg list
 NAME         STATE           ID
 rootdg       enabled  1068612545.1025.hpeos004
 ora1         enabled  1067622419.1110.hpeos003
 root@hpeos004[]

Once imported, all the volumes in a disk group are DISABLED, so we need to remember to start all the volumes to gain access to the data.

 

 root@hpeos004[] vxvol -g ora1 startall
 root@hpeos004[]
 root@hpeos004[] vxprint -g ora1 | more
 TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS STATE    TUTIL0  PUTIL0
 dg ora1         ora1         -        -        -      -        -       -
 dm ora_disk1    c5t4d0       -        71682048 -      -        -       -
 dm ora_disk2    c5t5d0       -        -        -      -        -       -
 dm ora_disk3    c0t12d0      -        71682048 -      -        -       -
 dm ora_disk4    c0t13d0      -        71682048 -      -        -       -
 dm ora_spare    c0t14d0      -        71682048 -      SPARE    -       -
 v  archive      raid5        ENABLED  4194304  -      ACTIVE   -       -
 pl archive-01   archive      ENABLED  4194304  -      ACTIVE   -       -
 sd ora_disk3-UR-002 archive-01 ENABLED 2097152 0      -        -       -
 sd ora_disk2-02 archive-01   ENABLED  2097152  0      -        -       -
 sd ora_disk4-04 archive-01   ENABLED  2097152  0      -        -       -
 pl archive-03   archive      ENABLED  1440     -      LOG      -       -
 sd ora_spare-02 archive-03   ENABLED  1440     0      -        -       -
 v  chkpt1       fsgen        ENABLED  5242880  -      ACTIVE   -       -
 pl chkpt1-01    chkpt1       ENABLED  5242880  -      ACTIVE   -       -
 sd ora_disk4-01 chkpt1-01    ENABLED  5242880  0      -        -       -
 pl chkpt1-02    chkpt1       ENABLED  5242880  -      ACTIVE   -       -
 sd ora_spare-03 chkpt1-02    ENABLED  5242880  0      -        -       -
 ...
 root@hpeos004[]

If the import failed, I can force an import to happen with the -f option. If a disk is missing or has become corrupt, the target system may want to import the disk group anyway, in order to access and possibly to try to repair whatever data is available. A problem with using the force option (-f) is that we could have two systems access the same disk group simultaneously; that's a very scary thought, and it should be avoided at all costs. In a Serviceguard cluster, this is performed along with clearing the hostid locks (-C) and assigning a temporary name to the disk group (-t). This is undertaken only as a last resort in order for one host to be able to import the disk group (possibly to effect repairs).

We can deport a disk group under a different name. This effectively allows us to rename the disk group, because we can immediately import it straight back into our system. We can also perform a rename when we intend to deport the disk group to another node. Thanks to the VxVM Device Discovery Layer, other nodes will automatically see new devices as they are created/made available. If we ever see an error complaining that VxVM cannot see a particular disk, we can always get VxVM to reread all disks by issuing a vxdctl enable.

7.11.2 Dynamic relayout

Like most things in VxVM, you can perform many tasks while users are accessing the data. Changing the layout of a plex is another such task. There are a number of reasons to change the layout of a plex:

  • To convert a simple concatenated volume to a stripe-mirror volume to achieve redundancy

  • To convert a RAID 5 volume to a mirrored volume for better write performance

  • To convert a mirrored volume to a RAID 5 volume to save space

  • To change stripe unit sizes or add columns to RAID 5 or striped volumes to achieve the desired performance

  • To convert a mirrored concatenated plex to a striped mirrored plex

We must be extremely careful here about the difference between a relayout and a convert. There is a very simple way to remember it:

  • A relayout operation involves copying data at the disk level in order to change the structure of the volume. This operation transforms nonlayered volumes. Any changes or additions to the underlying infrastructure are performed during the relayout process.

  • A convert operation changes the resilience level of a volume, i.e., it converts a volume from nonlayered to layered, or vice versa. The convert operation does not copy data; it only changes the way that the data is referenced. This operation specifically switches between mirror-concat and concat-mirror layouts, or between mirror-stripe and stripe-mirror layouts. You cannot use it to change the number of stripes or the stripe unit width, or to change to other layout types.

There are only a few operations that you cannot perform while a relayout/convert is in progress:

  • Create a snapshot during relayout

  • Change the number of mirrors during relayout

  • Perform multiple relayouts at the same time

  • Perform relayout on a volume with a sparse plex

You also have to appreciate that the work involved in a relayout/convert is considerable. VxVM creates a number of temporary structures during the relayout process in order to facilitate the operation. If the operation is a relayout, large quantities of data are copied to these temporary structures (a scratch-pad area). If our original volume is over 1GB in size, the relayout will consume 1GB of free space while it is under way. We can specify the amount of space to use for the scratch pad: the more space we give it, the quicker the relayout/convert will be and, hence, the less impact it will have on other IO occurring in the system. It should be obvious that we should try to schedule relayouts for out-of-hours, when user IO will, we hope, be at a minimum. In this example, I am performing a relayout on a RAID 5 volume in order to transform it into a concat-mirror volume.

 

 root@hpeos004[] vxprint -g ora1 archive
 TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS STATE    TUTIL0  PUTIL0
 v  archive      raid5        ENABLED  4194304  -      ACTIVE   -       -
 pl archive-01   archive      ENABLED  4194304  -      ACTIVE   -       -
 sd ora_disk3-UR-002 archive-01 ENABLED 2097152 0      -        -       -
 sd ora_disk2-02 archive-01   ENABLED  2097152  0      -        -       -
 sd ora_disk4-04 archive-01   ENABLED  2097152  0      -        -       -
 pl archive-03   archive      ENABLED  1440     -      LOG      -       -
 sd ora_spare-02 archive-03   ENABLED  1440     0      -        -       -
 root@hpeos004[]
 root@hpeos004[] vxassist -g ora1 relayout archive layout=concat-mirror tmpsize=2G
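The scratch-pad sizing described above can be sanity-checked with simple arithmetic. This is a sketch under two stated assumptions: that vxprint LENGTH values are in 1KB sectors (the HP-UX convention) and that the default scratch pad is capped at 1GB, as the text describes.

```shell
# Estimate the scratch pad a relayout of the archive volume would use by
# default. Assumptions (see lead-in): 1 sector = 1KB; default cap = 1GB.
vol_len_sectors=4194304                # LENGTH of archive, from vxprint
vol_bytes=$((vol_len_sectors * 1024))  # 4294967296 bytes = 4GB
gig=$((1024 * 1024 * 1024))
if [ "$vol_bytes" -gt "$gig" ]; then
    tmp_bytes=$gig                     # volume over 1GB: pad stays at 1GB
else
    tmp_bytes=$vol_bytes               # small volume: pad matches volume
fi
echo "default scratch pad: $((tmp_bytes / 1024 / 1024)) MB"
```

Passing tmpsize=2G, as in the example above, simply raises this ceiling, trading free space for a quicker relayout.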

While this command is running, we can monitor the status of the relayout using the vxrelayout command:

 

 root@hpeos004[] vxrelayout -g ora1 status archive
 RAID5, columns=3, stwidth=16 --> CONCAT-MIRROR
 Relayout stopped, 0.00% completed.
 root@hpeos004[]

If, as in this case, the relayout has been stopped, we can start it back up again:

 

 root@hpeos004[] vxrelayout -g ora1 start archive
 root@hpeos004[]

This is a relayout and hence causes data to be copied within the disk group. The resulting volume is a layered volume involving subvolumes. We can see this from the output of vxprint:

 

 root@hpeos004[] vxprint -tvpsr -g ora1 archive
 RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
 RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
 V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
 PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
 SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
 SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
 DC NAME         PARENTVOL    LOGVOL
 SP NAME         SNAPVOL      DCO

 dm ora_disk1    c5t4d0       simple   1024     71682048 -
 dm ora_disk3    c0t12d0      simple   1024     71682048 -
 dm ora_disk4    c0t13d0      simple   1024     71682048 -

 v  archive      -            ENABLED  ACTIVE   4194304  SELECT    -        fsgen
 pl archive-01   archive      ENABLED  ACTIVE   4194304  CONCAT    -        RW
 sv archive-Ds01 archive-01   archive-d01 1     4194304  0         3/3      ENA
 v2 archive-d01  -            ENABLED  ACTIVE   4194304  SELECT    -        fsgen
 p2 archive-dp01 archive-d01  ENABLED  ACTIVE   4194304  CONCAT    -        RW
 s2 ora_disk3-UR-03 archive-dp01 ora_disk3 13034848 2097152 0      c0t12d0  ENA
 s2 ora_disk3-02 archive-dp01 ora_disk3 7689568 2097088  2097152   c0t12d0  ENA
 s2 ora_disk3-03 archive-dp01 ora_disk3 9786656 64       4194240   c0t12d0  ENA
 p2 archive-dp02 archive-d01  ENABLED  ACTIVE   4194304  CONCAT    -        RW
 s2 ora_disk1-06 archive-dp02 ora_disk1 12932448 4194304 0         c5t4d0   ENA
 p2 archive-dp03 archive-d01  ENABLED  ACTIVE   LOGONLY  CONCAT    -        RW
 s2 ora_disk4-07 archive-dp03 ora_disk4 13733856 33      LOG       c0t13d0  ENA
 root@hpeos004[]

Should I want to return this volume to its previous layout policy, the vxrelayout reverse command is always available.

7.11.3 LVM to VxVM conversion

We saw earlier the ability to make a copy of an entire root volume group. There is a similar concept for converting an existing LVM volume group into a VxVM disk group, or back again: the menu-driven vxvmconvert command. The interface is not difficult to navigate:

 

 root@hpeos003[] vxvmconvert

 Volume Manager Support Operations
 Menu: VolumeManager/LVM_Conversion

  1      Analyze LVM Volume Groups for Conversion
  2      Convert LVM Volume Groups to VxVM
  3      Roll back from VxVM to LVM

  list   List disk information
  listvg List LVM Volume Group information

  ?      Display help about menu
  ??     Display help about the menuing system
  q      Exit from menus

 Select an operation to perform: q

 Goodbye.
 root@hpeos003[]

We can't convert a root volume group; that's the job of the vxcp_lvmroot command. There are some issues with converting MWC (Mirror Write Cache) to DRL (Dirty Region Logging), but on the whole the utility seems quite capable.

7.11.4 Dynamic Multipathing (DMP)

Dynamic Multipathing (DMP) is the ability to send IO down multiple controllers to a single disk. LVM has a similar, if slightly less capable, option known as Alternate PV Links, which needs to be set up on a disk-by-disk basis. VxVM goes beyond protecting against the failure of an interface by providing load balancing of IO across multiple paths. As of VxVM 3.2, DMP is automatically enabled at boot time. The vxconfigd daemon uses an internal utility known as vxdiskconfig whenever new devices are connected to the system; this runs ioscan and vxdctl enable to ensure that both HP-UX and VxVM can see the new devices. This mechanism is known as the VxVM Device Discovery Layer (DDL). Support for different types of disk array is managed by the vxddladm command.

 

 root@hpeos003[] vxddladm listsupport
 LIB_NAME                         ARRAY_TYPE   VID          PID
 =======================================================================
 libvxautoraid.sl                 A/A          HP           C3586A
 libvxautoraid.sl                 A/A          HP           C5447A
 libvxautoraid.sl                 A/A          HP           A5257A
 libvxdgc.sl                      A/P          DGC          all
 libvxeccs.sl                     A/A          ECCS         all
 libvxemc.sl                      A/A          EMC          SYMMETRIX
 libvxfc60.sl                     A/P          HP           A5277A
 libvxhds.sl                      A/A          HITACHI      OPEN-*
 libvxhitachi.sl                  A/PG         HITACHI      DF350
 libvxhitachi.sl                  A/PG         HITACHI      DF400
 libvxhitachi.sl                  A/PG         HITACHI      DF500
 libvxnec.sl                      A/A          NEC          DS1200
 libvxnec.sl                      A/A          NEC          DS1200F
 libvxnec.sl                      A/A          NEC          DS3000SL
 libvxnec.sl                      A/A          NEC          DS3000SM
 libvxnec.sl                      A/A          NEC          DS3001
 libvxnec.sl                      A/A          NEC          DS3002
 libvxnec.sl                      A/A          NEC          DS1000
 libvxnec.sl                      A/A          NEC          DS1000F
 libvxnec.sl                      A/A          NEC          DS1100
 libvxnec.sl                      A/A          NEC          DS1100F
 libvxnec.sl                      A/A          NEC          DS3011
 libvxnec.sl                      A/A          NEC          DS1230
 libvxnec.sl                      A/A          NEC          DS450
 libvxnec.sl                      A/A          NEC          DS450F
 libvxnec.sl                      A/A          NEC          iStorage 1000
 libvxnec.sl                      A/A          NEC          iStorage 2000
 libvxnec.sl                      A/A          NEC          iStorage 4000
 libvxshark.sl                    A/A          IBM          2105
 libvxstorcomp.sl                 A/A          StorComp     OmniForce
 libvxva.sl                       A/A          HP           A6188A
 libvxva.sl                       A/A          HP           A6189A
 libvxxp256.sl                    A/A          HP           OPEN-*
 libvxfujitsu.sl                  A/A          FUJITSU      GR710
 libvxfujitsu.sl                  A/A          FUJITSU      GR720
 libvxfujitsu.sl                  A/A          FUJITSU      GR730
 libvxfujitsu.sl                  A/A          FUJITSU      GR740
 libvxfujitsu.sl                  A/A          FUJITSU      GR820
 libvxfujitsu.sl                  A/A          FUJITSU      GR840
 libvxveritas.sl                  A/PF         VERITAS      all
 root@hpeos003[]

We can add and remove support for new disk arrays with this command as well. If we have multiple paths to a device, we should see them if we run the vxdmpadm command. Here, we can get a list of all controllers on our system:

 

 root@hpeos003[] vxdmpadm listctlr all
 CTLR-NAME       ENCLR-TYPE      STATE          ENCLR-NAME
 =========================================================
 c0              OTHER_DISKS     ENABLED      OTHER_DISKS
 c1              OTHER_DISKS     ENABLED      OTHER_DISKS
 c3              OTHER_DISKS     ENABLED      OTHER_DISKS
 c4              OTHER_DISKS     ENABLED      OTHER_DISKS
 c5              OTHER_DISKS     ENABLED      OTHER_DISKS
 root@hpeos003[]

An enclosure type of OTHER_DISKS will, unfortunately, not allow me to configure DMP for these devices. If I wanted to establish which other paths I had to a particular disk, I would use the vxdmpadm command.

 

 root@hpeos003[] vxdmpadm getsubpaths ctlr=c0
 NAME         STATE         PATH-TYPE  DMPNODENAME  ENCLR-TYPE   ENCLR-NAME
 ==========================================================================
 c0t0d0       ENABLED        -        c0t0d0       OTHER_DISKS  OTHER_DISKS
 c0t1d0       ENABLED        -        c0t1d0       OTHER_DISKS  OTHER_DISKS
 c0t2d0       ENABLED        -        c0t2d0       OTHER_DISKS  OTHER_DISKS
 c0t3d0       ENABLED        -        c0t3d0       OTHER_DISKS  OTHER_DISKS
 c0t4d0       ENABLED        -        c0t4d0       OTHER_DISKS  OTHER_DISKS
 c0t5d0       ENABLED        -        c0t5d0       OTHER_DISKS  OTHER_DISKS
 root@hpeos003[]

In this example, the DMPNODENAME is the same as the original name for the device. If I wanted to establish all the paths to an individual device, I could use the following:

 

 root@hpeos003[] vxdmpadm getsubpaths dmpnodename=c0t0d0
 NAME         STATE         PATH-TYPE  CTLR-NAME  ENCLR-TYPE   ENCLR-NAME
 ========================================================================
 c0t0d0       ENABLED        -        c0         OTHER_DISKS  OTHER_DISKS
 root@hpeos003[]

As you can see, I have only one path to these disks. If I had two paths, I could disable IO via one of them with a vxdmpadm disable ctlr=c0 command; IO would then switch to the other active paths.
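Before disabling a controller, it is worth confirming that every affected DMP node still has another live path. The sketch below parses getsubpaths-style output; the sample text is fabricated to show a two-path device (the second path, c4t0d0, is hypothetical) and is embedded as a literal so the parsing itself can be demonstrated.

```shell
# Count ENABLED paths for one DMP node in `vxdmpadm getsubpaths`-style
# output. Column 2 is STATE, column 4 is DMPNODENAME. The sample below is
# fabricated; c4t0d0 is a hypothetical second path to the same DMP node.
subpaths='c0t0d0       ENABLED        -        c0t0d0       OTHER_DISKS  OTHER_DISKS
c4t0d0       ENABLED        -        c0t0d0       OTHER_DISKS  OTHER_DISKS'

live=$(printf '%s\n' "$subpaths" |
    awk '$2 == "ENABLED" && $4 == "c0t0d0" { n++ } END { print n+0 }')
echo "live paths to c0t0d0: $live"
if [ "$live" -gt 1 ]; then
    echo "safe to disable one controller"
fi
```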

As such, there isn't much to configure, to be quite honest. VxVM DMP uses a load-balancing policy to spread IO across all paths to a disk, although some people would argue that it isn't true multipathing because it sends 1MB of data down one path before moving to the next. This behavior is controlled by the kernel parameter dmp_pathswitch_blks_shift. Its default value is 10, which defines a power of two (2^10) of contiguous 1KB blocks sent over a DMP path before switching to another path (hence 2^10 blocks x 1KB = 1MB). For some intelligent disk arrays with large internal caches, tuning this parameter may produce better IO performance for certain IO patterns. As with any change of this nature, a baseline test should be established before attempting any changes; the baseline can then be used to measure any improvement or degradation in performance.
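The arithmetic behind that default is easy to verify; a minimal sketch, assuming 1 block = 1KB as stated above:

```shell
# dmp_pathswitch_blks_shift defines how much contiguous data goes down one
# path before DMP switches: 2^shift blocks of 1KB each.
shift_val=10                   # the default value
blocks=$((1 << shift_val))     # 2^10 = 1024 blocks
bytes=$((blocks * 1024))       # 1024 blocks * 1KB = 1MB
echo "path switch every $bytes bytes ($((bytes / 1024 / 1024)) MB)"
```

Raising the shift sends longer runs down each path, which may suit arrays with large caches; lowering it interleaves the paths more aggressively.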

7.11.5 VxVM diagnostic commands

We have looked at a number of VxVM scenarios and used many of the standard tools to set up and correct problems with our configuration. Interfacing with the internal VxVM structures is undertaken using standard VxVM commands. The kernel itself has no knowledge of the layout of the on-disk VxVM structures. The vxconfigd daemon is the primary interface for reading information on disk and loading the relevant kernel data structures via the vols pseudo-driver. The space inside the private region is tightly packed, so much so that most of the structures within it are not human-readable. Consequently, as well as the standard tools (with man pages), we have utilities in the /etc/vx/diag.d/ directory that can read the structures on disk for us. HP support staff use these tools when diagnosing and troubleshooting particular VxVM problems. For us as advanced administrators, it is useful to know of their existence in case we are attempting some form of recovery ourselves:

 

 root@hpeos003[diag.d] ll
 total 1376
 dr-xr-xr-x   2 bin        bin           1024 Feb 18  2003 macros.d
 -r-xr-xr-x   1 bin        bin          45056 Sep 10  2001 vxaslkey
 -r-xr-xr-x   1 bin        bin          61440 Oct  8  2001 vxautoconfig
 -r-xr-xr-x   1 bin        bin         237568 Oct  8  2001 vxconfigdump
 -r-xr-xr-x   1 bin        bin          94208 Sep 10  2001 vxdmpdbprint
 -r-xr-xr-x   1 bin        bin           6398 Jul 12  2001 vxdmpdebug
 -r-xr-xr-x   1 bin        bin          20480 Sep  4  2001 vxdmpinq
 -r-xr-xr-x   1 bin        bin          77824 Oct  8  2001 vxkprint
 -r-xr-xr-x   1 bin        bin         135168 Oct  8  2001 vxprivutil
 -r-xr-xr-x   1 bin        bin          24576 Sep 17  2001 vxscsi
 root@hpeos003[diag.d]

The standard tool for reading the private region is vxdisk list.

 

 root@hpeos003[diag.d] vxdisk list c3t15d0
 Device:    c3t15d0
 devicetag: c3t15d0
 type:      simple
 hostid:    hpeos003
 disk:      name=disk01 id=1067611348.1048.hpeos003
 timeout:   30
 group:     name=rootdg id=1067611334.1025.hpeos003
 info:      privoffset=128
 flags:     online ready private autoconfig autoimport imported
 pubpaths:  block=/dev/vx/dmp/c3t15d0 char=/dev/vx/rdmp/c3t15d0
 version:   2.2
 iosize:    min=1024 (bytes) max=64 (blocks)
 public:    slice=0 offset=1152 len=35562666
 private:   slice=0 offset=128 len=1024
 update:    time=1067611382 seqno=0.5
 headers:   0 248
 configs:   count=1 len=727
 logs:      count=1 len=110
 Defined regions:
  config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
  config   priv 000249-000744[000496]: copy=01 offset=000231 enabled
  log      priv 000745-000854[000110]: copy=01 offset=000000 enabled
  lockrgn  priv 000855-000919[000065]: part=00 offset=000000
 Multipathing information:
 numpaths:   1
 c3t15d0 state=enabled
 root@hpeos003[diag.d]

We also have the diagnostic tool vxprivutil. This command has a number of options:

  • scan: Prints the private region header

  • list: Prints the private region header and table of contents

  • dumplog: Prints the on-disk kernel log

  • dumpconfig: Prints the on-disk configuration database

  • set: Changes disk attributes, such as dgname, hostid, diskid, dgid, and flags; use with extreme caution

 

 root@hpeos003[diag.d] ./vxprivutil scan /dev/rdsk/c3t15d0
 diskid:  1067611348.1048.hpeos003
 group:   name=rootdg id=1067611334.1025.hpeos003
 flags:   private autoimport
 hostid:  hpeos003
 version: 2.2
 iosize:  1024
 public:  slice=0 offset=1152 len=35562666
 private: slice=0 offset=128 len=1024
 update:  time: 1067611382  seqno: 0.5
 headers: 0 248
 configs: count=1 len=727
 logs:    count=1 len=110
 root@hpeos003[diag.d]

It is still possible to read structures directly off the disk, but this is discouraged due to the tightly packed nature of the data. Here, I am reading the universally unique identifier (uuid) from the header of the disk:

 

 root@hpeos003[diag.d] echo "0x2002C?64c" | adb /dev/dsk/c3t15d0
 2002C:          1067611348.1048.hpeos003
 root@hpeos003[diag.d]

The uuid is not the disk media name, which is not guaranteed to be unique across systems. The uuid is composed of time.seqno.hostname. The time is the time of the disk's initialization as returned by the time(2) system call and is, therefore, the number of seconds since Thursday, 1 January 1970. The seqno is a sequence number that starts at 1024 and changes every time the configuration changes. The hostname is the current system's hostname. Obviously, we would need to know the internal structure of the disk header and/or the configuration database, which is information not readily available! Consequently, we normally stick to the commands and utilities supplied with the product.
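The time.seqno.hostname structure can be pulled apart with plain shell string operations. A minimal sketch, using the uuid read back above:

```shell
# Split a VxVM uuid of the form time.seqno.hostname into its three fields.
uuid="1067611348.1048.hpeos003"
utime=${uuid%%.*}      # disk initialization time, seconds since 1 Jan 1970
rest=${uuid#*.}
seqno=${rest%%.*}      # sequence number; starts at 1024
host=${rest#*.}        # hostname of the initializing system
echo "init=$utime seqno=$seqno host=$host"
```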



HP-UX CSE Official Study Guide and Desk Reference (2006)