7.4 VxVM Striping and Mirroring (RAID 0/1 and 1/0)

A major problem with striping on its own is that it leaves us extremely vulnerable to effectively losing all our data stored in the striped volume if one disk in the stripe set fails. One solution is to introduce a level of redundancy into the configuration in the form of mirroring. With VxVM, we can choose to mirror a striped volume (RAID 0/1) or to stripe a mirrored volume (RAID 1/0). The differences may seem inconsequential at this time. We look at both configurations and I hope you will see that RAID 0/1 and RAID 1/0 actually are very different.

In this example, I am creating a mirrored striped volume using ordered allocation.

 

root@hpeos003[] vxassist -g ora1 -o ordered make data2 4G layout=mirror-stripe ora_disk1 ora_disk3 ora_disk2 ora_disk4
root@hpeos003[]

The striping will occur over disks ora_disk1 and ora_disk3, and then the mirroring will occur on disks ora_disk2 and ora_disk4. We can see this by the way the subdisks have been created.

 

root@hpeos003[] vxprint -g ora1 -rtvps data2
RV NAME         RLINK_CNT KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG       KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG       KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME    KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX      DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX      VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
DC NAME         PARENTVOL LOGVOL
SP NAME         SNAPVOL   DCO

dm ora_disk1    c0t4d0    simple   1024     71682048 -
dm ora_disk2    c0t5d0    simple   1024     71682048 -
dm ora_disk3    c4t12d0   simple   1024     71682048 -
dm ora_disk4    c4t13d0   simple   1024     71682048 -

v  data2        -         ENABLED  ACTIVE   4194304  SELECT    -        fsgen
pl data2-01     data2     ENABLED  ACTIVE   4194304  STRIPE    2/64     RW
sd ora_disk1-03 data2-01  ora_disk1 8738144 2097152  0/0       c0t4d0   ENA
sd ora_disk3-02 data2-01  ora_disk3 3495264 2097152  1/0       c4t12d0  ENA
pl data2-02     data2     ENABLED  ACTIVE   4194304  STRIPE    2/64     RW
sd ora_disk2-03 data2-02  ora_disk2 3597664 2097152  0/0       c0t5d0   ENA
sd ora_disk4-03 data2-02  ora_disk4 5345280 2097152  1/0       c4t13d0  ENA
root@hpeos003[]

Diagrammatically, volume data2 would look something like what we see in Figure 7-3:

Figure 7-3. A mirror-stripe volume.
graphics/07fig03.gif

In this configuration, the addition of a second plex gives us redundancy for the volume. Should a disk fail, e.g., ora_disk1, plex data2-01 becomes detached. This has no effect on users accessing their data because plex data2-02 is still online. The problems come when we lose a second disk. If we lose a second disk, e.g., ora_disk2, its associated plex (data2-02) also becomes detached. At this point, users lose access to their data because we have no attached, online plexes. This layout is referred to as a traditional mirror. As far as RAID levels are concerned, this is known as RAID 0/1; the mirroring occurs after, or above, the striping. In a traditional mirror, the probability of a volume surviving the failure of two disks is not great. In fact, if we think about our example above, the volume will survive a failure only if the two disks that fail belong to the same plex; for example, if ora_disk1 and ora_disk3 both fail, only plex data2-01 becomes detached. Similarly, if ora_disk2 and ora_disk4 fail, only plex data2-02 becomes detached. Any other combination of failures results in both plexes becoming detached and the volume being inaccessible. To alleviate the problem of losing two disks, a traditional mirror would employ a third plex and two additional disks. With VxVM there is an alternative, known as a stripe-mirror layout.
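To make the failure analysis concrete, here is a small simulation sketch (in Python, not a VxVM tool) of the mirror-stripe volume data2 above. The disk and plex names match the vxprint output; the rule that a plex is detached as soon as any of its member disks fails is taken from the text.

```python
from itertools import combinations

# Model of the mirror-stripe (RAID 0/1) volume data2: each plex
# stripes across two disks, and a plex is detached as soon as any
# one of its member disks fails.
plexes = {
    "data2-01": {"ora_disk1", "ora_disk3"},
    "data2-02": {"ora_disk2", "ora_disk4"},
}

def volume_up(failed_disks):
    # The volume stays up while at least one plex has no failed disk.
    return any(disks.isdisjoint(failed_disks) for disks in plexes.values())

disks = sorted(set().union(*plexes.values()))
for pair in combinations(disks, 2):
    print(pair, "Up" if volume_up(set(pair)) else "Down")
```

Running this prints all six two-disk failure combinations; only the two pairs that fall entirely within one plex leave the volume up.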

In the context of RAID levels, a stripe-mirror layout is RAID 1/0, where the mirroring occurs before, or below, the level of the striping. If we were to change the layout of our volume above, we would somehow have to establish the mirroring first; in other words, ora_disk1 and ora_disk2 would need individual plexes associated with them. Similarly, ora_disk3 and ora_disk4 would need to be mirrored. The resulting plexes could form some kind of intermediate subvolumes that could then form the basis of our striping. This is exactly what a stripe-mirror layout does. The subvolumes and intervening layers make a stripe-mirror layout a layered volume. The underlying concepts and rules of VxVM still hold true for layered volumes, so a volume is made up of plexes, and plexes are made up of subdisks. For layered volumes, VxVM creates additional subvolumes; subvolumes allow VxVM to adhere to its own configuration rules. I will create a stripe-mirror volume called data3. The command is not too difficult; it's getting your head around the underlying concept that can be a little tricky:

 

root@hpeos003[] vxassist -g ora1 -o ordered make data3 4G layout=stripe-mirror ora_disk1 ora_disk3 ora_disk2 ora_disk4
root@hpeos003[]
root@hpeos003[] vxprint -g ora1 -rtvps data3
RV NAME         RLINK_CNT KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG       KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG       KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME    KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX      DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX      VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
DC NAME         PARENTVOL LOGVOL
SP NAME         SNAPVOL   DCO

dm ora_disk1    c0t4d0    simple   1024     71682048 -
dm ora_disk2    c0t5d0    simple   1024     71682048 -
dm ora_disk3    c4t12d0   simple   1024     71682048 -
dm ora_disk4    c4t13d0   simple   1024     71682048 -

v  data3        -         ENABLED  ACTIVE   4194304  SELECT    data3-03 fsgen
pl data3-03     data3     ENABLED  ACTIVE   4194304  STRIPE    2/64     RW
sv data3-S01    data3-03  data3-L01 1       2097152  0/0       2/2      ENA
v2 data3-L01    -         ENABLED  ACTIVE   2097152  SELECT    -        fsgen
p2 data3-P01    data3-L01 ENABLED  ACTIVE   2097152  CONCAT    -        RW
s2 ora_disk1-05 data3-P01 ora_disk1 10835296 2097152 0         c0t4d0   ENA
p2 data3-P02    data3-L01 ENABLED  ACTIVE   2097152  CONCAT    -        RW
s2 ora_disk2-05 data3-P02 ora_disk2 5694816 2097152  0         c0t5d0   ENA
sv data3-S02    data3-03  data3-L02 1       2097152  1/0       2/2      ENA
v2 data3-L02    -         ENABLED  ACTIVE   2097152  SELECT    -        fsgen
p2 data3-P03    data3-L02 ENABLED  ACTIVE   2097152  CONCAT    -        RW
s2 ora_disk3-04 data3-P03 ora_disk3 5592416 2097152  0         c4t12d0  ENA
p2 data3-P04    data3-L02 ENABLED  ACTIVE   2097152  CONCAT    -        RW
s2 ora_disk4-05 data3-P04 ora_disk4 7442432 2097152  0         c4t13d0  ENA
root@hpeos003[]

I never found the vxprint output for layered volumes easy to understand. Here's a picture of what it looks like (see Figure 7-4).

Figure 7-4. A stripe-mirror volume.
graphics/07fig04.gif

As you can see, in order to achieve this, VxVM had to create a number of additional objects. A VxVM object is, on average, 256 bytes in size. If we create a large number of layered volumes, we could run out of space in the private region of the disk. The private region is 1MB by default; we can change its size, but only when the disk is first initialized. Thereafter, the private region cannot be resized. Consequently, we need to be careful in our use of layered volumes. A rule of thumb is to use layered volumes for large volumes, i.e., above 4GB, and non-layered volumes for smaller volumes. Now that we know how a layered volume is constructed, does it give us any improvement in access to our data when we experience two disk failures? Table 7-2 summarizes when we lose access to our data.
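As a rough illustration of that concern, the arithmetic can be sketched as follows. The 256-byte average and the 1MB default come from the text, the record counts come from the two vxprint listings above, and the result is only an upper bound since the private region also holds other metadata.

```python
# Rough capacity estimate for the default 1MB private region, assuming
# ~256 bytes per VxVM configuration record (figure from the text; real
# per-record overhead varies by VxVM version).
PRIVATE_REGION_BYTES = 1 * 1024 * 1024
RECORD_SIZE = 256

# Records per 2-column, 2-way volume, counted from the vxprint output:
#   mirror-stripe: 1 volume + 2 plexes + 4 subdisks
#   stripe-mirror: 1 volume + 1 plex + 2 subvolume records
#                  + 2 layered volumes + 4 plexes + 4 subdisks
records = {
    "mirror-stripe": 1 + 2 + 4,
    "stripe-mirror": 1 + 1 + 2 + 2 + 4 + 4,
}

for layout, n in records.items():
    max_volumes = PRIVATE_REGION_BYTES // (n * RECORD_SIZE)
    print(f"{layout}: {n} records/volume, at most ~{max_volumes} volumes")
```

Even for this small example, the layered layout consumes twice as many configuration records as the traditional mirror, which is why layered volumes eat into the private region faster.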

Table 7-2. Losing Access to a Stripe-Mirror Volume

Volume Status   ora_disk1   ora_disk2   ora_disk3   ora_disk4
Down            FAIL        FAIL
Up              FAIL                    FAIL
Up              FAIL                                FAIL
Up                          FAIL        FAIL
Up                          FAIL                    FAIL
Down                                    FAIL        FAIL

The only time we lose access to the volume is when we lose both disks of an entire column of the stripe set. Any other two-disk loss detaches individual plexes, but every subvolume still has an attached, online plex providing access to the data. Compare this to what would happen if we lost two disks in a mirror-stripe volume (Table 7-3).
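Table 7-2 can be reproduced with the same kind of simulation sketch, this time mirroring each column before striping. The pairing of disks into mirrored columns follows the data3 layout above.

```python
from itertools import combinations

# Model of the stripe-mirror (RAID 1/0) volume data3: each column is
# a mirrored subvolume, which survives while at least one of its two
# disks is still healthy.
columns = [{"ora_disk1", "ora_disk2"},   # subvolume data3-L01
           {"ora_disk3", "ora_disk4"}]   # subvolume data3-L02

def volume_up(failed_disks):
    # The volume stays up while every column keeps a surviving disk.
    return all(col - failed_disks for col in columns)

disks = sorted(set().union(*columns))
for pair in combinations(disks, 2):
    print(pair, "Up" if volume_up(set(pair)) else "Down")
```

Of the six two-disk combinations, only the two that wipe out a whole mirrored column bring the volume down, matching Table 7-2.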

Table 7-3. Losing Access to a Mirror-Stripe Volume

Volume Status   ora_disk1   ora_disk2   ora_disk3   ora_disk4
Down            FAIL        FAIL
Up              FAIL                    FAIL
Down            FAIL                                FAIL
Down                        FAIL        FAIL
Up                          FAIL                    FAIL
Down                                    FAIL        FAIL

The problem is that if a disk fails in a mirror-stripe layout, the entire plex is detached; this is why losing, for example, ora_disk1 and ora_disk4 causes the volume to be in a Down state, whereas in a stripe-mirror layout, it remains Up. As we add more subdisks to each plex, the chance of a mirror-stripe volume surviving a two-disk failure approaches but never equals 50 percent. For a stripe-mirror layout, as we add more subdisks to a plex, the chance of surviving a two-disk failure approaches but never equals 100 percent. We have effectively halved the risk of failure.
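The "approaches but never equals" claims can be checked with a little combinatorics. This sketch assumes a two-way layout (two plexes, or two-disk mirrored columns) with n columns, i.e., 2n disks, applying the failure rules described above; it is a back-of-the-envelope model, not VxVM behavior measured directly.

```python
from math import comb

# Probability that a random two-disk failure leaves the volume up,
# for a two-way layout with n columns (2n disks in total).

def mirror_stripe_survival(n):
    # RAID 0/1 survives only if both failed disks sit in the same plex.
    return 2 * comb(n, 2) / comb(2 * n, 2)

def stripe_mirror_survival(n):
    # RAID 1/0 fails only if both halves of one mirrored column fail.
    return 1 - n / comb(2 * n, 2)

for n in (2, 4, 8, 100):
    print(n, round(mirror_stripe_survival(n), 3),
          round(stripe_mirror_survival(n), 3))
```

For n = 2 these give 2/6 and 4/6, matching Tables 7-3 and 7-2; as n grows, the first tends toward (but never reaches) 50 percent and the second toward (but never reaches) 100 percent.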

If we are using the GUI (/opt/VRTSob/bin/vea), a stripe-mirror is known as a Striped Pro volume. There is also a layered volume layout policy known as concat-mirror. This works in a similar way to a stripe-mirror, except that the top-level volume has one concatenated plex and the component subdisks are mirrored. The GUI refers to these volumes as Concatenated Pro volumes.



HP-UX CSE(c) Official Study Guide and Desk Reference
ISBN: N/A
EAN: N/A
Year: 2006
Pages: 434
