RAID 5


RAID 5 is the only one of the original parity RAID definitions still in heavy use today. RAID 2 was defined for a specific type of disk technology that has become obsolete. RAID 3 synchronizes I/O operations over multiple members, but that approach has turned out to be useful only for single-application environments, not for the multitasking, multiprocessing environments that characterize open-systems computing. RAID 4 writes strips of data independently to unsynchronized members and writes corresponding parity strips to a dedicated parity member. The dedicated parity member turns out to be a performance bottleneck in most cases.

NOTE

Network Appliance Filer network attached storage (NAS) systems are an example of RAID 4 implementations that do not create I/O bottlenecks. Through clever use of nonvolatile RAM in the Filer subsystem architecture, the rotational latency and seek times that create bottlenecks are overcome.


RAID 5 also writes data in strips to independent array members, but it moves the parity and data strips around the various members of the array, alleviating the performance bottleneck of a dedicated parity member. Table 9-4 illustrates the interspersing of parity and data in array stripes.

Table 9-4. Parity Distributed Through All Members of a RAID 5 Array

Array Stripe   Member 1         Member 2         Member 3         Member 4
Stripe 1       Strip 1a         Strip 1b         Strip 1c         Parity Strip 1
Stripe 2       Strip 2a         Strip 2b         Parity Strip 2   Strip 2c
Stripe 3       Strip 3a         Parity Strip 3   Strip 3b         Strip 3c
Stripe 4       Parity Strip 4   Strip 4a         Strip 4b         Strip 4c


RAID 5 Write Penalty

When all the data strips in a stripe are written simultaneously, the parity can be calculated and written at the same time. However, in most instances data is being updated in a single strip, and the other strips in the stripe are left unchanged. Whenever data strips are changed in an array, it is necessary to also recalculate the parity strip and rewrite it to its corresponding array member. This process is at the core of what is known as the RAID 5 write penalty.
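For the full-stripe case, the parity strip is simply the bitwise XOR of all the data strips in the stripe. A minimal sketch of that calculation (illustrative Python; the function name and sample data are assumptions, not from the book):

```python
from functools import reduce

def full_stripe_parity(data_strips: list[bytes]) -> bytes:
    """XOR all data strips in a stripe together to produce the parity strip."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_strips)

# Three data strips (Members 1-3); the parity strip goes to the fourth member.
strips = [b"\x0f\x0f", b"\xf0\x01", b"\x3c\x3c"]
print(full_stripe_parity(strips).hex())  # c332
```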

In a nutshell, when a strip is being changed, the old data strip and the corresponding parity strip are read first and XORed together to remove the old strip's contribution to the parity value. For lack of a better term, this is referred to here as the temporary parity. Then the new strip data is XORed with the temporary parity to create the new parity strip. Finally, both the new data strip and the new parity strip are written to their respective members. Obviously, the process of reading old data and parity, making two parity calculations, and writing two strips is somewhat time-consuming. This is especially true when done in host system volume management software, where all the involved reads and writes occur over the complete host-to-storage I/O path.
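A minimal sketch of that read-modify-write sequence (illustrative Python; function names and sample values are assumptions, not from the book) might look like this:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length strips."""
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_parity_update(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """Read-modify-write parity update for a single-strip change."""
    temp_parity = xor_bytes(old_parity, old_data)  # remove the old strip's contribution
    return xor_bytes(temp_parity, new_data)        # add the new strip's contribution

# Old stripe: three data strips and their parity.
strip_a, strip_b, strip_c = b"\x0f", b"\xf0", b"\x3c"
old_parity = xor_bytes(xor_bytes(strip_a, strip_b), strip_c)

# Overwrite strip_b; only the old data, old parity, and new data are needed.
new_b = b"\xaa"
new_parity = rmw_parity_update(strip_b, old_parity, new_b)

# Sanity check: same result as recalculating parity from the full stripe.
assert new_parity == xor_bytes(xor_bytes(strip_a, new_b), strip_c)
print(new_parity.hex())  # 99
```

Both new_b and new_parity would then be written to their respective members, which is where the two reads and two writes of the penalty come from.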

NOTE

Seagate attempted to alleviate some of the pain of the RAID 5 write penalty by adding XOR functions and an initiator function to its disk drives. The idea was that a disk drive in an array could read and write the parity strip for a stripe and use this parity to make both required XOR calculations when new strips were written to the drive. It was a nice idea and has been used to some extent, but it has not caught on in a big way. This is an excellent example of how a disk drive manufacturer attempted to add real, useful value to its products but has not been able to turn the idea into a major business success.



