Certification Objective 8.05-Advanced Partitioning: Software RAID


A Redundant Array of Independent Disks (RAID) is an array of disks that can preserve your data even if a catastrophic failure occurs on one of the disks. While some versions of RAID make complete copies of your data, others use parity information to allow your computer to rebuild the data from a lost disk.

Linux RAID has come a long way. A substantial number of hardware RAID products support Linux, especially those from name-brand PC manufacturers. Dedicated RAID hardware can ensure the integrity of your data even if there is a catastrophic physical failure on one of the disks. Alternatively, you can configure software-based RAID on multiple partitions on the same physical disk. While this can protect you from a failure on a specific hard drive sector, it does not protect your data if the entire physical hard drive fails.

Depending on the definitions you use, RAID has nine or ten different levels, which provide different degrees of data redundancy; combinations of these levels are also possible. Several levels of software RAID are supported directly by RHEL: levels 0, 1, 5, and 6. Hardware RAID uses a RAID controller connected to an array of several hard disks, and a driver must be installed to use the controller. Most production RAID is hardware based; when properly configured, every RAID level except RAID 0 can survive the failure of one drive without losing the data in the array.

Linux, meanwhile, offers a software solution to RAID. Once RAID is configured on a sufficient number of partitions, Linux can use those partitions just as it would any other block device. However, to get real redundancy, it's up to you to make sure that each partition in a Linux software RAID array is configured on a different physical hard disk.

On the Job 

The RAID md device is a meta device. In other words, it is a composite of two or more other block devices, such as /dev/hda1 and /dev/hdb1, that serve as the components of a RAID array.

The following are the basic RAID levels supported on RHEL.

RAID 0

This level of RAID makes it faster to read and write to the hard drives. However, RAID 0 provides no data redundancy. It requires at least two hard disks.

Reads and writes to the hard disks are done in parallel; in other words, to two or more hard disks simultaneously. All hard drives in a RAID 0 array are filled equally. But since RAID 0 does not provide data redundancy, a failure of any one of the drives will result in total data loss. RAID 0 is also known as striping without parity.
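
If you want to experiment with RAID 0, the mdadm command described later in this chapter works here as well. The following is only a sketch; the member partitions /dev/sdb1 and /dev/sdc1 are placeholders for two RAID partitions on different physical disks.

 # mdadm --create --verbose /dev/md0 --level=0 \
     --raid-devices=2 /dev/sdb1 /dev/sdc1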

RAID 1

This level of RAID mirrors information between two disks (or two sets of disks; see RAID 10). In other words, the same set of information is written to each disk. If one disk is damaged or removed, all of the data is stored on the other hard disk. The disadvantage of RAID 1 is that data has to be written twice, which can reduce performance. You can come close to maintaining the same level of performance if you also use separate hard disk controllers, which prevents the hard disk controller from becoming a bottleneck. RAID 1 is relatively expensive. To support RAID 1, you need an additional hard disk for every hard disk worth of data. RAID 1 is also known as disk mirroring.

RAID 4

While this level of RAID is not directly supported by the current Linux distributions associated with Red Hat, it is still supported by the current Linux kernel. RAID 4 requires three or more disks. As with RAID 0, data reads and writes are done in parallel to all disks. One dedicated disk maintains the parity information, which can be used to reconstruct the data. Reliability is improved, but since parity information is updated with every write operation, the parity disk can become a bottleneck on the system. RAID 4 is also known as disk striping with dedicated parity.

RAID 5

Like RAID 4, RAID 5 requires three or more disks. Unlike RAID 4, RAID 5 distributes, or stripes, parity information evenly across all the disks. If one disk fails, the data can be reconstructed from the parity data on the remaining disks. The array does not stop; all data is still available even after a single disk failure. RAID 5 is the preferred choice in most cases: the performance is good, data integrity is ensured, and only one disk's worth of space is lost to parity data. RAID 5 is also known as disk striping with distributed parity.
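
The creation command follows the same pattern as the other mdadm examples in this chapter. This is a sketch only; the three partitions shown (/dev/sdb1, /dev/sdc1, and /dev/sdd1) are assumptions and should ideally sit on three different physical disks.

 # mdadm --create --verbose /dev/md0 --level=5 \
     --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1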

RAID 6

RAID 6 goes one better than RAID 5. While it requires four or more disks, it maintains two sets of parity data and can survive the failure of two member disks in the array.

RAID 10

I include RAID 10 solely to illustrate one way you can combine RAID levels. RAID 10 is a combination of RAID 1 and RAID 0, and requires a minimum of four disks. First, pairs of disks are organized into RAID 1 mirrors, each with its own device file, such as /dev/md0 and /dev/md1. These mirrored devices are then striped together. This combines the speed advantages of RAID 0 with the data redundancy associated with mirroring. There are variations: for example, RAID 0+1 mirrors two RAID 0 stripe sets, and RAID 50 stripes data across multiple RAID 5 arrays.
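
The md driver also includes a raid10 personality; if it is available in your kernel, mdadm can build this kind of array in a single step rather than nesting two arrays by hand. The following is a sketch only, and the four partitions are placeholders.

 # mdadm --create --verbose /dev/md2 --level=10 \
     --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1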

On the Job 

Hardware RAID systems should be hot-swappable. In other words, if one disk fails, the administrator can replace the failed disk while the server is still running, and the system will then automatically rebuild the data onto the new disk. A software RAID system, by contrast, can be configured with two or more partitions from the same physical disk; such a configuration provides no real redundancy, because a failure of that one disk takes out every member of the array. Alternatively, you may be able to set up spare disks on your servers; RAID can automatically rebuild data from a lost hard drive onto properly configured spare disks.

RAID in Practice

RAID is usually associated with a substantial amount of data on a server; it's not uncommon to have a couple dozen hard disks working together in a RAID array. That much data can be rather valuable.

Inside the Exam

Creating RAID Arrays

During the Installation and Configuration portion of the Red Hat exams, it's generally easier to do as much as possible during the installation process. If you're asked to create a RAID array, it's easiest to do so with Disk Druid, which works only during installation. You can create RAID arrays once RHEL is installed, but as you'll see in the following instructions, it is more time consuming and involves a process that is more difficult to remember.

However, if you're required to create a RAID array during your exam and forget to create it during the installation process, not all is lost. You can still use the tools described in this chapter to create and configure RAID arrays during the exam. And the skills you learn here can serve you well throughout your career.


If continued performance through a hardware failure is important, you can assign additional disks for failover, which sets up spare disks for the RAID array. When one disk fails, it is marked as bad. The data is almost immediately reconstructed on the first spare disk, resulting in little or no downtime.
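
With software RAID, you can reserve a spare when you create the array; the md driver then rebuilds onto it automatically if a member fails. The following is a sketch, and the partition names are placeholders; the third partition listed becomes the spare.

 # mdadm --create --verbose /dev/md0 --level=1 \
     --raid-devices=2 --spare-devices=1 \
     /dev/sdb1 /dev/sdc1 /dev/sdd1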

Reviewing an Existing RAID Array

If you created a RAID array during the installation process, you'll see it in the /proc/mdstat file. For example, I see the following on my system:

 # cat /proc/mdstat
 Personalities : [raid6] [raid5] [raid4]
 md0 : active raid6 sda13[2] sda12[1] sda10[0] sda9[4] sda8[3]
       312576 blocks level 6, 256k chunk, algorithm 2 [5/5] [UUUUU]
 unused devices: <none>

Yes, I know, using RAID partitions from the same hard drive violates good practice. But my personal resources (and, I suspect, those of many exam sites, despite the price) have limits. As you can see, this is a RAID 6 array, associated with device file md0, /dev/md0. You can find out more about this array with the following command:

 # mdadm --detail /dev/md0
 /dev/md0:
         Version : 00.90.03
   Creation Time : Tue Mar 21 04:13:45 2007
      Raid Level : raid6
      Array Size : 312576 (305.30 MiB 320.08 MB)
     Device Size : 104192 (101.77 MiB 106.69 MB)
    Raid Devices : 5
   Total Devices : 5
 Preferred Minor : 0
     Persistence : Superblock is persistent

     Update Time : Wed Dec 20 09:38:43 2006
           State : clean
  Active Devices : 5
 Working Devices : 5
  Failed Devices : 0
   Spare Devices : 0

      Chunk Size : 256K

            UUID : 8d85b38a:0ba072fc:858dfbb2:ba77a998
          Events : 0.4

     Number   Major   Minor   RaidDevice   State
        0       3       10        0        active sync   /dev/sda10
        1       3       12        1        active sync   /dev/sda12
        2       3       13        2        active sync   /dev/sda13
        3       3        8        3        active sync   /dev/sda8
        4       3        9        4        active sync   /dev/sda9

As you can see, this is a RAID 6 array, which requires at least four partitions. It can handle the failure of two partitions. If there were a spare device, the number of Total Devices would exceed the number of Raid Devices.
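
If you want to see that for yourself, you can add a spare partition to an existing array and then review the device counts. This is a sketch; the partition shown here, /dev/sda14, is hypothetical.

 # mdadm --verbose /dev/md0 -a /dev/sda14
 # mdadm --detail /dev/md0 | grep Devices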

Modifying an Existing RAID Array

Modifying an existing RAID array is a straightforward process. You can simulate a failure and remove the failed partition from the array with the following command. (I suggest that you add --verbose to get as much information as possible.)

 # mdadm --verbose /dev/md0 -f /dev/sda13 -r /dev/sda13
 mdadm: set /dev/sda13 faulty in /dev/md0
 mdadm: hot removed /dev/sda13

You can reverse the process; the same mdadm command with the -a switch adds the partition of your choice back to the array:

 # mdadm --verbose /dev/md0 -a /dev/sda13
 mdadm: re-added /dev/sda13

It makes sense to review the results after each command with cat /proc/mdstat or mdadm --detail /dev/md0.

Creating a New RAID Array

Creating a new RAID array is a straightforward process. The first step is to create RAID partitions. You can do so as described earlier using either parted or fdisk. In this section, I'll show you how to create a simple RAID 1 array of two partitions. I assume that there are two partitions already available: /dev/sdb1 and /dev/sdb2. Now create a simple array:

 # mdadm --create --verbose /dev/md1 --level=1 \
     --raid-devices=2 /dev/sdb1 /dev/sdb2
 mdadm: size set to 97536k
 mdadm: array /dev/md1 started.

Now it's time to format the new device, presumably to the default ext3 filesystem:

 # mkfs.ext3 /dev/md1 

You can now mount the filesystem of your choice on this array. Just remember that if you want to make this permanent, you'll have to add it to your /etc/fstab. For example, to make it work with /tmp, add the following directive to that file:

 /dev/md1 /tmp ext3 defaults 0 0 
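
You can check the new /etc/fstab directive without rebooting. This is a sketch that assumes /dev/md1 is not currently mounted anywhere else.

 # mount -a
 # df -h /tmp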

Exam Watch

Remember that you may not get credit for your work unless your changes survive a reboot.


Exercise 8-2: Mirroring the /home Partition with Software RAID


Don't do this exercise on a production computer. If you have a computer with Red Hat Enterprise Linux already installed with several different physical hard drives that you can use for testing, that is best. One alternative is to use virtual machine technology such as VMware or Xen, which can allow you to set up these exercises with minimal risk to a production system. You can also set up several IDE and SCSI hard disks on a VMware machine. When you're ready, use the Linux parted or fdisk techniques, described in Chapter 4, and add a RAID partition to two different hard drives.

Using the following steps, you can mirror the contents of the /home directory onto a RAID 1 array built from the hda5 and hdb5 partitions. (If your partition devices are different, substitute accordingly.)

This exercise assumes you haven't created a RAID array before. If you have, check the /proc/mdstat file, make a note of the existing arrays, and take the next device file in sequence. For example, if there's already a /dev/md0 array, plan for a /dev/md1 array in this exercise.

If you're making changes on a production computer, back up the data from the /home directory first. Otherwise, all user data in /home could be lost.

  1. Mark the partition ID of each of the two partitions as type fd (Linux raid autodetect) using the Linux fdisk utility. There are equivalent steps available in parted.

     # fdisk /dev/hda
     Command (m for help): t
     Partition number (1-5): 5
     Partition ID (L to list options): fd
     Command (m for help): w

     # fdisk /dev/hdb
     Command (m for help): t
     Partition number (1-5): 5
     Partition ID (L to list options): fd
     Command (m for help): w

  2. Make sure the changes are written and recognized by the kernel. The parted utility writes changes immediately; with fdisk, write them with the w command, and then run partprobe or reboot so the kernel rereads the partition table.

  3. Create a RAID array with the appropriate mdadm command. For /dev/hda5 and /dev/hdb5, you can create it with the following:

     # mdadm --create /dev/md0 --level=1 --raid-devices=2 \
         /dev/hda5 /dev/hdb5

  4. Confirm the changes; run the following commands:

     # cat /proc/mdstat
     # mdadm --detail /dev/md0

  5. Now format the newly created RAID device:

     # mkfs.ext3 /dev/md0 

  6. Now mount it on a test directory; I often create a test/ subdirectory in my home directory for this purpose:

     # mount /dev/md0 /root/test 

  7. Next, copy all files from the current /home directory. Here's a simple method that copies all files and subdirectories of /home:

     # cp -ar /home/. /root/test/ 

  8. Unmount the test subdirectory:

     # umount /dev/md0 

  9. Now you should be able to implement this change in /etc/fstab. Remember that during the exam, you may not get full credit for your work unless your Linux system mounts the directory on the RAID device. Based on the parameters described in this exercise, the directive would be

     /dev/md0   /home   ext3   defaults  0 0 

  10. Now reboot and see what happens. If the /home directory partition contains the files of your users, you've succeeded; the commands shown after this exercise offer a quick way to confirm. Otherwise, remove the directive added in step 9 from /etc/fstab and reboot again.
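
After the reboot, a quick check along the following lines confirms that /home is mounted from the new RAID device and that both mirrors are active. This is a sketch based on the device names assumed in this exercise.

     # df -h /home
     # cat /proc/mdstat
     # mdadm --detail /dev/md0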



