Disk Systems


By integrating controllers into the disk assemblies, we begin to have complete storage systems. These come in a wide variety of configurations and offerings, but the basic model is an attached controller that manages multiple disk drives within the enclosure.

As disk drives are linked together, they form an array. This is used for basic capacity and performance enhancements. Disks configured in this manner can be used individually as Disk1 through Disk4, as shown in Figure 6-9, or they can be used in an integrated fashion by combining the capacities of the array and using them as one large disk. For example, in Figure 6-9, Disk1 through Disk4 can be combined to form a virtual disk V, which can use the entire capacity of the array. In this scenario, application I/O sees disk V as one large disk drive even though it is only a virtual representation of the four physical drives. The application and the operating system understand that data can be stored on disk drive V, and they let the controller take care of where the data is actually written using the pool of disks 1 through 4.

Figure 6-9: Disk arrays
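
To make the virtual disk V in Figure 6-9 concrete, the following sketch shows one way a controller could translate a virtual block address into a physical drive and an offset on that drive. The drive names, block counts, and simple concatenation layout are illustrative assumptions, not the algorithm of any particular product.

    # Hypothetical capacities, in blocks, for the four drives of Figure 6-9.
    DISK_BLOCKS = {"Disk1": 1000, "Disk2": 1000, "Disk3": 1000, "Disk4": 1000}

    def map_virtual_block(virtual_block):
        """Translate a block address on virtual disk V to (physical disk, block)."""
        remaining = virtual_block
        for disk, size in DISK_BLOCKS.items():
            if remaining < size:
                return disk, remaining
            remaining -= size
        raise ValueError("block address beyond the capacity of virtual disk V")

    # The operating system addresses disk V as a single 4,000-block drive;
    # the controller resolves each request to one of the physical drives.
    print(map_virtual_block(2500))   # ('Disk3', 500)

The point of the sketch is only that the translation is mechanical and invisible to the application, which is what allows the controller to manage the pool of disks 1 through 4 on its own.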

This forms the basis for advanced functionality in storage systems, such as Redundant Arrays of Independent Disks (RAID) and the concept of storage virtualization. RAID functionality provides disk redundancy, fault-tolerant configurations, and data protection resiliency. All of this is facilitated through the controller mechanisms discussed previously and illustrated in Figure 6-6. RAID's decoupling of the physical drives from the application software's I/O requests has driven the usage and advancement of storage virtualization technologies and products.

Letting the server's operating system believe it has a pool of storage capacity frees up the I/O manager services, allowing it to perform other tasks, and facilitates the development of applications without low-level coding of specific I/O calls. Remember, there is no such thing as a free lunch: the translation of calls to the virtual pool of data has to be processed either in the storage controller or in third-party storage software running on the server, or perhaps in a combination of both. Either way, the overhead continues to exist, although it may be offloaded to RAID processing within the controller.

Disk Arrays

The first level of storage array configuration is JBOD (Just a Bunch Of Disks), depicted in Figure 6-9. JBOD links a set of disk drives together to form an array where each drive becomes an addressable unit. These provide additional capacity; however, they do not provide any fault resiliency in the event of an inoperable drive. Partitioning data throughout the disks and providing a layer of virtualization services can be done through software, which is generally part of the I/O management of the operating system. These functions are also available through third-party software applications.

Although these functions offer a level of data redundancy (depending on the sophistication of the software), they do not provide any method of fault tolerance for continuous operations in the event of a lost disk drive. Consequently, while they offer some protection against data loss, they do not guarantee that the data remains available if disk drives become inoperable within the array. Functions that provide continuous operations and fault resiliency are provided by RAID.

RAID Storage Arrays

RAID is, as its name implies, a redundant array of independent disks that becomes an addressable unit made up of separate and independent disk drives (shown in Figure 6-10). The main difference from JBOD arrays is that RAID partitions data throughout the array and provides recovery functions. The recovery functions rely on disk parity information, which is calculated so that missing data from a failed drive can be reassembled from the remaining drives within the array. The data is distributed throughout the array in the manner most effective for the recovery and protection strategy. RAID has several levels in which these recovery and protection strategies can be implemented.

Figure 6-10: The basic RAID architecture for a storage array
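
At the heart of these recovery functions is an exclusive-OR (XOR) parity calculation across the blocks of a stripe. The short sketch below, with made-up block contents and a three-data-drives-plus-parity layout chosen only for illustration, shows how the XOR of the surviving blocks and the parity block reproduces the data from a failed drive.

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                result[i] ^= b
        return bytes(result)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks held on three data drives
    parity = xor_blocks(data)            # parity block stored in the array

    # The drive holding the second block fails; XOR of the survivors and the
    # parity block reassembles the missing data.
    recovered = xor_blocks([data[0], data[2], parity])
    assert recovered == b"BBBB"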

The various RAID levels of implementation provide recovery and data protection appropriate to the application's and user's continuity requirements. This allows for performance, data protection, and automatic recovery from drive failures within the array. RAID has become the de facto standard for disk hardware fault tolerance functions. As an accepted standard, all storage manufacturers offer it; however, storage vendors have also evolved a diversity of proprietary functions that define the type of fault-tolerant protection offered. These are encapsulated within the storage firmware run on the controller and are referred to as both RAID software functions and RAID firmware.

To make things more confusing, some RAID functions can be provided through software that runs on the server. This differentiation becomes clearer as we look at the standard RAID levels as defined throughout the industry. As such, RAID functionality can be selected through hardware or software components, or a combination of both.

RAID configuration levels range from 0 through 5 for basic RAID services; however, extended RAID levels, although proprietary to vendor implementations, have gone beyond the basic configurations. Given the numerous options for partitioning data and the diversity of ways that parity information can be calculated and partitioned, the options for RAID can be numerous. Through the evolution of real experience and exposure to typical applications and I/O workloads, two of these levels have proven the most valuable, and therefore the most popular: RAID levels 1 and 5. These will most likely be the ones implemented within the data center. However, we will pay particular attention to RAID level 4 as an example of how a vendor can use a proprietary and bundled level.

RAID level 0 simply uses the storage array to partition data without any disk parity information. This provides data partitioning without a recovery mechanism should any of the data become unavailable from a failed drive. Many software-only data protection mechanisms use a RAID level 0 configuration, where the data is written to primary and secondary files. Similar to RAID level 1 (mirroring) but without any disk recovery mechanism, this is sometimes called software mirroring. In other cases, the data is striped across the disk array for performance reasons. This allows the disk write process to take advantage of several head and disk assembly mechanisms for multiple files with high disk write requirements.
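
A minimal sketch of the striping just described follows, assuming a four-drive array and fixed-size blocks; successive blocks rotate across the drives so that a large write engages several head and disk assemblies at once.

    NUM_DISKS = 4   # assumed array width for illustration

    def stripe_location(block):
        """Return (disk index, block offset on that disk) for a striped block."""
        return block % NUM_DISKS, block // NUM_DISKS

    for block in range(8):
        disk, offset = stripe_location(block)
        print(f"virtual block {block} -> disk {disk}, offset {offset}")

Because consecutive blocks land on different drives, RAID 0 improves throughput, but a single failed drive leaves gaps in every file striped across it.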

RAID level 1 offers the ability to maintain a primary and secondary copy of the data. As one file is updated, its secondary is also updated, keeping a safe copy for data protection, much like the software mirroring described for RAID level 0. The major difference is the calculation of parity information, as shown in Figure 6-11. This is used in the event a disk drive fails within the RAID array; in that case, processing continues with the available mirrored copy of the data. When the drive failure is corrected, the unavailable copy of the mirror is reassembled through the parity information contained within the RAID controller. Theoretically, the data is never unavailable and remains online for the application's use.

Figure 6-11: The RAID level 1 configuration
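
The write and read paths of a mirrored pair can be sketched as follows; the in-memory dictionaries are stand-ins for real drives, and the failure flag is only a way to illustrate failover, not how a controller actually detects a failed disk.

    class MirroredPair:
        def __init__(self):
            self.primary = {}     # block -> data on the primary drive
            self.secondary = {}   # block -> data on the mirror
            self.primary_failed = False

        def write(self, block, data):
            # Both members of the mirror are updated on every write.
            if not self.primary_failed:
                self.primary[block] = data
            self.secondary[block] = data

        def read(self, block):
            # If the primary is unavailable, the mirror keeps the data online.
            source = self.secondary if self.primary_failed else self.primary
            return source[block]

    pair = MirroredPair()
    pair.write(0, b"payroll record")
    pair.primary_failed = True            # simulate a drive failure
    assert pair.read(0) == b"payroll record"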

RAID level 5 partitions the data across the RAID array for performance purposes. This is depicted in Figure 6-12, along with the complement of disk parity information, which is also striped across the array. In the event of a disk failure, the missing data is reassembled through the parity information processed by the RAID controller. Once the failed disk drive is replaced, the data is reassembled back onto that drive and the entire array returns to full operation. As with RAID level 1, the data is theoretically never unavailable.

Figure 6-12: The RAID level 5 configuration
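
One way to picture the RAID level 5 layout is as a rotation of the parity block from stripe to stripe, so that no single drive becomes a parity bottleneck. The five-drive geometry and the particular rotation below are assumptions for illustration; actual controllers choose their own layouts.

    NUM_DISKS = 5   # assumed array width for illustration

    def raid5_stripe(stripe):
        """Return which disk holds parity, and which hold data, for a stripe."""
        parity_disk = (NUM_DISKS - 1 - stripe) % NUM_DISKS
        data_disks = [d for d in range(NUM_DISKS) if d != parity_disk]
        return parity_disk, data_disks

    for stripe in range(5):
        parity_disk, data_disks = raid5_stripe(stripe)
        print(f"stripe {stripe}: parity on disk {parity_disk}, data on {data_disks}")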

RAID levels 2 and 3 are derivatives of mirroring and level 5, and use different strategies for storing the parity information. Although RAID level 4 is also a derivative of level 5, it stripes user data across the array and reserves a single disk within the array for parity data (shown in Figure 6-13). This is used in some NAS solutions where the storage is bundled and the RAID array processing is hardcoded within the package; in other words, you don't have the option to move to other RAID levels.

Figure 6-13: The RAID level 4 configuration
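
For contrast with RAID level 5, a RAID level 4 layout keeps the parity on one reserved drive for every stripe, which is the behavior some bundled NAS implementations hardcode. Again, the five-drive geometry is only an illustrative assumption.

    NUM_DISKS = 5
    PARITY_DISK = NUM_DISKS - 1      # one drive is permanently reserved for parity

    def raid4_stripe(stripe):
        """Parity never rotates: every stripe uses the same parity drive."""
        data_disks = list(range(NUM_DISKS - 1))
        return PARITY_DISK, data_disks

    for stripe in range(3):
        parity_disk, data_disks = raid4_stripe(stripe)
        print(f"stripe {stripe}: parity on disk {parity_disk}, data on {data_disks}")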
 

