1.1 Using the SNIA Shared Storage Model

Sharing storage resources among multiple servers or workstations requires a peer-to-peer network that joins targets to initiators. The composition of that network and the type of storage data traversing it vary from one architecture to another. Generally, shared storage architectures divide into storage area networks (SANs) and network-attached storage (NAS). For SANs, the network infrastructure may be Fibre Channel or Gigabit Ethernet, and the type of storage data being transported is block Small Computer Systems Interface (SCSI) data. For NAS, the network infrastructure is typically Ethernet (Fast Ethernet or Gigabit Ethernet), and the type of storage data carried across the network is file-based. At the most abstract level, then, the common denominator between SAN and NAS is that both enable sharing of storage resources by multiple initiators, whether block-based or file-based.
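
The difference is easiest to see from the initiator's point of view. The following sketch (Python, purely illustrative; the device path and mount point in the comments are invented for the example) contrasts block-level access, as a SAN host performs against a SCSI logical unit, with file-level access against a NAS mount:

    BLOCK_SIZE = 512  # assumed sector size for the sketch

    def read_blocks(device: str, lba: int, count: int = 1) -> bytes:
        """SAN-style access: address raw blocks on a LUN, e.g. '/dev/sdb' (hypothetical)."""
        with open(device, "rb") as dev:
            dev.seek(lba * BLOCK_SIZE)           # position by logical block address
            return dev.read(count * BLOCK_SIZE)

    def read_file(path: str) -> bytes:
        """NAS-style access: name a file on an NFS/CIFS mount, e.g. '/mnt/nas/q3.csv'
        (hypothetical). The NAS device, not the client, decides how the file maps to blocks."""
        with open(path, "rb") as f:
            return f.read()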

Understanding the role of direct-attached, SAN, and NAS solutions and fitting them into a coherent IT storage strategy can be a challenge for both technologists and the managers who sign off on major storage acquisitions. The SNIA Shared Storage Model offers a useful framework for understanding the relationships between upper-layer applications and their supporting storage infrastructures. The ability to map current storage deployments to proposed solutions helps to clarify the issues that storage architects are attempting to address and also creates a framework for projecting future requirements and solutions.

As shown in Figure 1-1, the Shared Storage Model establishes the general relationship between user applications that run on servers and hosts and the underlying storage domain. Applications may support user activity such as processing online transactions, mining databases, or serving Web content. Storage-specific applications such as management, backup, cluster serving, or disk-to-disk data replication are grouped within the services subsystem of the storage domain. The model thus distinguishes between end-user or business applications at the upper layer, and secondary applications used to monitor and support the lower-level storage domain.
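
The layering can be summarized in a few lines of code. The sketch below is only a mnemonic for the model's vocabulary; the layer names and the example classification paraphrase the description above rather than quoting the SNIA specification:

    from enum import Enum, auto

    class Layer(Enum):
        APPLICATION = auto()        # business/user applications, outside the storage domain
        FILE_RECORD = auto()        # file/record subsystem
        BLOCK_AGGREGATION = auto()  # host-, network-, or device-based aggregation
        BLOCK = auto()              # physical block subsystem
        SERVICES = auto()           # management, backup, replication, security

    # Example classification of the applications mentioned above
    placement = {
        "online transaction processing": Layer.APPLICATION,
        "Web content serving": Layer.APPLICATION,
        "backup": Layer.SERVICES,
        "disk-to-disk replication": Layer.SERVICES,
    }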

Figure 1-1. The SNIA Shared Storage Model overview


The storage domain subdivides into three main categories: the file/record subsystem, the block aggregation layer, and the block subsystem. The file/record subsystem is the interface between upper-layer applications and storage resources. Database applications such as SQL Server and Oracle use records as their processing units, whereas most other applications expect to process files.

Whether useful information appears to the upper-layer application as records or files, both formats are ultimately stored on disk or tape as contiguous bytes of data known as blocks. The size of a data block can vary from system to system, as can the method of mapping records or files to the blocks of bytes that compose them. In all cases, the storage domain requires some means of associating blocks of data with the appropriate record or file descriptors. This function is depicted as the block aggregation layer, which may be based on the host system, within the storage network, or at the storage device. Having been identified with specific records or files, the blocks themselves are written to or read from physical storage, shown in the Shared Storage Model as the block subsystem.
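
Whatever its location, the block aggregation layer has to maintain essentially the same bookkeeping. The toy mapping below (Python; the 4 KiB block size and the free-block list are assumptions for the example) shows the kind of association it keeps between a file's bytes and the disk blocks that hold them:

    BLOCK_SIZE = 4096  # assumed block size; real systems vary, as noted above

    def file_to_blocks(data: bytes, free_blocks: list[int]) -> dict[int, int]:
        """Map each block-sized chunk of a file to a free on-disk block number.

        Returns {file_offset: disk_block}, the association the block
        aggregation layer maintains between file descriptors and blocks."""
        block_map = {}
        for offset in range(0, len(data), BLOCK_SIZE):
            block_map[offset] = free_blocks.pop(0)   # allocate the next free block
        return block_map

    # A 10 KiB file occupies three 4 KiB blocks, the last one only partly used.
    print(file_to_blocks(b"x" * 10240, free_blocks=[17, 42, 43, 44]))
    # {0: 17, 4096: 42, 8192: 43}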

Also part of the storage domain, but positioned as an auxiliary subsystem, the services subsystem contains a number of storage-specific functions, including management, security, backup, availability, and capacity planning. These services may appear as integrated functions in storage products or as stand-alone software applications used to monitor and administer storage resources. A particular block aggregation method may require a unique management service. NAS devices, for example, may perform backups somewhat differently than their SAN brethren, requiring a separate type of application in the services area.

Considerable engineering effort has been invested in creating products to reliably integrate the file/record, block aggregation, and block subsystems into viable shared storage solutions. Perfecting gigabit transport of block data to ensure data integrity over very high speed serial links, for example, took years of standards development and verification testing. Development of the services subsystem was not feasible until these basic infrastructure issues were resolved. Consequently, the creation of management, security, and other auxiliary services has trailed somewhat behind the deployment of shared storage networks. Today, the lack of enhanced management services inhibits further expansion of shared storage in the market, and the focus of the storage industry is shifting from infrastructure to auxiliary services that will make it easier to install and support shared storage solutions. In using the Shared Storage Model to position your own storage solutions, you should anticipate that new products will become available in the services subsystem to enable management, security, and other functions.

With the layered architecture of the Shared Storage Model as a guide, it is now possible to insert server and storage components to clearly differentiate between direct-attached, SAN, and NAS configurations. As shown in Figure 1-2, direct-attached storage (DAS) extends from the server to the storage target over parallel SCSI cabling. This is the most common storage configuration today, although shared storage is expected to gradually displace DAS over the next few years. In this example, the left side of Figure 1-2 shows a server with logical volume management (LVM) and software RAID (redundant array of independent disks) running on the host system. As the server receives information from the application that must be written to disk, the software RAID stripes blocks of data across multiple disks in the block layer. The host thus performs the block aggregation function. Whereas software RAID executes the mechanics of striping blocks of data across multiple disks, the logical volume manager presents a coherent image of the data to the upper-layer application in the form of a volume (for example, an M: drive), directories, and subdirectories.
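
The striping arithmetic itself is simple. A minimal RAID-0 sketch (the stripe depth and disk count below are assumptions chosen for illustration, not values from the figure) shows how the logical block addresses presented by the volume manager fan out across the member disks:

    STRIPE_BLOCKS = 128   # assumed stripe depth, in blocks
    DISKS = 4             # assumed number of member disks

    def stripe(logical_block: int) -> tuple[int, int]:
        """Translate a logical block address into (disk index, physical block)."""
        stripe_number, offset = divmod(logical_block, STRIPE_BLOCKS)
        disk = stripe_number % DISKS                                  # round-robin across disks
        physical_block = (stripe_number // DISKS) * STRIPE_BLOCKS + offset
        return disk, physical_block

    # Consecutive stripes land on successive disks, so large writes are spread out.
    for lb in (0, 128, 256, 384, 512):
        print(lb, "->", stripe(lb))
    # 0 -> (0, 0)   128 -> (1, 0)   256 -> (2, 0)   384 -> (3, 0)   512 -> (0, 128)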

Figure 1-2. Direct-attached storage in the SNIA Shared Storage Model


On the right side of Figure 1-2, servers are shown with a SCSI attachment to a disk array containing an integrated RAID controller. In this instance, the host is relieved of the task of striping data blocks via software RAID; the array itself performs this function. Consequently, the array is shown overlapping the block and block aggregation layers.

SANs alter the relationship between servers and storage targets, as shown in Figure 1-3. Instead of being bound by direct parallel SCSI cabling, servers and storage are now joined through a peer-to-peer network. As in direct-attached storage, logical volume management, software RAID, and hardware RAID still play their roles, but the connectivity between servers and storage devices now allows you to attach any server to any storage target. The exclusive ownership of a storage resource by a server, symbolized by the umbilical tether of SCSI cabling, is no longer mandatory. You can assign shared storage resources at will to designated servers, and you can alter the relationships between servers and storage dynamically to accommodate changing application requirements.
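
In practice that flexibility is expressed through fabric zoning and LUN-masking tables rather than cabling. The sketch below reduces the idea to a lookup table; the server and LUN names are invented for the example:

    # Hypothetical assignment of SAN volumes (LUNs) to servers. In a real fabric
    # this state lives in switch zoning and array LUN-masking tables.
    lun_map: dict[str, set[str]] = {
        "server-oltp": {"array1:lun0", "array1:lun1"},
        "server-web":  {"array1:lun2"},
        "server-bkup": {"tape1:drive0"},
    }

    def reassign(lun: str, old_server: str, new_server: str) -> None:
        """Move a LUN between servers without touching any cabling."""
        lun_map[old_server].discard(lun)
        lun_map.setdefault(new_server, set()).add(lun)

    # A changing application requirement: give the Web tier more capacity.
    reassign("array1:lun1", "server-oltp", "server-web")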

Figure 1-3. Changing the relationship between servers and storage via a SAN


Replacing direct-attached connections with a more flexible network interconnection enables new storage solutions. For example, you can consolidate storage, share resources for both tape and disk, and cluster multiple servers for high availability. Depending on the SAN topology you use, you can also scale networked storage to higher populations of servers and storage devices. A large disk array, for example, can support tens or hundreds of servers in a single SAN, amortizing the cost of the array over more hosts and streamlining storage administration.

The SAN infrastructure can be Fibre Channel, Gigabit Ethernet, or, with the appropriate IP storage switches, both. To accurately represent actual customer deployments, this model could also depict multiple SANs servicing various upper-layer applications and providing attachment to common or distinct storage resources, as shown in Figure 1-4. For heterogeneous environments, it might be useful to call out which SANs use Fibre Channel fabrics exclusively and which involve a mix of Fibre Channel and IP components.

Figure 1-4. Multiple SANs within the Shared Storage Model


The Shared Storage Model positions NAS devices partly in the file/record subsystem layer, extending down to the block subsystem. NAS devices serve up files and so naturally include the block aggregation functions required to put data to disk. As shown in Figure 1-5, a NAS device is essentially a file server with its own storage resources. NAS devices typically use NFS (Network File System) or CIFS (Common Internet File System) protocols to transport files over the local area network (LAN) to clients. For NAS, the internal SCSI access transport between the NAS intelligence (head) and its physical disk drives is transparent to the user. The NAS disk drive banks may be integrated drive electronics (IDE) at the low end, SCSI attached, or Fibre Channel. Network Appliance, for example, uses Fibre Channel arbitrated loop disk drives for mass storage. This is in effect a SAN behind the NAS device, efficiently serving up blocks of data that the NAS head assembles into files for NFS or CIFS transport.
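
In outline, the NAS head is a translator: file requests arrive over NFS or CIFS, and block reads go out to its private disks. The deliberately simplified model below (the file table, block size, and in-memory backend are all assumptions of the sketch) captures that division of labor:

    import io

    BLOCK_SIZE = 4096  # assumed block size for the sketch

    class NasHead:
        """Toy NAS head: file protocol in front, private block storage behind."""

        def __init__(self, block_device, file_table: dict[str, list[int]]):
            self.dev = block_device        # backend drive bank (the "SAN behind the NAS")
            self.file_table = file_table   # path -> ordered list of block numbers

        def read_file(self, path: str) -> bytes:
            """Serve an NFS/CIFS-style read by assembling the file's blocks."""
            chunks = []
            for block_no in self.file_table[path]:
                self.dev.seek(block_no * BLOCK_SIZE)
                chunks.append(self.dev.read(BLOCK_SIZE))
            return b"".join(chunks)

    # The client names a file; the block layout never crosses the LAN.
    backend = io.BytesIO(b"\0" * BLOCK_SIZE * 8)     # stand-in for the drive bank
    head = NasHead(backend, {"/export/report.txt": [2, 5]})
    data = head.read_file("/export/report.txt")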

Figure 1-5. The role of NAS in the Shared Storage Model


The Shared Storage Model in Figure 1-6 captures the general relationship among direct-attached, SAN, and NAS solutions. Because many enterprises have a mix of direct-attached, SAN, and NAS storage solutions, this is a useful framework for associating specific upper-layer applications with current and planned storage configurations. Applications currently running on direct-attached storage, for example, can be redrawn with SAN or NAS components, or multiple direct-attached storage arrays can be redrawn with consolidated SAN-based arrays and SAN-attached hosts.

Figure 1-6. Using the Shared Storage Model to show a mix of DAS, SAN, and NAS solutions


The practical value of applying the SNIA Shared Storage Model lies in defining the current user applications and their supporting storage resources and then mapping those to new infrastructures. This in turn provides a comprehensive overview of customer storage requirements and options and can serve as a blueprint for further development of the network.
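
One practical way to run that exercise is simply to record, per application, the current configuration and the proposed one, then review the differences. The inventory below is purely hypothetical and illustrates the bookkeeping rather than any particular environment:

    from dataclasses import dataclass

    @dataclass
    class Placement:
        application: str
        current: str   # "DAS", "SAN", or "NAS"
        planned: str

    # Hypothetical inventory driving the mapping exercise.
    inventory = [
        Placement("order entry (Oracle)", current="DAS", planned="SAN"),
        Placement("engineering file shares", current="DAS", planned="NAS"),
        Placement("Web content", current="NAS", planned="NAS"),
    ]

    # The deltas become the migration plan.
    for p in (p for p in inventory if p.current != p.planned):
        print(f"{p.application}: {p.current} -> {p.planned}")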


