Block storage aggregation in a storage network ("SAN appliance")

Block storage aggregation in a SAN appliance is characterized by:

  • having multiple hosts and devices attached to a shared storage interconnect,

  • employing a block interface protocol over that interconnect and by

  • providing block-aggregation functions in a dedicated "appliance" that sits in the data path of every operation.

This is a convenient way to centralize control over data placement in a shared storage environment: only the SAN appliance has to be updated to change where data is placed on the back-end storage devices. It comes at the cost of an extra step in the data path, and runs the risk of the SAN appliance becoming a performance and availability bottleneck.
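The centralized remapping described above can be sketched in a few lines of Python. This is an illustrative model only (the class and method names are invented, not from the text): the appliance holds the sole copy of the virtual-to-physical extent map, so relocating data is a local table update that hosts never see.

```python
class SanAppliance:
    """Illustrative sketch of in-band block aggregation: every host I/O
    passes through the appliance, which maps a virtual LBA to a
    (back-end device, physical LBA) pair."""

    def __init__(self, extent_size):
        self.extent_size = extent_size
        # extent index -> (back-end device id, starting LBA on that device)
        self.extent_map = {}

    def map_extent(self, extent_index, device, device_lba):
        self.extent_map[extent_index] = (device, device_lba)

    def translate(self, virtual_lba):
        """Resolve a virtual LBA to its current back-end location."""
        extent, offset = divmod(virtual_lba, self.extent_size)
        device, base = self.extent_map[extent]
        return device, base + offset

    def migrate_extent(self, extent_index, new_device, new_lba):
        # Centralized control over placement: only the appliance's table
        # changes; hosts keep addressing the same virtual LBAs.
        self.extent_map[extent_index] = (new_device, new_lba)
```

Note that `translate` runs on every I/O, which is exactly why the appliance can become the performance bottleneck the text warns about.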


Figure E-19.

graphics/efig19.gif


The SAN appliance may provide some of the redundancy functions that are normally associated with disk array controllers, or it may simply provide the space management functions of block aggregation.

Storage network-attached block storage with metadata server ("asymmetric block service")

Storage network-attached block storage with metadata server is characterized by:

  • having multiple hosts and devices attached to a shared storage interconnect,

  • employing a block interface protocol over that interconnect,

  • having the hosts communicate directly to the storage devices, while

  • employing a metadata server that supplies the hosts with layout information ("block metadata") describing the current placement of block data on those storage devices.

By comparison to the "SAN appliance" architecture, this does not impose additional physical resources in the data-access path, but data placement changes require coherent updates of any cached copies of the metadata (layout information) held at the hosts.
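The coherent-update requirement can be illustrated with a minimal sketch (hypothetical names, assuming a push-style invalidation protocol): hosts cache layout entries and go straight to the devices on the data path; a placement change at the metadata server must invalidate every cached copy before the new layout takes effect.

```python
class MetadataServer:
    """Illustrative asymmetric block service: holds the authoritative
    layout map and invalidates host caches on placement changes."""

    def __init__(self):
        self.layout = {}   # extent id -> (device, lba)
        self.hosts = []    # hosts that may hold cached copies

    def register(self, host):
        self.hosts.append(host)

    def lookup(self, extent):
        return self.layout[extent]

    def relocate(self, extent, device, lba):
        # Data placement change: coherently invalidate all cached
        # copies of the block metadata held at the hosts.
        self.layout[extent] = (device, lba)
        for host in self.hosts:
            host.invalidate(extent)


class Host:
    def __init__(self, mds):
        self.mds = mds
        self.cache = {}    # locally cached layout entries
        mds.register(self)

    def invalidate(self, extent):
        self.cache.pop(extent, None)

    def read_location(self, extent):
        # Only a cache miss touches the metadata server; the data path
        # itself goes directly to the storage device.
        if extent not in self.cache:
            self.cache[extent] = self.mds.lookup(extent)
        return self.cache[extent]
```

The sketch shows why this design keeps the metadata server off the data path: it is consulted only on cache misses and relocations.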


Figure E-20.

graphics/efig20.gif


Multi-site block storage

Multi-site block storage is characterized by the use of peer-to-peer protocols between like components of two or more systems at different sites to maintain data replicas at each site.

This addresses the growing need for geographic separation and appropriate decoupling between two or more data sites. In turn, this can be used to enhance data availability in the presence of site disasters while, with careful caching and update protocols, retaining the performance advantages of access to a local copy of the data. (This is particularly important given the larger propagation delays and lower bandwidths of long-haul networks.)

The peer-to-peer protocols can be implemented at several different levels, such as between pairs of logical volume managers, between SAN appliances (e.g., remote mirroring boxes), and between the storage devices themselves, such as disk arrays. The type of network used between the sites is frequently different from the network used within each site, so gateways or protocol conversion boxes may need to be employed to achieve the desired connectivity.
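The basic trade-off in such peer-to-peer replication protocols is between synchronous mirroring (the remote copy is updated before a write completes, so a site disaster loses no data, but every write pays the long-haul round trip) and asynchronous shipment (writes complete locally and are forwarded later). A minimal sketch, with invented names:

```python
from collections import deque


class Site:
    """Illustrative replica site; peers exchange updates to keep the
    copies at each site identical."""

    def __init__(self, name):
        self.name = name
        self.blocks = {}        # local copy of the data
        self.peers = []         # peer sites holding replicas
        self.pending = deque()  # writes not yet shipped (async mode)

    def add_peer(self, other):
        self.peers.append(other)

    def write(self, lba, data, synchronous=True):
        self.blocks[lba] = data
        if synchronous:
            # Remote copies are updated before the write completes:
            # no data loss on a site disaster, but the latency
            # includes the long-haul round trip.
            for peer in self.peers:
                peer.apply(lba, data)
        else:
            # Asynchronous: complete locally, ship the update later.
            self.pending.append((lba, data))

    def apply(self, lba, data):
        """Receive an update from a peer site."""
        self.blocks[lba] = data

    def drain(self):
        """Ship queued asynchronous updates to all peers."""
        while self.pending:
            lba, data = self.pending.popleft()
            for peer in self.peers:
                peer.apply(lba, data)
```

Real implementations at the volume-manager, appliance, or disk-array level add write ordering, resynchronization after link failure, and conflict handling, none of which is sketched here.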


Figure E-21.

graphics/efig21.gif


File server

File servers ("NAS" systems) are characterized by:

  • bundling storage devices and a file/record subsystem controller into one package,

  • employing a client:server file/record protocol to access the data,

  • using a network that is typically not specialized for, or dedicated to, storage traffic, such as a LAN.

Of the approaches to shared network storage, this is probably the most common, the most mature, the easiest to deploy, and currently the best able to support heterogeneous hosts. The price is that the file server can sometimes become a performance, capacity, or availability bottleneck.

Some database servers exist with a similar architecture.

File server controller ("NAS head")

File server controllers are characterized by:

  • decoupling storage devices from the file/record subsystem controller that provides access to them,


    Figure E-22.

    graphics/efig22.gif



    Figure E-23.

    graphics/efig23.gif


  • employing a client:server file/record protocol to access the file/record subsystem from the client hosts, over a network that is typically not specialized for, or dedicated to, storage traffic (such as a LAN),

  • having the file/record subsystem controller, as well as multiple hosts and devices, attached to a shared storage interconnect that employs a block interface protocol.

This variation of the classic file server model has several potential benefits:

  • the block storage devices can be used directly (as shared block storage resources) by both the file/record service and the hosts;

  • the hosts can be bound to both block and file services from common resources at the same time;

  • file/record subsystem performance and block-storage performance and capacity can be scaled independently and more easily.

The cost is largely that of increased complexity of managing the larger numbers of components that are exposed compared to the integrated file server approach.

NAS/file server metadata manager ("asymmetric file service")

NAS/file server metadata managers are characterized by the following (shown applying to the left-hand host in the figure here):

  • multiple hosts and devices attached to a shared storage interconnect that employs a block interface protocol;

  • a separate file system metadata server that maintains layout information for files ("file metadata"), and provides this to the hosts on request;

  • hosts that communicate with the file system metadata server (using an extended client:server file/record protocol) to obtain layout information for the files they wish to access, and then access the data directly across the shared storage interconnect using a block interface protocol.

This variation offers the performance advantage of direct access to the block storage devices across the storage network, together with the data-sharing advantages of file servers. The performance advantages are largely those of high-speed data traffic, and apply only as long as the rate of metadata updates and the amount of cache-coherency traffic are not "too high."

In addition, the file/record metadata server can do double duty, acting as a traditional file/record subsystem controller ("NAS head") for hosts that do not support the extended client:server metadata protocols.
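The control-path/data-path split of the asymmetric file service can be sketched as follows (illustrative names and an assumed extent-list layout format, not taken from the text): one metadata round trip yields the file's block extents, after which all data traffic goes directly to the devices.

```python
class FileMetadataServer:
    """Illustrative file metadata server: for each file it holds the
    layout as a list of (device, start_lba, length) extents."""

    def __init__(self):
        self.file_layout = {}   # filename -> [(device, start_lba, length)]

    def layout_of(self, filename):
        return self.file_layout[filename]


class BlockDevice:
    """Trivial stand-in for a block storage device on the SAN."""

    def __init__(self):
        self.blocks = {}        # lba -> block contents

    def read(self, lba):
        return self.blocks.get(lba)


def read_file(devices, mds, filename):
    # Control path: one metadata request for the file's layout.
    data = []
    for device, start, length in mds.layout_of(filename):
        # Data path: direct block reads across the storage interconnect,
        # bypassing the metadata server entirely.
        for lba in range(start, start + length):
            data.append(devices[device].read(lba))
    return data
```

The sketch makes the scaling argument concrete: adding devices adds data-path bandwidth, while the metadata server's load grows only with the rate of layout lookups and updates.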


Figure E-24.

graphics/efig24.gif


Object-based Storage Device (OSD), CMU NASD

Object-based Storage Devices (OSD) are characterized by:

  • storage devices that take on data-layout responsibilities, exporting a large number of "byte vectors" (objects) rather than a small number of logical units; each such object is typically used to hold the contents of a single file;

  • a separate metadata server that provides object access and authentication information to hosts and (optionally) to the storage devices, using an extended client:server file/object interface;

  • direct access from hosts to the storage devices across a shared storage network (usually of a kind that is not specialized for storage traffic, such as a LAN);

  • using a file/object protocol for client operations on the object-storage devices.

The idea here is to offload data layout and security (access) enforcement responsibilities to the storage devices (whose number can easily be scaled), while retaining the semantic advantages of a shared file system, and the performance advantages of direct access from the hosts to the storage devices. As with the NAS/file server metadata managers, the OSD metadata manager can do double-duty as a file server controller ("NAS head") if necessary or desired (e.g., for backwards compatibility).
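The security-offload idea can be sketched with a simplified capability scheme (this is an illustrative model, not the actual OSD protocol: a single shared key, one rights string per object, and HMAC-signed capabilities are all assumptions for brevity). The metadata server signs a capability; the host presents it with each request; the device itself enforces access.

```python
import hashlib
import hmac

# Key shared by the metadata server and the device (assumption for
# this sketch; real OSD security protocols are more elaborate).
SECRET = b"shared-device-key"


def make_capability(object_id, rights):
    """Metadata-server side: sign a capability the host will present."""
    msg = f"{object_id}:{rights}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


class ObjectStorageDevice:
    """Exports many byte-vector objects (typically one per file),
    rather than a small number of logical units, and enforces access
    itself by verifying capabilities."""

    def __init__(self):
        self.objects = {}   # object id -> bytes

    def write(self, object_id, offset, data, capability):
        self._check(object_id, "rw", capability)
        buf = bytearray(self.objects.get(object_id, b""))
        buf[offset:offset + len(data)] = data
        self.objects[object_id] = bytes(buf)

    def read(self, object_id, offset, length, capability):
        self._check(object_id, "rw", capability)
        return self.objects[object_id][offset:offset + length]

    def _check(self, object_id, rights, capability):
        # The device recomputes the expected capability; no call back
        # to the metadata server is needed on the data path.
        expected = make_capability(object_id, rights)
        if not hmac.compare_digest(expected, capability):
            raise PermissionError("invalid capability")
```

Because each device verifies capabilities locally, adding devices scales both data bandwidth and access enforcement, which is the offloading argument made above.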


Figure E-25.

graphics/efig25.gif




Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs (2nd Edition)
ISBN: 0321136500
Year: 2003
Pages: 171
Author: Tom Clark