What About Management?


In addition to the cost savings derived from improved hardware utilization, SANs were also supposed to deliver significant labor-related savings through improved management. The ENSA SAN advocates predicted an intelligent and largely self-managing infrastructure whose growth could be handled easily by a fixed number of staff. This value proposition remains unrealized in current SANs.

By all accounts, using current tools and techniques, the most storage a single administrator can manage is approximately 300 to 500 GB. This number increases substantially when all storage arrays are homogeneous (as white papers that most array vendors commission from analysts often conclude). However, the reason has nothing to do with storage topology. Rather, increased GB per administrator is a function of the efficacy of storage management software. If all deployed storage arrays come from a single vendor, that vendor's own configuration and management "point" software can be used to manage the products as a whole. The gain in GB per administrator has little or nothing to do with SANs and everything to do with homogeneous infrastructure.

The management of heterogeneous FC SANs, by contrast, continues to be a "kludge." Part of the reason is the lack of a "service" within the Fibre Channel protocol for performing in-band, or in-the-wire, management.

When it was first invented at IBM, the Fibre Channel protocol was conceived as little more than a serial replacement for the parallel SCSI bus. Its designers are fond of saying that they did not set out to create a network protocol, just a serial interconnect. Thus, "IP stack-like" functions, such as in-band management and in-band security, were deliberately excluded from the Fibre Channel protocol. The Fibre Channel Industry Association, in a draft white paper detailing the roadmap for the protocol in 2000, recognized this deficit and stated that it was working to add services that would make the protocol more "network-like" in the future. [3]

Whether or not Fibre Channel is, in fact, a network is a subject for debate among intelligent people in the industry. Howard Goldstein, a good guy and well-known storage consultant and trainer, offers a perspective that is somewhat contrary to this book's. Out of deference to Howard, it is printed here in its entirety.

"There are many myths in technology and many interpretations as well. One of these myths is the concept that Fibre Channel (FC) is not a network architecture. Some of this comes from the fact that many of the FC services are provided in a FC switch that in a historical OSI-like implementation operates at OSI Data Link Layer 2. In today's products, many functions that would be described at OSI Network Layer 3, Transport Layer 4, and other layers are often consolidated in single device."

"Take for example an integrated Internet Cable/DSL Firewall, DHCP-capable, Gateway, Router, and Switch one can purchase from the local computer store. This incorporates Internet Protocol Suite function at all 7 layers of the OSI Architecture but does this in the 4-Layer Internet architecture model."

"This view also comes from the incorrect assumption that OSI is the perfect network model and that all network architectures must layer and assign functionality based on this model. I can't tell you how many times I have seen this lead to incorrect comparisons between architectures and products in the storage-networking world. Comparing Gigabit Ethernet to Fibre Channel is like comparing the core of an apple to an entire orange."

"OSI approaches network functionality in a classic seven-layer model (see Figure 3-6). Each layer implemented by an OSI-compliant product follows the architected functionality of that layer. All of these OSI functions can also be found in the 5-Level architecture of Fibre Channel."

  • OSI Physical Layer 1 media interface and bit transmission functions are found in FC Level 0 and Level 1.

  • OSI Data Link Layer 2 system port to system port frame transmission functions are found in FC Level 2. This Level 2 function is implemented within an FC port found on Host Bus Adapters (HBAs), FC Storage Adapters (FAs), and switches.

  • OSI Network Layer 3 routing functions are found, from an End System point of view, just above the FC Level 2 port. From an interconnect topology Intermediate System perspective, such as a fabric of switches, this routing takes place implicitly within the Fabric Controller services function of the fabric switch Generic Services architecture component. It uses routing tables created by an OSI Network Layer 3-like protocol, FC Fabric Shortest Path First (FSPF), a derivative of an Internet Layer 2 function, the Open Shortest Path First (OSPF) protocol. If one had to put a number on it, it could be considered FC Level '2.5'.

Figure 3-6. Fibre Channel Protocol and the ISO Open Systems Interconnection Network Model. (Source: Howard Goldstein, President, Howard Goldstein Associates, Inc., Superior, CO, www.hgai.com. Reprinted by permission.)


FC Level 3, the Common Services Level, is not analogous to OSI Layer 3 but is a placeholder level for possible enhanced architected functionality such as automatic failover, parallel I/O, hunt groups, and so on. At the time of this writing, no FC Level 3 standards have been defined for these functions, which is unfortunate; only proprietary implementations exist.

  • OSI Transport Layer 4 End System to End System virtual interface functions are implemented within FC Level 2 as well.

  • OSI Session Layer 5 application workflow management functions can also be found in FC Level 2 through the use of FC Exchange and Sequence management.

  • OSI Presentation Layer 6 syntax, data compression, and data security functions are implemented within some of the newer FC Level 2 and '2.5' services, such as the Fabric Generic Services Security Key Distribution Service found in an FC switch, or the proposed FC Security Protocol. Additionally, the fabric services include zone management, configuration management, and others.

  • OSI Application Layer 7 application support services, such as X.500 Directory services, have their parallels in FC Level '2.5' as well, in the FC Extended Link Services and the Name Service sub-component of the Directory service, again found in an FC fabric of switches.

"Desert cakes come in all shapes , sizes, layers and tastes. Each can be served on different occasions and for different purposes. Whether it is a 2-layer carrot cake, an 8-layer devil 's food cake or a multi-tiered wedding cake, they all serve their designed purpose."

"Network architectures are like these deserts. Not all functions, even OSI functions need to be present to have robust network architecture. Fibre Channel provides Physical Transport Networking function by design. It implements all of the 7-Layer OSI functions in the mission it performs . I like to say that FC fundamentally is nothing more than the ability to provide the appearance of many private virtual SCSI Bus cables for every SCSI LUN accessed. It is a 'plumbing' platform to build on." [4]

Goldstein's perspective offers one of the most exhaustive efforts to rationalize the FC-as-network view of so many vendors in the FC community. According to him, even an effort to develop a channel interconnect can inadvertently lead to the creation of something else. He is fond of saying that, just as the fellow who developed "sticky notes" did not originally set out to develop a detachable/re-attachable glue, IBM invented a network architecture without meaning to. That alone does not invalidate the result as a network.

However, Goldstein does agree that certain services that are taken for granted in other messaging networks were not originally provided in FC. Even where enhancements have been made to the protocol through subsequent standards-body development efforts, they have not been seized upon by the industry or implemented in any standard way.

In the absence of a native in-band management service, the FC SANs deployed today are actually a hybrid of two networks: a Fibre Channel protocol "fabric" (basically, a switched, serial storage interconnect linking servers and storage devices and providing data transport services) and an IP network that interconnects all SAN elements (server host bus adapters, switches, and storage controllers) to carry SAN management traffic (see Figure 3-7). In disagreement with Goldstein's conclusion, this author must conclude that it is oxymoronic for FC ever to have been used as the foundation for a storage network.

Figure 3-7. The FC SAN topology: a "kludge."

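The dual-network character of this topology can be made concrete with a small data model. In the sketch below, each SAN element carries two addresses: a Fibre Channel World Wide Name for the data path and an IP address for the out-of-band management path. The class, device names, and addresses are hypothetical, chosen only to illustrate the hybrid described above.

from dataclasses import dataclass

@dataclass
class SanElement:
    """A SAN device reachable over two separate networks (hypothetical model)."""
    name: str
    wwn: str       # Fibre Channel World Wide Name (data path)
    mgmt_ip: str   # address on the parallel IP LAN (management path)

# Hypothetical elements of a small FC SAN, as in Figure 3-7.
elements = [
    SanElement("server-hba-1", "10:00:00:00:c9:2a:11:01", "192.168.10.11"),
    SanElement("fc-switch-1",  "10:00:00:00:c9:2a:22:01", "192.168.10.21"),
    SanElement("array-ctrl-1", "10:00:00:00:c9:2a:33:01", "192.168.10.31"),
]

# Block I/O travels over the FC fabric (addressed by WWN); management
# traffic -- discovery, zoning, health polling -- must take the IP side.
for e in elements:
    print(f"{e.name}: data via FC {e.wwn}, management via IP {e.mgmt_ip}")

A management console, in other words, never reaches these devices through the fabric that carries their data; it needs the second network, which is precisely the kludge at issue.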

In fact, the lack of standards-based, in-band services in FC SAN implementations also accounts for the fact that heterogeneous FC SANs offer little improvement over heterogeneous server-attached configurations in terms of the number of GB that an individual storage administrator can manage. Until recently, those deploying heterogeneous SANs had enormous difficulty even with the low-level task of "discovering" heterogeneous devices in the same fabric. Vendors seemed to go out of their way to ensure that a competitor's hardware could not be discovered or used by the switches of their preferred SAN switch-maker. SAN switch-makers seemed unwilling to recognize or interoperate with competitors' switch products or with host bus adapters that were not part of their vendor cadre.

Today, there has been some improvement in this space, with HBA vendors agreeing to a quasi-standard device driver and some large array vendors cooperating to create "Bluefin," an object-oriented messaging interface specification that links distributed management applications with device management support and enables the discovery of different arrays in the same fabric. It remains to be seen how well these cooperative arrangements will stand the test of time and the proprietary forces driving the industry to greater and greater Balkanization. If the tenure of recent application programming interface (API) swapping arrangements between vendors is any indication of the life expectancy of such surrogate services, the management of FC SANs will likely remain a headache for storage administrators for the foreseeable future.
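Bluefin (which later fed into the SNIA's SMI-S standard) specifies an object model carried over CIM/WBEM messaging, so a management application discovers arrays by asking every agent for instances of a common object class instead of calling each vendor's proprietary API. The sketch below only gestures at that pattern: the CimClient class, the class name queried, and the canned responses are all invented for illustration and are not the actual Bluefin interface.

# A loose sketch of object-model discovery in the Bluefin style: the
# management application asks each agent for instances of a standard
# class and gets back vendor-neutral objects. CimClient and its canned
# responses are hypothetical stand-ins, not a real WBEM library.

class CimClient:
    """Hypothetical stand-in for a CIM/WBEM client bound to one agent."""
    def __init__(self, agent_url, inventory):
        self.agent_url = agent_url
        self._inventory = inventory

    def enumerate_instances(self, class_name):
        return self._inventory.get(class_name, [])

# Two agents fronting arrays from different vendors; both answer the
# same query, which is the whole point of a common object model.
agents = [
    CimClient("https://array-a.example/cimom",
              {"StorageSystem": [{"Name": "VendorA-Array-01", "TotalGB": 400}]}),
    CimClient("https://array-b.example/cimom",
              {"StorageSystem": [{"Name": "VendorB-Array-07", "TotalGB": 250}]}),
]

for agent in agents:
    for system in agent.enumerate_instances("StorageSystem"):
        print(f"discovered {system['Name']} ({system['TotalGB']} GB) at {agent.agent_url}")

The design point worth noticing is that the discovery loop never changes when a new vendor's array joins the fabric; only the set of agents grows, which is what makes such an interface a plausible surrogate for the in-band service FC lacks.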

Even with discovery issues resolved to some degree, storage management (defined here as the cost-efficient provisioning of data across storage resources to meet the data-storage, data-access, and data-protection requirements determined by business applications and end users) entails significantly greater intelligence than contemporary tools provide. FC SANs, far from making storage management easier and reducing the labor costs associated with this activity, have actually added complexity and cost in most environments. The exceptions are homogeneous SANs created from the products of a select vendor cadre. However, in the final analysis, it is not the FC SAN that makes this infrastructure manageable by fewer administrators, but the homogeneity of the configuration.


