12.7 Campus Storage Networks

The need to share storage resources over campus or metropolitan distances has been one byproduct of the proliferation of SANs on a departmental basis. Separate departments within a company, for example, may make their own server and storage acquisitions from their vendor of choice. Each departmental SAN island is designed to support specific upper-layer applications, and so may be composed of various server platforms, SAN interconnections, and storage devices. It may be desirable, however, to begin linking SANs to streamline tape backup operations, share storage capacity, or share storage data itself. Creating a campus network thus requires transport of block storage traffic over distance as well as accommodation of potentially heterogeneous SAN interconnections.

By standard, native Fibre Channel supports distances of as much as 10 kilometers over single-mode fiber-optic cabling and longwave transceivers. This is sufficient for many campus requirements, but driving longer distances requires additional equipment. The main issue with native Fibre Channel SAN extension is not the distance itself but the requirement for dedicated fiber from one site to another. Many campus and metropolitan networks may already have Gigabit Ethernet links in place, but for Fibre Channel and Gigabit Ethernet to share the same cabling simultaneously requires the additional cost of dense wavelength division multiplexing (DWDM) equipment. Consequently, it has proven more economical to leverage existing Gigabit Ethernet services and use the FCIP or iFCP protocols to transport Fibre Channel traffic over IP.
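The essence of transporting Fibre Channel over IP is encapsulation: each FC frame is wrapped with a small header and carried over a TCP connection between sites. The sketch below illustrates the idea in Python with a deliberately simplified header (a protocol byte, a version byte, a reserved field, and a length in 4-byte words); the actual FCIP encapsulation is defined in RFC 3643 and RFC 3821 and carries additional fields such as timestamps and a header CRC.

```python
import struct

def encapsulate(fc_frame: bytes) -> bytes:
    """Prepend a simplified FCIP-style encapsulation header to a raw
    Fibre Channel frame. Illustrative layout only; the real header is
    defined in RFC 3643/3821 and carries more fields than shown here."""
    protocol = 1                                  # hypothetical protocol ID
    version = 1
    length_words = (8 + len(fc_frame)) // 4       # total length in 4-byte words
    return struct.pack(">BBHI", protocol, version, 0, length_words) + fc_frame

def decapsulate(packet: bytes) -> bytes:
    """Strip the 8-byte header and return the original FC frame."""
    _proto, _ver, _rsvd, length_words = struct.unpack(">BBHI", packet[:8])
    return packet[8 : length_words * 4]

# A stand-in for a word-aligned FC frame: round-trips unchanged.
frame = bytes(36)
assert decapsulate(encapsulate(frame)) == frame
```

Because the encapsulated frames ride inside ordinary TCP/IP, they can traverse any Gigabit Ethernet or routed IP path between sites without dedicated fiber.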

As discussed in Chapter 4, extending Fibre Channel over distance also extends the scope of Fibre Channel switch-to-switch protocols. Connecting Fibre Channel switches builds a single layer 2 fabric, and therefore multiple sites in a campus or metro storage network must act in concert to satisfy fabric requirements. State change notifications and SNS updates can be broadcast throughout the campus network, and disruptions at one SAN site can propagate to all others. Whereas native Fibre Channel extension and FCIP tunneling are vulnerable to such fabric-wide behavior, the problem can be addressed with iFCP gateways, which satisfy the need for Fibre Channel SAN connectivity while preserving the autonomy of each SAN site and isolating potential disruptions.
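The contrast between a merged fabric and gateway-isolated sites can be seen in a toy model (this is not a real Fibre Channel implementation, just an illustration of reachability): a state change notification floods every switch it can reach over switch-to-switch links, so in a single extended fabric it touches every site, while iFCP gateways terminate those inter-site links and the notification stays local.

```python
# Toy model: where does a state change notification propagate?
def reached(links, start):
    """Return the set of sites a notification reaches from `start`,
    flooding over the given switch-to-switch links (simple BFS)."""
    seen, queue = {start}, [start]
    while queue:
        site = queue.pop()
        for a, b in links:
            nxt = b if a == site else a if b == site else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

sites = {"admin", "datacenter", "engineering"}
merged_fabric = [("admin", "datacenter"), ("datacenter", "engineering")]
ifcp_isolated = []   # gateways terminate inter-site links; no fabric merge

assert reached(merged_fabric, "admin") == sites         # disruption reaches all sites
assert reached(ifcp_isolated, "admin") == {"admin"}     # disruption stays local
```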

The other legacy issue for campus SANs based on Fibre Channel is the lack of interoperability in multivendor switch environments. Depending on the vintage of the Fibre Channel switches involved, older microcode versions may not support interoperability at all. With newer products you may have more success implementing standards-based ANSI FC-SW-2 switch interoperability, and some iFCP gateway products let you connect different vendors' fabric switches without having to merge them into a single fabric.

The Fibre Channel-specific extension issues are eliminated if the hosts and storage end systems throughout a campus are iSCSI-based. iSCSI appears on the campus IP network or MAN as simply additional IP data, with no inherent distance limitation, no need for DWDM or tunneling, and no special switch behavior. At each site, iSCSI servers and iSCSI storage arrays are connected to standard Gigabit Ethernet switches and can then take advantage of any available IP connectivity from site to site. In a campus environment, this is typically Gigabit Ethernet from building to building, or it may involve some mix of Gigabit Ethernet and IP over ATM or wide area transport. As with any storage application, you must administer authorized assignment of servers to campus-attached storage resources to avoid connectivity-induced chaos.
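Administering which servers may attach to which storage resources starts with the iSCSI node names themselves, which in the common iqn format (defined in RFC 3720) take the shape `iqn.yyyy-mm.reversed-domain:unique-suffix`. The sketch below validates that format before a name is added to an access list; the regular expression is a simplified reading of the specification, not an exhaustive one.

```python
import re

# Simplified validator for iqn-format iSCSI node names (RFC 3720).
IQN_RE = re.compile(
    r"^iqn\.\d{4}-\d{2}"               # 'iqn.' plus year-month of domain ownership
    r"\.[a-z0-9]+(?:[.-][a-z0-9]+)*"   # reversed domain of the naming authority
    r"(?::.+)?$"                       # optional colon-delimited unique suffix
)

def is_valid_iqn(name: str) -> bool:
    """Return True if `name` looks like a well-formed iqn-format node name."""
    return IQN_RE.match(name) is not None

assert is_valid_iqn("iqn.2003-01.com.example:storage.array1")
assert not is_valid_iqn("example-array1")
```

A real deployment would pair such name checks with CHAP authentication and LUN masking on the storage arrays so that only authorized initiators see each target.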

Figure 12-11 shows a campus storage network with a heterogeneous mix of Fibre Channel and iSCSI-based SANs. In this example, existing Gigabit Ethernet links connect the various buildings. Depending on bandwidth requirements, these links can be shared with messaging traffic or can be dedicated to storage. For Fibre Channel SAN connectivity, FCIP could be used, but this example shows iFCP gateways to ensure the autonomy of each departmental SAN and isolation from potential fabric disruption. The administrative building is shown with aggregated Gigabit Ethernet links to the data center to provide higher bandwidth, although 10Gbps Ethernet could also be used if desired. The development center is shown with an iSCSI SAN, which requires only a local Gigabit Ethernet switch to provide connections to servers, storage, and the campus network.

Figure 12-11. Campus storage network with heterogeneous Fibre Channel and iSCSI SANs


This campus configuration could support multiple concurrent storage applications, such as consolidated tape backup to the data center or sharing of storage capacity between sites. Because the campus connectivity is built with Gigabit Ethernet and IP, network designers can use IPSec and IP quality of service mechanisms to safeguard storage data and prioritize storage traffic over other IP loads. Finally, the integration of iFCP gateways to support Fibre Channel enables any-to-any connectivity between iSCSI and Fibre Channel resources. In this way, the composition of the campus SAN solution can be altered over time as the Fibre Channel resources are maintained and new iSCSI devices are introduced.
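One concrete way to prioritize storage traffic on the shared campus links is DSCP marking: the initiator marks its TCP sockets with a high-priority code point (Expedited Forwarding, DSCP 46) so that campus switches and routers can queue storage frames ahead of other IP loads. The Python sketch below sets the mark via the IP_TOS socket option, which carries the DSCP in its upper six bits; whether the mark is honored, of course, depends on the QoS policy configured on the intervening switches.

```python
import socket

EF_DSCP = 46                  # Expedited Forwarding code point
TOS_VALUE = EF_DSCP << 2      # DSCP occupies the upper six bits of IP_TOS

# Mark a socket as storage traffic; a real iSCSI initiator would then
# connect it to the target portal on the standard iSCSI port, 3260.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

assert sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == TOS_VALUE
sock.close()
```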



Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs (2nd Edition)
ISBN: 0321136500
Year: 2003
Pages: 171
Authors: Tom Clark
