5.6 Switching Hubs
Switching hubs are a hybrid SAN interconnection, occupying a middle ground between shared loop hubs and fabric switches. Like the private loop stealth or phantom modes offered by some fabric switches, switching hubs combine the simplicity of arbitrated loop with the higher bandwidth of switched architectures. By design, switching hubs are not fabric-capable and so do not support fabric login, the Simple Name Server (SNS), or state change notification.
Switching hubs typically provide 6 to 12 ports, each of which supports 1 Gbps or 2 Gbps throughput. The attached loop nodes are configured into one virtual loop.
Figure 5-12. Switching hubs allow multiple concurrent loop transactions
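The bandwidth advantage implied by Figure 5-12 can be sketched with some back-of-the-envelope arithmetic. The function and port counts below are illustrative assumptions, not vendor figures: a shared loop hub serializes all traffic onto one loop, while a switching hub can carry one loop transaction per disjoint port pair concurrently.

```python
def aggregate_mbps(ports: int, link_mbps: int, switching: bool) -> int:
    """Peak aggregate bandwidth (MB/s) across all attached nodes."""
    if switching:
        # One loop transaction per disjoint port pair can proceed at once.
        return (ports // 2) * link_mbps
    # A shared loop carries only one transaction at a time.
    return link_mbps

# An 8-port hub at 1 Gbps (~100 MB/s of payload per link):
shared = aggregate_mbps(8, 100, switching=False)     # 100 MB/s total
switched = aggregate_mbps(8, 100, switching=True)    # 400 MB/s total
```

Under these assumptions, the switching hub's aggregate bandwidth scales with the number of port pairs, while the shared hub's remains that of a single loop regardless of port count.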
At the high end of the hub product offering, switching hubs support SNMP, SCSI Enclosure Services, and other management features. Depending on the vendor's design, some products offer advanced diagnostic capabilities, including the ability to mirror, via a management interface, traffic from any port to a data-capture port without disrupting the topology.
5.7 Fabric Switches
The deployment of fabric switches was initially hindered by cost (more than $2,000 per port) and by the requirement for fabric login support in host bus adapters and disk arrays. Cost reductions driven by ASIC technology, and widespread support of fabric services across Fibre Channel products, have since removed both obstacles.
Figure 5-13. Fabric switch functional diagram
Fabric switches may support 8 to 16 ports for departmental applications, or 32 to 128 ports (or more) for larger enterprise installations.
Fibre Channel standards define F_Ports for attaching nodes (N_Ports), E_Ports for fabric expansion, G_Ports for supporting either N_Ports or other fabrics, and NL_Ports for loop attachment. The standards do not define how these port types must be implemented internally, however, leaving the switch architecture to each vendor.
Performance differences between fabric products are minimal when ASIC technology is used. Non-ASIC fabrics typically incur more than 2 microseconds of latency per switched frame; ASIC-based fabrics normally incur less than 1 microsecond. As latency accrues at every switch a frame traverses, these differences compound as fabrics grow.
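The cumulative effect of per-switch latency can be sketched as follows. The figures come from the text (roughly 2 µs for non-ASIC switches, 1 µs for ASIC-based ones); the function name and the 7-hop path are illustrative.

```python
def path_latency_us(hops: int, per_switch_us: float) -> float:
    """Total switching latency for a frame crossing `hops` switches."""
    return hops * per_switch_us

# A frame crossing 7 switches (a common hop-count guideline):
non_asic = path_latency_us(7, 2.0)   # 14.0 microseconds
asic = path_latency_us(7, 1.0)       # 7.0 microseconds
```

Even at the non-ASIC rate, switching latency is small compared with typical disk access times, which is why the differences matter mainly in large, multi-hop fabrics.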
Enhanced functionality not covered by specific Fibre Channel standards also includes support for private loop devices, proprietary fabric management, and load sharing trunking between switches.
Almost all fabric switches support some variation of zoning. Port zoning allows a port to be assigned to an exclusive group, so that devices attached to ports in one zone cannot communicate with devices in another.
To achieve interoperability in multivendor switch environments, the switches must merge their zoning configurations. The mechanics of how each vendor implements port- or WWN-based zoning may differ, but once established, zones must be represented in a common format that all switches in the fabric can exchange.
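The zoning behavior described above can be modeled with a minimal sketch, assuming a simple rule in which two devices may communicate only if they share membership in at least one zone. The zone names and WWNs below are invented for illustration.

```python
# Hypothetical zone database: zone name -> set of member WWNs.
zones = {
    "payroll_zone": {"10:00:00:00:c9:22:fc:01", "50:06:04:82:bf:d0:9a:11"},
    "backup_zone":  {"10:00:00:00:c9:22:fc:01", "50:06:04:82:bf:d0:9a:22"},
}

def can_communicate(wwn_a: str, wwn_b: str) -> bool:
    """True if both WWNs are members of at least one common zone."""
    return any(wwn_a in members and wwn_b in members
               for members in zones.values())

# The host belongs to both zones, so it reaches both arrays,
# but the two arrays cannot see each other:
host, payroll, backup = ("10:00:00:00:c9:22:fc:01",
                         "50:06:04:82:bf:d0:9a:11",
                         "50:06:04:82:bf:d0:9a:22")
```

In this model a WWN may appear in several zones, which mirrors the common practice of placing one host in separate zones for its production storage and its backup target.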
Management of fabrics is normally provided out-of-band over Ethernet using SNMP or Telnet. Fabric switches may also be managed in-band, across the Fibre Channel links themselves.
5.7.1 Departmental Fabric Switches
Fabric switches that provide 8 to 32 ports are referred to as departmental switches to differentiate them from more robust, higher-port-count directors.
Enterprise data centers may accumulate a significant quantity of departmental fabric switches simply because the supplying vendor packages servers, switches, and storage as a bundle. When more applications are required, more bundles appear. At some point, the volume of separate departmental fabrics begins to resemble the previous accumulation of direct-attached server/storage configurations and imposes a comparable administrative burden.
Departmental fabric switches may have redundant power supplies, but they typically lack the component-level redundancy of director-class products.
Consolidating an enterprise storage network by connecting multiple departmental switches in a single fabric is a challenge for both customers and vendors. Although a vendor data sheet for a departmental switch may declare support for as many as 239 switches in a fabric, practical guidelines typically call for no more than 32 switches, with no more than 7 switch-to-switch hops in any path. To create a robust meshed network with 8- and 16-port departmental switches simply consumes too many ports for interswitch links and imposes lengthy convergence times to stabilize the fabric. For high-port-count enterprise SAN requirements, it is more efficient to use director-class switches at the core of the network.
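The practical guidelines quoted above can be checked mechanically against a proposed topology. The sketch below assumes a fabric described as an adjacency list of interswitch links; the switch names and the limits (32 switches, 7 hops) come from the guideline, and the function names are illustrative.

```python
from collections import deque
from itertools import combinations

def max_hops(fabric: dict) -> int:
    """Longest shortest path, in ISL hops, between any two switches."""
    def bfs(start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in fabric[node]:
                if neighbor not in dist:
                    dist[neighbor] = dist[node] + 1
                    queue.append(neighbor)
        return dist
    return max(bfs(a).get(b, 10**9) for a, b in combinations(fabric, 2))

def within_guidelines(fabric: dict) -> bool:
    """Apply the <= 32 switches and <= 7 hops rules of thumb."""
    return len(fabric) <= 32 and max_hops(fabric) <= 7

# A four-switch cascade: 3 hops end to end, within the guidelines.
cascade = {"sw1": ["sw2"], "sw2": ["sw1", "sw3"],
           "sw3": ["sw2", "sw4"], "sw4": ["sw3"]}
```

A cascade is the worst case for hop count; the meshed designs discussed above trade extra ISL ports for shorter paths, which is exactly the tension the paragraph describes.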
The best use of departmental fabric switches is in environments that need to support only a few storage applications for a limited device population. If growth is anticipated, the SAN design should include eventual attachment to a core director switch as opposed to cascading of additional departmental switches.
5.7.2 Fibre Channel Directors
At the high end of Fibre Channel fabric offerings, Fibre Channel directors provide high port counts in a single, high-availability enclosure. Directors may provide 64 to 128 or more ports (256 ports for some announced products) and so present a streamlined solution for the storage requirements of large data centers. High port count alone, however, does not automatically qualify a product for director status. With roots in data center mainframe channel extension, Fibre Channel director architecture implies high availability for every component, including redundant processors, routing engines, power supplies, and fans.
As with departmental switches, the number of directors that can be connected in a single fabric may be limited. This is not because of the availability of ports for interswitch links but rather because of the complexity of exchanging route, zoning, and SNS data and the switch-to-switch latency that may accumulate as the hop count increases.
As shown in Figure 5-14, you can accommodate hop count limitations by combining directors with departmental fabric switches. Trunked interswitch links between directors can reduce potential blocking, and the port fan-out supplied by departmental switches increases the total population that can be reasonably supported. This type of solution does not provide director-class availability to every port, so the SAN architect must decide which devices should be directly attached to the director and which can reside on departmental switches.
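A rough port budget illustrates the fan-out described above. All counts in this sketch are illustrative assumptions, not vendor figures: each edge switch reserves some ports for ISLs to the core, and the core reserves one port per edge ISL.

```python
def usable_ports(edge_switches: int, edge_ports: int,
                 isls_per_edge: int, core_ports: int) -> int:
    """Device-attachable ports after reserving ISLs on core and edge."""
    core_free = core_ports - edge_switches * isls_per_edge
    edge_free = edge_switches * (edge_ports - isls_per_edge)
    return core_free + edge_free

# One 64-port director fanning out to eight 16-port departmental
# switches, each attached to the core by 2 ISLs:
total = usable_ports(edge_switches=8, edge_ports=16,
                     isls_per_edge=2, core_ports=64)  # 160 usable ports
```

The same 64-port director used alone would offer only 64 device ports, so the fan-out more than doubles the attachable population; the trade-off, as the text notes, is that edge-attached devices do not enjoy director-class availability or dedicated core bandwidth.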
Figure 5-14. Combining directors and departmental switches in a fabric