5.7 Fabric Switches

The deployment of fabric switches was initially hindered by cost (more than $2,000 per port) and the requirement for fabric login services for host bus adapters and disk arrays. Cost reductions due to ASIC technology, and widespread support of fabric services by other Fibre Channel vendors, have moved fabric switches into the mainstream of SAN solutions. SAN solutions, in turn, have been moved into the mainstream of enterprise networking.

As illustrated in Figure 5-13, current-generation fabric switches support 1Gbps or 2Gbps per port, provide a high-speed routing engine to switch frames from source to destination, and offer basic services for fabric login and name server functions. Products are differentiated based on vendor-specific issues such as port density, performance, and value-added functionality for ease of installation and management.

Figure 5-13. Fabric switch functional diagram


Fabric switches may support 8 to 16 ports for departmental applications, or 32 to 128 ports (or more) for larger enterprises. Cascading fabrics via expansion ports (E_Ports) allows small and medium configurations to expand as SAN requirements grow, with the caveat that each cascade consumes additional ports and that the expansion links themselves may become potential bottlenecks for fabric-to-fabric communication. Some switch products support multiple links between two switches with load balancing of traffic. This resolves the congestion issue but may introduce another problem if the source and destination N_Ports on either side receive out-of-order frames. In addition to bandwidth, switch-to-switch latency may limit the number of switches in a path. Hop count limitations imposed by vendors range from three to seven switches in any consecutive path.
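The hop-count check described above can be sketched as a breadth-first search over the fabric's interswitch links. This is a minimal illustration, not a vendor tool; the switch names, topology, and the hop limit of 3 are all hypothetical.

```python
from collections import deque

# Hypothetical cascaded fabric: each switch maps to the switches it
# reaches over interswitch links (E_Port connections).
ISLS = {
    "sw1": ["sw2"],
    "sw2": ["sw1", "sw3"],
    "sw3": ["sw2", "sw4"],
    "sw4": ["sw3"],
}

def hop_count(fabric, src, dst):
    """Return the number of switch-to-switch hops on the shortest path,
    or None if the two switches are not connected."""
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        switch, hops = queue.popleft()
        if switch == dst:
            return hops
        for neighbor in fabric[switch]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None

MAX_HOPS = 3  # a conservative vendor limit, for illustration
for a in ISLS:
    for b in ISLS:
        hops = hop_count(ISLS, a, b)
        if hops is not None and hops > MAX_HOPS:
            print(f"path {a} -> {b} exceeds {MAX_HOPS} hops")
```

A designer can run a check like this against a proposed topology before cabling, since adding a cascade late in deployment may silently push some paths past the vendor's supported hop count.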

Fibre Channel standards define F_Ports for attaching nodes (N_Ports), E_Ports for fabric expansion, G_Ports for supporting either N_Ports or other fabrics, and NL_Ports for loop attachment. The standards do not define how, specifically, these port types are to be implemented in hardware, so vendor designs may differ. Some products offer a modular approach, with separate port cards for each port type. Others provide ports that can be configured via management software or auto-configuration to support any port type. The latter offers more flexibility than the modular approach for changing SAN topologies, permitting redistribution or addition of devices with minimal disruption.

Performance differences between fabric products are minimal when ASIC technology is used. Non-ASIC fabrics typically incur more than 2 microseconds of latency per switched frame, whereas ASIC-based fabrics normally incur less than 1 microsecond. As latency drops into the nanosecond range, the fabric is essentially performing at wire speed. Fabric performance is also affected by the transmit and receive buffering capability of each port. Some products provide sufficient buffering to queue 12 frames, whereas others can queue 60 or more. Additional buffering allows the fabric to handle congestion without discarding frames, which speeds end-to-end communication between source and destination.
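The effect of per-port buffering can be made concrete with the standard buffer-to-buffer credit calculation: a port may have at most one frame in flight per credit, so over a long link the round-trip time, not the line rate, can become the limit. The function below is a simplified model; the frame size, credit counts, and round-trip times in the example are illustrative assumptions.

```python
def effective_throughput_mbps(line_rate_mbps, credits, frame_bytes, rtt_us):
    """Throughput sustainable with a given number of buffer credits.

    Fibre Channel flow control allows one outstanding frame per credit,
    so at most `credits` frames can be in flight over a link whose
    round-trip time is `rtt_us` microseconds.
    """
    frames_per_sec = credits / (rtt_us / 1_000_000)
    credit_limited_mbps = frames_per_sec * frame_bytes * 8 / 1_000_000
    return min(line_rate_mbps, credit_limited_mbps)

# Short link: 12 credits easily sustain an ~800 Mbps data rate.
short = effective_throughput_mbps(800, credits=12, frame_bytes=2112, rtt_us=100)

# Long link (e.g., metro distance): the same 12 credits become the bottleneck.
long = effective_throughput_mbps(800, credits=12, frame_bytes=2112, rtt_us=1000)
```

The second call works out to roughly 203 Mbps, which is why deeper buffering matters most on extended or congested links.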

Enhanced functionality not covered by specific Fibre Channel standards also includes support for private loop devices, proprietary fabric management, and load sharing trunking between switches.

Almost all fabric switches support some variation of zoning. Port zoning allows a port to be assigned to an exclusive group of other ports. In most implementations, a single port can be assigned to several groups, depending on application requirements. Port zoning, normally a no-cost enhancement to the switch, is an accessible means to segregate servers and their storage from other servers or to isolate different departments sharing the same switch resource. Other zoning options include zoning by node World-Wide Name (WWN) or port World-Wide Name. This more granular approach offers more flexibility in assigning zones but also incurs more cost. Zoning on WWNs may also require an external server and custom application software for managing the zones.
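Port zoning as described above amounts to set membership: two ports may converse only if they share at least one zone. A minimal sketch, with hypothetical zone names and port assignments:

```python
# Hypothetical zones on a 16-port switch. A port may belong to several
# zones; port 2 here is storage shared by the finance and backup zones.
ZONES = {
    "finance": {0, 1, 2},
    "backup":  {2, 3, 8},
    "devtest": {10, 11, 12},
}

def can_communicate(zones, port_a, port_b):
    """True if the two ports are members of at least one common zone."""
    return any(port_a in members and port_b in members
               for members in zones.values())
```

WWN-based zoning follows the same membership logic, but keyed on World-Wide Names rather than physical port numbers, so a device keeps its zone assignment even if it is recabled to a different port.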

To achieve interoperability in multivendor switch environments, a process called zone merging is required. The mechanics of how each vendor implements port- or WWN-based zoning may differ, but once established, zones must be represented in a common nomenclature so that other switches in the fabric can honor zoning restrictions. As with SNS information exchange, zone merging is a requisite step in the convergence of the fabric toward stable operation. The more complex the fabric, the longer it takes to merge zones and ensure that only authorized storage conversations are allowed.
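The zone-merge step can be sketched as combining two switches' zone databases into one fabric-wide set and refusing the merge on conflict. This is a simplified model of the behavior, not the actual FC-SW merge protocol; the conflict rule shown (same zone name, different members) is one common cause of fabric segmentation.

```python
def merge_zone_sets(zones_a, zones_b):
    """Merge two switches' zone databases into one fabric-wide set.

    If both define a zone of the same name with different members, the
    merge fails and the joining fabrics segment rather than converge.
    """
    merged = dict(zones_a)
    for name, members in zones_b.items():
        if name in merged and merged[name] != members:
            raise ValueError(f"zone conflict on {name!r}; fabric segments")
        merged[name] = members
    return merged
```

In a multiswitch fabric this exchange happens pairwise across every interswitch link, which is one reason convergence time grows with fabric complexity.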

Management of fabrics is normally provided out-of-band over Ethernet using SNMP or Telnet. Fabric switches may also be managed in-band through the Fibre Channel link using SES queries or proprietary protocols. Most fabric management applications are device managers; they manage the switch enclosure but have no visibility to the rest of the SAN. Fabric management graphical interfaces may include topology mapping, enclosure and port statistics, routing information, and port performance graphing. These features give IT managers a snapshot of the fabric's status and throughput. The trend in enterprise LAN and WAN management is to incorporate network snapshots into trending applications (for example, Concord Network Health) that can be used for capacity planning and network analysis. As this trend extends to SANs, the management data from fabrics can be queried to provide proactive information for storage network planning.
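Whatever the transport (SNMP, Telnet, or in-band queries), the trending applications mentioned above reduce to sampling port counters over time and deriving utilization. A minimal sketch of that calculation, with illustrative numbers; the actual counters and query mechanism vary by switch vendor:

```python
def utilization(bytes_before, bytes_after, interval_s, link_mbps):
    """Fraction of link capacity used between two counter samples.

    `bytes_before` and `bytes_after` stand in for two successive reads
    of a port's octet counter (for example, via an SNMP GET).
    """
    mbps = (bytes_after - bytes_before) * 8 / interval_s / 1_000_000
    return mbps / link_mbps

# A port that moved 500 MB in 10 seconds on an 800 Mbps link is at 50%.
u = utilization(0, 500_000_000, interval_s=10, link_mbps=800)
```

Sampled regularly and stored, values like this are exactly the raw material a capacity-planning application trends over weeks or months.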

5.7.1 Departmental Fabric Switches

Fabric switches that provide 8 to 32 ports are referred to as departmental switches to differentiate them from more robust and higher-port-count director switches. The term "departmental," however, does not mean that this class of switch is used only in stand-alone departments or small-scale, dispersed SANs. It is not uncommon for an enterprise data center to have hundreds of 8- and 16-port fabric switches mounted with servers and storage in endless rows of 19-inch racks. Although these switches reside in the same data center, they are rarely connected to each other. Instead, each switch serves a specific storage application for one or more departments in the enterprise.

Enterprise data centers may accumulate a significant quantity of departmental fabric switches simply because the supplying vendor packages servers, switches, and storage as a bundle. When more applications are required, more bundles appear. At some point, the volume of separate departmental fabrics begins to resemble the previous accumulation of direct-attached server/storage configurations and presents the same problems in terms of storage management and server administration. It also becomes difficult to reengineer the data center into a more rational distribution of servers and storage connected by a common enterprise fabric, especially in multivendor situations.

Departmental fabric switches may have redundant power supplies and swappable fans, but they do not provide the high-availability features of director-class fabric switches. Each departmental fabric switch is a FRU (field-replaceable unit), and high availability requires installation of dual data paths and redundant switches.

Consolidating an enterprise storage network by connecting multiple departmental switches in a single fabric is a challenge for both customers and vendors. Although a vendor data sheet for a departmental switch may declare support for as many as 239 switches in a fabric, practical guidelines typically call for no more than 32 switches, with no more than 7 switch-to-switch hops in any path. Creating a robust meshed network from 8- and 16-port departmental switches simply consumes too many ports for interswitch links and imposes lengthy convergence times to stabilize the fabric. For high-port-count enterprise SAN requirements, it is more efficient to use director-class switches at the core, with departmental switches as fan-out for device ports.
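The port-consumption problem is easy to quantify. In a full mesh, every switch dedicates one port per peer (or more, if links are duplicated for bandwidth), and what remains is all that is left for servers and storage. A sketch of that arithmetic, with illustrative configurations:

```python
def usable_device_ports(num_switches, ports_per_switch, isls_per_pair=1):
    """Device ports remaining after full-mesh ISL cabling.

    Each switch spends (num_switches - 1) * isls_per_pair ports on
    interswitch links; the rest can attach N_Port devices.
    """
    isl_ports = (num_switches - 1) * isls_per_pair
    per_switch = max(0, ports_per_switch - isl_ports)
    return per_switch * num_switches

# Eight 16-port switches in a full mesh: 7 of every 16 ports feed ISLs,
# leaving only 9 device ports per switch (72 of 128 ports total).
devices = usable_device_ports(num_switches=8, ports_per_switch=16)
```

With dual ISLs per pair for bandwidth, the same eight switches would lose 14 of 16 ports on each chassis to interswitch links, which illustrates why a director core with departmental fan-out scales more economically.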

The best use of departmental fabric switches is in environments that need to support only a few storage applications for a limited device population. If growth is anticipated, the SAN design should include eventual attachment to a core director switch as opposed to cascading of additional departmental switches.

5.7.2 Fibre Channel Directors

At the high end of Fibre Channel fabric offerings, Fibre Channel directors provide high port counts in a single, high-availability enclosure. Directors may provide 64 to 128 or more ports (256 ports for some announced products) and so present a streamlined solution for the storage requirements of large data centers. High port count alone, however, does not automatically qualify a product for director status. With roots in data center mainframe channel extension, Fibre Channel director architecture implies high availability for every component, including redundant processors, routing engines, backplanes, and hot-swappable port cards. These high-availability features necessarily drive the cost of director-class fabric switches beyond the reach of departmental budgets and position them as a high-end data center solution.

As with departmental switches, the number of directors that can be connected in a single fabric may be limited. This is not because of the availability of ports for interswitch links but rather because of the complexity of exchanging route, zoning, and SNS data and the switch-to-switch latency that may occur as the hop count increases. The automated processes provided by Fibre Channel switch protocols to streamline small fabric configurations pose problems for scalability. In addition, some vendors do not advertise the fact that their high-port-count director chassis is the result of combining smaller modules: for example, connecting two 64-port switches in a single enclosure. This design introduces an additional hop count within the director itself and, without proper allocation of ports, may result in a blocking architecture. Due diligence with your vendors is always a good idea, particularly when significant amounts of money are involved.

As shown in Figure 5-14, you can accommodate hop count limitations by combining directors with departmental fabric switches. Trunked interswitch links between directors can reduce potential blocking, and the port fan-out supplied by departmental switches increases the total population that can be reasonably supported. This type of solution does not provide director-class availability to every port, so the SAN architect must decide which devices should be directly attached to the director and which can reside on departmental switches.

Figure 5-14. Combining directors and departmental switches in a fabric




Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs (2nd Edition)
ISBN: 0321136500
Year: 2003
Pages: 171
Authors: Tom Clark
