Fibre Channel


FC is currently the network technology of choice for storage networks. FC is designed to offer high throughput, low latency, high reliability, and moderate scalability. Consequently, FC can be used for a broad range of ULPs. However, market adoption of FC for general-purpose IP connectivity is not likely, given Ethernet's immense installed base, lower cost, and comparative simplicity. Moreover, FC provides the combined functionality of Ethernet and TCP/IP, so running TCP/IP on FC represents a duplication of network services that unnecessarily increases cost and complexity. That said, IP over FC (IPFC) is used to solve certain niche requirements.

Note

Some people think FC has higher throughput than Ethernet and, on that basis, would be a good fit for IP networks. However, Ethernet supports link aggregation in increments of 1 Gbps. So, a host that needs more than 1 Gbps can achieve higher throughput simply by aggregating multiple GE NICs. This is sometimes called NIC teaming, and it is very common.


Merger of Channels and Packet Switching

The limitations of multidrop bus technologies such as SPI and the intelligent peripheral interface (IPI) motivated ANSI to begin development of a new storage interconnect in 1988. Though storage was the primary application, other applications such as supercomputing and high-speed LANs were also considered. ANSI drew from its experience with earlier standards, including SPI, IPI, and the high-performance parallel interface (HIPPI), while developing FC. IBM's ESCON architecture also influenced the design of FC. ANSI intended for FC to support the ULPs associated with each of those technologies, as well as others. When ANSI approved the first FC standard in 1994, it supported SCSI, IPI, HIPPI, SBCCS, 802.2, and asynchronous transfer mode (ATM) via separate mapping specifications (FC-4). FC has continued to evolve and now supports additional ULPs.

We can define the concept of a channel in many ways. Historically, a storage channel has been characterized by a physical end-to-end connection (among other things). That is the case with all multidrop bus technologies, and with IBM's ESCON architecture. Preserving the channel characteristics of traditional storage interconnects while simultaneously expanding the new interconnect to support greater distances, higher node counts, and improved utilization of link resources was a major challenge for ANSI. ANSI determined that packet switching could be the answer if properly designed. To that end, ANSI produced a packet-switched interconnect capable of providing various delivery modes including circuit-switched connection emulation (conceptually similar to ATM Circuit Emulation Service [CES]). ANSI made it feasible to transport storage traffic via all defined delivery modes by instituting credit-based link-level flow control and a full suite of timing restrictions. Today, none of the major FC switch vendors support circuit-switched mode, and storage traffic is forwarded through FC-SANs on a hop-by-hop basis. This model facilitates improvements in link-utilization efficiency and enables hosts to multiplex simultaneous sessions.

FC Throughput

FCP is the FC-4 mapping of SCSI-3 onto FC. Understanding FCP throughput in the same terms as iSCSI throughput is useful because FCP and iSCSI can be considered direct competitors. (Note that most vendors position these technologies as complementary today, but both solve the same business problems.) Fibre Channel throughput is commonly expressed in bytes per second rather than bits per second. This is similar to the throughput terminology of SPI, which FC aspires to replace. The initial FC specification introduced rates of 12.5 MBps, 25 MBps, 50 MBps, and 100 MBps on several different copper and fiber media. Additional rates were subsequently introduced, including 200 MBps and 400 MBps. These colloquial byte rates, when converted to bits per second, approximate the data bit rate (like Ethernet). 100-MBps FC is also known as 1-Gbps FC, 200 MBps as 2 Gbps, and 400 MBps as 4 Gbps. These colloquial bit rates approximate the raw bit rate (unlike Ethernet). This book uses bits-per-second terminology for FC to maintain consistency with other serial networking technologies. Today, 1 Gbps and 2 Gbps are the most common rates, and fiber-optic cabling is the most common medium. That said, 4 Gbps is being rapidly and broadly adopted. Additionally, ANSI recently defined a new rate of 10 Gbps (10GFC), which is likely to be used solely for inter-switch links (ISLs) for the next few years. Storage array vendors might adopt 10GFC eventually. ANSI is expected to begin defining a new rate of 8 Gbps in 2006. The remainder of this book focuses on FC rates equal to and greater than 1 Gbps on fiber-optic cabling.

The FC-PH specification defines baud rate as the encoded bit rate per second, which means the baud rate and raw bit rate are equal. The FC-PI specification redefines baud rate more accurately and states explicitly that FC encodes 1 bit per baud. Indeed, all FC-1 variants up to and including 4 Gbps use the same encoding scheme (8B/10B) as the GE fiber-optic variants. 1-Gbps FC operates at 1.0625 GBaud, provides a raw bit rate of 1.0625 Gbps, and provides a data bit rate of 850 Mbps. 2-Gbps FC operates at 2.125 GBaud, provides a raw bit rate of 2.125 Gbps, and provides a data bit rate of 1.7 Gbps. 4-Gbps FC operates at 4.25 GBaud, provides a raw bit rate of 4.25 Gbps, and provides a data bit rate of 3.4 Gbps. To derive ULP throughput, the FC-2 header and inter-frame spacing overhead must be subtracted. Note that FCP does not define its own header. Instead, fields within the FC-2 header are used by FCP. The basic FC-2 header adds 36 bytes of overhead, and inter-frame spacing adds another 24 bytes. Assuming the maximum payload (2112 bytes) and no optional FC-2 headers, the ULP throughput rate is 826.519 Mbps, 1.65304 Gbps, and 3.30608 Gbps for 1 Gbps, 2 Gbps, and 4 Gbps, respectively. These ULP throughput rates are available directly to SCSI.
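These figures follow directly from the encoding and framing overhead just described. The following Python sketch (illustrative only; the function and variable names are invented) derives the data bit rate and FCP ULP throughput from the baud rate for the 8B/10B variants, reproducing the 1-Gbps, 2-Gbps, and 4-Gbps values:

# Sketch: derive FC data bit rate and ULP throughput from the baud rate.
# Assumes 8B/10B encoding (1/2/4-Gbps FC), the basic 36-byte FC-2 header,
# 24 bytes of inter-frame spacing, and the 2112-byte maximum payload.

HEADER_BYTES = 36       # basic FC-2 header (no optional headers)
SPACING_BYTES = 24      # inter-frame spacing
MAX_PAYLOAD = 2112      # maximum FC-2 payload in bytes

def ulp_throughput_8b10b(gbaud):
    raw_bit_rate = gbaud                    # FC encodes 1 bit per baud
    data_bit_rate = raw_bit_rate * 8 / 10   # 8B/10B encoding overhead
    efficiency = MAX_PAYLOAD / (MAX_PAYLOAD + HEADER_BYTES + SPACING_BYTES)
    return data_bit_rate, data_bit_rate * efficiency

for gbaud in (1.0625, 2.125, 4.25):
    data, ulp = ulp_throughput_8b10b(gbaud)
    print(f"{gbaud} GBaud -> data {data:.3f} Gbps, ULP {ulp:.5f} Gbps")
# Prints approximately:
# 1.0625 GBaud -> data 0.850 Gbps, ULP 0.82652 Gbps
# 2.125 GBaud -> data 1.700 Gbps, ULP 1.65304 Gbps
# 4.25 GBaud -> data 3.400 Gbps, ULP 3.30608 Gbps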

The 10GFC specification builds upon the 10GE specification. 10GFC supports five physical variants. Two variants are parallel implementations based on 10GBASE-X. One variant is a parallel implementation similar to 10GBASE-X that employs four pairs of fiber strands. Two variants are serial implementations based on 10GBASE-R. All parallel implementations operate at a single baud rate. Likewise, all serial variants operate at a single baud rate. 10GFC increases the 10GE baud rates by 2 percent. Parallel 10GFC variants operate at 3.1875 GBaud per signal, provide an aggregate raw bit rate of 12.75 Gbps, and provide an aggregate data bit rate of 10.2 Gbps. Serial 10GFC variants operate at 10.51875 GBaud, provide a raw bit rate of 10.51875 Gbps, and provide a data bit rate of 10.2 Gbps. Note that serial 10GFC variants are more efficient than parallel 10GFC variants because of their different encoding schemes: the serial variants use 64B/66B encoding (inherited from 10GBASE-R), whereas the parallel variants use 8B/10B encoding. Assuming the maximum payload (2112 bytes) and no optional FC-2 headers, the ULP throughput rate is 9.91823 Gbps for all 10GFC variants. This ULP throughput rate is available directly to SCSI. Table 3-5 summarizes the baud, bit, and ULP throughput rates of the FC and 10GFC variants.

Table 3-5. FC Baud, Bit, and ULP Throughput Rates

FC Variant        Baud Rate           Raw Bit Rate     Data Bit Rate   FCP ULP Throughput
1 Gbps            1.0625 GBaud        1.0625 Gbps      850 Mbps        826.519 Mbps
2 Gbps            2.125 GBaud         2.125 Gbps       1.7 Gbps        1.65304 Gbps
4 Gbps            4.25 GBaud          4.25 Gbps        3.4 Gbps        3.30608 Gbps
10GFC Parallel    3.1875 GBaud x 4    12.75 Gbps       10.2 Gbps       9.91823 Gbps
10GFC Serial      10.51875 GBaud      10.51875 Gbps    10.2 Gbps       9.91823 Gbps
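The 10GFC rows in Table 3-5 follow the same arithmetic with different encoding ratios. A brief continuation of the sketch above (again illustrative only), assuming 8B/10B encoding on the parallel variants and 64B/66B encoding on the serial variants:

# Sketch: 10GFC data bit rates and ULP throughput.
# Frame overhead (36-byte header, 24 bytes of inter-frame spacing,
# 2112-byte maximum payload) is the same as in the previous sketch.

EFFICIENCY = 2112 / (2112 + 36 + 24)     # payload / (payload + overhead)

parallel_raw = 3.1875 * 4                # four lanes -> 12.75 Gbps raw
parallel_data = parallel_raw * 8 / 10    # 8B/10B -> 10.2 Gbps
serial_raw = 10.51875                    # 10.51875 Gbps raw
serial_data = serial_raw * 64 / 66       # 64B/66B -> 10.2 Gbps

print(parallel_data * EFFICIENCY)        # ~9.91823 Gbps
print(serial_data * EFFICIENCY)          # ~9.91823 Gbps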


Figure 3-11 illustrates the protocol stack for FCP.

Figure 3-11. FCP Stack


FC Topologies

FC supports all physical topologies, but protocol operations differ depending on the topology. Protocol behavior is tailored to PTP, loop, and switch-based topologies. Like Ethernet, Fibre Channel supports both shared media and switched topologies. A shared media FC implementation is called a Fibre Channel Arbitrated Loop (FCAL), and a switch-based FC implementation is called a fabric.

FC PTP connections are used for DAS deployments. Companies with SPI-based systems that need higher throughput can upgrade to newer SPI equipment or migrate away from SPI. The FC PTP topology allows companies to migrate away from SPI without migrating away from the DAS model. This strategy allows companies to become comfortable with FC technology in a controlled manner and offers investment protection of FC HBAs if and when companies later decide to adopt FC-SANs. The FC PTP topology is considered a niche.

Most FC switches support FCAL via special ports called fabric loop (FL) ports. Most FC HBAs also support loop protocol operations. An HBA that supports loop protocol operations is called a node loop port (NL_Port). Without support for loop protocol operations, a port cannot join an FCAL. Each time a device joins an FCAL, an attached device resets, or a link-level error occurs on the loop, the loop is reinitialized, and all communication is temporarily halted. This can cause problems for certain applications such as tape backup, but these problems can be mitigated through proper network design. Unlike collisions in shared-media Ethernet deployments, loop initialization generally occurs infrequently. That said, overall FCAL performance can be adversely affected by recurring initializations to such an extent that a fabric topology becomes a requirement. The FCAL addressing scheme is different from fabric addressing and limits FCAL deployments to 127 nodes (126 if not fabric attached). However, the shared medium of an FCAL imposes a practical limit of approximately 18 nodes. FCAL was popular in the early days of FC but has lost ground to FC switches in recent years. FCAL is still used inside most JBOD chassis, and in some NAS filers, blade centers, and even enterprise-class storage subsystems, but FCAL is now essentially a niche technology for embedded applications.

Like Ethernet switches, FC switches can be interconnected in any manner. Unlike Ethernet, however, there is a limit to the number of FC switches that can be interconnected. Address space constraints limit FC-SANs to a maximum of 239 switches. Cisco's virtual SAN (VSAN) technology increases the number of switches that can be physically interconnected by reusing the entire FC address space within each VSAN. FC switches employ a routing protocol called fabric shortest path first (FSPF), which is based on a link-state algorithm (a simple illustration of the shortest-path computation follows Figure 3-13). FSPF reduces all physical topologies to a logical tree topology. Most FC-SANs are deployed in one of two designs, commonly known as the core-only and core-edge designs. The core-only design is a star topology, and the core-edge design is a two-tier tree topology. The FC community seems to prefer its own terminology, but there is nothing novel about these two topologies other than their names. Host-to-storage FC connections are usually redundant. However, single host-to-storage FC connections are common in cluster and grid environments because host-based failover mechanisms are inherent to such environments. In both the core-only and core-edge designs, the redundant paths are usually not interconnected. The edge switches in the core-edge design may be connected to both core switches, but doing so creates one physical network and compromises resilience against network-wide disruptions (for example, FSPF convergence). As FC-SANs proliferate, their size and complexity are likely to increase. Advanced physical topologies might eventually become mandatory, but confidence in FSPF and traffic-engineering mechanisms must increase first. Figures 3-12 and 3-13 illustrate the typical FC-SAN topologies. The remaining chapters of this book assume a switch-based topology for all FC discussions.

Figure 3-12. Dual Path Core-Only Topology


Figure 3-13. Dual Path Core-Edge Topology
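FSPF itself is defined in the FC-SW specification series; the following Python sketch merely illustrates the link-state shortest-path computation on which FSPF is based, using Dijkstra's algorithm over a hypothetical four-switch fabric (the switch names and link costs are invented for illustration):

import heapq

# Hypothetical fabric: switch name -> {neighbor: link cost}.
# Lower cost is preferred, as in FSPF. Illustrative only.
fabric = {
    "sw1": {"sw2": 500, "sw3": 500},
    "sw2": {"sw1": 500, "sw4": 250},
    "sw3": {"sw1": 500, "sw4": 500},
    "sw4": {"sw2": 250, "sw3": 500},
}

def least_cost_paths(source):
    """Dijkstra's algorithm: least path cost from source to every switch."""
    cost = {source: 0}
    heap = [(0, source)]
    while heap:
        c, sw = heapq.heappop(heap)
        if c > cost.get(sw, float("inf")):
            continue                                 # stale heap entry
        for neighbor, link_cost in fabric[sw].items():
            new_cost = c + link_cost
            if new_cost < cost.get(neighbor, float("inf")):
                cost[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return cost

print(least_cost_paths("sw1"))  # {'sw1': 0, 'sw2': 500, 'sw3': 500, 'sw4': 750}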


FC Service and Device Discovery

FC employs both service- and device-oriented approaches that are well suited to medium- and large-scale environments. FC provides registration and discovery services via a name server model. The FCNS may be queried for services or devices, but the primary key of the FCNS database is node address. So, a common discovery technique is to query based on node address (device oriented). That said, the device-oriented approach is comparatively inefficient for initial discovery of other nodes. So, the FCP specification series suggests querying the FCNS based on service type (service oriented). All nodes (initiators and targets) can register themselves in the FCNS, but registration is optional. This means that FCNS discovery reveals only registered nodes, not all nodes that are physically present. However, practically all FC HBAs are hard coded to register automatically with the FCNS after link initialization. So, unregistered nodes are extremely rare, and FCNS discovery usually provides complete visibility. Unlike the iSNS RFC, the FC specifications do not explicitly state that unregistered nodes cannot query the FCNS.

When a node joins a fabric, an address is assigned to it. The new node does not know which addresses have been assigned to other nodes. So, when using the device-oriented approach, the new node submits a get next query to the FCNS asking for all information associated with an arbitrarily chosen node address. The FCNS responds with all information about the next numerically higher node address that has been assigned to a registered node. The address of that node is specified in the next get next query submitted by the new node. This process is repeated until the numerically highest address is reached, at which point the FCNS wraps to the numerically lowest address that has been assigned to a registered node. The query process then continues normally until the originally specified address is reached. At this point, the new node is aware of every registered node and has all information available about each registered node. LUN discovery can then be initiated to each registered node via the SCSI REPORT LUNS command. The FCNS stores information about which FC-4 ULPs are supported by each node and the capabilities associated with each ULP, assuming that information is supplied by each node during registration. This information allows a new node to limit LUN discovery to just those nodes that support FCP target services. However, a node may query all discovered nodes that support FCP.

Alternatively, the new node can submit a get port identifiers query. In such a query, a node address is not specified. Instead, an FC-4 protocol is specified along with features of that protocol. The name server searches the FCNS database for all registered devices that support the FC-4 protocol and features specified in the query. The response contains a list of node addresses that match the search criteria. This service-oriented approach enables a new node to discover all relevant nodes with a single query.
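A minimal sketch of the two discovery styles described above, assuming hypothetical helper functions fcns_get_next() and fcns_get_port_identifiers() that stand in for the actual name server query frames:

# Device-oriented discovery: walk the name server with repeated "get next"
# queries until the walk wraps back to the first registered address returned.
def discover_all_nodes(fcns_get_next, starting_address=0x000000):
    nodes = []
    entry = fcns_get_next(starting_address)       # arbitrary starting address
    first_address = entry["port_id"]
    while True:
        nodes.append(entry)
        entry = fcns_get_next(entry["port_id"])   # next numerically higher node
        if entry["port_id"] == first_address:     # wrapped around: discovery done
            break
    return nodes

# Service-oriented discovery: a single query returns every registered node
# that supports the requested FC-4 protocol (FCP) and features (target).
def discover_fcp_targets(fcns_get_port_identifiers):
    return fcns_get_port_identifiers(fc4_type="FCP", features="target")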

The FC name service is a subset of the directory service. Implementation of the FC directory service is optional, but all modern FC-SANs implement the directory service. The directory service can be implemented on a host or storage subsystem, but it is most commonly implemented on FC switches. FC globally reserves certain addresses for access to fabric-based services such as the directory service. These reserved addresses are called well-known addresses (WKAs). Each FC switch uses the WKA of 0xFFFFFC to represent its internal logical node that provides the directory service. This address is hard coded into FC HBA firmware. Hosts and storage subsystems send FCNS registration and discovery requests to this WKA. Each FC switch processes FCNS requests for the nodes that are physically attached to it. Each FC switch updates its local database with registration information about locally attached devices, distributes updates to other FCNSs via inter-switch registered state change notifications (SW_RSCNs), listens for SW_RSCNs from other FCNSs, and caches information about non-local devices. FC nodes can optionally register to generate and receive normal RSCNs. This is done via the state change registration (SCR) procedure. The SCR procedure is optional, but practically all FC nodes register because notification of changes is critical to proper operation of storage devices. Registration also enables FC nodes to trigger RSCNs whenever their internal state changes (for example, a change occurs to one of the FC-4 operational parameters). This is done via the RSCN Request procedure.

FC zones are similar to iSNS DDs and SLP scopes. Nodes can belong to one or more zones simultaneously, and zone membership is additive. Zones are considered to be soft or hard. Membership in a soft zone may be determined by node name, node port name, or node port address. Membership in a hard zone is determined by the FC switch port to which a node is connected. A default zone exists into which all nodes not explicitly assigned to at least one named zone are placed. Nodes in the default zone cannot communicate with nodes in other zones. However, communication among nodes within the default zone is optional. The choice is implementation specific. Nodes belonging to one or more named zones are allowed to discover only those nodes that are in at least one common zone. Management nodes are allowed to query the entire FCNS database without consideration for zone membership. The notion of a zone set is supported to improve manageability. Many zones can be defined, but only those zones that belong to the currently active zone set are considered active. Management nodes are able to configure zone sets, zones, and zone membership via the FC zone server (FCZS). RSCNs are limited by zone boundaries. In other words, only the nodes in the affected zone(s) are notified of a change.
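The discovery filtering described above reduces to a simple membership test: two nodes may discover each other only if they share at least one zone in the active zone set, with default-zone behavior left to the implementation. A minimal sketch, with hypothetical node and zone names:

# Hypothetical active zone set: zone name -> set of member node port names.
active_zone_set = {
    "zone_hostA_array1": {"hostA", "array1"},
    "zone_hostB_array1": {"hostB", "array1"},
}

def zones_of(node, zone_set):
    return {name for name, members in zone_set.items() if node in members}

def can_discover(node1, node2, zone_set, default_zone_allowed=False):
    """True if the nodes share at least one active zone.

    Nodes assigned to no named zone fall into the default zone; whether they
    can communicate there is implementation specific (default_zone_allowed).
    """
    z1, z2 = zones_of(node1, zone_set), zones_of(node2, zone_set)
    if not z1 and not z2:
        return default_zone_allowed
    return bool(z1 & z2)

print(can_discover("hostA", "array1", active_zone_set))  # True (shared zone)
print(can_discover("hostA", "hostB", active_zone_set))   # False (no common zone)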



