4.7 SAN Building Blocks

Sections 4.4 and 4.6 provided an overview of Fibre Channel topologies and the Fibre Channel protocol. Continuing with the top-down approach, it is time to consider the various devices and elements that are used to construct a Fibre Channel SAN. These elements, referred to as the building blocks of a Fibre Channel SAN, are

  • Host bus adapters

  • Fibre Channel cables

  • Connectors

  • Interconnect devices, which include hubs, switches, and fabric switches

All of these are described in Sections 4.7.1 through 4.7.4. Note that all addressable entities on a Fibre Channel SAN have unique World Wide Names. These are analogous to the unique MAC addresses for Ethernet interfaces. In Fibre Channel, the World Wide Name is a unique 64-bit number, typically written as "XX:XX:XX:XX:XX:XX:XX:XX." The IEEE assigns each manufacturer a range of addresses. The manufacturer is responsible for allocating its assigned addresses in a unique way to its devices.
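
As a concrete illustration, the sketch below (in Python, not from the book) formats a 64-bit World Wide Name in the colon-separated notation shown above and extracts the manufacturer's IEEE-assigned identifier (OUI); the bit positions assume the common NAA type 5 registered-name layout, and the WWN value itself is made up.

```python
def format_wwn(wwn: int) -> str:
    """Render a 64-bit WWN as the conventional XX:XX:XX:XX:XX:XX:XX:XX string."""
    return ":".join(f"{(wwn >> shift) & 0xFF:02X}" for shift in range(56, -8, -8))

def oui_of(wwn: int) -> int:
    """Extract the 24-bit IEEE OUI, assuming the NAA type 5 (registered) layout:
    a 4-bit NAA field, then the 24-bit OUI, then a 36-bit vendor-assigned value."""
    return (wwn >> 36) & 0xFFFFFF

wwn = 0x500604872363A1B2            # hypothetical port WWN
print(format_wwn(wwn))              # 50:06:04:87:23:63:A1:B2
print(f"OUI: {oui_of(wwn):06X}")    # OUI: 006048
```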

4.7.1 Host Bus Adapters

A host bus adapter (HBA) is simply an adapter that plugs into a computer system and provides connectivity with storage devices. In the Windows PC world, HBAs are typically PCI (Peripheral Component Interconnect) based and can provide connectivity to IDE (Integrated Drive Electronics), SCSI, or Fibre Channel devices. The HBA is operated and controlled via a device driver, which in the Windows PC world is typically a SCSIPort or Storport miniport driver.

When an HBA port is initialized, it logs into the fabric (whenever a fabric is available) and registers various attributes that are stored within the fabric switch. Applications can discover these attributes, typically using either switch vendor-specific APIs or HBA vendor-specific APIs. The Storage Networking Industry Association (SNIA) is working on defining a standardized API that will work across all vendor APIs.

For robust SANs that have high availability requirements, some HBA vendors offer additional capabilities, such as automatic failover (to another HBA). These solutions, along with additional architecture, are described in Chapter 9.

In an arbitrated loop, only two devices can be communicating at any given time. Assume that one of these devices is an HBA in a host system, and that the HBA is receiving data from a storage device. However, if this HBA is connected to a switched-fabric SAN, it could send multiple read requests to multiple storage units. The responses to those requests could arrive in any order and in an interleaved fashion. To make matters worse for the HBA, the fabric switch typically provides round-robin service for the ports, making it likely that successive packets will arrive from different sources.

HBAs deal with this problem in one of two ways. One strategy, called store and sort, simply stores the data in host memory and then lets the HBA driver use host CPU cycles to sort the buffers. Obviously this approach is expensive in terms of host CPU time, and the total time taken is on the order of a few tens of microseconds for each context switch. The other strategy, called on the fly, provides extra logic and silicon on the HBA itself to accomplish context switching without using host CPU cycles. Typical context-switching times with this strategy are on the order of a few microseconds.
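
A minimal sketch of the store-and-sort idea follows; the frame fields (exchange ID, offset) are illustrative stand-ins for the Fibre Channel OX_ID and relative offset, not an actual HBA driver interface.

```python
from collections import defaultdict

# Each received frame carries an exchange identifier (OX_ID in Fibre Channel)
# and a payload offset; frames from different exchanges arrive interleaved.
frames = [
    {"ox_id": 0x10, "offset": 0, "data": b"aaaa"},
    {"ox_id": 0x20, "offset": 0, "data": b"bbbb"},
    {"ox_id": 0x10, "offset": 4, "data": b"cccc"},
    {"ox_id": 0x20, "offset": 4, "data": b"dddd"},
]

# "Store": accumulate frames per exchange as they arrive in host memory.
exchanges = defaultdict(list)
for frame in frames:
    exchanges[frame["ox_id"]].append(frame)

# "Sort": reassemble each exchange's payload in offset order (host CPU work).
payloads = {
    ox_id: b"".join(f["data"] for f in sorted(fs, key=lambda f: f["offset"]))
    for ox_id, fs in exchanges.items()
}
print(payloads)  # {16: b'aaaacccc', 32: b'bbbbdddd'}
```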

As explained in Section 4.6.3.5, buffer credit is a term defined as part of the FC standard. One credit allows the sender to send one FC frame. Before the next frame can be sent, a Receiver Ready signal must be received by the sender. To keep the FC pipe busy, one must have multiple frames in flight, but that requires multiple credits, which in turn means more memory available for frame reception. Some HBAs have four 1K buffers or two 2K buffers, although some high-end HBAs have 128K or 256K of memory for buffer credits. Note that this memory ideally needs to be dual ported; that is, while some part of the memory is receiving data from the Fibre Channel SAN, other parts may be transferring data onto the host PCI bus.
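
Credit-based flow control reduces to a simple counter: the sender starts with the buffer-to-buffer credit advertised by the receiver at login, spends one credit per frame sent, and regains one for each Receiver Ready (R_RDY) received. A toy model, with illustrative names:

```python
class CreditedSender:
    """Toy model of buffer-to-buffer credit flow control."""

    def __init__(self, bb_credit: int):
        self.credits = bb_credit  # advertised by the receiver at login

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> None:
        assert self.can_send(), "no credit: sender must wait for R_RDY"
        self.credits -= 1  # one credit spent per frame in flight

    def on_receiver_ready(self) -> None:
        self.credits += 1  # receiver freed a buffer and signaled R_RDY

sender = CreditedSender(bb_credit=2)
sender.send_frame()
sender.send_frame()
print(sender.can_send())   # False: pipe is full until an R_RDY arrives
sender.on_receiver_ready()
print(sender.can_send())   # True
```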

HBAs also play a role in ensuring high availability and failover solutions that provide multiple I/O paths to the same storage unit. This is described in Chapter 9.

4.7.1.1 Windows and HBAs

In Windows NT and Windows 2000, Fibre Channel adapters are treated as SCSI devices, and the drivers are written as SCSI miniports. As described in Chapter 2, the problem is that the SCSIPort driver is a little outdated and does not cater to features supported by newer SCSI devices, let alone Fibre Channel devices. Hence, Windows Server 2003 introduces the new Storport driver model that is meant to replace the SCSIPort driver model, especially for SCSI-3 and FC devices. Note that FC storage disks appear to Windows as if they were directly attached, thanks in part to the abstraction provided by the SCSIPort and Storport drivers.

4.7.1.2 Dual Pathing

Sometimes high performance and high availability are needed, even at a higher cost. In these cases a server is connected to dual-ported storage disks via multiple HBAs and multiple independent FC SANs. The idea is to eliminate any single point of failure. In addition, while everything is healthy, the multiple paths can be used to balance the load and improve performance. More details, including how vendors and Microsoft have built multipath solutions for Windows servers, are provided in Chapter 9.

4.7.2 Fibre Channel Cable Types

The two major types of cables used are optical and copper. Their major advantages and disadvantages are as follows:

  • Copper cabling is cheaper than optical cabling.

  • Optical cabling can support higher data rates than copper cabling.

  • Copper cable can span a smaller distance, up to 30 meters, as compared to optical cable, which can span up to 2 kilometers for multimode cable and up to 10 kilometers for single-mode cable.

  • Copper cable is more prone to electromagnetic interference and cross talk.

  • Optical data typically needs to be converted to electrical signals for transmission through a switch backplane and then converted back to optical data for further transmission.

As alluded to here, there is only one type of copper cabling, but there are two different types of optical cabling: multimode and single-mode.

For short distances, multimode cable is used, which typically has a 50- or 62.5-micron core. (A micron, or micrometer, is one-millionth of a meter.) The light wave used has a wavelength of 780 nanometers, which is not supported on single-mode cable. For longer distances, single-mode cable is used, which typically has a 9-micron core. The light wave used here has a wavelength of 1,300 nanometers, which works on multimode cable as well.

Because this book is about storage and networking, it is worth noting that even though this chapter is about Fibre Channel, all of these cables can also be used for other forms of networking, such as Gigabit Ethernet.

4.7.3 Connectors

Because Fibre Channel supports multiple cable types (and transmission technologies), devices such as HBAs, interconnect devices, and storage devices are manufactured with a socket that accepts a connector providing connectivity to the transmission media. This is done in the interest of keeping costs down. Given that there are different types of transmission media and technology, it is logical that there are different types of connectors: [3]

[3] Multiple physical standards exist, so the fact that there are only three basic types of technology (copper, single-mode cable, and multimode cable) does not mean that there are only three types of physical connectors. In addition, all of these connectors can be, and are, used for other types of networks, such as Gigabit Ethernet.

  • Gigabit interface converters (GBICs), which translate between serial and parallel data transfer. GBICs offer hot-plug functionality; that is, one can remove or insert a GBIC without affecting the other ports. GBICs have a 20-bit parallel interface.

  • Gigabit link modules (GLMs), which provide functionality similar to that of GBICs but require the device to be powered down for installation. On the other hand, they are less costly than GBICs.

  • Media interface adapters, which are used to allow conversion from copper to optical media and vice versa. Media interface adapters are typically used in HBAs, but they can also be used in switches and hubs.

  • Small form factor (SFF) adapters, which allow more interfaces to be accommodated on a card of a given size.

4.7.4 Interconnect Devices

Interconnect devices provide the connectivity between the various elements of the SAN building blocks. Their functionality ranges from the low cost and low functionality of the Fibre Channel hub to the high cost, high performance, and high manageability of fabric switches. These devices are described in Sections 4.7.4.1 through 4.7.4.3.

4.7.4.1 Fibre Channel Arbitrated-Loop Hubs

FC-AL hubs provide a low-cost solution for connecting multiple Fibre Channel nodes (storage, servers, computer systems, other hubs or switches) into a loop configuration. Typical hubs provide from 8 to 16 ports. A hub can support different transmission types; for example, copper or optical.

Fibre Channel hubs are passive devices; that is, devices on the loop cannot detect the presence of the hub. Hubs provide just two kinds of functionality:

  1. A wiring backplane that can connect any port to any other port

  2. The ability to bypass a port that has a faulty device

The biggest single problem with hubs is that they allow only one Fibre Channel connection at a time. In Figure 4.7, if Port 1 wins an arbitration to establish a session with Port 8, none of the other ports can communicate for the duration of the session.

Figure 4.7. A Fibre Channel Hub


Hubs can be connected to Fibre Channel fabric switches (described in Section 4.7.4.3) without any upgrades. One can also cascade hubs simply by plugging in a cable between two hubs.

FC-AL represents a significant percentage of the total Fibre Channel deployment, but Fibre Channel fabric switches (FC-SWs) are gaining share as costs come down.

Gadzoox Networks, Emulex, and Brocade are some examples of companies that produce hubs.

4.7.4.2 Fibre Channel Arbitrated-Loop Switches

The biggest single advantage of FC-AL switches over hubs is that they allow multiple simultaneous connections, whereas hubs allow only a single connection at a time (see Figure 4.8).

Figure 4.8. A Fibre Channel Switch


Achieving this simultaneous transmission capability entails quite some work on the part of the switch, yet devices connected to the loop switch do not even realize it is happening. Loop switches play a role in data transmission and loop addressing; the next sections provide more details. The following sections also discuss the roles of loop switches in a SAN and the ways in which some vendors have added features to their offerings.

Loop Switches and Data Transmission

A server that wants to access a storage device will still send a Fibre Channel arbitration request to gain control of the loop. On a normal FC-AL loop with a hub, every device sees the arbitration packet before it is returned to the server HBA, indicating that the server has won arbitration. A loop switch, by contrast, will send a successful arbitration response immediately, without sending the arbitration to any other nodes. At that point the server HBA will send an Open primitive directed at the port that holds the storage device, and the loop switch will forward the Open primitive. Assuming that the particular port is not actively communicating with any other port, all is fine. The problem arises when, say, the server HBA sends an Open primitive addressed to port 7 and port 7 happens to be busy, already communicating with another port. To take care of this problem, the loop switch must provide buffers to temporarily hold the frames directed at port 7. Some switch vendors provide 32 buffers per port for this purpose.
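
In outline, then, a loop switch answers arbitration locally, forwards the Open primitive, and buffers frames when the destination port is busy. The following schematic sketch assumes the 32-buffers-per-port figure cited above; the class and method names are illustrative, not any vendor's interface.

```python
from collections import deque

class LoopSwitchPort:
    def __init__(self, buffer_depth: int = 32):  # 32 buffers per port, as cited above
        self.busy = False
        self.pending = deque(maxlen=buffer_depth)

class LoopSwitch:
    def __init__(self, num_ports: int):
        self.ports = [LoopSwitchPort() for _ in range(num_ports)]

    def arbitrate(self, src: int) -> bool:
        # Answer arbitration immediately; no need to circulate it around a loop.
        return True

    def open(self, src: int, dst: int, frame: bytes) -> str:
        port = self.ports[dst]
        if not port.busy:
            port.busy = True
            return f"forwarded Open + frame to port {dst}"
        if len(port.pending) < port.pending.maxlen:
            port.pending.append((src, frame))  # hold frame until the port frees up
            return f"port {dst} busy: frame buffered ({len(port.pending)} queued)"
        return f"port {dst} busy and buffers full: sender must back off"

switch = LoopSwitch(num_ports=8)
print(switch.open(1, 7, b"read request"))  # forwarded
print(switch.open(2, 7, b"read request"))  # buffered
```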

Loop Switches and FC-AL Addressing

FC-AL hubs play no role in device address assignment, other than forwarding the address primitive frames around the loop. The same is true for most switches as well. However, some devices insist on having a particular address. To manage this demand, some switch vendors allow the switch to control the order in which the ports are initialized, thereby allowing a particular port to initialize first; the insistent device can then be attached to that port.

Loop Switches and Loop Initialization

The FC-AL protocol requires loop reinitialization upon the entry, removal, or reinitialization of a device. This loop reinitialization can disrupt existing communication between two other devices. Some switch vendors provide a capability to selectively screen and forward loop initialization primitives (LIPs). The idea is to minimize disruption, minimize loop reinitialization time, and allow existing communications to continue uninterrupted if possible. At the same time, one must ensure that no two devices ever end up having identical addresses.

If all devices participated in every loop reinitialization, duplicate addresses would not occur, because the devices would defend their addresses. However, if some devices are not participating in a loop reinitialization, care must be taken that the addresses already assigned to those devices are not also assigned to devices that are participating. Added logic in the loop switch ensures the uniqueness of the addresses. The idea is that when a storage device is added, the LIP should be sent to a server, because servers communicate with storage devices, but the LIP need not be forwarded to a storage device if that storage device never directly communicates with another storage device.

Some storage devices do have the capability to communicate directly with other storage devices, and this is particularly useful for backup operations. See Chapter 5, which describes backup operations in more detail.
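
The screening policy just described can be reduced to a simple rule: forward a LIP triggered by a new device only to ports whose devices might need to communicate with it. The sketch below applies that rule, ignoring for simplicity the direct storage-to-storage case just mentioned; the function and port map are hypothetical.

```python
def ports_to_receive_lip(new_device_type: str, ports: dict[int, str]) -> list[int]:
    """Decide which ports should see a LIP when a device is added.

    ports maps port number -> attached device type ("server" or "storage").
    Rule of thumb from the text: a LIP for new storage goes to servers,
    since storage devices do not normally talk to one another directly.
    """
    if new_device_type == "storage":
        return [p for p, kind in ports.items() if kind == "server"]
    # A new server may need to talk to any storage device.
    return [p for p, kind in ports.items() if kind == "storage"]

ports = {1: "server", 2: "storage", 3: "storage", 4: "server"}
print(ports_to_receive_lip("storage", ports))  # [1, 4]: only servers are disrupted
```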

Loop Switches and Fabric

If all devices on the loop are fabric aware, things are relatively straightforward, and the loop switch simply needs to pass the fabric-related frames through; for example, the Fabric Login frame. When the devices on a loop are not fabric aware and they need to communicate with devices that are connected to a fabric switch, the loop switch must do a considerable amount of work.

Some vendor loop switches cannot handle cascading. Some loop switches also need firmware upgrades before they can be connected to fabric switches. Some switches must be upgraded to full fabric capability before they can be connected to a fabric SAN.

Brocade, McDATA, Gadzoox Networks, Vixel, and QLogic are examples of companies that produce FC-AL switches.

4.7.4.3 Fibre Channel Fabric Switches

Fibre Channel fabric switches (FC-SWs) allow multiple simultaneous any-to-any communications at very high speeds. Currently installed switches provide 1-Gbps transmission rates, and 2-Gbps switches are rapidly appearing. In general, fabric switches cost more per port than hubs and FC-AL switches, but they also provide a lot more functionality.

Fabric switches are also much more active than hubs and FC-AL switches. For example, they provide the fabric services described earlier, they provide flow control via flow control primitives, and, most importantly, some switches can emulate FC-AL behavior for backward compatibility.

Some fabric switches implement a feature called cut-through routing. Upon receipt of a frame header, the switch rapidly looks up the destination address in the header and routes the frame to the destination port while the rest of the frame is still being received. The advantage is that the frame is delivered with lower latency, and the switch need not store the entire frame in a memory buffer before forwarding it. The disadvantage is that all frames are forwarded rapidly, including corrupted frames.
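
The trade-off can be seen in a few lines: a cut-through switch commits to an output port as soon as the header arrives, so corrupted frames get forwarded too, whereas a store-and-forward switch can verify the checksum first at the cost of buffering the whole frame. An illustrative sketch (using zlib's CRC-32 as a stand-in for the Fibre Channel frame CRC):

```python
import zlib

def route(header: bytes) -> int:
    # Look up the destination port from the frame header (illustrative).
    return header[0] % 8

def cut_through(header: bytes, payload_stream, crc: int) -> str:
    port = route(header)              # routing decision from the header alone...
    body = b"".join(payload_stream)   # ...while the rest is still arriving
    ok = zlib.crc32(body) == crc
    return f"port {port}: frame already forwarded, CRC {'ok' if ok else 'BAD'}"

def store_and_forward(header: bytes, payload_stream, crc: int) -> str:
    body = b"".join(payload_stream)   # buffer the entire frame first
    if zlib.crc32(body) != crc:
        return "frame dropped: CRC error caught before forwarding"
    return f"port {route(header)}: frame forwarded after verification"

frame = b"hello"
print(cut_through(b"\x07", iter([frame]), zlib.crc32(frame)))
print(store_and_forward(b"\x07", iter([frame]), zlib.crc32(b"corrupted")))
```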

Fabric switches play an important role in Fibre Channel SAN security, as described in Chapter 7.

4.7.4.4 Comparing the Three Interconnect Devices

Table 4.5 summarizes the functionality and highlights the differences among the three interconnect devices.

Table 4.5. Fibre Channel Interconnect Devices

Functionality

  • Hub: Single 100MB-per-second data transfer after negotiating permission to transfer.

  • FC-AL switch: Multiple 100MB-per-second data transfers after negotiating permission to transfer.

  • FC-SW switch: Multiple switched 1-Gbps to 2-Gbps data transfers, with no negotiation required; can emulate FC-AL when required.

Performance

  • Hub: Decreases as nodes increase.

  • FC-AL switch: Remains the same as nodes increase.

  • FC-SW switch: Remains the same as nodes increase.

Data visibility

  • Hub: All nodes see all data, whether or not the data is intended for them.

  • FC-AL switch: Only receiving and transmitting nodes have data visibility.

  • FC-SW switch: Only receiving and transmitting nodes have data visibility.

Error recovery

  • Hub: Complex error recovery to reinitialize the loop affects all nodes, including healthy nodes.

  • FC-AL switch: Error recovery affects only the faulty node.

  • FC-SW switch: Error recovery affects only the faulty node.

Reconfiguration

  • Hub: When a node is added or removed, all nodes participate in loop reinitialization.

  • FC-AL switch: Only the new node and the switch participate in reconfiguration.

  • FC-SW switch: Only the new node and the switch participate in reconfiguration.

Data buffer

  • Hub: No data buffer in the hub.

  • FC-AL switch: No data buffer in the switch.

  • FC-SW switch: Extensive data buffers per port and per switch; these allow transmission without checking whether the receiving node is ready.

Addressing

  • Hub: 8-bit addressing, 127 nodes, with one address reserved for connection to a switch.

  • FC-AL switch: 24-bit addressing, up to 16 million nodes; supports subaddressing; that is, using fewer bits, much as the last five digits of a phone number suffice for intracompany calls.

  • FC-SW switch: 24-bit addressing, up to 16 million nodes.

Implementation complexity and cost

  • Hub: Low.

  • FC-AL switch: Medium.

  • FC-SW switch: High.

Manageability

  • Hub: Usually none; sometimes available as an add-on module for extra cost.

  • FC-AL switch: Good.

  • FC-SW switch: Excellent.

Advanced features

  • Hub: None.

  • FC-AL switch: Zoning.

  • FC-SW switch: Zoning and advanced security features; multipath failover if the topology allows; trunking of links on some switches.

4.7.4.5 Bridges and Routers

For the purposes of this chapter in particular and this book in general, the terms bridges and routers do not refer to the traditional Ethernet bridges or IP routers. The bridges and routers here deal with Fibre Channel and not layer 2 or layer 3 network protocols.

Bridges are devices that provide connectivity between Fibre Channel and legacy protocols such as SCSI. Fibre Channel-to-SCSI bridges help preserve existing investments in SCSI storage. Such a bridge provides both SCSI and Fibre Channel interfaces and translates between the two. Thus a new server equipped with a Fibre Channel HBA is able to access existing SCSI storage via such a bridge.

Bridges provide an interface between a parallel SCSI bus and a Fibre Channel interface. Routers do the same, but with multiple SCSI buses and one or more Fibre Channel interfaces. Storage routers, or intelligent bridges, routinely provide additional features, such as LUN masking and mapping (sometimes called access controls) and support for SCSI Extended Copy commands. As data movers, storage routers implement the Extended Copy commands for use by storage libraries, moving data from identified targets to the attached libraries. This is also referred to as server-free backup.
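
LUN masking and mapping boil down to a per-initiator access table that the router consults on every command: each initiator sees only the LUNs it is allowed to see, possibly renumbered. A minimal sketch, with made-up WWNs and LUN numbers:

```python
# Per-initiator LUN map: initiator port WWN -> {host-visible LUN: backend LUN}.
lun_map = {
    "50:06:04:87:23:63:A1:B2": {0: 12, 1: 13},  # server A sees LUNs 0-1
    "50:06:04:87:23:63:A1:C4": {0: 14},         # server B sees LUN 0 only
}

def resolve_lun(initiator_wwn: str, lun: int):
    """Return the backend LUN for this initiator, or None if masked."""
    return lun_map.get(initiator_wwn, {}).get(lun)

print(resolve_lun("50:06:04:87:23:63:A1:B2", 1))  # 13 (mapped)
print(resolve_lun("50:06:04:87:23:63:A1:B2", 5))  # None (masked)
```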

Crossroads Systems, Chaparral Network Storage, Advanced Digital Information Corporation (ADIC, via acquisition of Pathlight), and MTI are some examples of router and bridge device vendors.


   