5.2 Host Bus Adapters

Fibre Channel host bus adapters (HBAs) provide the interface between the internal bus architecture of a server or workstation and the external storage network. HBAs are available for various bus types and various physical connections to the transport; most commonly employed are HBAs with Peripheral Component Interconnect (PCI) bus interfaces and shortwave fiber-optic transceivers. HBAs are supplied with software drivers that support various operating systems and upper-layer protocols, as well as private loop, public loop, and fabric topologies.

Although most HBAs have a single transceiver for connection to the SAN, some dual-ported and even quad-ported HBAs exist. As discussed in Chapter 4, these multiported devices appear as a single Fibre Channel node containing two or more N_Ports, each with a unique World-Wide Name (Port_Name) and 24-bit port address. Multiported HBAs save bus slots by aggregating N_Ports, but they also pose a potential single point of failure should the HBA hang. Most HBAs offer a single Fibre Channel port, requiring that you install additional HBAs if you desire multiple links to the same or different SAN segment.
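The node/port relationship described above can be sketched in Python. This is an illustrative model only, not vendor code; all World-Wide Names and port addresses below are made-up example values.

```python
# Sketch (illustrative, not vendor code): a multiported HBA presents one
# Fibre Channel node containing several N_Ports, each with its own
# World-Wide Port_Name (WWPN) and 24-bit port address.

class NPort:
    def __init__(self, wwpn):
        self.wwpn = wwpn          # 64-bit World-Wide Port_Name (stable)
        self.port_id = None       # 24-bit address assigned at fabric login

class HBANode:
    def __init__(self, wwnn, wwpns):
        self.wwnn = wwnn          # one World-Wide Node_Name for the HBA
        self.ports = [NPort(w) for w in wwpns]

    def fabric_login(self, assigned_ids):
        # Each N_Port receives its own 24-bit address from the fabric.
        for port, pid in zip(self.ports, assigned_ids):
            port.port_id = pid

# A dual-ported HBA: one node, two N_Ports (example values).
dual = HBANode(0x20000000C9000001,
               [0x21000000C9000001, 0x22000000C9000001])
dual.fabric_login([0x010100, 0x010200])
for p in dual.ports:
    print(f"WWPN {p.wwpn:016x} -> port ID {p.port_id:06x}")
```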

As shown in Figure 5-4, the HBA embodies all four Fibre Channel layers, FC-0 through FC-4. At the FC-0 layer, the HBA has transmit and receive functions to physically connect to the link. For fiber-optic links, this connector may be a standard GBIC, a fixed transceiver, or a small form factor transceiver. For copper interfaces, the connector may be a DB-9 with four active wires or the high-speed serial direct connect (HSSDC) form factor. Behind the link interface, clock and data recovery (CDR) circuitry, serializing/deserializing functions, and an elasticity buffer and retiming circuit enable the receipt and transmission of gigabit serial data.

Figure 5-4. Host bus adapter functional diagram

graphics/05fig04.gif

The FC-1 transmission protocol requirements are met with on-board 8b/10b encoding logic for outbound data and decoding and error-monitoring logic for inbound data. For loop-capable HBAs, the FC-1 functions must be followed by a loop port state machine (LPSM) circuit, typically included with other features in a single chip, such as Tachyon or Emulex Firefly. Above the LPSM, the HBA provides the signaling protocol for frame segmentation and reassembly, class of service, and credit algorithms, as well as link services for fabric and port login required by FC-2. At the FC-4 upper-layer protocol mapping level, most HBAs provide SCSI-3 software drivers for NT, UNIX, Solaris, or Macintosh operating systems.
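The running-disparity mechanism at the heart of 8b/10b encoding can be illustrated with a simplified sketch. Each byte maps to a 10-bit code group; unbalanced groups have two complementary forms, and the encoder picks the form that keeps the cumulative ones/zeros balance in check. The two-entry table below is hypothetical, not the real 8b/10b code tables.

```python
# Simplified illustration of the FC-1 running-disparity rule. The codes in
# TABLE are hypothetical stand-ins, not the actual 8b/10b code groups.

TABLE = {
    0x00: ("0110001001", "1001110110"),  # RD+ form, RD- form (unbalanced)
    0x01: ("0101010101", "0101010101"),  # balanced: same form either way
}

def encode(data, rd=-1):
    """Encode bytes to 10-bit groups, tracking running disparity (rd)."""
    out = []
    for byte in data:
        plus_form, minus_form = TABLE[byte]
        code = minus_form if rd < 0 else plus_form
        out.append(code)
        ones = code.count("1")
        rd += ones - (10 - ones)         # update running disparity
    return out, rd

codes, rd = encode([0x00, 0x00, 0x01])
print(codes, rd)   # unbalanced forms alternate; balance stays bounded
```

The point of the sketch is the feedback loop: because each unbalanced code group flips the disparity, long runs of identical bits are avoided and the serial stream stays DC-balanced for reliable clock recovery.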

The number of functions consolidated into one or more chips is vendor-dependent, but current HBAs are ASIC-based and collapse most functions into an integrated architecture. This arrangement has helped to bring the cost down and provides more functionality on less real estate compared with less integrated designs.

The rapid evolution of server technology to multiprocessor architectures and wider buses has also affected the development of HBAs. The 32-bit PCI bus found in some NT, UNIX, and Macintosh platforms can sustain 132MBps throughput. Two 32-bit HBAs driving 100MBps each would outperform the bus. Newer 64-bit PCI bus implementations, however, can drive 264MBps, permitting full utilization of two 64-bit PCI HBAs at 1Gbps or a single 2Gbps HBA at 200MBps. The 64-bit PCI HBAs are usually backward-compatible with 32-bit buses, and that provides flexibility in sourcing and maintaining these cards in mixed platform environments.
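The bus figures above follow from simple arithmetic: PCI throughput is bus width in bytes times clock rate, and conventional PCI runs at 33 MHz. A quick check:

```python
# Back-of-the-envelope check of the PCI bus math in the text.
# Conventional PCI is clocked at 33 MHz.

def pci_mbps(bus_bits, clock_mhz=33):
    """Peak PCI throughput in MBps: bus width (bytes) x clock rate."""
    return (bus_bits // 8) * clock_mhz

print(pci_mbps(32))   # 132 MBps: saturated by two 100MBps (1Gbps) HBAs
print(pci_mbps(64))   # 264 MBps: room for two 1Gbps HBAs or one 2Gbps HBA
```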

The design of HBAs typically includes Flash ROM for microcode. This is an important feature, because compatibility issues and the need for microcode fixes are facts of life for all network products. Having a means to upgrade microcode via a software utility is very useful and extends the life of the product. Device drivers for specific operating systems are also upgradable. In most cases, installing new microcode or device drivers requires taking an HBA off line, and that is additional incentive for redundant configurations in high-availability networks.

Although HBA vendors may sell products directly to customers, significant volumes are sold through VARs, systems integrators, and OEMs. These resellers, in turn, may integrate a model of HBA into a certified SAN configuration along with servers, switches, and storage targets. Although the HBA vendor may have supplied device drivers with the HBA to ensure interoperability in open systems environments, the reseller may modify those device drivers to suit specific application requirements. The result is that even though an HBA functions perfectly in an OEM configuration, interoperability issues surface when third-party products or other HBAs are introduced into the SAN. OEM microcode meddling may also extend to Fibre Channel switches, further exacerbating open systems interoperability issues. For the SAN architect, this state of affairs requires additional due diligence in product and vendor selection to avoid frustrating and expensive troubleshooting to resolve vendor-induced problems.

The SCSI-3 device driver supplied by the HBA vendor is responsible for mapping Fibre Channel storage resources to the SCSI bus/target/LUN triad required by the operating system (OS). These SCSI address assignments may be configured by OS utilities or by a graphical interface supplied by the manufacturer (or both). Because Fibre Channel addresses are self-configuring, the mapping between port addresses or AL_PAs (which may change) and the upper-layer SCSI device designations (which generally do not change) is maintained by the HBA and its device driver interface to the OS.
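The persistent-binding job described above can be sketched as follows. Because Fibre Channel port addresses may change across loop initialization or fabric reconfiguration, the driver keys its table on the stable World-Wide Port_Name and presents a fixed bus/target/LUN triad to the OS. All names and addresses here are illustrative, assuming a simple dictionary-based model.

```python
# Sketch of the mapping maintained by an HBA's SCSI-3 device driver:
# stable WWPN -> fixed (bus, target) binding, alongside the volatile
# 24-bit Fibre Channel address. Values are illustrative.

class BindingTable:
    def __init__(self):
        self.bindings = {}     # WWPN -> (bus, target), stable
        self.addresses = {}    # WWPN -> current 24-bit port address

    def bind(self, wwpn, bus, target):
        self.bindings[wwpn] = (bus, target)

    def update_address(self, wwpn, port_id):
        # Called after loop initialization or fabric login; the SCSI
        # triad seen by the OS does not change.
        self.addresses[wwpn] = port_id

    def scsi_triad(self, wwpn, lun=0):
        bus, target = self.bindings[wwpn]
        return (bus, target, lun)

t = BindingTable()
t.bind(0x21000020371538AB, bus=0, target=2)
t.update_address(0x21000020371538AB, 0x0100EF)   # initial address
t.update_address(0x21000020371538AB, 0x0100E8)   # changed after a LIP
print(t.scsi_triad(0x21000020371538AB))          # still (0, 2, 0)
```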

Device drivers for IP over Fibre Channel must perform a similar function via the Address Resolution Protocol, or ARP. When upper-layer applications send data addressed to an IP destination, the HBA's device driver must resolve IP addresses into Fibre Channel addresses. Most configurations assume that all IP-attached devices reside on the same IP subnet and that no IP router engine exists in the fabric to which it is connected. If a SAN design requires concurrent use of IP and SCSI-3, some vendors require a separate card for each protocol. Others provide integrated SCSI-3 and IP support and thereby allow you to use a single HBA.
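The resolution step can be sketched as a simple cache, much like an Ethernet ARP table: the driver looks up the destination IP address and, on a miss, would broadcast an ARP request and cache the reply. The sketch below only models the cache; the addresses are illustrative.

```python
# Sketch of IP-to-Fibre-Channel address resolution in an HBA driver.
# Addresses are illustrative example values.

class FcArpCache:
    def __init__(self):
        self.cache = {}   # IP address -> 24-bit FC port address

    def learn(self, ip, fc_addr):
        self.cache[ip] = fc_addr

    def resolve(self, ip):
        if ip in self.cache:
            return self.cache[ip]
        # In a real driver, a broadcast ARP request would be sent and
        # the reply cached; here we simply report a miss.
        return None

arp = FcArpCache()
arp.learn("10.1.1.20", 0x0200E4)
print(hex(arp.resolve("10.1.1.20")))   # cached entry
print(arp.resolve("10.1.1.99"))        # miss: would trigger an ARP request
```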

The trend in HBA development, as with other Fibre Channel products, is toward more sophisticated functionality at a reduced cost. Some host bus adapters offer add-on features such as HBA-based RAID, which offloads the task of striping data across multiple drives from the server's CPU. Support for the Virtual Interface (VI) protocol is also being developed. VI drivers will allow applications ready access to the Fibre Channel transport without passing through traditional, CPU-intensive protocol stacks. And advanced diagnostic features, including SCSI Enclosure Services (SES) emulation and standardized management application programming interfaces (APIs), allow HBAs to participate in umbrella SAN management platforms. You should consider such enhanced features when querying HBA vendors about their product roadmaps and cooperative development efforts with other Fibre Channel suppliers.
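The striping arithmetic that HBA-based RAID offloads from the host CPU is simple to illustrate. For plain RAID 0, each logical block maps round-robin to a drive and an offset on that drive:

```python
# Sketch of RAID 0 striping arithmetic (the work an HBA-based RAID
# function performs instead of the server's CPU).

def stripe(logical_block, num_drives):
    """Map a logical block to (drive, offset) for simple RAID 0."""
    drive = logical_block % num_drives
    offset = logical_block // num_drives
    return drive, offset

# Eight consecutive blocks round-robin across four drives:
print([stripe(b, 4) for b in range(8)])
```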

The Storage Networking Industry Association has promoted interoperable HBA management through development of the Common HBA API. This API is standardized through the NCITS/ANSI T11 committee and defines status reporting. This enables management frameworks to solicit common information in a multivendor HBA environment, including traffic statistics and configuration data useful for monitoring the operation of the SAN. Additional information on the Common HBA API is available on the SNIA Web site at www.snia.org.
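The Common HBA API itself is a C library, but the kind of query flow a management framework performs against it can be mimicked in a short Python sketch. All class, attribute, and statistic names below are hypothetical stand-ins, not the standardized C function names.

```python
# Mock of a management framework collecting common attributes and traffic
# statistics from multivendor HBAs. Names and values are hypothetical.

class MockHba:
    def __init__(self, name, vendor, wwnn):
        self.name, self.vendor, self.wwnn = name, vendor, wwnn
        self.stats = {"tx_frames": 0, "rx_frames": 0, "link_failures": 0}

def inventory(adapters):
    # Solicit the same information from every adapter regardless of vendor.
    return [{"name": a.name, "vendor": a.vendor,
             "wwnn": f"{a.wwnn:016x}", **a.stats} for a in adapters]

san = [MockHba("hba0", "VendorA", 0x20000000C9112233),
       MockHba("hba1", "VendorB", 0x2000005013445566)]
for row in inventory(san):
    print(row)
```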



Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs (2nd Edition)
Tom Clark. ISBN 0321136500. Year: 2003. Pages: 171.
