7.3 iSCSI Adapter Cards

There has been an ongoing debate in the vendor community over what to call iSCSI adapter cards. Vendors from the IP networking community have marketed them as iSCSI NICs, because the cards may simply be enhanced Gigabit Ethernet NICs with optimized iSCSI processing. In contrast, vendors from the Fibre Channel SAN community prefer to call them iSCSI HBAs, or at least iSCSI adapters, to emphasize that such cards have significantly more value than ordinary NICs. The real issue, as always for vendors, is money. SAN hardware has always demanded premium pricing, with Fibre Channel HBAs selling in the $1,200 to $2,000 range. Commonplace IP network equipment, and especially network interface cards, has been driven to commodity pricing. A Gigabit Ethernet NIC, for example, typically sells for less than $500 with optical interface, and less than $100 with copper RJ-45 interface.

Whatever they are called, iSCSI adapters are more expensive than Gigabit Ethernet NICs and less expensive than Fibre Channel HBAs. They cost more than traditional NICs because of the additional logic required to offload TCP and iSCSI processing from the host. TCP offload engines (TOEs) are a prerequisite for enterprise storage applications: burdening the host system with intensive network packet processing is unacceptable, and Fibre Channel HBAs have set the expectation that host CPU utilization should be less than 10 percent. Fortunately for iSCSI card manufacturers, the cost of TOE technology is being driven lower by the volume of TOE chips required for the much broader IP networking market.

As shown in Figure 7-2, a fully implemented iSCSI adapter card performs the same basic functions as an ordinary Gigabit Ethernet NIC and includes additional logic for TOE and iSCSI processing. At the interface to the network, the card looks very similar to a Fibre Channel HBA, reflecting Gigabit Ethernet's roots in Fibre Channel signaling. Clock and data recovery, serializing/deserializing (serdes), and 8b/10b encoding are the same. The LAN controller logic performs Ethernet-specific functions, and the TOE and iSCSI layers represent two new additions to the card architecture.

Figure 7-2. Functional diagram of an iSCSI adapter card

[Image: graphics/07fig02.gif]

Although it is possible to run an iSCSI device driver over a standard Gigabit Ethernet NIC, the penalty is lost CPU cycles. A host system may expend 50 percent to 90 percent of its CPU cycles simply processing the TCP layer in each packet. The host must also serve the iSCSI device driver, placing a further burden on host resources. If the specific storage application is tape backup and if it is performed during off hours, a software-only iSCSI implementation may be viable. From a cost standpoint, host connectivity would be free because iSCSI device drivers are available for download from a number of vendor Web sites. For enterprise storage applications, though, the iSCSI adapter should have at minimum an efficient TOE to reduce host network processing to less than 10 percent.
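The CPU penalty of software TCP processing can be put in rough perspective with a back-of-envelope calculation. A common rule of thumb from this era held that software TCP consumed on the order of one hertz of CPU per bit per second of sustained throughput; the figures below are illustrative assumptions, not measurements.

```python
# Back-of-envelope estimate of host CPU consumed by software TCP processing,
# using the rough "1 Hz of CPU per 1 bit/s of throughput" rule of thumb.
# All inputs are illustrative assumptions, not benchmark data.

def tcp_cpu_utilization(throughput_bps: float, cpu_hz: float) -> float:
    """Fraction of one CPU consumed by software TCP at a given throughput."""
    return throughput_bps / cpu_hz

# Gigabit Ethernet running near wire speed on a hypothetical 2 GHz host CPU:
util = tcp_cpu_utilization(1e9, 2e9)
print(f"Estimated CPU utilization: {util:.0%}")  # → 50%
```

Even this crude model shows why a TOE is mandatory for enterprise storage traffic: near wire speed, software TCP alone can consume half or more of the host CPU before any iSCSI or application work is done.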

As shown in Figure 7-3, iSCSI can be implemented in several ways. In the left-hand column, software iSCSI is run over a standard NIC, which places both TCP and iSCSI processing responsibility on the host CPU as described earlier. In the middle column, software iSCSI is run over a TOE-enabled Gigabit Ethernet NIC. With the on-board TOE, most network processing has been offloaded, but the host must still process iSCSI exchanges. In the right-hand column, both TCP and iSCSI processing have been fully offloaded. In this case, the card hands fully processed SCSI data to the operating system's SCSI interface. As might be expected, the TOE adds cost to the card, as does the additional iSCSI processing logic. Still, it should be possible to implement a fully offloaded iSCSI card for less cost than a comparable Fibre Channel HBA.
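To make the iSCSI processing burden concrete, the sketch below builds the fixed 48-byte Basic Header Segment (BHS) that prefixes every iSCSI PDU. Parsing this framing for every PDU is precisely the work that a fully offloaded adapter does in silicon and that software iSCSI does on the host CPU. The field layout follows the iSCSI specification (RFC 3720), but the opcode, flags, and tag values here are hypothetical.

```python
import struct

# Minimal sketch of a 48-byte iSCSI Basic Header Segment (BHS), the fixed
# header that precedes every iSCSI PDU (RFC 3720). Values are hypothetical.

def build_bhs(opcode: int, data_segment_length: int, task_tag: int) -> bytes:
    flags = 0x80                           # F (final) bit set, for illustration
    ahs_length = 0                         # no Additional Header Segments
    header = struct.pack(">BB2x", opcode, flags)   # bytes 0-3
    header += bytes([ahs_length])                  # byte 4: TotalAHSLength
    header += data_segment_length.to_bytes(3, "big")  # bytes 5-7: DataSegmentLength
    header += bytes(8)                             # bytes 8-15: LUN (zeroed here)
    header += struct.pack(">I", task_tag)          # bytes 16-19: Initiator Task Tag
    header += bytes(28)                            # bytes 20-47: opcode-specific
    return header

bhs = build_bhs(opcode=0x01, data_segment_length=512, task_tag=0xABCD)
assert len(bhs) == 48
```

In the middle column of Figure 7-3, the TOE delivers a reassembled TCP byte stream and this header parsing still falls to the host; in the right-hand column, the card consumes the BHS itself and presents only SCSI data upward.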

Figure 7-3. iSCSI adapter implementations

[Image: graphics/07fig03.gif]

iSCSI adapter cards provide flexibility for deploying servers within the network. No longer tethered to Fibre Channel switches, servers can be connected to standard departmental or core Gigabit Ethernet switches or to remote IP routers. On the storage target side, the interface can be iSCSI or an IP storage switch that performs iSCSI-to-Fibre Channel protocol conversion.

iSCSI adapters have begun the migration from Fibre Channel SANs to IP storage networking. With iSCSI on the servers and with the SAN interconnection provided by standard IP infrastructure, at least two-thirds of the SAN components can be IP-based. When target systems also support iSCSI, the entire SAN enters mainstream IP networking.



Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs (2nd Edition)
Tom Clark. ISBN 0321136500. 2003. 171 pages.
