Communicating externally with the switch's operating system requires software drivers that allow devices to operate as either N_Ports or NL_Ports. Each type of device requires a specific driver to enable this functionality. In a Storage Area Network, the two primary interfaces are the HBA's connection to the switch and the storage array's connection.
As discussed in Chapter 14, the HBA provides an FC interface from a computer device, typically a server, to a Fibre Channel switch. In addition to the hardware adapter that must be physically connected to the server's bus interface, the software contained in the HBA must translate the file I/O request and device command set into Fibre Channel commands. For this to happen, software drivers are installed on the system that execute as device handlers at the kernel level, requiring changes to the ROM configuration as well as additions that recognize the new interface within the HBA. In essence, the driver takes SCSI commands bound for particular file destinations and translates them into specific requests through the FC protocol as it logs in to the switch and executes the appropriate services.
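The encapsulation step can be sketched in code. The following is a simplified, illustrative model only: it builds a standard SCSI READ(10) command block and wraps it in a minimal FCP_CMND-style information unit, roughly as an HBA driver does before the command travels the fabric. The class and field names are hypothetical, not any vendor's driver API.

```python
import struct
from dataclasses import dataclass

def build_read10_cdb(lba: int, blocks: int) -> bytes:
    # SCSI READ(10): opcode 0x28, 4-byte logical block address,
    # 2-byte transfer length (in blocks); reserved fields zeroed.
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

@dataclass
class FcpCommandIU:
    """Minimal stand-in for an FCP_CMND information unit."""
    lun: int          # logical unit the CDB is addressed to
    cdb: bytes        # the encapsulated SCSI command block
    data_len: int     # expected data transfer length in bytes

def encapsulate(lun: int, lba: int, blocks: int,
                block_size: int = 512) -> FcpCommandIU:
    # The driver pairs the SCSI CDB with the target LUN and the
    # transfer size the fabric exchange should expect.
    return FcpCommandIU(lun=lun,
                        cdb=build_read10_cdb(lba, blocks),
                        data_len=blocks * block_size)

iu = encapsulate(lun=0, lba=2048, blocks=8)
print(len(iu.cdb))  # a READ(10) CDB is 10 bytes
```

The key point is that the SCSI command itself is carried unchanged; the FC layers merely wrap it for transport, which is what lets the operating system keep issuing ordinary block I/O.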
HBAs are critical for two reasons. First is their capability to shield the server from the specifics of Fibre Channel addressing, not to mention managing the actual physical and logical units they're authorized to use within the fabric. The HBA maintains the logical unit numbers (LUNs) for the devices it can access. It works in tandem with the operating system to determine the bounds of the devices it has access to, thereby providing the first level of device virtualization to the server.
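This first level of virtualization amounts to the HBA presenting the operating system with only the LUNs the host is authorized to use. A minimal sketch, with hypothetical names, of that filtering behavior:

```python
class HbaLunMap:
    """Level-1 virtualization sketch: the HBA driver bounds what
    the operating system can see of the fabric's devices."""

    def __init__(self, authorized_luns):
        self._luns = set(authorized_luns)

    def visible_luns(self, discovered_luns):
        # The OS only ever learns about the intersection of what the
        # fabric exposes and what this host is authorized to access.
        return sorted(set(discovered_luns) & self._luns)

hba = HbaLunMap(authorized_luns=[0, 1, 4])
print(hba.visible_luns([0, 1, 2, 3, 4, 5]))  # [0, 1, 4]
```

Even though the fabric holds six LUNs in this example, the server's view is bounded to three; the other devices simply do not exist as far as this host is concerned.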
The second critical function is providing the server with an effective transmit and receive data path into the switch, which supplies a way of leveraging the flexibility and bandwidth of the FC network it is connected to. Figure 15-3 shows both the logical and physical functions controlled by an HBA's device drivers.
Given an HBA's critical proximity to the I/O path, performance is a direct result of the server's bus interface bandwidth. Providing a high-end HBA to a low-end bus won't buy you much performance; similarly, hooking a legacy bus into an HBA is only asking for trouble. The additional features found in today's HBAs parallel much of the functionality mentioned earlier in our discussion of the switch's operating system. Some of these features are unique to the device drivers themselves, including software RAID, support for the virtual interface protocol (see Chapter 8), and service features such as enclosure management.
Fibre Channel storage arrays, discussed in the previous chapter, provide yet another embedded system to consider when it comes to compatibility, functionality, and servicing. Hardware vendors differentiate themselves on the basis of software, which means that all storage arrays, whether JBOD, RAID, tape, or optical, ship with some level of secret sauce that must be downloaded to unlock additional functionality. In the Fibre Channel arena, simply accessing the storage array through the FC network on a native basis becomes a challenge, yet it is a required step in configuring either the JBOD or the RAID controller with the proper software (we're really talking firmware here). As a result, another configuration utility must be run against the storage array as it is implemented. Having said all this, never fear: once you work through it the first time, the process can generally be proceduralized into the functions and servicing of the SAN. Now, here's the fly in the ointment: each vendor's implementation is different. A heterogeneous storage array installation, as you might have guessed, can become mired in inconsistencies, incompatibilities, and 25 different ways of doing the same thing.
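One way to proceduralize those vendor differences is to hide each vendor's configuration utility behind a single interface. The sketch below is purely hypothetical: the vendor names, steps, and class structure are invented for illustration, not drawn from any real utility.

```python
from abc import ABC, abstractmethod

class ArrayConfigurator(ABC):
    """Common interface over each vendor's configuration procedure."""

    @abstractmethod
    def configure(self, lun_count: int) -> list:
        """Return the ordered steps this vendor's utility performs."""

class VendorARaid(ArrayConfigurator):
    def configure(self, lun_count):
        # Hypothetical RAID-array procedure for "vendor A"
        return ["vendorA: flash controller firmware",
                f"vendorA: create {lun_count} LUNs"]

class VendorBJbod(ArrayConfigurator):
    def configure(self, lun_count):
        # Hypothetical JBOD procedure for "vendor B"
        return ["vendorB: discover disks",
                f"vendorB: map {lun_count} LUNs"]

def provision(arrays, lun_count=4):
    # One procedure for the SAN, many vendor implementations under it.
    return [step for a in arrays for step in a.configure(lun_count)]
```

The payoff is that the SAN's servicing procedures call `provision` the same way regardless of which arrays are installed; the 25 different ways of doing the same thing are contained inside the per-vendor classes.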
RAID configurations pose the biggest problem but also provide the biggest payback. Given that RAID software shields, or manages, the local devices it is connected to, it directs read/write operations within the disk subsystems through its parameters. This is an example of level 2 virtualization. Coupled with level 1, which has already occurred in the HBA, we're then contending with two levels of virtualization. Even after configuration, care should be taken with the level of logical unit numbering that occurs within the chosen RAID configuration.
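The two levels stack as two layers of indirection: the LUN the HBA presents to the host (level 1) resolves to a RAID logical volume (level 2), which in turn spans physical disks. A minimal sketch, with illustrative names only:

```python
# Level 2: the RAID controller's view — logical volume -> physical disks
raid_volumes = {
    "vol0": ["disk0", "disk1", "disk2"],  # e.g., a striped RAID set
    "vol1": ["disk3", "disk4"],           # e.g., a mirrored pair
}

# Level 1: the HBA/host view — host LUN -> RAID logical volume
host_luns = {0: "vol0", 1: "vol1"}

def physical_disks_for(lun: int) -> list:
    # Resolve both levels of indirection; the host itself never
    # sees past the LUN it was handed at level 1.
    return raid_volumes[host_luns[lun]]

print(physical_disks_for(0))  # ['disk0', 'disk1', 'disk2']
```

This is why the logical unit numbering chosen in the RAID configuration deserves care: renumbering at either level changes what the host's LUN ultimately resolves to, without the host being aware of it.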