5.4 Fibre Channel JBODs

A JBOD, or Just a Bunch of Disks, is an enclosure with multiple Fibre Channel disk drives inserted into a common backplane. The backplane provides the transmit-to-receive connections that bring the assembled drives into a common arbitrated loop segment; it also includes bypass electronics that let you insert or remove drives without disrupting the loop circuit. Because Fibre Channel drives provide primary and secondary NL_Ports for dual-loop attachment, the JBOD enclosure typically includes external interfaces for connecting two loops, as shown in Figure 5-7.

Figure 5-7. JBOD disk configuration with primary and secondary loop access


A JBOD brings two leads to an external Fibre Channel interface, typically DB-9 copper: the receive lead going to the first drive of a set, and the transmit lead coming from the last drive of the set. When the JBOD's interface is connected to a Fibre Channel hub or switch, the connection is not to a single Fibre Channel device but to multiple independent loop devices within the enclosure. An eight-drive JBOD, for example, appears as eight AL_PAs to the interconnection. If the connection is made to a switch, the switch port must be an FL_Port, because the downstream enclosure is actually a loop segment. If the connection is to an arbitrated loop hub, the population of the entire loop is increased by the number of drives in the JBOD. JBODs, unlike Fibre Channel RAIDs, thus have a direct impact on the topology to which they are attached.
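To make the arithmetic concrete, the following minimal Python sketch (not from the original text) tallies the AL_PAs each attachment consumes against the 126 NL_Port addresses available on a single arbitrated loop; the device mix shown is hypothetical.

    # Sketch of how JBOD attachment inflates loop population. The 126-device
    # ceiling is the FC-AL limit on NL_Ports per loop; the drive counts and
    # attachment list below are illustrative, not from the text.

    FC_AL_MAX_NL_PORTS = 126  # maximum NL_Ports on one arbitrated loop

    def loop_population(devices):
        """Sum the AL_PAs consumed by each attachment on the loop."""
        return sum(al_pas for _, al_pas in devices)

    # Each entry is (description, AL_PAs consumed). A RAID array presents
    # one AL_PA through its controller; a JBOD presents one per drive.
    attachments = [
        ("server HBA", 1),
        ("8-drive JBOD", 8),          # eight independent loop devices
        ("8-drive JBOD with SES", 9),  # plus one AL_PA for the SES controller
    ]

    total = loop_population(attachments)
    print(f"Loop population: {total} of {FC_AL_MAX_NL_PORTS} AL_PAs")
    assert total <= FC_AL_MAX_NL_PORTS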

Some JBOD-based disk arrays incorporate additional logic for enclosure management, usually a Fibre Channel controller that supports SCSI Enclosure Services (SES) queries. This adds an extra AL_PA to the configuration, which an SES management workstation can address to solicit power, temperature, and fan status. JBODs may also include options for configuring the backplane to support dual loops to a single set of drives or, as shown in Figure 5-8, for dividing the drives into separate smaller sets, each with a single loop attachment.

Figure 5-8. Dividing the JBOD backplane into separate loops

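As a rough illustration of the kind of status an SES management station solicits from the enclosure's SES controller, the following Python sketch polls a stubbed query. The query_ses_page() helper is hypothetical (real tools issue a SCSI RECEIVE DIAGNOSTIC RESULTS command to the SES AL_PA), and the field names are illustrative rather than an actual SES page layout.

    # Minimal sketch of polling SES enclosure status. query_ses_page() is a
    # hypothetical stand-in for a RECEIVE DIAGNOSTIC RESULTS query; the
    # returned fields model the categories SES reports, not an exact page.

    def query_ses_page(enclosure):
        """Hypothetical stand-in for an SES status query; returns canned data."""
        return {
            "power_supplies": ["ok", "ok"],
            "fans": ["ok", "warning"],  # e.g., a fan running below threshold
            "temperature_c": 38,
        }

    status = query_ses_page("/dev/ses0")  # device path is illustrative
    for fan, state in enumerate(status["fans"]):
        if state != "ok":
            print(f"Fan {fan}: {state} -- schedule service before failure")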

How the individual disks within a JBOD are used for data is determined by a host server. You use volume administration tools for the appropriate OS to assign drives as individual logical disks or to assign a group of drives or the entire JBOD as a logical disk. In the latter case, you can use software RAID to increase performance and provide redundancy. Use of software RAID, however, implies that the server will be the exclusive owner of the JBOD, because that server alone is responsible for managing striping of data across the drives. Even if another server sits on the SAN, it will have to request the JBOD's data via the external LAN. In some systems this problem has been addressed by custom application software, but it requires an additional server on the SAN to coordinate file information (metadata) management. Generally, software RAID on JBODs offers redundancy for dedicated server-to-storage relationships, but it does not lend itself to server clustering or serverless tape backup across the SAN.
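As an illustration of the striping bookkeeping that ties a software RAID volume to a single owning host, the following Python sketch maps a logical byte offset to a drive and a physical offset under RAID 0; the stripe size and drive count are assumed values, not figures from the text.

    # Sketch of the striping arithmetic a software RAID layer performs
    # (RAID 0 across the JBOD's drives). Only the host that computes this
    # mapping knows which drive holds which block -- hence exclusive ownership.

    STRIPE_SIZE = 64 * 1024  # 64 KB stripe unit (illustrative)
    NUM_DRIVES = 8           # drives in the JBOD (illustrative)

    def map_offset(logical_offset):
        """Map a logical byte offset to (drive index, offset on that drive)."""
        stripe = logical_offset // STRIPE_SIZE
        drive = stripe % NUM_DRIVES
        stripe_on_drive = stripe // NUM_DRIVES
        return drive, stripe_on_drive * STRIPE_SIZE + logical_offset % STRIPE_SIZE

    for off in (0, 64 * 1024, 512 * 1024):
        drive, phys = map_offset(off)
        print(f"logical {off:>8} -> drive {drive}, physical offset {phys}")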

Software RAID improves performance by avoiding the latency of reading from or writing to a single drive, but this gain must be weighed against the increased traffic load on the loop. In a normal disk write operation to a single drive, for example, the server must first set up the write transaction using a SCSI-3 write command to the target. On an arbitrated loop, this may require several loop accesses (tenancies of the loop) before the disk is ready to accept data.

The transfer of frames to the disk, in turn, is not normally accomplished with a single possession of the loop unless the file is fairly small. If the server is sending frames faster than the target can process them, the target will accept what it can and then close the loop. The loop must then be regained for each additional transfer of frames. A single write operation may therefore require multiple tenancies, or loop occupations, before all frames have been sent and a write response is sent by the target to the initiator.
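A back-of-the-envelope calculation shows why tenancies multiply. The Python sketch below estimates the tenancies needed for a single write, assuming a 2,048-byte frame payload and a target that can buffer 16 frames per tenancy; both figures are illustrative, not values from the text.

    # Rough estimate of loop tenancies for one write operation. The frame
    # payload (FC's maximum is 2,112 bytes) and the target's buffer depth
    # are assumed values for illustration.

    FRAME_PAYLOAD = 2048   # bytes of data per frame (assumed)
    TARGET_BUFFERS = 16    # frames the disk accepts per tenancy (assumed)

    def tenancies_for_write(nbytes):
        """Estimate loop tenancies to move nbytes to one target."""
        frames = -(-nbytes // FRAME_PAYLOAD)           # ceiling division
        data_tenancies = -(-frames // TARGET_BUFFERS)
        # plus at least one tenancy for command setup and one for the
        # target's write response back to the initiator
        return data_tenancies + 2

    print(tenancies_for_write(1024 * 1024))  # a 1 MB write -> 34 tenancies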

This problem is aggravated when you use software RAID with a JBOD, because the initiator must conduct a series of small transactions to multiple targets. At the SCSI-3 level, commands are queued to the targets, and the initiator must await a response from each one before more frames can be sent. Even if the initiator can leverage transfer mode to address several targets during a single tenancy, it is still limited by disk performance and the timeout values of arbitrated loop. All of this incurs more protocol overhead and traffic on the loop, which may be a concern if the loop has other active initiators. A loop with four servers, each with its own loop-attached JBOD, would suffer much higher protocol overhead than a comparable configuration with loop-attached RAIDs.

JBOD enclosures are typically marketed with eight to ten drive bays, some of which may be configured for failover. Some very large disk arrays are packaged in 19-inch rack form factors, with more than 20 disks per JBOD module and as many as four modules per rack enclosure. These high-end systems may include rack-mounted arbitrated loop hubs or switches for connecting JBOD modules to one another and to servers for single- or dual-loop configurations. Given the amount of customer data that is stored on such arrays, it is preferable to use managed hubs or fabric switches rather than unmanaged interconnects. Dual power supplies, redundant fans, and SES management options allow JBODs to be used in high-availability environments. Vendors may also provide an upgrade path to add a RAID controller card to the enclosure, something that extends the life of the original investment. You can begin with a partially populated JBOD enclosure and add disks as storage needs dictate. A RAID controller option can then be added to increase performance and offload software RAID tasks from the host.

First-generation JBODs provided a backplane with hardwired arbitrated loop connections that joined the transmit lead of one drive to the receive lead of the next. If an individual loop drive failed, an entire bank of loop drives would go down with it. Current backplane designs incorporate the "switch-on-a-chip" architecture developed by Vixel Corporation, which automatically bypasses failed drives and keeps the remaining drives operational. This logic can also monitor the loop transport, providing proactive notification of impending problems that might affect loop stability.
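The effect of the bypass circuitry can be shown with a short Python sketch: given the state of each slot, the backplane routes the loop around anything that is failed or empty. The slot states below are hypothetical.

    # Sketch of what the backplane's bypass logic accomplishes: failed or
    # empty slots are routed around so the loop circuit stays closed.

    slots = ["ok", "ok", "failed", "ok", "empty", "ok"]  # illustrative states

    def active_loop(slot_states):
        """Return the drive positions actually participating in the loop."""
        return [i for i, state in enumerate(slot_states) if state == "ok"]

    path = active_loop(slots)
    print("Loop order:", " -> ".join(str(i) for i in path))
    # Slots 2 and 4 are bypassed; drives 0, 1, 3, and 5 stay on an intact loop.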


