4.2 SAN Terms and Building Blocks


You should be familiar with the following components and their general capabilities before building the SAN.

4.2.1 Building Blocks

The top line of Figure 4-1 shows the workstations. They are connected to the LAN, as expected. The client view of stored data will not change in a SAN. The only expectation is some relief from storage bottlenecks and improved data availability when a SAN is used.

Figure 4-1. SAN Building Blocks
graphics/04fig01.gif

The next row is the servers, connected to the LAN, intended to run applications and deliver retrieved data to the client workstations. What's shown is a heterogeneous mix of open-system (UNIX) servers.

Windows NT and IBM mainframes could just as easily appear in the illustration, although it would be reckless to say that interoperability concerns have been worked out. At the very least, heterogeneous servers can be connected to the SAN, and we are moving toward the day when each operating system better recognizes the file systems of the other operating systems.

Note that the servers have a single card in them, called a Fibre Channel Host Bus Adapter (HBA). In the balance of this chapter, you'll usually see two HBAs in each server, as multiple HBAs are an important part of fault-proofing a SAN.

The next row is the SAN, made up of various combinations of hubs, switches, and FC-SCSI bridges. The SAN is frequently represented as an ellipse (or a cloud, in the case of fabric-switched SANs). Whatever graphic is used, the idea is to broadly suggest an any-to-any connectivity of devices on the SAN. In fact, a Fibre Channel Arbitrated Loop (FC-AL) is really a loop. A switch is an any-to-any star topology. A bridge is a device that has both Fibre Channel and SCSI connections, allowing SCSI devices to be attached to the Fibre Channel network.
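The practical difference between a loop and a switched star can be sketched in a few lines of code. This is an illustrative model only (the function and topology names are our own, not part of any Fibre Channel API): on an arbitrated loop all ports share the medium and only one pair communicates at a time, while a non-blocking switch lets every port pair talk concurrently.

```python
# Illustrative sketch, not vendor code: contrast FC-AL loop arbitration
# with a fabric switch's any-to-any connectivity.

def concurrent_conversations(topology: str, n_ports: int) -> int:
    """Maximum simultaneous full-bandwidth conversations for n_ports devices."""
    if topology == "fc-al":
        # Arbitrated loop: all ports share the loop, so only one
        # port pair may communicate at any moment.
        return 1
    if topology == "switch":
        # Non-blocking star: every port pair can talk at once,
        # giving n_ports // 2 simultaneous conversations.
        return n_ports // 2
    raise ValueError("unknown topology")

print(concurrent_conversations("fc-al", 10))   # a 10-port hub: 1
print(concurrent_conversations("switch", 16))  # a 16-port switch: 8
```

This is why a loop's bandwidth is shared among all attached devices, while a switch's aggregate bandwidth grows with its port count.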

The SAN, of course, is the connection infrastructure between servers and storage devices. The connection medium is usually fiber optic cable, although connections over copper are permitted in the Fibre Channel standards, and some copper-based devices are available.

At the bottom are the storage devices. The typical primary storage device on a SAN is the high-availability disk array. However, the JBOD (just a bunch of disks) is still with us. The JBOD is commonly a collection of SCSI disks in an enclosure, but Fibre Channel JBODs are available, too. The JBOD disk farm or disk hotel is still useful for meeting some data center needs.

The typical secondary storage device is the tape library. Although single-mechanism tape drives and autochangers can be part of a SAN, the basic assumption is that the SAN is a topology for massive disk storage, and massive disk storage needs massive tape backup.

Magneto-optical (MO) devices certainly can appear on a SAN, and they fill the need for online delivery of archival or reference data (usually on read-only media). MO isn't discussed in this chapter. However, in Chapter 6, we consider rewritable MO as a possible backup medium.

4.2.2 Capabilities

SAN components form something like a high-tech Lego set, so in order to build a SAN, we should be familiar with what each piece can do.

As stated in the preface, we use HP equipment in our examples. This is not an infomercial for HP; we leave that to HP's product briefs and Web site. It is simply because we are most familiar with this equipment and have worked on the teams that developed the products. HP products are excellent; however, be aware that other manufacturers make products that will work fine on a SAN, and HP has announced its commitment to an open SAN technology.

Fiber optic cable capabilities.   That's not a spelling error: Fibre Channel runs over fiber optic cable. The fiber optic cable may be 9 micron single-mode, which can be used for distances of up to 10 km. The cable may be 50 micron multimode (distances to 500 meters) or 62.5 micron multimode (distances to 375 meters). Incidentally, Fibre Channel is also supported over copper connections.
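The cable-to-distance figures above can be captured in a small lookup table. This sketch simply encodes the distances given in the text (figures for 1 Gb/s-era Fibre Channel; other link speeds have different limits), with names of our own choosing:

```python
# Maximum cable run lengths as given in the text.
# Key: (fiber mode, core diameter in microns) -> max distance in meters.
MAX_DISTANCE_M = {
    ("single-mode", 9.0): 10_000,  # 9 micron single-mode, up to 10 km
    ("multimode", 50.0): 500,      # 50 micron multimode
    ("multimode", 62.5): 375,      # 62.5 micron multimode
}

def max_run_m(mode: str, core_microns: float) -> int:
    """Look up the maximum supported cable run for a fiber type."""
    return MAX_DISTANCE_M[(mode, core_microns)]

print(max_run_m("single-mode", 9.0))  # 10000
print(max_run_m("multimode", 62.5))   # 375
```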

Hub capabilities.   HP s two Fibre Channel hub offerings are good models for hubs in general. The shortwave hub (called the HP SureStore E Hub S10) has ten ports. It can be cascaded into one other longwave or shortwave hub. The shortwave hub is the right hub for installations covering short distances, using 50 or 62.5 micron cable.

The longwave hub (the HP SureStore E Hub L10) has 10 ports. It can be cascaded into one other longwave or shortwave hub. The longwave hub is the right one for distance requirements of up to 10 km, using 9 micron cable.

Bridge capabilities.   The HP FC4/2 Bridge has two Fibre Channel ports for connection to the SAN. It has four SCSI ports for connection to SCSI devices.

Switch capabilities.   Hewlett-Packard configurations use the Brocade 2800 switch. It is a 16-port fabric switch with the capability of cascading or meshing switches into very large fabrics.

Disk array capabilities.   A typical disk array is Fibre Channel-enabled, and provides substantial storage capacity. The high-availability features of a disk array are hot-swappable fans, power supplies, controllers, and disk drives. Battery backup for cache memory is another useful high-availability feature. These arrays are usually RAID-enabled. We base our SAN models on the HP SureStore E Disk Array FC60. It's a high-end product, offering about 1.6 TB of storage in a fully-populated 2.0 meter rack. The largest deployed HP disk array is the XP256, with a capacity of about 12 TB. The XP512 has just been announced.

JBOD capabilities.   JBODs are still with us, in both SCSI and Fibre Channel versions. The HP FC 10 is a Fibre Channel JBOD. The HP SureStore E Disk System SC10, a SCSI JBOD, is a good candidate for a SAN. It has ten 9 GB or 18 GB drives for a maximum capacity of 180 GB per enclosure. Also, each device takes up only 3.5 EIA rack units, so you can rackmount up to ten of them in a rack, yielding 1.8 TB. The high-availability features of a modern JBOD are hot-swappable fans, power supplies, bus control cards, and disk drives.
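The SC10 capacity figures above follow from simple arithmetic, worked out here (variable names are ours, numbers are from the text):

```python
# Capacity arithmetic for the SC10 JBOD described in the text:
# ten 18 GB drives per enclosure, up to ten enclosures per rack.
drives_per_enclosure = 10
drive_gb = 18

enclosure_gb = drives_per_enclosure * drive_gb   # 10 * 18 = 180 GB
enclosures_per_rack = 10
rack_gb = enclosure_gb * enclosures_per_rack     # 180 * 10 = 1800 GB, i.e. 1.8 TB

print(enclosure_gb, "GB per enclosure;", rack_gb / 1000, "TB per rack")
```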

4.2.3 Failure Proofing

There are a number of terms related to minimizing the impact of component failure in storage devices, including high availability, fault tolerance, self-healing, and redundancy.

Once RAID technology reduced the impact of data loss from a disk failure, it was only natural that disk array designers turned to the other parts of the box that could fail. In particular, controller, fan, and power supply failures could make the device unavailable, making the data unavailable. The answer was simple enough: put two or more of each item in the device.

The point of redundant components is, of course, to allow the device to keep working until the defective part can be replaced. The functionality of the box is self-healing, although unfortunately the broken part is not. Service personnel still have to replace it.

There are event monitoring and alerting capabilities in almost every component on a SAN, to alert you to component failures. One useful advancement in high-end devices is the phone home capability, found in the HP SureStore E Disk Array XP256. When the disk array senses a failure, it contacts HP service so immediate action can be taken.

Even HBAs report failures as events, and their onboard LEDs are also a good indicator that something is wrong. The more recent HBAs have customer-replaceable GBICs, so it's not difficult to keep the HBAs operating.

Hubs shut down failed ports, and as hub management software improves, hub problems become easier to isolate and fix. Keeping the data flowing without interruption depends on providing multiple paths that route around failed hubs.

Multiple paths to a device is an important concept in a SAN. And it's easier to ensure multiple pathing in a SAN than it is in an arrangement of SCSI mass storage devices.

Even defective fiber optic cable can be overcome as a problem. Inside a data center, a bad crimp or a too-tight bend in the fiber cable can ruin its ability to carry data. In a cross-campus cable link, the landscaper's backhoe can sever a cable. But if the cable runs are well worked out, these problems can be avoided, and because of dual pathing, data flows should not be interrupted.

The operative concept is no single point of failure. If any component fails, there is another component that instantly takes its place until the defective part is replaced.
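The failover behavior described above can be sketched as a toy model. This is not an actual multipath driver; the class, path strings, and method names are all hypothetical, chosen only to illustrate how I/O shifts to a surviving path when a component fails:

```python
# Illustrative sketch of multipath failover: a host tracks several
# paths to a storage device and uses the first healthy one, so no
# single cable, hub, or HBA failure interrupts access.

class MultipathDevice:
    def __init__(self, paths):
        # Map each path to a health flag; all paths start healthy.
        self.paths = {p: True for p in paths}

    def mark_failed(self, path):
        # An event monitor would call this when a failure is detected.
        self.paths[path] = False

    def active_path(self):
        # Return the first surviving path; fail only if none remain.
        for path, healthy in self.paths.items():
            if healthy:
                return path
        raise IOError("all paths failed")

# Two independent paths: separate HBAs and separate hubs.
dev = MultipathDevice(["hba0->hub_a->array", "hba1->hub_b->array"])
print(dev.active_path())               # hba0->hub_a->array
dev.mark_failed("hba0->hub_a->array")  # e.g., a severed cable
print(dev.active_path())               # hba1->hub_b->array
```

Only when every path has failed does the device become unreachable, which is exactly what "no single point of failure" is meant to prevent.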

Assumptions about power.   Take care of your power. Even the best planned SAN is susceptible to power outages. The uninterruptible power supply (UPS) or standby generator can be your best friend. It also helps to have a reliable power company.

If an earthquake or hurricane strikes, your data center will probably lose power. However, at that point, a power loss (and loss of data) will not be your biggest worry. The good news is that, with the most durable SANs, you will have made a mirror image copy of your data, made as little as ten minutes before the catastrophe, and it's resident on your company's disaster recovery SAN, located many kilometers away.



Storage Area Networks: Designing and Implementing a Mass Storage System
ISBN: 0130279595
Year: 2000
Pages: 88
