4.3 SAN Topologies

As we develop the SAN topologies, we won't show the workstations or the LAN in the illustrations.

4.3.1 Working Up to a SAN

Let's begin with a fundamental Fibre Channel connection.

Figure 4-2. A Simple Point-to-point Connection

This is a simple point-to-point connection, with one server and one disk array. A point-to-point connection isn't much to write home about; it is not quite a SAN, but it is a useful server/storage connection.

Our connection is fast, and more convenient to cable than a SCSI connection, but it isn't very reliable. The single points of failure are the server, the FC HBA, the cable, and the disk array controller. So, we'll add some additional connectivity.

Figure 4-3. A Point-to-point Connection with Two Paths

Here's the connection with a second FC HBA in the server and a cable path to the disk array's second controller.

If an FC HBA, a cable, or a disk array controller fails, there's still a path between the server and the disk array.
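
To make the reliability argument concrete, here is a minimal Python sketch, not from the book, that treats the single-path chain of Figure 4-2 as components in series and the redundant HBA/cable/controller path of Figure 4-3 as a parallel pair. The per-component availability figures are invented placeholders.

```python
# A minimal sketch (not from the book): the single-path chain of Figure 4-2
# fails if any one component fails, so availabilities multiply in series.
# The per-component availability figures below are invented placeholders.
availability = {
    "server": 0.999,
    "fc_hba": 0.9995,
    "cable": 0.9999,
    "array_controller": 0.9995,
}

single_path = 1.0
for component, a in availability.items():
    single_path *= a                      # series: multiply availabilities
print(f"single-path availability: {single_path:.4f}")

# Figure 4-3 turns the HBA/cable/controller chain into a parallel pair;
# only the server remains a serial (single-point-of-failure) element.
chain = availability["fc_hba"] * availability["cable"] * availability["array_controller"]
dual_path = availability["server"] * (1 - (1 - chain) ** 2)
print(f"dual-path availability:   {dual_path:.6f}")
```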

The single point of failure is the server. Since a server failure is possible, let's add another server.

Figure 4-4. Point-to-point Connections from Two Servers

Here, each server connects to a disk array controller.

This isn't going to work for us. In fact, it has made matters worse. The single points of failure are:

  • Server 1: server, FC HBA, cable, disk array controller

  • Server 2: server, FC HBA, cable, disk array controller

If any part of the Server 1 chain fails, the whole chain goes down. The same is true of Server 2. The best case is that if one chain fails, the other chain should continue to operate.

Another limitation is that this disk array has only two Fibre Channel ports, one for each controller. Other disk arrays, with more connections, could cross-connect to both servers. Let's improve the arrangement.

Figure 4-5. Clustered Servers

Here's a server cluster. If one server goes down, the other one will continue to deliver the applications.

The single points of failure are:

  • Server 1: FC HBA, cable, disk array controller

  • Server 2: FC HBA, cable, disk array controller

Now, let's add the rest of the equipment.

Figure 4-6. Fibre Channel Arbitrated Loop

The rest of the equipment is simply a hub. When we add a hub, we no longer have point-to-point connections; we have a Fibre Channel Arbitrated Loop (FC-AL). At this point, it's not a complex loop at all, but this is just a starting point.

Each server connects to the hub on two paths from two HBAs. Each disk array controller connects to the hub. Now there are multiple continuous paths from each server to each disk array controller.

The single point of failure is the hub, so let's add another hub.

Figure 4-7. FC-AL with Two Hubs

When we add a second hub, we have a configuration that is now a fully expandable SAN.

The clustered servers connect to each hub on two paths from separate HBAs. Each disk array controller connects to a hub. Now there are no single points of failure.
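
As a sanity check on the no-single-point-of-failure claim, here is a small sketch (the component names are hypothetical) that models the dual-hub FC-AL of Figure 4-7 as a graph and confirms that removing any one HBA, hub, or array controller still leaves a path from a server to the disk array.

```python
# A hedged sketch (component names are hypothetical): model the dual-hub
# FC-AL of Figure 4-7 as a graph and confirm that no single HBA, hub, or
# array controller failure cuts off server1 from the disk array.
from collections import defaultdict

edges = [
    ("server1", "s1_hba1"), ("server1", "s1_hba2"),
    ("server2", "s2_hba1"), ("server2", "s2_hba2"),
    ("s1_hba1", "hub_a"), ("s1_hba2", "hub_b"),
    ("s2_hba1", "hub_a"), ("s2_hba2", "hub_b"),
    ("hub_a", "ctrl_1"), ("hub_b", "ctrl_2"),
    ("ctrl_1", "array"), ("ctrl_2", "array"),
]

def reachable(src, dst, failed):
    """Simple graph search that ignores the failed component."""
    graph = defaultdict(set)
    for a, b in edges:
        if failed not in (a, b):
            graph[a].add(b)
            graph[b].add(a)
    seen, queue = {src}, [src]
    while queue:
        node = queue.pop()
        if node == dst:
            return True
        unseen = graph[node] - seen
        seen |= unseen
        queue.extend(unseen)
    return False

candidates = {n for e in edges for n in e} - {"server1", "server2", "array"}
spofs = sorted(c for c in candidates if not reachable("server1", "array", c))
print("single points of failure:", spofs or "none")
```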

Because of the addition of the second hub, we are now free to experiment with scalability and distance.

4.3.1.1 Scalability

Figure 4-8. FC-AL with Two Hubs and Ten Devices

The hubs used in these illustrations have ten ports each. How will we fill them? We should not consider one server and nine disk arrays, because that would make the server the single point of failure.

This is not true of an arrangement with nine servers and one disk array, since a high-availability disk array has several redundant components. This is reflected later in this section, when we use a high-end disk array which (to quote the user documentation) is "not expected to fail in any way."

In the above illustration, one port on each hub is used by each of the four servers. That leaves room on each hub for six storage devices.

The six storage devices above could each represent over 1 TB of stored data, so this is not a minor SAN.

Multiple paths are in place and there is no single point of failure.
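
A quick sketch of that port budget (the counts follow the text; the 1 TB-per-array figure is the text's rough estimate):

```python
# Illustrative port budget for the dual-hub FC-AL of Figure 4-8.
# The counts follow the text; 1 TB per array is its rough estimate.
HUB_PORTS = 10
HUBS = 2
SERVERS = 4                      # each server uses one port on each hub

storage_ports = HUBS * (HUB_PORTS - SERVERS)
storage_devices = storage_ports // 2        # each array takes one port per hub
pool_tb = storage_devices * 1.0             # "over 1 TB" per array (assumed)

print(f"storage devices supported: {storage_devices}")   # -> 6
print(f"approximate pool size: at least {pool_tb:.0f} TB")
```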

4.3.1.2 Distance

You can use Fibre Channel to locate components much farther apart than with SCSI, which typically permits maximum cable lengths of 25 meters.

Figure 4-9. FC-AL at Maximum Distance

This is merely Figure 4-7 stretched across the building. In theory, we can locate the hub about 500 meters from the server, and locate the disk array another 500 meters from the hub. In discussions of 10 km distances over 9 micron cable, this extra kilometer is often forgotten.

In reality, the hubs would be located very near the servers or the storage devices. So a better way to achieve distance using hubs is to cascade the hubs.

With cascaded longwave hubs, the fiber cable distance limit is 10 km. This makes virtually any on-campus SAN topology possible. You are limited only by the complexities of laying the fiber cable between buildings.

There are risks in using hubs over longer-distance cable runs, including degraded access time and a loss of received optical power. In Figure 4-10, the 10 km link adds a propagation delay of roughly 50 microseconds for data traveling in each direction, or about 100 microseconds per round trip. This is the equivalent of a loss of 1 MBps. In terms of power loss, the received optical power at the target must be greater (more positive) than -17 dBm, and you can test for this using an optical power meter.

Figure 4-10. FC-AL with Cascaded Hubs
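
A back-of-the-envelope check of those distance numbers follows; this is a hedged sketch in which the refractive index, launch power, and per-kilometer loss are typical assumed values, not figures from the text.

```python
# Back-of-the-envelope distance arithmetic for Figure 4-10.  The refractive
# index, launch power, and loss figures are typical assumed values.
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_INDEX = 1.5                        # light travels about c/1.5 in glass

def propagation_delay_us(distance_km):
    return distance_km / (SPEED_OF_LIGHT_KM_S / FIBER_INDEX) * 1e6

one_way = propagation_delay_us(10)       # 10 km longwave link
print(f"one-way delay: {one_way:.0f} us, round trip: {2 * one_way:.0f} us")

# Simple link power budget against a -17 dBm receiver sensitivity.
launch_power_dbm = -3.0                  # assumed longwave laser output
loss_db = 10 * 0.4 + 2 * 0.5             # 0.4 dB/km fiber plus two connectors
received_dbm = launch_power_dbm - loss_db
status = "OK" if received_dbm > -17 else "below sensitivity"
print(f"received power: {received_dbm:.1f} dBm ({status})")
```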

What is illustrated is theoretically possible, but we would not recommend hubs as the first choice for solving distance problems. Since FC-AL is a loop topology, every device on the loop would feel the effects of the degradation caused by even one distant device. There can also be significant performance losses due to arbitration time and the lack of buffering.

When a configuration calls for distance, the fabric switch is the preferred choice.

In Figure 4-11, not all connections are illustrated.

Figure 4-11. FC-AL, Cascaded Hubs and Full Buildout

Here, cascaded 10-port hubs connect to one another. The cascading requires two ports on each of the A hubs to reach the B hubs, leaving eight ports on each hub. Those ports can connect to eight devices over dual paths.

The cascading also takes two ports on each of the B hubs, leaving eight ports on each hub. Those ports can connect to eight devices over dual paths.

Those 16 device connections can be any combination of FC devices. Here we've shown four servers and 12 disk arrays, but it could be eight servers and eight arrays.

How can you connect 16 servers? Use four A hubs and two B hubs, and you'll be able to hook up two disk arrays to the resulting FC-AL.
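
The port arithmetic for the two-pair cascade can be summarized in a few lines of Python; this sketch simply restates the counts above, with hub sizes and cascade links taken from the text.

```python
# Port arithmetic for Figure 4-11: two "A" hubs cascaded to two "B" hubs,
# 10 ports each; every device takes one port on each hub of its pair.
HUB_PORTS = 10
CASCADE_PORTS_PER_HUB = 2      # one link to each hub in the other pair

def dual_pathed_devices(hubs_in_pair=2):
    free_ports = hubs_in_pair * (HUB_PORTS - CASCADE_PORTS_PER_HUB)
    return free_ports // 2     # dual-pathed: two ports per device

total = dual_pathed_devices() + dual_pathed_devices()   # A pair + B pair
print(f"dual-pathed devices supported: {total}")        # -> 16
```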

The advantages and disadvantages of longer distances between devices are still present in Figure 4-11. Devices linked over a distance impose a time penalty on the entire loop. This occurs in two ways: first, I/O between a server and a storage device takes slightly longer because of the distance; and second, when devices arbitrate for use of the loop, the ARB primitives must circulate through the entire set of loop connections.

Let's use a Fibre Channel switch instead.

In Figure 4-12, not all connections are illustrated.

Figure 4-12. SAN with Fabric Switches

Each switch here is a 16-port fabric switch, and the pair establishes dual-pathed, switched connections between any two of the attached devices.

A good switch has multiple fans in a module, dual power supplies, and individually replaceable GBICs on the ports. However, it's still valuable to run separate paths through two switches. So the illustration above shows two paths from each server to the switches and two paths from the switches to each disk array.

A switch currently costs about four times as much as a hub, so why would you want to consider it? The following table, drawn from HP's switch training for customer engineers, compares hubs and switches.

Table 4-1. Hubs and Switches Compared

Hub: Moderate equipment cost; an excellent entry-level implementation.
Switch: Higher equipment cost, offset by lower cost of ownership with management services.

Hub: Small systems; limited connectivity (up to 126 devices) with a single loop.
Switch: Medium to large systems; increased connectivity with multiple switches.

Hub: Shared media; a single communication at a time.
Switch: Unshared media; multiple concurrent communications.

Hub: Limited performance in latency and throughput.
Switch: Highest performance in latency and throughput.

Hub: No isolation of a single device from the rest of the devices on the loop.
Switch: A point-to-point link with the switch isolates each device from the other devices.

Hub: A single point of failure with one power cord.
Switch: Dual power supplies with dual power cords avoid a single point of failure.

It may be true that "higher equipment cost [is] offset by lower cost of ownership with management services," but you'll have to determine your Total Cost of Ownership (TCO) by the formulas that prevail in your operation.
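
As a purely illustrative framing with invented numbers (the only figure drawn from the text is the rough four-to-one equipment-cost ratio), a TCO comparison might look like the sketch below; which device comes out ahead depends entirely on the admin-cost figures used in your own shop.

```python
# Purely illustrative TCO framing with invented numbers; the only figure
# taken from the text is the rough 4:1 equipment-cost ratio.
def tco(equipment_cost, yearly_admin_cost, years=3):
    return equipment_cost + yearly_admin_cost * years

hub_tco = tco(equipment_cost=5_000, yearly_admin_cost=4_000)
switch_tco = tco(equipment_cost=20_000, yearly_admin_cost=1_500)
print(f"hub, 3-year TCO:    ${hub_tco:,}")
print(f"switch, 3-year TCO: ${switch_tco:,}")
```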

Also take it with a grain of salt that smaller systems use hubs while larger systems use switches; there is a place for both devices in both small and large systems. And, as for dual power supplies and dual power cords avoiding a single point of failure, it's still important to maintain multiple paths between devices, whether you use hubs or switches.

Fabric switches are undoubtedly the wave of the future, but there will always be a place for the FC-AL hub. To illustrate this, let's attach some hubs to switches.

In Figure 4-13, not all connections are illustrated.

Figure 4-13. SAN with Switches and Hubs

The servers are connected to the switches over multiple paths. The switches connect to each hub over multiple paths, and the disk arrays connect to the hubs over multiple paths.

Considering that each disk array could be an FC60 or equivalent, this storage pool could contain 32 devices with a capacity of about 1.6 TB each. That would be a total of 51.2 TB.

If we were to pool the servers using hubs, it might look like this:

In Figure 4-14, not all connections are illustrated.

Figure 4-14. SAN with Server Pools and Storage Pools

Scalability like this is possible, and it can be done with hubs. However, since switches like the Brocade Silkworm 2800 are certified to cascade up to 32 switches over seven hops, it would be better to consider switches for such a complex arrangement of devices.

There are a lot of storage devices illustrated above. However, a single large disk array might be easier to manage. It would certainly take up a good deal less floor space.

In Figure 4-15, not all connections are illustrated.

Figure 4-15. Connecting to a High-end Disk Array

In the arrangement illustrated above, eight Fibre Channel ports on the disk array are used. When connected with switches, a large number of devices on the SAN gain access to the disk array at full bandwidth.

There are up to 256 disk drives in one XP256, and it has a relatively small footprint. Given HP's current maximum capacity point of 47 GB per drive, that gives the XP256 a capacity of about 12 TB. Put eight of these on the floor of your data center, and you are approaching 100 TB of mass storage.
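
The capacity arithmetic is easy to check with a quick sketch using only the figures quoted above:

```python
# Checking the XP256 capacity figures quoted above.
DRIVES = 256
DRIVE_CAPACITY_GB = 47                 # HP's stated maximum capacity point

array_tb = DRIVES * DRIVE_CAPACITY_GB / 1000
print(f"one XP256:    about {array_tb:.0f} TB")        # ~12 TB
print(f"eight XP256s: about {8 * array_tb:.0f} TB")    # ~96 TB
```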

This disk array supports 1024 LUNs, and management software allows you to make smaller ones (using CVS, or Custom Volume Size) or larger ones (using LUSE, or Logical Unit Size Expansion). The Cache LUN feature keeps LUN contents in RAM, which lets data move at much faster speeds.

4.3.2 Building from Legacy Equipment

But what, you may say, can I do with my legacy equipment? Use it on the SAN. Despite falling per-gigabyte costs for new native Fibre Channel storage hardware, legacy storage is still valuable and worth keeping. It's expensive, often acquired one piece at a time, and must stay in service as long as possible. Legacy equipment is commonly SCSI equipment.

Very few data centers transitioning to a SAN would buy all new Fibre Channel devices and make a wholesale conversion. As you build your SAN and it grows stable, you are likely to bring SCSI equipment onto the SAN a few devices at a time. Additionally, the show must go on, so you need to keep running your legacy equipment while the SAN is developing.

The SCSI equipment that you will transition might include:

  • High-availability (HA) SCSI disk arrays

  • SCSI JBODs

  • SCSI tape libraries

  • SCSI single-mech tape drives or autochangers

Here's where the FC4/2 bridge comes in. The bridge has two Fibre Channel ports and four SCSI ports.

You can connect the bridge to Fibre Channel hubs or switches. One caution here, however: SCSI tape drives do not do well when they share a hub with the other citizens of a Fibre Channel Arbitrated Loop. Here's a quote from the product literature for the FC4/2 bridge:

In a dynamic environment such as a Fibre Channel Arbitrated Loop, availability of devices (reserving a Fibre Channel tape device on a SAN, for example) can be a concern. A tape backup in progress can be interrupted by the dynamics of the FC-AL (e.g., a LIP occurs when a server is power cycled). Error handling in the backup application needs close attention to ensure that data loss does not occur and that the chance of a failed command is recoverable.

Even HP's top-of-the-line tape library, the SureStore E 20/700, is SAN-enabled only by means of an FC4/2 bridge.

Anyway, let's bring some SCSI devices onto the SAN.

Figure 4-16. Using a Bridge to Connect SCSI Devices

Here's a simple entry into the world of Fibre Channel. One server is connected to one bridge, and the bridge is connected to four SCSI disk enclosures.

The single points of failure are the server, the HBA, the bridge, and the JBOD: just about everything.

An improvement might be to connect a second server to the bridge's other Fibre Channel port. Alternatively, connecting a second fiber cable from the same server to the bridge would be a small improvement. But overall, this arrangement does not offer the kind of reliability we'd like to see.

Figure 4-17. Two Bridges and HA SCSI Disk Systems

Here, two servers are cross-connected to two bridges. To those bridges, we can connect modern SCSI disk systems, which have some high-availability components. In particular, the dual controllers in some SCSI disk arrays permit connection to both bridges.

Now, there is no single point of failure.

4.3.2.1 Device Mix

To bring other SCSI devices onto a SAN, we can build on the topology illustrated in Figure 4-17.

Unlike hubs, bridges should not present problems with mixing devices (Figure 4-18). We continue to employ two bridges so the high-availability SCSI disk system has some failure-proofing. In this example, the legacy JBOD and the DLT tape library have only one SCSI port each and cannot be double-pathed to the bridges.

Figure 4-18. A Mix of SCSI Devices

The arrangement leaves open ports on the bridges, and would permit four more connections.

4.3.2.2 Capacity

Let's make the bridges citizens of a fabric by attaching them to switches. By attaching a larger number of bridges, we'll be able to attach many SCSI devices.

Figure 4-19. Connecting a Lot of JBODs

Here's an arrangement of servers, non-cascaded switches, bridges, and JBODs. Every available port is filled.

The switches allow us to attach 16 devices, in this case eight servers and eight FC4/2 bridges. All the items are double-pathed, with the exception of the JBODs. With high-availability SCSI disk systems, we'd cut the number of devices in half, but they would be double-pathed to the bridges.

The same arrangement can be accomplished with hubs instead of switches, but the number of connections available on each hub would be only ten.
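
A rough sketch of the fan-out that the bridges buy us in Figure 4-19; the counts follow the text, and the per-bridge SCSI port count comes from the FC4/2 description earlier.

```python
# Rough fan-out arithmetic for Figure 4-19; counts follow the text.
SWITCH_PORTS = 16
SERVERS = 8
BRIDGES = SWITCH_PORTS - SERVERS       # FC4/2 bridges fill the remaining ports
SCSI_PORTS_PER_BRIDGE = 4              # from the FC4/2 description above

single_pathed_jbods = BRIDGES * SCSI_PORTS_PER_BRIDGE
dual_pathed_ha_systems = single_pathed_jbods // 2
print(f"single-pathed JBODs: {single_pathed_jbods}")                     # -> 32
print(f"or dual-pathed HA SCSI disk systems: {dual_pathed_ha_systems}")  # -> 16
```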

4.3.3 Adding Tape

How do we go about adding a SCSI tape library? There are two considerations: first, we'll need to use an FC4/2 bridge; second, we don't want to mix disk and tape I/O on a hub.

Let's begin by adding a tape library directly attached to a server.

Figure 4-20. Adding a SCSI-based Tape Library

In this arrangement, we revisit our four-server setup with its connection through bridges to four high-availability SCSI disk systems. Of course, the disk units could just as easily be Fibre Channel high-availability disk arrays.

To build a dedicated path for the tape, we add another Fibre Channel HBA to one of the servers. That HBA is connected to a bridge, which is connected to the tape library. We've designated this server as the backup server.

In this example, the single point of failure is the connection from the server's single HBA to the single bridge. We could add an HBA to another server and cross-connect the two servers through two hubs to two bridges. That would provide double-pathing.

4.3.4 Backup Over the SAN

The following illustrates a compact and typical large-scale configuration: high-end disk arrays and tape backup through a bridge.

Figure 4-21. Backup Over the SAN

Here, a switch is a compact method of improving connectivity. A bridge is still required for the tape library, but it can be connected to both switches.

If the tape library has high-availability features, the single point of failure is the bridge. With a Fibre Channel-ready tape library, we can eliminate the bridge.

In a SAN equipped with the correct hardware and backup software, LAN-free or serverless backup is possible.

4.3.5 The Next Step in the SAN

Figure 4-22. SAN Evolution

As the SAN evolves, we are beginning to see sophisticated switching capabilities. Combine that with the recent announcement of 2 Gbit/s Fibre Channel and the plans for 4 Gbit/s Fibre Channel, and there should be plenty of speed and connectivity options.

As a result, look for vast disk pools and tape pools, with full bandwidth connectivity. The storage pools need not be located near the servers or each other.

Look for specialty servers, such as database servers, data movement servers, backup servers, and SAN management servers.
