4.1 SAN Principles


By creating a SAN from its component parts, we can see the promises and pitfalls in the topologies. As the components come together, certain practical connectivity considerations will assert themselves.

The old saying is, "In theory, there is no difference between theory and practice, but in practice there is." This is true of the SAN. The theoretical limits of capacity and performance are constrained in practice. Some elements of connectivity do not yet work, or don't work as well as they should.

Let's fabricate some SANs. We begin with the principles and concepts we want to follow, and then describe some of the terms and building blocks that apply generally to all SAN building.

Then we'll create some topologies, building them up step by step. There are additional considerations besides the basic connections. Fault protection, distance, and tape backup topologies are among them.

We'll also look at legacy devices, and how to integrate them into the SAN. Legacy is a polite term for old equipment you already own and can't afford to replace. This is important, since most IT shops are not prepared to junk their older devices every time a new technology develops.

Once we've created some SANs, we'll look at SAN planning, maintenance, and management considerations. Finally, we'll wrap up with some SAN cost considerations.

4.1.1 Review of the Principles

We said in Chapter 1 that a SAN is an interconnected set of hardware devices. The SAN will exhibit most of these characteristics:

  • Storage behind the server

  • Storage devices connected to each other

  • Multiple servers connected to the storage pool

  • Heterogeneous servers connected to the storage pool

  • Fibre Channel connectivity (FC host bus adapters and fiber optic cable)

  • Hubs and switches

  • Multiple paths to devices

Not all characteristics need be present. For example, some SANs don't have heterogeneous servers, as the enterprise has chosen a one-vendor server solution.

FC host bus adapters and fiber optic cable connections imply a full Fibre Channel interconnection, but that's not always possible. There's a need to connect SCSI devices to SANs. At this time, the majority of SANs back up data to SCSI tape libraries.

Some SANs don't have multiple paths to devices. These SANs operate at an elevated level of risk of failure, and we generally discourage that sort of connectivity. A foundation concept of the SAN is no single point of failure, and most of our models in this chapter are built on that concept. Ideally, there should be at least two ways to get from one device to another.
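The "no single point of failure" principle can be checked mechanically: model the fabric as a graph of devices and links, then verify that removing any one device still leaves every remaining device reachable. The sketch below is a minimal illustration, not a tool from this book; the device names are hypothetical.

```python
from collections import defaultdict

def single_points_of_failure(links):
    """Return devices whose failure would disconnect the fabric.

    `links` is a list of (device_a, device_b) pairs describing the
    SAN's connections. Device names are illustrative only.
    """
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)

    def connected_without(excluded):
        # Breadth-first reachability over the graph minus one device.
        nodes = [n for n in graph if n != excluded]
        if not nodes:
            return True
        seen = {nodes[0]}
        stack = [nodes[0]]
        while stack:
            for nbr in graph[stack.pop()]:
                if nbr != excluded and nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        return len(seen) == len(nodes)

    return [n for n in graph if not connected_without(n)]

# A fabric with one switch: every path runs through it.
links = [("server1", "switch1"), ("server2", "switch1"),
         ("switch1", "array1")]
print(single_points_of_failure(links))  # ['switch1']
```

Adding a second switch with its own links to each server and to the array gives two independent paths everywhere, and the check returns an empty list.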

SAN-building can be a gradual process. You can build the core SAN loop and migrate devices to it. Because of this, many data centers will have a combination of directly attached storage, Network Attached Storage, and SAN storage. However, as the benefits of the SAN assert themselves, the older storage connectivity options will fall by the wayside.



Storage Area Networks: Designing and Implementing a Mass Storage System
ISBN: 0130279595
Year: 2000
Pages: 88
