SAN Workload Characterization


When characterizing workloads for a SAN, it's helpful to consider the inherent value that Storage Area Networks bring to the data center. This cross-application infrastructure enhancement may be the overriding factor in justifying an initial SAN configuration that, on the surface, could be handled by a direct-connect architecture. This further demonstrates the importance of defining I/O workloads, as discussed in Chapter 17.

When describing SAN I/O workloads, it is also important to be aware of the possible integration of system-level capacity planning as well as network capacity planning, for the obvious reason that the SAN architecture represents a combination of computer system and network characteristics that must work together to sustain the workloads. This requires that workloads be evaluated with both I/O processing and network configuration metrics. Even though SANs are young by contrast to other technologies, standard configurations have emerged that can be applied to most common workloads. The major categories, excluding a single switched environment, are cascading, meshed, and core/edge configurations.

These categories provide starting points for data center-specific customization and specialization. SAN expansion and derivations will likely come from one of these three configurations. A brief overview follows.

  • Cascading SAN Configuration: This configuration provides a switch-to-switch connection that allows the number of servers and storage devices to scale quickly. Figure 18-1 shows a simple cascading configuration with three servers and multiple storage devices using three FC switches.

    Figure 18-1: Cascading SAN configuration

  • Meshed SAN Configuration: This configuration provides a performance-oriented system that allows for the quickest path from server to data. Figure 18-2 illustrates how an I/O's path is shortened, since it can reach the storage array without traversing multiple switches. This configuration also provides multiple connections to the storage array through alternate paths in the event of heavy traffic or switch disruption.

    Figure 18-2: Meshed SAN configuration

  • Core/Edge SAN Configuration: This configuration takes into account I/O optimization, redundancy, and recovery, as shown in Figure 18-3. By far the most performance-oriented, it is also the most complex to implement and configure.

    Figure 18-3: A core/edge SAN configuration

These are all important considerations as you begin to evaluate your SAN design and implementation. Some words of caution, however, before the details of configuration complexities overcome your planning and design:

  • Identify and describe the I/O workloads of the applications you expect the SAN to support.

  • Understand the strengths and weaknesses of each of the major networking configurations.

Reviewing Chapter 17 will help get you started on I/O workload analysis.

Assuming you have completed a general workload analysis and have a total I/O workload transfer rate (see Chapter 17 for guidelines on estimating I/O workloads), we can estimate the number of ports required to support the I/O workloads. The sidebar "Guidelines for Estimating the Number of Ports Required to Support I/O Workloads" describes a set of activities helpful in making that estimate.

start sidebar
Guidelines for Estimating the Number of Ports Required to Support I/O Workloads

The assumptions to consider in viewing these guidelines include the following:

  • The I/O workload transfer rate is available and accurate.

  • The base configuration consists of FC switch ports using switched fabric.

  • A base ratio of servers to data paths is known (for example, the number of HBAs for each server connected to the SAN configuration).

Guidelines for estimating SAN port requirements include:

  • Total I/O Workload Transfer Rate (I/OWTR) / Maximum Port Transfer Capacity (MPTC) = Number of Port Data Paths (PDPs) required

  • PDPs × Redundancy/Recovery Factor (RRF) = Number of ports for Redundancy and Recovery (RRPs)

  • PDPs + RRPs = Total Data Path Ports required

  • Total PDPs (with RRF) + Server Ports (SPs) (number of servers × number of HBAs per server) = Total Switch Ports required, where RRF = 30-40% and HBAs per server = 1-4

end sidebar
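The sidebar's arithmetic can be sketched in code. The function below is a hypothetical helper, not from the book; the example figures (an 800 MB/s workload over 200 MB/s ports, four servers with two HBAs each, and a 30% RRF) are illustrative assumptions chosen to show the calculation.

```python
import math

def estimate_switch_ports(workload_mbps, port_capacity_mbps,
                          num_servers, hbas_per_server=2, rrf=0.3):
    """Estimate total SAN switch ports per the sidebar's guidelines.

    rrf is the Redundancy/Recovery Factor (30-40% per the sidebar);
    hbas_per_server typically ranges from 1 to 4.
    """
    # PDPs: data-path ports needed to carry the raw I/O transfer rate
    pdps = math.ceil(workload_mbps / port_capacity_mbps)
    # RRPs: additional ports reserved for redundancy and recovery
    rrps = math.ceil(pdps * rrf)
    # SPs: server-side ports = number of servers x HBAs per server
    sps = num_servers * hbas_per_server
    # Total switch ports = data-path ports + redundancy ports + server ports
    return pdps + rrps + sps

# Illustrative run: 800 MB/s workload, 200 MB/s ports, 4 servers, 2 HBAs each
# -> 4 PDPs + 2 RRPs + 8 SPs = 14 switch ports
total = estimate_switch_ports(800, 200, 4)
```

Rounding up at each step is a deliberate choice here: a fractional port requirement always consumes a whole physical port.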

Once the total estimated number of switch ports is calculated, the workload characteristics can be applied to determine the type of access and performance factors necessary. As shown in our examples, this can be an OLTP-type application, such as our banking example in Chapter 17, or a typical web transactional workload that may be used for processing warranty and service information. Finally, the ubiquitous data-centric application, the data warehouse, demonstrates unique processing characteristics and is enhanced by the basic architectural value of SAN architectures.

Switch port estimates are applied to each of these workloads, which drives the configuration toward the connectivity scheme that best suits the workload. As discussed previously, we then configure and distribute the port count into a cascading, meshed, or core/edge configuration. Through examples, we apply an I/O workload to each configuration and discuss the rationale for our decision, first using OLTP to demonstrate a core/edge solution. We then move to web transactional applications supported by a meshed configuration, and finally a data warehouse using a cascading architecture.
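The workload-to-topology pairings described above can be captured as a simple lookup. This table is only a shorthand for the chapter's examples, not a rule; the key names are assumptions chosen for illustration.

```python
# Pairings from the chapter's examples: each workload type is matched
# to the SAN topology whose strengths best fit its access pattern.
TOPOLOGY_FOR_WORKLOAD = {
    "oltp": "core/edge",             # optimization, redundancy, recovery
    "web_transactional": "meshed",   # shortest server-to-data paths
    "data_warehouse": "cascading",   # scales servers and storage quickly
}

chosen = TOPOLOGY_FOR_WORKLOAD["oltp"]
```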


Storage Networks
Storage Networks: The Complete Reference
ISBN: 0072224762
Year: 2003
Pages: 192