Three major points have emerged from our discussion of applying workloads to SANs. First is the SAN's scope of configurations, which handles the most common set of workloads within the data center. Second is the flexibility of the configuration, which enables transactional, messaging, and batch types of workloads. Third, in some cases, as in data-centric data warehouse loads, two diverse workloads can be processed concurrently while maintaining a manageable configuration that meets service levels.
It should be noted that SANs consume both manpower and dollar expenses, in many cases over and above those of your existing direct-attached configurations. Cost notwithstanding, the following additional considerations should be analyzed when contemplating the use of SANs within a data center.
Enterprise Workloads: SANs are justified by enterprise-level workloads whose resource requirements exceed the scope and functionality of direct-attached solutions, or which exceed the capacity of a NAS solution.
Integrating Systems and Networking Skills: SANs require that existing personnel be educated about them; there's no getting around it. However, storage expertise, coupled with network expertise, will best facilitate capacity planning, design, and installation activities.
Plan for Open-Ended Solutions: SANs allow the data center to design and plan for the long term while making purchases for the short term. This leverages the SAN's scalability and long-term viability as a storage solution.
All the solutions in these examples follow a macro plan. The following steps are recommended when implementing a SAN into a production environment. The macro plan is further defined by separate micro plans that describe actual task-level activities, IT assignments, and target dates. Additional steps may be required depending on the level of change control practiced within the data center. (Refer to Chapter 23 for more details on change control processes and practices.)
Our examples show SANs supporting three common types of workloads: OLTP, Web/Internet-based, and data warehouse. It is beyond the scope of this chapter to illustrate the enterprise application, which will be defined by your workload planning. Typically, configurations are comprised of combinations of 8-, 16-, and 32-port FC switches, with disk arrays commensurate with storage capacities; however, it's not unlikely to surpass 20 distinct systems. Another important point we have focused on in configuration management is the inclusion of inter-switch link (ISL) ports, as well as an FC-to-SCSI bridge integrated into a tape library.
Define a test installation environment. Putting a small configuration in place provides essential first-hand experience with the configuration and operation of a SAN environment. It also provides a test bed for assessing future software and hardware upgrades while serving as an application testing facility.
Use the test installation to initiate a pseudo-management practice. Management becomes the most challenging activity when operating the SAN. It's also the fastest-moving and most quickly evolving practice, given the rapid change in software tools and accepted practices today. Detailed discussion of SAN management can be found in Part VI of this book.
Develop a production turnover activity in which a formal change window is established. In many cases, this may need to be integrated into existing change management activity within the data center. Key among these activities is tracking the changes made to components of the SAN. It is particularly troublesome if you formalize changes to the switch configurations but don't track upgrades to critical components like HBAs, routers, and attached storage devices.
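The change-tracking record described above can be sketched in a few lines. This is a minimal illustration, not any particular SAN management product; the field names, component names, and version strings are all assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeRecord:
    """One entry in the SAN change log, captured during the change window."""
    change_date: date
    component: str      # e.g. "switch", "HBA", "bridge", "storage array"
    device_id: str      # illustrative identifier, e.g. a switch name
    old_version: str
    new_version: str
    back_out_plan: str  # how to restore the prior state if needed

change_log = []

def record_change(rec):
    """Append a change to the log during the formal change window."""
    change_log.append(rec)

def changes_for(component):
    """All logged changes for one component type -- useful for spotting
    a switch upgrade that was never matched by an HBA upgrade."""
    return [r for r in change_log if r.component == component]

# Illustrative entry: a fabric OS upgrade on one switch.
record_change(ChangeRecord(date(2002, 5, 1), "switch", "fcsw01",
                           "2.6.0", "3.0.1", "reload saved 2.6.0 config"))
```

In practice the log would live in the data center's existing change management system; the point is that every SAN component change gets a record, with a back-out plan attached.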
An important aspect of the production installation is the discipline surrounding establishment of a back-out practice. Because the SAN is an infrastructure in and of itself, its reliability can be problematic in the beginning, as with any new technology. However, being able to back out quickly and return the production environment to a previously existing state will save valuable time as you move into a SAN environment.
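A back-out practice amounts to capturing a known-good state before each change and being able to restore it quickly. A minimal sketch, assuming the configuration can be represented in memory (the dictionary layout and version numbers below are hypothetical):

```python
import copy

# Hypothetical picture of the SAN configuration; a real environment
# would capture switch configs, zoning, and firmware levels.
san_config = {
    "switch-1": {"fabric_os": "2.6.0", "zones": ["oltp", "warehouse"]},
    "switch-2": {"fabric_os": "2.6.0", "zones": ["tape"]},
}

snapshots = []

def take_snapshot():
    """Capture the known-good state before a change window opens."""
    snapshots.append(copy.deepcopy(san_config))

def back_out():
    """Return the environment to the previously existing state."""
    if snapshots:
        san_config.clear()
        san_config.update(snapshots.pop())

take_snapshot()
san_config["switch-1"]["fabric_os"] = "3.0.1"  # the attempted change
back_out()                                      # the change misbehaved
```

The discipline, more than the mechanism, is what matters: no change enters the production window without a snapshot that makes the back-out possible.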
If you have established a formal set of production turnover and change window practices, maintaining the SAN components should become manageable. The key area in providing maintenance to the SAN components is recognizing their complexities as interoperable components. Upgrading the fabric OS in a switch configuration may affect interactions with the server HBAs, which in turn may impact storage bridge/routers and other attached node devices.
Further, establishing a maintenance matrix of SAN components is your best defense against maintenance ricochet, where upgrading or changing one component affects the operation of others. That said, SANs are no more complex than networks, and as we've discussed several times in this book, they share many of a network's processing characteristics. The differences you encounter will be in the devices attached to the network (for instance, the switches, servers, routers, tapes, and so on). One consequence of those differences is that the mean time to defect recognition is longer than in traditional networks, because there are no directly attached clients to provide instant feedback when the network is down or not operating properly.
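A maintenance matrix can be as simple as a table of qualified version combinations, consulted before any upgrade. The sketch below assumes that form; every component name and version number is illustrative, not drawn from a real compatibility list.

```python
# Hypothetical interoperability matrix: for each component version, the
# versions of dependent components it has been qualified against.
MATRIX = {
    ("fabric_os", "3.0"): {"hba_driver": {"4.1", "4.2"},
                           "bridge_fw": {"1.8"}},
    ("fabric_os", "2.6"): {"hba_driver": {"3.9", "4.1"},
                           "bridge_fw": {"1.7", "1.8"}},
}

def upgrade_impact(component, version, installed):
    """List the dependent components that would fall outside the
    qualified matrix if (component, version) were installed -- the
    'maintenance ricochet' to check for before any upgrade."""
    qualified = MATRIX.get((component, version), {})
    return [dep for dep, ver in installed.items()
            if dep in qualified and ver not in qualified[dep]]

installed = {"hba_driver": "3.9", "bridge_fw": "1.8"}
at_risk = upgrade_impact("fabric_os", "3.0", installed)
```

Here the fabric OS upgrade flags the HBA driver as needing attention first, which is exactly the ricochet the matrix exists to catch.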
Consequently, there is a need to monitor the operation of the SAN as actively as possible. Although we cover this in more detail in the management part of the book (Part VI), the information gathered during this activity should play a central role in problem identification, tracking, and resolution.
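Since no clients will report an outage for you, active polling is the fallback. The pass below is a minimal sketch of that idea: the health query is stubbed out, and the function name, status fields, and switch names are assumptions rather than any vendor's real API.

```python
from datetime import datetime

def poll_switch(name):
    """Stand-in for whatever health query your switch vendor exposes
    (SNMP counters, CLI scraping, etc.); returns a status dict."""
    return {"name": name, "status": "online", "port_errors": 0}

event_log = []

def monitor_once(switches):
    """One polling pass; any abnormal reading becomes a timestamped
    event that feeds problem identification, tracking, and resolution."""
    for name in switches:
        status = poll_switch(name)
        if status["status"] != "online" or status["port_errors"] > 0:
            event_log.append({"time": datetime.now(), **status})

monitor_once(["switch-1", "switch-2"])
```

Run on a schedule, a pass like this shortens the mean time to defect recognition that the attached-device differences would otherwise stretch out.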