The Storage Analysis


Analysis of storage is divided into two distinct parts: new storage demands and existing storage allocations. Although both activities culminate in the same place, reasonable configurations that support end-user demands, they remain distinct for the following reason. Demands for new storage provide an opportunity to consider alternative methods of meeting those demands, rather than simply extending the current configuration. This is most appropriate in storage networking, given the sometimes overlapping solutions offered by NAS and SAN. It is also appropriate when considering a move away from direct-attached configurations, where the new storage demands are not closely tied to existing configurations.

Leveraging a Storage Upgrade

When considering a storage upgrade for capacity and performance, the analysis of a storage network should be part of the storage capacity planning activities. However, this poses a challenge to the existing server infrastructure and installed storage components. Moving away from any of these requires additional hardware, software, and training. It also requires concurrence from the external drivers, such as applications and systems. Each of these constituencies is likely to resist change, and the ability to rely on information from the external driver matrix will help justify the "what's in it for me" scenario.

Analysis of existing storage can provide greater justification for storage networking solutions, given the scalability limitations of the client/server direct-attached model (see Chapters 1 and 2). More often than not, the move to storage networking provides a longer-term solution for supporting increased user demands. Using the previously described user requirements translations and I/O workload calculations, the justification can prepare the way for the necessary increase in expenditures, while also showing that adding storage to existing server configurations is only a short-term fix.

Establishing a New Storage Network

Analyzing new storage demands provides an opportunity to leverage a storage network; driven by user requirements and I/O workload analysis, the justification can be compelling. New storage capacity can be depicted not only in terms of scalability of capacity and performance, but also in terms of its ability to consolidate some of the legacy storage into the storage network. This provides the first articulation and integration of the internal consolidation factors that are so popular in justifying SANs. The same can be said for NAS devices if the storage and application characteristics justify that solution.

Working in conjunction with your systems colleagues can create real synergy in establishing a storage network strategy. First is the consolidation of servers, which in itself is a large reduction in overall systems responsibility and administration costs. This is augmented by the savings in OS and application license fees associated with the multiple servers of direct-attached strategies. Finally, there is the added benefit of managing fewer server entities and the processing consolidation that occurs when application processes collapse into a single, albeit larger, server. These cost savings begin to offset the increased costs associated with a storage network solution.

A hidden benefit to systems administrators is the performance and problem management advantage that comes with storage networking. The consolidation of servers and the collapsing of storage arrays translate to less network connectivity, fewer servers to manage, and consequently fewer things to go wrong. Establishing an agreed-upon and reasonable metric for this allows a quantifiable benefit to be monitored once the plan is adopted. In other words, if installing four NAS devices can collapse file servers on a 20:1 basis, then the quantifiable benefit is retiring 20 general-purpose servers for every NAS device, as the short calculation below shows. Once the plan is implemented, the redeployment or retirement of the 80 servers in our example would create excellent credibility for the plan and the storage network.
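To make the metric concrete, here is a minimal sketch of the arithmetic, assuming the 20:1 ratio and the four NAS devices from the example above; both numbers are illustrative, not measured values.

# Illustrative only: consolidation figures assumed from the example above.
nas_devices = 4                 # NAS devices planned for deployment
consolidation_ratio = 20        # general-purpose file servers replaced per NAS device

servers_retired = nas_devices * consolidation_ratio
print(f"Quantifiable benefit: {servers_retired} servers redeployed or retired")
# Prints: Quantifiable benefit: 80 servers redeployed or retired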

Tools for Storage Analysis

Storage networking can be analyzed in two ways: the physical capacity and the performance of the configuration. Just as direct-attached storage configurations are monitored for storage allocation, usage, and access, storage networks need to provide the same information. The problem lies in both the lack of tools that account for the multiple uses of networked storage arrays and the immaturity of the system tracking databases that provide historical data. Performance monitoring in storage networks presents the same challenge: tool deficiencies and a lack of historical data collectors.

There are several storage software choices when considering capacity and access monitoring tools. It is beyond the scope of this book to analyze or recommend any of them; the choice should be data-center specific, dependent on the needs of the entire storage infrastructure. These tools fall under a broad category known as storage resource management tools, with two subcategories of interest here: quota management and volume management tools.

Storage Resource Management

Quota management tools provide a mechanism to assign storage capacity quotas to end users, to specific applications, or to a combination of both. They act as a safety net against errant users or applications consuming the majority of storage capacity in an otherwise unregulated environment. The quotas are generally set by administrators, from either a storage or a systems perspective, and managed from a central location, as the sketch below illustrates.
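As a rough illustration of the idea, the following sketch models a centrally administered quota check. The class name, subjects, and limits are hypothetical; they simply stand in for whatever quota engine a data center actually deploys.

# Hypothetical sketch of a centrally managed quota check; not a real product API.
class QuotaManager:
    def __init__(self):
        self.limits = {}   # subject (end user or application) -> quota in GB
        self.usage = {}    # subject -> current usage in GB

    def set_quota(self, subject, limit_gb):
        """Administrator sets a quota from a central location."""
        self.limits[subject] = limit_gb

    def request_allocation(self, subject, size_gb):
        """Allow an allocation only if it stays within the subject's quota."""
        used = self.usage.get(subject, 0)
        limit = self.limits.get(subject)
        if limit is not None and used + size_gb > limit:
            return False   # the safety net: deny an errant request
        self.usage[subject] = used + size_gb
        return True

qm = QuotaManager()
qm.set_quota("payroll_app", 500)                    # 500 GB quota
print(qm.request_allocation("payroll_app", 450))    # True
print(qm.request_allocation("payroll_app", 100))    # False, would exceed the quota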

The majority of tools in these categories provide some level of reporting on storage utilization. The more sophisticated tools provide views from several perspectives: by end user, by application, by logical volume, or by logical device. These tools work well with file-oriented storage implementations; however, they become problematic when attempting to understand the storage usage of relational databases or applications employing an embedded database model.

It's important to note that in these cases, and given the installed base of databases within most data centers (which is likely to be large), the monitoring of storage utilization and access needs to rely on the database monitoring and management tools. This introduces another important element within the external capacity driver matrix (see "Establishing an External Capacity Matrix"): the influence of database administrators, designers, and programmers. Given that relational and embedded databases abstract the physical storage, the usage of these application subsystems needs to be managed in conjunction with the database experts. However, it's also important to become familiar with these tools yourself in order to understand the underlying physical activity within the storage infrastructure.

Volume Managers

Another category of tools that can provide storage analysis information is volume managers. These tools further manage storage by allocating the physical storage into virtual pools of capacity. This allows file systems and applications to access particular volumes that are predefined with specific storage capacity. This can be extremely valuable when allocating storage for applications that have specific storage requirements and may become volatile if storage becomes constrained.

Like quota management tools, most volume managers have reporting mechanisms to track both usage and access. They also provide an important function and a level of detail that enhance both performance analysis and problem determination. Volume managers work in conjunction with storage controller and adapter hardware to abstract the logical unit number (LUN) schemes used by the hardware. As such, they provide a physical-to-logical translation that becomes critical to understanding the actual operation of a storage configuration. Given the complexities of SANs, these functions can be extremely important in problem determination and performance monitoring.

We have used volume manager examples in many of the figures in this book. As Figure 24-2 shows, they add relevance to the naming of storage devices within a configuration. In this figure, which depicts a typical storage area network, prod01, prod02, and prod03 are disk volumes that contain the production databases for the configuration. The storage administrator assigns each volume its name through the services of the volume manager software. The volume manager, meanwhile, manages the storage pools prod01, prod02, and prod03 transparently. Working in conjunction with the storage controllers and adapters, the volume manager translates the LUN assignments within each storage array. To view the actual physical operation of the storage configuration, one must understand and examine the LUN assignments and activity within each storage array. A simple sketch of this translation follows the figure.

Figure 24-2: Volume management working with storage arrays
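The physical-to-logical translation described above can be sketched as a simple mapping. In the hypothetical sketch below, the volume names follow Figure 24-2, while the array names and LUN numbers are invented purely for illustration.

# Hypothetical sketch of a volume manager's physical-to-logical translation.
# Volume names follow Figure 24-2; array names and LUNs are invented.
volume_map = {
    "prod01": [("array_A", 0), ("array_A", 1)],            # (storage array, LUN)
    "prod02": [("array_A", 2), ("array_B", 0)],
    "prod03": [("array_B", 1), ("array_B", 2), ("array_B", 3)],
}

def luns_for_volume(volume):
    """Return the physical LUNs backing a logical volume."""
    return volume_map.get(volume, [])

def volumes_on_array(array):
    """Return the logical volumes that draw capacity from a given array."""
    return sorted({vol for vol, luns in volume_map.items()
                   if any(arr == array for arr, _ in luns)})

print(luns_for_volume("prod02"))    # [('array_A', 2), ('array_B', 0)]
print(volumes_on_array("array_B"))  # ['prod02', 'prod03']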

The quota and volume management tools provide a snapshot of storage usage and access. Looking for longer-term historical usage, however, becomes problematic. One of the continuing difficulties within open systems operating environments is the lack of historical collection capabilities. This situation persists for a number of reasons, such as disparate processing architectures, a "box" mentality, and distributed computing challenges. Each of these is a roadblock when systems personnel attempt to gather historical processing information.

Collecting Storage Information

Disparate operating systems present the first problem, as the major contributor to the differences between vendors, especially between UNIX and Windows operating environments. Historically, the centralized mainframe computing systems of IBM and others provided a unique architecture for gathering processing data. Most examples in this area point to the IBM System Management Facilities (SMF) and Resource Measurement Facility (RMF) functions as models for what should be expected within the open systems area. Overall, this is not the case; taken out of their centralized and proprietary context, the SMF and RMF models for historical information gathering do not fit well into the world of client/server and distributed computing.

The first area not covered by the IBM model is the collection and support of distributed computing configurations that have become commonplace within client/server installations. This is exacerbated by the open architectures of UNIX systems and the lack of any standard processing nomenclature. Consequently, there are multiple ways of viewing a process across the UNIX variants of the open systems model. This carries over into the storage space, as the nomenclature and operation of UNIX storage models are, by design, very flexible and open to vendor interpretation. Gathering data about a Sun configuration is therefore a different exercise than gathering processing information about an AIX environment. Add in the even more open area of storage, and vendor support for each of these environments makes a common historical database such as IBM's SMF and RMF impossible.

The second area that sets the centralized proprietary environment apart is the distributed nature of processing. As servers became specialized, the ability to provide a historical record in the context of the application became problematic at best. Consider a database transaction that executes partly on an application server, where application logic creates and submits a database query, which is then executed on another server. The historical log of the application processing becomes difficult to track as the work migrates from one server to another. With distributed processing activity, the combined activity information, along with the related resource utilization, traverses from one operating system to another. This example points out how problematic it is to record database transactions within the context of the application.

Integrating the complexities of a SAN into the collection discussion adds yet another source of processing and resource information to the capacity planning equation. In a SAN, this source is the fabric and the related micro-kernel operations that move data within the storage network. If we add the NAS environment, we add two more sources of processing: the NAS micro-kernel and the operating systems that run the network fabric. Collecting data for both SAN and NAS therefore adds data collection points that are, once again, distributed in nature.

The point is that the collection of data within the distributed environment continues to be problematic. Although some methods attempt to deal with these issues, the need for an alternative approach to historical data collection is evident. Currently, only a limited number of software tools effectively address this area, using emerging standards that not all vendors have accepted. This exacerbates the situation for storage analysis within storage networks, given the difficulty of collecting valid storage activity information.

However, there are two basic initiatives to consider when collecting historical storage activity information. The first is the industry initiative around the common information model (CIM), an object-oriented model for collecting system processing details and device status. CIM is a Microsoft-initiated standard that has achieved some level of acceptance, although its complexity continues to make it impractical for many storage network uses. The other is a network-oriented tool that many storage networking vendors include with their products: the management information base (MIB). MIBs provide the quickest way of collecting information in-band for the SAN and within the NAS micro-kernel (although they are used less often at the NAS micro-kernel level).

The Common Information Model and Storage Networks

The common information model (CIM) is an object specification that provides a uniform way of describing a computer and its components. The specification is used as a standard for writing applications that access the objects described within the model. The CIM initiative started as a Microsoft object-oriented structure used mainly by the operating system to build a repository of general and specific information about the computer it was running on.

The objective of this functionality was to give hardware companies the ability to access their components over a network. Because the model was used for support and problem management, vendors quickly contributed to the Microsoft-led initiative with specifications for general computer components, concentrating chiefly on internal components such as processors, adapter cards, and memory structures. It was also Microsoft's objective to evaluate a configuration in order to determine the requirements for particular operating system functions, application packages, and licensing verification. Oddly enough, the initial releases of the CIM specification did not reflect any of the major peripheral and processing software components necessary to complete a computer configuration.

The Microsoft CIM initiative was soon passed to the Distributed Management Task Force (DMTF), a standards body that manages similar initiatives for distributed PCs. The DMTF had already recognized the CIM initiative and included it as part of the initial releases of its standards. The move to integrate CIM as a DMTF standard came around the same time another consortium initiative was passed to the DMTF: WBEM, or Web-Based Enterprise Management.

The WBEM initiative began the development of standards for enterprise management of computers over the Web. Because Microsoft was one of the original members of the WBEM consortium, one of the standards the initiative started with was CIM. So any vendor wishing to develop products that can be managed uniformly, regardless of vendor association, and managed over the Web must meet the CIM and WBEM standards.

Consequently, specifications for general storage devices (say, IDE and SCSI drives) and offline media, such as tape and optical peripherals, were not added until later. The component descriptions for storage area networks are very new, having been added only as of the writing of this book. However, given vendor acceptance and cooperation, CIM and WBEM do provide a uniform description of storage devices and storage network components. As these specifications are incorporated within vendor products, the data center begins to see a consistent view of the storage infrastructure.

The important points here are the eventual scope of the CIM specifications and the use of CIM as a standard. As a standard, sufficient vendors must integrate their product offerings with CIM before the concept becomes relevant to the data center. For it to be a meaningful solution, both vendors that develop management products and vendors that develop storage networking products must adhere to the CIM standard. In other words, it would be difficult if vendor A's disk products conformed to the CIM standard while vendor B's did not, conforming instead to their own specifications of storage devices and activities. The same can be said for vendors that provide products for performance management and capacity planning: the standard must support all instances of CIM implementation at the storage hardware and software levels. In addition, the CIM specification itself must have sufficient detail to provide value.
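As one concrete illustration of consuming CIM data, the sketch below uses the open-source pywbem library to enumerate CIM_StorageVolume instances from a CIM/WBEM provider. The host, credentials, and namespace are placeholders, and the target provider must actually implement the storage classes for the query to return anything; treat it as a sketch of the approach rather than a recipe for any particular vendor's product.

# Sketch only: query a CIM/WBEM provider for storage volumes with pywbem.
# The host, credentials, and namespace below are placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    "https://cim-provider.example.com:5989",      # placeholder CIM-XML endpoint
    ("cimuser", "password"),                      # placeholder credentials
    default_namespace="root/cimv2",               # namespace varies by provider
)

try:
    for vol in conn.EnumerateInstances("CIM_StorageVolume"):
        # DeviceID is a key property; BlockSize and NumberOfBlocks give the raw
        # capacity when the provider populates them.
        device_id = vol.get("DeviceID")
        blocks = vol.get("NumberOfBlocks")
        block_size = vol.get("BlockSize")
        size_gb = blocks * block_size / 2**30 if blocks and block_size else None
        print(device_id, size_gb)
except pywbem.Error as exc:
    print("CIM query failed:", exc)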

SNMP and MIBs

MIBs, on the other hand, have come through the standards process and been accepted as a valid solution for collecting and describing information about (get ready for this) networks. Yes, the MIB specification came from the network environment and continues to be used today as a fundamental element of network management activities.

Management information bases (MIBs) are complex, file-oriented databases that describe a network and its components and act as a repository for the activity information generated within the network. MIBs serve as the database for a distributed protocol used to access remote networks for management purposes: the Simple Network Management Protocol, or SNMP. Anyone with networking experience should be quite familiar with this concept and protocol.

SNMP and its related MIBs create a way of collecting information for inclusion in performance, problem, and capacity management. However, they are complex systems that require specific programming to derive value. Many network management products base their functions on the SNMP and MIB standards. The proximity of network technologies within SAN environments prompted the inclusion of MIBs within SAN switch software.

Leveraged by SAN vendors as an accepted network management tool, the SNMP and MIB combination for SAN management laid the groundwork for today's management repository for SANs. Although less likely to be accessed or included within the micro-kernel applications, NAS configurations can also be included in SNMP and MIB solutions. The caveat is the complexity of the solution and the long-term viability of the MIB. The data center must either rely on third-party software products that integrate SNMP functions while maintaining their own MIB files, or face writing its own SNMP scripts and defining the MIB files itself. The latter is a complex, time-consuming task for the data center.
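To make the SNMP path concrete, the following sketch uses the open-source pysnmp library to poll a single standard MIB object (sysDescr) from a switch. The switch address and community string are placeholders, and a real SAN management script would walk the vendor's fabric MIB rather than this generic object; the example shows the mechanics only.

# Sketch only: read one standard MIB object over SNMP with pysnmp.
# Switch address and community string are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),                     # SNMPv2c community
        UdpTransportTarget(("san-switch01.example.com", 161)),  # placeholder switch
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print("SNMP query failed:", error_indication)
elif error_status:
    print("SNMP error:", error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))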

 