Chapter 9. Final Word: Tape is Dead . . . Maybe


The previous two chapters have discussed the provisioning of storage to applications: the most frequently cited source of pain in contemporary storage administration. Implicit in the discussion was a storage topology increasingly treated as a "given" in vendor marketing literature, but rarely fully explicated. As shown in Figure 9-1, this implicit storage architecture consists of at least three tiers of storage technology interconnected by a fabric or network.

Figure 9-1. Multitiered storage architecture.


Tier one consists of high-end arrays that offer superior access speed, internal intelligence, and sophisticated data replication functionality: the expensive, state-of-the-art array one might purchase from a company with a three-letter acronym for a name. The second tier comprises less expensive, lower-performance arrays (perhaps built with SAS and SATA drives by the time this book is published), or legacy tier-one products kept in service to push their useful life out a few more years. These tier-two arrays provide high capacity for the reliable storage of less frequently accessed "reference" data. Finally, there is a tier of tape and optical disk used primarily for disaster recovery-focused data copying and/or archiving.

This three-tier model provides a platform that can be enhanced through software processes to support access frequency-based data migration, which was described in the previous chapter and whose operation is summarized in Figure 9-2. The purpose of such software processes is to improve capacity utilization efficiency in a manner that does not lock a consumer into a particular vendor's technology.

Figure 9-2. A data management system featuring access frequency-based data migration.


Such a technology reduces the cost of storage because it ensures that less frequently accessed data is not stored on the most expensive gear, and that data that has outlived its useful life is purged from storage altogether. It does these things automatically, with minimal administrator intervention.
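The logic of such a migration process can be sketched in a few lines of code. The tier names, the 90-day idle threshold, and the object structure below are purely illustrative assumptions for the sake of the sketch, not a description of any vendor's product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative tier ladder, most to least expensive; a real policy
# engine would carry per-site tiers and thresholds.
TIERS = ["tier1_array", "tier2_array", "tape_archive"]

@dataclass
class DataObject:
    name: str
    last_access: datetime
    retention_expires: datetime
    tier: str = "tier1_array"

def migrate(obj: DataObject, now: datetime) -> str:
    """Decide one object's fate: purge it, demote it a tier, or leave it."""
    if now >= obj.retention_expires:
        return "purge"                    # data has outlived its useful life
    idle = now - obj.last_access
    if idle > timedelta(days=90) and obj.tier != TIERS[-1]:
        obj.tier = TIERS[TIERS.index(obj.tier) + 1]   # demote one tier
        return f"moved to {obj.tier}"
    return "unchanged"
```

Run periodically over a catalog of such objects, this is the whole of the policy: infrequently accessed data drifts down the tier ladder toward tape, and expired data is purged, with no administrator in the loop.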

The key challenge inherent in such a system, if history is any indication, is the collection of information about business processes and applications that will form a knowledge base for use in data classification, retention policies, security requirements, and other definitions. This information is part of the "DNA" of a data description header and determines much of how data will be hosted and migrated throughout its useful life, as well as the method of protection that is best applied to the data.
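The kind of metadata such a "DNA" header might carry can be sketched as a simple structure. Every field name and value below is hypothetical, chosen only to illustrate what the data classification effort must produce:

```python
from dataclasses import dataclass

@dataclass
class DataDescriptionHeader:
    """Hypothetical 'DNA' attached to a data object at creation time."""
    owning_application: str   # business process the data serves
    criticality: str          # e.g. "mission-critical" vs. "reference"
    retention_years: int      # regulatory/legal retention period
    security: str             # e.g. "confidential", "public"
    protection: str           # preferred protective service, e.g. "mirror"

# Example: data produced by a hypothetical accounts-receivable application.
hdr = DataDescriptionHeader(
    owning_application="accounts-receivable",
    criticality="mission-critical",
    retention_years=7,
    security="confidential",
    protection="mirror",
)
```

The hard part, as the text notes, is not the structure itself but populating it: each field encodes a business judgment that must be gathered from process owners before any policy engine can act on it.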

The process of data collection on business processes and applications, and the determination of their criticality, has long been a challenge for disaster recovery planners. It may entail a multi-month, sometimes a multi-year, effort to track data back to its original application in order to determine its importance to business continuity; to define its growth and access characteristics; and, to identify any special regulatory or legal provisions affecting its retention. As a veteran of 60-plus plans, I can confidently assert that the data collection and risk analysis processes are the most difficult tasks in DR.

This view is echoed by those who worked to convert customers to IBM's Systems Managed Storage approach to storage management throughout the 1990s. The creation of Storage Classes in SMS required a detailed understanding of the criticality of applications and of the requirements that the application imposed on its data in terms of retention, protection and accessibility. Only with such information could a policy be developed that would target a certain class of data in a specified protective service such as mirroring or tape backup.

Today, the issue of application awareness is again coming to the fore as many organizations become interested in developing managed storage services for "sale" to internal "customers," and as telcos and other organizations endeavor to become "second generation" Storage Service Providers (SSPs). In both cases, the data access and storage requirements of applications need to be clearly understood in order to establish meaningful service level agreements (SLAs). While networked storage has not matured to the point where performance and resiliency can be taken for granted, vendors (and former first-generation SSPs) like CreekPath Systems and Storability are creating what amounts to workflow management software to facilitate storage services and accounting.

The industry has made some preliminary moves to build a storage infrastructure services quality model, called the 7-Layer Storage Management Model (an obvious homage to the OSI 7-Layer Network Model), to facilitate the use of a storage network to provide a managed storage utility. Authors of the model observe that separate, non-integrated management tools exist for SAN disk subsystems, Fibre Channel fabrics and IP networks, NAS file heads, tape backup subsystems, SNMP-enabled remote management, OSS/BSS databases, and so on, and that these must be combined into a common platform in order to manage a service provider storage network and to provision and bill for storage services. A desirable feature set for enabling a storage utility from a storage network, writes one advocate of the model, includes disk virtualization, backup automation, automatic storage provisioning, FC fabric/IP network integration, heuristic SLA management, and a customer management portal, all accessible through a web browser. [1] EMC Corporation was responsible for articulating layers 1 through 5 of the model (see Table 9-1), while CreekPath Systems is cited as the author of layers 6 and 7.

Table 9-1. The Storage Management Services Layer Model

Layer   Storage System Technology
7       Self-Healing Policy Driven Management
6       Automatic Provisioning
5       Remote Management and Portal Service
4       Interoperability and Sharing
3       Enabling Software for Data Management
2       Network Protocols
1       Storage Hardware

Ultimately, whether driven by the need to rationalize storage costs, to deliver storage as a service, or to protect the most irreplaceable asset in any organization (its data), work must first be done to understand applications and their I/O.



The Holy Grail of Network Storage Management
ISBN: 0130284165
Year: 2003
Pages: 96
