Fact and Fiction in Networked Storage


The panacea offered by the storage industry for the data explosion is summarized in the catch-all expression "networked storage." Networked storage is a "marketecture" term encompassing, at present, storage area networks (SAN) and network-attached storage (NAS).

In the writings of the industry (brochures, white papers, trade press articles, etc.), networked storage is often described as a revolutionary departure from traditional server-attached storage (sometimes called "server-captive" storage). In essence, it comprises topologies that separate storage platforms into their own infrastructure, enabling

  • Storage scaling without application disruption,

  • Enhanced storage accessibility,

  • Storage self-management, and

  • Intelligent and automatic storage provisioning and maintenance.

One of the first descriptions of networked storage appeared in a visionary white paper from Compaq Computer Corporation, which was discussed in the previous Holy Grail book. With its acquisition of Digital Equipment Corporation in the 1990s, Compaq also acquired a conceptual design for networked storage called the Enterprise Network Storage Architecture (ENSA), which it promulgated in a white paper of the same name in 1997.

ENSA envisioned a utility storage infrastructure that delivered the capabilities enumerated above: scalability, accessibility, manageability, and intelligence. At least at first, Compaq did not leverage claims of a data explosion to justify the value of the ENSA infrastructure: ENSA simply provided an elegant and evolutionary strategy for enterprise data storage.

If anything, ENSA anticipated the diminishing revenues possible from an increasingly commoditized disk market and the need for vendors to "add value" to their storage platform offerings with software and services. The ENSA authors may also have anticipated the realities of superparamagnetism and its impact on disk storage itself.

It is a well-documented fact that disk drives have been increasing in capacity and decreasing in cost fairly consistently since the mid-1990s. According to industry watchers, disk capacity has doubled about every 18 months, while disk prices have been cut in half every 12 months (see Figure 2-3). This dynamic has been an engine of growth in the quantity of storage products sold over the past decade. It has also had the unfortunate side effect of encouraging consumers to address poor data management practices by throwing more and more inexpensive disks at the problems that mismanagement creates.

Figure 2-3. Disk capacity improvement and cost-per-megabyte decline.
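
To make the compounding concrete, the sketch below projects both curves forward using the doubling and halving intervals just cited. The starting values (a 10 GB drive at $0.10 per megabyte) are illustrative placeholders, not figures from the text, and Python is used here purely for illustration.

    def project(years, start_gb=10.0, start_cost_per_mb=0.10):
        """Project drive capacity and cost forward by the given number of years,
        assuming capacity doubles every 18 months and price per MB halves
        every 12 months (the rates cited above)."""
        capacity = start_gb * 2 ** (years / 1.5)   # doubles every 1.5 years
        cost = start_cost_per_mb * 0.5 ** years    # halves every year
        return capacity, cost

    for y in (0, 2, 4, 6):
        cap, cost = project(y)
        print(f"year {y}: ~{cap:,.0f} GB per drive, ~${cost:.4f}/MB")

Note how differently the two curves compound: over six years, capacity multiplies sixteen-fold while cost per megabyte falls sixty-four-fold.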

In 2000, however, an interesting fact about magnetic disk storage resurfaced: There were fixed limits to hard disk areal density (the number of bits per square inch that can be reliably written to and read from a disk platter), given conventional disk drive technology. These limits to magnetic storage capacity were imposed by a reality of physics called the superparamagnetic effect. And, at then-current rates of disk drive capacity improvement, the limits to growth in disk capacity would be reached as early as 2005 or 2006.

Superparamagnetism is, simply stated, the point at which the magnetic energy holding data bits written to disk media in their recorded state becomes comparable to the ambient thermal energy in the operating drive. Exceeding this limit would cause random bit flipping and make disk storage unreliable.
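
In the standard formulation from magnetic recording physics (a gloss added here, not spelled out in the text), a recorded bit remains stable only while the magnetic anisotropy energy pinning each grain comfortably exceeds the thermal energy available to flip it, a criterion commonly written as

    \frac{K_u V}{k_B T} \gtrsim 40

where K_u is the medium's magnetic anisotropy constant, V the grain volume, k_B Boltzmann's constant, and T the absolute temperature (published stability thresholds range from roughly 40 to 60). Shrinking bits means shrinking V; as the ratio falls toward unity, thermal agitation flips bits at random, which is the failure mode described above.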

The specter of disk density limitations imposed by the superparamagnetic effect had been raised many times in the past by vendors. It had also become a source of embarrassment: more than once, a manufacturer announced a fixed limit on disk size, only to be "corrected" by a competitor claiming that its superior technology allowed the construction of a more capacious disk. As a result, most vendors had decided not to talk about the subject in public.

However, in 2000, as I worked on an article on the subject for Scientific American, [5] leading manufacturers grudgingly provided their "best guess" estimates for the "superparamagnetic limit": 150 gigabits per square inch (Gb/in²). This prognosis was unanimous among leading disk manufacturers, including Seagate, Hewlett-Packard, Quantum, and IBM. Barring some unforeseen breakthrough in media materials, the best areal density that could be obtained from current disk technology was fixed at 150 Gb/in², and, given the rates of disk growth (120 percent per year), that limit was fast approaching (see Figure 2-4).

Figure 2-4. Data densities and the superparamagnetic effect.
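
A back-of-the-envelope check of that timeline: pick a starting areal density, compound it annually, and solve for the year the 150 Gb/in² ceiling is crossed. The starting density and growth multiplier below are illustrative assumptions, not figures from the text; only the ceiling comes from the estimates just cited.

    import math

    start_year = 2000
    start_density = 10.0   # Gb/in^2 in 2000; an assumption, not from the text
    limit = 150.0          # Gb/in^2, the consensus ceiling cited above
    growth = 2.0           # assumed annual multiplier for areal density

    # Solve start_density * growth**t = limit for t.
    years_to_limit = math.log(limit / start_density) / math.log(growth)
    print(f"ceiling crossed around {start_year + years_to_limit:.0f}")

With these inputs the crossing lands around 2004; modestly lower starting densities or slower compounding push it into the 2005-2006 window cited above.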

Other exotic technologies, such as perpendicular recording, thermally assisted recording, near or far field recording (NFR/FFR), atomic force resolution, and even holographic storage, were in development at leading laboratories, but their introduction as products for general consumption was still at least a decade away, manufacturers agreed. Worst-case scenario: Conventional magnetic disk would run out of elbow room at least five years before alternatives would be ready for enterprise data storage "prime time."

It is likely that the early network storage visionaries, including members of the Digital Equipment Corporation brain trust who cross-pollinated the industry after the acquisition of DEC by Compaq, had superparamagnetism in the back of their minds. For disk-based storage to continue to scale once the superparamagnetic limit was reached, it would have to scale "outside the box," whether that box was conceived as an individual disk drive or a cabinet of disk drives organized as an array. Put another way, network storage would be required to perpetuate the dynamic of 120 percent capacity improvement accompanied by 50 percent annual reduction in cost. It was an evolutionary solution to the problem.
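
The arithmetic behind scaling "outside the box" can be sketched in a few lines: once per-drive capacity is pinned at a fixed ceiling, every further doubling of demand has to be met by multiplying spindles, enclosures, and interconnects. All values below are illustrative assumptions, not figures from the text.

    import math

    drive_capacity_gb = 500   # assumed per-drive ceiling (illustrative)
    demand_gb = 1_000.0       # assumed starting capacity requirement
    growth_per_year = 2.0     # assumed annual growth in demand

    for year in range(1, 6):
        demand_gb *= growth_per_year
        drives_needed = math.ceil(demand_gb / drive_capacity_gb)
        print(f"year {year}: {demand_gb:,.0f} GB -> {drives_needed} drives")

Once the drive itself stops growing, the drive count becomes the unit of growth, which is the shift toward networked, scale-out infrastructure described above.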


