Barriers to Growth in Disk and Consequences for Platform Decision Making


Of key importance in current disk drive technology is the limit imposed by the superparamagnetic effect (SPE) on areal density scaling in disk media. As previously mentioned, SPE refers to the relationship between the magnetic stability of recorded bits and the thermal energy present in an operating disk drive. When the magnetic energy holding bits in their recorded states (0 or 1) becomes comparable to that thermal energy, random "bit flipping" can occur, making disk-based data storage unreliable.
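The stability condition described above is often expressed as the ratio of a grain's magnetic anisotropy energy to the ambient thermal energy. A minimal sketch of that arithmetic, using illustrative values (the anisotropy constant and grain diameters below are assumptions for demonstration, not measurements from any real media):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K


def stability_factor(k_u_j_per_m3, grain_diameter_nm, temp_k=300.0):
    """Ratio of anisotropy energy (K_u * V) to thermal energy (k_B * T)
    for an idealized spherical grain."""
    radius_m = grain_diameter_nm * 1e-9 / 2
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return k_u_j_per_m3 * volume_m3 / (K_B * temp_k)


# Shrinking the grain shrinks its volume as the cube of the diameter,
# so the stability factor falls off quickly as bits get smaller:
print(stability_factor(2e5, 10))  # assumed K_u, 10 nm grain
print(stability_factor(2e5, 6))   # same K_u, 6 nm grain
```

When this ratio drops too low, thermal fluctuations alone can flip a grain's magnetization, which is the "bit flipping" described above.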

SPE has become an issue because of the manner in which disk technology has achieved its phenomenal storage density improvements through the years. Vendors have increased areal density (defined as the number of tracks per inch on a disk multiplied by the number of bits per inch along a track) by packing more and more bits more and more closely into a track and by squeezing tracks closer and closer together on the same platter (see Figure 6-1). Of course, this has been accompanied by steady improvements in the sensitivity of read-write heads to enable the reading of the diminished magnetic signal of smaller and smaller bits against the backdrop of electromagnetic noise generated by the drive itself. The question that is being asked over and over today is how densely bits can be packed before SPE imposes a finite limit to capacity.

Figure 6-1. Areal density: Tracks per inch (TPI) x bits per inch (BPI).

graphics/06fig01.gif
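The relationship shown in Figure 6-1 is simple multiplication; a minimal sketch, using illustrative values (the TPI and BPI figures below are assumptions, not numbers from any specific product):

```python
def areal_density_gb_per_in2(tracks_per_inch, bits_per_inch):
    """Areal density = TPI x BPI, reported in gigabits per square inch."""
    return tracks_per_inch * bits_per_inch / 1e9


# Illustrative values roughly in the range of late-1990s high-end drives:
print(areal_density_gb_per_in2(20_000, 500_000))  # 10.0 Gb/in²
```

Either factor can be scaled to raise density, which is why vendors pursued both narrower tracks and more tightly packed bits within each track.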

According to engineers at IBM, HP, Seagate, Quantum, and elsewhere,[2] the SPE is just another engineering parameter: not so much a "limit" as a media property that must be addressed as part of disk drive design.

Experts have optimistically noted that the presumed "limit" imposed by SPE has been set, and broken, more times than the mythical Mach barriers were established, then surmounted, by Chuck Yeager and other aircraft test pilots in the 1940s and 1950s. Indeed, public pronouncements of the limits of areal density scaling had all but disappeared by the 1990s, owing to the many times that vendors who had declared areal density limits were made to appear foolish or inept by rivals who found a way to exceed them.

Initially set at 2 kilobits per square inch (Kb/in²), the SPE "barrier" was overcome by vendors time after time by improving disk drive components so that more bits could be reliably written and read within the same fixed space of the disk platter. As late as 1998, industry rule of thumb held that superparamagnetism would limit disk areal density to about 30 gigabits per square inch (Gb/in²). This would have been a frightening prospect at the time, given that areal densities of high-end disk drives were already hovering at 10 Gb/in², but no vendor was willing to make a public statement about an SPE-imposed limit.

In 1999, IBM demonstrated a significantly higher areal density at its Almaden Research facility. About a year later, the company bolstered its laboratory test results by adding "pixie dust" (a thin layer of ruthenium) to its platter coating, enabling 100 Gb/in² densities.[3]

Today, the "SPE demon" is thought to live out at about the 100 to 150 Gb/in² range, providing substantial "elbow room" to assuage the concerns of companies whose appetite for data storage seems boundless. However, the future of areal density improvement is not terribly bright beyond the 150 Gb/in² mark. Moreover, even with such a generous runway for growth, if current capacity expansion trends continue, SPE-related limits will be reached before 2010.
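The claim that current growth trends would exhaust the runway before 2010 can be checked with compound-growth arithmetic. The starting density below is an assumption chosen for illustration, not a figure from the text:

```python
import math


def years_to_reach(current_density, limit_density, doubling_months=9):
    """Years until density reaches the limit, given a fixed doubling period."""
    doublings = math.log2(limit_density / current_density)
    return doublings * doubling_months / 12


# Assume ~25 Gb/in² shipping density (illustrative) and a 150 Gb/in² ceiling:
print(round(years_to_reach(25, 150), 1))  # ≈1.9 years at a nine-month doubling
```

Even starting from a much lower assumed density, a nine-month doubling period closes the gap to 150 Gb/in² well before 2010, which is the point the passage is making.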

According to most experts, areal density improvements beyond 150 Gb/in² will not be achievable without the application of entirely new technologies, many of which will require either significant breakthroughs in enabling process technologies or discoveries in materials science. To understand the improvements that will need to be made to current disk drives in order to reach even the anticipated 100 to 150 Gb/in² areal densities, it is useful to look at the disk drive as it has evolved to this point.

Readers of the first Holy Grail book will recall that the modern magnetic hard disk (see Figure 6-2) consists of a stack of disk platters, each comprising an aluminum alloy substrate with a magnetic material coating, onto which data is written in the form of concentric tracks of magnetic bits. The platters are rotated counter-clockwise by a spindle motor at speeds of between 3,600 and 10,000 rotations per minute (RPM). Data is recorded to and retrieved from the spinning media by means of read-write heads that "fly" above the media at heights measured in microinches.

Figure 6-2. An exploded view of a contemporary disk drive.

graphics/06fig02.jpg

With most current disk drives, heads are positioned precisely over tracks on both sides, or "faces," of the media through the use of servo-motor-controlled actuator arms. Microelectronics are used to control the operation of the drive, to convert detected magnetic states into useful signals, and to provide an interface between the drive and the computer system or systems that use it.

The basic design of the disk drive traces its origins to IBM's RAMAC drive, which was introduced in 1956. The RAMAC offered random access storage of up to 5 million characters, weighed nearly 2,000 pounds, and occupied the same floor space as two modern refrigerators. Data was stored on fifty, 24-inch diameter, aluminum disk platters that were coated on each side with magnetic iron oxide. The magnetic coating of the media was derived from the primer used to paint San Francisco's Golden Gate Bridge.

In the 40-plus years that followed, innovations in the technologies of disk drive subcomponents and in signal processing algorithms led to dramatic increases in storage capacity and equally dramatic decreases in the physical dimensions of the drives themselves. According to industry spokespersons, improvements have been largely realized through the scaling of drive components, which has enabled more bits to be written to more tracks on the media.

The statistics are stunning. Magnetic disk storage capacity has increased 5-million-fold since the first disk drive, and the growth rate accelerated in the 1990s. From 1991 until 1997, capacities increased at a rate of approximately 60 percent per year. In 1998, the rate of increase rose to 80 percent per year, and in 1999 it reached 130 percent. Currently, capacity is doubling every nine months.
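The rates quoted above can be related to doubling times with compound-growth arithmetic; a minimal sketch:

```python
import math


def doubling_time_years(annual_growth_rate):
    """Years for capacity to double at a given annual growth rate (0.60 = 60%)."""
    return math.log(2) / math.log(1 + annual_growth_rate)


def annual_rate_for_doubling(months):
    """Annual growth rate implied by a doubling every `months` months."""
    return 2 ** (12 / months) - 1


for rate in (0.60, 0.80, 1.30):
    print(f"{rate:.0%}/yr -> doubles every {doubling_time_years(rate):.2f} years")

# A nine-month doubling time implies roughly 152% annual growth:
print(f"{annual_rate_for_doubling(9):.0%}")
```

By this arithmetic, a nine-month doubling time corresponds to roughly 150 percent annual growth, consistent with the accelerating trajectory the figures describe.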

It is no coincidence that areal density improvements have been seen at roughly the same times as drives featuring new read-write heads were introduced to market. As a practical matter, to increase areal density requires both the capability to write data to smaller and smaller areas (called domains) of the disk and to read the magnetic patterns from those domains efficiently.

Traditionally, bits are recorded on a disk by flying a write head over domains and altering the magnetic polarity of the grains of magnetic material located in the domain. Reading the bits is a matter of positioning the read element of a read-write head where it can successfully interpret the magnetic state of the domain and convert it into a useful signal. Over time, the precision and sensitivity of read-write heads, positioning electronics, and the media itself have improved to enable smaller and smaller domains to be used to store bits.

Many early disk drives used ferrite heads to induce magnetic changes in media. Beginning in 1979, ferrite heads gave way to thin film inductive heads, a new approach that applied silicon chip-building technology to head design and production. Thin film heads enabled closer head fly heights and the reading and writing of more densely packed bits.

Thin film heads themselves were displaced in the early 1990s as other head designs offering the capability to read even smaller-sized bits appeared on the scene. The year 1991 saw the introduction of a new head design from IBM, based on the magnetoresistive (MR) effect, which revolutionized the industry.

MR head design used a traditional inductive head approach to write data to disk media, but it added a read element whose resistance changed as a function of an applied magnetic field. This application of the magnetoresistive effect, which was first observed by Lord Kelvin in 1857, enabled a major breakthrough in areal density. By increasing the sensitivity of read heads to the minute magnetic fields generated by smaller bit domains, smaller bits could be used effectively for data storage.

While older thin film read-write heads continued to be used by many manufacturers through 1996, drives based on the newer MR head technology, incorporating additional improvements in servo control and media coating, eventually came to dominate the market. In 1997, IBM introduced another read-write head innovation, the giant magnetoresistive (GMR) head, that enabled areal densities to climb even higher.

GMR heads improved upon MR heads by layering magnetic and non-magnetic materials in the read head to increase its sensitivity to even weaker magnetic fields and, by extension, to smaller bit sizes. According to one IBM insider, "tweaks" to the company's GMR head structure, a manmade component, are being counted upon to aid the vendor in achieving capacity expectations of 100 Gb/in² within the next six years.



The Holy Grail of Network Storage Management
ISBN: 0130284165
Year: 2003
Pages: 96