The Solution is Out There


We could go on and on about the many ways that vendors have taken advantage of the deconstruction of centralized computing. While they can wrap themselves in the flag and claim that open computing broke the domination of computing by a single vendor (IBM), vendors must also concede that it ushered in an anarchical era in storage technology. In the world of contemporary open systems storage, vendors manifest the Hobbesian ideal of self-interested opportunism, and life for storage consumers is nasty and brutish, if not altogether short.

At the time the first Holy Grail book was being developed, in the late 1990s, consumers paid little attention to burgeoning issues like storage management. As long as the economy was robust and storage devices kept dropping in price, consumers were content simply to buy more, with little or no attention paid to the ultimate price that unmanaged storage would exact.

Today, we are at last feeling the "two towers" of storage pain: the need for cost-effective capacity provisioning in a "do more with less" world using inadequate tools, and the need for a data protection strategy that will endure in the face of an unmanaged infrastructure and threats that seem to be growing daily.

From a technical perspective, most of the problems of provisioning could be addressed effectively today by a combination of true, standards-based, cross-platform LUN carving-and-splicing technology (true virtualization), a robust global namespace for file storage, and a better management strategy built on an open, standards-based scheme of self-describing data and access frequency-based data migration.
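
By way of illustration only (the book does not prescribe an implementation, and every path and threshold below is a hypothetical assumption), a minimal sketch of the access frequency-based migration idea might look like this: scan a file tree and flag anything untouched for a configurable number of days as a candidate for movement to a cheaper tier.

    # Minimal, hypothetical sketch of access frequency-based migration:
    # flag files whose last access time exceeds a policy threshold so they
    # can be moved to a cheaper storage tier. The path and the threshold
    # are illustrative assumptions, not recommendations from the text.
    import os
    import time

    MAX_IDLE_DAYS = 90            # assumed policy threshold
    SECONDS_PER_DAY = 86400

    def migration_candidates(root):
        """Yield (path, idle_days) for files idle longer than the threshold."""
        now = time.time()
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    idle_days = (now - os.stat(path).st_atime) / SECONDS_PER_DAY
                except OSError:
                    continue      # file vanished or is unreadable; skip it
                if idle_days > MAX_IDLE_DAYS:
                    yield path, idle_days

    if __name__ == "__main__":
        for path, idle in migration_candidates("/primary/storage"):
            print("candidate for migration: %s (idle %.0f days)" % (path, idle))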

In terms of finding space to store all of that exploding data, a number of technologies on the horizon promise to push areal densities dramatically beyond the limits of today's magnetic storage. These include

  • Atomic Force Resolution: A number of vendors are working to provide a thumbnail-size device with storage densities greater than one terabit (1,000 gigabits) per square inch. The technology builds on advances in atomic probe microscopy, in which a probe tip as small as a single atom scans the surface of a material to produce images accurate within a few nanometers. Probe storage technology would employ an array of atom-size probe tips to read and write data to spots on the storage medium. A micro-mover would position the medium relative to the tips.

    The technology confronts four primary challenges. [2] First is the storage medium: Researchers are looking for a cost-effective and durable phase-change medium for recording bits (basically by heating data spots to change them from one phase to the other). Second is the probe tip, which must emit a directed beam of electrons when voltage is applied. A strong beam flowing from the tip to the medium heats a data spot as needed to write or erase a bit. A weak beam can be used to read data by detecting the resistance (or other phase-dependent electrical property) of the spot with which the tip is in contact or nearly in contact. Third is the actuator, or micro-mover, that positions the medium for reading and writing at the nanometer level. Fourth is packaging that allows the device to be integrated into other devices. Complicating all of this is the fact that the components currently require a near vacuum, both to reduce the scattering of electrons from the read-write beam and to reduce the flow of heat between data spots.

  • Holographic Storage: For nearly four decades, holographic memory has been the great white whale of technology research. Despite enormous expenditures, a complete, general-purpose system that could be sold commercially continues to elude industrial and academic researchers. Theoretical projections suggest that it will eventually be possible to use holographic techniques to store trillions of bytes (an amount of information corresponding to the contents of millions of books) in a piece of crystalline material the size of a sugar cube or a standard CD platter. Moreover, holographic technologies permit retrieval of stored data at speeds not possible with magnetic methods. In short, no other storage technology under development can match holography's capacity and speed potential.

    These facts have attracted name-brand players, including IBM, Rockwell, Lucent Technologies and Bayer Corporation. Working both independently and in some cases as part of research consortia organized and co-funded by the U.S. Defense Advanced Research Projects Agency (DARPA), the companies are striving to produce a practical commercial holographic storage system within a decade. [3]

    1999 was a watershed year in which prototype systems that differed substantially in design and approach were put to the test. They did share certain fundamental aspects, however. An important one is the storage and retrieval of entire pages of data at one time, each containing thousands or even millions of bits and stored in the form of an optical-interference pattern within a photosensitive crystal or polymer material. The pages are written into the material, one after another, using two laser beams. One beam, known as the object or signal beam, is imprinted with the page of data to be stored when it shines through a liquid-crystal-like screen known as a spatial-light modulator. The screen displays the page of data as a pattern of clear and opaque squares that resembles a crossword puzzle.

    A hologram of that page is created when the object beam meets the second beam, known as the reference beam, and the two beams interfere with each other inside the photosensitive recording material. Depending on what the recording material is made of, the optical-interference pattern is imprinted as the result of physical or chemical changes in the material. The pattern is imprinted throughout the material as variations in the refractive index, the light absorption properties or the thickness of the photosensitive material.

    When this stored interference pattern is illuminated with either of the two original beams, it diffracts the light so as to reconstruct the other beam used to produce the pattern originally. Thus, illuminating the material with the reference beam re-creates the object beam, with its imprinted page of data. It is then a relatively simple matter to detect the data pattern with a solid-state camera chip, similar to those used in modern digital video cameras. The data from the chip are interpreted and forwarded to the computer as a stream of digital information.

    Practical impediments to the productization of the technology include the discovery of a durable recording medium and the creation of beam-alignment technology that does not require a staff of guys in lab coats to adjust it.

  • Patterned Media: One simple solution to the problem of superparamagnetism is to segregate the individual bits by erecting barriers between them. This approach, called patterned media, has been an ongoing area of research at most laboratories doing advanced work in storage technology. [4]

    One technique showing promise is to create media with mesas and valleys. Bits are kept from conflicting with each other's magnetic fields by segregating each bit in its own mesa. The difficulty is in making the mesas small enough: they would have to be around eight nanometers across or smaller in order to achieve the kind of densities that developers are seeking. IBM has been able to build such structures with feature sizes as small as 0.1 and 0.2 micron, or 100 and 200 nanometers.

    To fabricate media with such mesas and valleys, companies have been investigating photolithographic processes used by the chip industry. Electron beams or lasers would be needed to etch the pattern onto the storage medium. Mesas, each one bit in diameter, would then need to be grown on a substrate layer. But this technique needs much refinement. One estimate is that current lithographic processes can at best make mesas that are about 80 nanometers in diameter, an order of magnitude too large for what is needed.

    Another challenge is to develop an entirely new technology for reading the weak signals produced by such small bits. A radical departure from current magnetic-disk read technology would be required.

  • Optical-Assisted Recording: One strategy for extending the life span of the workhorse magnetic-disk drive is to supplement it with optical technology. Such a hybrid approach could push storage densities well beyond the current range of 10 to 30 gigabits per square inch. Industry insiders claim [5] that capacities could eventually top 200 gigabits per square inch, surpassing the anticipated limit imposed by the superparamagnetic effect. (A rough sense of what these density figures, and the one-terabit target above, mean for individual bit sizes is sketched just after this list.)

    In operation, a laser heats a tiny spot on a disk, which permits the flying head to alter the magnetic properties of the spot so that it stores a binary 1 or 0. Two lenses focus the beam to an extremely fine point, enabling the bits to be written onto the disk at very high density. An objective lens concentrates the beam on a solid-immersion lens (the cornerstone of the system), which, in turn, focuses the light to a spot smaller than a micron across.
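
To put the density figures quoted in this list on a common scale, here is a back-of-the-envelope sketch (mine, not the book's; the only inputs are the quoted densities and the inch-to-nanometer conversion) that computes the side of a square cell holding one bit at each areal density. It makes plain why nanometer-scale probe positioning and sub-ten-nanometer mesas become necessary at terabit densities.

    # Back-of-the-envelope bit-cell sizes at the areal densities quoted above.
    # Assumes one bit per square cell; purely illustrative arithmetic.
    INCH_IN_NM = 25.4e6                      # 1 inch = 25.4 million nanometers

    def bit_cell_side_nm(gigabits_per_sq_inch):
        """Side length, in nanometers, of a square cell holding one bit."""
        bits_per_sq_nm = (gigabits_per_sq_inch * 1e9) / (INCH_IN_NM ** 2)
        return (1.0 / bits_per_sq_nm) ** 0.5

    for label, density in [("magnetic disk today, ~30 Gb/sq in", 30),
                           ("optical-assisted target, ~200 Gb/sq in", 200),
                           ("probe storage target, 1 Tb/sq in", 1000)]:
        print("%s: ~%.0f nm per bit cell" % (label, bit_cell_side_nm(density)))

    # Output: roughly 147 nm, 57 nm, and 25 nm per bit, respectively. Mesas
    # must occupy only a fraction of that pitch, which is why 80 nm
    # lithography falls an order of magnitude short of what is needed.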

The point of this brief survey is that new technology that scales well beyond the limits of magnetic disk is only a few years away. While it might sometimes seem appealing to do as little as possible to solve our current storage problems in anticipation of limitless storage space on a sugar cube, practical necessity dictates otherwise. Many organizations are already at a crisis point when it comes to unmanaged storage costs, and they need solutions now, hence their willingness to try half-baked technologies like FC SANs.

However, the promise of new technologies should not become an excuse for a whole new generation of IT folks to unlearn the hard lessons about storage management that are being forced upon organizations today. While higher density, faster access storage might forestall some of the issues around data provisioning, this technology does not address the second tower of storage pain: data protection.

Today, an obscene amount of mission-critical data remains at risk. Despite numerous events that have pushed data vulnerability to the forefront of business and IT thought, very little has actually been done to rectify the situation.

Case in point: within the past year, I had the opportunity to chat with a storage manager for a U.S. federal government agency responsible for printing all of the checks for civilian agencies and departments. The fellow noted that not one of his hundreds of servers had ever been successfully backed up. This was particularly disconcerting because of the close proximity of his data center to the Pentagon, which was targeted by terrorists in the infamous September 11th attacks. Said the manager, had the plane diverted its course only a few degrees and flown a few more miles to where his data center was located, the U.S. government's ability to pay employees, service providers, and others would have ceased to exist. [6]

For its own part, the federal government has produced only a weak mandate in the area of data protection (as opposed to data security and long-term retention, as discussed below). In response to the attacks on the World Trade Center, several financial agencies did convene a panel to look into the efficacy of mirroring arrangements as a disaster protection measure. They discovered that those organizations that had established storage mirrors across the Hudson River weathered the attacks better than those that had not, but they were rightfully concerned that the location of mirrored data centers, within a 30-mile radius of a "target-rich" environment like New York City, left them susceptible to the geographic reach of other types of attack scenarios. Just when it appeared that the Office of the Comptroller of the Currency, the Securities and Exchange Commission, and the other agencies involved in the panel were going to impose significant distance requirements on backup facilities, thereby placing a burden on storage vendors to develop some real data protection strategies, they backed off the issue completely, stating that they did not have the authority to prescribe distance requirements. [7]

While the legal mandate for data protection in the context of disaster recovery continues to be weak, this is not the case with data protection from the standpoint of security and privacy. Regulations and laws, borne out of financial scandals and concerns about healthcare patient privacy, are today exacting a toll on organizations from the standpoint of storage security provisioning.

Dealing with storage security, as discussed in this book, will require the adaptation of medieval security techniques to an entirely new threat paradigm: one in which the bad guys may not be interested in the contents of the castle, but only in the pleasure of vandalizing the castle itself. In the new millennium, you no longer need a motive to do bad things. For many computer criminals, the answer to the question of their motivation is simply, "Why not?"

The latest developments in the fast-moving world of network security, developments that may find their way into the realm of storage security as well, are technologies like Invicta Networks' Variable Cyber Coordinates. [8] This technology, patented by a former KGB major and cryptography expert who defected to the United States in 1980, is simple in concept. Basically, it provides security for a network connection by making it "invisible" to would-be eavesdroppers. This is done by rapidly changing the logical network addresses of the communicating end stations.

The core of the technology is an algorithm (which is also claimed by BBN Technologies) deployed at each of the communicating endpoints that changes addressing information at the rate of many times per second. Currently, implementation of the algorithm is in the form of a proprietary system that includes a secure network card that must be installed in each communicating system, a secure gateway that must be installed in each LAN, and a security control unit that is used to implement and manage the algorithm-based protection itself.
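
Purely to illustrate the concept (this is not Invicta's or BBN's proprietary algorithm, and every name and parameter below is an assumption), the following sketch shows how two endpoints sharing a secret key could derive the same rapidly changing logical address from a clock, so that traffic sent to any other address can simply be ignored.

    # Toy illustration of clock-driven address hopping; NOT the patented
    # Variable Cyber Coordinates algorithm, just the general idea of deriving
    # a short-lived logical address from a shared secret and the time.
    import hashlib
    import hmac
    import time

    SHARED_SECRET = b"example-shared-secret"   # provisioned out of band (assumed)
    HOP_INTERVAL_MS = 100                      # new address ten times per second

    def current_address(secret, now_ms=None):
        """Derive the logical IPv4-style address for the current hop interval."""
        if now_ms is None:
            now_ms = int(time.time() * 1000)
        slot = now_ms // HOP_INTERVAL_MS
        digest = hmac.new(secret, slot.to_bytes(8, "big"), hashlib.sha256).digest()
        # Map three digest bytes into a private 10.x.x.x address for this slot.
        return "10." + ".".join(str(b) for b in digest[:3])

    if __name__ == "__main__":
        # Both endpoints compute the same value for the same time slot, so a
        # receiver can drop any packet addressed to a stale coordinate.
        print("address for this 100 ms slot:", current_address(SHARED_SECRET))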

The approach sidesteps notions such as secure operating systems, firewalls, and payload encryption, techniques that have seen billions of dollars in research and development investment but produced little meaningful return in light of increasing incidents of computer crime. Even skeptics seem to be warming to the idea because of its simplicity and the fact that it eliminates the difficulties associated with firewall customization and key management.

It remains to be seen whether innovations such as VCC will make a difference in how we secure the data assets of our organizations going forward. For now, the key issues confronting storage security are less about technology than about training and the cultivation of data management as its own profession.


