Foreword


Storage was once a peripheral. To mainframe systems managers it was DASD, a direct-access storage device inextricably tied to big-iron processors in the data center. While those days are gone, they are by no means forgotten. The majority of the world's data still resides on storage devices that remain hardwired to a single server.

Yet there is now a near-universal understanding among IT systems managers that data should not always be hidden behind, or bottled up inside, a single server. It should be positioned so that it can be made quickly but securely available to other applications and departments within the enterprise, and to entities outside the enterprise as well. In short, data and the information it represents should be made fluid.

Why? Data, translated into information, has grown in value to the point that for many enterprises it is the most valuable corporate asset. Used effectively, data is a competitive differentiator. And with the advent of e-commerce, it has risen in importance to become mission-critical to the very enterprise as viewed from the outside through the Web portal. The data storage domain is now the single most important IT resource for assuring that enterprise data will always be accessible, will always be "on," just like electricity. Lost access to data can be severely damaging and possibly fatal to the enterprise. In a health-care setting, for example, continuous access to data could make the difference between life and death.

The concept of the fluidity of data plays an essential role in understanding how to make data persistently available. Consider the legacy mainframe storage environment once again. When data centers had only one mainframe system with direct-attached storage, deciding where to put data was easy: it had to reside on one of the volumes that the mainframe controlled.

Similarly, in the open systems world, customers running environments in which storage is directly attached to servers had only one place to allocate data: on the direct-attached storage. Data security was more or less assured so long as the operating system managing the data was trusted to perform this critical function. However, data availability was severely compromised. To get to data held captive behind a server, mainframe or otherwise, an application first had to negotiate its way through bandwidth and operating system interoperability issues, adding an intolerable degree of latency. If the server in question was out of service, so too was its captive data.

With the advent of network-attached storage (NAS) and storage area networks (SANs), IT administrators can now build environments with many different servers running their own operating systems connected to many different storage platforms. They can choose where to put the data for a particular application or set of applications: on high-speed arrays or slower arrays, SAN, NAS, or whatever may be appropriate. As a result, data rarely remains where it was originally created.

As data fluidity becomes more prevalent within enterprises, so too does the commoditization of the enterprise storage domain. Large, heavy, resource-hungry disk subsystems, for example, have given way to RAID arrays built from the small, modular, and inexpensive disk drives introduced to the mass market by the proliferation of PCs. The drives themselves are of questionable reliability when considered for use in mission-critical business applications, but the industry has found ways to bypass those shortcomings. Now big-iron disk is gone, replaced by disks that originated in the commodity PC world. The further commoditization of storage is inevitable and, in fact, even desirable.

Internet Protocol, as both a networking standard and a ubiquitous data transport utility, stands at the crossroads of these two storage megatrends: data fluidity and the commoditization of the storage domain. IP is an obvious enabler of data fluidity. Its use in data communications is so commonplace that it is taken for granted. Its ubiquity in turn breeds commoditization and mass-market distribution of IP-related hardware and software. Its widespread adoption has led to standardization as well, within all industries and all computing environments.

Therefore, the marriage of IP and data storage is inevitable as well. The advantages that IP storage proponents claim over alternatives such as Fibre Channel are numerous. IP is switchable and therefore capable of supporting high-speed, shared-resource applications. IP is mature (its interoperability issues were settled years ago), is well understood by the IT community at large, and has a huge mass of management software and service offerings available to users.

On the surface, it's a compelling argument. IP is a known quantity with a mature infrastructure complete with management facilities. But just below the surface of IP storage, things get complex rather quickly. First, IP alone does not guarantee delivery of packets from source to target. Yet SCSI, the standard storage I/O protocol, requires that data packets not only arrive at the target destination but also arrive in the exact order they were sent from the source. Therefore, to send SCSI storage packets over an IP network, some method must be devised that satisfies the guaranteed-delivery requirements of the SCSI protocol and presents packets at the target destination in the same order they were sent.
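In practice, this is solved by layering the storage protocol on TCP, which turns IP's unreliable datagrams into a lossless, ordered byte stream (this is the approach iSCSI takes). The following is a minimal sketch, using an ordinary loopback TCP connection rather than any real SCSI stack, of the in-order, guaranteed delivery that the transport must supply:

```python
# A minimal sketch (not iSCSI itself): raw IP datagrams may be dropped,
# duplicated, or reordered, but a TCP connection delivers the byte
# stream reliably and in order, which is what SCSI demands.
import socket
import threading

def serve(listener, received):
    """Accept one connection and collect everything sent over it."""
    conn, _ = listener.accept()
    with conn:
        while chunk := conn.recv(4096):
            received.extend(chunk)

listener = socket.create_server(("127.0.0.1", 0))  # OS picks a free port
port = listener.getsockname()[1]

received = bytearray()
t = threading.Thread(target=serve, args=(listener, received))
t.start()

# Send ten numbered "commands"; TCP guarantees the receiver sees them
# as one contiguous, correctly ordered byte stream.
sent = b"".join(b"CMD%02d" % i for i in range(10))
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(sent)

t.join()
listener.close()
assert bytes(received) == sent  # in-order, lossless delivery
```

The real work in iSCSI, of course, is everything this sketch omits: framing SCSI commands within the stream, recovering sessions across connection failures, and doing it all at wire speed.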

Second, satisfying the requirements of the SCSI protocol and delivering acceptable performance for a broad range of applications and workloads will require that much of the necessary code be implemented in specialized chipset hardware. This will not be a trivial task, even for the most expert of IP networking vendors. Additionally, as with Fibre Channel, a standard method for encoding and decoding IP storage transmissions must be established. No vendor will make a serious commitment to low-cost, high-volume production of the required chipsets until standards are in place. The good news is that IP storage standards that satisfy the requirements of the SCSI protocol are now established and are currently being productized.

Whatever the current limitations of applying IP to storage may be, there is a belief that they will somehow be overcome. Remember that RAID storage evolved from just a pile of inexpensive PC drives into disk arrays capable of supporting today's most critical business applications without interruption or data loss. The economic advantages of RAID drove creative and resourceful vendors to meet enterprise-user objections to the new technology. Then pioneering IT administrators found RAID arrays to be worthy additions to production data centers. RAID implementations are now as commonplace as IP in enterprise computing.

Enterprise computing is now dominated by technologies created for the mass market. We have only to witness the rise of behemoths like Intel and Microsoft to understand the powerful influence commoditization wields. IP is firmly embedded in the world of commodity computing as well. It is the heart of the Internet. A seemingly infinite number of IP tentacles now reach into millions of households and businesses.

Storage networking stands squarely in the path of commoditization. In fact, no computing technology can hide from the onslaught of commoditization. IP's march into the storage domain, as both an enabler of data fluidity and a byproduct of commoditization, is inevitable and inexorable.

John Webster
Data Mobility Group



IP Storage Networking: Straight to the Core
ISBN: 0321159608
Year: 2003
Pages: 108
