4.2 Brief History and Evolution of Storage


Before delving into potential locations for storage services within the overall infrastructure, it helps to have some context for the evolution of computing and storage architectures stretching back several decades. A look back across compute platforms, storage platforms, and their logical and physical interaction highlights that all roads point toward distributed systems. This is made possible by advances in storage devices, access technology, and storage software.

4.2.1 The Mainframe Era

The 1950s and 1960s witnessed rapid technology adoption in high-growth industries such as banking, airline travel, medical diagnosis, and space exploration. These industries required fast access to large amounts of data for transaction-intensive business processes, and mainframe architectures filled that need.

The independence of data storage can be traced to IBM's delivery of the first tape drive in 1952 and the first 2,400-foot tape reel in 1953. At the time, tape was a cost-effective means to store large amounts of data over a long period. With density increasing every year, IBM's 2,400-foot reel remained the standard until the introduction of the half-inch tape cartridge in 1984. While tape was good for storage, it was slow for access, and transaction-intensive business processes required finding the needle in the haystack in minutes. The answer came in 1956 from a team of IBM engineers in San Jose, California. The 305 RAMAC (Random Access Method of Accounting and Control) could store five million characters (five megabytes) of data on 50 disks, the equivalent of 62,000 punch cards. It was the first device that could go directly from point A to point B on a disk without reading the information in between.

The 1960s brought higher density disk storage, one head per disk, and air-bearing ("flying") heads that allowed faster access to more data. The decade also introduced the Cyclic Redundancy Check (CRC), an algorithm that allows a system to check itself for errors automatically and to correct some of them. Tape drives learned to "read backward," a method still used today to increase tape performance.
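CRC remains a staple of storage and networking today. As a rough illustration of the concept (a minimal Python sketch, not the 1960s IBM implementation), the following code appends a CRC-32 checksum when a block is written and verifies it on read; flipping a single bit is caught immediately:

```python
import zlib

def write_block(data: bytes) -> bytes:
    """Append a CRC-32 checksum to a data block, as a controller might."""
    crc = zlib.crc32(data)
    return data + crc.to_bytes(4, "big")

def read_block(block: bytes) -> bytes:
    """Recompute the checksum on read; raise if the block is corrupt."""
    data, stored = block[:-4], int.from_bytes(block[-4:], "big")
    if zlib.crc32(data) != stored:
        raise IOError("CRC mismatch: block is corrupt")
    return data

block = write_block(b"payroll record 42")
assert read_block(block) == b"payroll record 42"

corrupted = bytes([block[0] ^ 0x01]) + block[1:]  # flip one bit
try:
    read_block(corrupted)
except IOError as err:
    print(err)  # CRC mismatch: block is corrupt
```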

Storage access for mainframes was direct-attached with a limited number of storage nodes, with devices located next to mainframes within a data center. This centralized architecture worked well for the mainframe applications it served.

In the 1970s the innovation of the Winchester disk drive and the floppy disk enabled data portability and started data down the slippery slope of migration away from the mainframe. At the same time, industry visionaries such as Ken Olsen at Digital Equipment Corporation (DEC) proposed that a series of smaller networked machines, called minicomputers, could accomplish the same tasks as IBM's larger mainframe computers.

4.2.2 Minicomputers and Clustering

In October 1977 the first VAX prototype came off the production line. The VAX was a 32-bit minicomputer that also enabled DEC's strategy of clustering several smaller computers together to form one large network. DECnet was the initial networking technology proposed and delivered for VAX clusters, but by 1979 Ethernet, a system developed by Bob Metcalfe at XEROX's Palo Alto Research Center (PARC), had gathered more supporters. That year XEROX, DEC, and Intel announced their Ethernet networking plans.

Minicomputers and clustering broke the centralized lock between storage and mainframes, and alternative approaches such as DEC's VAX clusters provided choices beyond those offered by IBM. However, IBM still dominated the R&D landscape for new technologies, especially storage, and created the Plug-Compatible Market (PCM). IBM retained over 90 percent of the market because it owned the connection control points upon which disk storage architectures were built. Aftermarket competitors were forced to follow IBM's lead, which usually meant they were 12 to 18 months behind Big Blue in introducing new products.

Even with IBM's market stronghold, the appearance of minicomputers and clustering began to decentralize storage. Users now had options to develop specific data stores for nonmainframe applications. This enabled choice but also led to the breakdown of centralized storage management. However, the cost advantages of avoiding mainframe storage outweighed the unforeseen consequences of decentralized storage architectures. Without the ability to see how complex this decentralization could become, it continued and picked up speed with new advances in disk storage, such as RAID (Redundant Array of Inexpensive Disks).

In 1987 IBM and the University of California at Berkeley jointly patented the first RAID device. RAID accelerated data access and, more importantly, introduced the concept of redundancy in computer systems for reliability. RAID systems were the impetus for the next growth stage in storage, allowing manufacturers to deliver performance and availability that had previously been achieved only through the expensive large-platter disk systems dominated by IBM. The most frequently cited example is EMC's capture of IBM's disk storage market share through the 1990s.

While the famous RAID paper established a common language for describing functions such as striping and mirroring across multiple disk drives, the technology prospered through the ability to place large amounts of RAM in front of cheap disks and imitate high-performance mainframe storage. The RAM allowed instant write acknowledgments and aggressive caching for read performance. It is no coincidence that EMC and other array vendors grew out of the memory market; in fact, one of the first such products was called an Integrated Cached Disk Array.
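For illustration, here is a minimal Python sketch of the two functions named above, with disks modeled as plain lists (a conceptual toy, not any vendor's implementation): striping spreads blocks across spindles for parallel access, while mirroring duplicates them for redundancy.

```python
def stripe(blocks, n_disks):
    """RAID 0 concept: distribute blocks round-robin across n_disks."""
    disks = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)
    return disks

def mirror(blocks, n_copies=2):
    """RAID 1 concept: write every block to every disk."""
    return [list(blocks) for _ in range(n_copies)]

blocks = ["B0", "B1", "B2", "B3"]
print(stripe(blocks, 2))  # [['B0', 'B2'], ['B1', 'B3']]
print(mirror(blocks))     # two identical copies of all four blocks
```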

4.2.3 Client-Server Computing

The increasing popularity of client-server computing in the 1990s launched the deployment of midrange servers throughout the enterprise, with each group having its own storage. Server storage evolved from direct-attached RAID systems to networked storage systems based on the introduction of Fibre Channel.

All major server vendors began to offer storage choices, and independent storage vendors offered their own arrays optimized for specific needs. This migration to open systems, spurred by the adoption of interconnects such as Fibre Channel, ultimately led to the creation of independent storage islands. A typical customer configuration might include a small cluster of application servers with a small Fibre Channel network connected to one vendor's storage solution. For example, the enterprise resource planning (ERP) software often ends up on a completely different set of servers and storage devices from the customer relationship management (CRM) software.

As storage for each application grew separately, the growth of the personal computer market meant that more and more storage was split among application servers and users' desktops and laptops. Data production activities such as office automation (word processing, presentations, email) and data-intensive applications such as computer-aided design often resided outside the application servers' storage. This split caused excessive cost through redundant storage deployments and made administration difficult.

4.2.4 Distributed Web, Application, and Database Servers

Recognizing the storage investments made to keep up with digital information and the increasing portion of corporate spending on data management, both vendors and users now aim to consolidate systems to maximize resources. Distributed servers for Web services, applications, and databases are possible through the use of sophisticated networks, including load balancers and "application aware" switches. Similarly, storage through intelligent storage networks can be located almost anywhere on a global network supporting hundreds, if not thousands, of nodes.

The ultimate goal of maximizing storage resources requires delivery of virtual storage pools, as provided by storage capacity services. Also required are the other primary storage services of manageability, recoverability, performance, security, and availability. Operating in concert, these services reside throughout the larger distributed system of servers, networks, and storage devices. In Section 4.3, "Placing Storage Intelligence," we examine optimized locations for a variety of these storage services.

Figure 4-3 outlines the overall shift to distributed architectures across computing and storage platforms. Whereas previously storage could not join the distributed playing field, the advent of storage networks and services permits this type of participation. Just as computing architectures have moved to distributed scalable models, so will storage.

Figure 4-3. Historical trends across computing and storage infrastructure design.


4.2.5 Scalability

All services must scale across three dimensions: size, speed, and distance. Scalability allows users to grow a configuration without having to change its design parameters as it grows. Size can be measured as the number of nodes in a network, such as servers, switches, and storage devices. Speed refers to the processing speed of the overall system, such as application I/O operations per second, as well as component bandwidth such as 2-Gbps Fibre Channel or 10-Gbps Ethernet. The need to cover metropolitan, national, and global reach for applications requiring business continuity and disaster recovery mandates scalability in terms of distance as well. See Figure 4-4.

Figure 4-4. Storage services require size, speed, and distance scalability.

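The distance dimension can be quantified with simple physics: a synchronous storage operation cannot complete faster than the round-trip propagation delay of the link. The following back-of-the-envelope sketch, assuming light travels roughly 200,000 km per second in optical fiber, shows why metropolitan, national, and global reach impose very different design constraints:

```python
# Propagation delay alone, ignoring switching and protocol overhead.
KM_PER_MS = 200.0  # ~200,000 km/s in fiber = 200 km per millisecond

for label, km in [("metropolitan", 50), ("national", 3000), ("global", 12000)]:
    round_trip_ms = 2 * km / KM_PER_MS
    print(f"{label:13s} {km:6d} km  round trip >= {round_trip_ms:6.1f} ms")
```

At global distances the round trip alone exceeds 100 ms, which is why long-distance replication is usually asynchronous.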

4.2.6 Building More Scalable Routers

History has shown that distributed systems boost size and throughput scalability, particularly in networking applications. Consider the internal design of early routers compared with today's. The first generation of routers had a centralized routing table; all data entering the router passed through that single table, in effect creating a processing bottleneck. By distributing routing information directly to each port, forwarding decisions could be made independently, relieving the central processing bottleneck. The central processor still computes the routes, but each port operates with its own Forwarding Information Base (FIB) derived from the central forwarding table. This architecture enables greater throughput across a larger number of nodes.
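As a conceptual illustration, the toy Python model below (with routes simplified to exact prefix matches rather than longest-prefix lookups) shows the central processor computing routes while each port forwards independently from its own FIB copy:

```python
class Router:
    """Toy model of a distributed-forwarding router."""

    def __init__(self, n_ports):
        self.routing_table = {}                        # central processor's view
        self.fibs = [dict() for _ in range(n_ports)]   # one FIB per port

    def learn_route(self, prefix, out_port):
        """Central processor computes the route, then pushes it to every port."""
        self.routing_table[prefix] = out_port
        for fib in self.fibs:
            fib[prefix] = out_port

    def forward(self, in_port, prefix):
        """Each port decides locally -- no trip through the central table."""
        return self.fibs[in_port][prefix]

r = Router(n_ports=4)
r.learn_route("10.1.0.0/16", out_port=2)
print(r.forward(in_port=0, prefix="10.1.0.0/16"))  # 2, decided at the port
```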

In effect, a router is a network within a chassis. As such, one would expect networking of any devices, including storage nodes, to benefit from the distributed architecture demonstrated on the right of Figure 4-5.

Figure 4-5. Routers progressed from single to distributed tables for greater speed and performance.


4.2.7 Network-Based Storage Services

Applying the evolution of routing to storage networking, the natural conclusion is a migration toward distributed architectures to boost scalability. The introduction of a switch between two storage end nodes breaks open the traditionally isolated path and presents openings for new solutions. From a storage networking perspective, these solutions come from intelligence that previously resided in the host or target but is now delivered through network-based platforms.

As with routing, early deployments of network-based storage services used a single-location model for implementation. When the services engine resides outside the data path, more traffic exchanges must take place between the end nodes. As shown in Figure 4-6A, a single services-location model requires communication among ingress, egress, and services nodes. In this example, the ingress and egress end points are likely storage nodes such as server HBAs or disk arrays. Adding a distributed model to the configuration, much as routing added distributed functions to ports, permits simpler communication between ingress and egress points on the network. The direct path is shown in Figure 4-6B.

Figure 4-6. Using distributed storage services for optimized deployments.


By distributing services information, each node on the network is empowered to carry out the requested operations between any ingress and egress points. This configuration maximizes the efficiency of high-speed network connections specialized for storage. The streamlined approach is demonstrated in Figure 4-7.

Figure 4-7. Optimized network flow through distributed services.

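To make the difference in traffic exchanges concrete, consider a toy Python model of a virtualization service that maps a virtual volume to a physical target (all names here are hypothetical). Centralizing the map forces every I/O to detour through the services node, as in Figure 4-6A, while distributing it to each ingress point enables the direct path of Figure 4-7:

```python
VOLUME_MAP = {"vol7": "array-B"}  # virtual volume -> physical target

def io_centralized(ingress, services_node):
    """Single-location model: two exchanges, detouring via the services node."""
    target = VOLUME_MAP["vol7"]   # lookup available only at the services node
    return [(ingress, services_node), (services_node, target)]

def io_distributed(ingress, local_map):
    """Distributed model: the ingress node resolves locally, one exchange."""
    return [(ingress, local_map["vol7"])]

print(io_centralized("hba-1", "svc-node"))        # two exchanges
print(io_distributed("hba-1", dict(VOLUME_MAP)))  # one direct exchange
```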

4.2.8 Network Choices for Storage Services

The introduction of a multipoint architecture, or network, into the storage domain provides choices for services locations and functions. Given ongoing changes in applications, user requirements, and equipment, services will continue to shift location as processing bottlenecks move. The optimal services location will differ across markets and applications.

Separating services between hosts and targets implies a functional split; however, moving services from either the host or target domain into a third network location represents another breakthrough. It is far simpler to set up an exchange between two end nodes than it is to add a third location. With two nodes, each node knows that any data received came from the other node. With three nodes, the identification process grows by an order of complexity. But once the third node is established, the identification and communication infrastructure building blocks are in place to scale to n (any number of) nodes. This is the communications model deployed in mainstream networking today for use across the Internet, the world's largest network. Storage networking is also moving to n-node scalability and is likely to follow the same model.
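A small sketch makes the jump in complexity visible: between two nodes a frame's origin is implicit, but from three nodes onward every frame must carry explicit source and destination identifiers, just as Ethernet and Fibre Channel frames do (node names below are illustrative):

```python
def frame_two_nodes(payload):
    """Point-to-point: the sender is implied, no addressing needed."""
    return {"payload": payload}

def frame_n_nodes(src, dst, payload):
    """Networked: every frame identifies its source and destination."""
    return {"src": src, "dst": dst, "payload": payload}

print(frame_two_nodes("READ block 12"))
print(frame_n_nodes("host-3", "array-9", "READ block 12"))
```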


