8.1 Infrastructure Design Shifts


When evaluating new infrastructure deployments, IT managers need to consider key themes that shape current and future storage architectures. Two primary themes are the fluidity of end-to-end storage communications and the ability to shift storage services across various architectural components. While the first theme essentially leads to the second, taking each individually clarifies cause and effect.

8.1.1 Fluidity of End-to-End Storage Infrastructure

Through the history of technology, less-expensive and seemingly "less functional" components have evolved to suit applications previously unrealized by both users and vendors. A well-documented account of this phenomenon is Clayton Christensen's The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. In that work, Christensen outlines how smaller form-factor disk drives opened new market opportunities unrecognized by incumbent vendors and customers. For example, few predicted that the 5.25-inch disk drive would suit needs previously served by 9-inch platters. This set off a tumultuous cycle of disk drive vendors listening (and not listening) to customer input, with similar effects accompanying the introduction of 3.5-inch and 2.5-inch drives. In the end, smaller form factors prevailed.

The pace of technical development within the storage networking industry is creating a new-found fluidity across the infrastructure. Specifically, the commoditization of components across disk drives, storage nodes, server I/O interconnects, and networks is driving a price decline while enabling greater features and functionality. While these changes may not lead to immediate enterprise deployment, both storage networking vendors and customers should keep an eye on the ability to build an end-to-end storage infrastructure with increasingly commoditized technology.

  • Disk drives. New drive types, such as Serial ATA, and increasingly smaller and denser form factors enable more storage per rack with lighter power requirements along with reduced costs.

  • Intelligent storage nodes. Ranging in functionality from basic NAS filers, to protocol conversion routers, to virtualization engines, intelligent storage nodes are increasingly appearing as Intel x86 rack-mounted servers with a variety of storage controllers, such as iSCSI, Fibre Channel, and parallel SCSI. These servers may run a version of Windows, Linux, or any other operating system, for that matter. While these servers may not offer the speed of specialized hardware or application-specific integrated circuits (ASICs), they present a fundamental shift in the placement of storage services and platform delivery.

  • I/O interconnects. Developments such as PCI-X and PCI-Express promise to simplify the input and output mechanisms of servers. Creating high-speed, general-purpose, serial I/O interconnects allows for greater server-to-peripheral speed, functionality, and scalability. More sophisticated I/O interconnects further enhance intelligent storage nodes.

  • Device interconnects. Similar to I/O interconnects, device interconnects are one step further removed from the CPU. Parallel SCSI, Fibre Channel, and Ethernet are examples of device I/O interconnects. Typically, an HBA will provide one of those interfaces externally and connect to the CPU via a PCI-based I/O interconnect. Some device I/O interconnects may be embedded directly on server motherboards.

  • Network. Chapter 4, "Storage System Control Points," covered the introduction of network control points and the migration of IP and Ethernet technologies into the storage market. There is no more dramatic way to increase the fluidity of the infrastructure than to provide an open-systems network technology used throughout the world today. By allowing storage traffic to move across IP and Ethernet networks, with or without the presence of other types of storage subsystems, the overall storage network becomes more transparent and fully integrated with overall corporate networking services.

The end-to-end storage chain from a hardware deployment perspective is outlined in Figure 8-1. For example, blade servers are chassis-based units with hot-swappable modules. These modules may or may not have embedded disk drives. Almost all have embedded Ethernet on the motherboard, and some may have other device I/O interconnects such as Fibre Channel or InfiniBand. Blade servers allow for high rack density and the ability to allocate distributed resources at a lower cost point.

Figure 8-1. Commoditization and open-systems drive fluidity of the end-to-end storage chain.


In each segment of the chain thereafter, we see that enabling technologies continue to drive down cost while providing additional functionality. For example, iSCSI as a device I/O interconnect commoditizes that portion of the chain by directly linking to Ethernet. Intelligent storage nodes, leveraging the cost and performance of Intel x86 platforms, dramatically drive down the entry price point of sophisticated storage devices. In this category, the ecosystem of the x86 chip (operating systems, developers, tools, volumes) provides momentum to capture new market opportunities.

While the example of an intelligent storage node in Figure 8-1 is essentially an in-band device, such as an IP-to-Fibre Channel protocol converter, intelligent storage nodes can reside anywhere in the system. For example, SAN appliances that manage capacity services using virtualization could reside as out-of-band devices anywhere in the network.

Particularly when looking at new, distributed NAS applications and object-oriented approaches such as EMC's Centera line, the one-rack-unit (1U) rack-mounted server form factor continues to appear. Coupling Ethernet-focused NAS and object-oriented storage platforms with block-oriented storage technologies delivers a powerful, flexible mechanism for consolidation. Most importantly, this open-systems, commodity-oriented hardware approach enables IT managers to build robust, enterprise-class solutions. While perhaps not capable of serving all enterprise requirements, storage networks using x86 intelligent storage nodes are likely to gain from x86 ecosystem benefits and will increasingly capture a larger percentage of the overall functionality.

Benefits of the x86 ecosystem include

  • Size reduction: smaller sizes provide more form-factor flexibility.

  • Power reduction: lighter power requirements increase operating range.

  • Development scope: a large number of developers and software tools.

  • Industry volumes: high volumes drive lower cost.

  • Scalability: increases in performance provide built-in scalability.

8.1.2 Migrating Storage Functions in a Fluid Environment

As end-to-end storage functions converge on lower cost, commodity-oriented hardware, those functions migrate more freely. The proliferation of storage appliances means that storage functions can easily move out of traditional host-based or target-based implementations and into network implementations. This trend was covered in more detail in Chapter 4, and the increasing availability of lower cost platforms within the storage distribution layer means it will only continue.

If we combine the core, distribution, and access layers of the storage models covered in Chapter 2, "The Storage Architectural Landscape," and Chapter 5, "Reaping Value from Storage Networks," along with the adoption of intelligent storage nodes, we get the model shown in Figure 8-2.

Figure 8-2. Applying fluidity to the IP storage model.


The layered architecture of the IP storage model allows maximum flexibility by removing dedicated requirements between storage platforms (NAS, SAN, and object) and the underlying infrastructure. The same flexibility occurred when removing the direct-attached link between servers and storage. Now customers have widespread choices when locating storage services and storage platforms. For example, NAS servers can be easily set up and configured using Microsoft's Server Appliance Kit (SAK). An x86 PC running the SAK could also have a small back-end SAN, allowing a portion of a storage pool to be allocated to NAS platforms. A server running the SAK would represent an intelligent storage node in Figure 8-2.
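To make that placement decision concrete, the following is a minimal sketch of how an intelligent storage node might account for a shared back-end pool, carving some capacity out as block LUNs for SAN hosts and reserving a slice for a NAS head. The class, names, and sizes are hypothetical and purely illustrative; they are not tied to the SAK or any vendor API.

```python
# Hypothetical sketch: carving a shared back-end pool between SAN hosts
# and a NAS head. Names and sizes are illustrative only.

class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}          # consumer name -> GB allocated

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, consumer, size_gb):
        if size_gb > self.free_gb():
            raise ValueError("pool exhausted")
        self.allocations[consumer] = self.allocations.get(consumer, 0) + size_gb
        return size_gb

pool = StoragePool(capacity_gb=2000)
pool.allocate("db-server-lun", 800)    # block storage for a SAN host
pool.allocate("nas-head", 400)         # slice handed to the NAS platform
print(pool.free_gb())                  # 800 GB remains for future allocation
```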

Common SAN-oriented solutions that fit the IP storage model include protocol conversion and volume functions. For example, a customer might want to provide remote office access to Fibre Channel storage. In that case, a server could be deployed with both iSCSI and Fibre Channel PCI adapters. The Fibre Channel adapter connects to the storage, and the iSCSI adapter connects to the IP network used for remote access. This intelligent storage node may also provide basic volume aggregation features to the remote servers for storage allocation.
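As a rough illustration of the volume-aggregation side of such a node, the sketch below concatenates several back-end LUNs into one logical volume and translates a logical block address into a (LUN, offset) pair, which is the essence of what the gateway would do for each remote request. This is a conceptual model only; a real gateway would issue SCSI commands over Fibre Channel, and the LUN names here are hypothetical.

```python
# Hypothetical sketch: concatenating back-end LUNs into one logical volume.
# Only the address translation is modeled, not the actual I/O path.

class ConcatVolume:
    def __init__(self, luns):
        # luns: ordered list of (lun_name, size_in_blocks)
        self.luns = luns

    def map_block(self, logical_block):
        """Translate a logical block address to (lun_name, block_within_lun)."""
        offset = logical_block
        for name, size in self.luns:
            if offset < size:
                return name, offset
            offset -= size
        raise ValueError("logical block beyond end of volume")

vol = ConcatVolume([("fc_lun0", 1_000_000), ("fc_lun1", 500_000)])
print(vol.map_block(1_200_000))   # -> ('fc_lun1', 200000)
```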

Object-oriented storage platforms separate metadata about the storage object from the actual user data in the storage object and provide an easy, indexable, referenceable means to access and retrieve that information. For data types where the indexing information can add up to a significant portion of the data storage, separation of the metadata provides scaling and performance advantages. Consider email as an example. In the case of a short, two- or three-line email, the metadata about the message (to, from, date, subject, size, etc.) can add up to as much as the data itself (i.e., the actual message text). Many object-oriented platforms run on rack-mounted x86 servers, using IP and Ethernet as the primary means of communication and storage access.
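A minimal sketch of that separation is shown below: the metadata record and the message body are stored as distinct objects keyed by a content-derived identifier, and for a short email the metadata is as large as, or larger than, the data itself. The layout and function names are hypothetical and are not EMC Centera's actual interface.

```python
# Hypothetical sketch: separating object metadata from object data,
# keyed by a content-derived identifier (as content-addressed stores do).
import hashlib
import json

data_store = {}       # object_id -> raw bytes (the user data)
metadata_index = {}   # object_id -> metadata dict (indexable/searchable)

def store_object(data: bytes, metadata: dict) -> str:
    object_id = hashlib.sha1(data).hexdigest()
    data_store[object_id] = data
    metadata_index[object_id] = metadata
    return object_id

body = b"Sounds good. See you at 3pm.\n-- Dana"
meta = {"to": "pat@example.com", "from": "dana@example.com",
        "date": "2003-06-12", "subject": "Meeting time", "size": len(body)}

oid = store_object(body, meta)
print(len(body), len(json.dumps(meta)))   # metadata rivals the short body in size
print(metadata_index[oid]["subject"])     # query metadata without touching the data
```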

8.1.3 Retention of Specialized Storage Equipment

The use of x86 platforms as intelligent storage nodes is not meant to imply the replacement of other specialized storage equipment. We are likely to see the continued deployment of enterprise-class storage functions on specialized hardware platforms for storage. These were outlined in a figure from Chapter 4, repeated here as Figure 8-3. Within the network fabric or SAN core, hardware-enabled appliances, switches, and directors will continue to be deployed for the most demanding applications. There is simply no way that a multipurpose server platform could reach the same performance levels as custom hardware and ASICs. Yet even today these custom hardware products are gravitating towards more modular, flexible designs. In the director category, vendors have introduced multiprotocol and application-specific blades, providing a robust path for migrating additional storage functions into these devices.

Figure 8-3. Locations for storage intelligence.


On the target side, RAID systems with intelligent controllers are likely to outperform more generic platforms. The caching functions of large storage systems provide significant throughput advantages for high-transaction applications. Given the design benefits of having the cache tightly coupled with the drive controllers, this element of storage functionality will likely remain subsystem-focused and part of hardware-specific platforms such as the multi-terabyte disk arrays from Hitachi, EMC, and IBM.
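To show why a cache in the data path matters for high-transaction workloads, here is a minimal sketch of an LRU read cache fronting a slower block store; a hot working set is served from the cache after the first pass. It is purely illustrative and does not represent any vendor's controller logic.

```python
# Hypothetical sketch: a small LRU read cache in front of a slower block
# store. Sizes and block counts are illustrative only.
from collections import OrderedDict

class CachedBlockStore:
    def __init__(self, backing, cache_blocks=1024):
        self.backing = backing            # dict-like: block number -> data
        self.cache = OrderedDict()        # LRU order: oldest entry first
        self.cache_blocks = cache_blocks
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # mark as most recently used
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # slow path: go to the drives
        self.cache[block] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict least recently used block
        return data

store = CachedBlockStore({n: b"x" * 512 for n in range(10_000)}, cache_blocks=100)
for _ in range(3):
    for n in range(50):                     # a hot working set stays cached
        store.read(n)
print(store.hits, store.misses)             # 100 hits, 50 misses
```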

Areas of intelligence in Figure 8-3 can be viewed as an overlay on the IP storage model in Figure 8-2. The host and subsystem intelligence layers fit in the access layer of the IP storage model. The fabric intelligence translates to the storage distribution layer and the IP core.


