Future Storage Connectivity


As the core SAN and NAS technologies evolve, they will also be shaped by changes in the microprocessor and networking industries. Some of these changes are external to the server and largely network driven. Others are internal, arising from advances in chip-to-chip communication within the microprocessor system.

iSCSI, LAN/WAN Storage

One limitation of the SAN I/O architecture is distance. Performing block I/O remotely has required a NAS solution, with the inherent overhead of file I/O processing. If server I/O operations could be extended over distance, native block I/O could take place without a separate network and the special devices required by FC SANs, or the dedicated server devices associated with NAS. That would be an efficient and cost-effective way to access data in remote storage arrays.

This is the promise of an emerging standard and product solution called iSCSI (Internet SCSI), which defines how SCSI commands are transmitted through an IP-based network. iSCSI allows a server to send a block I/O request through an existing TCP/IP network and execute the SCSI read/write operations on a remote storage array.

iSCSI configurations require special network interface cards (NICs) that provide the iSCSI command set at the storage array end (as shown in Figure 20-4). Data and SCSI commands are encapsulated into IP packets and transmitted through an existing IP network, bypassing the file-level protocol processing and server-to-server communication inherent in NAS solutions. On the surface, this provides an effective mechanism for disaster recovery, data replication, and data distribution.

Figure 20-4: iSCSI configurations showing remote block I/O operation
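To make the encapsulation idea concrete, the following Python sketch shows how a SCSI READ(10) command descriptor block might be wrapped in a small header and carried as a TCP payload across an IP network. The header layout, target address, and helper names are illustrative assumptions only; they do not reflect the actual iSCSI PDU format defined by the standard.

# Illustrative sketch only: shows the idea of wrapping a SCSI command
# descriptor block (CDB) in a TCP payload so it can cross an IP network.
# The 3-byte header used here is simplified and is NOT the real iSCSI
# PDU format defined by the iSCSI standard.
import socket
import struct

def build_read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) CDB for a given logical block address."""
    opcode = 0x28                      # READ(10)
    return struct.pack(">BBIBHB", opcode, 0, lba, 0, blocks, 0)

def encapsulate(cdb: bytes, lun: int = 0) -> bytes:
    """Prefix the CDB with a toy header (length + LUN) to form a 'PDU'."""
    header = struct.pack(">HB", len(cdb), lun)
    return header + cdb

def send_block_request(target_ip: str, port: int, lba: int, blocks: int) -> None:
    """Send the encapsulated command to a hypothetical listener on the array."""
    pdu = encapsulate(build_read10_cdb(lba, blocks))
    with socket.create_connection((target_ip, port)) as sock:
        sock.sendall(pdu)              # TCP/IP carries the block I/O request

if __name__ == "__main__":
    # 192.0.2.10 is a placeholder address; 3260 is the registered iSCSI port.
    send_block_request("192.0.2.10", 3260, lba=2048, blocks=8)

Port 3260 is the port registered for iSCSI, but everything else in the sketch is simplified; a real initiator also handles login, session negotiation, error recovery, and security.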

However, implementing and managing an iSCSI configuration raises a number of issues. The first is the security of sending unsecured block I/O across a public network. A related concern is the reliability of IP packet transmission. Overall, workloads that can tolerate unsecured environments and packet transmission errors may be the ones best suited to this emerging technology.

Extending I/O functionality beyond the server has been a goal of system vendors for many years in their struggle to replace the aging bus mechanisms that form the foundation of current computing platforms. Multiple initiatives proposing that I/O be disengaged from the computing elements have coalesced into the industry initiative of InfiniBand. InfiniBand provides a switched fabric very similar to the FC SAN switched-fabric environment; however, the InfiniBand standard encompasses all I/O, from the processor out to the external connectivity of networks and storage devices. On the surface, InfiniBand can be viewed as a replacement for the aging PCI bus technologies (see Chapter 7). Due to its scalable I/O infrastructure, however, it represents a shift in computing fundamentals: a switched-fabric architecture that allows devices to communicate with processor nodes with greater bandwidth and throughput and lower latency.

InfiniBand, the Universal Bus

InfiniBand is an I/O standard developed by seven of the computer industry's key players: IBM, Compaq, HP, Sun Microsystems, Microsoft, Intel, and Dell. It replaces traditional PCI bus technology with a switched fabric (think network) that allows peripherals, such as storage and client/data networks, to communicate within a network of servers. It also allows InfiniBand-enabled servers to use the same fabric to communicate among themselves.

InfiniBand is poised to cause a fundamental change to the data center as we know it. As its scalable I/O infrastructure is integrated, total cost of ownership (TCO) models and metrics will change, as will the way applications are deployed and managed. IT departments, especially IT management, must understand this infrastructure if they hope to pursue intelligent adoption strategies. InfiniBand vendors, for their part, must move toward systems solutions in order to overcome the initial wave of anxiety and reduce the complexity of integration and adoption.

As illustrated in Figure 20-5, a server connects to the I/O fabric through a Host Channel Adapter (HCA), while peripherals connect through a Target Channel Adapter (TCA). These components communicate through a switch that routes traffic to all the nodes that make up the fabric. In addition, the fabric uses remote direct memory access (RDMA) to facilitate application I/O operations. InfiniBand links are serial, segmented into 16 logical lanes of traffic. Given this architecture, the bandwidth of each link eclipses the other data transports available today.

Figure 20-5: InfiniBand's I/O fabric
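The RDMA concept can be pictured with a small toy model in Python: a node registers a memory region with its channel adapter, and a peer's adapter writes data directly into that region without involving the remote CPU. The class and method names (HostChannelAdapter, MemoryRegion, rdma_write) are invented for illustration and are not the InfiniBand verbs API.

# Toy model of RDMA-style one-sided transfers over a switched fabric.
# Names and structure are invented for illustration; the real InfiniBand
# verbs interface is considerably richer.
from dataclasses import dataclass, field

@dataclass
class MemoryRegion:
    """A registered buffer the adapter is allowed to read or write directly."""
    key: int
    buffer: bytearray

@dataclass
class HostChannelAdapter:
    """Connects a server node to the fabric and owns its registered regions."""
    node: str
    regions: dict = field(default_factory=dict)

    def register(self, key: int, size: int) -> MemoryRegion:
        region = MemoryRegion(key, bytearray(size))
        self.regions[key] = region
        return region

    def rdma_write(self, remote: "HostChannelAdapter", key: int,
                   offset: int, data: bytes) -> None:
        # Data lands in the remote region without remote CPU involvement.
        target = remote.regions[key]
        target.buffer[offset:offset + len(data)] = data

# Two server nodes on the same fabric exchange data through their adapters.
hca_a = HostChannelAdapter("server-a")
hca_b = HostChannelAdapter("server-b")
hca_b.register(key=0x10, size=64)
hca_a.rdma_write(hca_b, key=0x10, offset=0, data=b"block payload")
print(hca_b.regions[0x10].buffer[:13])      # bytearray(b'block payload')

In a real fabric, the write would be posted as a work request to the HCA and carried across the switch; the point of the model is simply that the data path bypasses the remote operating system and CPU.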

Why is InfiniBand important? Servers have traditionally been deployed vertically: in most cases, expanding an existing application requires installing a new server. Each time this happens, the total overhead and latency needed to support the application increase. Left unchecked, the operating system services and network overhead required to support the additional servers can consume as much processing power as the application itself, or more. Application scalability ultimately becomes non-linear as the number of servers grows.

From an I/O standpoint, vertical deployment of servers poses additional problems. As application data grows, so must the I/O bandwidth and the number of paths needed to reach that data. If the server cannot provide either, the application suffers, waiting on data to be processed. The same thing happens when users are added to the application or existing users access it more heavily, as often happens with Internet applications: unless the required bandwidth and paths are in place, the application waits on long queues of user requests.

These conditions are further exacerbated by the inefficient use of resources, both hardware and software, needed to provide a configuration adequate for the workload. All of this places additional burdens on data-center personnel, who must handle the complexity of installation, maintenance, system upgrades, and workload management.

The result is a 'gridlock' effect. Unfortunately, planning and implementing configurations for new applications is even more daunting.

However, InfiniBand's potential value can be significant. The InfiniBand architecture addresses each of the challenges noted earlier. Scalability enhancements, in terms of link speeds, switched peripheral connectivity, and server-to-server connections, provide a significant boost to application scalability. Add the prioritization and management that fabric software provides over the links, and InfiniBand configurations can process multiple workloads within the same configuration, reducing the vertical implementation required by traditional server technology.

Once vertical implementation has been addressed, a new TCO model for the data center emerges, built on two key characteristics of InfiniBand technology. First, InfiniBand servers require only CPUs, memory, and their interconnects, using HCAs for all outside communication. This reduces an InfiniBand server to a 'blade' computer (a computer the size of a plug-in, board-level component). Smaller servers significantly reduce required space, power consumption, and installation complexity. Second, InfiniBand configurations centralize application processing and allow more dynamic operations within the fabric. Management of applications and related resources is centralized, so software and resources can be shared and used more efficiently as workloads demand.

What will the price be? In bringing an InfiniBand infrastructure into the data center, several considerations should be noted. InfiniBand will require a learning curve; there is no way around this. InfiniBand configurations affect all major components within the data center, especially the storage and networking components. Product offerings must support systems-level adoption, distribution, installation, and support. Software, as usual, will be the significant detractor: server OS integration and services, along with application optimization and compatibility, are a must.

Inside the Box

This section describes the requirements for HyperTransport and Rapid I/O, two in-the-box initiatives. One connects processor modules, providing the ability to interconnect multiple CPU configurations. The complementary initiative provides a faster, more efficient interconnect between the CPU components on the processor board itself.

Rapid I/O

Rapid I/O is an industry initiative that in some ways offers an alternative to the InfiniBand standard, moving the universal-bus concept down to the microprocessor level. It defines a serial connectivity protocol and physical-link standard for connecting processor board-level components. As shown in Figure 20-6, Rapid I/O provides an alternative to PCI and other bus-connectivity solutions by interconnecting components through a Rapid I/O switching mechanism.

Figure 20-6: Rapid I/O: a future processor module interconnect

Rapid I/O facilitates connectivity among clustered systems, multifunction embedded systems, and general-purpose systems that must communicate within close proximity of one another (for instance, within the same system chassis). The standard supports the connection of an extensive set of internal and external components. Rapid I/O is also one of the few future technologies that takes legacy components into consideration, providing a parallel variant that offers compatibility for components requiring multipin physical connections.
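As a rough illustration of the switched board-level interconnect idea, the short Python sketch below routes packets between components by destination device ID. The component names, IDs, and packet handling are assumptions made for illustration; they are not drawn from the Rapid I/O specification.

# Toy sketch of a switched board-level interconnect: each packet carries a
# destination device ID and the switch forwards it to the matching endpoint.
# Device names, IDs, and payloads are invented for illustration only.
from typing import Callable, Dict

class BoardSwitch:
    """Routes payloads between board-level components by destination ID."""
    def __init__(self) -> None:
        self.ports: Dict[int, Callable[[bytes], None]] = {}

    def attach(self, device_id: int, handler: Callable[[bytes], None]) -> None:
        self.ports[device_id] = handler

    def route(self, device_id: int, payload: bytes) -> None:
        self.ports[device_id](payload)     # forward to the attached endpoint

switch = BoardSwitch()
switch.attach(0x01, lambda p: print("memory controller received", p))
switch.attach(0x02, lambda p: print("DSP received", p))

# A processor element sends a payload to the memory controller (ID 0x01).
switch.route(0x01, b"write burst")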

HyperTransport

Providing a high-speed interconnect between components within the processor module has long been a goal of microprocessor designers. Traditional bus connections are being eclipsed by new serial connections that form a switched environment. HyperTransport is a vendor-sponsored initiative that proposes a standard set of specifications for building an internal switched bus to increase I/O flexibility and performance. Staging and moving data in and out of the CPU is just as important as the performance of external I/O.

The HyperTransport standard defines an I/O protocol that provides a scalable interconnect between CPU, memory, and I/O devices. As a switching protocol, the architecture is defined within a layered network approach based to some degree on the OSI network model; consequently, there are five distinct layers of processing that govern how internal components communicate.
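The layered approach can be illustrated with a brief Python sketch in which each layer wraps the payload handed down from the layer above with its own small header, and the receiver peels the headers back off. The numeric layer IDs and three-byte header are assumptions made for illustration; they do not mirror the actual layers or packet formats in the HyperTransport specification.

# Sketch of layered encapsulation: each layer wraps the payload handed down
# from the layer above with its own small header. The layer IDs and header
# format are generic stand-ins, not the HyperTransport packet formats.
import struct

def wrap(layer_id: int, payload: bytes) -> bytes:
    """Prefix a payload with a 3-byte header: layer ID + payload length."""
    return struct.pack(">BH", layer_id, len(payload)) + payload

def encapsulate(data: bytes, layers: int = 5) -> bytes:
    """Pass data down through N layers, wrapping it at each one."""
    frame = data
    for layer_id in range(layers, 0, -1):     # top layer down to the link
        frame = wrap(layer_id, frame)
    return frame

def unwrap(frame: bytes) -> bytes:
    """Strip one header on the receiving side, returning the inner payload."""
    layer_id, length = struct.unpack(">BH", frame[:3])   # layer_id tags the header
    return frame[3:3 + length]

original = b"cache line to memory"
on_the_wire = encapsulate(original, layers=5)
received = on_the_wire
for _ in range(5):                  # peel the five layers back off
    received = unwrap(received)
assert received == original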

Figure 20-7 depicts an example deployment of the HyperTransport architecture. In this type of configuration, an I/O switch handles multiple data streams and interconnections between CPUs and memory. The configuration can be extended to support external bus and network connections such as InfiniBand, PCI, and Gigabit Ethernet. This gives processing and memory components scalability and bandwidth flexibility similar to switched network environments such as InfiniBand and FC SANs.

Figure 20-7: HyperTransport: a future high-speed CPU interconnect
 