Strategic Rationale

The goal of the N1 Grid strategy is to place today's data center within the context of the technology trends that shaped it. If you understand how these trends have influenced today's architectures and implementations, and how they could continue for the foreseeable future, you can anticipate what data centers will look like in the future. That context also suggests the appropriate abstractions for hiding the increasing complexity in the data center. The N1 Grid strategy combines this context with new abstractions to define a standard for current and future data center solutions and technology offerings.

As discussed in Chapter 1, the evolution of the data center was driven by two specific technology trends:

  • Server-centric applications becoming network-centric services

  • Steady increases in network bandwidth

From Server-Centric Applications to Network-Centric Services

The first marked trend is the shift toward the decomposition and componentization of applications. Applications used to run on a single computer and were accessed through a terminal. Now, the components of a service are distributed across many devices, ranging from mainframes and UNIX® and Linux servers as back ends to PCs, mobile phones, PDAs, and kiosks as clients.

The programming model for applications has also changed, along with the deployment platforms. Over the last six or seven years, there has been a strong shift away from monolithic applications written in compiled procedural languages, such as C, Pascal, and Fortran, toward componentized services written in object-oriented, interpreted, or bytecode-compiled languages such as Java.

The former are server-centric applications: executable programs that run on top of a specific combination of operating environment and underlying processor architecture, compiled to a specific application binary interface (ABI). Provisioning these applications is usually a tangled combination of copying the application distribution onto storage associated with the target computer, binding the application to an operating system instance, and giving it an identity as part of a service. This approach leads to stacks of components that are inflexible and, in general, hard to deploy.

The move to technologies such as the Java platform and the J2EE platform, and the popularity of service-oriented architectures (SOAs) and web services, highlight the trend toward componentized (loosely coupled and network-distributed, or network-centric) services. In addition, the execution model, in which the service component runs within the Java™ 2 Virtual Machine (Java VM) software, fundamentally abstracts it from the underlying server and operating system. Combined with the J2EE platform, which provides life cycle management and a common set of platform services to applications, the Java VM removes the binding of a service component to the platform ABI. The component's context moves to the virtual machine, and thus to a potentially uniform network-wide environment.
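A minimal sketch can make this abstraction concrete. The class below is an illustrative service component (the names are hypothetical, not from any N1 Grid or J2EE API): it is compiled once to bytecode rather than to a platform ABI, so the same class file runs unchanged on any host with a Java VM, and the VM, not the component, does the binding to the operating system and processor.

```java
// Hypothetical service component, for illustration only. It is compiled
// to portable bytecode, not to a platform-specific ABI; the Java VM on
// each host supplies the binding to the underlying OS and CPU.
public class GreetingService {

    // The component's logic has no dependency on the host platform.
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        GreetingService svc = new GreetingService();
        System.out.println(svc.greet("N1"));
        // The component sees the platform only through the VM's
        // portable APIs, never through the platform ABI.
        System.out.println("Host OS: " + System.getProperty("os.name"));
    }
}
```

Because nothing in the class refers to the host, redeploying it to a different server in the network requires moving only the bytecode, which is the property the J2EE container model builds on.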

This creates enormous potential for dynamic and efficient deployment and redeployment of service components within a network. Components can be distributed across a variety of platforms so that the application delivers the desired business attributes in terms of performance, scalability, availability, security, and utilization. While the application programming interfaces (APIs) and ABIs associated with a server platform become less relevant in this context, the properties of the server and its operating system, in terms of performance, scaling, and availability, still matter because the attributes of the hosted service components still depend on them.

From a Network of Discrete Systems to a Fabric of Resources

The continuing growth of network bandwidth is the other noteworthy trend. Network bandwidth growth has consistently outpaced processor throughput improvements over the last twenty years (as shown in FIGURE 2-1).

Figure 2-1. Raw Single-Fibre Bandwidth Versus Single CPU Performance

For some classes of computers and some types of services, the bandwidth of the network connected to the computer is sufficient to fetch and store data, or to interact with other service components, across the network rather than requiring co-residency within a single operating system instance. This begins to blur the boundary between the computer and the network, whether it is a traditional network or a storage area network. Of course, some classes of applications still require the latency benefits of communication within a shared memory environment; these applications are said to be tightly coupled. Most services, however, do not need such tight coupling. They are loosely coupled, and their functionality is decomposed and distributed across a set of servers on a network. In doing so, they become network-centric services.
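The loose coupling described above can be sketched in a few lines. In this illustrative example (the protocol and names are assumptions, not from the source), two components exchange data over a TCP socket instead of sharing an address space; only the wire protocol couples them, so either side could be redeployed to another server on the network.

```java
import java.io.*;
import java.net.*;

// Illustrative sketch of two loosely coupled components. The "server"
// component and "client" component share no memory; they interact only
// over the network, here collapsed onto one host for demonstration.
public class LooselyCoupledDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(0)) {  // any free port
            int port = listener.getLocalPort();

            // Server component: answers each request with its uppercase form.
            Thread server = new Thread(() -> {
                try (Socket s = listener.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine().toUpperCase());
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            server.start();

            // Client component: coupled to the server only by the protocol,
            // not by co-residency in one operating system instance.
            try (Socket s = new Socket("localhost", port);
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println("network-centric");
                System.out.println(in.readLine());
            }
            server.join();
        }
    }
}
```

A tightly coupled design would instead pass the data through shared memory for lower latency; the trade-off is that both components must then reside in the same operating system instance.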

A look at Sun's latest servers is instructive. No longer do all of the processors and memory components simply communicate over a shared bus. Now they communicate through a switch: a very high-bandwidth, low-latency switch, but a switch nonetheless. Just as an Ethernet network switch can be partitioned so that only certain ports can exchange packets with each other, so too can the memory and I/O switches in the Sun Fire and Sun™ Enterprise 10000 servers. This is what enables a single rack of resources that share a switch to be partitioned into dynamic system domains, each running its own operating system on physically separate components. In some sense, there is already a network, or rather a fabric, inside the server. The conceptual boundary between the computer and the network is already very blurred.

The modern data center has become a fabric of interconnected resources that are connected, in general, by pervasive standards-based Ethernet IP networking, by Fibre Channel Arbitrated Loop (FC-AL) storage area networks (SANs), by InfiniBand-based shared I/O fabrics, and by memory fabrics.

Building N1 Grid Solutions: Preparing, Architecting, and Implementing Service-Centric Data Centers
Year: 2003
Pages: 144