Workload-to-Resources Mapping


Having described the resources and workload managed by the N1 Grid OE, it is now appropriate to explore how the workload is mapped onto the resources. In a traditional system, the operating system scheduler, memory manager, and device driver components map workloads onto processors, memory, and I/O resources based on simple policies. The operating system virtualizes the underlying resources and provisions the workloads onto them. In an N1 Grid system, virtualization and provisioning are also fundamentally important mechanisms that enable the N1 Grid OE to map workloads onto resources.

Today, both terms are used to describe work being undertaken to solve the data center management problems enumerated in Chapters 1 and 2. However, they are not usually described within the context of a single systemic view. Consequently, it is often unclear how these technologies should be combined or used together to deliver integrated infrastructure solutions. The N1 Grid vision, and specifically the N1 Grid OE, provides a single context for understanding these two important areas: how they are related, and how they can be combined to develop integrated infrastructure solutions.

Virtualization

In general, virtualization is the abstraction of some entity. Typically, virtualization involves adding a layer of software onto some entity so that the new layer exhibits the interface properties of the original entity. However, this layer hides the true implementation of the virtualized object so that the original entity can be changed or replaced without fundamentally impacting how other entities, which have a dependency on it, interact with it. This provides flexibility.

For example, a storage controller might virtualize a raw disk or a collection of raw disks by presenting a logical unit number (LUN). The LUN has all of the interface properties of a raw disk, yet that LUN might be a part of one physical disk or it might be a whole disk or a collection of disks in a RAID stripe. The point is that whoever or whatever uses the LUN does not need to care about its internal composition. They just use it and automatically take advantage of the properties of a specific underlying implementation (for example, greater performance or availability). Other examples include virtual local area networks (VLANs) and N1 Grid Containers.
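As a toy illustration of the LUN example above, the sketch below models a consumer that reads blocks through a disk-like interface while the backing layout stays hidden. The class name, block striping scheme, and dictionary-backed "disks" are invented for illustration; real storage controllers work at a much lower level.

```python
# A toy illustration of LUN virtualization: the consumer sees only a LUN with
# disk-like read operations, while the backing implementation may be a single
# disk or a stripe across several. All names here are invented examples.

class Lun:
    """Presents the interface of a raw disk, hiding the backing layout."""

    def __init__(self, backing_disks, block_size=512):
        self.disks = backing_disks      # each "disk" is a block-number -> bytes map
        self.block_size = block_size

    def read(self, block):
        # Stripe blocks round-robin across the backing disks (RAID-0 style).
        disk = self.disks[block % len(self.disks)]
        return disk.get(block, b"\x00" * self.block_size)

# One LUN backed by a single disk, another striped across three disks: the
# consumer reads both through exactly the same interface.
single = Lun([{0: b"a" * 512}])
striped = Lun([{}, {1: b"b" * 512}, {}])
```

The consumer's code is identical either way; only the properties it inherits (capacity, performance, availability) differ with the backing implementation.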

Another example of virtualization involves adding a layer of software that changes the level of abstraction of interaction with an object, changing the attributes of the interfaces (for example, to make it more manageable). An example of this would be enabling the N1 Grid OE to manage an application (for example, a database service) based on quality-of-service goals (for example, the average transactional response time), rather than having to manually manage numbers of processors, amount of memory, and I/O allocation directly. Thus, software can be used to translate between the old entity that was managed and the new abstract entity. The administrator of an N1 Grid system manages services, while the N1 Grid OE translates this into the management of more traditional workloads and resources. This means that management scaling can be fundamentally changed and that efficiency, reliability, and agility can be improved through automation.
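The translation described above, from a quality-of-service goal down to a resource allocation, can be sketched as a simple feedback rule. This is a minimal sketch under assumed thresholds; the function name, the halve-the-goal release threshold, and the one-processor step size are all illustrative assumptions, not part of any N1 Grid interface.

```python
# Hypothetical sketch of goal-based management: the administrator states a
# quality-of-service goal (average response time), and a translation layer
# decides how many processors the workload should receive. Thresholds and
# step sizes are invented for illustration.

def processors_needed(goal_ms, measured_ms, current_cpus, max_cpus=32):
    """Translate a response-time goal into a processor allocation."""
    if measured_ms > goal_ms:            # missing the goal: add capacity
        return min(current_cpus + 1, max_cpus)
    if measured_ms < goal_ms * 0.5:      # comfortably under goal: release capacity
        return max(current_cpus - 1, 1)
    return current_cpus                  # goal met: leave the allocation alone

# The administrator manages the goal; the translation layer manages resources.
allocation = processors_needed(goal_ms=200, measured_ms=350, current_cpus=4)
```

The point of the sketch is the division of labor: the human states the goal in service terms, and software repeatedly performs the translation into processor, memory, and I/O terms.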

An operating system or operating environment virtualizes the sets of resources under its control, presenting them to the services that consume them in a consistent fashion (FIGURE 3-2).

Figure 3-2. Traditional Operating System Virtualization


A traditional operating system like the Solaris OS virtualizes processors, memory, and I/O within a large SMP. The operating system maps a workload onto resources in line with policies. The workload resolves to a set of processes or threads that run on some set of resources. The operating system virtualizes the resources so that they appear equivalent to the application. For example, an application that runs on four processors can run on any four processors within a 32-processor system.

The operating system and the system itself ensure that the identity of physical processors does not matter. Indeed, the actual processors used could change, from one instant to the next, as long as four of them are used. Thus, virtualization turns a collection of discrete, identical resources into a pool of shared resources. This enables greater flexibility and utilization of those resources because they can be efficiently shared. For example, two applications could run and use all 32 processors. However, at any instant, application A could be using any four of the processors, and application B could be using 28. In another instant, application A could be using ten processors, and application B could be using 22. The system decides how many, and which specific ones, to use on behalf of the service, removing complexity and improving performance and utilization.
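The pooling behavior described above can be sketched in a few lines: processors are interchangeable members of a pool, and the split between applications can change from one instant to the next. This is a conceptual sketch only; the function and pool names are invented, and a real scheduler works at far finer granularity.

```python
# A minimal sketch of resource pooling: two applications share a pool of 32
# interchangeable processors, and the identity of the processors each one
# receives can change from one scheduling instant to the next.

import random

POOL = set(range(32))  # 32 interchangeable processors

def allocate(pool, count):
    """Pick any `count` processors from the pool; their identity is irrelevant."""
    chosen = set(random.sample(sorted(pool), count))
    return chosen, pool - chosen

# One instant: application A holds 4 processors, application B holds 28.
a_cpus, remaining = allocate(POOL, 4)
b_cpus, remaining = allocate(remaining, 28)

# Another instant: the split shifts to 10 and 22 with no reconfiguration,
# and neither application cares which physical processors it received.
a_cpus, remaining = allocate(POOL, 10)
b_cpus, remaining = allocate(remaining, 22)
```

Because consumers never name specific processors, the system is free to hand out whichever ones are available, which is exactly what makes high utilization possible.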

The N1 Grid system analogy of this (see FIGURE 3-3) is that the N1 Grid OE can turn 200 blade servers into a single pool of resources so that you no longer care which 12 are used for web server instances within a multitier bookstore service. The N1 Grid OE ensures that the right number, in this case 12, is used to meet the business goals of the service.

Figure 3-3. N1 Grid Operating Environment Virtualization


FIGURE 3-4 shows the most basic view of virtualization. If you think about both previously described definitions of virtualization, then you can view the data center as a series of layers of components, each depending on, and virtualizing, the layers below it, either through simply separating the logical properties from the physical entities or through changing the level of abstraction to present new properties to the layers above. Each layer of virtualization provides the opportunity to hide complexity and improve flexibility and efficiency.

Figure 3-4. Layers of Virtualization



Applying this technique to the N1 Grid system addresses several of the problems associated with managing data center infrastructures by:

  • Removing complexity

    This improves management scaling, reliability, availability, and security, and it reduces costs and risk.

  • Increasing flexibility

    This enables faster repurposing to recover from failure, faster repurposing to cater to load or goal variation, and faster time to market in deploying new services through the pipeline.

Provisioning Services

The systemic approach, combined with virtualization techniques, also changes the perspective on provisioning services or applications. Provisioning is the act of taking a service, installing it, and ending up with a running service on a system or collection of systems.

Because most provisioning today has a considerable manual element to it and is typically siloed (for example, into network, storage, and server-related aspects), it is risk laden, time consuming, and expensive. This results in a reluctance to repurpose infrastructure components, which in turn leads to static environments with poor utilization and a lack of flexibility. The model for provisioning is server-centric, rather than system-centric. If you take a system-centric approach, these problems can be resolved.

Provisioning typically requires the following sequence of events:

  1. Copying the bits somewhere (for example, from a compact disc to storage, associated with a specific operating system instance)

  2. Binding them to the underlying platform instance (base hardware or hardware and operating system stack, if already configured)

  3. Turning them into a service-specific instance

  4. Instantiating them (that is, running them)
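The four phases above can be sketched as explicit, separable steps. Keeping them separate (rather than collapsing them into a single install script) is what later allows the binding step to be deferred. The function names and dictionary fields below are illustrative placeholders, not any real provisioning API.

```python
# A sketch of the four provisioning phases as explicit, separable steps.
# Each function is an illustrative placeholder for a real provisioning action.

def copy_bits(source, storage):
    """Phase 1: copy the installation bits onto (shared) storage."""
    return {"image": f"{storage}/{source}"}

def bind_to_platform(component, platform):
    """Phase 2: bind the component to a platform instance."""
    return {**component, "platform": platform}

def configure_service(component, service_config):
    """Phase 3: turn the generic component into a service-specific instance."""
    return {**component, "config": service_config}

def instantiate(component):
    """Phase 4: run the service instance."""
    return {**component, "state": "running"}

# Traditional ordering: copy, bind to a host, configure, then run.
svc = instantiate(configure_service(
    bind_to_platform(copy_bits("db-9.2", "/net/shared"), "host-a"), "bookstore"))
```

Note that nothing in phases 1 and 3 actually requires a platform to have been chosen; only the composition order above forces it, which is the point developed in the following sections.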

Typically, this sequence is done in one or two steps, whereas logically, it might be better to break it down into several separate steps. FIGURE 3-5 shows a comparison of traditional and N1 Grid OE-based steps of provisioning. The new sequence, and tools that automate it, enable more dynamic provisioning in response to load or policy changes.

Figure 3-5. Flow Between Provisioning Model Phases


Provisioning today is viewed as server-centric. In fact, it is system-centric, but only in the old sense of a system: a single server. For example, when installing a database that requires four processors on a 24-processor system, the software is installed to the system, not to four specific processors. Then, the system instantiates the database on the four processors of its choosing, based on policies and goals.

Extending this model to an N1 Grid system, the database is installed to the system, in this case the N1 Grid system. It is then instantiated on any four processors of any platform that meets the dependency requirements of the database. For example, it might require the Solaris 8 OS with patches X, Y, and Z.

The model is in essence the same, if you think in terms of a system. However, the sequence of events changes, or rather, it becomes important. In a traditional view, the ordering of binding to the platform instance and turning a generic component into a specific service instance (for example, turning a generic database installation into the database for a bookstore by creating the table spaces and loading them) is not important. Both have to be done after installing the bits and before running the instance.

The order matters when provisioning within the context of the N1 Grid system. By creating a service-specific instance that is not bound to a specific platform or operating system instance, you end up with a service component (in this case a database) that is able to run on a number of potential platform components. The target platform can be decided at runtime, based on available resources and a comparison of the goals and priorities of this service versus those of others hosted within the same N1 Grid system. Indeed, the target platform can be easily changed if the initial one no longer meets the resource requirements of the service component, due to load or policy changes.

In a traditional system, the database is installed to the system that is a server, and the system runs it on any of the processors it deems appropriate. In an N1 Grid system, the database is installed to the system that is a network. Thus, it is installed on shared storage so that it can be run on any four processors of any server that has the right underlying instruction set architecture and operating system version. Ultimately, it is virtualization that enables this dynamic binding and rebinding.
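The runtime binding decision described above can be sketched as a simple matching step: the service instance carries its dependency requirements, and any platform in the pool that satisfies them is an acceptable home. The hostnames, patch identifiers, and record fields below are invented examples.

```python
# A sketch of runtime target selection: a service-specific instance declares
# its dependency requirements, and the system matches them against the pool.
# All names and values are illustrative assumptions.

REQUIREMENTS = {"os": "Solaris 8", "patches": {"X", "Y", "Z"}, "cpus": 4}

PLATFORMS = [
    {"name": "blade-07", "os": "Solaris 8", "patches": {"X", "Y"},      "free_cpus": 8},
    {"name": "blade-12", "os": "Solaris 8", "patches": {"X", "Y", "Z"}, "free_cpus": 6},
    {"name": "blade-19", "os": "Solaris 9", "patches": {"X", "Y", "Z"}, "free_cpus": 16},
]

def candidates(platforms, req):
    """Return every platform that meets the service's dependency requirements."""
    return [p["name"] for p in platforms
            if p["os"] == req["os"]
            and req["patches"] <= p["patches"]       # required patches present
            and p["free_cpus"] >= req["cpus"]]

# The binding can be decided now, at runtime, and rebound later if load or
# policy changes make the current platform unsuitable.
targets = candidates(PLATFORMS, REQUIREMENTS)
```

Because the service instance was never bound to a specific host at configuration time, rebinding is just a matter of re-running the match and re-instantiating elsewhere.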

As an aside, it should be obvious that Java and the J2EE platform become the ultimate in server virtualization technologies. The target platform for a Java or Enterprise JavaBeans™ (EJB™) component then becomes any J2EE application server that supports the appropriate version of Java or the J2EE platform. Thus, the compute elements are effectively made homogeneous, and maximum flexibility and choice of deployment is achieved.

Virtualization and Provisioning Examples

The following are some simple examples that illustrate the potential of combining virtualization with provisioning, using functionality available today, but within the context of the N1 Grid solutions.

Provisioning a Service Component on an Existing Operating System

This section contains an example of provisioning a service component, such as a database, on an existing operating system instance.

A database can be installed, and table spaces created and loaded, on shared storage, such as network-attached storage (NAS) or SAN storage, so that it can be instantiated on any one of a set of servers, as long as each meets the database requirements in terms of operating system and patch level and each is able to access the storage.

The database can also be provided with its own IP address and host name, independent of those associated with the underlying server, so that they can move with the database service to wherever it is instantiated.

After it has been installed and configured on an initial host environment, the set of variables and configuration parameters that associate it with a specific operating system instance and underlying platform need to be noted. Then, the database can be stopped.

When the database is to be instantiated, the storage containing the database application and that containing the data can be associated with the chosen, preinstalled, and already running target operating system instance. Then, the appropriate operating system tuning can be done, together with the configuration of any other operating system instance-specific dependencies of the database (for example, an IP address).

If the database is to run in a shared environment (that is, one in which the hosting operating system is also hosting one or more other service components), this configuration can be applied to an operating system partition, such as an N1 Grid Container.

Virtualization of the storage environment and of the network, combined with the N1 Grid vision of provisioning, results in a database service that can be instantiated on any server running the correct operating system version and set of patches. This enables the database to be stopped and restarted on a different compute element in the event of changing resource requirements or in the event of underlying platform failures. Repurposing becomes automated and reliable enough to enable greater agility, potentially greater availability, and greater resource utilization.
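The instantiation sequence just described can be expressed as an ordered, host-independent plan: attach the shared storage, plumb the service's own IP address, apply the noted tunables, and start the database. This is a sketch only; the paths, virtual IP, and tunable name below are invented examples, and the real commands would be platform-specific.

```python
# A sketch of the database instantiation sequence as an ordered plan of
# host-independent steps. Paths, the virtual IP, and the tunable are
# illustrative assumptions, not real configuration values.

def instantiation_plan(target_host, service_ip, storage_paths, tunables):
    """Build the ordered steps to instantiate the database on a chosen host."""
    steps = [f"mount {path} on {target_host}" for path in storage_paths]
    steps.append(f"plumb virtual IP {service_ip} on {target_host}")
    steps += [f"set {key}={value} on {target_host}" for key, value in tunables.items()]
    steps.append(f"start database on {target_host}")
    return steps

plan = instantiation_plan(
    target_host="blade-12",
    service_ip="192.0.2.10",   # moves with the service, not with the server
    storage_paths=["/net/nas/db/binaries", "/net/nas/db/tables"],
    tunables={"shmmax": "0x20000000"},
)
```

Because every step is parameterized by the target host, stopping the database and re-running the same plan against a different host is all that repurposing requires.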

Provisioning a Service Component as Part of a Complete Bootable Image

This section contains an example of provisioning a service component, such as an application server, as part of a complete bootable image.

Consider an application server instance that is part of a bootable image in a SAN environment. Again, this complete bootable image could be created on a reference server, and all instance-specific tunables and configuration parameters noted. The application server could be a part of a cluster of application servers that are fronted by a load balancer of some description.

This bootable stack could then be booted on one or more compute elements. If the compute elements are identical, little modification to the boot image would be required during the boot and application server instantiation.
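The boot step above can be sketched as attaching the same SAN-resident image to interchangeable compute elements, with only a short list of per-instance tweaks. The LUN path, element names, and tweak fields are invented for illustration.

```python
# A sketch of booting one complete SAN-resident image on interchangeable
# compute elements. The per-instance tweaks model the "little modification"
# needed when the elements are identical. All names are invented examples.

def boot_image(image_lun, element, per_instance_tweaks=()):
    """Attach the bootable image to a compute element and boot it."""
    return {
        "element": element,
        "boot_device": image_lun,
        "tweaks_applied": list(per_instance_tweaks),
        "state": "booted",
    }

# Boot the same application-server stack on two identical blades; only the
# hostname and IP address differ per instance.
nodes = [boot_image("san:/luns/appserver-img", elem,
                    [f"hostname={elem}", f"ip=10.0.0.{i}"])
         for i, elem in enumerate(["blade-01", "blade-02"], start=1)]
```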

As with the first example, this method results in greater agility, especially if management of the load balancing or application server clustering functionality is automated and integrated with the application server instantiation.

Both of these examples illustrate that the principal behavioral aspects of an N1 Grid system can be realized today. They often are not, however, because too many tools must be used to implement this behavior, and because service component instance-specific configuration parameters and tuning information are often not properly understood or noted. In short, implementation is complex and typically a manual process. Thus, it is time consuming and error prone. These implementations become viable only after the configuration information can be formally captured and an integrated mechanism can be provided for associating and binding a service component with an operating system instance or for associating a stack with a compute element. Combining virtualization techniques with a different focus on provisioning makes it possible to provision and reprovision service components dynamically, resulting in better resource utilization, agility, and availability.

Separating the aspect of provisioning that creates a service-specific instance of an application (for example, the creation of the database for the bookstore) from the aspect that binds it to the operating system instance it is to run on (that is, instantiation) is of fundamental importance. Combining this with virtualization to provide flexible mapping between the service-specific instance and an appropriate underlying resource enables enormous flexibility.

Many of the mechanisms required to implement these aspects of an N1 Grid OE exist today. However, to actually deliver this functionality and consequent value, a consistent context is required, and many tools must be manipulated or managed, often manually. The lack of such a context and the complexity in terms of available tools result in risk. Thus, infrastructure architectures and implementations that leverage all of the available mechanisms are rarely realized. The goal of the N1 Grid strategy is to both automate this binding or provisioning and to automate the decision-making process that determines what resources to allocate to the service components and how to change them to meet the high-level business goals for the service.



Building N1 Grid Solutions: Preparing, Architecting, and Implementing Service-Centric Data Centers
Year: 2003
Pages: 144
