Global Workload Manager

HP has had great success with the Workload Manager product. They have learned much about how customers use the product and how they want to manage the sharing of resources. In addition, a new type of customer environment with unique needs has emerged: IT as a service. This is a fast-growing trend in the industry because it allows IT organizations to take better advantage of the virtualization technologies that are now available. Service providers, and IT organizations that act like service providers, have several specific requirements:

  • They need to manage larger numbers of servers, so they need ease of use and the ability to create a small set of standard policies and reuse them. This is much like the service levels offered by a web site hosting provider: one plan might get 500MB of space and a second might get 1GB.

  • The IT organization does not know the relative priorities of the applications, so it needs to allow sharing of resources with all workloads at the same priority.

  • The IT organization needs to be able to guarantee a certain level of resources when a system is under heavy load.

  • The IT organization will need a way to measure how much resource each workload uses so that they can charge the business units for their actual usage.

  • The IT organization needs to be able to manage a large number of servers from a central location.

To meet these requirements, HP developed the new Global Workload Manager (gWLM) product, which is tightly integrated into the VSE management suite of products. It provides many of the same features as the WLM product, but does so from a central management server. To keep the product simple, the most important customer use cases are also very easy to implement.


This section only provides an overview of the Global Workload Manager product. More details, and some usage examples, will be provided in Chapter 19, "Global Workload Manager."

gWLM Concepts

The gWLM product introduces a few simple concepts that describe how it works. These are:

  • Workload: a set of processes that together make up some functionality that is useful to the business.

  • Compartment: the entity to which gWLM allocates resources. A compartment can be a partition, or an entire system if that system has Temporary Instant Capacity or Pay per use (PPU) to make its capacity flexible.

  • Shared Resource Domain (SRD): the set of compartments over which you can share resources. The set of partitions that are allowable here will depend on the technologies you are using to allocate resources to your workloads. For example, a set of nPars on a system can be an SRD if there are Instant Capacity processors available so the active capacity can be reallocated to the different nPars as demand varies.

  • Policy: defines how resources should be allocated to the workload. Each workload is assigned a policy.

  • Mode: the mode of operation that gWLM should use when managing a particular SRD. It is possible to run gWLM in either advisory or managed mode. Running gWLM in advisory mode means that it will monitor each of your workloads and your compartments and make recommendations about how much resource each workload needs at any time, but it won't reallocate resources. Managed mode will cause gWLM to automatically act upon the recommendations.
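
The relationships between these concepts can be sketched as a small data model. This is purely illustrative: the class and field names below are hypothetical and are not part of any actual gWLM interface.

```python
from dataclasses import dataclass

# Illustrative model of the gWLM vocabulary; names are hypothetical,
# not drawn from a real gWLM API.

@dataclass
class Policy:
    kind: str  # "fixed", "ownborrow", "utilization", or "custom"

@dataclass
class Workload:
    name: str
    policy: Policy  # each workload is assigned exactly one policy

@dataclass
class SharedResourceDomain:
    compartments: dict  # compartment name -> Workload it hosts
    mode: str           # "advisory" or "managed"

    def applies_changes(self) -> bool:
        # Only managed mode actually reallocates resources;
        # advisory mode merely reports recommendations.
        return self.mode == "managed"
```

For example, an SRD created in advisory mode would monitor its workloads but never move resources, which matches the safe way to evaluate gWLM before turning on managed mode.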

A gWLM home page within Systems Insight Manager provides a simple interface for the most common gWLM tasks. This is shown in Figure 13-23.

Figure 13-23. The Global Workload Manager Home Page

This view is intended to give a new user a starting point for managing systems with gWLM. All of the links shown here can also be reached through the corresponding items on the gWLM menu.

Compartment Types

The 2.0 release of gWLM will support a broader set of compartment types. These include:

  • Secure Resource Partitions: FSS- or PSET-based resource partitions with optional security compartments. gWLM allocates CPU at sub-CPU granularity to FSS-based secure resource partitions and at whole-CPU granularity to PSET-based partitions.

  • Virtual Partitions (vPars): gWLM provides the ability to reallocate CPUs between different vPars on the same system or nPar.

  • nPars with Instant Capacity: If a system has Instant Capacity processors, gWLM can be used to reallocate the active CPUs between the nPars on the system.

  • Integrity VMs: gWLM is able to reallocate the amount of physical CPU that is allocated to each Virtual CPU in Integrity VMs.

  • Utility Systems: Any system with Temporary Instant Capacity or Pay per use processors can use gWLM to control the activation and deactivation of those utility processors.

  • Linux Processor Sets: gWLM supports the reallocation of CPUs between processor sets on an Integrity system or nPar running Linux. This does, however, require that the Linux distribution run version 2.6 or later of the Linux kernel, the version in which processor sets were added.

In addition to flexing these compartments, gWLM will also automate the activation and deactivation of Utility Pricing technologies.

  • Temporary Instant Capacity: on any system that has Temporary Instant Capacity, gWLM activates Instant Capacity processors when they are needed to satisfy the defined policies.

The gWLM 2.0 product will also be supported on OpenVMS, which has some very similar flexing technologies. These include:

  • Galaxy: this is similar to HP-UX vPars.

  • Processor Affinity: This is similar to Processor Sets, which gWLM already supports on both HP-UX and Linux.

  • Class Scheduler: This is similar to the FSS groups that are available on HP-UX and the Class-based Kernel Resource Management (CKRM) scheduler that is available in some Linux distributions.

Policy Types

A policy describes to gWLM how to determine when a workload needs more or fewer resources. The policy types supported in the 2.0 version of gWLM will include:

  • Fixed: A fixed policy is used when the entitlement should stay the same regardless of what may be happening on the system.

  • OwnBorrow: This is a new policy type that is unique to gWLM. Each workload will "own" a certain amount of resources. This amount is guaranteed to be available if the workload needs it. However, if the workload is idle, some of these resources may be loaned out to other workloads that are busy. In exchange, this workload may be able to borrow resources from other idle workloads if it needs more than its "owned" value. This allows workloads to share idle resources but have a certain amount that will always be available if they need them.

  • Utilization: This policy allows you to set a minimum and a maximum entitlement for the workload as well as a utilization target. When the utilization of the resource is below the target, gWLM will remove resources from the compartment and allow a busy workload to use them. When the utilization is above the target, gWLM will attempt to allocate additional resources to satisfy this workload when it is busy.

  • Custom: The custom policy allows you to create an OwnBorrow type of control, but using a custom metric rather than the default CPU utilization.
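
The OwnBorrow idea can be illustrated with a short sketch. gWLM's real arbitration algorithm is not documented here; this simplified version only shows the guarantee-then-borrow behavior described above.

```python
def allocate_ownborrow(owned, demand):
    """Illustrative OwnBorrow allocation (not gWLM's actual algorithm).

    owned:  workload -> CPUs guaranteed to that workload
    demand: workload -> CPUs the workload currently wants
    """
    # Each workload is guaranteed up to its owned amount.
    alloc = {w: min(owned[w], demand[w]) for w in owned}
    # Idle workloads lend their unused owned share to a common pool.
    pool = sum(owned[w] - alloc[w] for w in owned)
    # Busy workloads then borrow from the pool, neediest first.
    for w in sorted(owned, key=lambda w: demand[w] - alloc[w], reverse=True):
        extra = min(demand[w] - alloc[w], pool)
        alloc[w] += extra
        pool -= extra
    return alloc
```

With two workloads that each own 2 CPUs, an idle workload lends its spare CPU to a busy neighbor, yet a workload that suddenly needs its owned 2 CPUs always gets them back.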

gWLM Features and Benefits

The gWLM product has a number of key features. The most obvious is that it allows you to manage how resources are allocated to different workloads running in the infrastructure. In addition, it provides monitoring functionality that allows you to see how resources are being allocated. It also integrates with other VSE or ISV applications.

Resource Management

Arguably the most compelling feature of gWLM is its ability to automate the assignment of resources in your VSE to the workloads that need them in real time. Much like the original WLM product, gWLM provides automation capabilities on top of all the other flexing technologies available in the VSE.

One nice side effect of the way gWLM implements policies is that they are independent of the flexing technology used. This allows the same policy to apply both to workloads running on nPars with Instant Capacity and to workloads running in Linux PSETs.

Integrity Virtual Machines

Allocation of CPU resources to Integrity Virtual Machines is very different from allocation to nPars and vPars, because CPU resources are allocated at sub-CPU granularity. CPUs are not moved from one partition to another; instead, you increase one VM's share of the resource by decreasing another VM's share. This is illustrated in Figure 13-24.

Figure 13-24. gWLM Reallocation of CPU Shares between Integrity Virtual Machines

Figure 13-24 shows two VMs running on a four-CPU Integrity system or partition. Each VM sees four virtual CPUs in its operating system. However, those virtual CPUs actually share the four physical CPUs with the other VM.

In the first block, both VMs' virtual CPUs are getting about 50% of a physical CPU. When the workload in VM2 gets busy, gWLM reallocates the physical CPU shares to give VM2 a 75% entitlement. Conversely, when the workload in VM1 gets busy, gWLM reallocates the shares to give VM1 a 75% entitlement. Inside the VMs, the change in entitlement is completely transparent; the only noticeable effect is that the virtual CPUs run slower or faster depending on how busy the VM is.
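
The arithmetic behind this example is simple to check. The helper below is purely illustrative; it just expresses how an entitlement translates into per-virtual-CPU speed.

```python
def vcpu_speed(entitlement, physical_cpus, virtual_cpus):
    """Fraction of one physical CPU each virtual CPU receives.

    entitlement: the VM's share of the whole machine (0.0 to 1.0)
    """
    return entitlement * physical_cpus / virtual_cpus

# Four physical CPUs shared by VMs with four virtual CPUs each:
# at a 50% entitlement each virtual CPU runs at half speed, and
# at 75% each virtual CPU gets three quarters of a physical CPU.
```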


Monitoring

Because gWLM makes its resource allocations without any operator intervention, it is critical that administrators be able to see what it is doing. For this, gWLM provides both real-time and historical monitoring reports. Figure 13-25 shows one of the real-time reports.

Figure 13-25. The gWLM Real-Time Workload Report

This graph shows the target and actual utilization in the workload compartment, along with the resource allocation based on that utilization. As you can see, when the actual utilization exceeds the target, gWLM responds by increasing the amount of CPU allocated to the workload. Conversely, when the actual utilization drops below the target, gWLM will take some of the CPU allocation away and apply it to other workloads on the system.

Because gWLM is collecting data for allocation and reporting purposes, it stores the data in a database so that historical reports can be provided as well. The following historical reports will be available with the 2.0 version of gWLM.

  • Workload Resource Audit Report

  • Top Borrowers Report

The first of these reports provides data that can help a business unit understand how its workloads consumed resources over the course of a month. The second helps an IT organization determine whether certain workloads are consistently borrowing idle resources from other workloads, which may warrant a policy change to increase the owned value for those workloads.
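
At its core, a Top Borrowers report ranks workloads by how far their measured usage exceeds their owned entitlement. The sketch below is an assumed, simplified version of that calculation, not gWLM's actual report logic.

```python
def top_borrowers(usage_samples, owned, n=3):
    """Rank workloads by average CPU borrowed beyond their owned share.

    usage_samples: workload -> list of sampled CPU usage values
    owned:         workload -> owned (guaranteed) CPU entitlement
    """
    avg_borrowed = {
        w: sum(max(0.0, u - owned[w]) for u in samples) / len(samples)
        for w, samples in usage_samples.items()
    }
    # Highest average borrowing first.
    return sorted(avg_borrowed, key=avg_borrowed.get, reverse=True)[:n]
```

A workload that lands at the top of this ranking month after month is a candidate for a higher owned value in its policy, exactly as the text above suggests.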

gWLM Architecture

The gWLM product was the first VSE management suite product introduced. It follows the standard manager-agent model of the rest of the VSE management suite. Figure 13-26 provides a high-level view of the architecture of gWLM.

Figure 13-26. The High Level gWLM Architecture

The central management server runs the gWLM console; this is the same system that runs HP Systems Insight Manager. The console provides the user interface screens that are integrated with HP SIM, along with some daemon functions, and it holds the databases for configuration and performance data.

The agent will run on every operating system image you want to manage with gWLM. If you have a system with only one OS image but you want to manage multiple workloads using secure resource partitions, you will run one agent on that OS image.

If you have a system with partitions running separate OS images, such as vPars, nPars, or VMs, you will run an agent on each of those partitions. On those systems the agents negotiate with each other to elect a master, which is responsible for resource arbitration and for managing the migration of resources between the partitions. Each agent collects information about the utilization and performance of its local workloads and sends it to the master for arbitration. Once the master has determined how the resources should be allocated, it sends commands back to the other agents to allocate or deallocate local resources.
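
The arbitration step can be sketched as follows. Both the election rule and the allocation rule here are illustrative stand-ins; gWLM's actual algorithms are not documented in this chapter.

```python
def elect_master(agent_ids):
    # Deterministic stand-in for the agents' master election
    # (the real election scheme is not specified here).
    return min(agent_ids)

def arbitrate(total_cpus, demands):
    """Split whole CPUs across partitions roughly in proportion to demand.

    demands: partition -> reported CPU demand from its local agent
    """
    total_demand = sum(demands.values()) or 1
    ideal = {p: total_cpus * d / total_demand for p, d in demands.items()}
    alloc = {p: int(share) for p, share in ideal.items()}
    # Hand out any leftover CPUs by largest fractional remainder.
    leftover = total_cpus - sum(alloc.values())
    for p in sorted(ideal, key=lambda p: ideal[p] - alloc[p], reverse=True):
        if leftover == 0:
            break
        alloc[p] += 1
        leftover -= 1
    return alloc
```

In this sketch the master gathers each agent's demand, computes a whole-CPU split, and the resulting per-partition counts stand in for the allocate/deallocate commands sent back to the agents.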

The HP Virtual Server Environment: Making the Adaptive Enterprise Vision a Reality in Your Datacenter
ISBN: 0131855220
Year: 2003
Pages: 197