Workload Manager


The descriptions of the sweet spots of Workload Manager cover both the HP-UX Workload Manager and Global Workload Manager. This is because although they have very different architectures and user interfaces, they are attempting to solve the same basic problem: controlled sharing of resources between workloads running on a shared infrastructure. They both accomplish this by flexing the size of the partitions running the workloads or by activating and deactivating Utility Pricing resources in the partitions running the workloads.

Key Benefits

A number of key benefits can be gained from the use of HP's workload management products. These include:

  • Automation: These products automate the reallocation of resources between partitions that can share resources.

  • Increased Utilization: These products give you strict control over how shared resources are allocated to competing workloads. This makes it easier to increase the utilization of the system while still ensuring that each workload gets the resources that it needs.

  • Optimization: These products ensure that resources get applied to the highest-priority workloads while also allowing idle resources to be applied to lower-priority workloads. This ensures that resources don't go idle if they can be used by another workload.

  • Utility Pricing cost control: Workload management ensures that utility resources are only on when they are needed, thereby minimizing your utility costs.

  • Consistent Performance: These products make it possible to maintain consistent performance as loads vary and can also ensure applications don't overperform on systems that are overprovisioned for consolidation.

Both Workload Manager and Global Workload Manager are automation tools. All of the flexing done by these tools could be done manually using the underlying partitioning or Utility Pricing tools. However, performing these functions manually would require a tremendous amount of time and effort. You would have to:

1. Monitor all of your workloads 24 hours a day, 7 days a week
2. Detect when a workload needs additional resources
3. Survey the rest of the environment to determine if there are available resources that could be moved to the workload
4. Remember and run the commands required to deallocate the resources that are idle
5. Remember and run the commands to allocate them to the workload that needs them
6. Ensure that all the commands were executed in the right order in a hierarchical configuration (for example, where Instant Capacity processors are being moved between nPars and subsequently allocated to one or more VMs)

And you would need to do all of this before the load spike subsided. Clearly this would require an inordinate amount of some very talented administrator's time. If you really want to take advantage of the flexing characteristics of the Virtual Server Environment, you will need to use workload management tools to automate these tasks.

The second major benefit of workload management tools is that they give you tight control over how resources get shared by competing workloads. This gives you the ability to run both high- and low-priority workloads on a system to increase the overall utilization of the system while ensuring that the high-priority workloads will get the resources they need when they get busy. A good example is when you have a development or test workload running on a server that is also the target for the failover of a production workload. With workload management, the development or test workload can continue to run after the failover, but its resources may get constrained to make resources available to the production workload.

Optimization in this context means that resources are applied to the highest-priority workloads and are not allowed to go to waste if there is a workload that can use them. The example from above fits here also. When you run both high- and low-priority workloads on a system, workload management will ensure that the high-priority workloads get preferential access to resources. This can allow you to run a system at a high level of utilization and still ensure that the high-priority applications will get the resources they need when they get a spike in load. The idea here is that you have low-priority workloads that will consume resources you would normally allow to go to waste to ensure that there is no performance impact on high-priority workloads when they get busy.
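As an illustration, a WLM configuration along these lines might place the production and the development workloads in separate PRM groups and give the production SLO the higher priority (the lower `pri` number). The group names, share values, and bounds below are hypothetical, and the exact syntax should be checked against your WLM version:

```
# Hypothetical WLM configuration sketch: production at priority 1;
# development at priority 2 soaks up whatever CPU would otherwise sit idle.
prm {
    groups = prod:2, dev:3;       # PRM group name : group ID
}

slo prod_slo {
    pri = 1;                      # highest priority
    entity = PRM group prod;
    mincpu = 20;                  # never drop below 20 CPU shares
    maxcpu = 100;
    goal = usage _CPU;            # size the group to its CPU utilization
}

slo dev_slo {
    pri = 2;                      # lower priority: gets the leftover CPU
    entity = PRM group dev;
    mincpu = 5;
    maxcpu = 100;
    goal = usage _CPU;
}
```

When the production group's utilization spikes, its higher-priority SLO is satisfied first; the development group is constrained down toward its minimum until the spike passes.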

Another benefit is lower Utility Pricing costs. Since these workload management products are designed to minimize wasted resources, they will ensure that idle resources are turned off. This minimizes your costs because resources are activated the moment they are needed and deactivated the moment they go idle.

The last benefit is that these products control the performance of applications. The result is that application performance will be consistent regardless of how heavy the load on the application is. Another interesting use case is when a system has been provisioned to run many workloads but initially runs only a few. These products can be used to provide the planned level of performance even when the system is running only a subset of the workloads planned for it.

Key Tradeoffs

There are some tradeoffs that you will have to accept if you want to get the advantages above. The first is that workload management may reduce the raw performance of an individual application compared to running it in an uncontrolled environment. Because you will be running multiple workloads, the overall throughput of your highest-priority workloads increases, but to accomplish this, the resources allocated to each workload must be constrained so that idle resources can be applied to other workloads that can use them. Workload management is designed to optimize the running of multiple workloads.

Another tradeoff for gWLM is that it requires Java on each of the managed partitions. Java Virtual Machines are generally available and shipped with virtually all major operating systems. However, some customers prefer not to run Java on their production systems, typically because of manageability concerns. If you are one of these customers, you can still use Workload Manager; its daemons are standard HP-UX executables.

One tradeoff for WLM is that it supports only HP-UX. The 2.0 version of the gWLM product supports HP-UX, Linux, and OpenVMS on HP Integrity servers.

Sweet Spots

Both Workload Manager and Global Workload Manager are automation tools. They provide a way to take advantage of the flexibility of the VSE to allocate resources to the applications that need them in real time. The sweet spots for this take place when automation can provide significant benefits. These include:

  • increased utilization

  • automatic allocation of resources to workloads

  • reduced costs of Utility Pricing solutions

  • automated allocation of resources upon a failover

One of the primary benefits of the VSE is the ability to increase utilization by sharing resources between multiple workloads on a system. HP's workload management products make this much more accessible because they give you control over how the resources get shared.

Sweet Spot #1: Workload Management Improves Utilization

Workload management improves utilization by ensuring that idle resources do not go to waste if other workloads on the system can use them.


Attempting to manually manage the reallocation of resources between workloads would be somewhere between difficult and impossible.

Sweet Spot #2: Workload Management Automates Sharing of Resources between Multiple Workloads

It would be very difficult to manage the reallocation of resources manually. If you want dynamic reallocation of resources, automation is not really optional.


Workload management will minimize the cost of a Utility Pricing solution. The cost of Temporary Capacity and Pay per use varies with the amount of resources being consumed by your workloads. HP's workload management products are specifically designed to provide each workload only enough resources to satisfy its needs in real time; if resources are idle, the products will deactivate them, ensuring that you are not paying for resources you are not using. Even with Percent Utilization PPU you can take advantage of WLM. In the second part of the book we described how PPU version 7 supports deactivating CPUs to ensure that some CPUs show 0% utilization. These workload management products can manage how many CPUs are active at any time to minimize your utility costs even for Percent Utilization PPU.

Sweet Spot #3: Workload Management Will Minimize the Cost of your Utility Pricing Solution

Because these workload management tools will deactivate any CPUs that are idle, you will not be paying for resources that you are not using.
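For the cross-partition case, this behavior is driven by the wlmpard (partition arbiter) configuration. The sketch below is hypothetical; we are assuming the `par` structure's `utilitypri` setting, which tells the arbiter the priority level at or above which Temporary Capacity or PPU resources may be brought online. Verify the keywords against your WLM release:

```
# Hypothetical wlmpard configuration sketch (assumed syntax).
par {
    interval = 60;        # seconds between cross-partition allocations
    utilitypri = 1;       # only activate utility (TiCAP/PPU) resources
                          # for SLOs at priority 1; utility CPUs that go
                          # idle are deactivated so they stop accruing cost
}
```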


The last sweet spot is using workload management to give you automatic allocation of resources upon a failover. This gives you the ability to make productive use of your failover environment for the 99+ percent of the time that it is not needed for the production workload.

Sweet Spot #4: Workload Management Will Automate Reallocation of Resources upon Failover

These tools can automatically react to a failover to ensure that your highest-priority applications get the resources they need. This combined with Temporary Capacity can provide a low-cost failover environment that automatically responds to a failover by activating additional resources.


Choosing between Workload Manager and Global Workload Manager

The primary difference between WLM and gWLM is that WLM is configured one system, and one partition, at a time, whereas gWLM has a central management server that manages the policies and the data collected from many nodes in the infrastructure. There are some differences in their support for specific technology features, but these will disappear over time.

The short answer here is that you should use gWLM if it meets your needs. gWLM has some very nice benefits over WLM; it has been specifically designed to allow a central IT organization to manage a large number of servers on behalf of its business units.

Some features of WLM will be added to gWLM over time. For example, early releases of gWLM had support for secure resource partitions, but only for the CPU controls. The ability to configure and even flex the other resources will be added over time.

The key tradeoff of WLM compared to gWLM is that WLM does not support a central management server; you need to configure each system separately. Interestingly, this may also be an advantage for customers that have a small number of systems to manage and don't want to stand up a central management server to manage them. The other big tradeoff, of course, is that WLM does not support Integrity Virtual Machines or operating systems other than HP-UX. If you want to autoflex VMs, you will need to use gWLM.

The bottom line is that if you have a small number of systems with a small number of workloads (which don't require VM support), then WLM may be a good choice. If you are a central IT organization that wants to manage a larger number of systems, gWLM is the best choice.

Tips

Some tips apply to workload management in general, and some are specific to WLM or gWLM, because the use models of the two products are very different.


General Workload Management Tips
Keep it Simple

The most common problem customers have with either of these tools is that they attempt to be too ambitious the first time they use them. Get your feet wet by creating a simple configuration with CPU utilization goals or gWLM's OwnBorrow policies. After you have become comfortable with how the product manages resources, you can branch out to performance goals and custom metrics for gWLM policies.
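A first configuration can be as small as one group with a CPU utilization goal. Something like the hypothetical sketch below is enough to watch the product flex an allocation (the names and bounds are ours, and the syntax should be checked against your WLM version):

```
# Minimal starter WLM configuration: one workload, one usage goal.
prm {
    groups = app1:2;
}

slo app1_slo {
    pri = 1;
    entity = PRM group app1;
    mincpu = 10;
    maxcpu = 80;
    goal = usage _CPU;    # keep the group's CPU allocation tracking its use
}
```

Once you are comfortable with how this behaves under load, it is straightforward to add a second group and a second SLO at a different priority.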

Create a Mixture of Performance-Sensitive and -Insensitive Applications to Maximize Resource Utilization

By putting performance-sensitive production applications in the same shared-resource domain as batch or test/development workloads that can absorb some performance impact, you can increase the utilization of a server fairly significantly. The insensitive applications can consume a large amount of resources while the production applications are relatively idle, and can be scaled back when the production applications get busy.

Have Business Units Project Utilization in Probability Ranges

Projections of the load on new applications are notoriously inaccurate. That's because it is very hard to predict what the load might be before an application is deployed. Application owners tend to overestimate the expected load to ensure that the system isn't undersized. However, this leads to massive over-provisioning. A trick is to have the business units project the load as a probability distribution: determine what the worst-case and the "most likely" loads would be. This way you can ensure that there are sufficient aggregate spare resources (which are sharable) to handle the worst case while sizing the partition for the most likely load.

Use Headroom to Handle Bursty Loads

One concept that is true for both WLM and gWLM is that they use data from the last interval to predict the resource requirements for the next interval. If you have very bursty loads, the tendency is to create very short intervals to speed up the reaction of these products to the bursts. The problem is that this can cause uneven fluctuations in resources. The real issue is that a burst must be handled by the resources that were allocated from data in the previous interval. Both of these products provide "headroom" built into the entitlements; that is, they deliberately assign slightly more resources than are required to meet the expected demand. The trick to handling bursty loads is to increase the amount of headroom so that if a burst happens, sufficient resources are already available to handle most or all of it.
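With WLM's usage goals, headroom can be tuned fairly directly: a usage goal can take low and high utilization targets, and lowering them leaves more slack inside the allocation for a burst to land in. The numbers below are illustrative, and the keywords should be verified against your WLM release:

```
# Hypothetical sketch: extra headroom for a bursty workload.
slo bursty_slo {
    pri = 1;
    entity = PRM group bursty;
    mincpu = 15;
    maxcpu = 100;
    goal = usage _CPU 50 70;   # aim for 50-70% utilization of the
                               # allocation, leaving roughly 30% headroom
                               # to absorb a burst within the interval
}

tune {
    wlm_interval = 60;         # resist the urge to shrink this for bursty
                               # loads; add headroom instead
}
```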

HP-UX Workload Manager

Here are some tips specific to the HP-UX Workload Manager product.

Use the GUI

The GUI is a fairly new feature of WLM, and it has been enhanced several times over the last few releases. It really is a nice tool; use it. One recommendation, though: set the preferences for the graph view to 5 or 10 minutes. All of the data for your graphs is stored in memory on your desktop, so if you leave the view set to a long time range, it will load lots of data into memory, which may impact the performance of your desktop.

Use the Configuration Wizard

The WLM configuration wizard provides a very simple step-by-step approach to almost all of the common WLM configurations. One tradeoff is that the wizard can't read in a configuration file and reconstruct the path you went down to create it. Therefore, another recommendation is that you keep the wizard running (don't hit the Finish button) while you are testing your configurations. That way if you want to tweak the configuration, you can simply go back to the wizard and hit the Back button to change the configuration.

Putting wlmpard in a Serviceguard Package

The wlmpard daemon reallocates resources across different operating system images on a system. Many customers ask if this daemon is highly available. The answer is that it, by itself, is not, but it has been designed to support Serviceguard as a packaging mechanism. Because wlmpard manages multiple OS images on a single system, it is normally running on one of the partitions on the system. Actually, prior to version 3.0 this was a requirement. Starting with version 3.0 of WLM, you can have wlmpard run on a separate node so you can put it into a package in the same cluster the other workloads are running in.

Global Workload Manager
Familiarize Yourself with OwnBorrow Policies

We have found that gWLM's OwnBorrow policies are the ones most customers get excited about. Familiarize yourself with these, then create a set of standard policies that you can deploy to many workloads across your systems.

Use the Command Line to Move gWLM Policies between CMS Nodes

If you have multiple gWLM central management servers, you can use the command-line utilities to dump configuration data out of one and upload it to another. This is particularly useful if you have created a standard set of policy definitions and don't want to enter them again manually.



The HP Virtual Server Environment: Making the Adaptive Enterprise Vision a Reality in Your Datacenter
ISBN: 0131855220
Year: 2003
Pages: 197