Choosing a Partitioning Solution


Given all the partitioning solutions that are available, it can be a daunting task to decide which of these would best meet the needs of your particular situation. The real answer is likely to be that you will want to take advantage of some combination. However, we first want to focus on each technology individually; we will discuss some interesting combinations later in this chapter.

There are a number of key benefits that you get from partitioning in general. These include:

  • Application isolation: The ability to run multiple workloads on a system at the same time while ensuring that no single workload can impact the normal running of any other workload.

  • Increased system utilization: This is an outgrowth of the first benefit above. If you can run multiple workloads on a system, you can increase the utilization of the server because resources that normally would have gone to waste can be used by the other workloads.

The focus of this chapter is to identify the key benefits each partitioning technology has over the other options available on HP's servers.

Why Choose nPars?

As was mentioned in the last chapter, HP nPartitions provide for fully electrically isolated partitions on a cell-based server.

Key Benefits

A number of benefits can be gained from using nPartitions to break up a large system into several smaller ones. The few that we are going to focus on here are the ones that make this a technology that you won't want to do without. These include hardware-fault isolation, OS flexibility, and the fact that using nPars does not impact performance.

The fact that nPartitions are fully electrically isolated means that a hardware failure in one partition can't impact any other partition. In fact, it is possible to do hardware maintenance in one partition while the other partitions are running. This also makes it possible to run different partitions with different CPU speeds and even different CPU families (Precision Architecture (PA-RISC) and Itanium). The key benefit is that you can perform some upgrades on the system one partition at a time.

Another advantage of the electrical isolation is the fact that the operating system can't tell the difference between a partition and a whole system. Therefore, you can run different operating systems in each partition. The only supported OS on a PA-RISC-based HP 9000 system is HP-UX, so this is only possible on an Integrity server running Itanium processors. If you have an Integrity server, you can run HP-UX in one partition, Microsoft Windows Datacenter in another, Linux in a third, and OpenVMS in a fourth, all on one system, all at the same time.

The electrical isolation between nPars will also allow you to run PA-RISC processors in one partition and Itanium processors in another on the same system. This can simplify the upgrade process by allowing a rolling upgrade from PA to Itanium.

Another key benefit is that you can partition a server using nPars with no performance penalty. In fact, the opposite is true: partitioning a server with nPars increases performance significantly. Benchmark results for a Superdome partitioned into 16 four-CPU nPartitions are roughly 60% faster in aggregate than the same benchmark run on a fully loaded 64-CPU Superdome. There are two key reasons why this is so. The first is that smaller partitions incur lower multiprocessing overhead; the second is that smaller partitions traverse fewer connections between the crossbars.
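To see why this can be the case, consider a toy throughput model. The per-CPU efficiency figures below are invented assumptions chosen only to illustrate the effect of multiprocessing overhead and crossbar traffic; they are not HP benchmark data.

# Toy model of why many small nPars can outperform one large partition.
# The efficiency figures are illustrative assumptions, not HP benchmark data.

def scaling_efficiency(cpus: int) -> float:
    """Hypothetical per-CPU efficiency: larger partitions pay more
    multiprocessing overhead and traverse more crossbar links."""
    if cpus <= 4:        # fits in a single cell: minimal overhead
        return 0.95
    elif cpus <= 16:     # spans a few cells on one crossbar
        return 0.80
    else:                # spans many cells and crossbar-to-crossbar links
        return 0.60

def aggregate_throughput(partitions: list[int]) -> float:
    """Sum of (CPUs x per-CPU efficiency) across all partitions."""
    return sum(cpus * scaling_efficiency(cpus) for cpus in partitions)

one_big   = aggregate_throughput([64])          # one 64-CPU partition
sixteen_4 = aggregate_throughput([4] * 16)      # sixteen 4-CPU nPars

print(f"64-CPU partition : {one_big:.1f} CPU-equivalents")
print(f"16 x 4-CPU nPars : {sixteen_4:.1f} CPU-equivalents")
print(f"improvement      : {(sixteen_4 / one_big - 1) * 100:.0f}%")
# With these assumed efficiencies the partitioned layout comes out roughly
# 58% ahead, in the same ballpark as the figure quoted above.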

Key Tradeoffs

Our goal here is to ensure that you understand how to best take advantage of the Virtual Server Environment (VSE) technologies. We want to explain some of the tradeoffs of using each of them to ensure that you put together a configuration that has all the benefits you want and that you can minimize the impact of any tradeoffs. Many of the tradeoffs can be mitigated by combining VSE technologies.

The first real tradeoff of nPartitions is granularity. The smallest partition you can create is a single cell. If you are using dual-core processors, this is a two-to-eight CPU component. The granularity can also be improved by including instant capacity processors. This way you can configure a partition with up to eight physical CPUs but have as few as two of them be active. We will discuss instant capacity in Part 2 of this book.

Another tradeoff is that although you can use instant capacity processors to adjust the capacity of an nPar, its cell configuration can't be changed online. This really isn't a limitation of nPartitions but rather of the operating systems themselves. Currently, HP-UX is the only supported OS that allows activation and deactivation of CPUs while online. None of the supported operating systems allow reallocation of memory while they are running. You will need to shut down a partition to remove a cell from it, for example. You can, however, reconfigure the affected partitions while they are running and then issue a "reboot for reconfiguration" for each of them when it is convenient. A future release of HP-UX will support online addition and deletion of memory.

One other tradeoff is that because each partition is electrically isolated from the others, there is no sharing of physical resources. If you need redundant components for each workload, they will be needed in every partition. This is simply a cost of hardware-fault isolation.

You can migrate active CPU capacity between nPars, but only if you have instant capacity processors. This means that you must have sufficient physical capacity in each partition to meet the maximum CPU requirements for the partition. You would then purchase some of that capacity as instant capacity. These processors can be activated when needed by deactivating a processor in another partition or by activating temporary capacity.
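To picture the bookkeeping involved, here is a minimal sketch of migrating CPU usage rights between two hypothetical nPars. The partition names and counts are invented, and the real mechanism is driven by HP's instant capacity tooling rather than by anything like this code; the point is simply that active capacity is conserved while it moves and can never exceed the physical CPUs installed in a partition.

# Minimal sketch of the instant capacity bookkeeping described above.
# Partition names and counts are hypothetical.

npars = {
    # physical = installed CPUs, active = CPUs with usage rights applied
    "npar1": {"physical": 16, "active": 8},
    "npar2": {"physical": 16, "active": 8},
}

def migrate_active_cpus(src: str, dst: str, count: int) -> None:
    """Move CPU usage rights by deactivating in one nPar and activating in another."""
    if npars[src]["active"] < count:
        raise ValueError(f"{src} has only {npars[src]['active']} active CPUs")
    if npars[dst]["active"] + count > npars[dst]["physical"]:
        raise ValueError(f"{dst} lacks inactive physical CPUs to activate")
    npars[src]["active"] -= count   # usage rights released here...
    npars[dst]["active"] += count   # ...and applied here

total_before = sum(p["active"] for p in npars.values())
migrate_active_cpus("npar1", "npar2", 4)
total_after = sum(p["active"] for p in npars.values())

assert total_before == total_after   # purchased (active) capacity is conserved
print(npars)                         # npar1 now has 4 active, npar2 has 12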

A clarification about nPars: Running nPartitions is not the same as running separate systems. There are events that can bring all the partitions down. The most common is operator error. If an operator with administrator privileges on the Management Processor should make a serious mistake, they could bring the whole system down. Another would be a natural disaster, like a major power outage or fire. The bottom line here is that using nPartitions does not eliminate your need to implement high-availability software and hardware if you are running mission-critical workloads on the system. The complex should be configured with at least one other complex in a cluster to ensure that any failure, whether it affects a single partition or multiple partitions, results in a minimum of downtime. This is discussed in Part 3 of this book.

nPar Sweet Spots

Now that we have provided a brief overview of the benefits and tradeoffs of nPartitions, we will provide some guidance on a few "sweet spot" solutions that allow you to get the benefits while minimizing the impact of the tradeoffs.

First of all, if you are doing consolidation of multiple workloads onto a cell-based server, you will want to set up at least two nPars. It would just not make sense to have the option of hardware-fault isolation and not take advantage of it. You might want to set up more nPars so you can provide hardware isolation between mission-critical applications. This ensures that the need to do hardware maintenance doesn't require that you take down multiple mission-critical applications at once.

Sweet Spot #1: At least two nPars

If you have a cell-based system that supports nPars and you are doing consolidation of multiple workloads, you should seriously consider setting it up with at least two. The resulting hardware-fault isolation and the flexibility with instant capacity make this compelling.


You will want to make your nPars as big as possible. Steer clear of single-cell partitions unless you have a really good reason. Bigger partitions provide you with more flexibility in the future. If your capacity-planning estimates end up being off and one partition isn't big enough for the workload there, you can easily react to that by reconfiguring the system. But if you have many single-cell partitions you will need to rehost one of the workloads to reallocate a cell. This makes it very difficult to take advantage of the flexibility benefits of partitioning a larger server.

Sweet Spot #2: Separate nPars for each Mission-Critical Application

If you set up a separate nPar for each mission-critical application, you will ensure that a hardware failure or routine hardware maintenance will impact only one of them at a time.


Clearly, there is a tradeoff between larger partitions and more isolation. You really want to find the happy medium. Setting up a system with a few nPars and then using one of the other partition types inside the nPars to allow you to run multiple workloads provides a nice combination of isolation and flexibility. One interesting happy medium is to set up an nPar for each mission-critical production application and then use vPars or Integrity VM to set up development, production test, or batch partitions in the nPar along with the mission-critical production application. That way the lower-priority applications are isolated from the mission-critical application by a separate OS instance, yet some of the resources normally used for the lower-priority applications can be used to satisfy the production application if it ever experiences an unexpected increase in load.

Sweet Spot #3: vPars or VMs inside nPars

Subdividing nPars using vPars or VMs provides a very nice combination of hardware-fault isolation and granularity in a single solution.


Consider this scenario: You have a large Integrity server running several HP-UX partitions and because you are taking advantage of the VSE technologies you find that you have spare capacity you thought you would need. At the same time, you have a Windows server that has run out of capacity and you need a larger system to run it on. Rather than purchasing a separate Integrity server for the Windows application, you can set up another nPar on the existing system and put the application there. You might even be able to use this as a stopgap solution while waiting for the "real" server this application will be running on. You could then set up the Windows partition with the new server in a cluster and migrate it over. Because of the flexibility of the system with instant capacity processors, you can use this partition as a failover or disaster-recovery location for the primary Windows server.

Sweet Spot #4: Use Spare Capacity for Other OS Types

If you have an Integrity server with spare capacity, you have the flexibility of creating an nPar and running any of four different operating systems on that spare capacity.


A newer feature of nPars is the ability to run PA-RISC processors in one partition and Itanium processors in another. This provides a nice solution for a rolling upgrade from PA to Itanium inside a single system. You could also add some Itanium cells to an existing system for either testing or migration.

Sweet Spot #5: Use nPars to Migrate from PA to Itanium

Use nPars on existing HP 9000 systems to set up Itanium partitions for migration of existing partitions or other systems.


Last, when using nPartitions, you should always have some number of instant capacity processors configured into each partition. We will describe these technologies in more detail in Part 2 of this book, but suffice it to say that the flexibility they bring and the dramatic simplification of capacity planning make this compelling. An example configuration would be a single-cabinet Superdome with dual-core processors split into four nPars. Each nPar has two cells and 16 physical processors. Since most systems are only 25% to 30% utilized, you can get half the CPUs as instant capacity processors and increase the utilization to over 50%. That way you still have the extra capacity if you need it, but you can defer the cost until later. In addition, you will get the flexibility of scaling each partition up to 16 CPUs by deallocating CPUs from the other partitions in real time. You can also get temporary capacity in case you have multiple partitions that get busy at the same time. We will talk about instant capacity and temporary capacity in the next part of the book. Figure 3-1 provides a view of this configuration, showing the dual-core processors and the configuration of inactive instant capacity processors.

Figure 3-1. A Single-Cabinet Superdome with four nPars and Instant Capacity Processors


This picture shows that each partition contains two cells, each with eight physical CPUs. Each partition has the ability to scale up to 16 CPUs because there are eight inactive CPUs that can be activated by deactivating a CPU in another partition or by using temporary capacity.
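If it helps to see the arithmetic of this example spelled out, the short sketch below reproduces it. The 25% to 30% utilization figure is the rough planning number quoted above, not measured data.

# Worked version of the example configuration in Figure 3-1.

CELLS_PER_NPAR = 2
CORES_PER_CELL = 8            # four dual-core sockets per cell
NPARS          = 4

physical_per_npar = CELLS_PER_NPAR * CORES_PER_CELL      # 16
active_per_npar   = physical_per_npar // 2               # half active, half instant capacity
total_physical    = physical_per_npar * NPARS             # 64
total_active      = active_per_npar * NPARS               # 32

# "Most systems are only 25% to 30% utilized" -- measured against the full
# physical CPU count.
busy_low, busy_high = 0.25 * total_physical, 0.30 * total_physical

print(f"physical CPUs: {total_physical}, active CPUs: {total_active}")
print(f"deferred CPU purchases: {total_physical - total_active}")
print(f"utilization of active CPUs: {busy_low / total_active:.0%} "
      f"to {busy_high / total_active:.0%}")
# -> roughly 50% to 60%, while each nPar can still flex up to its 16 physical
#    CPUs by borrowing usage rights or using temporary capacity.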

Sweet Spot #6: Always Configure in Instant Capacity

Instant capacity processors are very inexpensive headroom and provide the ability to flex the size of nPars dynamically.


Why Choose vPars?

When HP Virtual Partitions (vPars) was introduced in late 2001, it was the only virtual partitioning product available that supported a mission-critical Unix OS. Even today there continue to be a number of features that make vPars an excellent solution for many workloads.

Key Benefits

This section compares vPars with each of the other partitioning technologies to help you determine when you might want to use vPars in one of your solutions.

When comparing vPars to nPars, the primary benefits you get with vPars are granularity and flexibility. vPars can go down to single-CPU granularity and single card-slot granularity for I/O. With nPars, each partition must be made up of whole cells and I/O card cages. This means that you can have a vPar with a single CPU and a single card slot (if you use a LAN/SCSI combo I/O card). In addition, you can scale the partition in single-CPU increments. vPars also provides the flexibility of dynamically reallocating CPUs between partitions without the instant capacity requirement. In other words, you can deallocate a CPU from one vPar and allocate the same CPU to another vPar. The tradeoff, of course, is that you won't have the hardware-fault isolation you get with nPars.

When comparing vPars with Integrity VM, the key benefits of vPars are its scalability and performance. vPars places no practical limit on partition size; in other words, you could create a 64-CPU vPar and see only a slight degradation relative to the performance you would get with an nPar. The first release of Integrity VM is tuned for four virtual CPUs in each VM (although you can create VMs with more). This limit will be raised over time, but it will take a while to reach the scalability of vPars. In addition, because vPars are built by assigning physical I/O cards to each partition, the OS talks directly to the card once it is booted, so there is almost no performance degradation at all.

When comparing vPars with Secure Resource Partitions, the primary benefit you get is isolation. This includes OS, namespace, and software-fault isolation. Because each vPar has its own OS image, you can tune each partition for the application that runs there. This includes kernel tunables, OS patch levels, and application versions. Also, each vPar is isolated from software faults in other vPars, so an application or OS-level problem in one partition won't affect the others, and each vPar can be rebooted independently.

Key Tradeoffs

The first tradeoff is that vPars only supports HP-UX. Both nPars and Integrity VM will eventually support all four operating systems targeted for Integrity servers (HP-UX, Windows, Linux, and OpenVMS).

Several other tradeoffs come as a result of the same thing that gives vPars its better performance: the vPar monitor emulates the firmware of the system. The two most significant tradeoffs from this are the fact that it is not possible to share I/O card slots between vPars and that vPars doesn't support sub-CPU granularity. In addition, vPars is not supported on all platforms and doesn't support all I/O cards. Realistically, it does support most of the high-end systems and most of the more common I/O cards. The bottom line here is that when considering a system for vPars, you should work with your HP sales consultants, or an authorized partner, and ensure you get the right configuration.

vPar Sweet Spots

You should always set up some number of nPars if you are doing consolidation on a cell-based server. The key question is whether you want to further partition the nPars or system with another partitioning solution, such as vPars, VMs, or Secure Resource Partitions. If you are planning to run more than one workload in each nPar, you may want to run each workload in its own isolated environment. The key question then is whether you need OS-level isolation. If so, your choices are vPars or VMs. You can't run both of these at the same time on the same nPar or system. However, you can run vPars in one nPar and VMs in another on the same system. This is another nice advantage of the electrical isolation you get with nPars.

Sweet Spot #1: vPars Larger than eight CPUs

If you require finer granularity than nPars and partitions larger than eight CPUs, vPars is an excellent option.


If you need the OS-level isolation, vPars are a good choice if you need large partitions or if the workload is I/O intensive and performance is critical.

Sweet Spot #2: I/O Intensive Applications

If you require finer granularity than nPars and have I/O-intensive applications that require peak performance, vPars has very low I/O performance overhead.


Why Choose Integrity VM?

The newest addition to the partitioning continuum provides OS isolation while allowing sharing of CPU and I/O resources with multiple partitions.

Key Benefits

We will again compare VMs with each of the other partitioning alternatives to provide some context for discussing the key benefits of implementing Integrity VM.

VMs provide the same level of OS and software-fault isolation as nPars but provide it at a much finer granularity. You can share CPUs and I/O cards, and you can even provide differentiated access to those shared resources. You have control over how much of each resource should be allocated to each partition (e.g., 50% for one VM and 30% for another). As with vPars, sharing and flexibility come at the cost of hardware-fault isolation. This is why we recommend first partitioning the system with nPars and then using other partitioning solutions to further partition the nPars to provide finer granularity and increased flexibility. There is one other benefit when comparing VMs to nPars: you can run Integrity VM on non-cell-based platforms. Any Integrity system that supports a standard HP-UX installation can be set up to run Integrity VM.
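As a rough illustration of what differentiated access to a shared CPU pool means, the sketch below does the entitlement arithmetic for a few hypothetical VMs. It is a simplified model of the idea only, not Integrity VM's actual scheduler or command syntax, and the VM names and percentages are placeholders.

# Simplified sketch of differentiated access to a shared CPU pool.
# Not Integrity VM's actual entitlement semantics; illustrative only.

host_physical_cpus = 4

vm_entitlements = {        # guaranteed share of the shared CPU pool, per VM
    "vm_prod":  0.50,
    "vm_test":  0.30,
    "vm_batch": 0.20,
}

assert sum(vm_entitlements.values()) <= 1.0, "entitlements oversubscribe the pool"

for vm, share in vm_entitlements.items():
    guaranteed = share * host_physical_cpus
    print(f"{vm}: guaranteed ~{guaranteed:.1f} CPU-equivalents, "
          f"and can use more when the other VMs are idle")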

VMs and vPars provide many of the same benefits. You have OS-level isolation, so you can create these partitions and tune the operating systems to meet the specific needs of the applications that run there. This includes kernel tunables as well as patches and shared library versions. The key differentiation for the Integrity VM product is its ability to share CPUs and I/O cards. This makes it much more suitable for very small workloads that still need the OS-level isolation.

One other interesting thing you can do with VMs that you can't do with vPars is create an ISO image of a CD and mount that image as a virtual CD on any or all of your VMs. This is very convenient for the installation of updates or patch bundles that need to be applied to multiple VMs. Another significant benefit of Integrity VM is its future support for Windows, Linux, and OpenVMS. VMs also support I/O virtualization solutions like Auto-Port Aggregation (APA) on the VM host, so that you can create a large I/O interface and then share it with multiple VMs. This allows the VMs to get access to lots of bandwidth, as well as the redundancy you get with APA, without requiring any special configuration on the VMs themselves.

The last comparison for this section is with Secure Resource Partitions (SRPs). There are many similarities in how Secure Resource Partitions and Integrity VM manage the resources of the system. The implementations of these controls are different, but they have many of the same user interface paradigms. One key thing you get with VMs is OS-level isolation, including file systems, namespaces, patch levels, shared library versions, and software faults. You get none of these with Secure Resource Partitions.

Key Tradeoffs

There are a few key tradeoffs of the Integrity VM product that you should be aware of before deciding whether it is the right solution for you. These include the lack of support for HP 9000 systems and the performance of some I/O-intensive workloads.

The first of these is the fact that Integrity VM requires features of the Itanium processor that are not available, or are very different, on Precision Architecture.

Because Integrity VM is a fully virtualized system technology, all I/O traffic is handled by virtual switches in the VM monitor. This makes it possible to have multiple VMs sharing a single physical I/O card. However, this switching imposes some overhead on I/O operations. This overhead is relatively small, but it is compounded by the fact that the virtual CPU servicing the I/O interrupt doesn't own all the cycles on the physical CPU. The interrupt can therefore arrive while another VM has control of the physical CPU, which further delays the receipt of the I/O that was requested. When the system is lightly loaded, this impact can be fairly minor, but when the CPUs are very busy and there are I/O-intensive workloads issuing many I/O requests, you can expect a higher level of overhead. Future releases of the VM product will improve this and provide alternative solutions specifically for I/O-intensive workloads.

Integrity VM Sweet Spots

Most companies have a number of large applications and databases that require significant resources to satisfy their service-level requirements. Most also have a large number of applications that are used daily but have a small number of users and therefore don't require significant resources. Some of these are still mission critical and require an isolated OS instance and a mission-critical infrastructure like that available on HP's Integrity servers. These are the applications that are best suited to VMs. They have short-term spikes that may require more than one CPU, but their normal load for the majority of the day is a fraction of a CPU. These would normally be installed on a small system, possibly two to four CPUs, to meet the short-term peaks. However, the average utilization in this environment is often less than 20%. Putting these types of workloads in a VM provides the flexibility to scale the VM to meet the resource requirements when the load peaks but scale back the resources when the workload is idle. Putting a number of these workloads on an nPar or small server allows you to increase the overall utilization while still providing isolation and having sufficient resources available to react to the short term spikes.

Sweet Spot #1: Small Mission-critical Applications

If you have applications that don't need a whole CPU most of the time but do need OS isolation, Integrity VM provides both granularity and isolation.


Another sweet spot is the ability of Integrity VM to support small CPU-intensive workloads. It turns out that CPU-intensive workloads incur very little overhead inside a VM. We have seen cases where the difference in performance for some CPU-intensive benchmarks in a VM compared to a stand-alone system was less than 1%. This was using a single virtual CPU VM, so if you have small CPU-intensive workloads, a VM is a great option.

Sweet Spot #2: Small CPU Intensive Applications

Single-CPU Integrity VMs carry very little performance overhead for CPU-intensive applications.


The first release of Integrity VM will be tuned for four virtual CPUs. A real sweet spot for VMs is running a number of four-virtual-CPU VMs on a system or nPar with four or eight physical CPUs. This way there is a very even load of virtual CPUs across the physical CPUs, and each VM can scale up to nearly the capacity of four physical CPUs. If you have a handful of workloads that normally average less than one CPU of consumption but occasionally spike up to two to four CPUs at peak, running a number of these on a four- or eight-CPU system will allow the average utilization of the physical resources to exceed 50% while still allowing each VM to scale up to four CPUs to meet the peak demands when those usage spikes occur.

Sweet Spot #3: VMs with the same CPU count as the system or nPar

Running a number of four-virtual-CPU VMs on a four-physical-CPU system or partition allows sharing of CPUs and ensures an even load across the physical CPUs. This will also work if there are a number of four-virtual-CPU VMs running on an eight-CPU system or nPar, but the key is that you want an even load on the physical CPUs.


A derivative of this sweet spot is one where you run a handful of application clusters in VMs that share the physical servers they are running on. Figure 3-2 shows an example of three application clusters with three two-CPU nodes each.

Figure 3-2. Three Application Clusters Running on Nine Two-CPU Systems


These applications will occasionally peak to the point where they need more than one CPU on each node, but the average utilization is in the 10% to 15% range. Now let's consider running these same clusters using VMs. This is shown in Figure 3-3.

Figure 3-3. Three Clusters Running in VMs on Three Physical Systems or nPars


We have configured these as four-virtual-CPU VMs and they are running on four-physical-CPU systems or nPars. Now let's consider what we have done. We have:

  • Nearly doubled the maximum CPU capacity for each cluster, because each virtual CPU can be scaled to get close to a physical CPU if the other clusters are idle or near their normal average load. Each cluster can now get nearly 12 CPUs of capacity if needed.

  • Reduced the number of systems to manage. Although we still have nine OS images, we now have only three physical systems.

  • Lowered the CPU count for software licenses from 18 to 12.

  • Increased the average utilization of these systems.

To summarize, we have increased capacity, lowered the software costs, lowered the hardware costs, and lowered system maintenance costs.
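For reference, here is the arithmetic behind those bullet points, using the illustrative node counts and sizes from Figures 3-2 and 3-3.

# Rough arithmetic behind the consolidation in Figures 3-2 and 3-3.

clusters, nodes_per_cluster = 3, 3

# Before: nine dedicated 2-CPU systems (Figure 3-2)
before_cpus_per_node = 2
before_systems  = clusters * nodes_per_cluster                     # 9
before_licenses = before_systems * before_cpus_per_node            # 18
before_peak_per_cluster = nodes_per_cluster * before_cpus_per_node  # 6 CPUs

# After: three 4-CPU hosts, each running one 4-vCPU VM per cluster (Figure 3-3)
after_hosts = 3
after_cpus_per_host = 4
after_licenses = after_hosts * after_cpus_per_host                  # 12
# If the other clusters are near their low average load, each VM can scale
# toward all four physical CPUs on its host:
after_peak_per_cluster = nodes_per_cluster * after_cpus_per_host    # ~12 CPUs

print(f"systems to manage : {before_systems} -> {after_hosts}")
print(f"licensed CPUs     : {before_licenses} -> {after_licenses}")
print(f"peak CPUs/cluster : {before_peak_per_cluster} -> ~{after_peak_per_cluster}")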

Sweet Spot #4: Multiple Application Clusters in VMs

Running a number of overlapping application clusters on a set of VMs increases average utilization, increases the peak capacity of each cluster, and lowers hardware, software, and maintenance costs.


Why Choose Secure Resource Partitions?

When security compartments were added to HP-UX and integrated with Resource Partitions, the name was changed to Secure Resource Partitions (SRPs). This didn't fundamentally change the effectiveness of Resource Partitions, but it did increase the number of cases where Secure Resource Partitions would be a good fit. This is because you can now ensure that the processes running in each SRP cannot communicate with processes in the other partitions. In addition, each compartment gets its own IP address, and network traffic to each compartment is isolated from the other compartments by the network stack in the kernel. This, on top of the resource controls for CPU, real memory, and disk I/O bandwidth, provides a very nice level of isolation for multiple applications running in a single copy of the operating system.

Key Benefits

When comparing SRPs to nPars we could probably repeat the discussion from the section on VMs with one additional benefit, which is a reduction in the number of OS images you need to manage. Secure Resource Partitions provides much higher granularity of resource allocation. In fact, it allows you to go down below 1% of a CPU if you want. This might be useful if you have applications that are not always running in the compartment, such as a failover package or a batch job. You can also share memory and I/O. In fact, SRPs are currently the only partitioning solution in the VSE that allows online reallocation of memory entitlements. This will change with a future version of HP-UX that will begin to support online memory reallocation between partitions. The most significant advantage here is the fact that you have fewer OS images to manage. Industry analysts estimate that 40% of the total cost of ownership of a system is the ongoing maintenance and management of the system. This includes mundane daily tasks such as backups, patches, OS upgrades, user password resets, and the like. Because you would have fewer copies of the OS and applications installed, less of your time and money would be spent managing them.

When comparing SRPs to vPars, the key things to consider are finer granularity of resource allocation, resource sharing, more-complete system support, and fewer OS images to manage. In other words, they are much the same benefits when compared to nPars. Even though vPars provide finer granularity than nPars, SRPs are still finer, and you can share CPUs and I/O cards, which lowers the costs of running small partitions. Also, the tradeoffs on the types of systems supported by vPars don't exist for SRPs. Any system that supports HP-UX will support Secure Resource Partitions.

When comparing SRPs to Integrity VM, there are only a few key benefits because the resource controls are very similar. The first is the fact that SRPs are supported on all HP-UX systems, including PA-RISC-based HP 9000 systems, whereas VMs are supported only on Integrity systems. The other was mentioned above for vPars and nPars: SRPs do not require separate OS images to be built and managed for each partition. One other benefit is that SRPs allow the sharing of memory, which none of the OS-based partitions will support until after a future release of HP-UX. You can also get slightly finer CPU-sharing granularity with SRPs. VMs allow you to have a minimum of 5% of a CPU for each VM; you can go below 1% with SRPs, although you should be very careful when taking advantage of that. It would be pretty easy to starve a workload this way. This might be useful if you have workloads that normally are not running except under special circumstances, such as failover or a job that only runs for a small portion of the day, week, or month. In these cases it would be important to implement some type of automation (e.g., Workload Manager) that would recognize that the workload has started and increase the entitlement to something more appropriate.

One other feature of Secure Resource Partitions that isn't available in the other solutions is the full flexibility to decide which resources you want to control, whether you want the security controls, and even what types of security controls you want. This, of course, can be a double-edged sword because you will want to understand what the impact of each choice will be. We provide some guidance on this later in this section.

Key Tradeoffs

The reduction of the number of OS images that need to be managed has both benefits and tradeoffs. Even though the applications are running in isolated environments, they are still sharing the same copy of the operating system. They share the same file system, the same process namespace, and the same kernel. A few examples of this are:

  • There is one set of users: A user logging into one compartment has the same user ID as they would have if they logged into another compartment.

  • All applications are sharing the same file system: You can isolate portions of the file system to one or more compartments, but this is not the default. A design goal was to ensure that all applications would run normally in default compartments.

  • There is one set of kernel tunables: You need to configure the kernel to support the maximum value for each tunable as required for all the compartments at the same time. Many of the more commonly updated kernel tunables are now dynamic in HP-UX 11iV2, but this is still something to be aware of.

Secure Resource Partitions Sweet Spots

The most compelling benefit of Secure Resource Partitions over the other partitioning solutions is the fact that you can reduce the number of OS images you need to manage. The tradeoff, of course, is that you have less isolation. The primary places where this isolation is an issue are:

  • When the different applications need different library versions or their patch-level requirements are incompatible: If different applications need different patch levels, different kernel tunables, or have some other incompatibility, they can't easily share a single OS image.

  • When the applications are owned by different business units: It can be difficult to get approval to reboot a partition if it will impact applications that are owned by different lines of business.

The first of these issues can be resolved by focusing on the sweet spot where you run multiple copies of the same application in a single OS image. This way they will all have the same requirements for patches and kernel tunables. Consolidating multiple instances of the same application in a single OS instance is often a well-understood process, especially if they are the same version of the same application. The second issue can also be resolved if you have multiple copies of the same application that are owned by the same line of business or if the different lines of business have a very good working relationship and are willing to "do the right thing" for each other.

These issues can also be mitigated by limiting the number of applications you attempt to consolidate. If you are used to doing consolidation, you have probably already worked through these issues. But if you haven't, you might want to start by limiting the number of applications you attempt to run in a single OS image. One thing to consider is that if you consolidate only two applications in each OS image, you have cut your OS count in half. That is a huge savings. So if each of your business units has a number of databases, or a number of application servers, you can put two to three of them on each OS image and get a tremendous savings because you cut the number of patches, backups, etc. every time you put two or more of them in the same OS image.

Sweet Spot #1: Multiple Instances of the Same Application

If you have multiple instances of the same version of the same application, put two or more of them in a single OS instance and use Secure Resource Partitions to isolate them from each other.


Another opportunity is the possibility of implementing the same overlapping cluster solution described in Figures 3-2 and 3-3 using Secure Resource Partitions. This carries the additional benefit of a reduction of the system management overhead because you would also be reducing the number of OS and application installations that need management and maintenance from nine to three.

Sweet Spot #2: Multiple Overlapping Clusters

Running multiple overlapping clusters of the same application on a set of systems or nPars provides all the benefits of this solution in VMs but also reduces the number of OS and application instances that need to be managed and maintained.


What Features of SRPs Should I Use?

You have a tremendous amount of flexibility in deciding which features of Secure Resource Partitions to activate. How do you decide which controls to use and what impact might they have on your workloads?

Choosing a CPU Allocation Mechanism

The first control to consider is for CPU. You have two choices here. You can use the fair share scheduler or processor sets. We described each of these in the last chapter, so here we will focus on the practical implications of these choices.

The key difference between FSS and PSETs is how they allocate CPUs to each of the partitions. The FSS groups are allocated a portion of the cycles on all the CPUs, whereas a PSET is allocated all the cycles on a subset of the CPUs. An example will help here. Let's consider running two applications on an eight-CPU system where we want one of them to get 75% and the other to get 25% of the CPU. With FSS, you would configure 75 shares for the first application and 25 shares for the second. The first application would use all eight CPUs but would get only three out of every four CPU ticks, and the second application would get the remaining tick. With PSETs, you would configure the first application to have a PSET with six CPUs and the other with two CPUs. Each application would only see the number of CPUs in its PSET and would get every CPU cycle on those CPUs.
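The same example can be expressed as simple arithmetic, which may make the contrast clearer. This is a simplified model of the two mechanisms, not PRM or psrset configuration syntax, and the application names are placeholders.

# The 75%/25% example above, expressed as arithmetic.

total_cpus = 8

# FSS: both applications see all 8 CPUs; shares divide the cycles on each CPU.
fss_shares = {"app1": 75, "app2": 25}
share_total = sum(fss_shares.values())
for app, shares in fss_shares.items():
    cpu_equivalents = total_cpus * shares / share_total
    print(f"FSS  {app}: sees {total_cpus} CPUs, "
          f"guaranteed ~{cpu_equivalents:.0f} CPU-equivalents of cycles")

# PSETs: each application sees only its own CPUs, but owns every cycle on them.
psets = {"app1": 6, "app2": 2}
for app, cpus in psets.items():
    print(f"PSET {app}: sees {cpus} CPUs, gets 100% of the cycles on them")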

Given the sweet spot configuration of a small number of the same application running in an OS image, all you need to consider is whether you want to use FSS or PSETs. The only reason you would want to use PSETs is if there was some reason that the application required exclusive access to the CPUs. Because this is Unix, there are no applications that require exclusive access to CPUs. However, if you are using any third-party management tools that allocate CPU resources themselves, you might want to find out if they will work correctly when they get partial CPUs.

Another benefit of FSS is that it allows sharing of unused CPU ticks. When CPU capping is turned off, any time a partition has no processes in the CPU run queue, the next partition is allowed to use the remainder of the CPU tick. Effectively, FSS allows you to define a minimum guaranteed entitlement for each application, but if one application is idle, it allows other busy applications to use those unused shares. When everyone is busy, everyone will get their specified entitlement.
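A toy redistribution rule makes this behavior concrete. This is not the actual FSS scheduler algorithm, only an illustration of minimum guarantees plus borrowing of idle cycles.

# Sketch of uncapped FSS behavior: entitlements are minimum guarantees,
# and cycles left idle by one group flow to the busy groups. Illustrative only.

def effective_allocation(entitlement: dict[str, float],
                         demand: dict[str, float]) -> dict[str, float]:
    """Give each group what it uses up to its entitlement, then hand the spare
    cycles to groups that still want more, in proportion to their entitlements."""
    used = {g: min(entitlement[g], demand[g]) for g in entitlement}
    spare = 1.0 - sum(used.values())
    wanting = {g for g in entitlement if demand[g] > used[g]}
    weight = sum(entitlement[g] for g in wanting)
    for g in wanting:
        used[g] += spare * entitlement[g] / weight if weight else 0.0
    return used

entitlement = {"app1": 0.75, "app2": 0.25}

# Both groups fully busy: everyone gets exactly their entitlement.
print(effective_allocation(entitlement, {"app1": 1.0, "app2": 1.0}))
# -> {'app1': 0.75, 'app2': 0.25}

# app1 nearly idle: app2 soaks up the unused cycles.
print(effective_allocation(entitlement, {"app1": 0.05, "app2": 1.0}))
# -> {'app1': 0.05, 'app2': 0.95}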

Should I Use Memory Controls?

Memory controls are implemented as separate memory-management subsystems inside a single copy of the operating system. The control mechanism is paging, so the impact on applications is transparent, with the exception of performance, of course. The key benefit of turning on memory controls comes when you have a low-priority workload that is consuming more than its fair share of memory and causing performance problems for a higher-priority application.

If you decide to turn on memory controls, you then need to decide if you want to isolate the memory or allow sharing. The tradeoff here is that isolating memory might result in some idle memory not being available to a different application that could use it. The other side is that without capping it is likely that more paging will occur to reallocate memory to the partition that "owns" it.

Should I Use Disk I/O Controls?

The disk I/O controls are implemented through a callout from the queuing mechanisms in the volume managers LVM and VxVM. Several things to note about this are:

  • This only takes effect when there is competition for disk I/O. The queue only starts to build when there are more requests than can be met by the available bandwidth. This isn't a problem, but you should know that an application can exceed its entitlement if there is bandwidth available. There is no capping mode for I/O.

  • This is only at the volume group level. Multiple compartments need to be sharing some volume groups to get the benefits of these controls.

The short answer is that turning on the disk I/O controls is useful if you occasionally have bandwidth problems with one or more volume groups and you have multiple applications that are sharing that volume group.

Should I Use Security Compartments?

The default security containment is intended to be transparent to applications. This means that processes will not be allowed to communicate to processes in other compartments, but they can communicate freely with other processes in the compartment. The default is that the file system is protected only by the standard HP-UX file system security. If you want more file system security, you will need to configure that after creating the default compartments. Other features include the fact that each SRP will have its own network interfaces.

You should consider turning security on if:

  • You have multiple different applications and you need to be sure they won't be able to interfere with or communicate with each other.

  • You have multiple network-facing applications and you want to make sure that if the security of any of them is compromised, the damage it can cause will be contained to the compartment it is running in.

  • You want each application to have its own IP address on the system, with the packets on those interfaces visible only to the application they were destined for. Multiple interfaces can share the physical cards, but each interface and its IP addresses can be assigned only to a single SRP.


