Integrity Virtual Machines Overview


Integrity Virtual Machines provides soft partitioning that includes sub-CPU granularity, shared I/O devices, security isolation, and dynamic resource allocation. Figure 6-1 shows the high-level architecture of Integrity Virtual Machines. The image depicts three VM guests running on a single VM host. Each VM guest runs its workloads in an isolated instance of an operating system, such as HP-UX. In addition, as shown in Figure 6-1, each VM guest has virtualized CPU, memory, and I/O hardware resources.

Figure 6-1. Integrity Virtual Machines Architecture


The topmost layer in the VM host portion of the diagram is the virtual machine monitor (VMM). The VMM provides the virtualized platform and virtualized IPF processors to the VM guests. In essence, the VMM emulates a hardware platform that has the configured CPU, memory, and I/O resources that have been assigned to each particular VM guest. The VMM layer presents a hardware platform to the VM guest that makes the virtualized platform indistinguishable from a real hardware platform; so much so that any operating system capable of running on a physical HP Integrity hardware platform is also capable of running within an Integrity VMs environment without modification.

Below the VMM in the VM host lie the VM applications (vm_apps). Each of the vm_apps is a process running on the VM host. There is one vm_app process for every booted VM guest. The purpose of the vm_app is to request resources to be allocated to the associated VM guest. The memory size of the vm_app process in the VM host will be the same as the amount of memory that has been assigned to the VM guest. In addition, there is one kernel thread within the vm_app for every virtual processor allocated to the VM guest.

The final component in the VM host portion of the diagram is the VM driver (VMDVR). The VMDVR is a dynamically loadable kernel driver that is responsible for starting, stopping, creating, and removing VM guests.

VM Configuration Overview

Configuration of Integrity Virtual Machines consists primarily of defining CPU resource allocation, memory allocation, storage devices, and network connectivity. In addition, each VM guest requires configuration for its name, boot attributes, and other miscellaneous settings, but these topics are straightforward and don't require a detailed explanation.

CPU Resource Allocation

CPUs are specified in two components in Integrity Virtual Machines. The first is the number of virtual CPUs configured in a VM guest and the second is the percentage of a physical CPU that should be allocated to each virtual CPU.

The number of virtual CPUs configured in a VM guest dictates how many CPUs the VM guest operating system is able to use for running workloads. For example, if a VM guest is configured with four virtual CPUs, running the top command on the VM guest would show four CPUs. This means a multiprocess or multithreaded application has the ability to execute four processes or threads simultaneously on the VM guest. However, each of the virtual CPUs does not necessarily translate to a dedicated physical CPU for the VM guest.

The second component of the CPU configuration is the entitlement (the percentage of physical CPU guaranteed to each virtual CPU). This value can be specified such that each virtual CPU corresponds to a percentage of a physical CPU. For example, a virtual CPU can be defined such that it is backed by 50% of a physical CPU. Alternatively, virtual CPUs can be specified according to a desired clock speed. For example, a virtual CPU can be defined such that it is backed by the equivalent of a CPU running at 1GHz, regardless of the physical CPU frequency. (Of course, the frequency of the virtual CPU cannot exceed that of the physical CPU.)

Even though virtual CPU entitlements can be quite specific in their relationship to the physical CPUs, Integrity Virtual Machines allows CPU resources to be shared. A busy VM guest may use otherwise idle resources, even if it thereby receives more than its entitlement. The entitlements are enforced only when all of the VM guests are busy simultaneously, at which point the entitlements represent the guaranteed amount of processor resources available to each VM guest. Said another way, CPU resources that are entitled to a VM guest but are not currently being used will be made available to other VM guests that require them. Thus, for a VM guest with four virtual CPUs that are each entitled to 50% of a physical processor, the entitlement is not a minimum, nor is it a maximum; it is a guarantee of resources when they are needed by the VM.

It should be understood that many Integrity Virtual Machines configurations simply assign a certain number of virtual CPUs to each VM guest, such as two or four virtual CPUs, and specify the default percentage of physical CPU to back the virtual CPUs. This approach is easy to configure and maintain, and each VM guest receives an entitlement sufficient to run an operating system. This type of configuration also provides the ability for all VM guests to equally share unassigned CPU resources as needed. The example scenario uses this approach when allocating CPU resources.
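As a rough sketch, this style of CPU configuration maps to the hpvmcreate command on the VM host. The guest names and values below are hypothetical, and the exact option spellings should be confirmed against the hpvmcreate(1M) manpage on the target system:

```shell
# Sketch: create a guest with 2 virtual CPUs at the default entitlement
# and 2 GB of memory (guest name and sizes are hypothetical)
hpvmcreate -P guest1 -c 2 -r 2G

# An explicit entitlement can also be given per virtual CPU,
# here 50% of a physical CPU for each of 4 virtual CPUs
hpvmcreate -P guest2 -c 4 -e 50 -r 2G
```

With the default entitlement, all guests share unassigned CPU resources equally, as described above.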

Memory Allocation

Each virtual machine must be assigned a portion of the VM host's physical memory. The memory assigned to each virtual machine can be changed by shutting down the VM guest and modifying the amount of assigned memory. When each VM guest is booted, a check is performed to ensure that adequate memory is available on the VM host to boot the VM guest; if there is not enough available memory, the VM guest will not be permitted to boot. When each VM guest operating system is booted, the amount of memory assigned to the VM is locked. Therefore, if a VM guest is assigned 1GB of memory on a VM host with 4GB of memory, booting the VM guest will cause the entire 1GB of memory to become unavailable to the VM host and other VM guests, regardless of the amount of memory the owning VM guest is using. Therefore, the amount of memory assigned to each VM guest should be carefully considered. Allocating too much memory could result in underutilized memory resources, and allocating too little memory could result in excessive memory-swapping and poor performance in the VM guest.
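Because assigned memory is locked while a guest runs, resizing follows a shut-down/modify/boot sequence. The following is a minimal sketch using the Integrity VM commands; the guest name and memory size are hypothetical, and option details should be checked against the hpvmmodify(1M) manpage:

```shell
# Sketch: change a guest's memory assignment (guest must be down)
hpvmstop -P guest1              # shut down the VM guest
hpvmmodify -P guest1 -r 3G      # assign 3 GB (hypothetical value)
hpvmstart -P guest1             # boot fails if 3 GB is not free on the host
```

The boot-time check described above is what causes the final step to fail when the VM host lacks sufficient free memory.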

Networking Configuration

Network configuration for a virtual machine using Integrity Virtual Machines can be set up in a manner very similar to a stand-alone system, nPartition, or virtual partition. In this type of environment, each VM guest is assigned one or more dedicated network adapters that are not shared. However, one of the primary benefits of Integrity Virtual Machines is the ability to share hardware to achieve higher utilization. Therefore, most configurations of Integrity Virtual Machines involve a virtual network switch that runs on the VM host. The vswitch is configured with zero or more physical network adapters as backing devices. When zero physical network adapters are backing the vswitch, it is referred to as a local switch, which means that no network traffic leaves the VM host. Instead, local vswitches serve as a high-speed internal LAN connection between VM guests. Alternatively, a virtual network switch can be configured with a single network adapter that can be shared by multiple VM guests. Finally, using the HP Auto Port Aggregation (APA) product, multiple network adapters can be grouped together to serve as a backing device for a vswitch.

Virtual network switches can be configured as either dedicated or shared. As these terms indicate, a shared vswitch can be used by multiple VM guests, whereas a dedicated vswitch is limited to a single VM guest.
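A vswitch configuration of this kind might look like the following sketch, using the hpvmnet and hpvmmodify commands on the VM host. The switch and guest names are hypothetical, and the syntax should be verified against the hpvmnet(1M) manpage:

```shell
# Sketch: create a vswitch backed by physical NIC lan0 and start it
hpvmnet -c -S vswitch1 -n 0
hpvmnet -b -S vswitch1

# Attach a guest's virtual NIC to the vswitch
hpvmmodify -P guest1 -a network:lan::vswitch:vswitch1

# A local (host-internal) vswitch simply omits the backing NIC
hpvmnet -c -S localsw
```

The local switch in the last step carries traffic only between guests on the same VM host, as described above.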

Storage Configuration

The configuration of storage devices for VM guests involves mapping the desired virtual devices for the VM guest to physical backing devices in the VM host. Virtual storage devices for Integrity Virtual Machines can be a disk or a DVD. The physical backing devices for the virtual devices can be a raw disk device, a block disk device, a logical volume, a DVD, or a file. The most common mapping between virtual storage devices and physical backing devices uses a virtual disk device that maps to a physical raw disk. This provides the VM guest with direct access to the disk device while introducing the least amount of overhead. After configuring this mapping, the VM guest is able to create a volume group, logical volumes, and file systems on the virtual device just as would be done on a stand-alone system.
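These virtual-to-physical mappings are typically expressed as resource strings passed to hpvmcreate or hpvmmodify. The device paths below are hypothetical, and the resource-string format should be confirmed against the hpvmresources(5) documentation on the target system:

```shell
# Sketch: map virtual disks to backing stores on the VM host
# Raw physical disk (direct access, least overhead):
hpvmmodify -P guest1 -a disk:scsi::disk:/dev/rdsk/c5t8d0
# Logical volume as the backing store:
hpvmmodify -P guest1 -a disk:scsi::lv:/dev/vg01/rlvol1
# Virtual DVD backed by the host's DVD drive:
hpvmmodify -P guest1 -a dvd:scsi::disk:/dev/rdsk/c0t0d0
```

After the first mapping, the guest sees an ordinary disk on which it can create a volume group and file systems, as described above.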

Virtual Machine High Availability

The process of making a system highly available can be greatly simplified using Integrity Virtual Machines. The simplicity comes from the ability to configure the VM host once as a highly available system and the consequent ability of each VM guest to partake of the high-availability features without individually configuring each VM guest for high availability. Networking and storage connectivity are two of the primary features that can be configured on the VM host to simplify the high-availability configuration in each VM guest.

Networking High Availability

The networking configuration of VM guests can be one of the most powerful benefits of using Integrity Virtual Machines. The networking configuration of VM guests can be built upon the HP Auto Port Aggregation (APA) product. APA allows multiple physical network interfaces, or links, to be logically grouped together into a single, high-performing, fault-tolerant network interface. Consider a VM host system with four physical network adapters. If four VM guests were to be created on this VM host, one configuration approach would be to assign each VM guest direct access to one of the four physical network adapters. However, if the network adapter assigned to any one of the VM guests were to fail, the associated VM guest would experience complete loss of network connectivity.

An alternate configuration approach is to configure all of the network adapters together using APA on the VM host; in this scenario, all four VM guests can be configured to use the APA device. The end result is that each VM guest gains hardware fault tolerance without degradation in performance or the added cost of redundant hardware devices. In fact, unless all of the VM guests are busy at exactly the same time, each VM guest can experience higher performance because each VM has access to all four physical network devices. Further, should a network interface card or network cable experience a hardware failure, the VM guests can continue operating, but in a slightly degraded state.

Finally, the most significant reason for employing APA on the VM host is that the configuration of APA is performed only on the VM host, and all of the VM guests are afforded the benefits of the configuration. Configuration of the VM guest operating system is greatly simplified because there is no need to perform special network configuration; APA configuration does not need to be performed on each guest. This feature alone greatly simplifies the administration of each VM guest operating system instance while providing a high-performance and fault-tolerant network infrastructure.

Storage High Availability

Another Integrity Virtual Machines configuration component that provides high availability while simplifying administration for VM guests is storage configuration. As with networking configuration, high-availability features that have been available in HP-UX for several years can be exploited to simplify the configuration of VM guests. Each VM host can be configured with multiple fiber-channel adapters for connectivity to a storage array, as is typical for systems with high-availability requirements. Because the redundant paths can be put in place on the VM host, all of the VM guests are able to benefit from the redundant links without configuring them in the VM guest. From the perspective of the VM guest, storage configuration is greatly simplified. There is no need to configure multiple physical paths to the storage device. Instead, all of the VM guests are able to benefit from the VM host's fault-tolerant and high-bandwidth storage connections.

For example, consider four stand-alone servers with fiber-channel connectivity to a storage array. This configuration would traditionally require at least one, and usually two, fiber-channel host bus adapters per server. Using Integrity Virtual Machines, the same four stand-alone servers can be consolidated to a single VM host with four fiber-channel host bus adapters. Instead of each system having a single fiber-channel connection to the storage array, as many as four channels would be available at times of high bandwidth requirements. This results in better performance without additional hardware requirements. Additionally, failure of a fiber-channel host bus adapter, fiber-channel cable, or fiber-channel switch will not result in a loss of storage connectivity. Instead, all of the VM guests will retain connectivity, albeit in a degraded state.

With the most significant VM configuration components and high-availability configuration features covered, the discussion turns to the VM management paradigms used to configure these components.

VM Management Paradigms

Integrity Virtual Machines supports two management paradigms. The first is shown in Figure 6-2, which depicts local management from the VM host. Using this management paradigm, the VM host serves as the management platform and runs the applications necessary for VM management. The HP System Management Homepage is used to launch the virtual machine management GUI. This application relies on the virtual machine WBEM provider to read data from the virtual machine core interfaces. When the virtual machine management GUI makes a change to the VM guest configuration, the virtual machine commands are used. One difference in the management paradigms between nPartitions and VMs is that the virtual machine commands do not rely on the virtual machine WBEM provider, whereas the nPartition commands utilize the nPartition WBEM provider for all tasks. The ramification of this difference is that the VM commands are capable of performing administrative tasks only on the local VM host. The VM commands can be executed remotely by relying on standard remote command execution tools such as remote shell or secure shell.
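Remote execution of the local-only VM commands can be as simple as the following sketch; the host and guest names are hypothetical:

```shell
# Sketch: run VM commands on the VM host from another system via ssh
ssh root@vmhost hpvmstatus              # list guests and their states
ssh root@vmhost hpvmstart -P guest1     # boot a guest remotely
```

This does not change the paradigm: the commands still execute on the VM host itself, with secure shell merely providing the transport.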

Figure 6-2. Local Integrity Virtual Machines Management Paradigm


The second VM management paradigm is shown in Figure 6-3. This diagram illustrates VM management from an HP Systems Insight Manager central management station (CMS). There are three primary differences between the mode shown in Figure 6-2 and that shown in Figure 6-3:

1. The VM management GUI accesses WBEM providers remotely to collect data for the VM host and the VM guests. Since this paradigm relies on network connectivity between the CMS and the WBEM providers, the WBEM connection between the VM management GUI and the WBEM server is encrypted and transported over the network using HTTPS.

2. The VM management GUI executes VM commands remotely on the VM host using secure shell to make VM configuration changes.

3. The VM host is not required to run the HP System Management Homepage. This offloads the responsibility of running the user interface to the CMS, which is intended to host management applications.

Figure 6-3. Remote Integrity Virtual Machines Management Paradigm


An important similarity should be noted when examining the two VM management paradigms: the VM guests are largely uninvolved in both. This intentional omission results in simplified management because all VM management tasks are performed from the VM host. Furthermore, this model allows individual VM guest administrators to focus on management of the operating system, as they typically would with a stand-alone system, and rely on the VM administrator to perform the VM-specific administration tasks.

While the VM guests are largely uninvolved in the VM management paradigms, HP recommends running several WBEM providers on each VM guest and on the VM host: the VM WBEM provider, the I/O tree WBEM provider, and the resource utilization WBEM provider. When the VM WBEM provider runs within the VM guest, its purpose is to identify that it is a VM guest and to provide the universally unique identifier (UUID) of its VM host. This allows tools such as HP Systems Insight Manager to associate VM guests discovered on the network with the appropriate VM host. The I/O tree WBEM provider allows I/O data to be gathered from each VM guest. The resource utilization WBEM provider is used to collect resource usage for the VM guest for the purposes of displaying utilization metrics, allocating resources, and capacity planning.

Finally, it's important to understand that this management paradigm discussion does not involve the typical administration tasks for operating systems running within the VM guests, such as tuning the kernel, adding users, and configuring applications. This discussion applies only to the administration of VM guests. It should also be understood that the two management paradigms are not mutually exclusive. Both the local and remote management paradigms can be utilized in the same environment depending on system administration policies and preferences.



The HP Virtual Server Environment: Making the Adaptive Enterprise Vision a Reality in Your Datacenter
ISBN: 0131855220
Year: 2003
Pages: 197