Global Workload Manager Example Scenario


This example scenario has two parts. The first part demonstrates the integration of gWLM and virtual partitions on HP-UX. In this example, two vPars are configured to share CPU resources based on an own-borrow-lend policy. The second part of the scenario demonstrates the use of gWLM in a Linux environment using the processor affinity support that is standard in the 2.6 version of the Linux kernel. gWLM uses the processor affinity capabilities in the Linux kernel in a way that resembles the functionality of PSETs on HP-UX. At the end of the scenario, the two parts are brought together in an illustration of gWLM's monitoring capabilities for all workloads being managed from the CMS.

Note that Global Workload Manager version 1.0 was used in this example scenario. Although not illustrated here, version 2.0 of Global Workload Manager adds support for Instant Capacity, Temporary Instant Capacity, and Pay per use, and it will integrate with Integrity Virtual Machines and Serviceguard. Finally, version 2.0 of Global Workload Manager will be integrated with Virtualization Manager, as described in Chapter 17, "Virtualization Manager."

Global Workload Manager and Virtual Partition Scenario

The first example scenario illustrates the use of gWLM in a virtual partition environment. There are two vPars, zoo19 and zoo21, configured within the same nPartition named zoo9. Each of the vPars is running two instances of the Oracle database. This is the same configuration used in Chapter 15, "Workload Manager." Table 19-1 summarizes the gWLM configuration for each of the vPars.

Table 19-1. Virtual Partition Configurations

Virtual Partition   Workload Name     CPUs Owned   Minimum CPUs   Maximum CPUs   Shared Resource Domain
zoo19               zoo19.vpar.dbs    4            1              7              zoo9.vpar.dbs
zoo21               zoo21.vpar.dbs    4            1              7              zoo9.vpar.dbs


An important distinction when comparing gWLM version 1.0 to Workload Manager is that gWLM does not allow workload compartments in a single SRD to be hierarchical. For example, an SRD may contain multiple vPars, or an SRD may contain a single vPar that then contains processor sets or FSS groups. However, an SRD cannot contain multiple vPars that are also hosting processor sets or FSS groups. When gWLM is managing an SRD comprised of multiple vPars, all of the processes within each vPar are considered a single workload. Therefore, gWLM treats the sales_db and finance_db workloads that are running in the zoo19 vPar as one workload. If either of them is busy, gWLM will allocate more resources to the zoo19 vPar, provided zoo21 is not fully utilizing those it owns. This also means that sales_db and finance_db are on equal ground with respect to workload priority and resource allocation. In the example scenario shown in Chapter 15, "Workload Manager," the finance_db was given a higher priority; thus, if it was busy, the sales_db workload would get its resources only after finance_db had met its service-level objectives. The benefit of the gWLM model is a major simplification in the configuration and management of the workloads because workloads within each vPar need not be individually specified and monitored.

Configuring Managed Nodes

The first step in configuring gWLM is to set up and boot the vPars according to the requirements of each of the workloads. This includes memory allocation, network connectivity, and storage configuration. HP highly recommends that you configure each vPar's minimum number of CPUs to be 1 and the maximum number of CPUs to be the total number of CPUs in the nPartition or stand-alone system. Configuring the minimum and maximum values as described enables gWLM policies to be created that specify the desired minimum and maximum number of CPUs. The benefit of relying on gWLM policies instead of the vPar configuration to define the minimum and maximum number of CPUs is that gWLM policies can be changed without rebooting the operating system. Chapter 5, "Virtual Partitions," explains the process of configuring vPars similar to those used in this example scenario.
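
As a rough illustration only, the CPU count and bounds for a vPar such as zoo19 might be adjusted with the vparmodify command before gWLM takes over resource allocation. The exact resource-specification syntax varies by vPars release, so treat the options below as an assumed sketch and consult Chapter 5, "Virtual Partitions," or the vparmodify(1M) manual page for the precise form:

 # vparmodify -p zoo19 -m cpu::4       (assign four CPUs to the vPar)
 # vparmodify -p zoo19 -m cpu:::1:8    (allow the vPar to range from 1 to 8 CPUs)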

Before configuring gWLM, the workloads must also be configured and running, at least to the point where the workload can be identified by gWLM. In this case, the Oracle databases are up and running before the configuration process for gWLM is started.

After configuring each of the vPars and their workloads, the next step is to install the gWLM agent software on each of the vPars. This software is contained in the bundle T2743AA and will be shipped as part of the HP-UX 11i Operating Environments starting in 2005. For Linux, the software is available in a Red Hat Package Manager (RPM) package named gWLM-Agent-A.xx.yy. The exact version number has been removed because the version number will change with each release of the gWLM agent.

HP recommends that you configure the gWLM agent to start automatically at boot time on all of the managed nodes. This ensures that the gWLM agent will be restarted in the event of a system failure or scheduled reboot. The gWLM agent can be configured to start automatically by editing the configuration file located at /etc/rc.config.d/gwlmCtl for HP-UX and /etc/sysconfig/gwlmCtl for Linux. The GWLM_AGENT_START variable should be set to 1.
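
For example, after the edit, the relevant entry in /etc/rc.config.d/gwlmCtl (or /etc/sysconfig/gwlmCtl on Linux) would simply read:

 GWLM_AGENT_START=1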

The next step involves the optional configuration of the properties file for the gWLM agent. This file is located in /etc/opt/gwlm/conf/gwlmagent.properties. For most installations the default properties file is adequate; however, you can configure properties such as the logging level and timeout using this file.

Finally, the gWLM agent should be started using the following command:

 # /opt/gwlm/bin/gwlmagent 
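
A quick, generic way to confirm that the agent is running is to check for the gwlmagent process in the process table; this is an ordinary process check rather than a gWLM-specific tool:

 # ps -ef | grep gwlmagent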

Configuring Global Workload Manager's CMS

The configuration of the gWLM managed nodes is complete and now the gWLM CMS software must be configured. This example scenario does not describe the process of installing and configuring the HP Systems Insight Manager product. That product must be installed and properly configured before continuing through the scenario.

After installing and configuring HP Systems Insight Manager, the first step is to install and configure the gWLM CMS software that is contained in the bundle T2412AA. At the first release of gWLM, HP-UX is the only supported operating system for the CMS. HP recommends users allocate 4GB of space in the /var directory for every 100 workloads. This space will allow approximately two years' worth of data to be stored for capacity-planning and performance management purposes. This space recommendation applies only to the CMS, not the managed nodes.

The next step is to initialize the gWLM CMS by running the following command:

 # /opt/gwlm/bin/gwlminitconfig initconfig 

Next, the gWLM CMS daemon should be configured to start automatically when the CMS server is restarted or experiences a system failure. This is done by changing the GWLM_CMS_START variable to 1 in the /etc/rc.config.d/gwlmCtl file.
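
As with the agent configuration, the resulting entry in /etc/rc.config.d/gwlmCtl on the CMS would read:

 GWLM_CMS_START=1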

The next step involves configuring the properties file for the gWLM CMS daemon and gWLM user interface. For most installations the default properties file is adequate and does not require editing. The file is located in /etc/opt/gwlm/conf/gwlmcms.properties and contains variables such as the logging level, caching sizes, and settings for the graphs displayed in the GUI.

After you have modified the gWLM configuration files, you'll need to take steps to ensure that the communications between each of the gWLM agents and the CMS are secure. The gwlmsslconfig(1M) manual page describes this procedure in detail. It involves creating certificates on each of the managed nodes and copying the public key to every node in the Shared Resource Domain, including the CMS. This step is not required, but HP recommends it.

Finally, the gWLM CMS daemon should be started by executing the following command:

 # /opt/gwlm/bin/gwlmcmsd 

Creating the Global Workload Manager Policy

At this point, gWLM agents are running on both of the vPars, and HP Systems Insight Manager and the gWLM CMS daemon are both configured and running. The next step is to create a new policy for the zoo19 and zoo21 vPars using the gWLM GUI. In this example scenario, the CMS is rex04. Directing a web browser to the standard HP SIM URL opens the HP Systems Insight Manager main screen. In this example, rex04.fc.hp.com is the hostname of the CMS, so that value is specified in the URL:

 https://<CMS hostname>:50000 
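
With the CMS hostname from this scenario substituted, the URL becomes:

 https://rex04.fc.hp.com:50000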

The screen in Figure 19-4 shows HP Systems Insight Manager with the list of gWLM actions. This menu is reached by selecting the Optimize menu and then Global Workload Manager (gWLM). From this menu, most of the actions available within gWLM are readily accessible. To create a new policy, select Edit Policies.

Figure 19-4. gWLM Action Menu


Figure 19-5 shows the list of gWLM policies currently defined on the CMS. From this page, a new policy can be created or an existing policy can be modified or removed. The other tabs shown in this view are the list of SRDs on the Shared Resource Domains tab, the list of workloads on the Workloads tab, and the associations between workloads and policies on the Associations tab.

Figure 19-5. gWLM Policy List


Many standard policies are shipped with gWLM. In many cases, a new policy does not need to be created because the pre-defined policies are adequate. However, in this scenario the maximum number of CPUs for each of the vPars will be seven, because each vPar must retain at least one CPU; neither vPar can have all eight of the CPUs in the zoo9 nPar. As a result, the standard policy of owning four CPUs with a maximum of eight is close to what is needed, but it is not correct. You must create a new policy by clicking on the New button.

The screen in Figure 19-6 is the interface that allows a new policy to be defined. For this scenario, a new own-borrow-lend policy will be defined. Specify the name of the policy first, then select the type OwnBorrow. Finally, specify the minimum, owned, and maximum number of CPUs according to the values listed in Table 19-1. When you click on the OK button, the policy is created and is displayed in the list of existing policies. The newly created policy will be used in the next step, which goes through the Manage New Systems wizard.

Figure 19-6. gWLM Create New Policy Screen


Manage New Systems with Global Workload Manager

The two vPars must now be configured to operate under the control of gWLM in a Shared Resource Domain with the newly created policy associated with each of them. This is performed by selecting the Manage New Systems action from the main Global Workload Manager (gWLM) menu in HP Systems Insight Manager. The screen in Figure 19-7 is the first step of the Manage New Systems wizard. The first task is defining the systems to be part of the Shared Resource Domain. In this example, zoo19 and zoo21 are specified in the System list field. Clicking on the Next button causes gWLM to contact each of the gWLM agents on the systems to determine what type of compartment is being managed.

Figure 19-7. gWLM Manage New Systems Screen


Important

The gWLM agent must be running on every node specified in the list of systems for the nodes to be discovered.


After you have specified the set of systems to be part of the shared resource domain, gWLM queries the gWLM agents on each of the systems. The resulting compartment layout is shown in Figure 19-8. gWLM discovered that both vPars are within the same nPartition, zoo9, and within the same complex named zoo. gWLM has also selected the default compartment type according to the information discovered. For this scenario, the compartment type is vPar, in which case gWLM will create a single SRD. If the compartment type were changed to PSET or FSS group, then gWLM would create a separate SRD for each vPar. This is because gWLM version 1.0 does not support hierarchical compartments. When the vPars are members of the SRD, as they are in this example, gWLM does not support additional compartments within the vPars.

Figure 19-8. gWLM Manage New Systems Compartment Type Screen


The next step of the Manage New Systems wizard, shown in Figure 19-9, is to specify the name of the Shared Resource Domain and to decide whether it should operate in advisory or managed mode. For this example, the name is specified as zoo9.vpar.dbs. Initially the shared resource domain will be placed in advisory mode so that no configuration changes will be made by gWLM. The graphs and reports will display the actions gWLM would take if it were in managed mode, but it will not make changes to the vPars while it is in advisory mode.

Figure 19-9. gWLM Manage New Systems SRD Name and Mode


Next, a policy must be associated with each of the vPars. Remember that all of the processes in each of the vPars are considered to be a single workload, so the policy is associated with the entire vPar, not with the individual Oracle database instances. The screen in Figure 19-10 provides an interface to define the names of the workloads and associate a policy with each of them. The name of each workload should be globally unique within the CMS to allow individual workloads to be recognized and managed. The hostname or any other identifier can be used as the name of the workload. In this example, the name of the vPar is specified along with the word "vpar" to make it clear that the workload is a vPar, followed by the type of workload running within the vPar, which is "dbs" in this example.

Figure 19-10. gWLM Manage New Systems Policy Association Screen


After specifying the names of the workloads, the next step is to choose the policy to be associated with the workloads. This is the step where the policy defined earlier in this example is selected. From the list of policies, the Owns_4_CPUs-Max_7_CPUs policy is chosen for each of the vPars.

The final step in the Manage New Systems wizard is the summary screen shown in Figure 19-11. This screen provides the details of the workloads and their associated policies. When the Finish button is clicked on, gWLM will deploy the policies to each of the vPars and begin to monitor the systems. Since the shared resource domain is in advisory mode, no changes will be made to the CPU allocation between vPars based on their utilization. Instead, the gWLM reports and graphs will show the resource allocation changes it would make if it were in managed mode.

Figure 19-11. gWLM Manage New Systems Summary Screen


Monitoring Workloads in Virtual Partitions

One of the most valuable tools provided in the gWLM GUI is the graphing of workload utilization and allocation. The value comes from being able to see the current workload utilization in conjunction with the resource allocation changes that gWLM would make in managed mode. Figure 19-12 shows the real-time report for the zoo19 vPar, which is available by selecting View Real-time Reports from the gWLM action menu.

Figure 19-12. Workload Utilization in Advisory Mode for zoo19


During the time period from 12:06 to 12:10 shown in Figure 19-12, the workload CPU utilization was very low. During that same period, as the graph shows, the allocation of resources would be decreased. It is important to realize that the gWLM graphs showing resource allocation while in advisory mode depict only the first step gWLM would take. In this example, the graph shows that a single CPU would be removed from zoo19, but in reality, after removing one CPU, gWLM would likely continue removing CPUs until the minimum value was reached because the CPU utilization is less than one CPU in zoo19 during the aforementioned time period.

In addition to the utilization and allocation traces, the graph also displays the minimum and maximum compartment sizes, which are constant in this scenario.

Figure 19-13 shows the workload utilization graph for zoo21. This graph shows that the workload in the zoo21 vPar is constantly at peak utilization. A careful comparison of the graph shown in Figure 19-12 and Figure 19-13 reveals gWLM's power. During the same time period of 12:06 to 12:10, CPU utilization in zoo21 was extremely high. If gWLM had been in managed mode, it would have added CPUs to the zoo21 vPar because the zoo19 vPar wasn't using them. Again, the height of the CPU allocation trace isn't the important factor. Instead, the movement of the trace indicates when gWLM would make a resource allocation change.

Figure 19-13. Workload Utilization in Advisory Mode for zoo21


Converting the Shared Resource Domain to Managed Mode

At this point, gWLM has been running for an adequate amount of time for the user to feel comfortable with its capabilities. Therefore, the SRD is being converted from advisory to managed mode. This is performed by selecting the Edit SRDs item from the Global Workload Manager action menu. This action opens a screen similar to that shown in Figure 19-5, except that the Shared Resource Domains tab is selected. From this screen, select the zoo9.vpar.dbs SRD and click on the Edit button. The resulting Edit Shared Resource Domain screen is shown in Figure 19-14. This window provides mechanisms for changing the name, toggling the mode, and switching the state of the SRD. In this instance, only the mode is changed from advisory to managed. When you click on the OK button, the change takes effect immediately and gWLM begins actively managing the resources in the two vPars.

Figure 19-14. Edit Shared Resource Domain


Monitoring Workloads in Managed Mode

Once you have changed the mode of the SRD to managed, the real-time reports will provide a clear view of the actions gWLM is taking while it ensures that each workload is receiving its required resources and that the overall resource utilization is as high as possible. The graph in Figure 19-15 shows that the actions taken match closely with the hypothetical resource changes that were shown when the SRD was in advisory mode. From the time period of 12:36 to 12:40, the resource utilization of the zoo19 vPar was low, and its size was reduced to a single CPU. During the period from 12:40 to 12:44, additional CPUs were added to zoo19 until it reached the maximum compartment size of seven CPUs. Viewing the workload utilization for zoo21 will complete the picture regarding the overall resource utilization for the entire zoo9 nPartition.

Figure 19-15. Workload Utilization in Managed Mode for zoo19


The graph in Figure 19-16 shows the workload utilization for zoo21. Using the time values to correlate the resource allocation between the two graphs, from 12:36 to 12:40 the zoo21 vPar was consuming close to 100% of its resources. Since zoo19 was idle during the same period, zoo21 was allocated its compartment maximum of seven CPUs. At 12:40, the workload in zoo21 began to lighten, and as a result, CPUs were migrated to zoo19.

Figure 19-16. Workload Utilization in Managed Mode for zoo21


Reporting with Global Workload Manager

At this point, the workloads in each of the vPars are running under the control of gWLM. However, because of gWLM's automation capabilities, a common question gWLM customers ask is, "How have my workloads been running and what has gWLM been doing with the resources available?" The gWLM product provides a workload reporting tool. Three types of workload reports are available from the gwlmreport command:

A Resource Audit report shows detailed information for the specified workloads, including the shared resource domain and compartment, policy, and utilization information.

A Top Borrowers report provides utilization information for each of the specified workloads and provides a clear picture of which workloads are consuming the bulk of the resources.

A Data Extraction report provides an interface to extract data from gWLM's workload database in a machine-readable format for use in alternate data analysis tools.

A resource audit report for the zoo19.vpar.dbs workload is shown in Listing 19-1. Two of the most important sections in the report are the Samples and Utilization sections. The Samples section provides a summary of the overall gWLM state during the samples represented in the report. Of particular interest are the Borrow and Want2Borrow metrics. If the Borrow metric is regularly high, one or more workloads are frequently borrowing resources. This is not necessarily indicative of a problem, but continued patterns may indicate that the policies should be reconfigured to more accurately reflect each workload's requirements. Second, if Want2Borrow is high on a regular basis, one or more workloads are requesting more resources than they have been allocated. Again, this does not necessarily represent a problem, but it could indicate that the policies are not properly configured for the requirements of the workloads. Finally, the Utilization section at the end of the report shows the peak, average, and minimum resource utilization. In addition, the trade balance field shows that zoo19 had a negative trade balance. This means that zoo19 was lending resources more often than it was borrowing from zoo21. A Top Borrowers report is shown in Listing 19-2.

Listing 19-1. Global Workload Manager Resource Audit Report
 # /opt/gwlm/bin/gwlmreport resourceaudit \
 > -workload=zoo19.vpar.dbs
 Generating report for workload(s) [zoo19.vpar.dbs].
 Please be patient, this may take several minutes.
 #---------- Resource Audit for zoo19.vpar.dbs ----------
 #- Report information:
     ReportDate=2004/12/31 10:39:25 MST
     Workload=zoo19.vpar.dbs
     TotalSamples=    1076
     AvgSampleDuration=0.9979117565055762 (min)
     ReportDateRange= [2004/12/30 - 2004/12/31]
     PossibleSamples= 1346.812622697556
 #-- Workload context information (from most recent sample):
 #- Shared Resource Domain info:
     SRDName=         zoo9.vpar.dbs
     SRDMode=         Managed
 #- Policy info:
     PolicyName=      Owns_4_CPUs-Max_7_CPUs
     PolicyType=      OwnBorrow
     PolicySettings=  [min=1.0/own=4.0/max=7.0]
 #- Compartment info:
     CompartmentName=   zoo19
     CompartmentType=   vpar
     CompartmentHost=   zoo19.fc.hp.com
 #- Samples info:      (%)       (count)
     Total=           100.0%     1076
     OwnedOnly=       041.0%     441
     Borrowing=       005.6%     60
     Lending=         053.4%     575
     Able2Lend=       058.7%     632
     Want2Borrow=     038.1%     410
     CompClipped=     000.0%     0
     PolClipped=      001.2%     13
     PriClipped=      047.4%     510
 #- Utilization info:
     AvgUtil=          049.19
     MaxUtil=          100.00   (Date occurred=2004/12/30 20:03:00 MST)
     TradeBalance=    -001.19   (negative=Lending, 0=Own, positive=Borrowing)
     AvgUtilWhileLending=       021.01
     MaxUtilWhileLending=       100.00   (Date occurred=2004/12/30 20:03:00 MST)

Listing 19-2. Global Workload Manager Top Borrowers Report
 # /opt/gwlm/bin/gwlmreport topborrowers \
 > -workload=zoo19.vpar.dbs \
 > -workload=zoo21.vpar.dbs \
 > -duration=1day \
 > -startdate=2004/12/30
 Generating report for workload(s) [zoo19.vpar.dbs, zoo21.vpar.dbs].
 Please be patient, this may take several minutes.
 #---------- Top Borrowers Report ----------
 #- Report information:
     ReportDate=2004/12/31 toc=1104514326563
 Workload         Number     Lend/Own/Borrow   Avg      TradeBalance
                  Samples         %/%/%        Util
 zoo21.vpar.dbs   564        5.0/41.5/53.5     088.27    001.20
 zoo19.vpar.dbs   564        53.5/41.5/5.0     049.01   -001.20

The first portion of this example scenario is now complete. Both vPars are running under the control of gWLM and the workloads are being actively monitored. The detailed reporting capabilities provide a mechanism to generate monthly, weekly, or even daily reports. The next portion of the example scenario configures a Linux nPartition to use processor sets that will provide resource isolation between the workloads running within the nPartition.

Global Workload Manager and Linux Processor Sets Scenario

This example illustrates the use of gWLM to provide resource isolation, whereas the previous part of the example scenario focused on maximizing resource sharing. Table 19-2 shows the configuration of the three processor sets within the zoo20 nPartition.

Table 19-2. Linux Processor Set Configurations

Processor Set         Workloads             CPUs Owned   Shared Resource Domain
zoo20.pset0.default   All other processes   1            zoo20.psets.tomcat_samba
zoo20.pset1.samba     App: /usr/sbin/smbd   1            zoo20.psets.tomcat_samba
                      App: /usr/sbin/nmbd
zoo20.pset2.tomcat    User: www             2            zoo20.psets.tomcat_samba


The first processor set will be allocated a single CPU and will run all processes that are not associated with one of the other two workloads. The second processor set will be dedicated to running a Samba server and is also assigned a single CPU. This workload is specified by using the full path to the application's executables. The final processor set is for the Tomcat application server. The processor set is assigned two CPUs and is identified by the name of the user who owns the Tomcat process. The name of the SRD for these three processor sets will be zoo20.psets.tomcat_samba.

Manage New System with Global Workload Manager

The first step in creating the processor sets for the zoo20 nPartition is to go through the Manage New Systems wizard. The first step of the wizard, which is not shown, provides an interface to specify the systems to be managed. In this case, the only system in the SRD is zoo20. The next step, shown in Figure 19-17, displays the layout of the discovered system. Notice that there is only the default PSET under the zoo20 nPartition at this time. Since the compartment type will be PSET in this part of the scenario, that option is selected.

Figure 19-17. Manage New Systems Wizard Processor Set Compartment Type


Clicking on the Next button takes you to the screen in Figure 19-18. This screen provides the interface to specify the name of the SRD and the mode. The mode is set to Managed from the beginning for this part of the scenario. Since the PSETs will be fixed in size, gWLM won't be making any changes to the compartments even in managed mode. Finally, the number of workloads in addition to the default PSET is specified. In this case, two workloads will be added.

Figure 19-18. Manage New Systems Shared Resource Domain Name and Mode


The next screen is shown in Figure 19-19. This screen allows you to specify the names of the workloads and select a policy to be associated with each of the workloads. Allocate a single CPU to the default PSET by selecting the Fixed_1_CPU policy. Similarly, assign a single CPU to the Samba workload. Allocate the final two CPUs in the nPartition to the Tomcat workload by selecting the Fixed_2_CPUs policy. Both of the policies used in this part of the scenario are standard policies provided by gWLM.

Figure 19-19. Manage New Systems Policy Selection


After the names for the workloads have been specified and the policies have been selected, the SRD summary is displayed. Selecting Finish on the summary screen causes the SRD and the associated processor sets to be created. The next step in the scenario is to edit the two new workloads for Tomcat and Samba so that the processes are placed within the correct PSET.

Edit Workloads

After creating the SRD for zoo20, you'll need to modify workload definitions for the Tomcat and Samba applications. Use the Edit Workloads option in the gWLM action menu to display the list of workloads. From this screen, select the Samba workload and click on the Edit button; this will take you to the screen shown in Figure 19-20. Click on the New button below the Applications tab to add a new application entry for the workload. The first Application Pathname to be specified is /usr/sbin/smbd for this example. When the application path has been specified, click on the OK button below the Alternate Name List. Repeat this process for the /usr/sbin/nmbd executable, which is also associated with the Samba application. The final result is the list of application paths that contains the set of executables that constitute the workload. After entering both of the executables, clicking on the OK button below the Workload Name field updates the workload.

Figure 19-20. Specify Workload by Application Pathname for Samba Workload


Now the Tomcat workload must be modified. Select it from the list of workloads and click on the Edit button. This workload will be identified by username instead of executable path. Tomcat is a Java-based application, and it is somewhat difficult to distinguish Java processes from one another, so it is often more effective to identify the workload by the name of the user who owns the process. For this example, the user www, which owns the processes associated with the Tomcat application, is specified in the screen shown in Figure 19-21. Clicking on OK on this screen updates the Tomcat workload definition.

Figure 19-21. Specify Workload by User Name for Tomcat Workload


Monitoring Processor Set Workloads

At this point, the three processor sets are configured and gWLM is monitoring their resource usage. The graph shown in Figure 19-22 illustrates the default processor set and its resource consumption. This graph provides a prime example of a situation where gWLM is beneficial when applications are stacked on the same system. In this case, a process or set of processes running in the default PSET is consuming the entirety of its resource allocation. If gWLM and the associated processor sets were not in place, the processes consuming the CPU resources could overtake the Tomcat and Samba processes, resulting in dissatisfied end users. With gWLM in place, however, those processes are confined to a single processor and cannot affect the CPU resources allocated to the other two workloads.

Figure 19-22. Workload Utilization for the Default Workload


The graph shown in Figure 19-23 further illustrates how the default processor set could have overtaken the other workloads and how gWLM isolates them. Notice that during the same time period, between 14:36 and 14:47, when the default PSET workload was at peak utilization, the Tomcat workload had excess capacity. If Tomcat needed the CPU resources, they were immediately available. The obvious drawback to the fixed resource allocation approach is that the unused resources are not shared and can be underutilized.

Figure 19-23. Workload Utilization for Tomcat Workload


As of now, the workloads in the zoo20 nPartition are isolated from one another in terms of CPU resources. Each workload receives a fixed allocation of CPU resources and those resources are guaranteed to be available regardless of whether they are being used. The task of administering the workloads under the control of gWLM is very lightweight at this point.

Viewing the Workload Summary

With both of the shared resource domains in this example scenario up and running, monitoring and occasional minor configuration adjustments based on workload utilization changes are the only interactions you will need to have with gWLM. The screen shown in Figure 19-24 provides a high-level summary of the SRDs defined on the CMS for the two parts of this example scenario. This summary shows the overall CPU resource utilization and status for all of the SRDs on the CMS in a single view.

Figure 19-24. Global Workload Manager Workload Summary Screen


In addition to the graphical summary screen shown in Figure 19-24, gWLM also provides a command-line interface to monitor the SRDs or workloads. The output shown in Listing 19-3 is the monitoring command-line interface, which continually refreshes. As with the graphical view, the list of SRDs is displayed along with their respective size and CPU resource utilization.

Listing 19-3. Global Workload Manager Monitor Command
 # /opt/gwlm/bin/gwlm monitor
 Thu Jan 8 15:25:15 2005
 Number of deployed Shared Resource Domains: 2
 Shared Resource Domain     Allocation        Size  Utilization
 _________________________  __________  __________  ___________
 zoo9.vpar.dbs                8.00 CPU    8.00 CPU       87.7 %
 zoo20.psets.tomcat_samba     4.00 CPU    4.00 CPU        0.8 %
 _________________________  __________  __________  ___________
 Totals                      12.00 CPU   12.00 CPU       58.7 %

 Thu Jan 8 15:25:30 2005
 Number of deployed Shared Resource Domains: 2
 Shared Resource Domain     Allocation        Size  Utilization
 _________________________  __________  __________  ___________
 zoo9.vpar.dbs                8.00 CPU    8.00 CPU       87.6 %
 zoo20.psets.tomcat_samba     4.00 CPU    4.00 CPU        0.4 %
 _________________________  __________  __________  ___________
 Totals                      12.00 CPU   12.00 CPU       58.5 %


