Example Secure Resource Partitions Scenario


This example scenario walks through the process of configuring SRPs within two different vPars. Each of the vPars is running within a single nPartition. Two SRPs will be created within each of the vPars and each SRP will contain an instance of an Oracle database.

Table 7-1 shows the initial configuration of the system. In this example, the zoo21 and zoo19 virtual partitions reside within a two-cell, eight-CPU nPartition. The nPartition has a total of 8 GB of memory. The zoo21 virtual partition contains two database instances, the HR and Payroll databases. The zoo19 virtual partition also contains two database instances, the Finance and Sales databases.

Table 7-1. Initial System Configuration

Database Instance   Contained In   Num CPUs   Amount of Memory
-----------------   ------------   --------   ----------------
hr_db               zoo21 (vPar)   4          2048 MB
payroll_db          zoo21 (vPar)
finance_db          zoo19 (vPar)   4          2048 MB
sales_db            zoo19 (vPar)

(The Num CPUs and Amount of Memory values apply to the vPar as a whole and are shared by both of its database instances.)


The system's current configuration as shown in Table 7-1 does not allow resource controls on the individual databases. In this example, the HR database is expected to be three times as busy as the Payroll database. However, nothing in the current implementation ensures that the HR database has access to the required resources when they are needed. One solution would be to create another virtual partition for each of the database instances, but the result would be twice the number of operating system instances to install, configure, and maintain. In addition, the database application is able to run multiple instances within a single operating system with little risk of one instance affecting the others. The primary aspects to consider when stacking similar applications such as database instances are coordinating maintenance windows and ensuring application availability. The former requires up-front planning and coordination for scheduling hardware and software maintenance. The risks associated with application availability in a consolidated environment can be mitigated by employing HP Serviceguard as described in Chapter 16, "Serviceguard."

Another common area of concern when consolidating applications is resource contention. To remedy this problem, Secure Resource Partitions will be implemented to ensure that each instance of the database receives a fair share of resources. Table 7-2 shows the final resource allocation for each of the database instances. In the zoo21 vPar, PSET PRM groups are used to guarantee CPU resource allocation. In the zoo19 vPar, FSS PRM groups are used to ensure proper CPU resource allocation. In addition, the PRM memory manager will be enabled in the zoo19 vPar to ensure that each PRM group has adequate memory available and that each of the instances receives its required memory. CPU capping will also be enabled in the zoo19 implementation of PRM.

Table 7-2. Final System Configuration

SRP Group    Contained      Num    CPU          Security      Memory      Memory      CPU           CPU
Name         In             CPUs   Manager      Compartment   Manager     Capping     Entitlement   Capping
----------   ------------   ----   ----------   -----------   ---------   ---------   -----------   -------
hr_db        zoo21 (vPar)   4      Yes (PSET)   No            No          No          3 CPUs        No
payroll_db   zoo21 (vPar)          Yes (PSET)   No            No          No          1 CPU         No
finance_db   zoo19 (vPar)   4      Yes (FSS)    No            Yes (60%)   Yes (80%)   60%           Yes
sales_db     zoo19 (vPar)          Yes (FSS)    No            Yes (40%)   Yes (60%)   40%           Yes


Configuring PRM Processor Set Groups

The first step in configuring the PRM software is ensuring that the software is properly installed. The PRM software is bundled in the HP-UX 11i Enterprise Operating Environment and consists of three components: the kernel infrastructure, the PRM user-level library, and the PRM product itself. The following command can be used to verify that the software is installed:

# swlist -l product | grep "Process Resource Manager"
  PRM-Sw-Krn      C.01.01   Process Resource Manager PRM-Sw-Krn product
  PRM-Sw-Lib      C.02.02   Process Resource Manager PRM-Sw-Lib product
  Proc-Resrc-Mgr  C.02.02   Process Resource Manager Proc-Resrc-Mgr Product

After verifying the PRM software is properly installed, the PRM configuration process can be initiated. PRM allows a configuration file to be edited either directly or through a graphical user interface (GUI). For the entirety of this scenario, the PRM GUI will be used to create the configuration file. The generated configuration file will be shown after the configuration process is complete.

The PRM GUI is located in /opt/prm/bin/xprm. The initial PRM screen is shown in Figure 7-3. The name of the system is shown on the left-hand side and the default configuration file is shown on the upper-right-hand side. Notice that the status of the configuration file shows it is not currently loaded. On the lower-right-hand side, each of the PRM managers is listed along with its respective status value.

Figure 7-3. PRM Graphical Configuration Tool


In order to create the desired configuration, select the default configuration file, /etc/prmconf. Then choose Edit from the Action menu. This opens a forms-based editor for configuring PRM groups, as shown in Figure 7-4.

Figure 7-4. Secure Resource Partition Configuration Editor


The first PRM group to be created is the hr_db group. This group will be allocated three CPUs in a PSET. To create the group, specify the name in the Group field. Select the PSET checkbox. Finally, specify three CPUs in the Number of CPUs field. Add the PRM group to the configuration by pressing the Add button.
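For reference, this step produces a single group/CPU record in the configuration file (it appears later in Listing 7-2), with PSET in the second field and, judging by that listing, the number of CPUs in the fifth field:

 hr_db:PSET:::3: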

The second PRM group is the payroll_db group. As shown in Table 7-1, there are four CPUs in the zoo21 vPar. Three of the CPUs have been allocated to the hr_db PRM PSET group. This leaves one processor in the default PSET, which must always contain at least one processor. As a result, all processes other than those belonging to the hr_db PRM group will run in the default PSET. The configuration of PRM groups could actually stop here because the hr_db will be allocated 3 CPUs and the payroll_db instance will be restricted to the single processor in the default PSET. However, the payroll_db group would then be competing with all the other processes on the system, including login shells, cron jobs, and system daemons. In order to ensure that the payroll_db receives the majority of the CPU resources in the default PSET, a fair share scheduler group will be created within the default PSET. The payroll_db FSS group will be allocated 90 percent of the CPU resources in the default PSET. This is achieved by specifying the name in the Group field and specifying the value 900 in the Shares field. Next, 100 shares are allocated to the OTHERS group. This creates the desired 90% allocation to the payroll_db group; the remaining 10% is for all the other processes on the system.
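The corresponding group/CPU records, as they appear later in Listing 7-2, carry these share values directly; judging by the prmlist output in Listing 7-1, the second field of an FSS record is the PRM group ID and the third is the number of shares:

 OTHERS:1:100::
 payroll_db:2:900::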

The final PRM group configuration is shown in Figure 7-5. Notice that the hr_db group has 75% of the system's resources with three CPUs. The payroll_db group has been allocated 22.5% of the system and the OTHERS group has been allocated 2.5% of the system. The payroll_db and the OTHERS group are sharing the default PSET. The PRM_SYS group is not explicitly listed in the configuration, but the PRM software will create the group automatically. The PRM_SYS group will be allocated a percentage of the resources in the default PSET to ensure that system processes receive the resources they need; the actual utilization is typically low for processes in the PRM_SYS group.
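These entitlement percentages follow directly from the CPU topology and the share assignments: hr_db holds three of the four CPUs, and the remaining CPU (25% of the system) is divided between payroll_db and OTHERS in proportion to their 900 and 100 shares:

 hr_db:       3 of 4 CPUs                = 75.0%
 payroll_db:  25% x (900 / 1000 shares)  = 22.5%
 OTHERS:      25% x (100 / 1000 shares)  =  2.5%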

Figure 7-5. Final PRM Group Configuration


Having completed the PRM group and CPU configuration, the PRM applications must be specified. The PRM application manager uses application records to determine the group to which processes should be assigned. To configure the PRM applications, select the Applications tab in the PRM configuration editor, as shown in Figure 7-6. For each of the PRM groups, an application record is defined. An application record consists of the application name, which is the full path to the executable; the group to associate with the application; and a set of alternate names. The alternate name list is especially important for applications such as Oracle databases because the database processes rename themselves upon startup according to the name of the instance specified in the ORACLE_SID environment variable. When the application manager is assigning processes to PRM groups, special checks are performed to ensure that the executable matching an alternate name is really the same executable as specified in the application field. For example, a shell script whose name matches the regular expression for a PRM group's alternate name will not be moved to the PRM group even though the name of the shell script matches the regular expression. This check ensures that only the desired applications are running within each PRM group.

Figure 7-6. PRM Secure Resource Partition Applications


To configure application records, specify the absolute path of the executable in the Application field. Wildcard characters are allowed in the filename portion of the application name. This is useful when all of the executables in a given directory should be associated with a specific PRM group. Use the Group field to select the PRM group as configured in Figure 7-5. Finally, specify alternate names in the Alternate Names field. These steps are performed for the hr_db and payroll_db database instances. Notice that the Application field is identical for both groups; the Alternate Name field is the distinguishing factor that appropriately assigns the processes to their PRM groups.
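The resulting application records (shown later in Listing 7-2) place the absolute path first, with the assigned group and the alternate-name pattern together in the final field, separated by a comma:

 /oracle/app/oracle/product/9.2.0.1.0/bin/oracle::::hr_db,ora*hr
 /oracle/app/oracle/product/9.2.0.1.0/bin/oracle::::payroll_db,ora*payroll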

The PRM configuration is now complete, but the file has not been saved. If you press the OK button on the dialog box shown in Figure 7-6, the application returns to the original PRM dialog shown in Figure 7-3. The configuration will have a status of Not Loaded and the Modified field will indicate Yes. Save the PRM configuration by selecting Configuration File from the Action menu and then selecting the Save menu item.

After saving the PRM configuration, you can view it from the command line using the prmlist command or by viewing the configuration file directly with a text editor. Listing 7-1 shows the output of the prmlist command.

Listing 7-1. PRM Configuration with prmlist
# /opt/prm/bin/prmlist
PRM configured from file:  /etc/prmconf
File last modified:        Wed Oct  6 20:06:32 2004

PRM Group                      PRMID    CPU Entitlement
-------------------------------------------------------
OTHERS                             1              2.50%
hr_db                             65             75.00%
payroll_db                         2             22.50%

PRM User       Initial Group      Alternate Group(s)
----------------------------------------------------
adm            OTHERS
bin            OTHERS
daemon         OTHERS
hpdb           OTHERS
lp             OTHERS
nobody         OTHERS
nuucp          OTHERS
root           (PRM_SYS)
smbnull        OTHERS
sys            OTHERS
uucp           OTHERS
webadmin       OTHERS
www            OTHERS

PRM Application                Assigned Group Alternate Name(s)
---------------------------------------------------------------
/oracle/app/oracle/product/9.2 hr_db          ora*hr
/oracle/app/oracle/product/9.2 payroll_db     ora*payroll

Listing 7-2 shows the resulting PRM configuration file. Notice that the PRM group records for the hr_db and payroll_db groups are present with their respective configuration settings. The application records are also listed as configured. No disk I/O bandwidth, memory, or user records have been configured.

Listing 7-2. PRM Configuration File
# cat /etc/prmconf
#
# Group/CPU records
#
OTHERS:1:100::
hr_db:PSET:::3:
payroll_db:2:900::
#
# Memory records
#
#
# Application records
#
/oracle/app/oracle/product/9.2.0.1.0/bin/oracle::::hr_db,ora*hr
/oracle/app/oracle/product/9.2.0.1.0/bin/oracle::::payroll_db,ora*payroll
#
# Disk bandwidth records
#
#
# User records
#

At this point the PRM configuration is complete, but none of the PRM managers are enabled. Listing 7-3 shows the process listing for all processes matching the executable names of the database instances. Notice that the second column, PRMID, shows a dash (-) for all of the processes.

The ps command accepts a special option, -P, that displays the PRMID column. Because PRM is not yet enabled, the command displays a warning message that makes it clear PRM is not configured. Several columns have been removed from the ps command's output in this chapter for formatting purposes.

Listing 7-3. Process Listing before Enabling PRM
# ps -efP | grep -e ora.*payroll -e COMMAND
ps: Process Resource Manager is not configured
     UID  PRMID    PID  PPID  TIME COMMAND
  oracle      -   3768     1  0:00 ora_d000_payroll
  oracle      -   3756     1  1:44 ora_ckpt_payroll
  oracle      -   3760     1  0:00 ora_reco_payroll
  oracle      -   3754     1  0:41 ora_lgwr_payroll
  oracle      -   3752     1  0:35 ora_dbw0_payroll
  oracle      -   3758     1  0:55 ora_smon_payroll
  oracle      -   3766     1  0:01 ora_s000_payroll
  oracle      -   3762     1  1:03 ora_cjq0_payroll
  oracle      -   3764     1  4:48 ora_qmn0_payroll
  oracle      -   3750     1  0:24 ora_pmon_payroll

# ps -efP | grep -e ora.*hr -e COMMAND
ps: Process Resource Manager is not configured
     UID  PRMID    PID  PPID  TIME COMMAND
  oracle      -   3718     1  1:32 ora_ckpt_hr
  oracle      -   3730     1  0:00 ora_d000_hr
  oracle      -   3732     1  0:00 ora_d001_hr
  oracle      -   3714     1  0:37 ora_dbw0_hr
  oracle      -   3726     1  4:22 ora_qmn0_hr
  oracle      -   3712     1  0:21 ora_pmon_hr
  oracle      -   3720     1  0:55 ora_smon_hr
  oracle      -   3728     1  0:32 ora_s000_hr
  oracle      -   3716     1  0:40 ora_lgwr_hr
  oracle      -   3722     1  0:00 ora_reco_hr
  oracle      -   3724     1  0:57 ora_cjq0_hr

Loading the PRM Configuration

To load a saved PRM configuration, select Configuration File from the Action menu of the main PRM GUI shown in Figure 7-3 and then select Load, moving processes to assigned groups. This step creates PSETs and moves all processes to their respective PRM groups; it does not, however, enable the PRM resource managers. Listing 7-4 shows the process listing after the processes have been moved. The PRMID column shows that the processes associated with each database instance have been assigned to the appropriate PRM group.
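The same load can also be performed without the GUI using the prmconfig command. The following one-liner is a minimal sketch, assuming the -i option initializes (loads) the configuration from /etc/prmconf as documented in prmconfig(1); consult the manual page for your PRM release:

 # /opt/prm/bin/prmconfig -i   # assumed: -i loads the saved configuration and moves processes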

important

It is important to use the ps command as shown to ensure that the application name and alternate name fields are specified correctly. If the processes do not have the correct PRMID, then the processes have not been properly placed in their groups and none of the PRM resource controls will be effective.


Listing 7-4. Process Listing after Loading the PRM Configuration
# ps -efP | grep -e ora.*hr -e COMMAND
     UID  PRMID   PID   PPID  TIME COMMAND
  oracle  hr_db   3718     1  1:32 ora_ckpt_hr
  oracle  hr_db   3730     1  0:00 ora_d000_hr
  oracle  hr_db   3732     1  0:00 ora_d001_hr
  oracle  hr_db   3714     1  0:37 ora_dbw0_hr
  oracle  hr_db   3726     1  4:23 ora_qmn0_hr
  oracle  hr_db   3712     1  0:21 ora_pmon_hr
  oracle  hr_db   3720     1  0:55 ora_smon_hr
  oracle  hr_db   3728     1  0:32 ora_s000_hr
  oracle  hr_db   3716     1  0:40 ora_lgwr_hr
  oracle  hr_db   3722     1  0:00 ora_reco_hr
  oracle  hr_db   3724     1  0:58 ora_cjq0_hr

# ps -efP | grep -e ora.*payroll -e COMMAND
     UID  PRMID        PID   PPID  TIME COMMAND
  oracle  payroll_db   3768     1  0:00 ora_d000_payroll
  oracle  payroll_db   3756     1  1:44 ora_ckpt_payroll
  oracle  payroll_db   3760     1  0:00 ora_reco_payroll
  oracle  payroll_db   3754     1  0:41 ora_lgwr_payroll
  oracle  payroll_db   3752     1  0:35 ora_dbw0_payroll
  oracle  payroll_db   3758     1  0:55 ora_smon_payroll
  oracle  payroll_db   3766     1  0:01 ora_s000_payroll
  oracle  payroll_db   3762     1  1:03 ora_cjq0_payroll
  oracle  payroll_db   3764     1  4:49 ora_qmn0_payroll
  oracle  payroll_db   3750     1  0:24 ora_pmon_payroll

GlancePlus is used throughout this chapter to monitor the applications and PRM groups. The GlancePlus parameter file located in /var/opt/perf/parm can be modified to include application entries for each of the instances of the database as shown in Listing 7-5.

Listing 7-5. GlancePlus Parameter File Defining Database Applications
# cat /var/opt/perf/parm
[...]
application = payroll_db
file = ora*payroll

application = hr_db
file = ora*hr

application = other_user_root
user = root

After starting GlancePlus, select Application List from the Reports menu. Figure 7-7 shows the resulting report. Notice that both the hr_db and payroll_db are consuming roughly the same amount of CPU resources because the PRM resource managers are not yet enabled.

Figure 7-7. GlancePlus Application List before Enabling PRM


In addition to the textual reports, GlancePlus offers CPU graphs. Select Application CPU Graphs from the Reports menu of the Application Lists window to see graphical CPU consumption. Figure 7-8 shows a graph of CPU use for each of the database instances. Again, each of the database instances is consuming roughly the same amount of CPU resources.

Figure 7-8. GlancePlus Application CPU Graphs


Enabling PRM Processor Set CPU Manager

You will need to enable the PRM CPU manager to enforce the resource entitlements. Using the PRM configuration editor, select the CPU manager from the lower-right-hand corner, as shown in Figure 7-9. Then select Resource Managers from the Action menu. Finally, select Enable Resource Manager. Follow the same steps to enable the application manager, shown in the lower-right-hand corner as APPL. After these two steps are complete, PRM is fully enabled for this PRM configuration. Existing processes are controlled by the PRM CPU manager and new processes will be moved to their appropriate PRM groups.
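For command-line administration, the same two managers can likely be enabled with prmconfig. This sketch assumes the -e option accepts a comma-separated list of managers, as documented in prmconfig(1); verify against your PRM release:

 # /opt/prm/bin/prmconfig -e CPU,APPL   # assumed: -e enables the named resource managers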

Figure 7-9. Enabling PRM CPU Manager


Viewing PRM Processor Set Group Resource Utilization

After PRM has been enabled, the Application List in GlancePlus shown in Figure 7-10 demonstrates that PRM is properly restricting CPU consumption based on the configured entitlements. The hr_db database instance is now receiving approximately 75% of the system's CPU resources and the payroll_db database instance is receiving approximately 25%. The payroll_db, other_user_root, and other applications listed in Figure 7-10 are all in the default PSET, sharing a single processor. The hr_db application has exclusive access to the second PSET, which has three processors.

Figure 7-10. GlancePlus Application List after Enabling PRM CPU Manager


Another tool that can be used to monitor PRM groups is the prmmonitor command. This command provides textual output of the resource consumption of the PRM groups. Listing 7-6 shows the output of the command. The -s command-line option includes the default PRM groups, PRM_SYS and OTHERS, in the output. The 5 argument at the end of the command is the sample period in seconds. At every sample period, a new section of output is displayed showing the current resource consumption and the status of the PRM managers.

Listing 7-6. PRM Monitor Command Output
# /opt/prm/bin/prmmonitor -s 5
PRM configured from file:  /etc/prmconf
File last modified:        Wed Oct  6 20:06:32 2004

HP-UX zoo21 B.11.11 U 9000/800    10/09/04

Sat Oct  9 12:33:12 2004    Sample:  5 seconds
CPU scheduler state:  Enabled

                                                CPU      CPU
PRM Group                       PRMID   Entitlement     Used
____________________________________________________________
(PRM_SYS)                           0                  0.40%
OTHERS                              1         2.50%    0.00%
payroll_db                          2        22.50%   24.60%
hr_db                              65        75.00%   70.70%

PRM application manager state:  Enabled  (polling interval: 30 seconds)

Sat Oct  9 12:33:17 2004    Sample:  5 seconds
CPU scheduler state:  Enabled

                                                CPU      CPU
PRM Group                       PRMID   Entitlement     Used
____________________________________________________________
(PRM_SYS)                           0                  1.35%
OTHERS                              1         2.50%    0.00%
payroll_db                          2        22.50%   23.65%
hr_db                              65        75.00%   73.15%

PRM application manager state:  Enabled  (polling interval: 30 seconds)

The configuration of PRM is complete for the zoo21 vPar. Each instance of the database has been allocated a subset of the operating system's resources. Using various monitoring tools, the configuration has been verified to be working as expected. Now the PRM groups must be configured for the zoo19 vPar.

Configuring PRM Fair Share Scheduler Groups

The second portion of this example scenario demonstrates the configuration of fair share scheduler PRM groups in the zoo19 virtual partition. Two instances of the Oracle database are running in the zoo19 vPar, the finance_db and the sales_db. As shown in Table 7-2, the finance_db instance will be allocated 60% of the system's CPU resources and the sales_db will be allocated the remaining 40%. In addition, memory groups will be created to ensure that each database has adequate memory.

PRM fair share scheduler groups are configured in much the same way as PSET PRM groups using the PRM configuration editor, /opt/prm/bin/xprm. When executed, an initial screen similar to the one in Figure 7-3 is displayed. From the initial screen, select the default configuration file /etc/prmconf and then select Edit from the Action menu. A PRM configuration editor similar to the one in Figure 7-4 is displayed.

Using the PRM configuration editor, specify the name of the group in the Group field and input the number of shares in the Shares field. These two steps are performed for both the finance_db and the sales_db. Assign the finance_db 60 shares and the sales_db 40 shares. Modify the OTHERS group so it is allocated 1 share. The PRM software automatically allocates shares to the PRM_SYS group by default, even though it is not shown in the list. The complete PRM Group configuration is shown in Figure 7-11.
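The group/CPU records produced by these steps appear later in Listing 7-7. Because FSS entitlements are proportional to shares, the 60, 40, and 1 share assignments yield approximately the 60/40 split called for in Table 7-2, with roughly 1% left for OTHERS:

 OTHERS:1:1::
 finance_db:2:60::
 sales_db:3:40::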

Figure 7-11. PRM Configuration Editor with FSS Groups Defined


Configuration of the PRM FSS CPU groups is complete. The next step is to configure the application records. The application records are configured identically, regardless of whether PRM FSS or PSET groups are used. Figure 7-12 shows the final application records for the finance_db and sales_db groups.

Figure 7-12. PRM Configuration Editor with Application Groups Defined


Unlike the configuration for the PSET groups in the zoo21 vPar, memory groups will be configured for the PRM groups in zoo19. Configuration and activation of the memory groups ensures that each application is allocated sufficient memory during peak utilization. The Memory tab of the PRM configuration editor allows memory groups to be defined.

Figure 7-13 shows the initial screen for specifying memory resource groups. The first step in configuring memory resource groups is to select the Add all missing button near the bottom of the screen.

Figure 7-13. PRM Configuration Editor for Memory Resource Groups


important

When using memory resource groups, you must define a memory group for every PRM CPU group. Using the Add all missing feature ensures that a memory group exists for every PRM group.


Figure 7-14 shows the memory groups as added when the Add all missing feature is used. Notice that each group is allocated 1 share, meaning equal allocation between the groups. Since equal allocation is generally not what you intend, select each of the finance_db and sales_db records. When a record is selected, the fields on the right side of the screen are populated and can be edited. Modify the finance_db so it is allocated 60 shares and has a cap of 80 percent. Then modify the sales_db so it is allocated 40 shares with a cap of 60 percent. The memory caps specify the upper bound of memory consumption for each of the PRM groups. When applications reach their memory cap, they will be suppressed.

Figure 7-14. PRM Configuration Editor after Adding Missing Groups


The memory shares and caps are used in this example to illustrate their configuration. Choosing the proper values for the memory allocation and caps requires an analysis of the workload and an understanding of its memory requirements.

Figure 7-15 shows the final configuration of the PRM memory groups. Notice that the finance_db application has been assigned 60 shares with a cap of 80%. The sales_db has been allocated 40 shares with a cap of 60%.

Figure 7-15. PRM Configuration Editor after Configuring Memory Entitlements


note

The sum of the Estimated MB fields in Figure 7-15 is much less than the physical memory available on the system. In this example, the zoo19 vPar is allocated 2 GB of memory, but the sum of the Estimated MB fields is roughly 400 MB. The disparity exists because PRM removes kernel memory, locked memory, and shared memory from the estimated values.


The configuration of the PRM groups for zoo19 is complete. Select the OK button to return to the main PRM configuration screen. To save the configuration, select Configuration File from the Action menu and select Save.

The resulting PRM configuration file is shown in Listing 7-7. Notice that the memory records start with the characters "#!". These lines are not comments; they are actual configuration settings.

Listing 7-7. PRM Configuration File
# cat /etc/prmconf
#
# Group/CPU records
#
OTHERS:1:1::
finance_db:2:60::
sales_db:3:40::
#
# Memory records
#
#!PRM_MEM:1:1::::
#!PRM_MEM:2:60:80:::
#!PRM_MEM:3:40:60:::
#
# Application records
#
/oracle/app/oracle/product/9.2.0.1.0/bin/oracle::::finance_db,ora*finance
/oracle/app/oracle/product/9.2.0.1.0/bin/oracle::::sales_db,ora*sales
#
# Disk bandwidth records
#
#
# User records
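Reading the memory records against the values configured in Figure 7-15, the first three populated fields after the #!PRM_MEM tag evidently hold the group's PRMID, its memory shares, and its memory cap percentage; the remaining fields are unused in this example:

 #!PRM_MEM:2:60:80:::   PRMID 2 (finance_db): 60 shares, 80% cap
 #!PRM_MEM:3:40:60:::   PRMID 3 (sales_db):   40 shares, 60% cap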

At this point the configuration file has been saved but the applications have not been moved to their application groups, and none of the PRM resource managers are running. Once again, the GlancePlus parameter file can be updated as shown in Listing 7-8 to track the primary workloads running within zoo19.

Listing 7-8. GlancePlus Parameter File Defining Database Applications
# cat /var/opt/perf/parm
[...]
application = sales_db
file = ora*sales

application = finance_db
file = ora*finance

application = other_user_root
user = root

Figure 7-16 shows that the two database instances are consuming almost identical amounts of CPU and memory resources.

Figure 7-16. GlancePlus Application List before Enabling PRM


Figure 7-17 shows the Application CPU Graphs in GlancePlus. This view shows that although the workloads are varying in CPU consumption, the average of the two is close to equal.

Figure 7-17. GlancePlus Application CPU Graphs before Enabling PRM


Loading the PRM Configuration

The next step in the configuration process is to load the configuration. From the main PRM configuration screen, select Configuration File from the Action menu and then select Load, moving processes to assigned groups. This step moves all processes to their assigned groups but does not enable the CPU, application, or memory managers.

After loading the configuration, use the ps command to verify that the processes have been assigned to the proper groups. The PRMID column in Listing 7-9 shows that each of the processes associated with the workloads has been assigned to the proper PRM group.

Listing 7-9. Process Listing after Loading the PRM Configuration
# ps -efP | grep -e ora.*sales -e COMMAND
     UID  PRMID     PID   PPID  TIME COMMAND
  oracle  sales_db  28014 28012 1:46 oraclesales
  oracle  sales_db  28015 28012 1:46 oraclesales
  oracle  sales_db  28016 28012 1:46 oraclesales
  oracle  sales_db  27049     1 0:00 ora_d001_sales
  oracle  sales_db  28013 28012 1:46 oraclesales
  oracle  sales_db  27037     1 0:01 ora_smon_sales
  oracle  sales_db  27045     1 0:00 ora_s000_sales
  oracle  sales_db  27029     1 0:00 ora_pmon_sales
  oracle  sales_db  27031     1 0:03 ora_dbw0_sales
  oracle  sales_db  27033     1 0:05 ora_lgwr_sales
  oracle  sales_db  27043     1 0:04 ora_qmn0_sales
  oracle  sales_db  27035     1 0:01 ora_ckpt_sales
  oracle  sales_db  27039     1 0:00 ora_reco_sales
  oracle  sales_db  27041     1 0:01 ora_cjq0_sales
  oracle  sales_db  27047     1 0:00 ora_d000_sales

# ps -efP | grep -e ora.*finance -e COMMAND
     UID  PRMID       PID   PPID  TIME COMMAND
  oracle  finance_db  28056 28054 1:34 oraclefinance
  oracle  finance_db  26897     1 0:03 ora_dbw0_finance
  oracle  finance_db  26903     1 0:01 ora_smon_finance
  oracle  finance_db  28058 28054 1:34 oraclefinance
  oracle  finance_db  28057 28054 1:34 oraclefinance
  oracle  finance_db  28055 28054 1:33 oraclefinance
  oracle  finance_db  26915     1 0:00 ora_d001_finance
  oracle  finance_db  26913     1 0:00 ora_d000_finance
  oracle  finance_db  26895     1 0:00 ora_pmon_finance
  oracle  finance_db  26899     1 0:05 ora_lgwr_finance
  oracle  finance_db  26907     1 0:01 ora_cjq0_finance
  oracle  finance_db  26905     1 0:00 ora_reco_finance
  oracle  finance_db  26911     1 0:00 ora_s000_finance
  oracle  finance_db  26901     1 0:02 ora_ckpt_finance
  oracle  finance_db  26909     1 0:04 ora_qmn0_finance

Enabling PRM Fair Share Schedule Groups

The final step in activating the PRM configuration is to enable the PRM managers. The following actions must be taken from the main PRM configuration screen shown in Figure 7-9; a command-line sketch of the same steps follows the list.

1. Enable the CPU manager by selecting the CPU line in the lower-right-hand corner of the screen. Then select Resource Managers from the Action menu and select Enable Resource Manager.

2. Enable CPU capping by selecting Resource Managers from the Action menu and then selecting Enable CPU Capping.

3. Enable the memory manager by selecting the MEM line in the lower-right-hand corner of the screen. Then select Resource Managers from the Action menu and select Enable Resource Manager.

4. Enable the application manager by selecting the APPL line in the lower-right-hand corner of the screen. Then select Resource Managers from the Action menu and select Enable Resource Manager.
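As noted before the list, the following is a minimal command-line sketch of the same four steps. It assumes prmconfig's -e option accepts a comma-separated list of managers and that -M CPUCAPON enables CPU capping, as documented in prmconfig(1); verify against your PRM release:

 # /opt/prm/bin/prmconfig -e CPU,MEM,APPL   # assumed: enables the CPU, memory, and application managers
 # /opt/prm/bin/prmconfig -M CPUCAPON       # assumed: turns CPU capping on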

When these steps have been performed, PRM actively monitors the system and controls resource utilization. Figure 7-18 shows the final PRM configuration screen with all of the configured PRM resource managers enabled.

Figure 7-18. Final PRM Configuration and Status after Enabling PRM


Monitoring PRM Fair Share Scheduler Groups

The PRM groups have been configured and enabled, as is evident in Figure 7-19, which shows the GlancePlus Application List. Notice that the finance_db is now receiving over 60% of the system's resources while the sales_db is receiving roughly 34% of the system's resources.

Figure 7-19. GlancePlus Application List after Enabling PRM


In addition to the GlancePlus Application List, views are available within GlancePlus specifically for monitoring PRM groups. Selecting PRM Group List from the Reports menu of the main GlancePlus screen provides a view similar to the Application List but does not require modifications to the GlancePlus parameter file. The PRM Group List in GlancePlus is shown in Figure 7-20.

Figure 7-20. GlancePlus PRM Group List


Using the PRM-specific reports and graphs has the added benefit of showing the default PRM groups PRM_SYS and OTHERS. The PRM CPU Graphs view shown in Figure 7-21 confirms the expected amounts of CPU consumption for each of the PRM groups.

Figure 7-21. GlancePlus PRM CPU Graphs


Viewing the Effects of CPU Capping

The PRM configuration on zoo19 has CPU capping enabled. In some situations this is desirable because it guarantees that an application receives no more than its share of the system. Without capping, users of a workload may come to expect the increased performance available when the workload is using more than its share of the system and may become dissatisfied when the system is busy and the workload receives only its allocated share.

However, in other cases, CPU capping may not be desirable. Figure 7-22 shows the two database workloads. Currently the finance_db is idle, but the sales_db is consuming about 36% of the system. In this case, the sales_db is consuming essentially all of its allocated CPU resources and could use more if they were available.

Figure 7-22. GlancePlus Application List with PRM CPU Capping Enabled


For illustration purposes, CPU capping will be disabled in the zoo19 PRM configuration. In the PRM configuration editor, select the CPU manager in the lower-right-hand corner of the screen. Then select Resource Managers from the Action menu. Finally, select Disable CPU Capping. Figure 7-23 shows the resulting resource consumption. The sales_db is now consuming 88% of the system. This is allowed because CPU capping has been disabled and the finance_db application is not using its allotment of CPU resources. When the finance_db application becomes busy, the CPU allocations will go back to their configured entitlements.
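If the earlier assumption about prmconfig's -M option holds for your PRM release, capping can likewise be disabled without the GUI:

 # /opt/prm/bin/prmconfig -M CPUCAPOFF   # assumed: turns CPU capping off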

Figure 7-23. GlancePlus Application List with PRM CPU Capping Disabled


It should be clear from this illustration that CPU capping may result in decreased system utilization when idle resources are not being shared.

Disk I/O Bandwidth Management

PRM also offers a disk I/O bandwidth resource manager. It monitors the I/O requests in the kernel for LVM volume groups and VxVM disk groups. The I/O requests are reprioritized in the kernel to ensure that disk bandwidth entitlements are met. The configuration of the disk manager is not covered in this book, but its usage and configuration are very similar to those of the other PRM managers.


