14.3 Application tuning

Per the methodology discussed earlier, application tuning starts here. If the performance engineer has no knowledge of what the application does (either because he or she is new to the organization or is a consultant) or has no insight into the various tiers of the application, the first and foremost requirement is to gain a basic understanding of the application. Obtain an overview of the application and its user behavior before tuning any quadrant or any piece of the system.

Once a basic understanding of the application is obtained, the next step is to select a tool for load testing the application. Several off-the-shelf products are available, for example LoadRunner, e-Load, etc.[2] Before starting the tuning process, it is important to capture a performance baseline with the load-testing tool that will be used throughout the testing process for comparison. As testing progresses and various areas of the application are changed in one form or another, it is helpful to compare future test results against this baseline to ensure that progress is being made in the right direction.

Every time improved performance is achieved due to changes in the application, changes in the system configuration, etc., the new test result becomes the new baseline. This also helps maintain a progress report as tuning progresses through the various iterations and through the other quadrants.

The application interacts with the database through its persistence layer. Let us start the analysis from this layer. The buck starts here!

One of the common problems identified during application tuning is that the application connects to and disconnects from the database for every database interaction. This is a common problem with stateless middleware in application servers. This mistake more than doubles the performance cost of each interaction and is completely unscalable. A possible solution is a three-tier architecture in which users or clients connect to the middle tier, which maintains permanent connections to the database; in this way, different users can share a single permanent connection to the database.

Before tuning the database, there should be sufficient evidence that the database is performing slowly, along with an indication of which areas of the database are performing slower than expected. To gather the data for this analysis, certain parameters have to be enabled in the init<SID>.ora file.

TIMED_STATISTICS

This is a very important parameter; turning it off disables most of the statistics gathering that is useful for a scientific analysis of the performance of the system. Check whether the parameter is enabled in the parameter file and, if it is not, turn it on. Enabling certain parameters may also automatically enable others: for example, setting STATISTICS_LEVEL to TYPICAL automatically sets TIMED_STATISTICS to TRUE.

It is a myth that enabling the TIMED_STATISTICS parameter causes significant performance degradation. The overhead of this parameter is so insignificant compared to the benefits it provides that there should be no hesitation in enabling it. As discussed earlier, tuning is not an art, it is a science, and scientific analysis requires evidence in the form of data or statistics. To gather those statistics, this parameter should be enabled.
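
TIMED_STATISTICS can be checked and changed dynamically from SQL*Plus without restarting the instance; a minimal sketch:

SHOW PARAMETER TIMED_STATISTICS

-- enable instance-wide
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;

-- or enable for the current session only
ALTER SESSION SET TIMED_STATISTICS = TRUE;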

TIMED_OS_STATISTICS

The TIMED_STATISTICS parameter will only enable gathering of statistics at the database and instance level. Starting with Version 9i, Oracle has introduced a new parameter called TIMED_OS_STATISTICS. This parameter will enable collection of operating system level statistics.
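
The value of TIMED_OS_STATISTICS is the interval, in seconds, at which the operating system statistics are gathered; 0 (the default) disables collection. A minimal sketch, assuming a 60-second interval is acceptable on the test system:

-- gather operating system statistics every 60 seconds
ALTER SYSTEM SET TIMED_OS_STATISTICS = 60;

-- turn operating system statistics collection back off
ALTER SYSTEM SET TIMED_OS_STATISTICS = 0;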

STATISTICS_LEVEL

This parameter sets the amount of statistics that will be collected. It is a new parameter in Oracle 9i and can be set to one of three values: BASIC, TYPICAL, or ALL. TYPICAL is the default; when the value is set to TYPICAL or ALL, it automatically enables the TIMED_STATISTICS parameter.

Setting the STATISTICS_LEVEL to ALL will enable the collection of O/S-level statistics. This should be done only under controlled conditions because it could degrade performance to a great extent.
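
Both settings can be changed dynamically. One approach is to keep the instance at TYPICAL and restrict the more expensive ALL setting to the single session under test, for example:

ALTER SYSTEM SET STATISTICS_LEVEL = TYPICAL;

-- only in the session being traced or tested
ALTER SESSION SET STATISTICS_LEVEL = ALL;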

V$STATISTICS_LEVEL

The V$STATISTICS_LEVEL view provides the status of the statistics or advisories controlled by the STATISTICS_LEVEL initialization parameter. Most of the advice views provide statistical advice when STATISTICS_LEVEL is set to TYPICAL or ALL.

COL NAME FORMAT A30
COL VIEW_NAME FORMAT A25
SELECT STATISTICS_NAME      NAME,
       SESSION_STATUS       SES_STATUS,
       SYSTEM_STATUS        SYS_STATUS,
       ACTIVATION_LEVEL     ALEVEL,
       STATISTICS_VIEW_NAME VIEW_NAME,
       SESSION_SETTABLE     SESTABLE
FROM   V$STATISTICS_LEVEL
/

NAME                       SES_STATUS SYS_STATUS ALEVEL   VIEW_NAME              SESTABLE
-------------------------- ---------- ---------- -------- ---------------------- --------
Buffer Cache Advice        ENABLED    ENABLED    TYPICAL  V$DB_CACHE_ADVICE      NO
MTTR Advice                ENABLED    ENABLED    TYPICAL  V$MTTR_TARGET_ADVICE   NO
Timed Statistics           ENABLED    ENABLED    TYPICAL                         YES
Timed OS Statistics        DISABLED   DISABLED   ALL                             YES
Segment Level Statistics   ENABLED    ENABLED    TYPICAL  V$SEGSTAT              NO
PGA Advice                 ENABLED    ENABLED    TYPICAL  V$PGA_TARGET_ADVICE    NO
Plan Execution Statistics  DISABLED   DISABLED   ALL      V$SQL_PLAN_STATISTICS  YES
Shared Pool Advice         ENABLED    ENABLED    TYPICAL  V$SHARED_POOL_ADVICE   NO

8 rows selected.

The query that was used as an example in Chapter 13 to discuss how the tools behave is probably a good starting point here:

SELECT UP.USPRL_ID,
       UP.USPRL_FIRST_NAME,
       UP.USPRL_LAST_NAME,
       UP.USPRL_CITY,
       UP.USPRL_STATE_ID,
       UP.USPRL_LOGIN_NAME,
       UP.USPRL_EMAIL,
       UP.USPRL_PHONE,
       UP.USPRL_ROLE_CD,
       CMP.COMP_NAME,
       CMP.COMP_SCAC_CODE,
       US.USEC_TOTAL_LOGINS,
       UL.USRLI_ID
FROM   USER_PROFILE UP,
       COMPANY CMP,
       USER_LOGIN UL,
       USER_SECURITY US
WHERE  UL.USRLI_ACTIVE_STATUS_CD   = 'Active'
AND    UL.USRLI_LOGGED_IN_USPRL_ID = UP.USPRL_ID
AND    UP.USPRL_COMP_ID            = CMP.COMP_ID
AND    UP.USPRL_ID                 = US.USEC_USPRL_ID
ORDER BY CMP.COMP_TYPE_CD,
         CMP.COMP_NAME,
         UP.USPRL_LAST_NAME

Execution Plan
----------------------------------------------------------------------
0       SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=19 Bytes=3078)
1    0    SORT (ORDER BY) (Cost=11 Card=19 Bytes=3078)
2    1      TABLE ACCESS (BY GLOBAL INDEX ROWID) OF 'USER_
              (Cost=2 Card=1 Bytes=8)
3    2        NESTED LOOPS (Cost=8 Card=19 Bytes=3078)
4    3          NESTED LOOPS (Cost=7 Card=19 Bytes=2926)
5    4            NESTED LOOPS (Cost=6 Card=19 Bytes=2204)
6    5              VIEW OF 'index$_join$_003' (Cost=4 Card=19 Bytes=342)
7    6                HASH JOIN
8    7                  INDEX (FAST FULL SCAN) OF 'PK_USRLI' (UNIQUE)
                          (Cost=3 Card=19 Bytes=342)
9    7                  INDEX (FAST FULL SCAN) OF 'USRLI_INDX1' (NON-UNIQUE)
                          (Cost=3 Card=19 Bytes=342)
10   5              TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'USER_PROFILE'
                      (Cost=2 Card=1 Bytes=98)
11   10               INDEX (UNIQUE SCAN) OF 'PK_USPRL' (UNIQUE)
12   4            TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'COMPANY'
                    (Cost= Card=1425 Bytes=54150)
13   12             INDEX (UNIQUE SCAN) OF 'PK_COMP' (UNIQUE)
14   3          INDEX (RANGE SCAN) OF 'USEC_INDX1' (NON-UNIQUE)

Statistics
----------------------------------------------------------------------
          0  recursive calls
        394  db block gets
     433218  consistent gets
       8366  physical reads
          0  redo size
      27787  bytes sent via SQL*Net to client
       1007  bytes received via SQL*Net from client
         34  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        487  rows processed

The above output, the execution plan and the execution statistics, was generated for the query listed above using AUTOTRACE. Looking at the statistics, the first question that probably arises is: why did it take 433,218 consistent gets and 8366 physical reads to return 487 rows to the user? This tells us that Oracle had to work through 433,218 buffer gets, filtering out what was not needed, to finally arrive at the 487 rows that make up the result set. During this process, Oracle also had to perform 8366 reads from disk.

14.3.1 What are consistent gets?

Consistent gets indicate data found in memory: Oracle scanned through a large number of blocks in the buffer cache, filtering out the rows that were not needed. Because the rows are retrieved from the memory buffer rather than from physical media such as disk, there is no physical operation involved; the retrieval is considered a logical operation, and is therefore also called a logical I/O (LIO). Note that when data is retrieved from disk, the blocks are first loaded into the buffer cache and read from there, so a physical read also adds to the LIO count.

The following statistics all indicate LIO operations (a query to view these counters for a session is sketched after the list):

  • Buffer gets

  • Consistent gets

  • DB block gets

  • Buffer pinned count
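
As an illustration, these counters can be read for a given session from V$SESSTAT and V$STATNAME, in the same way as the hard parse query later in this chapter. The statistic names used here are the standard 9i names, and &SID is a SQL*Plus substitution variable standing in for the session of interest:

SELECT SN.NAME, SS.VALUE
FROM   V$SESSTAT  SS,
       V$STATNAME SN
WHERE  SS.STATISTIC# = SN.STATISTIC#
AND    SS.SID        = &SID
AND    SN.NAME IN ('consistent gets', 'db block gets', 'session logical reads');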

Retrieving data from memory has always been less expensive than retrieving it from disk, because retrieving from disk involves physical disk seeks and read time. LIO operations in Oracle are therefore cheaper than physical I/O (PIO) operations; however, LIOs are not cheap enough to be ignored.

Every time an Oracle process needs to locate a row in the buffer cache, it must acquire a latch. A latch is a low-level lock that protects a shared memory structure. This means that whenever a row is retrieved and validated there is a latch acquisition every single time, and this consumes a considerable amount of CPU. Getting a latch is not a straightforward task either: when an Oracle process needs a latch, the latch may already be held by another process, which means the requester has to spin and wait.
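
Latch activity of this kind is visible in V$LATCH; a simple sketch that looks at the buffer cache latches (the latch names shown are the ones commonly documented for this purpose) is:

SELECT NAME, GETS, MISSES, SLEEPS
FROM   V$LATCH
WHERE  NAME IN ('cache buffers chains', 'cache buffers lru chain');

A high ratio of MISSES and SLEEPS to GETS on these latches is usually a sign of excessive logical I/O.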

The other value from the query statistics above that catches one's attention is the physical reads, or PIOs. As discussed earlier, PIOs are even more expensive. PIOs can be reduced by examining the query and the underlying tables to ensure that the appropriate indexes exist and that the optimizer is choosing the correct path when generating the execution plan. Event 10053, discussed in Chapter 13, can be a helpful method to get this accomplished.

The execution plan and statistics generated above using the AUTOTRACE facility could also be obtained by setting the SQL_TRACE = TRUE parameter or by using the 10046 event, as sketched below.
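
For example, the three approaches can be invoked from SQL*Plus roughly as follows (the trace files produced by the latter two are written to the USER_DUMP_DEST location and are normally formatted with TKPROF):

-- execution plan and statistics without displaying the result rows
SET AUTOTRACE TRACEONLY

-- basic SQL trace for the current session
ALTER SESSION SET SQL_TRACE = TRUE;

-- extended trace including wait events and bind values
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';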

In this situation the query needs to be tuned or rewritten to reduce the LIO. Query tuning may involve rewriting the query, adding a missing index to the table, or adding a hint to the query to use a different execution path.

14.3.2 Hard parses

Hard parses are another area to look into. If Oracle is performing a large number of hard parses, this can significantly affect the overall performance of the system. Hard parses are mainly caused by queries not being shared or, more precisely, not being reused. When a query is executed for the first time, it is parsed and loaded into the shared pool (the library cache). Subsequently, when the query is executed again, Oracle should find it in the shared pool and be able to reuse it. If it is not found, the query has to be parsed again. This can happen because the query is not identical to the previously executed query, because the queries are inefficiently written, or because the query is used infrequently and has been aged out of the shared pool to provide space for more frequently used queries.

The number of hard parses in the system can be obtained using the following query:

SELECT PA.SID,
       PA.VALUE "Hard Parses",
       EX.VALUE "Execute Count"
FROM   V$SESSTAT PA,
       V$SESSTAT EX
WHERE  PA.SID = EX.SID
AND    PA.STATISTIC# = (SELECT STATISTIC#
                        FROM   V$STATNAME
                        WHERE  NAME = 'parse count (hard)')
AND    EX.STATISTIC# = (SELECT STATISTIC#
                        FROM   V$STATNAME
                        WHERE  NAME = 'execute count')
AND    PA.VALUE > 0
/

 SID  Hard Parses  Execute Count
----  -----------  -------------
  13           28            853
  14            1             14
  15            4           4939
  16           15            360
  17           22          14389
  20            1          13806
  21           33         207273
  22           22          95052
  23           26         172151
  24           58         186645
  25           25          98134
  26           93         178495
  28            1          74224
  30            1          89114
  32            9           2185
  36            2         114054
  41           14          10116
  53            6          65957
  61            6             20
  72            1         143838

20 rows selected.
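
The same counters can also be checked instance-wide from V$SYSSTAT, which gives a quick feel for the overall parse-to-execute ratio; for example:

SELECT NAME, VALUE
FROM   V$SYSSTAT
WHERE  NAME IN ('parse count (total)', 'parse count (hard)', 'execute count');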

Bad or inefficient SQL is the main cause of hard parses. Inefficient queries include those that do not use bind variables. Not using bind variables causes SQL queries to be parsed every single time, because the literals make each query different from the previously executed query that is already saved in the library cache of the instance.

Queries with hard-coded values (literals) are within the control of the developer and should be rewritten to use bind variables. In the case of Java applications, this means using prepared statements.

In the SQL query example above, the literal Active has been hard coded; when the code subsequently requires data with a USRLI_ACTIVE_STATUS_CD of Inactive, it will execute another statement with the literal Inactive hard coded. That means there would be two variations of the same query in the shared pool. Now what if this were transactional data, for example an order number in an order management system? The variations could be many.

WHERE UL.USRLI_ACTIVE_STATUS_CD   = 'Active'
AND   UL.USRLI_LOGGED_IN_USPRL_ID = UP.USPRL_ID
AND   UP.USPRL_COMP_ID            = CMP.COMP_ID
AND   UP.USPRL_ID                 = US.USEC_USPRL_ID

If this query was to use bind variables it would probably be rewritten as

WHERE UL.USRLI_ACTIVE_STATUS_CD   = :STATUS_CD
AND   UL.USRLI_LOGGED_IN_USPRL_ID = UP.USPRL_ID
AND   UP.USPRL_COMP_ID            = CMP.COMP_ID
AND   UP.USPRL_ID                 = US.USEC_USPRL_ID

In the rewritten query, :STATUS_CD is a bind variable. During execution time, this variable could have different values depending on what the application business logic requires.

In a Java application, bind variables are denoted by the use of prepared statements. If the query were to use prepared statements, it would probably be rewritten as

package rac.chapter14;

//Java Imports
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

//Oracle Imports
import oracle.jdbc.OracleConnection;

public class BetterPerformance {
...
...
    // conn, pStmt, and rs are assumed to be class members declared in the
    // elided code above (an open Connection, a PreparedStatement, and a ResultSet)
    void runQuery() throws SQLException {
        String query = "SELECT UP.USPRL_ID, " +
                       "       UP.USPRL_FIRST_NAME, " +
                       "       UP.USPRL_LAST_NAME, " +
                       "       UP.USPRL_CITY, " +
                       "       UP.USPRL_STATE_ID, " +
                       "       UP.USPRL_LOGIN_NAME, " +
                       "       UP.USPRL_EMAIL, " +
                       "       UP.USPRL_PHONE, " +
                       "       UP.USPRL_ROLE_CD, " +
                       "       CMP.COMP_NAME, " +
                       "       CMP.COMP_SCAC_CODE, " +
                       "       US.USEC_TOTAL_LOGINS, " +
                       "       UL.USRLI_ID " +
                       "FROM   USER_PROFILE UP, " +
                       "       COMPANY CMP, " +
                       "       USER_LOGIN UL, " +
                       "       USER_SECURITY US " +
                       "WHERE  UL.USRLI_ACTIVE_STATUS_CD = ? " +
                       "AND    UL.USRLI_LOGGED_IN_USPRL_ID = UP.USPRL_ID " +
                       "AND    UP.USPRL_COMP_ID = CMP.COMP_ID " +
                       "AND    UP.USPRL_ID = US.USEC_USPRL_ID " +
                       "ORDER BY CMP.COMP_TYPE_CD, " +
                       "         CMP.COMP_NAME, " +
                       "         UP.USPRL_LAST_NAME";

        // the literal is bound as a parameter; the statement text is identical
        // for every status value and can be reused from the shared pool
        pStmt = conn.prepareStatement(query);
        pStmt.setString(1, "Active");
        rs = pStmt.executeQuery();
    }
...
...

In both situations, using bind variables and using prepared statements, the query text itself remains the same, and most of the time a user executes this query it can be reused from the shared pool.

While using bind variables or prepared statements is the efficient practice, Oracle also provides parameters that can help improve the efficiency and reuse of SQL queries.

CURSOR_SHARING

To help with bad SQL and to improve the execution and reusability of queries that use literals, Oracle introduced the CURSOR_SHARING parameter in Oracle 8i. When this parameter is set appropriately, Oracle generates bind variables for the literals that it encounters, which means such queries become shareable (see the example below).
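
The parameter can be changed dynamically. FORCE substitutes bind variables for all literals; SIMILAR, available in Oracle 9i, substitutes them unless doing so would change the meaning or the optimization of the statement. A minimal sketch:

ALTER SYSTEM SET CURSOR_SHARING = FORCE;

-- or restrict the change to one session while testing
ALTER SESSION SET CURSOR_SHARING = SIMILAR;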

SESSION_CACHED_CURSORS

This parameter specifies the number of session cursors to cache. After a SQL statement has been parsed repeatedly, its cursor is moved into the session cursor cache. Subsequently, when the same SQL statement is executed, the parse call finds the cursor in the cache and uses the already open cursor.

As with the shared pool, Oracle uses a least recently used (LRU) algorithm to remove entries from the session cursor cache.
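
A sketch of setting the parameter for a session and checking how often the cache is being used, assuming the statistic names from the 9i documentation and using the SQL*Plus substitution variable &SID as a placeholder for the session of interest:

ALTER SESSION SET SESSION_CACHED_CURSORS = 50;

SELECT SN.NAME, SS.VALUE
FROM   V$SESSTAT  SS,
       V$STATNAME SN
WHERE  SS.STATISTIC# = SN.STATISTIC#
AND    SS.SID        = &SID
AND    SN.NAME IN ('session cursor cache hits',
                   'session cursor cache count',
                   'parse count (total)');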

V$SQL_PLAN_STATISTICS

This dynamic view, introduced in Oracle 9i, captures the execution statistics for each operation in the execution plan. The statistics are collected for all cached cursors. Data is collected in this view only if STATISTICS_LEVEL has been set to ALL.
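
As an illustration, the per-operation statistics can be joined back to V$SQL_PLAN for a given statement. The column names used here (OPERATION_ID, OUTPUT_ROWS, CR_BUFFER_GETS, DISK_READS) are assumptions based on the 9i definition of the view, and &HASH_VALUE is a placeholder for the hash value of the statement being examined:

SELECT P.ID, P.OPERATION, P.OPTIONS, P.OBJECT_NAME,
       S.OUTPUT_ROWS, S.CR_BUFFER_GETS, S.DISK_READS
FROM   V$SQL_PLAN            P,
       V$SQL_PLAN_STATISTICS S
WHERE  P.ADDRESS      = S.ADDRESS
AND    P.HASH_VALUE   = S.HASH_VALUE
AND    P.CHILD_NUMBER = S.CHILD_NUMBER
AND    P.ID           = S.OPERATION_ID
AND    P.HASH_VALUE   = &HASH_VALUE
ORDER BY P.ID;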

14.3.3 Tuning parallel operators

A quick way to check whether sessions are waiting on parallel query events is to query V$SESSION_WAIT:

SELECT SID,
       DECODE(EVENT, 'PARALLEL QUERY DEQUEUE WAIT', 'PQ DEQ WAIT',
                     'PARALLEL QUERY QREF LATCH',   'PQ REF LATCH',
                     EVENT) EV,
       SEQ#,
       P1, P2, P3,
       WAIT_TIME
FROM   V$SESSION_WAIT
ORDER BY 1;

Below is the output from this query. To keep the output readable, only column P1 of the parameter columns has been displayed. The event 'PX Deq Credit: send blkd' is a parallel query event and indicates that a parallel query is executing.

SID  EV                            SEQ#          P1
---  ---------------------------  -----  ----------
  1  pmon timer                   39681         300
  2  rdbms ipc message            44068         300
  3  rdbms ipc message            46729         300
  4  rdbms ipc message            13379         300
  5  smon timer                    2285         300
  6  rdbms ipc message               78      180000
  7  rdbms ipc message                1        6000
  8  rdbms ipc message                1        6000
  9  rdbms ipc message                1        6000
 10  PX Deq Credit: send blkd      1090   268566527
 11  SQL*Net message from client   8105  1111838976
 12  PX Deq Credit: send blkd       195   268566527
 13  SQL*Net more data to client  11382  1650815232
 14  SQL*Net message from client  11600   675562835
 15  PX Deq Credit: send blkd       187   268566527
 16  rdbms ipc message                1        6000
 17  PX Deq Credit: send blkd       193   268566527

When all of the above methods do not provide sufficient information to help diagnose the problem, the next step is to obtain more detailed information about the session's activity by tracing the session. This can be done by enabling event 10046 at level 12 for the current session:

    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';

The trace file is normally generated in the location defined by the USER_DUMP_DEST parameter. The output of the trace file looks like the following (note that 'PX Deq: Join ACK' and the other waits that start with PX all indicate general parallel execution activity):

WAIT #1: nam='process startup' ela= 2 p1=80 p2=1 p3=0
WAIT #1: nam='process startup' ela= 1 p1=80 p2=2 p3=0
WAIT #1: nam='process startup' ela= 1 p1=80 p2=3 p3=0
WAIT #1: nam='PX Deq: Join ACK' ela= 1 p1=268500992 p2=1 p3=0
WAIT #1: nam='PX Deq: Join ACK' ela= 0 p1=268500993 p2=1 p3=0
WAIT #1: nam='PX Deq: Join ACK' ela= 0 p1=268500995 p2=1 p3=0
WAIT #1: nam='PX Deq Credit: send blkd' ela= 0 p1=268500995 p2=1 p3=0
WAIT #1: nam='PX Deq Credit: send blkd' ela= 0 p1=268500995 p2=2 p3=0
WAIT #1: nam='PX Deq: Parse Reply' ela= 0 p1=200 p2=1 p3=0
WAIT #1: nam='PX Deq: Parse Reply' ela= 0 p1=200 p2=2 p3=0
WAIT #1: nam='PX Deq: Parse Reply' ela= 0 p1=200 p2=1 p3=0
WAIT #1: nam='PX Deq: Parse Reply' ela= 0 p1=200 p2=1 p3=0
EXEC #1:c=2,e=10,p=0,cr=0,cu=3,mis=0,r=0,dep=0,og=3,tim=3704363918
WAIT #1: nam='SQL*Net message to client' ela= 0 p1=1650815232 p2=1 p3=0
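
If the exact trace directory needs to be confirmed, a quick query against V$PARAMETER will show it, for example:

SELECT VALUE
FROM   V$PARAMETER
WHERE  NAME = 'user_dump_dest';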

[2] LoadRunner is a product owned by Mercury Interactive; e-Load is a product owned by Empirix.


