


numbers behind the SQL execution. In addition to an execution plan, SQL*Trace reports resource figures such as CPU and disk usage. This is often considered a lower-level view of how a database query is performing, because it shows factors at both the operating system and RDBMS levels.

To use SQL*Trace, you must first set some parameters in the INIT.ORA parameter file:


  • MAX_DUMP_FILE_SIZE Denotes the maximum size of an Oracle-generated trace file. This value is expressed in operating system blocks (which may differ in size from database blocks).
  • SQL_TRACE Causes a trace file to be written for every user who connects to the database when it is set to TRUE. Because of disk space requirements and database overhead, it should be used judiciously.
  • TIMED_STATISTICS Causes the database to gather timing statistics when this value is set to TRUE. It adds overhead of roughly 4 to 8 percent.
  • USER_DUMP_DEST The directory path where trace files will be written.
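
A minimal sketch of how these entries might look in an INIT.ORA file (the values shown and the dump directory path are illustrative assumptions only, not recommendations):

 MAX_DUMP_FILE_SIZE = 10240                              # maximum trace file size, in OS blocks
 SQL_TRACE          = FALSE                              # usually left FALSE; tracing is enabled per session
 TIMED_STATISTICS   = TRUE                               # gather timing statistics
 USER_DUMP_DEST     = /u01/app/oracle/admin/orcl/udump   # example directory for trace files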

After you have set the INIT.ORA parameters, you can invoke SQL*Trace manually. If the SQL_TRACE parameter is set to TRUE, a trace file is written automatically for every session and no manual step is needed; it is more common, however, to leave it FALSE and turn tracing on only for the session you want to analyze. To invoke SQL*Trace manually, use either SQL or PL/SQL.

Use SQL when there is a specific query to be analyzed. For example:

 % sqlplus
 SQL> ALTER SESSION SET SQL_TRACE = TRUE;
 SQL> @/tmp/enter_your_query.sql
 SQL> ALTER SESSION SET SQL_TRACE = FALSE;
 SQL> EXIT

You can either type in the query at the SQL prompt or source it in from an external file that contains the query.

In many cases, especially through applications such as SQL*Forms, it is necessary to invoke the trace utility by using PL/SQL. This is especially helpful when you are dealing with a third-party application for which the SQL syntax is not readily obvious. To invoke SQL*Trace, use the following PL/SQL statement:

 BEGIN
    DBMS_SESSION.SET_SQL_TRACE (TRUE);
    /* PL/SQL code goes here */
    DBMS_SESSION.SET_SQL_TRACE (FALSE);
 END;

As with SQL*Plus, the trace gathers information until tracing is deactivated or the session disconnects.


After the trace file has been generated, it must be converted into a readable format. Oracle provides the TKPROF utility for this task: it converts the raw trace file into a readable report.

TIP
Locating a trace file in the dump directory can be quite a task, especially if many other files exist. Two tricks speed this process. The first is to use the UNIX command ls -lt to list the files in date order, with the newest file listed first. The other is to include a marker statement such as SELECT 'USERNAME' FROM DUAL in the traced session and then issue grep USERNAME *.trc to find the trace file that contains it.
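
A quick sketch of both tricks at the UNIX prompt (the dump directory shown is only an example; substitute your own USER_DUMP_DEST):

 % cd /u01/app/oracle/admin/orcl/udump
 % ls -lt *.trc | head              # newest trace files first
 % grep -l USERNAME *.trc           # trace files containing the marker query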

After the trace file has been located, it is necessary to run the TKPROF utility against it to produce readable output. This information is statistical and shows how queries perform at the database and operating system levels. The report produced by TKPROF contains CPU usage, disk utilization, and the count of rows returned by the query (or queries) contained in the trace file. You can also have TKPROF return EXPLAIN PLAN information for each query in the trace. TKPROF is invoked as follows:

 % tkprof ora_4952.trc ora_4952.log 

This statement takes the trace output from the ORA_4952.TRC SQL*Trace file and writes its report to the file named ORA_4952.LOG. This particular statement does not generate an EXPLAIN PLAN for any of the queries contained in the trace file. Supplemental options enable you to control, to a certain extent, the information that is produced. They are listed below, and an example invocation that uses several of them follows the list:


  • EXPLAIN Enables you to specify a username and password used to generate an EXPLAIN PLAN for each query TKPROF analyzes.
  • INSERT Specifies where to dump both the SQL statements in the trace file and their data, in the form of INSERT statements.
  • PRINT Designates the number of queries in the trace file to examine; especially useful for trace files that contain many SQL statements.
  • RECORD Enables you to specify an output file that will contain all the statements in the trace file.
  • SORT Enables you to control the order in which the analyzed queries are displayed.
  • SYS Indicates whether to include queries run against the SYS tables (the data dictionary) in the trace output.
  • TABLE Specifies the schema.tablename to use when generating a report with the EXPLAIN option.
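
For example, here is a sketch of an invocation that combines several of these options (the SCOTT/TIGER demonstration account and the file names are examples only):

 % tkprof ora_4952.trc ora_4952.prf explain=scott/tiger sys=no sort=fchela print=10

This version writes the report to ORA_4952.PRF, generates an EXPLAIN PLAN for each statement using the SCOTT schema, omits recursive statements run as SYS, sorts the statements by elapsed fetch time, and prints only the first ten of them.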


When you run the trace file through TKPROF, it generates a report. For example:

 ************************************************************
 select o.ordid, p.partid, o.qty, p.cost, (o.qty * p.cost)
 from part p, order o
 where o.partid = p.partid

 call      count   cpu    elapsed  disk  query  current  rows
 -------   -----  ----   --------  ----  -----  -------  ----
 Parse         1  0.02       0.02     0      0        0     0
 Execute       1  0.00       0.00     0      0        0     0
 Fetch         4  0.03       0.03     1     20       10    50
 -------   -----  ----   --------  ----  -----  -------  ----
 total         6  0.05       0.05     1     20       10    50

 Misses in library cache during parse: 0
 Misses in library cache during execute: 1
 Optimizer hint: CHOOSE
 Parsing user id: 22  (MERLIN)
 ************************************************************

As with interpreting the utlbstat/utlestat report and the EXPLAIN PLAN, interpreting the results produced by TKPROF and SQL*Trace is more art than science. The following guidelines are helpful (a worked application of them to the report above follows the list):

  • The totals for the query and current columns represent the logical I/O (in blocks) accumulated by the query. Compare them against the total of the rows column. Except in special cases, the query should use relatively few logical I/O blocks compared to the number of rows returned; the optimal ratio is approximately 1:2.
  • In most cases, the execute value should be higher than the parse value. If the number of parses is the same as the number of executions, you should consider increasing the size of the shared pool.
  • Compare the amount of physical I/O (total of the disk column) to the amount of logical I/O (sums of the totals of the query and current columns). The logical I/O should be much higher than the physical I/O.
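
Applying these guidelines to the sample report above: the query and current totals add up to 20 + 10 = 30 logical block gets for 50 returned rows, which satisfies the first guideline; the statement was parsed once and executed once, which is normal for a single ad hoc query but would merit attention if the same statement were parsed on every execution; and the disk total of 1 physical read against 30 logical reads shows that nearly every block was already in the buffer cache.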

Dynamic Performance (V$) Tables

Another useful tool for database tuning is the dynamic performance tables, also called the V$ tables (which are really views, despite the name). The V$ views are views on the Oracle X$ tables, memory structures in the SGA that the database creates at startup. These tables, and the views on them, are updated in real time as the database runs and give the DBA a good picture of the current state of the database. Several third-party applications use the V$ tables to access statistical or performance-monitoring data, and the views are used by the monitor component of the Oracle Server*Manager. After database creation, the V$ tables can be accessed only by the user SYS (the privileged Oracle account that owns them), who can grant access on them to other users.
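
As a brief sketch, a DBA connected as SYS might examine one of these views and pass access along like this (V$SESSION is used only as an example; MERLIN is the sample user from the TKPROF report, and the grant is made on the underlying V_$ view because the V$ name is a public synonym for it):

 -- Who is connected to the instance right now?
 SELECT sid, serial#, username, status
   FROM v$session
  WHERE username IS NOT NULL;

 -- Allow another user to query the view.
 GRANT SELECT ON v_$session TO merlin;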
