direct path write

The direct path write wait event has three parameters: file#, first block#, and block count. In Oracle Database 10g, this wait event falls under the User I/O wait class. Keep the following key thoughts in mind when dealing with the direct path write wait event.

  • These are waits that are associated with direct write operations that write data from users' PGAs to data files or temporary tablespaces. Oracle direct I/O is not the same as Unix operating system direct I/O: Oracle direct I/O bypasses the SGA, while Unix operating system direct I/O bypasses the file system cache.

  • Direct writes may be performed in synchronous or asynchronous mode, depending on the platform and the value of the DISK_ASYNCH_IO parameter. The systemwide direct path write wait event statistics can be very misleading when asynchronous I/O is used.

  • A significant number of direct path write waits is most likely an application issue.

Common Causes, Diagnosis, and Actions

The direct path write waits are driven by SQL statements that perform direct write operations in the temporary or regular tablespaces. This includes SQL statements that create temporary segments such as SORT, HASH, INDEX, DATA, LOB_DATA, and LOB_INDEX in the temporary tablespace, and statements that perform direct load operations such as CTAS (create table as select), INSERT /*+ APPEND */ SELECT, and SQL*Loader running in direct load mode. Oracle sessions wait on the direct path write event to confirm that the operating system has completed all outstanding I/Os, or they wait for I/O slots to become available so that more writes can be issued.
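The kinds of statements described above can be sketched as follows. This is an illustrative example only; the table and column names (sales, sales_2004, sales_history, order_date) are hypothetical.

```sql
-- CTAS: a direct load operation that writes blocks above the high-water
-- mark, bypassing the buffer cache (and thus showing up as direct path
-- write waits rather than db file waits).
create table sales_2004 as
select * from sales
where  order_date >= to_date('2004-01-01', 'YYYY-MM-DD');

-- Direct path insert: the APPEND hint requests the same direct load
-- mechanism for a conventional INSERT ... SELECT.
insert /*+ APPEND */ into sales_history
select * from sales_2004;
commit;
```

A large sort or hash join that spills to the temporary tablespace produces the same wait event, but against temp files rather than data files.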

Note

As of Oracle 8.1.7, there is a separate direct write wait event for LOB segments: direct path write (lob). This wait event applies to LOBs that are stored as NOCACHE.

Session-Level Diagnosis

Like the direct path read wait event, the direct path write wait event is also unlikely to show up as the leading bottleneck within a session due to the way the waits are accounted for, especially when asynchronous I/O is used. This means you shouldn't rely on the V$SESSION_EVENT view for diagnostic data. Instead, you can find current sessions that perform a lot of direct write operations from the V$SESSTAT view using the following query. The downside to this approach is that there is no time element.

select a.name, b.sid, b.value,
       round((sysdate - c.logon_time) * 24) hours_connected
from   v$statname a, v$sesstat b, v$session c
where  b.sid = c.sid
and    a.statistic# = b.statistic#
and    b.value > 0
and    a.name = 'physical writes direct'
order by b.value;

NAME                       SID      VALUE HOURS_CONNECTED
------------------------- ---- ---------- ---------------
physical writes direct       4         39             980
physical writes direct       5     445480             980

The next step is to discover if the direct writes are going to the temporary tablespace or data files. You can do this by monitoring the P1 values of the direct path write events. If they mostly go to the temporary tablespace, you have to find out what kind of temporary segments are being created and the SQL statements that initiate them. The solution is the same as with the direct path read event: cure it from the application before you decide to increase the SORT_AREA_SIZE, HASH_AREA_SIZE, or PGA_AGGREGATE_TARGET parameters. Tune the application and SQL statements to reduce the number of sort runs or hash partitions. Remember, when you reduce direct writes, you will also reduce the number of direct reads. This is because the temporary segments that are created must be read back.
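The P1 monitoring described above can be sketched with the following queries. This is a minimal sketch: the first query captures the P1 (absolute file#) of sessions currently waiting on the event, and the second maps a file number to a tablespace. Note that, depending on the release, waits against temp files may report P1 as the DB_FILES parameter value plus the temp file number, so a P1 larger than DB_FILES usually points at the temporary tablespace.

```sql
-- Sample the current direct path write waits; P1 is the absolute file#,
-- P2 the first block#, and P3 the block count.
select sid, p1 file#, p2 block#, p3 block_cnt
from   v$session_wait
where  event = 'direct path write';

-- Map an observed P1 value to a data file and tablespace. If P1 exceeds
-- the DB_FILES setting, subtract DB_FILES and look in V$TEMPFILE instead.
select file_id, tablespace_name, file_name
from   dba_data_files
where  file_id = &file#;
```

Repeating the first query a few times gives a rough picture of whether the writes cluster in temp files or data files.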

If the direct writes are mostly going to data files, then one or more SQL statements are performing direct load operations. In this case, you may monitor the I/O operations from the operating system using sar -d or iostat -dxnC. Make sure the disk service time is acceptable and there are no hot spots.

You can discover the direct write I/O size from the P3 parameter of the direct path write event. You can do this by querying the V$SESSION_WAIT view or by tracing the Oracle session that performs direct writes with the trace event 10046 at level 8. Alternatively, you can trace the Unix process that performs direct writes with truss, tusc, trace, or strace, or enable direct I/O debug information for the Oracle session with the trace event 10357 at level 1. Following is a snippet of the event 10046 and 10357 trace file that belongs to a session that performs direct writes. The wait event trace data and the direct write debug information are mixed because both traces are enabled at the same time. From it, you learn that there are two I/O slots available (slot_cnt=2), and an I/O slot size is 8K (slot_size=8192), which limits each write to one Oracle block (also indicated by cnt=1 and p3=1). The direct writes are going to file #51, as indicated by p1=51 in the wait lines and afn=33 (hexadecimal 33 = decimal 51) in the debug lines.

 kcblin: lbs=fc855d08  flag=10  slot_cnt=2 slot_size=8192  state obj=243a1f54 
kcblin: state objects are: Call=243a1f54,Current Call=243a1f54, Session=243208b4
kcbldio:lbs=fc855d08 slt=fc8858bc typ=1 async=1 afn=33 blk=f409 cnt=1 buf=fc8f2a00
kcbldio:lbs=fc855d08 slt=fc885a40 typ=1 async=1 afn=33 blk=f40a cnt=1 buf=fc90da00
kcblcio: lbs=fc855d08 slt=fc8858bc type=1 afn=33 blk=f409 cnt=1 buf=fc8f2a00
kcblcio: lbs=fc855d08 slt=fc8858bc type=1 afn=33 blk=f409 cnt=1 buf=fc8f2a00
WAIT #1: nam='direct path write' ela= 9702 p1=51 p2=62473 p3=1
kcbldio:lbs=fc855d08 slt=fc8858bc typ=1 async=1 afn=33 blk=f40b cnt=1 buf=fc8f2a00
kcblcio: lbs=fc855d08 slt=fc885a40 type=1 afn=33 blk=f40a cnt=1 buf=fc90da00
kcblcio: lbs=fc855d08 slt=fc885a40 type=1 afn=33 blk=f40a cnt=1 buf=fc90da00
WAIT #1: nam='direct path write' ela= 5577 p1=51 p2=62474 p3=1
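A combined trace like the one above can be produced by enabling both events in the session before running the direct-load or sort-intensive statement. This is a sketch assuming you are tracing your own session and have the ALTER SESSION privilege; the event numbers and levels are as described in the text.

```sql
-- Enable wait event tracing (10046 level 8 includes wait lines) and
-- direct I/O debug information (10357 level 1) in the current session.
alter session set events '10046 trace name context forever, level 8';
alter session set events '10357 trace name context forever, level 1';

-- ... run the statement that performs direct writes here ...

-- Turn both traces off when done.
alter session set events '10046 trace name context off';
alter session set events '10357 trace name context off';
```

The trace file is written to the directory named by the USER_DUMP_DEST parameter.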



Oracle Wait Interface
Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning (Osborne ORACLE Press Series)
ISBN: 007222729X
Year: 2004
Pages: 114
