Page 573
The files are separated so that access to these areas is not in contention with access to other files, such as database files and the control file. It is important to optimize the physical layout so that contention is minimized between the ARCH, DBWR, and LGWR processes. Because of the information generated for each transaction, it is usually best to place rollback segments on their own disk.
TIP
Some sites that use heavy OLTP create separate tablespaces for each rollback segment and spread them across disks similarly to how control files are distributed. This minimizes the I/O on a single disk and provides for less I/O contention when rollback segments are accessed.
One of the most important things that you can do to achieve I/O balancing is to put table and index data on separate physical devices, as shown in Figure 23.3. If table and index data exist on the same disk drive, any type of table access that uses indexes doubles the I/O operations on a single disk device. Take, for example, a SELECT operation. The index must be accessed to locate the rows that satisfy the query, and then the table itself is accessed. If both reside on one disk, every operation waits on access to either the table or the index, which drastically cuts throughput, especially when many users access data in the same tables simultaneously. By splitting the tables and the indexes across disk drives, the disk drive heads can work in tandem with the database to quickly access table data and return it to users.
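As a sketch of this layout (the tablespace names, file names, and table shown here are hypothetical, not from this database), table data and index data can be placed in tablespaces whose datafiles live on different disks:

```sql
-- Hypothetical example: table data on /u03, index data on /u07
create tablespace data_ts
  datafile '/u03/oradata/norm/data01.dbf' size 100M;

create tablespace index_ts
  datafile '/u07/oradata/norm/indx01.dbf' size 100M;

-- Place the table and its index in the separated tablespaces
create table emp ( empno number, ename varchar2(30) )
  tablespace data_ts;

create index emp_pk on emp (empno)
  tablespace index_ts;
```

With this arrangement, an indexed lookup reads the index from one disk and the table rows from another, so the two sets of reads do not compete for the same drive head.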
Splitting tables and indexes is the first step in setting up efficient throughput and minimizing I/O contention, but it is hardly enough to ensure optimal performance. You must pinpoint which database files are accessed most heavily and spread them across disks to balance the load. By issuing the following query as the SYS user or as another user who has access to the V$ views, you can determine the current I/O load on the database files. It is important to take these readings several times over a span of time to ensure accurate statistics. For example:
SQL> select d.name, f.phyrds, f.phywrts
  2  from v$datafile d, v$filestat f
  3  where d.file# = f.file#
  4  /

NAME                                         PHYRDS    PHYWRTS
---------------------------------------- ---------- ----------
/u04/oradata/norm/system01.dbf               383336      23257
/u20/oradata/norm/rbs01.dbf                   13740     332604
/u05/oradata/norm/temp01.dbf                   3037     147963
/u08/oradata/norm/tools01.dbf                  5338        243
/u05/oradata/norm/users01.dbf                     0          0
/u03/oradata/norm/aold01.dbf                 133879      63698
/u06/oradata/norm/aolx01.dbf                  59108      91757
/u06/oradata/norm/apd01.dbf                   68733       8119
/u09/oradata/norm/apx01.dbf                   34358      29941
/u06/oradata/norm/ard01.dbf                  107335      21018
/u09/oradata/norm/arx01.dbf                   28967      13770
Figure 23.3. Table and index splitting.
Unfortunately, it is difficult to know what the load on the disks will be before the database is implemented. For this reason, after you determine that a significant degree of contention is occurring on a single disk, move the database file. Use the ALTER DATABASE RENAME FILE command on a database instance that is mounted, but not open. That way, you can ensure that the load on the disk drives is optimal.
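A hedged sketch of the relocation procedure follows; the target directory /u10 is hypothetical, and the datafile must first be copied to its new location at the operating-system level while the database is not open:

```sql
-- With the database mounted but not open, after the file has been
-- copied to the new disk at the operating-system level:
alter database rename file
  '/u06/oradata/norm/ard01.dbf' to '/u10/oradata/norm/ard01.dbf';

-- Then open the database; subsequent I/O to this file goes to /u10.
alter database open;
```

Note that ALTER DATABASE RENAME FILE updates only the control file's record of the datafile location; it does not move the file itself.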
There are also situations in which key database files take the brunt of the I/O. Moving them to another disk might not be possible or might not provide the best solution. For situations like these, Oracle provides striping: taking a single, large database file and splitting it into smaller pieces that can be distributed across multiple disks. For example:
SVRMGR> create tablespace dba_ts
     2> datafile '/u03/oradata/norm/dbats01.dbf' size 50M,
     3> '/u05/oradata/norm/dbats02.dbf' size 50M,
     4> '/u07/oradata/norm/dbats03.dbf' size 50M,
     5> '/u09/oradata/norm/dbats04.dbf' size 50M
     6> /
Statement processed.
When you distribute a tablespace across several database files on separate disks, you stripe it; in this example, a 50MB stripe of data resides on each of the four disks. Striping enables the database to distribute its data across the disks, and it speeds I/O access by minimizing contention on any single disk drive.
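To confirm how a tablespace's stripes are laid out, you can query the DBA_DATA_FILES view; a sketch of such a query for the tablespace created above:

```sql
select tablespace_name, file_name, bytes
from dba_data_files
where tablespace_name = 'DBA_TS'
order by file_name;
```

The FILE_NAME column shows which disk each stripe occupies, so you can verify that no two stripes of the same tablespace landed on the same device.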
One of the features of an Oracle7 database is its capability to undo, or roll back, uncommitted changes to the database. In short, a transaction that physically changes database data (an INSERT, UPDATE, or DELETE SQL statement) produces information that Oracle writes to its online rollback segments. Many DBAs fail to realize that because Oracle attempts to provide data consistency when a query is issued, SELECT statements also use rollback segments when they access data. When a query is issued, if a row has been changed but not committed, the Oracle RDBMS returns information from the rollback segments to provide read consistency. Rollback segments are also used to recover when an instance is forced down or ends with an abnormal termination.
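As an illustration of read consistency (the table and values here are hypothetical), consider two concurrent sessions:

```sql
-- Session 1: change a row but do not commit
update emp set sal = 5000 where empno = 7369;

-- Session 2: a query issued now still sees the old value of SAL.
-- Oracle reconstructs the pre-change row image from the rollback
-- segment, so the reader never blocks and never sees uncommitted data.
select sal from emp where empno = 7369;
```

Only after session 1 issues a COMMIT does session 2 see the new salary, which is why even read-only workloads generate reads against the rollback segments.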
Rollback segment contention can occur whenever a transaction needs a block within a rollback segment that another transaction is using. Use the following query to determine the amount of contention being experienced within the rollback segments.
select r.name, s.gets, s.waits
from v$rollstat s, v$rollname r
where s.usn = r.usn
/
The following ratio compares how often a rollback segment was accessed (gets) with how often the database had to wait to access information within a rollback segment (waits):
ratio = ( waits/gets ) * 100
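The ratio can also be computed directly in the query itself; a sketch, with a guard against rollback segments that have not yet been accessed:

```sql
select r.name,
       (s.waits / s.gets) * 100 ratio
from v$rollstat s, v$rollname r
where s.usn = r.usn
  and s.gets > 0
/
```

The GETS > 0 predicate simply avoids dividing by zero for segments with no recorded activity.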
If the result is 2 or greater, there is contention within the rollback segments. Create more rollback segments, which reduces the chance that transactions hit the same rollback segment blocks at the same time. This reduces contention, but it cannot eliminate it entirely. Here are some guidelines for the number of rollback segments that you should use:
There is also the question of how large to make rollback segments. This is less complicated because you need to consider only two environments: OLTP and non-OLTP. OLTP (online transaction processing) environments are those in which a large volume of database transactions is processed by users, such as with an order entry system. OLTP environments do better with a large number of smaller rollback segments. For non-OLTP environments, assign larger rollback segments so that data is retained longer for long