20.2 The Most Significant Latches

The most significant latches protect the most heavily used memory buffer structures in the System Global Area (SGA). The SGA includes the database buffers and the shared pool. The shared pool contains various subset buffer caches and structures, such as the library cache, the metadata (dictionary) cache, and parsed SQL code.

Tip 

Concurrency requirements for the shared pool are much higher than for the database buffer cache. The database buffer cache is simply a speed bridge between datafiles and data access, whereas the shared pool attempts to minimize processing repetition, which can be substantial. A latch miss in the shared pool is therefore a much more significant event than a latch miss in the database buffer cache. The concept of sharing is much more important with respect to the shared pool.

As already stated, a latch is similar to a lock on a row in a table, except that a latch secures a temporary hold on a section of a buffer in memory. What are the different types of latches, and to which buffers do they apply?

  • Database Buffer Cache Latches.   Cache buffers lru chain, cache buffers chains, cache buffer handles, multiblock read objects, cache protection latch.

  • Shared Pool, Library, and Metadata Cache Latches.   Shared pool, sequence cache, library cache, library cache pin, library cache pin allocation, library cache load lock, row cache objects.

  • Redo Log Buffer Latches.   Redo allocation, redo copy, redo writing.

  • Network and Database Connection Latches.   Virtual circuit buffers, virtual circuit queues, virtual circuits, session allocation, session switching, session idle bit, session queue latch.
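
To see which latches are actually suffering contention in a running instance, the V$LATCH performance view can be queried. A minimal sketch, listing any latch that has missed at all:

-- Latches with the highest sleep counts are worth investigating first.
SELECT name, gets, misses, sleeps
FROM v$latch
WHERE misses > 0
ORDER BY sleeps DESC, misses DESC;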

20.2.1 The Database Buffer Cache

Two latches are significant in the database buffer cache.

  • Cache Buffers lru Chain.   This latch is acquired for one of two reasons: first, when copying a new block into the cache, the block must be locked in the cache to prevent changes to it during the I/O from disk to buffer; second, when writing a dirty block to disk, the block cannot be changed or read into the cache again until the I/O from buffer to disk is complete. Excessive waits on this latch can be the result of constant swapping between cache and disk to cope with continual requests for different blocks.

    Potential culprits:

    • Excessive I/O activity from full scans.

    • The database buffer cache could be too small.

    • Inappropriate, superfluous, or highly degraded index structures.

    • Poorly tuned SQL code.

    Potential solutions:

    • Reduce full scanning by tuning SQL code properly.

    • Reduce the DB_FILE_MULTIBLOCK_READ_COUNT parameter to discourage full scans.

    • Create multiple buffer pools and tablespaces (separate buffer pools were introduced in Oracle8i Database), using either the ALTER TABLE table [NO]CACHE command or the KEEP and RECYCLE pools (see the sketch following this list). This allows separation in two ways:

      • Separate highly active and less used data.

      • Separate full scans from single-block reads (SQL code dealing with small numbers of rows).

    • Increase database buffer cache size. This is always the easy option but rarely the correct one.

    • Configure more database writer (DBWn) processes.

    Tip 

    A parameter called DB_BLOCK_LRU_LATCHES was used in the past. This parameter is now automated and hidden in Oracle9i Database and beyond, probably with good reason. Changing a parameter such as this is an attempt to tune a latch. A latch wait is the result of a performance problem, and not the problem itself. Thus tuning a latch is not a solution.
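
    As a sketch of the buffer pool separation suggested in the list above (the pool sizes and table names here are arbitrary examples, not objects from this book), the KEEP and RECYCLE pools are sized with initialization parameters and tables are then assigned to them:

    -- Size the optional pools (example values only).
    ALTER SYSTEM SET db_keep_cache_size = 64M;
    ALTER SYSTEM SET db_recycle_cache_size = 32M;

    -- Keep a small, hot lookup table permanently cached.
    ALTER TABLE currency STORAGE (BUFFER_POOL KEEP);

    -- Age a large, rarely reread table out of the cache quickly.
    ALTER TABLE audit_trail STORAGE (BUFFER_POOL RECYCLE);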

In OLTP databases, high I/O is usually caused by poorly tuned SQL code, often in the form of excessive full physical scans. Full table scans and overused fast full index scans cause heavy I/O activity; both read information directly from disk and bypass the database buffer cache altogether.

Tip 

For full physical scans, data is read directly from disk and not from the cache. The database buffer cache is bypassed for the read, even though the data read from disk is still loaded into the database buffer cache.

Blocks that are already in the buffer exacerbate the problem: they are read from disk and copied into the buffer again even though they already exist there, and latches are still required, increasing contention on the same data blocks. Poorly tuned SQL code often results in repeated full table scans of the same table, which is continuously read from disk and copied into the database buffer cache but never read from the cache itself. Just imagine! The cache entries exist but are effectively never reused. In the extreme case where all query plans only ever use full table scans, performing the same scans over and over makes the database buffer cache more or less useless. This extreme is very unlikely, but in it the database buffer cache might as well be sized at zero.

Tip 

Once again, full physical scans such as full table scans and fast full index scans read directly from disk and bypass the database buffer cache during the read process. The data is still loaded into the database buffer cache, but it simply never has the option of being read from the cache. This can have the additional undesirable effect of pushing frequently used blocks out of the cache.
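
One way to find which objects are being repeatedly scanned in full is to look at the cached execution plans. A sketch against the V$SQL_PLAN view (available from Oracle9i Database onward):

-- Tables near the top of this list are candidates for SQL code tuning or better indexing.
SELECT object_owner, object_name, COUNT(*) AS full_scan_plans
FROM v$sql_plan
WHERE operation = 'TABLE ACCESS' AND options = 'FULL'
GROUP BY object_owner, object_name
ORDER BY full_scan_plans DESC;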

  • Cache Buffers Chains.   This latch is acquired when a block is accessed and pinned. A block is pinned because something is changing it, and this latch prevents another process from changing the block at the same time, which is essential to data integrity. Heavy DML activity combined with very high I/O rates in SQL code can cause waits on this latch to occur too often; tune the SQL code. Otherwise, waits on this latch are commonly caused by what are called hot blocks. A hot block is a block in the buffer cache subject to change by too many sessions simultaneously. Hot block issues are often a consequence of data model design flaws. Typical data model culprits are as follows:

    • Static or semi-static lookup tables that are subject to change. Updates of this kind should be done in batch mode and avoided during times of peak activity. Summary column updates in parent tables are a common cause.

    • System reference tables, perhaps similar in function to Oracle Database sequences, but storing centrally located information for all data. The Oracle Database configuration parameter file is a perfect example of a system reference structure, except that it is held in memory rather than in a database table. Many applications use tables of this nature, and application updates to them can cause serious hot block issues. Tables of this type were common in single-user databases in the last century and are completely incompatible with any type of multi-user database engine.

    • Primary keys that use sequence-generated values as index entries, where index leaf blocks are accessed at very high concurrency rates. Because consecutive sequence values are inserted into the same index leaf block, reverse key indexes can help on tables with very high insertion rates. Append-only log or audit trail tables with sequence-generated primary key indexes are typical culprits.

    What can be done about hot blocks?

    • Proper data model and database object design is the best approach but can be expensive.

    • Block concurrency levels can be raised using the INITRANS parameter, amongst other methods. Block structural changes are covered in Chapter 14.

    • Use reverse key indexes on sequence-generated primary keys for tables with extremely high concurrent insertion rates (both this and the INITRANS change are sketched after the tip below).

      Tip 

      Changing block structure affects only new blocks, not blocks already created. Changing existing block storage structure requires table, tablespace, or even database recreation. Using automatic segment space management can help with block-level concurrency issues; settings for PCTUSED, FREELISTS, and FREELIST GROUPS in database objects such as tables are ignored if the containing tablespace is set to automatic segment space management. One of the reviewers of this book commented that automatic segment space management can realize up to a 35% performance improvement, specifically in Oracle RAC environments.
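
    The two changes suggested in the list above can be sketched as follows, using hypothetical object names:

    -- Raise block-level concurrency on a hot table (affects new blocks only).
    ALTER TABLE audit_trail INITRANS 8;

    -- Rebuild the sequence-driven primary key index as a reverse key index
    -- so that consecutive values are spread across different leaf blocks.
    ALTER INDEX audit_trail_pk REBUILD REVERSE;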

Hot blocks can be caused by excessively high I/O but are more likely the result of data model design flaws or indexing issues.

20.2.2 The Shared Pool

Shared pool latches can be divided into three areas: the shared pool, the library cache, and the metadata or dictionary cache.

  • Shared Pool.   This latch covers the shared pool as a whole, including the library and metadata caches. Waits on it usually occur when the shared pool is sized too large, because a very large shared pool requires excessive maintenance of long free lists; they can also occur when the shared pool is too small. (A quick check of shared pool free space is sketched after this list.)

  • Sequence Cache.   Waits on this latch imply that cached sequence values are running out of buffer space, in other words that sequence usage is very high.
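
The shared pool free-space check mentioned above can be sketched with a simple query against V$SGASTAT:

SELECT pool, name, ROUND(bytes / 1024 / 1024) AS mb
FROM v$sgastat
WHERE pool = 'shared pool' AND name = 'free memory';

A consistently huge amount of free memory suggests an oversized shared pool; almost no free memory, combined with library cache reloads, suggests it is too small.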

Library Cache Latches

Library cache latches generally involve SQL code problems.

Tip 

Tune SQL code properly. Properly tuned SQL code will alleviate and resolve so many other problems. Proper SQL code tuning simply cannot be ignored or avoided.

  • Library Cache.   This latch is acquired to parse a new SQL code statement and add it to the library cache for future sharing. If an application re-parses frequently, it is likely that SQL code is not matching statements already in the library cache, as a result of a lack of bind variable use or inconsistent SQL coding. SQL code must match exactly to be reused, down to the case of characters and even the number of spaces between an equals sign and a column value in a WHERE clause join. Coding format should be standardized because matching is that precise.

    Tip 

    Setting the CURSOR_SHARING parameter to FORCE or SIMILAR can help decrease re-parsing. However, the accuracy of the Optimizer in relation to statistics might be decreased in large databases.

    Note 

     Oracle Database 10 Grid   Case sensitivity of SQL code is less important to performance due to Optimizer improvements.

  • Library Cache Load Lock.   This latch is acquired to lock a library cache or database object while it is loaded, catering for concurrency.

  • Library Cache Pin.   A pin is acquired after a library cache load lock latch is acquired. Waits on this latch occur when an SQL code statement already in the library cache is executed again; a re-executed statement that must be re-parsed implies that it was not matched. Once again the probable causes are lack of bind variable use and inconsistent SQL coding standards.
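
Re-parse and reload pressure on the library cache can be observed through the V$LIBRARYCACHE view; a minimal sketch:

SELECT namespace, gets, gethitratio, pins, pinhitratio, reloads
FROM v$librarycache
ORDER BY reloads DESC;

Low hit ratios and high reload counts for the SQL AREA namespace usually point at the bind variable and coding consistency issues described below.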

To reiterate: the biggest cause of latch problems with the library cache, perhaps with the shared pool, and even with an entire Oracle installation in general, is often poorly coded and poorly tuned SQL code. There are various methods of resolving SQL code issues:

  • Increasing shared pool size can help to reduce re-parsing of SQL code. On the other hand, an excessively large shared pool can cause other problems.

  • Bind variables, bind variables, bind variables, and then bind variables again! The CURSOR_SHARING parameter can be set to SIMILAR or FORCE, although there are possible side effects of using the CURSOR_SHARING parameter as well. Let's make sure we understand exactly what bind variable use is. This SQL statement uses a bind variable:

    SELECT * FROM customer WHERE name = :name;

    This SQL statement does not use a bind variable but a literal value:

    SELECT * FROM customer WHERE name = 'Joe';

    This SQL statement will be re-parsed because the string 'Joe' is not the same as the string 'Jim':

    SELECT * FROM customer WHERE name = 'Jim';

If both of the SQL code statements finding Joe and Jim were submitted to the database using a bind variable, as in the first example, then the SELECT statement would have been parsed only once. Does this seem trivial? Imagine a scenario that gives an idea of scale, exemplifying sharing and the shared use of the shared pool library cache from a realistic perspective.

Let's say you have 10,000 users all executing a statement like this, at exactly the same time:

SELECT * FROM customer WHERE name = :name;

None of the 10,000 executions will be re-parsed because a bind variable is used. Now imagine replacing the bind variable with each of the 10,000 different customer names: you have 10,000 re-parses, and the shared pool might need to be 10,000 times bigger. Now imagine a full application with many tables, indexes, and a multitude of SQL statements using complex joins. You can imagine what happens to the shared pool.
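
From a SQL*Plus session, the bind variable approach can be sketched as follows; the statement text never changes, so it is parsed once and simply re-executed:

VARIABLE name VARCHAR2(30)
EXECUTE :name := 'Joe'
SELECT * FROM customer WHERE name = :name;
EXECUTE :name := 'Jim'
SELECT * FROM customer WHERE name = :name;

Both executions match the single shared statement already in the library cache.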

Tip 

Data warehouse databases are different: queries and the amounts of data they read are very large, and it is sometimes prudent not to use bind variables. Why? Because using bind variables tends to lower Optimizer accuracy in relation to statistics, and statistics are very important when querying very large data sets. For the same reason, setting the CURSOR_SHARING parameter to SIMILAR or FORCE causes the Optimizer to lose query plan accuracy due to less precise use of statistics.

It is so very important to create properly tuned SQL code from the start.

Metadata Cache Latches

Waits on the row cache objects latch imply that there is contention for data dictionary metadata definitions loaded into the cache. In other words, logical object definitions such as those of tables or indexes are not fitting into the cache. This situation is usually the result of design issues, for instance applications in which each end user has a separate schema, or applications with multiple layers of views. Both approaches are often used for ease of application programming but can be most unfortunate for performance if scalability becomes an important requirement, and most successful businesses require scalability if growth is the intention.
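
Dictionary cache contention can be confirmed against the V$ROWCACHE view; a minimal sketch:

SELECT parameter, gets, getmisses
FROM v$rowcache
WHERE gets > 0
ORDER BY getmisses DESC;

A high ratio of getmisses to gets for a particular dictionary area indicates definitions being repeatedly reloaded.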

20.2.3 The Redo Log Buffer

The redo log buffer uses three latches.

  • Redo Allocation.   Acquired to allocate space in the buffer prior to writing to the buffer.

  • Redo Copy.   Acquired to make the actual buffer entries. Latch free waits on the redo copy latch are usually high because multiple latches are checked before spinning. Additionally, multi-CPU platforms allow multiple redo copy latches and thus parallel copies into the redo log buffer.

  • Redo Writing.   Acquired when writing from the log buffer to a redo log file, verifying that the log writer process is inactive before allowing the write to occur.
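
Pressure on the redo log buffer usually shows up in statistics rather than in the latches themselves; a sketch against V$SYSSTAT:

SELECT name, value
FROM v$sysstat
WHERE name IN ('redo buffer allocation retries', 'redo log space requests');

Steadily increasing values suggest that the log buffer is too small or that the log writer process cannot keep up.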

20.2.4 Network and Database Connection Latches

Most of these types of latches are trivial. Sometimes virtual circuit latch problems can indicate issues with shared server and dispatcher configuration parameters and perhaps even shared or large pool configuration issues.

That is enough information about latches. Oracle Database has well over 200 different latches.

Note 

 Oracle Database 10 Grid   Oracle9i Release 2 (9.2) has 239 latches; Oracle Database 10g has 316.
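
The exact count for any release can be checked directly:

SELECT COUNT(*) FROM v$latchname;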

In any particular environment, such as an OLTP or data warehouse database, differences in functionality and behavior tend to make different latch problems more prominent. The next chapter briefly examines various tools used for physical and configuration tuning of Oracle installations.


