The most significant latches protect the most heavily used memory structures in the system global area (SGA). The SGA includes the database buffers and the shared pool. The shared pool contains various subset caches and structures, such as the library cache, the metadata cache, and parsed SQL code.
Concurrency requirements for the shared pool are much higher than for the database buffer cache. The database buffer cache is simply a speed bridge between datafiles and data access. The shared pool attempts to minimize processing repetition, which can be substantial. Missing a latch in the shared pool is a much more significant event than missing a latch in the database buffer cache. The concept of sharing is much more important with respect to the shared pool.
As already stated, a latch is similar to a lock on a row in a table, except that a latch is used to allocate a hold on a section of a buffer in memory. What are the different types of latches, and to which buffers do they apply?
Database Buffer Cache Latches. Cache buffers lru chain, cache buffers chains, cache buffer handles, multiblock read objects, cache protection latch.
Shared Pool, Library, and MetaData Cache Latches. Shared pool, sequence cache, library cache, library cache pin, library cache pin allocation, library cache load lock, row cache objects.
Redo Log Buffer Latches. Redo allocation, redo copy, redo writing.
Network and Database Connection Latches. Virtual circuit buffers, virtual circuit queues, virtual circuits, session allocation, session switching, session idle bit, session queue latch.
Two latches are significant in the database buffer cache.
Cache Buffers lru Chain.
This latch is acquired when a block is searched for or moved on the least recently used (LRU) list of the database buffer cache. Common causes of contention include:
Excessive I/O activity from full physical scans.
The database buffer cache could be too small.
Inappropriate, superfluous, or highly degraded index structures.
Reduce full scanning by tuning SQL code properly.
Reduce the DB_FILE_MULTIBLOCK_READ_COUNT parameter to discourage full scans.
Create multiple cache sizes and tablespaces (separate buffer pools in Oracle8i Database) allowing separation in two ways: use the ALTER TABLE table [NO]CACHE command or the KEEP and RECYCLE pools.
Separate highly active and less used data.
Separate full scans from single-block reads (SQL code dealing with small numbers of rows as opposed to large scans).
Increase database buffer cache size.
Configure more database writer processes.
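Several of the remedies above can be sketched in SQL. The table names below are hypothetical, and the parameter value is purely illustrative; the buffer pool clauses apply from Oracle8i onward:

```sql
-- Keep a small, hot lookup table resident in the KEEP pool:
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);

-- Push blocks from a large, rarely re-read table out of cache quickly:
ALTER TABLE audit_history STORAGE (BUFFER_POOL RECYCLE);

-- Older alternative: favor retention at the most recently used end of the LRU list:
ALTER TABLE lookup_codes CACHE;

-- Discourage full scans by lowering the multiblock read count
-- (requires appropriate privileges; the value 8 is only an example):
ALTER SYSTEM SET DB_FILE_MULTIBLOCK_READ_COUNT = 8;
```

The KEEP and RECYCLE pools must also be sized with their respective initialization parameters before objects are assigned to them.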
A parameter called DB_BLOCK_LRU_LATCHES was used in the past. This parameter is now automated and hidden in Oracle9i Database and beyond, probably with good reason. Changing a parameter such as this is an attempt to tune a latch. A latch wait is the result of a performance problem, not the problem itself. Thus tuning a latch is not a solution.
In OLTP databases high I/O is usually caused by poorly tuned SQL code, often performing unnecessary full scans.
For full physical scans, data is read directly from disk, not from the cache. The database buffer cache is bypassed on the read, even though the data read from disk is loaded into the database buffer cache.
Any required blocks already in the buffer exacerbate the problem because they are read from disk and copied into the buffer even though they already exist there; latches are required anyway, increasing block contention.
Once again full physical scans such as full table scans and fast full index scans read directly from disk and bypass the database buffer cache during the read process. The data is still loaded into the database buffer cache but it simply does not have the option of being read from the cache. This can have the additional undesirable effect of removing frequently used blocks from cache.
Cache Buffers Chains.
This latch is acquired when a block is accessed and pinned. A block is pinned because something is changing it. Waits on this latch often indicate hot blocks. Common causes include:
Static or semi-static lookup data tables subject to change. These types of updates should be done in batch mode and avoided during times of peak activity. Summary column updates in parent tables are common causes.
System reference tables perhaps similar in function to Oracle Database sequences, but storing values in a single, highly accessed table row.
Primary keys using sequence generators as index values where index leaf blocks are accessed at high concurrency rates. Reverse primary key indexes can help on tables with very high insertion rates, attempting to store consecutive sequence values into the same index leaf block. Appended log or audit trail tables with sequence number generated primary key indexes are typical culprits.
What can be done about hot blocks?
Proper data model and database object design is the best approach but can be expensive.
Block concurrency levels can be raised using the INITRANS parameter, amongst other physical storage settings.
Use reverse key indexes on sequential primary keys using sequences for tables with extremely high concurrent insertion rates.
Changing block structure will only affect new blocks, not already-created blocks. Changing existing block storage structure requires table, tablespace, or even database recreation. Using automated segment space management can help with block-level concurrency issues. Settings for PCTUSED, FREELISTS, and FREELIST GROUPS in database objects such as tables are ignored if the containing tablespace is set to automatic segment space management. One of the reviewers for this book commented that automatic segment space management realizes up to 35% performance improvement.
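The hot block remedies above can be sketched as follows. Object names and sizes are hypothetical; the syntax is standard Oracle DDL:

```sql
-- Raise block-level concurrency; affects newly formatted blocks only:
ALTER TABLE audit_trail INITRANS 8;

-- A reverse key index spreads consecutive sequence values across many
-- leaf blocks instead of concentrating inserts in one hot block:
CREATE INDEX ix_audit_trail_id ON audit_trail (audit_id) REVERSE;

-- Automatic segment space management is declared at the tablespace level:
CREATE TABLESPACE dataassm DATAFILE 'dataassm01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
```

Note that moving existing tables into an automatic segment space management tablespace still requires rebuilding those tables, consistent with the recreation caveat above.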
Hot blocks can be caused by excessively high I/O but are more likely the result of data model design flaws or indexing issues.
Shared pool latches can be divided into three areas: the shared pool, the library cache, and the metadata or dictionary cache.
Shared Pool. This latch covers the shared pool as a whole, including the library and metadata caches, and contention usually occurs when the shared pool buffer is inappropriately sized.
Sequence Cache. Waits on this latch imply that cached sequence values are running short of buffer space; sequence usage is very high.
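For a heavily used sequence, enlarging its cache reduces pressure on the sequence cache latch. The sequence name and cache value below are hypothetical; unused cached values are lost on instance shutdown:

```sql
-- Cache 1,000 values per allocation for a high-usage sequence:
CREATE SEQUENCE audit_seq CACHE 1000;

-- Or enlarge the cache of an existing sequence:
ALTER SEQUENCE audit_seq CACHE 1000;
```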
There are several library cache latches.
Tune SQL code properly. Properly tuned SQL code will alleviate and resolve so many other problems. Proper SQL code tuning simply cannot be ignored or avoided.
Library Cache. This latch is acquired to add and parse a new SQL code statement into the library cache, for future sharing. If an application requires frequent re-parsing then it is possible that SQL code is not matching existing SQL code statements already in the library cache. This can be a result of lack of bind variable use or lack of consistency in SQL coding. SQL code must match exactly for reuse, down to the case of individual characters.
Setting the CURSOR_SHARING parameter to FORCE or SIMILAR can help decrease re-parsing. However, the accuracy of the Optimizer in relation to statistics might be decreased in large databases.
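The CURSOR_SHARING parameter can be changed dynamically at the system or session level. The statements below use standard syntax for Oracle9i and 10g (SIMILAR was deprecated in later releases):

```sql
-- System-wide: replace literals with system-generated bind variables:
ALTER SYSTEM SET CURSOR_SHARING = FORCE;

-- Or per session, useful for testing the effect before committing system-wide:
ALTER SESSION SET CURSOR_SHARING = SIMILAR;
```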
Oracle Database 10g: Case sensitivity of SQL code is less important to performance due to Optimizer improvements.
Library Cache Load Lock. This latch locks a library cache database object, catering for concurrency during loading.
Library Cache Pin. A pin is acquired after a library cache load lock latch is acquired. A wait on this latch occurs when an SQL code statement already in the library cache is executed again and must be re-parsed because it is not matched. Once again, the cause is probably lack of bind variable use and inconsistent SQL coding standards.
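Library cache re-parsing can be observed directly in the V$LIBRARYCACHE performance view. Low hit ratios or high reload counts, particularly in the SQL AREA namespace, suggest unshared SQL being re-parsed:

```sql
-- Get and pin hit ratios plus reloads, worst offenders first:
SELECT namespace, gethitratio, pinhitratio, reloads, invalidations
FROM v$librarycache
ORDER BY reloads DESC;
```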
Once again, to reiterate, the biggest cause of latch problems with the library cache, perhaps the shared pool and even an entire Oracle installation in general, is often the result of poorly coded and tuned SQL code. There are various methods of resolving SQL code issues:
Increasing shared pool size can help reduce re-parsing of SQL code. On the other hand, an excessively large shared pool can cause other problems.
Bind variables, bind variables, bind variables, and then bind variables yet again. This SQL statement uses a bind variable:
SELECT * FROM customer WHERE name = :name;
This SQL statement does not use a bind variable but a literal value:
SELECT * FROM customer WHERE name = 'Joe';
This SQL statement will be re-parsed because the string 'Joe' is not the same as the string 'Jim':
SELECT * FROM customer WHERE name = 'Jim';
If both of the SQL code statements finding Joe and Jim were submitted to the database using a bind variable, as in the first example, then the SQL code SELECT statement would have been parsed once. Does this seem trivial? To get an idea of scale, imagine a realistic scenario exemplifying sharing and the shared use of the shared pool library cache.
Let's say you have 10,000 users all executing a statement like this, at exactly the same time:
SELECT * FROM customer WHERE name = :name;
All 10,000 executions will not be re-parsed because a bind variable is used. Now imagine replacing the bind variable with each of the 10,000 different customers' names: every one of those statements would be parsed separately.
Data warehouse databases are different where queries and amounts of data are very large. It is sometimes prudent not to use bind variables. Why? Because using bind variables tends to lower Optimizer accuracy in relation to statistics. Statistics are very important when querying very large data sets. Once again setting the CURSOR_SHARING parameter to SIMILAR or FORCE will cause the Optimizer to lose query plan accuracy due to less precise use of statistics.
It is so very important to create properly tuned SQL code from the start.
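Literal-laden SQL of the kind described above can be hunted down in the V$SQL performance view. This is a heuristic sketch: statements that share their first 40 characters but exist in many copies usually differ only in literal values, and the thresholds are arbitrary:

```sql
-- Find families of near-identical statements, likely literal variations:
SELECT SUBSTR(sql_text, 1, 40) AS sql_prefix, COUNT(*) AS copies
FROM v$sql
GROUP BY SUBSTR(sql_text, 1, 40)
HAVING COUNT(*) > 10
ORDER BY copies DESC;
```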
Waits on the row cache objects latch imply contention for cache-loaded data dictionary metadata definitions. In other words, logical objects such as table or index definitions are not fitting into cache. This type of situation is usually a result of design issues, for instance applications where each end user has a separate schema, or perhaps applications with multiple separate schemas.
The redo log buffer uses three latches.
Redo Allocation. Acquired to allow for space in the buffer prior to writing to the buffer.
Redo Copy. Acquired to make actual buffer entries. Latch free waits on the redo copy latch are usually high since multiple latches are checked before spinning. Additionally, parallel CPU platforms allow for multiple redo copy latches and thus parallel copies into the redo log buffer.
Redo Writing. Acquired when writing from the log buffer to a redo log file, verifying that the log writer process is inactive before allowing the write to occur.
Most of these types of latches are trivial. Sometimes virtual circuit latch problems can indicate shared server configuration issues.
That is enough information about latches. Oracle Database has well over 200 different latches.
Oracle Database 10g: Oracle9i Release 2 (9.2) has 239 latches; Oracle Database 10g has 316 latches.
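All of these latches can be inspected in the V$LATCH performance view. Misses and sleeps, not raw gets, are what indicate contention:

```sql
-- Latches actually experiencing contention, worst first:
SELECT name, gets, misses, sleeps
FROM v$latch
WHERE misses > 0
ORDER BY sleeps DESC, misses DESC;
```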
In any particular environment, such as an OLTP or data warehouse database, different functionality and behavior will make different latch problems more likely.