Appendix AH

Overview

The LogMiner packages, DBMS_LOGMNR and DBMS_LOGMNR_D, allow for analysis of Oracle's redo log files. You would make use of this feature for some of the following reasons:

  • You want to find out when a table was 'accidentally' dropped, and by whom.

  • You want to perform some auditing on a given table, or set of tables, to see who has been modifying what pieces of it. You can do this auditing 'after the fact'. Normally, you might use the AUDIT command, but this must be enabled ahead of time, and it only tells you someone modified the table - not what they modified. LogMiner is good for post-facto 'who did that' discovery, and to see what data changed exactly.

  • You would like to 'undo' a given transaction. In order to undo it, we'll need to see what it did, and get the SQL for undoing it.

  • You would like to get some empirical counts of rows modified in an average transaction.

  • You would like to perform a historical analysis of how the database has been used over time.

  • You would like to find out why your database is suddenly generating 10 MB of log every minute. It never used to do this, and now it is all of a sudden. Are there any obvious culprits to be found in a quick review of the logs?

  • You would like to see what is really happening 'under the cover'. The contents of the redo logs show you what actually happened when you did that INSERT on a table with a trigger that does an UPDATE of another table. All of the effects of your transaction are recorded in the log. LogMiner is an excellent exploration tool.

LogMiner provides you the tools to do all this, and more. What I will provide here is a quick overview of how to use LogMiner, and then explain some of the caveats of its use that are not spelled out in the Supplied PL/SQL Packages Reference guide shipped with Oracle. As with all of the other packages, it is recommended that you read the section in the Supplied PL/SQL Packages Reference on DBMS_LOGMNR and DBMS_LOGMNR_D to get an overview of the functions and procedures they contain, and how they are used. Below in the Options and Usage section, we will give an overview of these procedures, and their inputs as well.

LogMiner works best on archived redo log files, although it can be used with online redo log files that are not active. Attempting to use an active online redo log file could lead to an error message or just confusion on your part, as the redo log file will contain a mixture of old and new transaction data. An interesting thing to note about LogMiner is that you do not need to analyze a log file in the database that originally created it. It doesn't even have to be the same exact database version (you can analyze version 8.0 archive files in an 8.1 database). You can move an archived redo log file to another system, and analyze it there instead. This can be quite convenient for auditing and looking at historical usage patterns, without impacting the existing system. In order to do this however, you must use a database that is on the same hardware platform (byte-ordering, word sizes, and so on will be affected by this). Also, you will want to make sure that the database block sizes are the same (or that the database doing the analysis has a block size at least as big as the database originating the redo log), and have the same character set.
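As a quick, hedged sketch of checking that compatibility, you could run the following two standard queries on both the originating database and the analyzing database and compare the results yourself (only the comparison step is manual):

-- Compare the block size of the two databases.
select value
  from v$parameter
 where name = 'db_block_size';

-- Compare the database character sets.
select value
  from nls_database_parameters
 where parameter = 'NLS_CHARACTERSET';

If the analyzing database's block size is at least as large, and the character sets match, you are ready to move the archived redo log file over.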

Using LogMiner is a two-step process. Step one involves creating a data dictionary for LogMiner to operate with. This is what allows a redo log file from one database to be analyzed on another - LogMiner does not use the existing data dictionary, it uses the data dictionary that was exported to an external file by the DBMS_LOGMNR_D package. LogMiner can be used without this data dictionary, but you will find the resulting output virtually unreadable. We'll take a look at what this would look like later.

Step two involves importing the redo log files and starting LogMiner. Once LogMiner is started, you can review the contents of the redo log files using SQL. There are four V$ views associated with LogMiner. The main view is V$LOGMNR_CONTENTS. This is the view you will use to review the contents of the redo log files you have loaded. We will take a look at this view in more detail in the example and at the end of this section we have a table that defines each column. The other three views are:

  • V$LOGMNR_DICTIONARY - This view contains information about the dictionary file that has been loaded. This is the dictionary you created in step one. In order to make sense of the contents of a redo log file, we need to have a dictionary file that tells us what object name goes with what object ID, what the columns and data types of each table are, and so on. This view contains at most one row, for the currently loaded dictionary only.

  • V$LOGMNR_LOGS - This view contains information about the redo log files you have requested LogMiner to load into the system. The contents of these redo log files will be found in V$LOGMNR_CONTENTS. This view tells you about the redo log file itself. Attributes such as the name of the redo log file, the database name of the database it came from, the SCNs (system change numbers) contained in it and so on, are found here. This view will have an entry per log file you are analyzing.

  • V$LOGMNR_PARAMETERS - This view shows the parameters that were passed to LogMiner during its startup. This view will have one entry after you call the startup routine for LogMiner.

An important point to note here is that because LogMiner's memory allocation comes from the PGA, LogMiner cannot be used in an MTS environment. This is because with MTS, you will be assigned to a different shared server (process or thread) each time you make a request into the database. The data you loaded into Process One (Shared Server One) is simply not available to Process Two (Shared Server Two). You must be using a dedicated server configuration for LogMiner to function. Also, the output is only visible in a single session, and only for the life of that session. If further analysis is needed, you must either reload the information, or make it permanent, perhaps using a CREATE TABLE AS SELECT. If you are analyzing a large amount of data, making the data permanent via a CREATE TABLE AS SELECT or INSERT INTO makes even more sense. You would then be able to index this information whereas with the V$LOGMNR_CONTENTS table, you will always be performing a full scan of a V$ table, since it has no indexes. This full scanning of a V$ table can be quite resource-intensive.
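For example, a minimal sketch of making the contents permanent and indexable might look like the following (the table and index names here are just made-up placeholders):

-- Copy the transient V$ data into a real table we can keep across sessions.
create table my_logmnr_contents
as
select * from v$logmnr_contents;

-- Index the columns we expect to search on repeatedly.
create index my_logmnr_idx
    on my_logmnr_contents( seg_owner, seg_name, operation );

Once the data is in a regular table, any session can query it, and the full scans of the V$ view are paid for only once.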

Overview

What we'll do now is present an overview of how to use the LogMiner facility. After that, we'll look at all of the inputs to the two LogMiner supplied packages, and what they mean. Then, we will investigate using LogMiner to find out when some operation took place in the database. After that, we'll take a quick look at how LogMiner affects your session's memory usage, and how it caches the redo log files internally. Lastly, we'll look at some of the limitations of LogMiner that are not mentioned in the documentation.

Step 1: Creating the Data Dictionary

In order for LogMiner to map internal object IDs and columns to their appropriate tables, it needs a data dictionary. It will not use the data dictionary already present in the database. Rather, it relies on an external file to provide the data dictionary. LogMiner works this way in order to allow redo log files from one database to be analyzed in a different one. Additionally, the data dictionary that is current today in your database may not support all of the objects that were in the database when the redo log file was generated, hence the need to be able to import a data dictionary.

To see the purpose of this data dictionary file, we'll look at some output from LogMiner without having a data dictionary loaded. We'll do this by loading an archived redo log file and starting LogMiner. Then we'll run a quick query against V$LOGMNR_CONTENTS to see what is there:

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.add_logfile
  3     ( 'C:\oracle\oradata\tkyte816\archive\TKYTE816T001S01263.ARC',
  4        sys.dbms_logmnr.NEW );
  5  end;
  6  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.start_logmnr;
  3  end;
  4  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> column sql_redo format a30
tkyte@TKYTE816> column sql_undo format a30
tkyte@TKYTE816> select scn, sql_redo, sql_undo from v$logmnr_contents
  2  /

       SCN SQL_REDO                       SQL_UNDO
---------- ------------------------------ ------------------------------
6.4430E+12
6.4430E+12 set transaction read write;
6.4430E+12 update UNKNOWN.Objn:30551 set  update UNKNOWN.Objn:30551 set
           Col[2] = HEXTORAW('787878') wh Col[2] = HEXTORAW('534d495448'
           ere ROWID = 'AAAHdXAAGAAAAJKAA ) where ROWID = 'AAAHdXAAGAAAA
           A';                            JKAAA';

6.4430E+12
6.4430E+12 commit;

tkyte@TKYTE816> select utl_raw.cast_to_varchar2(hextoraw('787878')) from dual;

UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('787878'))
------------------------------------------------------------------
xxx

tkyte@TKYTE816> select utl_raw.cast_to_varchar2(hextoraw('534d495448')) from dual;

UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('534D495448'))
---------------------------------------------------------------------
SMITH

This output is fairly unreadable. We know that object number 30551 was updated and column 2 was modified. Further, we can turn the HEXTORAW('787878') into a character string. We could go to the data dictionary and find out that object 30551 is:

tkyte@TKYTE816> select object_name
  2  from all_objects
  3  where data_object_id = 30551;

OBJECT_NAME
------------------------------
EMP

but only if we are in the same database in which the redo log file was originally generated, and only if that object still exists. Further, we could DESCRIBE EMP and discover that column 2 is ENAME. Therefore, the SQL_REDO column from LogMiner is really UPDATE EMP SET ENAME = 'XXX' WHERE ROWID = .... Fortunately, we do not need to go through this laborious conversion each and every time we use LogMiner. We'll find that by building and then loading a dictionary, we'll get much better results. The following example shows what output we could expect if we build a dictionary file for LogMiner to work with, and then load it.

We start by creating the dictionary file. Creating this data dictionary file is rather straightforward. The prerequisites for doing this are:

  • UTL_FILE has been configured in your init.ora file so that there is at least one directory that can be written to. See the section on UTL_FILE for information on setting this up. DBMS_LOGMNR_D, the package that builds the data dictionary file, relies on UTL_FILE to perform I/O.

  • The schema that will execute the DBMS_LOGMNR_D package has been granted EXECUTE ON SYS.DBMS_LOGMNR_D, or has a role that is able to execute this package. By default, the EXECUTE_CATALOG_ROLE has the privilege to run this package.

Once you have UTL_FILE set up and EXECUTE ON DBMS_LOGMNR_D, creating the data dictionary file is trivial. There is only one call inside of DBMS_LOGMNR_D, and this is called BUILD. You would simply execute something along the lines of:

tkyte@TKYTE816> set serveroutput on

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr_d.build( 'miner_dictionary.dat',
  3                              'c:\temp' );
  4  end;
  5  /
LogMnr Dictionary Procedure started
LogMnr Dictionary File Opened
TABLE: OBJ$ recorded in LogMnr Dictionary File
TABLE: TAB$ recorded in LogMnr Dictionary File
TABLE: COL$ recorded in LogMnr Dictionary File
TABLE: SEG$ recorded in LogMnr Dictionary File
TABLE: UNDO$ recorded in LogMnr Dictionary File
TABLE: UGROUP$ recorded in LogMnr Dictionary File
TABLE: TS$ recorded in LogMnr Dictionary File
TABLE: CLU$ recorded in LogMnr Dictionary File
TABLE: IND$ recorded in LogMnr Dictionary File
TABLE: ICOL$ recorded in LogMnr Dictionary File
TABLE: LOB$ recorded in LogMnr Dictionary File
TABLE: USER$ recorded in LogMnr Dictionary File
TABLE: FILE$ recorded in LogMnr Dictionary File
TABLE: PARTOBJ$ recorded in LogMnr Dictionary File
TABLE: PARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABPART$ recorded in LogMnr Dictionary File
TABLE: INDPART$ recorded in LogMnr Dictionary File
TABLE: SUBPARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABSUBPART$ recorded in LogMnr Dictionary File
TABLE: INDSUBPART$ recorded in LogMnr Dictionary File
TABLE: TABCOMPART$ recorded in LogMnr Dictionary File
TABLE: INDCOMPART$ recorded in LogMnr Dictionary File
Procedure executed successfully - LogMnr Dictionary Created

PL/SQL procedure successfully completed.

It is recommended that you issue a SET SERVEROUTPUT ON prior to executing DBMS_LOGMNR_D, as this will allow informational messages from DBMS_LOGMNR_D to be printed. This can be extremely useful when trying to diagnose an error from DBMS_LOGMNR_D. What the above command did was to create a file C:\TEMP\MINER_DICTIONARY.DAT. This is a plain text, ASCII file that you may edit to see what is in there. This file consists of a lot of SQL-like statements that are parsed and executed by the LogMiner start routine. Now that we have a dictionary file on hand, we are ready to see what the contents of V$LOGMNR_CONTENTS might look like now:

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.add_logfile
  3     ( 'C:\oracle\oradata\tkyte816\archive\TKYTE816T001S01263.ARC',
  4        sys.dbms_logmnr.NEW );
  5  end;
  6  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.start_logmnr
  3     ( dictFileName => 'c:\temp\miner_dictionary.dat' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> column sql_redo format a30
tkyte@TKYTE816> column sql_undo format a30
tkyte@TKYTE816> select scn, sql_redo, sql_undo from v$logmnr_contents
  2  /

       SCN SQL_REDO                       SQL_UNDO
---------- ------------------------------ ------------------------------
6.4430E+12
6.4430E+12 set transaction read write;
6.4430E+12 update TKYTE.EMP set ENAME = ' update TKYTE.EMP set ENAME = '
           xxx' where ROWID = 'AAAHdXAAGA SMITH' where ROWID = 'AAAHdXAA
           AAAJKAAA';                     GAAAAJKAAA';

6.4430E+12
6.4430E+12 commit;

Now that's more like it - we can actually read the SQL that LogMiner generates for us, which would 'replay' (or undo) the transaction we are looking at. Now we are ready to go into Step 2 - Using LogMiner.

Step 2: Using LogMiner

Here, we will take the dictionary file we just generated, and use it to review the contents of some archived redo log files. Before we load a redo log file, we will generate one with some known transactions in it. For the first time around, this will make it easier to see what we have. We'll be able to correlate what we find in the V$LOGMNR_CONTENTS view with what we just did. For this to work, it is important to have a 'test' database, one where you can ensure you are the only one logged in. This allows us to artificially constrain just exactly what gets put into the redo log. Also, it would be necessary to have the ALTER SYSTEM privilege in this database so we can force a log file archive. Lastly, this is easiest to do if the database is in archive log mode with automatic archiving. In this fashion, finding the redo log file is trivial (it will be the one just archived - we'll see below how to find this). If you are using a NOARCHIVELOG mode database, you will need to find the active log, and determine which log file was active just prior to it. So, to generate our sample transaction we could:

tkyte@TKYTE816> alter system archive log current;

System altered.

tkyte@TKYTE816> update emp set ename = lower(ename);

14 rows updated.

tkyte@TKYTE816> update dept set dname = lower(dname);

4 rows updated.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> alter system archive log current;

System altered.

tkyte@TKYTE816> column name format a80
tkyte@TKYTE816> select name
  2    from v$archived_log
  3   where completion_time = ( select max(completion_time)
  4                               from v$archived_log )
  5  /

NAME
--------------------------------------------------------------
C:\ORACLE\ORADATA\TKYTE816\ARCHIVE\TKYTE816T001S01267.ARC
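As an aside, if you were running in NOARCHIVELOG mode instead, you would need to identify the online redo log group that was active just before the current one. A rough sketch of locating the candidate files, using the standard V$LOG and V$LOGFILE views, might be:

-- Which online log group is CURRENT, and which one was written just before it?
select group#, sequence#, status
  from v$log;

-- Where do the members of each group live on disk?
select group#, member
  from v$logfile;

You would then pass a member of the previously active (now INACTIVE) group to ADD_LOGFILE, instead of an archived redo log file name.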

Now, given that we were the only user logged in doing work, the archive redo log we just generated will have our two updates in it and nothing else. This last query against V$ARCHIVED_LOG shows us the name of the archive redo log file we actually want to analyze. We can load this into LogMiner and get started by using the following SQL. It will add the last archive redo log file to the LogMiner list, and then start LogMiner:

tkyte@TKYTE816> declare
  2      l_name v$archived_log.name%type;
  3  begin
  4
  5      select name into l_name
  6        from v$archived_log
  7       where completion_time = ( select max(completion_time)
  8                                   from v$archived_log );
  9
 10      sys.dbms_logmnr.add_logfile( l_name, sys.dbms_logmnr.NEW );
 11  end;
 12  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.start_logmnr
  3     ( dictFileName => 'c:\temp\miner_dictionary.dat' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

The first call, to DBMS_LOGMNR.ADD_LOGFILE, loaded an archive redo log file into LogMiner. I passed in the name of the archived redo log file, as well as the option of DBMS_LOGMNR.NEW. Since this is the first log file I am adding in, I used DBMS_LOGMNR.NEW. The other options are ADDFILE to add another log file to an existing list of files and REMOVEFILE to remove a file from consideration. After we load the log files we are interested in, we can call DBMS_LOGMNR.START_LOGMNR and tell it the name of the dictionary file we created. We used a minimal call to START_LOGMNR here, passing just the name of the dictionary file. We will look at some of the other options to START_LOGMNR in the next section Options and Usage.

Now that we've loaded a log file and started LogMiner, we are ready to take our first look at the contents of V$LOGMNR_CONTENTS. V$LOGMNR_CONTENTS has lots of information in it and for now, we'll look at a very small slice of the data available. Specifically, we'll investigate the SCN, SQL_REDO, and SQL_UNDO columns. In case you are not familiar with it, the SCN is a simple timing mechanism that Oracle uses to guarantee ordering of transactions, and to enable recovery from failure. It is also used to guarantee read-consistency and checkpointing in the database. Think of the SCN as a ticker - every time someone commits, the SCN is incremented by one. Here is a sample query from our example above where we lowercased the names in the EMP and DEPT tables:

tkyte@TKYTE816> column sql_redo format a20 word_wrapped
tkyte@TKYTE816> column sql_undo format a20 word_wrapped

tkyte@TKYTE816> select scn, sql_redo, sql_undo from v$logmnr_contents
  2  /

       SCN SQL_REDO             SQL_UNDO
---------- -------------------- --------------------
6.4430E+12 set transaction read
           write;

6.4430E+12 update TKYTE.EMP set update TKYTE.EMP set
           ENAME = 'smith'      ENAME = 'SMITH'
           where ROWID =        where ROWID =
           'AAAHdYAAGAAAAJKAAA' 'AAAHdYAAGAAAAJKAAA'
           ;                    ;

6.4430E+12
6.4430E+12 update TKYTE.EMP set update TKYTE.EMP set
           ENAME = 'allen'      ENAME = 'ALLEN'
           where ROWID =        where ROWID =
           'AAAHdYAAGAAAAJKAAB' 'AAAHdYAAGAAAAJKAAB'
           ;                    ;

...(many similar rows snipped out)...

6.4430E+12 update TKYTE.DEPT    update TKYTE.DEPT
           set DNAME = 'sales'  set DNAME = 'SALES'
           where ROWID =        where ROWID =
           'AAAHdZAAGAAAAKKAAC' 'AAAHdZAAGAAAAKKAAC'
           ;                    ;

6.4430E+12 update TKYTE.DEPT    update TKYTE.DEPT
           set DNAME =          set DNAME =
           'operations' where   'OPERATIONS' where
           ROWID =              ROWID =
           'AAAHdZAAGAAAAKKAAD' 'AAAHdZAAGAAAAKKAAD'
           ;                    ;

6.4430E+12 commit;

22 rows selected.

As you can see, our two SQL statements generated many more than two SQL statements from the redo log. The redo log contains the bits and bytes that we changed - not SQL. Therefore our multi-row statement UPDATE EMP SET ENAME = LOWER(ENAME) is presented by LogMiner as a series of single row updates. LogMiner currently cannot be used to retrieve the actual SQL performed at run-time. It can only reproduce equivalent SQL, SQL that does the same thing but in many individual statements.
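This row-by-row presentation is actually handy for the 'empirical counts of rows modified in an average transaction' use mentioned at the start of this appendix. Since every row in V$LOGMNR_CONTENTS carries the transaction identifier columns (XIDUSN, XIDSLOT, and XIDSQN, described at the end of this section), a rough sketch of such a count might be:

-- Approximate row changes per transaction in the loaded redo.
select xidusn, xidslot, xidsqn,
       sum( decode( operation, 'INSERT', 1, 0 ) ) inserts,
       sum( decode( operation, 'UPDATE', 1, 0 ) ) updates,
       sum( decode( operation, 'DELETE', 1, 0 ) ) deletes
  from v$logmnr_contents
 group by xidusn, xidslot, xidsqn
/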

Now, we'll go one step further with this example. The V$LOGMNR_CONTENTS view has 'placeholder' columns. These placeholder columns are useful for finding particular updates for up to five columns in your table. The placeholder columns can tell us the name of the changed column, and show us the 'before' and 'after' values of that column. Since these columns are broken out from the SQL, it would be very easy to find the transaction such that the ENAME column was updated (the name placeholder column would have ENAME) from KING (the before image placeholder column would have KING in it) to king. We'll do another quick UPDATE example, and set up the necessary column mapping file to demonstrate this. The column mapping file (colmap for short) is used to tell LogMiner which columns are of interest to you by table. We can map up to five columns per table into these placeholder columns. The format of a colmap file is simply:

colmap = TKYTE DEPT (1, DEPTNO, 2, DNAME, 3, LOC);
colmap = TKYTE EMP (1, EMPNO, 2, ENAME, 3, JOB, 4, MGR, 5, HIREDATE);

This will map the DEPT DEPTNO column to the first placeholder column when we are looking at a row for the DEPT table. It will map the EMP EMPNO column to this placeholder column when we are looking at an EMP row.

The column mapping file in general has lines that consist of the following (items in bold are constants, and <sp> represents a single, mandatory space):

colmap<sp>=<sp>OWNER<sp>TABLE_NAME<sp>(1,<sp>CNAME[,<sp>2,<sp>CNAME]...); 

The case of everything is important - the OWNER must be uppercase, the table name must be the 'correct' case (uppercase is usually the correct case unless you've used quoted identifiers to create objects). The spaces are mandatory as well. In order to make using the column mapping file a little easier, I use a script such as:

set linesize 500
set trimspool on
set feedback off
set heading off
set embedded on
spool logmnr.opt
select
   'colmap = ' || user || ' ' || table_name || ' (' ||
   max( decode( column_id, 1,       column_id  , null ) ) ||
   max( decode( column_id, 1, ', '||column_name, null ) ) ||
   max( decode( column_id, 2, ', '||column_id  , null ) ) ||
   max( decode( column_id, 2, ', '||column_name, null ) ) ||
   max( decode( column_id, 3, ', '||column_id  , null ) ) ||
   max( decode( column_id, 3, ', '||column_name, null ) ) ||
   max( decode( column_id, 4, ', '||column_id  , null ) ) ||
   max( decode( column_id, 4, ', '||column_name, null ) ) ||
   max( decode( column_id, 5, ', '||column_id  , null ) ) ||
   max( decode( column_id, 5, ', '||column_name, null ) ) ||
   ');' colmap
  from user_tab_columns
 group by user, table_name
/
spool off

in SQL*PLUS to generate the logmnr.opt file for me. For example, if I execute this script in a schema that contains only the EMP and DEPT tables from the SCOTT/TIGER account, I will see:

tkyte@TKYTE816> @colmap
colmap = TKYTE DEPT (1, DEPTNO, 2, DNAME, 3, LOC);
colmap = TKYTE EMP (1, EMPNO, 2, ENAME, 3, JOB, 4, MGR, 5, HIREDATE);

It always picks the first five columns of the table. If you desire a different set of five columns, just edit the resulting logmnr.opt file this creates, and change the column names. For example, the EMP table has three more columns that are not shown in the above colmap - SAL, COMM, and DEPTNO. If you wanted to see the SAL column instead of the JOB column, the colmap would simply be:

tkyte@TKYTE816> @colmap
colmap = TKYTE DEPT (1, DEPTNO, 2, DNAME, 3, LOC);
colmap = TKYTE EMP (1, EMPNO, 2, ENAME, 3, SAL, 4, MGR, 5, HIREDATE);

Important considerations for the colmap file, beyond its need to have the correct case and whitespace, are that the file must be named logmnr.opt and that it must reside in the same directory as your dictionary file - we'll see this when we start LogMiner below.

So, we will now modify all of the columns in the DEPT table. I am using four different UPDATEs, each against a different row and set of columns. This is so we'll see better the effect of the placeholder columns:

tkyte@TKYTE816> alter system archive log current;

tkyte@TKYTE816> update dept set deptno = 11
  2   where deptno = 40
  3  /

tkyte@TKYTE816> update dept set dname = initcap(dname)
  2   where deptno = 10
  3  /

tkyte@TKYTE816> update dept set loc = initcap(loc)
  2   where deptno = 20
  3  /

tkyte@TKYTE816> update dept set dname = initcap(dname),
  2                    loc = initcap(loc)
  3   where deptno = 30
  4  /

tkyte@TKYTE816> commit;

tkyte@TKYTE816> alter system archive log current;

We can review the column-by-column changes now by loading the newly generated archived redo log file and starting LogMiner with the option USE_COLMAP. Note that I did generate the logmnr.opt file using the script above, and I placed that file in the same directory as my dictionary file:

tkyte@TKYTE816> declare
  2      l_name v$archived_log.name%type;
  3  begin
  4
  5      select name into l_name
  6        from v$archived_log
  7       where completion_time = ( select max(completion_time)
  8                                   from v$archived_log );
  9
 10      sys.dbms_logmnr.add_logfile( l_name, sys.dbms_logmnr.NEW );
 11  end;
 12  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.start_logmnr
  3     ( dictFileName => 'c:\temp\miner_dictionary.dat',
  4       options => sys.dbms_logmnr.USE_COLMAP );
  5  end;
  6  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select scn, ph1_name, ph1_undo, ph1_redo,
  2              ph2_name, ph2_undo, ph2_redo,
  3              ph3_name, ph3_undo, ph3_redo
  4    from v$logmnr_contents
  5   where seg_name = 'DEPT'
  6  /

       SCN PH1_NA PH1 PH1 PH2_N PH2_UNDO   PH2_REDO   PH3 PH3_UNDO PH3_REDO
---------- ------ --- --- ----- ---------- ---------- --- -------- --------
6.4430E+12 DEPTNO 40  11
6.4430E+12                DNAME accounting Accounting
6.4430E+12                                            LOC DALLAS   Dallas
6.4430E+12                DNAME sales      Sales      LOC CHICAGO  Chicago

So, this output clearly shows us (from line 1 for example) that DEPTNO had a before image value of 40 (PH1) and became 11. This makes sense, since we did SET DEPTNO = 11 WHERE DEPTNO = 40. Notice that the remaining columns in that first row of output are Null. This is because Oracle logs changed bytes only; there is no before/after image of the DNAME and LOC columns for that row. The second row shows the update of the DNAME column from accounting to Accounting, and no changes for DEPTNO or LOC, as these columns were not affected. The last row shows that when we modified two columns with our UPDATE statement, they both show up in the placeholder columns.

As you can see, using the placeholder columns can be very convenient if you are trying to locate a specific transaction in a large set of redo. If you know that the transaction updated the X table, and changed the Y column from a to b, finding this transaction will be a breeze.
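For example, with the colmap above (which maps ENAME to the second placeholder), a sketch of locating the transaction that changed ENAME from KING to king might be as simple as:

-- Find the change where ENAME went from 'KING' to 'king'.
select scn, xidusn, xidslot, xidsqn, sql_redo, sql_undo
  from v$logmnr_contents
 where seg_name = 'EMP'
   and ph2_name = 'ENAME'
   and ph2_undo = 'KING'
   and ph2_redo = 'king'
/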

Options and Usage

There are two packages that implement the LogMiner functionality - DBMS_LOGMNR and DBMS_LOGMNR_D. The DBMS_LOGMNR_D (the _D stands for 'dictionary') package has exactly one procedure in it, which is BUILD. This is used to build the data dictionary used by the DBMS_LOGMNR package when loading a redo log file. It will map object IDs to table names, determine data types, map column positions to column names, and so on. Using the DBMS_LOGMNR_D.BUILD routine is very straightforward. It takes two parameters:

  • DICTIONARY_FILENAME - The name of the dictionary file to create, for example miner_dictionary.dat.

  • DICTIONARY_LOCATION - The directory in which that file will be created. The directory must be writable by UTL_FILE (that is, it must appear in the utl_file_dir init.ora setting).

That's it for BUILD. Neither parameter is optional, so both must be supplied. If you receive an error from this routine similar to the following:

tkyte@TKYTE816> exec sys.dbms_logmnr_d.build( 'x.dat', 'c:\not_valid\' );
BEGIN sys.dbms_logmnr_d.build( 'x.dat', 'c:\not_valid\' ); END;

*
ERROR at line 1:
ORA-01309: specified dictionary file cannot be opened
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 1

it will mean that the directory you are attempting to use is not set in the utl_file_dir init.ora parameter.
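For reference, a minimal init.ora entry that would make the earlier examples work is sketched below (assuming you want the dictionary file written to c:\temp, and remembering that the instance must be restarted for this parameter to take effect):

# init.ora - allow UTL_FILE (and hence DBMS_LOGMNR_D.BUILD) to write here
utl_file_dir = c:\temp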

The DBMS_LOGMNR package itself has only three routines:

  • ADD_LOGFILE - To register the set of redo log files to be analyzed.

  • START_LOGMNR - To populate the V$LOGMNR_CONTENTS view from those files.

  • END_LOGMNR - To end the LogMiner session and release its resources.

The ADD_LOGFILE routine, as we have seen above, is called before actually starting LogMiner. It simply builds the list of log files that START_LOGMNR will process and populate into the V$LOGMNR_CONTENTS view for us. The inputs to ADD_LOGFILE are:

  • LOGFILENAME - The fully qualified file name of the redo log file to add to the list.

  • OPTIONS - How to apply this file to the list. We use the constants DBMS_LOGMNR.NEW to start a brand new list, DBMS_LOGMNR.ADDFILE to add a file to an existing list, and DBMS_LOGMNR.REMOVEFILE to remove a file from the list.

If we wanted to analyze the last two archive redo log files, we would call ADD_LOGFILE twice. For example:

tkyte@TKYTE816> declare
  2      l_cnt  number default 0;
  3  begin
  4      for x in (select name
  5                  from v$archived_log
  6                 order by completion_time desc )
  7      loop
  8          l_cnt := l_cnt+1;
  9          exit when ( l_cnt > 2 );
 10
 11          sys.dbms_logmnr.add_logfile( x.name );
 12      end loop;
 13
 14      sys.dbms_logmnr.start_logmnr
 15      ( dictFileName => 'c:\temp\miner_dictionary.dat',
 16        options => sys.dbms_logmnr.USE_COLMAP );
 17  end;
 18  /

PL/SQL procedure successfully completed.

Within that same session, after we've started LogMiner, we may call ADD_LOGFILE to add more log files, remove uninteresting ones, or if you use DBMS_LOGMNR.NEW, reset the list of log files to be just that one new file. Calling DBMS_LOGMNR.START_LOGMNR after making changes to the list will effectively flush the V$LOGMNR_CONTENTS view, and repopulate it with the contents of the log files that are on the list now.
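A sketch of doing just that - adding one more file to the existing list and refreshing the view - might look like this (the second archive file name here is made up, simply following the naming pattern used earlier):

begin
    -- Add another archived redo log file to the list we built earlier.
    sys.dbms_logmnr.add_logfile
    ( 'C:\oracle\oradata\tkyte816\archive\TKYTE816T001S01264.ARC',
      sys.dbms_logmnr.ADDFILE );

    -- Restart LogMiner; V$LOGMNR_CONTENTS is flushed and then repopulated
    -- from every file currently on the list.
    sys.dbms_logmnr.start_logmnr
    ( dictFileName => 'c:\temp\miner_dictionary.dat' );
end;
/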

Moving on to DBMS_LOGMNR.START_LOGMNR, we see that we have many inputs. In the above examples, we have only been using two out of the six available to us. We've used the dictionary file name and the options (to specify we wanted to use a colmap file). The available inputs in full are:

  • STARTSCN and ENDSCN - Restrict the rows placed into V$LOGMNR_CONTENTS to those whose SCN falls in this range; by default no SCN filtering is performed.

  • STARTTIME and ENDTIME - Restrict the rows to a date/time window instead of an SCN range.

  • DICTFILENAME - The fully qualified name of the dictionary file created by DBMS_LOGMNR_D.BUILD.

  • OPTIONS - A set of flags; the one we have used is DBMS_LOGMNR.USE_COLMAP, which tells LogMiner to read the logmnr.opt column mapping file from the same directory as the dictionary file.
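For example, a sketch of restricting the analysis to a one-hour window (the dates here are made up purely for illustration) might be:

begin
    sys.dbms_logmnr.start_logmnr
    ( startTime    => to_date( '29-apr-2001 12:00:00',
                               'dd-mon-yyyy hh24:mi:ss' ),
      endTime      => to_date( '29-apr-2001 13:00:00',
                               'dd-mon-yyyy hh24:mi:ss' ),
      dictFileName => 'c:\temp\miner_dictionary.dat' );
end;
/

Only redo records falling inside the window will appear in V$LOGMNR_CONTENTS, which can considerably cut down the amount of data you have to wade through.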

The last procedure in DBMS_LOGMNR is simply the DBMS_LOGMNR.END_LOGMNR routine. This terminates the LogMiner session, and empties out the V$LOGMNR_CONTENTS view. After you call DBMS_LOGMNR.END_LOGMNR, any attempt to query the view will result in:

tkyte@TKYTE816> exec dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select count(*) from v$logmnr_contents;
select count(*) from v$logmnr_contents
                     *
ERROR at line 1:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from
v$logmnr_contents

Using LogMiner to Find Out When...

This is one of the more common uses of LogMiner I've seen. Someone 'accidentally' dropped a table. You would like to get it back, or just find out who did it. Or maybe someone updated an important table and you don't know whom, but no one will own up to it. In any case, something happened, you do not have auditing enabled, but you have been running in archive log mode, and have the backups. You would like to restore your backups and do a point-in-time recovery to the point in time immediately prior to the DROP TABLE. In this fashion, you can restore and recover that table, stop recovery (hence not dropping the table again), and then export this table from the restored database and import it into the correct database. This allows you to restore the table with all of the changes intact.

In order to do this, we'll need to know either the exact time or the SCN of the DROP TABLE. Since clocks are out of sync, and people are probably panicked, they may give bad information in this case. What we can do is load up the archived log files from around the time of the DROP TABLE, and find the exact SCN of the point we want to recover.

We'll do another quick example here to isolate the statements you might see in LogMiner when a table is dropped. I'm using locally managed tablespaces, so if you are using dictionary-managed tablespaces, you may see more SQL than I do below. This extra SQL you see with a dictionary-managed tablespace will be the SQL executed to return the extents back to the system, and free up the space allocated to the table. So, here we go to drop the table:

tkyte@TKYTE816> alter system archive log current;

System altered.

tkyte@TKYTE816> drop table dept;

Table dropped.

tkyte@TKYTE816> alter system archive log current;

System altered.

Now, we want to locate the SQL_REDO that represents the DROP TABLE. If you recall, the actual SQL executed at run-time is not reported by LogMiner. Rather, the equivalent SQL is reported. We will not see a DROP TABLE statement - we will see data dictionary modifications. The one we are looking for will be a DELETE against SYS.OBJ$, which is the base table for holding all objects. Part of dropping a table involves deleting a row from SYS.OBJ$. Fortunately, when LogMiner builds the SQL_REDO to process a DELETE, it includes the column values in the SQL along with the row ID. We can use this fact to search for the DELETE of DEPT from OBJ$. Here is how:

tkyte@TKYTE816> declare
  2      l_name v$archived_log.name%type;
  3  begin
  4      select name into l_name
  5        from v$archived_log
  6       where completion_time = ( select max(completion_time)
  7                                   from v$archived_log );
  8
  9      sys.dbms_logmnr.add_logfile( l_name, sys.dbms_logmnr.NEW );
 10  end;
 11  /
PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.start_logmnr
  3     ( dictFileName => 'c:\temp\miner_dictionary.dat',
  4       options => sys.dbms_logmnr.USE_COLMAP );
  5  end;
  6  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select scn, sql_redo
  2    from v$logmnr_contents
  3   where sql_redo like 'delete from SYS.OBJ$ %''DEPT''%'
  4  /

              SCN SQL_REDO
----------------- ----------------------------------------
    6442991097246 delete from SYS.OBJ$ where OBJ# = 30553
                  and DATAOBJ# = 30553 and OWNER# = 337 an
                  d NAME = 'DEPT' and NAMESPACE = 1 and SU
                  BNAME IS NULL and TYPE# = 2 and CTIME =
                  TO_DATE('29-APR-2001 12:32:11', 'DD-MON-
                  YYYY HH24:MI:SS') and MTIME = TO_DATE('2
                  9-APR-2001 12:32:11', 'DD-MON-YYYY HH24:
                  MI:SS') and STIME = TO_DATE('29-APR-2001
                   12:32:11', 'DD-MON-YYYY HH24:MI:SS') an
                  d STATUS = 1 and REMOTEOWNER IS NULL and
                   LINKNAME IS NULL and FLAGS = 0 and OID$
                   IS NULL and SPARE1 = 6 and ROWID = 'AAA
                  AASAABAAAFz3AAZ';

That is all there is to it. Now that we have found that the SCN was 6442991097246, we can do a point in time recovery elsewhere to recover this table, and restore it to our system. We can recover it to the very point in time immediately prior to it being dropped.
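Purely as a sketch (the full restore and recover procedure is in the backup and recovery documentation), the recovery step on the restored copy of the database might look like:

-- On the RESTORED copy of the database, never on production:
recover database until change 6442991097246;
alter database open resetlogs;

-- Then export the recovered DEPT table and import it into production.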

PGA Usage

LogMiner uses PGA memory in order to perform its task. We have mentioned previously that this implies you cannot use DBMS_LOGMNR with MTS. What we haven't looked at is how much PGA memory LogMiner might actually use.

The log files on my system are 100 MB each. I loaded up two of them for analysis, measuring the before and after PGA memory use:

tkyte@TKYTE816> select a.name, b.value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and lower(a.name) like '%pga%'
  5  /

NAME                                VALUE
------------------------------ ----------
session pga memory                 454768
session pga memory max             454768

tkyte@TKYTE816> declare
  2          l_name varchar2(255) default
  3                'C:\oracle\ORADATA\tkyte816\archive\TKYTE816T001S012';
  4  begin
  5          for i in 49 .. 50
  6      loop
  7          sys.dbms_logmnr.add_logfile( l_name || i || '.ARC' );
  8      end loop;
  9
 10      sys.dbms_logmnr.start_logmnr
 11      ( dictFileName => 'c:\temp\miner_dictionary.dat',
 12        options => sys.dbms_logmnr.USE_COLMAP );
 13  end;
 14  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select a.name, b.value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and lower(a.name) like '%pga%'
  5  /

NAME                                VALUE
------------------------------ ----------
session pga memory               11748180
session pga memory max           11748180

So, 200 MB of archived redo log is currently taking about 11.5 MB of PGA memory. This means that either the archived redo was mostly 'fluff', or Oracle doesn't actually cache the entire redo log file in RAM. The answer is that Oracle doesn't cache the entire redo log file in RAM; rather, it reads it from disk as needed, and only some information is cached in RAM.

If we actually query the V$LOGMNR_CONTENTS view right now and measure the amount of PGA memory in use after this operation, we'll see that the memory requirements for this will go up as we access it:

tkyte@TKYTE816> create table tmp_logmnr_contents unrecoverable
  2  as
  3  select * from v$logmnr_contents
  4  /

Table created.

tkyte@TKYTE816> select a.name, b.value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and lower(a.name) like '%pga%'
  5  /

NAME                                VALUE
------------------------------ ----------
session pga memory               19965696
session pga memory max           19965696

So, as you can see, our session now needs almost 20 MB of PGA memory.

LogMiner Limits

LogMiner has some serious limits that you need to be aware of. These limits relate to the use of Oracle object types and chained rows.

Oracle Object Types

Object types are somewhat supported by LogMiner. LogMiner is not able to rebuild the SQL you would typically use to access object types, and it is not able to support all object types. A quick example best demonstrates some of the limitations in this area. We'll start with a small schema with some common object types such as VARRAYs and nested tables in it:

tkyte@TKYTE816> create or replace type myScalarType
  2  as object
  3  ( x int, y date, z varchar2(25) );
  4  /

Type created.

tkyte@TKYTE816> create or replace type myArrayType
  2  as varray(25) of myScalarType
  3  /

Type created.

tkyte@TKYTE816> create or replace type myTableType
  2  as table of myScalarType
  3  /

Type created.

tkyte@TKYTE816> drop table t;

Table dropped.

tkyte@TKYTE816> create table t ( a int, b myArrayType, c myTableType )
  2  nested table c store as c_tbl
  3  /

Table created.

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr_d.build( 'miner_dictionary.dat',
  3                              'c:\temp' );
  4  end;
  5  /
PL/SQL procedure successfully completed.

tkyte@TKYTE816> alter system switch logfile;

System altered.

tkyte@TKYTE816> insert into t values ( 1,
  2                  myArrayType( myScalarType( 2, sysdate, 'hello' ) ),
  3                  myTableType( myScalarType( 3, sysdate+1, 'GoodBye' ) )
  4                                   );

1 row created.

tkyte@TKYTE816> alter system switch logfile;

System altered.

So, in the above example, we created some object types, added a table that utilizes these types, re-exported our data dictionary, and then finally did some isolated DML on this object. Now we are ready to see what LogMiner is able to tell us about these operations:

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.add_logfile( 'C:\oracle\rdbms\ARC00028.001',
  3                                   dbms_logmnr.NEW );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.start_logmnr
  3     ( dictFileName => 'c:\temp\miner_dictionary.dat' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select scn, sql_redo, sql_undo
  2    from v$logmnr_contents
  3  /

       SCN SQL_REDO             SQL_UNDO
---------- -------------------- --------------------
    824288
    824288
    824288
    824288 set transaction read
           write;

    824288 insert into          delete from
           TKYTE.C_TBL(NESTED_T TKYTE.C_TBL where
           ABLE_ID,X,Y,Z)       NESTED_TABLE_ID =
           values               HEXTORAW('252cb5fad8
           (HEXTORAW('252cb5fad 784e2ca93eb432c2d35b
           8784e2ca93eb432c2d35 7c') and X = 3 and Y
           b7c'),3,TO_DATE('23- =
           JAN-2001 16:21:44',  TO_DATE('23-JAN-2001
           'DD-MON-YYYY         16:21:44',
           HH24:MI:SS'),'GoodBy 'DD-MON-YYYY
           e');                 HH24:MI:SS') and Z =
                                'GoodBye' and ROWID
                                =
                                'AAAFaqAADAAAAGzAAA'
                                ;

    824288
    824288
    824288
    824288 insert into          delete from TKYTE.T
           TKYTE.T(A,B,SYS_NC00 where A = 1 and B =
           00300004$) values    Unsupported Type and
           (1,Unsupported       SYS_NC0000300004$ =
           Type,HEXTORAW('252cb HEXTORAW('252cb5fad8
           5fad8784e2ca93eb432c 784e2ca93eb432c2d35b
           2d35b7c'));          7c') and ROWID =
                                'AAAFapAADAAAARjAAA'
                                ;

    824288

10 rows selected.

As you can see, our original single INSERT:

tkyte@TKYTE816> insert into t values ( 1,
  2                  myArrayType( myScalarType( 2, sysdate, 'hello' ) ),
  3                  myTableType( myScalarType( 3, sysdate+1, 'GoodBye' ) )
  4                                   );

1 row created.

was turned into two INSERTs: one for the child table (the nested table), and one for the parent table T. LogMiner does not reproduce the single INSERT - equivalent SQL was produced. As we look closer however, we will notice that in the INSERT INTO T, we see Unsupported Type as one of the column values. Looking back at the original INSERT, we can see that the unsupported type is in fact our VARRAY. LogMiner is not capable of reproducing this particular construct.

This does not remove all of the usefulness of LogMiner with regards to objects. It does prevent us from using it to undo or redo transactions, since it cannot faithfully reproduce the necessary SQL. However, we can still use it to analyze historical trends, perform auditing, and the like. Perhaps of more interest is that it gives us the ability to see how Oracle physically implements object types under the covers. For example, look at the insert into T:

insert into tkyte.t ( a, b, SYS_NC0000300004$) values ... 

It is pretty clear to us what A and B are. They are our INT and MyArrayType (VARRAY) columns. However, where is C and what is this SYS_NC0000300004$ column? Well, C is our nested table, and nested tables are actually physically stored as a parent/child table. C is not stored in T; it is stored in a wholly separate table. The column SYS_NC0000300004$ is actually a surrogate primary key on T, and is used as a foreign key in C_TBL - the nested table. If we look at the INSERT into the nested table:

insert into tkyte.c_tbl( nested_table_id, x, y, z ) values ... 

we can see that the NESTED_TABLE_ID was added to our nested table, and this column is, in fact, used to join to the T.SYS_NC0000300004$ column. Further, looking at the value that is put into both of these columns:

HEXTORAW('252cb5fad8784e2ca93eb432c2d35b7c') 

we can see that Oracle is, by default, using a system-generated 16 byte RAW value to join C_TBL with T. Therefore, through LogMiner analysis, we can gain a better understanding of how various features in Oracle are implemented. Here, we have seen how a nested table type is really nothing more than a parent/child table with a surrogate key in the parent table, and a foreign key in the child table.

Chained or Migrated Rows

LogMiner currently does not handle chained or migrated rows. A chained row is a row that spans more than one Oracle block. A migrated row is a row that started on one block when it was inserted, but due to an UPDATE, grew too large to fit on this block with the other rows that were there, and was therefore 'moved' to a new block. A migrated row retains its original row ID - the block it was originally on has a pointer to the new location for the row. A migrated row is a special type of chained row. It is a chained row where no data will be found on the first block, and all of the data will be found on the second block.
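As an aside, if you want to check whether a given table even contains chained or migrated rows before worrying about this limitation, a quick sketch is to analyze the table and inspect CHAIN_CNT:

-- Gather statistics, then look at the chained/migrated row count.
analyze table t compute statistics;

select table_name, chain_cnt
  from user_tables
 where table_name = 'T'
/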

In order to see what LogMiner does with chained rows, we will artificially create one. We'll start with a table that has nine CHAR(2000) columns. I am using an 8 KB block size for my database so if all nine columns have values, they will be 18,000 bytes in size, which is too big to fit on a block. This row will have to be chained onto at least three blocks. The table we'll use to demonstrate this limitation is as follows:

tkyte@TKYTE816> create table t ( x int primary key,
  2                   a char(2000),
  3                   b char(2000),
  4                   c char(2000),
  5                   d char(2000),
  6                   e char(2000),
  7                   f char(2000),
  8                   g char(2000),
  9                   h char(2000),
 10                   i char(2000) );

Table created.

Now, to demonstrate the issue, we'll insert a row into T with a value for only column X and column A. The size of this row will be about 2,000 plus bytes. Since B, C, D, and so on are Null, they will consume no space whatsoever. This row will fit on a block. We will then update this row, and supply values for B, C, D, and E. Since CHARs are always blank padded, this will cause the size of the row to increase from 2,000 plus bytes to 10,000 plus bytes, forcing it to chain onto two blocks. We'll then update every column in the row, growing the row to 18 KB, forcing it to span three blocks. We can then dump the redo with LogMiner, and see what it does with it:

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr_d.build( 'miner_dictionary.dat',
  3                              'c:\temp' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> alter system archive log current;

System altered.

tkyte@TKYTE816> insert into t ( x, a ) values ( 1, 'non-chained' );

1 row created.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> update t set a = 'chained row',
  2               b = 'x', c = 'x',
  3               d = 'x', e = 'x'
  4   where x = 1;

1 row updated.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> update t set a = 'chained row',
  2               b = 'x', c = 'x',
  3               d = 'x', e = 'x',
  4               f = 'x', g = 'x',
  5               h = 'x', i = 'x'
  6   where x = 1;

1 row updated.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> alter system archive log current;

System altered.

Now that we've created the exact case we want to analyze, we can dump it via LogMiner. Don't forget, we need to rebuild our data dictionary file after we create table T, or the output will be unreadable!

tkyte@TKYTE816> declare
  2      l_name v$archived_log.name%type;
  3  begin
  4
  5      select name into l_name
  6        from v$archived_log
  7       where completion_time = ( select max(completion_time)
  8                                   from v$archived_log );
  9
 10      sys.dbms_logmnr.add_logfile( l_name, dbms_logmnr.NEW );
 11  end;
 12  /
PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2     sys.dbms_logmnr.start_logmnr
  3     ( dictFileName => 'c:\temp\miner_dictionary.dat' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select scn, sql_redo, sql_undo
  2    from v$logmnr_contents
  3   where sql_redo is not null or sql_undo is not null
  4  /

             SCN SQL_REDO                    SQL_UNDO
---------------- --------------------------- ---------------------------
   6442991118354 set transaction read write;
   6442991118354 insert into TKYTE.T(X,A) va delete from TKYTE.T where X
                 lues (1,'non-chained        = 1 and A = 'non-chained
                 ');                         ' and ROWID
                                             = 'AAAHdgAAGAAAACKAAA';

   6442991118355 commit;
   6442991118356 set transaction read write;
   6442991118356 Unsupported (Chained Row)   Unsupported (Chained Row)
   6442991118356 Unsupported (Chained Row)   Unsupported (Chained Row)
   6442991118357 commit;
   6442991118358 set transaction read write;
   6442991118358 Unsupported (Chained Row)   Unsupported (Chained Row)
   6442991118358 Unsupported (Chained Row)   Unsupported (Chained Row)
   6442991118358 Unsupported (Chained Row)   Unsupported (Chained Row)
   6442991118359 commit;

12 rows selected.

As you can see, the original INSERT we did was reported by LogMiner as you would expect. The UPDATE however, since it caused the row to be chained, is not reported by LogMiner. Instead it reports Unsupported (Chained Row). It is interesting to note that it reports this twice for our first UPDATE, and three times for the second. LogMiner is reporting changes by database block. If your row is on two blocks, there will be two change entries in V$LOGMNR_CONTENTS. If your row is on three blocks, then there will be three entries. You just need to be aware of the fact that LogMiner cannot faithfully reproduce the SQL to redo or undo actions against chained and migrated rows.

Other Limits

LogMiner has some other limitations similar to the above. For example, it does not currently support the analysis of index organized tables (IOTs), or of clustered tables and indexes.

V$LOGMNR_CONTENTS

The V$LOGMNR_CONTENTS view contains a row for every logical change to the database retrieved from the processed redo log files. We have already used this view many times, but have utilized only a small fraction of the columns within it. The following list describes all of the columns available in this view, with a more detailed description of what is available in them than you will find in the Oracle documentation:

  • SCN - System change number associated with the transaction that made this change.

  • TIMESTAMP - Date when the redo record was generated. Timestamps cannot be used to infer the ordering of redo records; since the SCN is assigned upon COMMIT, only the SCN can be used for that. Ordering by timestamp in a multi-user system will give the wrong order.

  • THREAD# - Identifies the thread that generated the redo record.

  • LOG_ID - Identifies the log file, within the V$LOGMNR_LOGS view, that contains the redo record. This is a foreign key to the V$LOGMNR_LOGS view.

  • XIDUSN - Transaction ID (XID) undo segment number (USN). The transaction identifier is constructed from the XIDUSN, XIDSLOT, and XIDSQN, and is used to identify the transaction that generated the change. These three fields taken together uniquely identify the transaction.

  • XIDSLOT - Transaction ID slot number. Identifies the transaction table entry number.

  • XIDSQN - Transaction ID sequence number.

  • RBASQN - Uniquely identifies the log that contained this redo record, among a group of redo logs. An RBA (Redo Block Address) is composed of the RBASQN, RBABLK, and RBABYTE fields.

  • RBABLK - The block number within the log file.

  • RBABYTE - The byte offset within the block.

  • UBAFIL - UBA (Undo Block Address) file number, identifying the file containing the undo block. The UBA is constructed from the UBAFIL, UBABLK, UBASQN, and UBAREC, and is used to identify the undo generated for the change.

  • UBABLK - UBA block number.

  • UBAREC - UBA record index.

  • UBASQN - UBA undo block sequence number.

  • ABS_FILE# - Data block absolute file number. The ABS_FILE#, together with the REL_FILE#, DATA_BLOCK#, DATA_OBJ#, and DATA_DOBJ#, identifies the block changed by the transaction.

  • REL_FILE# - Data block relative file number. The file number is relative to the tablespace of the object.

  • DATA_BLOCK# - Data block number.

  • DATA_OBJ# - Data block object number.

  • DATA_DOBJ# - Data block data object number, identifying the object within the tablespace.

  • SEG_OWNER - Name of the user owning the object.

  • SEG_NAME - Name of the structure the segment was allocated for (in other words, table name, cluster name, and so on). Partitioned tables will have a segment name constructed of two parts: the table name followed by a comma-separated partition name (for example, (TableName,PartitionName)).

  • SEG_TYPE - The type of the segment in numeric form.

  • SEG_TYPE_NAME - The type of the segment in string form (in other words, TABLE, INDEX, and so on). Only the type TABLE is supported in the initial release; other segment types will be reported as UNSUPPORTED.

  • TABLE_SPACE_NAME - Name of the tablespace.

  • ROW_ID - Row ID of the affected row.

  • SESSION# - Identifies the session that generated the redo. A Null value will be reported if the session number is not available from the redo log.

  • SERIAL# - Serial number of the session that generated the redo. The SESSION# and SERIAL# can be used to uniquely identify the Oracle session. A Null value will be reported if it is not available from the redo log.

  • USERNAME - Name of the user initiating the operation that generated the redo record. The user name will always be Null if the archive auditing option is not enabled. This auditing is enabled via the init.ora parameter TRANSACTION_AUDITING.

  • SESSION_INFO - String containing the login username, client information, OS username, machine name, OS terminal, OS process ID, and OS program name.

  • ROLLBACK - A value of 1 (True) identifies operations and SQL statements that were generated as a result of a rollback request; 0 (False) otherwise.

  • OPERATION - Type of SQL operation. Only INSERT, DELETE, UPDATE, COMMIT, and BEGIN_TRANSACTION will be reported. All other operations will be reported as UNSUPPORTED or INTERNAL_OPERATION.

  • SQL_REDO, SQL_UNDO - These columns contain SQL-compliant statements that represent the logical redo and undo operations decoded from one or more archived log records. A Null value indicates that no valid SQL statement can be generated for this redo record. Some redo records may not be translatable; in this case, SQL_REDO and SQL_UNDO will be Null, and the STATUS column will contain the string UNSUPPORTED.

  • RS_ID - Record Set ID. Uniquely identifies the set of records used to generate a SQL statement (a set may be a single record). It can be used to determine when multiple records generate a single SQL statement. The RS_ID will be identical for all records within the set. The SQL statement will appear only in the last row of the record set; the SQL_REDO and SQL_UNDO columns for all other rows within the set will be Null. Note that the RS_ID/SSN pair together provide a unique SQL identifier for every SQL statement generated (see SSN).

  • SSN - The SQL Sequence Number can be used to identify multiple rows, with valid SQL_REDO statements, that are generated from a single redo record (in other words, array inserts, direct loads). All such rows will have the same RS_ID, but a unique SSN. The SSN is an incrementing value starting at 1 for each new RS_ID.

  • CSF - Continuation SQL Flag. When set to 1 (True), it indicates that a LogMiner-generated SQL_REDO or SQL_UNDO statement is larger than the maximum size of the VARCHAR2 data type (currently 4,000 characters). SQL statements exceeding this limit will span multiple rows; the next row entry will contain the remainder of the SQL statement. The RS_ID, SSN pair will be identical for all continued rows corresponding to the same SQL statement, and the last of the continued rows will have CSF set to 0 (False) to indicate the end of the SQL continuation.

  • STATUS - Indicates the status of the translation. A Null value indicates a successful translation; UNSUPPORTED indicates that this version of LogMiner does not support the SQL translation; READ_FAILURE indicates an internal operating system failure to read from the log file; TRANSLATION_ERROR indicates that LogMiner was unable to complete the translation (this may be due to a corrupted log or an out-of-date dictionary file).

  • PH1_NAME - Placeholder column name. Placeholder columns are generic columns that can be assigned to specific database table columns via an optional LogMiner mapping file.

  • PH1_REDO - Placeholder column redo value.

  • PH1_UNDO - Placeholder column undo value.

  • PH2_NAME, PH2_REDO, PH2_UNDO; PH3_NAME, PH3_REDO, PH3_UNDO; PH4_NAME, PH4_REDO, PH4_UNDO; PH5_NAME, PH5_REDO, PH5_UNDO - The name, redo value, and undo value for the second through fifth placeholder columns, exactly as described for PH1 above.

Summary

LogMiner is not a tool you will use every day - I cannot anticipate any application that would actually use it as part of its processing. It is, however, an easy way to see what the database does, and it is an excellent exploration tool in this regard. We have seen how LogMiner can be useful in finding out 'who did what and when' after the fact - this is the use of LogMiner I've seen most often. You have the errant program that is doing something it wasn't supposed to, or you have a privileged person doing things they shouldn't be (and not owning up to it). If auditing wasn't turned on, you have no other way to go back in time and see what happened. In a pinch, this tool can be used to undo an errant transaction as well, given that it supplies you with the SQL undo and redo statements. In general, you'll find that LogMiner is a good 'stealth' tool. It won't be on the top ten list of executed procedures, but when you need it, it's good to know it is there.


