All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored and the transactions recovered to a point beyond the time of the last full, offline backup, the logs are required to roll the data forward to that point in time. There are two types of DB2 logging, and each provides a different level of recovery capability: circular logging and archive logging. Circular logging is the default method when a new database is created.
db2 get db cfg for sample | grep -i "log retain"

 Log retain for recovery status                       = NO
 Log retain for recovery enabled          (LOGRETAIN) = OFF

db2 get db cfg for sample | grep -i "user exit"

 User exit for logging status                         = NO
 User exit for logging enabled             (USEREXIT) = OFF

You can combine the above commands if egrep is available on Windows:

db2 get db cfg for sample | egrep -i "log retain|user exit"

Archive logging is used to enable rollforward recovery of a database, ensuring that transactions performed after the backup images were taken are not lost in the event that the database needs to be restored.
db2 update db cfg for sample using logretain ON userexit ON
db2 get db cfg for sample | egrep -i "log retain|user exit"

 Log retain for recovery status                       = NO
 Log retain for recovery enabled          (LOGRETAIN) = RECOVERY
 User exit for logging status                         = NO
 User exit for logging enabled             (USEREXIT) = ON

NOTE You must take a full database backup immediately after enabling LOGRETAIN and USEREXIT; otherwise, the database is placed in backup pending state, and the next connection to the database will fail with an error.
After the database has been successfully backed up, the "Log retain for recovery status" will change from NO to RECOVERY.

db2 get db cfg for sample | egrep -i "log retain|user exit"

 Log retain for recovery status                       = RECOVERY
 Log retain for recovery enabled          (LOGRETAIN) = RECOVERY
 User exit for logging status                         = YES
 User exit for logging enabled             (USEREXIT) = ON

Without infinite active log files enabled, active logs are limited to the configured number of primary and secondary log files (LOGPRIMARY + LOGSECOND).
NOTE We will discuss infinite active logs later.

The RESTART DATABASE command uses the active logs to apply or undo the necessary transactions to change the database to a consistent and usable state. During crash recovery, all uncommitted changes recorded in the logs are rolled back, and changes that were committed but not yet written from the buffer pool to table space containers are redone. These actions ensure the integrity of the database and guarantee that no transactions will be lost. Online archival log files contain information related to completed transactions no longer required for crash recovery. They reside in the same subdirectory path as the active log files. Offline archival log files have been moved from the active log file directory. The method of moving these files could be either manual or a process that is invoked through a user exit program or a shell script. Archived log files can be placed offline simply by moving them to another directory, or by storing them on tape or elsewhere. Taking online backups is supported only if the database is configured for archive logging. During an online backup operation, all activities against the database are logged. When an online backup image is restored, the logs must be rolled forward at least to the point in time at which the backup operation completed to guarantee the integrity of the data. For this to happen, the logs must be available when the database is restored. After an online backup is complete, DB2 forces the currently active log to be closed, and as a result, it will be archived (if USEREXIT is set to YES or ON). This ensures that your online backup has a complete set of archived logs available for recovery.

db2 backup database sample online to c:\dbbackup

Backup successful. The timestamp for this backup image is : 20020714115137

The USEREXIT database configuration parameter allows you to change where archived logs are stored.
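To use such an online backup, the restore must be followed by a rollforward to at least the end of the backup before the database becomes usable. A minimal sketch, using the timestamp reported by the BACKUP command above:

```shell
# Sketch only: restore the online backup taken above, then roll forward.
# A rollforward at least to the end of the backup is mandatory; here we
# roll forward to the end of the available logs and complete the recovery.
db2 restore database sample from c:\dbbackup taken at 20020714115137
db2 rollforward database sample to end of logs and stop
```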
Changing the NEWLOGPATH parameter affects only where active logs are stored. To determine which log files in the database log path directory are archived logs, check the value of the "First active log file" in the database configuration. This field indicates the lowest numbered log file that is active. Logs with sequence numbers less than the value displayed in "First active log file" are archived logs and can be moved. You can check the value of this parameter by using the Control Center or by using the following command:

db2 get db cfg for sample | grep -i "first active log"

 First active log file                                     = S0000000.LOG
 Percent of max active log space by transaction  (MAX_LOG) = 0
 Num. of active log files for 1 active UOW  (NUM_LOG_SPAN) = 0

Primary Logs (LOGPRIMARY): This parameter specifies the number of primary logs of size LOGFILSIZ that will be created. As you select the number of logs to configure, you must consider the size you make each log and whether your application can handle a log-full condition.
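The "First active log file" check described above can be scripted. The following is a hypothetical helper (the file names and the function are illustrative, not part of DB2) that compares a log file's sequence number against the first active log reported by the database configuration:

```shell
# Hypothetical helper: a log named S<nnnnnnn>.LOG with a sequence number
# lower than the "First active log file" is an archived log and can be moved.
first_active="S0000012.LOG"     # example value taken from a db cfg output

is_archived_log() {
  # $1 = log file to test, $2 = first active log file
  n=${1#S}; n=${n%.LOG}
  f=${2#S}; f=${f%.LOG}
  # Force base 10 so leading zeros are not interpreted as octal
  [ $((10#$n)) -lt $((10#$f)) ]
}

is_archived_log "S0000008.LOG" "$first_active" && echo "S0000008.LOG can be moved"
is_archived_log "S0000012.LOG" "$first_active" || echo "S0000012.LOG is still active"
```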
db2 get db cfg for sample | grep -i logprimary

 Number of primary log files               (LOGPRIMARY) = 3

Secondary Logs (LOGSECOND): This parameter specifies the maximum number of secondary log files that can be created and used for recovery, if needed.
db2 get db cfg for sample | grep -i logsecond

 Number of secondary log files              (LOGSECOND) = 2
Log File Size (LOGFILSIZ): This parameter specifies the size of each configured log, in number of 4-KB pages. The maximum log file size is 262,144 pages.

db2 get db cfg for sample | grep -i logfilsiz

 Log file size (4KB)                        (LOGFILSIZ) = 250

NOTE Prior to Version 7, there was a 4-GB logical limit on the total active log space that you can configure. In Version 7, the upper limit for the active log space is 32 GB. In Version 8, there is no limit on the active log space.

The size of the log file has an impact on performance because there is a cost for switching from one log to another. So, from a pure performance perspective, the larger the log file size, the better. This parameter also indicates the log file size for archiving. In this case, a larger log file size is not necessarily better, because a larger log file size may increase the chance of failure or cause a delay in log shipping scenarios. When considering the active log space, it may be better to have a larger number of smaller log files. For example, if there are two very large log files and a transaction starts close to the end of one log file, only half of the log space remains available. Assuming that you have an application that keeps the database open, or that the DBA has activated the database to minimize processing time when opening it, the log file size should be determined by the amount of time it takes to make offline archived log copies. When LOGRETAIN is set to YES or ON, minimizing log file loss is also an important consideration when setting the log size. Log archiving copies an entire log file. If you use a single large log file, you increase the time between archiving, and if the disk(s) containing the log fails, some transaction information will probably be lost. Decreasing the log file size increases the frequency of archiving but can reduce the amount of information lost in case of a media failure, because the logs will be archived more frequently and will contain less work.
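As a back-of-envelope check, the maximum active log space follows directly from these parameters. A minimal sketch using the default values shown in this section (3 primary logs, 2 secondary logs, 250 4-KB pages each):

```shell
# Maximum active log space = (LOGPRIMARY + LOGSECOND) x LOGFILSIZ x 4 KB.
# Values below are the defaults shown in the db cfg outputs above.
logprimary=3
logsecond=2
logfilsiz=250     # in 4-KB pages

active_log_kb=$(( (logprimary + logsecond) * logfilsiz * 4 ))
echo "Maximum active log space: ${active_log_kb} KB"
```

For the defaults this works out to 5000 KB, which is why reducing LOGFILSIZ (as the review session later does) makes logs fill and archive quickly.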
Log Buffer Size (LOGBUFSZ): This parameter allows you to specify the amount of memory to use as a buffer for log records before writing these records to disk. The log buffer is written to disk when the first of the following events occurs: a transaction commits (or the number of commits set by MINCOMMIT is reached), the log buffer becomes full, or some other internal database manager event occurs.
db2 get db cfg for sample | grep -i logbufsz

 Log buffer size (4KB)                       (LOGBUFSZ) = 8

Increasing the log buffer size results in more efficient I/O activity associated with logging, because the log records are written to disk less frequently, and more records are written each time.

Number of Commits to Group (MINCOMMIT): This parameter allows you to delay the writing of log records in the log buffer to disk until a minimum number of commits have been performed, when the buffer is filled, or at one-second intervals.
NOTE This can slow down very short transactions, because each one will likely take up to 1 second to return.

The grouping of commits occurs only if the value of this parameter is greater than 1 and if the number of applications connected to the database is greater than the value of this parameter. When commit grouping is in effect, application commit requests are held until either one second has elapsed or the number of commit requests equals the value of this parameter.

db2 get db cfg for sample | grep -i mincommit

 Group commit count                         (MINCOMMIT) = 1

New Log Path (NEWLOGPATH): The database logs are initially created in SQLOGDIR, which is a subdirectory under the database directory.
db2 get db cfg for sample | grep -i path

 Changed path to log files                 (NEWLOGPATH) =
 Path to log files                                      = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
 Overflow log path                    (OVERFLOWLOGPATH) =
 Mirror log path                        (MIRRORLOGPATH) =

db2 update db cfg for sample using newlogpath c:\dblog\SAMPLE
db2 get db cfg for sample | grep -i path

 Changed path to log files                 (NEWLOGPATH) = c:\dblog\SAMPLE\NODE0000\
 Path to log files                                      = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
 Overflow log path                    (OVERFLOWLOGPATH) =
 Mirror log path                        (MIRRORLOGPATH) =

Because you can change the log path location, the logs needed for rollforward recovery may exist in different directories or on different devices. The OVERFLOWLOGPATH can be used to specify the location of the logs during a rollforward operation to allow you to access logs in multiple locations.

Overflow Log Path (OVERFLOWLOGPATH): This parameter can be used for several functions, depending on your logging requirements. You can specify a location for DB2 to find log files that are needed for a rollforward operation. It is similar to the OVERFLOWLOGPATH option of the ROLLFORWARD command.

db2 update db cfg for sample using overflowlogpath e:\dblogo\SAMPLE
db2 get db cfg for sample | grep -i path

 Changed path to log files                 (NEWLOGPATH) = c:\dblog\SAMPLE\NODE0000\
 Path to log files                                      = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
 Overflow log path                    (OVERFLOWLOGPATH) = e:\dblogo\SAMPLE\NODE0000\
 Mirror log path                        (MIRRORLOGPATH) = d:\dblogm\SAMPLE\NODE0000\

However, instead of specifying the OVERFLOWLOGPATH option for every ROLLFORWARD command issued, you can set this configuration parameter once. If both are used, the OVERFLOWLOGPATH option of the ROLLFORWARD command overrides the OVERFLOWLOGPATH configuration parameter for that rollforward operation. If LOGSECOND is set to -1, you can specify a directory for DB2 to store active log files retrieved from the archive. Active log files must be retrieved for rollback operations if they are no longer in the active log path.
If OVERFLOWLOGPATH is not specified, DB2 will retrieve the log files into the active log path. By specifying this parameter, you provide an additional resource where DB2 can store the retrieved log files. The benefits include spreading the I/O cost across different disks and allowing more log files to be stored in the active log path. If you have configured a raw device for the active log path, OVERFLOWLOGPATH must be configured if you want to enable infinite active log space.

NOTE In a partitioned database environment, the database partition number is automatically appended to the path. This is done to maintain the uniqueness of the path in a multi-partitioned database configuration.

Log Mirroring: At the database level, mirroring log files helps protect a database from the accidental deletion of an active log and from data corruption caused by a disk failure.
Since the logs for a database can be a single point of failure, you should consider using the DB2 database configuration parameter MIRRORLOGPATH to specify a secondary path for the database to manage copies of the active log file; or you can use the operating system or hardware to mirror the volumes on which the logs are stored. The MIRRORLOGPATH configuration parameter allows the database to write an identical second copy of log files to a different path. It is recommended that you place the secondary log path on a physically separate disk (preferably one that is also on a different disk controller). That way, the disk controller cannot be a single point of failure. When MIRRORLOGPATH is first enabled, it will not actually be used until the database is stopped and reactivated. This is similar to the NEWLOGPATH configuration parameter. If there is an error writing to either the active log path or the mirror log path, the database will mark the failing path as bad, write a message to the administration notification log, and write subsequent log records only to the remaining good log path. DB2 will not attempt to use the bad path again until the current log file is completed. When DB2 needs to open the next log file, it will again try writing to both locations, and if there are no errors, it will continue to use both log paths. If not, DB2 will not attempt to use the path again until the next log file is accessed for the first time. There is no attempt to synchronize the log paths, but DB2 keeps information about access errors that occur, so that the correct paths are used when log files are archived. If a failure occurs while writing to the remaining good path, the database will be shut down.
db2 update db cfg for sample using mirrorlogpath d:\dblogm\SAMPLE
db2 get db cfg for sample | grep -i path

 Changed path to log files                 (NEWLOGPATH) = c:\dblog\SAMPLE\NODE0000\
 Path to log files                                      = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
 Overflow log path                    (OVERFLOWLOGPATH) =
 Mirror log path                        (MIRRORLOGPATH) = d:\dblogm\SAMPLE\NODE0000\
Reducing Logging with the NOT LOGGED INITIALLY Parameter: If your application creates and populates work tables from master tables, and you are not concerned about the recoverability of these work tables because they can easily be recreated from the master tables, you may want to create the work tables specifying the NOT LOGGED INITIALLY parameter on the CREATE TABLE statement. You can achieve the same result for existing tables by using the ALTER TABLE statement with the NOT LOGGED INITIALLY parameter (if and only if they were created with the NOT LOGGED INITIALLY option). The main advantage of using NOT LOGGED INITIALLY is that changes made to the table in the same unit of work in which the attribute is activated are not logged, which reduces log space consumption and speeds up bulk operations.
db2 connect to sample
db2 "create table tab_1 (col1 int, col2 char(20)) not logged initially"
db2 "alter table tab_1 activate not logged initially"
db2 "insert into tab_1 values (1,'MICHAEL')"
db2 "insert into tab_1 values (2,'TIFFANY')"
db2 "insert into tab_1 values (3,'JONATHAN')"
db2 "insert into tab_1 values (4,'JENNIFER')"
db2 "insert into tab_1 values (5,'DWAINE')"
db2 "insert into tab_1 values (6,'TOM')"
db2 COMMIT
db2 "delete from tab_1 where col1=6"
db2 ROLLBACK
Because changes to the table are not logged, you should consider the following when deciding whether to use the NOT LOGGED INITIALLY table attribute: if the unit of work in which the attribute is active fails or is rolled back, the table can be left in an unusable state and may have to be dropped and recreated, and the unlogged changes cannot be recovered by rollforward recovery.
Reducing Logging with Declared Global Temporary Tables (DGTTs): DGTTs can now be logged, and logging is the default. If you plan to use DGTTs as work tables, note that you can suppress logging of their changes by declaring them with the NOT LOGGED option, as the following example shows:
db2 connect to sample
db2 "create user temporary tablespace global_temp_ts managed by system using ('c:\data\global_temp_ts') extentsize 16 prefetchsize 64"

Application
...
declare global temporary table gtab_1 (col1 int, col2 char(20)) with replace in global_temp_ts NOT LOGGED;
insert into gtab_1 values (1,'SAN FRANCISCO');
update gtab_1 set col2='TIBURON' where col1=1;
...
Application

Managing Log Files: Consider the following when managing database logs:
Archiving Log Files with a User Exit Program: The USEREXIT database parameter causes the database manager to call a user exit program to archive and retrieve log files. The log files are archived in a location that is different from the active log path. If USEREXIT is set to ON, rollforward recovery is automatically enabled.

db2 get db cfg for sample | grep -i "user exit"

 User exit for logging status                         = YES
 User exit for logging enabled             (USEREXIT) = ON

Consider the following when deciding whether to enable user exits:
The following considerations apply to calling a user exit program for archiving and retrieving log files:
If USEREXIT is set to YES or ON, an archive request is issued to the user exit program each time an active log file fills. It is also possible that an active log file is not yet full when the last application disconnects from the database; in that case, the user exit program is called for the final, truncated log file.
To prevent either situation from occurring, you can ensure that no other databases on the server that calls the user exit program are open during the rollforward operation, or you can write a user exit program that handles this situation.

When a database is restarted, the minimum number of logs in the database log directory will equal the number of primary logs configured with the LOGPRIMARY database configuration parameter. More logs than the number of primary logs might be found in the log directory if, at the time of shutdown, the number of empty logs in the log directory was greater than the value of the primary log configuration parameter at the time of database restart. This will happen if, between shutdown and restart, the primary log configuration parameter was changed, or if secondary logs were allocated and never used. Each time a database is restarted, DB2 examines the database log directory. If fewer logs than the number of primary logs are found, the difference will be allocated. If more empty logs than primary logs are available in the database directory, DB2 will allow the database to be restarted with as many available empty logs as are found in the database directory. In addition, after database shutdown, any secondary log files that have been created will remain in the active log path at restart time.

How Truncated Logs Are Handled: If a log file is truncated as a result of an online backup or a forced archive (the ARCHIVE LOG command), DB2 does not rename the log file. Instead, a truncated log file is deleted when it becomes inactive, and a new log file is created when the application needs one. As a result, it may appear as though there are fewer log files in the log directory than LOGPRIMARY. You can avoid generating truncated logs altogether by setting the DB2_DISABLE_FLUSH_LOG registry variable to ON. However, this may cause online backups to be unusable if the log is not flushed after the backup completes.
DB2 checks whenever the first active log changes. As a result, information is recorded to disk earlier and more often. The benefit of this change is that if the system crashes, the information stored on disk (about which log files have been successfully archived) is more accurate, and DB2 does not have to reissue the archive request for log files that are already archived. There is no change to what DB2 does after detecting the successful archive of a particular log file. DB2 now detects the completion of log archives earlier and will rename them earlier. Inactive truncated log files are deleted. As a result, the number of log files remaining in the active log path can be fewer than the LOGPRIMARY database configuration value; in this case, DB2 will create new log files when needed. Before this change, restarting the database reduced the number of logs to equal the value of LOGPRIMARY. Now, when you restart a database, DB2 first examines the database log directory. If the number of empty logs is fewer than the number of primary logs, DB2 will allocate new logs to make up the difference. If more empty logs are available than there are primary logs in the database directory, DB2 will allow the database to be restarted with all the available empty logs in the database directory. After database shutdown, any secondary log files in existence will remain in the active log path at restart time. To clear out the active log path, the ARCHIVE LOG command may be used.

Log File Allocation and Removal: If LOGRETAIN is set to YES or ON, archived logs are kept in the database log path directory, and the database is considered recoverable, meaning that rollforward recovery is enabled.
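The restart behavior described above can be sketched as a simple rule. The following helper is illustrative only (the function name and inputs are assumptions, not DB2 interfaces):

```shell
# If fewer empty logs than LOGPRIMARY are found at restart, DB2 allocates
# the difference; if more are found, it restarts with all of them.
logs_after_restart() {
  # $1 = LOGPRIMARY, $2 = empty logs found in the log directory
  if [ "$2" -lt "$1" ]; then
    echo "$1"     # difference is allocated, bringing the count up to LOGPRIMARY
  else
    echo "$2"     # all available empty logs are kept
  fi
}

logs_after_restart 3 1   # two new logs are allocated
logs_after_restart 3 7   # database restarts with all seven empty logs
```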
db2 get db cfg for sample | grep -i "log retain"

 Log retain for recovery status                       = RECOVERY
 Log retain for recovery enabled          (LOGRETAIN) = RECOVERY

The process of allocating new log files and removing old log files depends on the settings of the USEREXIT and LOGRETAIN database configuration parameters. When both LOGRETAIN and USEREXIT are set to OFF:
When LOGRETAIN is set to ON and USEREXIT is set to OFF:
When USEREXIT is set to ON:
If an error is encountered while archiving a log file, archiving is suspended for 5 minutes before being attempted again. DB2 will then continue archiving log files as they become full. Log files that became full during the 5-minute waiting period will not be archived immediately after the delay; DB2 will spread the archiving of these files over time.

Blocking Transactions When the Log Directory Is Full: The BLK_LOG_DSK_FUL configuration parameter can be set to prevent disk-full errors from causing DB2 to stop processing when it cannot create a new log file in the active log path.
db2 get db cfg for sample | grep -i "block log"

 Block log on disk full               (BLK_LOG_DSK_FUL) = NO

db2 update db cfg for sample using blk_log_dsk_ful on
db2 get db cfg for sample | grep -i "block log"

 Block log on disk full               (BLK_LOG_DSK_FUL) = ON

On Demand Log Archiving: DB2 now supports closing the active log for a recoverable database at any time. This allows you to collect a complete set of log files up to a known point, then to use these log files to update a standby database. You can initiate on demand log archiving by invoking the ARCHIVE LOG command.

db2 archive log for database sample
Using Raw Logs: There are some advantages and disadvantages when a raw device is used for the database logs. Advantages:
Disadvantages:
You can easily wipe out the file system on an existing drive if you specify the wrong physical drive number. You can configure a raw log with the NEWLOGPATH database configuration parameter. Before doing so, however, consider the advantages and disadvantages listed above and the additional considerations listed below:
In this situation, DB2 will attempt to use all pages, up to the supported limit. The size of the device, in 4-KB pages, that the operating system makes available to DB2 is referred to as the device size available. The first 4-KB page of the device is not used by DB2 (it is generally used by the operating system). This means that the total space available to DB2 is: device size = device size available - 1. When using raw devices for the logs, secondary logs are not used. The size of the active log space is the number of 4-KB pages that result from (LOGPRIMARY x LOGFILSIZ). Log records are still grouped into log extents, each with a log file size of 4-KB pages. Log extents are placed in the raw device, one after another. Each extent also consists of an extra two pages for the extent header. This means that the number of available log extents on the device is device size / (LOGFILSIZ + 2). The device must be large enough to support the active log space; that is, the number of available log extents must be greater than or equal to the value specified for the LOGPRIMARY database configuration parameter. If the USEREXIT database configuration parameter is set to ON, ensure that the raw device can contain more logs than the value specified for the LOGPRIMARY database configuration parameter. This will compensate for the delay incurred while the user exit program is archiving a log file. If you are using circular logging, the LOGPRIMARY database configuration parameter will determine the number of log extents that are written to the device. This may result in unused space on the device. If you are using LOGRETAIN without a user exit program, after the available log extents are all used up, all operations that result in an update will receive a log full error.
At this time, you must shut down the database and take an offline backup of it to ensure recoverability. After the database backup operation, the log records written to the device are lost. This means that you cannot use an earlier database backup image to restore the database and then roll it forward. If you take a database backup before the available log extents are all used up, you can restore and roll the database forward. If you are using LOGRETAIN with a user exit program, the user exit program is called for each log extent as it is filled with log records. The user exit program must be able to read the device and store the archived log as a file. DB2 will not call a user exit program to retrieve log files to a raw device. Instead, during rollforward recovery, DB2 will read the extent headers to determine whether the raw device contains the required log file. If the required log file is not found in the raw device, DB2 will search the overflow log path. If the log file is still not found, DB2 will call the user exit program to retrieve the log file into the overflow log path. If you do not specify an overflow log path for the rollforward operation, DB2 will not call the user exit program to retrieve the log file. If you have configured a raw device for logging and are using DataPropagator or another application that calls the db2ReadLog API, the OVERFLOWLOGPATH database configuration parameter must be configured. DB2 may call a user exit program to retrieve the log file and return the log data requested by the db2ReadLog API. The retrieved log file will be placed in the path specified by the OVERFLOWLOGPATH database configuration parameter.

How to Prevent Losing Log Files: When a database is restored and rolled forward to a point in time prior to the end of logs, a new set of logs is generated. The currently archived logs will therefore no longer be usable for this new database.
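The raw-device sizing rules above can be checked with simple arithmetic. A minimal sketch; the device size below is an assumed example, while LOGFILSIZ and LOGPRIMARY come from this section's defaults:

```shell
# Device sizes are in 4-KB pages. The first page is reserved for the
# operating system, and each log extent needs LOGFILSIZ + 2 pages
# (two extra pages for the extent header).
device_size_available=2000   # assumed raw device size, in 4-KB pages
logfilsiz=250
logprimary=3

device_size=$(( device_size_available - 1 ))
extents=$(( device_size / (logfilsiz + 2) ))
echo "Available log extents: $extents"
if [ "$extents" -ge "$logprimary" ]; then
  echo "Device is large enough for LOGPRIMARY=$logprimary"
fi
```

Here 1999 / 252 yields 7 extents, which satisfies LOGPRIMARY = 3; with USEREXIT on, the headroom of extra extents also buffers archiving delays.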
Consider the following scenarios:

Scenario #1: Drop a Database. If you plan to drop a database prior to a restore operation, you need to save the log files in the active log path before issuing the DROP DATABASE command. After the database has been restored, these log files may be required for rollforward recovery because some of them may not have been archived before the database was dropped. Normally, you are not required to drop a database prior to issuing the RESTORE command.

Scenario #2: Roll Forward a Database. If you are rolling a database forward to a specific point in time, log data after the timestamp you specify will be overwritten. If, after you have completed the point-in-time rollforward operation and reconnected to the database, you determine that you actually needed to roll the database forward to a later point in time, you will not be able to do so if the logs were not saved to a different path or with a utility such as TSM, because they may already have been overwritten. It is possible that the original set of log files was archived; however, DB2 may call a user exit program to archive the newly generated log files automatically. Depending on how the user exit program is written, this could cause the original set of log files in the archive log directory to be overwritten. Even if both the original and the new set of log files exist in the archive log directory, you may have to determine which set of logs should be used for future recovery operations.

Review Session: Let's manage log files for the database SAMPLE. First, create the SAMPLE database using the db2sampl program. After this database has been successfully created, the default three primary log files, each 250 4-KB pages in size, will be created in the directory:

Path to log files = C:\DB2\NODE0000\SQL00002\SQLOGDIR\

By default, all databases use circular logging.
We need to enable archive logging by updating the LOGRETAIN and/or USEREXIT database configuration parameters. Then we will need to modify the DB2 user exit program to archive and retrieve logs. The log files are archived in a location that is different from the active log path.

How to Enable Archive Logging for the Database

Step 1: Create the SAMPLE database and get the database configuration

db2sampl
db2 connect to sample
db2 get db cfg for sample | grep -i log

 Log retain for recovery status                            = NO
 User exit for logging status                              =
 Catalog cache size (4KB)                (CATALOGCACHE_SZ) = (MAXAPPLS*4)
 Log buffer size (4KB)                          (LOGBUFSZ) = 8
 Log file size (4KB)                           (LOGFILSIZ) = 250
 Number of primary log files                  (LOGPRIMARY) = 3
 Number of secondary log files                 (LOGSECOND) = 2
 Changed path to log files                    (NEWLOGPATH) =
 Path to log files                                         = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
 Overflow log path                       (OVERFLOWLOGPATH) =
 Mirror log path                           (MIRRORLOGPATH) =
 First active log file                                     =
 Block log on disk full                  (BLK_LOG_DSK_FUL) = NO
 Percent of max active log space by transaction  (MAX_LOG) = 0
 Num. of active log files for 1 active UOW  (NUM_LOG_SPAN) = 0
 Percent log file reclaimed before soft chckpt   (SOFTMAX) = 100
 Log retain for recovery enabled               (LOGRETAIN) = OFF
 User exit for logging enabled                  (USEREXIT) = OFF

For this example, we will use a smaller log file size so that the logs fill up quickly: we will reduce LOGFILSIZ to 16 4-KB pages. For recovery reasons, we will also change the active log directory to c:\dblog\SAMPLE. This directory structure must match the definition in the db2uext2.c program.
Step 2: Reduce the log file size

db2 update db cfg for sample using logfilsiz 16 logprimary 2 logsecond 10
db2 get db cfg for sample | grep -i log

 Log retain for recovery status                            = NO
 User exit for logging status                              =
 Catalog cache size (4KB)                (CATALOGCACHE_SZ) = (MAXAPPLS*4)
 Log buffer size (4KB)                          (LOGBUFSZ) = 8
 Log file size (4KB)                           (LOGFILSIZ) = 16
 Number of primary log files                  (LOGPRIMARY) = 2
 Number of secondary log files                 (LOGSECOND) = 10
 Changed path to log files                    (NEWLOGPATH) =
 Path to log files                                         = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
 Overflow log path                       (OVERFLOWLOGPATH) =
 Mirror log path                           (MIRRORLOGPATH) =
 First active log file                                     =
 Block log on disk full                  (BLK_LOG_DSK_FUL) = NO
 Percent of max active log space by transaction  (MAX_LOG) = 0
 Num. of active log files for 1 active UOW  (NUM_LOG_SPAN) = 0
 Percent log file reclaimed before soft chckpt   (SOFTMAX) = 100
 Log retain for recovery enabled               (LOGRETAIN) = OFF
 User exit for logging enabled                  (USEREXIT) = OFF

By default, the first database created is SQL00001; here, the active log files reside under C:\DB2\NODE0000\SQL00002\SQLOGDIR\. After the database is updated with the NEWLOGPATH parameter, the active log files will be allocated under the new log path directory when the last user disconnects from the database and the first user reconnects.

ls -altr c:\DB2\NODE0000\SQL00002\SQLOGDIR\
total 6048
drwxrwxrwx 1 Administrators None       0 Jul 14 22:12 ..
drwxrwxrwx 1 Administrators None       0 Jul 14 22:12 .
-rwxrwxrwa 1 Administrators None 1032192 Jul 14 22:15 S0000002.LOG
-rwxrwxrwa 1 Administrators None 1032192 Jul 14 22:15 S0000001.LOG
-rwxrwxrwa 1 Administrators None 1032192 Jul 14 22:16 S0000000.LOG

db2 update db cfg for sample using newlogpath c:\dblog\SAMPLE
db2 connect to sample
db2 get db cfg for sample | grep -i log

 Log retain for recovery status                            = NO
 User exit for logging status                              =
 Catalog cache size (4KB)                (CATALOGCACHE_SZ) = (MAXAPPLS*4)
 Log buffer size (4KB)                          (LOGBUFSZ) = 8
 Log file size (4KB)                           (LOGFILSIZ) = 16
 Number of primary log files                  (LOGPRIMARY) = 2
 Number of secondary log files                 (LOGSECOND) = 10
 Changed path to log files                    (NEWLOGPATH) =
 Path to log files                                         = c:\dblog\SAMPLE\NODE0000\
 Overflow log path                       (OVERFLOWLOGPATH) =
 Mirror log path                           (MIRRORLOGPATH) =
 First active log file                                     =
 Block log on disk full                  (BLK_LOG_DSK_FUL) = NO
 Percent of max active log space by transaction  (MAX_LOG) = 0
 Num. of active log files for 1 active UOW  (NUM_LOG_SPAN) = 0
 Percent log file reclaimed before soft chckpt   (SOFTMAX) = 100
 Log retain for recovery enabled               (LOGRETAIN) = OFF
 User exit for logging enabled                  (USEREXIT) = OFF

ls -altr c:\DB2\NODE0000\SQL00002\SQLOGDIR\
total 0
drwxrwxrwx 1 Administrators None       0 Jul 14 22:28 ..
drwxrwxrwx 1 Administrators None       0 Jul 14 22:28 .

ls -altr c:\dblog\sample\NODE0000\
total 289
-rwxrwxrwa 1 Administrators None     512 Jul 14 22:28 SQLLPATH.TAG
-rwxrwxrwa 1 Administrators None   73728 Jul 14 22:28 S0000001.LOG
-rwxrwxrwa 1 Administrators None   73728 Jul 14 22:28 S0000000.LOG
drwxrwxrwx 1 Administrators None       0 Jul 14 22:28 ..
drwxrwxrwx 1 Administrators None       0 Jul 14 22:28 .

Now, we need to enable archive logging by updating the LOGRETAIN and USEREXIT database configuration parameters.

Step 3: Enable archive logging

db2 update db cfg for sample using logretain ON userexit ON

Right after the database is changed from circular logging to archive logging, we must take a full database backup to reset the backup pending state.
NOTE: We need to modify the user exit program, db2uext2.cdisk, to archive inactive log files from the active log directory. See the next section on how to set up user exits.

Step 4: Take a full database backup

db2 backup database sample to c:\dbbackup

Backup successful. The timestamp for this backup image is: 20020714224710

db2 connect to sample

   Database Connection Information

 Database server        = DB2/NT 8.1.0
 SQL authorization ID   = TPHAN
 Local database alias   = SAMPLE

db2 get db cfg for sample | grep -i log

Log retain for recovery status                  = RECOVERY
User exit for logging status                    =
Catalog cache size (4KB)      (CATALOGCACHE_SZ) = (MAXAPPLS*4)
Log buffer size (4KB)                (LOGBUFSZ) = 8
Log file size (4KB)                 (LOGFILSIZ) = 16
Number of primary log files        (LOGPRIMARY) = 2
Number of secondary log files       (LOGSECOND) = 10
Changed path to log files          (NEWLOGPATH) =
Path to log files                               = c:\dblog\SAMPLE\NODE0000\
Overflow log path             (OVERFLOWLOGPATH) =
Mirror log path                 (MIRRORLOGPATH) =
First active log file                           = S0000000.LOG
Block log on disk full        (BLK_LOG_DSK_FUL) = NO
Percent of max active log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW (NUM_LOG_SPAN) = 0
Percent log file reclaimed before soft chckpt (SOFTMAX) = 100
Log retain for recovery enabled     (LOGRETAIN) = RECOVERY
User exit for logging enabled        (USEREXIT) = ON

Step 5: Create a test table

db2 connect to sample

   Database Connection Information

 Database server        = DB2/NT 8.1.0
 SQL authorization ID   = TPHAN
 Local database alias   = SAMPLE

db2 "create table tab_1 (c1 int, c2 int)"
DB20000I  The SQL command completed successfully.
db2 terminate
DB20000I  The SQL command completed successfully.

Step 6: Populate test data

From the command line, we will write a short interactive loop to insert 5,000 rows into the tab_1 table. Alternatively, we can write a short script that generates the 5,000 INSERT statements into a data_file and invoke db2 -tvf data_file to run them.
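The data_file approach can be sketched in portable POSIX shell. The loops in this chapter use ksh's print and let; $RANDOM is likewise a ksh/bash extension, so the values column will simply be empty under a strict POSIX shell:

```shell
# Sketch: generate the 5,000 INSERT statements into tab_1.dat,
# suitable for "db2 -tvf tab_1.dat".  File name tab_1.dat is the
# same one used elsewhere in this chapter.
i=0
while [ "$i" -lt 5000 ]; do
  echo "insert into tab_1 values ($i, $RANDOM);"
  i=$((i + 1))
done > tab_1.dat

wc -l tab_1.dat
```

Redirecting the whole loop once (done > tab_1.dat) avoids reopening the file 5,000 times, which the per-iteration >> form would do.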
We will need to open four sessions (or command prompts): the first session lists all files in the active log directory; the second lists all files in the archive log directory; the third shows which log file is the first active log; and the fourth inserts data into the tab_1 table.

NOTE: The DB2 user exit program must be enabled.

Session #1: List all active log files

The following loop lists all active log files in the current directory every 10 seconds. When the test is complete, press Ctrl+C to terminate.

cd c:\dblog\SAMPLE\NODE0000\
while true
do
    ls -altr *LOG
    sleep 10
    echo
done

Output from the above command:

-rwxrwxrwa   1 Administrators  None    73728 Jul 14 22:46 S0000001.LOG
-rwxrwxrwa   1 Administrators  None    73728 Jul 14 22:47 S0000002.LOG

Session #2: List all archived log files

The following loop lists all archived log files in the current directory every 10 seconds. When the test is complete, press Ctrl+C to terminate.

cd c:\dbarch\SAMPLE\NODE0000\
while true
do
    ls -altr *LOG
    sleep 10
    echo
done

Output from the above command:

-rwxrwxrwa   1 Administrators  None       88 Jul 14 22:49 S0000000.LOG

Session #3: Display first active log file

On UNIX or Linux, the following loop displays the first active log file every 10 seconds. When the test is complete, press Ctrl+C to terminate.

while true
do
    db2 get db cfg for sample | grep -i "first active log"
    sleep 10
    echo
done

Output from the above command:

First active log file                           = S0000001.LOG

Session #4: Insert test data

The following loop inserts 5,000 rows into the tab_1 table and exits when it is done.

db2 connect to sample
i=0
while [ $i -lt 5000 ]
do
    print "db2 insert into tab_1 values ($i, $RANDOM)"
    db2 "insert into tab_1 values ($i, $RANDOM)"
    let i=i+1
done

Output from the above command:

insert into tab_1 values(0, 18362);
DB20000I  The SQL command completed successfully.
insert into tab_1 values(1, 18171);
DB20000I  The SQL command completed successfully.
insert into tab_1 values(2, 32302);
...
insert into tab_1 values(4998, 5103);
DB20000I  The SQL command completed successfully.
insert into tab_1 values(4999, 32514);
DB20000I  The SQL command completed successfully.

db2 terminate
DB20000I  The SQL command completed successfully.

Alternatively, you can generate the statements into a file and invoke db2 -tvf data_file to insert the data into the tab_1 table (note that the file must contain the SQL statements themselves, not db2 commands):

db2 connect to sample
i=0
while [ $i -lt 5000 ]
do
    print "insert into tab_1 values ($i, $RANDOM);" >> tab_1.dat
    let i=i+1
done
db2 -tvf tab_1.dat -z tab_1.out
db2 terminate

Observation

It takes approximately six log files to hold the 5,000 rows inserted into the tab_1 table. While the rows are being inserted, log files are archived to the dbarch directory as follows:

num_rec     first_active_log   active_log_dir   archive_log_dir
0000-2000   S0000001.LOG       S0000001.LOG     S0000000.LOG
                               S0000002.LOG
                               S0000003.LOG
2000-2800   S0000002.LOG       S0000002.LOG     S0000000.LOG
                               S0000003.LOG     S0000001.LOG
                               S0000004.LOG
2800-3600   S0000003.LOG       S0000003.LOG     S0000000.LOG
                               S0000004.LOG     S0000001.LOG
                               S0000005.LOG     S0000002.LOG
3600-4500   S0000004.LOG       S0000004.LOG     S0000000.LOG
                               S0000005.LOG     S0000001.LOG
                               S0000006.LOG     S0000002.LOG
                                                S0000003.LOG
4500-5000   S0000005.LOG       S0000005.LOG     S0000000.LOG
                               S0000006.LOG     S0000001.LOG
                               S0000007.LOG     S0000002.LOG
                                                S0000003.LOG
                                                S0000004.LOG

How to Set Up DB2 User Exits

Step 1: Copy the sample user exit program

cd c:\work
cp c:\sqllib\samples\c\db2uext2.cdisk db2uext2.c
chmod 755 db2uext2.c

Step 2: Edit db2uext2.c

The archive, retrieve, and audit paths are C string literals, so the backslashes must be escaped:

cd c:\work
vi db2uext2.c

#define ARCHIVE_PATH      "c:\\dbarch\\"
#define RETRIEVE_PATH     "c:\\dblog\\"
#define AUDIT_ERROR_PATH  "c:\\dblog\\"

Step 3: Create subdirectories

cd c:\dbarch
mkdir -p SAMPLE\NODE0000
cd c:\dblog
mkdir SAMPLE

Step 4: Compile the db2uext2.c program

cd c:\work
cp c:\work\db2uext2.c c:\sqllib\bin
cd c:\sqllib\bin
cl -o db2uext2.exe db2uext2.c

Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8168 for 80x86
Copyright (C) Microsoft
Corp 1984-1998. All rights reserved.

db2uext2.c
Microsoft (R) Incremental Linker Version 6.00.8168
Copyright (C) Microsoft Corp 1992-1998. All rights reserved.

/out:db2uext2.exe
/out:db2uext2.exe
db2uext2.obj

For UNIX or Linux:

cc -o db2uext2 db2uext2.c
chmod +s db2uext2

Debug

If, for some reason, the db2uext2 user exit program does not work properly, examine c:\dblog\ARCHIVE.LOG or USEREXIT.ERR. Here is the content of the ARCHIVE.LOG file:

**********************************************************************
Time Started: Sun Jul 14 22:49:19 2002
Parameter Count: 8
Parameters Passed:
Database name: SAMPLE
Logfile name: S0000000.LOG
Logfile path: c:\dblog\SAMPLE\NODE0000\
Node number: NODE0000
Operating system: NT
Release: SQL08010
Request: ARCHIVE
System Action: ARCHIVE c:\dblog\SAMPLE\NODE0000\S0000000.LOG to c:\dbarch\SAMPLE
Media Type: disk
User Exit RC: 0
Time Completed: Sun Jul 14 22:49:19 2002
**********************************************************************
Time Started: Sun Jul 14 23:14:42 2002
Parameter Count: 8
Parameters Passed:
Database name: SAMPLE
Logfile name: S0000001.LOG
Logfile path: c:\dblog\SAMPLE\NODE0000\
Node number: NODE0000
Operating system: NT
Release: SQL08010
Request: ARCHIVE
System Action: ARCHIVE c:\dblog\SAMPLE\NODE0000\S0000001.LOG to c:\dbarch\SAMPLE
Media Type: disk
User Exit RC: 0
Time Completed: Sun Jul 14 23:14:42 2002
**********************************************************************
Time Started: Sun Jul 14 23:16:37 2002
Parameter Count: 8
Parameters Passed:
Database name: SAMPLE
Logfile name: S0000002.LOG
Logfile path: c:\dblog\SAMPLE\NODE0000\
Node number: NODE0000
Operating system: NT
Release: SQL08010
Request: ARCHIVE
System Action: ARCHIVE c:\dblog\SAMPLE\NODE0000\S0000002.LOG to c:\dbarch\SAMPLE
Media Type: disk
User Exit RC: 0
Time Completed: Sun Jul 14 23:16:37 2002
**********************************************************************
Time Started: Sun Jul 14 23:18:32 2002
Parameter Count:
8
Parameters Passed:
Database name: SAMPLE
Logfile name: S0000003.LOG
Logfile path: c:\dblog\SAMPLE\NODE0000\
Node number: NODE0000
Operating system: NT
Release: SQL08010
Request: ARCHIVE
System Action: ARCHIVE c:\dblog\SAMPLE\NODE0000\S0000003.LOG to c:\dbarch\SAMPLE
Media Type: disk
User Exit RC: 0
Time Completed: Sun Jul 14 23:18:32 2002
**********************************************************************
Time Started: Sun Jul 14 23:20:34 2002
Parameter Count: 8
Parameters Passed:
Database name: SAMPLE
Logfile name: S0000004.LOG
Logfile path: c:\dblog\SAMPLE\NODE0000\
Node number: NODE0000
Operating system: NT
Release: SQL08010
Request: ARCHIVE
System Action: ARCHIVE c:\dblog\SAMPLE\NODE0000\S0000004.LOG to c:\dbarch\SAMPLE
Media Type: disk
User Exit RC: 0
Time Completed: Sun Jul 14 23:20:34 2002
**********************************************************************
Time Started: Sun Jul 14 23:22:17 2002
Parameter Count: 8
Parameters Passed:
Database name: SAMPLE
Logfile name: S0000005.LOG
Logfile path: c:\dblog\SAMPLE\NODE0000\
Node number: NODE0000
Operating system: NT
Release: SQL08010
Request: ARCHIVE
System Action: ARCHIVE c:\dblog\SAMPLE\NODE0000\S0000005.LOG to c:\dbarch\SAMPLE
Media Type: disk
User Exit RC: 0
Time Completed: Sun Jul 14 23:22:17 2002

Check list: