9.5 Initial tuning

Performance of a DB2 database application can be influenced by many factors, such as the type of workload, application design, database design, capacity planning, and instance and database configuration. Since databases are created with default values suited to computers with relatively small memory and disk storage, you may need to modify them to fit your environment. This section focuses on a number of DB2 UDB performance tuning tips that can be used for the initial configuration.

9.5.1 Table spaces

At database creation time, three table spaces are created:

  • SYSCATSPACE - Catalog table space for storing information about all the objects in the database

  • TEMPSPACE1 - System temporary table space for storing internal temporary data required during SQL operations such as sorting, reorganizing tables, creating indexes, and joining tables

  • USERSPACE1 - User table space for storing application data

By default, all three table spaces are created as System Managed Space (SMS) table spaces, which means that regular operating system functions are used for handling I/O operations.

Reading and writing table data is buffered by the operating system, and space is allocated according to operating system conventions: files with a DAT extension for tables and INX files for table indexes. When a table is initially created, only one page is allocated on disk. As records are inserted into the table, DB2 extends the files one page at a time.

Under heavy insert activity, extending a file by only one page at a time can be very expensive. To minimize the internal overhead of table space extension, you can enable multi-page file allocation for the database. With multi-page file allocation enabled for SMS table spaces, disk space is allocated one extent at a time (an extent is a contiguous group of pages, defined for the table space).

To check whether the feature is enabled, look at the database configuration and search for the multi-page file allocation parameter.

In Example 9-29, multi-page file allocation is not enabled. This can be changed by running the db2empfa program against the target database, as shown in Example 9-30. Since db2empfa connects to the database in exclusive mode, all other users must be disconnected from the database first. After running db2empfa against the target database, check the multi-page file allocation parameter again to verify the new status.

Example 9-29: Checking for current page allocation status

start example
 $db2 get db cfg for sample

 Rollforward pending                                 = NO
 Restore pending                                     = NO
 Multi-page file allocation enabled                  = NO
 Log retain for recovery status                      = NO
 User exit for logging status                        = NO
end example

Example 9-30: Enabling multi-page file allocation

start example
 $db2empfa sample
 $db2 get db cfg for sample

 Rollforward pending                                 = NO
 Restore pending                                     = NO
 Multi-page file allocation enabled                  = YES
 Log retain for recovery status                      = NO
 User exit for logging status                        = NO
end example

Better insert performance can be achieved with Database Managed Space (DMS) table spaces, because containers are pre-allocated and the management of I/O operations is shifted to the database engine. In DB2 UDB Version 8 you can easily add new containers and drop or resize existing ones; data is automatically rebalanced across the containers unless you instruct otherwise. The administrative overhead is therefore no longer significant when compared to SMS table spaces.

For optimal performance, high-volume data and indexes should be placed in DMS table spaces, if possible split across separate raw devices. Initially, the system catalog and system temporary table spaces should stay in SMS table spaces. The system catalog contains large objects, which are not cached by the DB2 UDB engine but can be cached by the operating system. In an OLTP-like environment there is no need to create large temporary objects to process SQL queries, so an SMS system temporary table space is a good starting point.
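As an illustration, a minimal sketch of creating such a DMS table space follows; the container paths and sizes (in 4 KB pages) are assumptions, and the table space name matches the DATASPACE table space that Example 9-33 later assumes to exist:

 CREATE TABLESPACE DATASPACE
   MANAGED BY DATABASE
   USING (FILE '/db2/cont/dataspace_c1' 25600,
          FILE '/db2/cont/dataspace_c2' 25600)
   EXTENTSIZE 32
   PREFETCHSIZE 64;

 -- DB2 UDB Version 8 can resize an existing container in place:
 ALTER TABLESPACE DATASPACE
   RESIZE (FILE '/db2/cont/dataspace_c1' 51200);

With two containers placed on two physical devices, the data is striped across both, and a prefetch size of two extents lets the prefetchers drive both devices in parallel.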

9.5.2 Physical placement of database objects

When creating a database, the first important decision is the storage architecture. Ideally, you have the fastest disks possible and at least 5 to 10 disks per processor (for a high-I/O OLTP workload, use even more). In reality, hardware is often chosen based on other considerations, so to achieve optimal performance the placement of database objects should be carefully planned.

As shown in Figure 9-12, all data modifications are not only written to table space containers, but are also logged to ensure recoverability. Because every insert, update, or delete is replicated in the transactional log, the flushing speed of the logical log buffer can be crucial for the performance of the entire database. To understand the importance of logical log placement, keep in mind that the time necessary to write data to disk depends on the physical distribution of the data on disk: the more random the reads or writes, the more disk head movements are required, and therefore the slower the writing speed. Flushing the logical log buffer to disk is sequential by nature and should not be interfered with by other operations. Locating the logical log files on separate devices isolates them from other processes and ensures uninterrupted sequential writes.

Figure 9-12: Explaining logical log

To move the logical log files to a new location, modify the NEWLOGPATH database parameter as shown in Example 9-31. The logs are relocated to the new path on the next database activation (creating the files can take some time).

Example 9-31: Relocation of logical logs

start example
 db2 update db cfg for sample using NEWLOGPATH /db2/logs 
end example
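A quick way to verify the relocation, assuming no applications are connected (the deactivate/activate cycle simply restarts the database so that the new path is picked up):

 db2 deactivate db sample
 db2 activate db sample
 db2 get db cfg for sample | grep -i "log files"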

When creating a DMS table space with multiple containers, DB2 UDB automatically distributes the data across them in a round-robin fashion, similar to the striping available in disk arrays. To achieve the best possible performance, each table space container should be placed on a dedicated physical device. To parallelize asynchronous writes to and reads from multiple devices, the number of database page cleaners (NUM_IOCLEANERS) and I/O servers (NUM_IOSERVERS) should be adjusted. The best values for these two parameters depend on the type of workload and the available resources. You can start your configuration with the following values:

  • NUM_IOSERVERS = number of physical devices, but not less than three and no more than five times the number of CPUs

  • NUM_IOCLEANERS = number of CPUs

Example 9-32 shows how to set initial values of the parameters for a two-processor machine with six disks available to DB2.

Example 9-32: Updating IO related processes

start example
 db2 update db cfg for sample using NUM_IOSERVERS 6
 db2 update db cfg for sample using NUM_IOCLEANERS 2
end example

If there is a relatively small number of disks available, it can be difficult to keep the database logical logs, data, indexes, system temporary table spaces (more important for processing large queries in a warehousing environment), backup files, and the operating system paging file on separate physical devices. A compromise solution is to have one large file system striped by a disk array (RAID device) and to create table spaces with only one container. Load balancing is shifted to the hardware, and you do not have to worry about space utilization. In that case, to parallelize I/O operations on a single container, the DB2_PARALLEL_IO registry variable should be set before starting the DB2 UDB engine.

By issuing the following command, you enable I/O parallelism within a single container for all table spaces:

 db2set DB2_PARALLEL_IO="*" 

The following example enables parallel I/O only for two table spaces: DATASP1 and INDEXSP1:

 db2set DB2_PARALLEL_IO="DATASP1,INDEXSP1" 

To check the current value of the registry variable, issue:

 db2set DB2_PARALLEL_IO 
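Because registry variables are read at instance startup, the setting takes effect only after the instance is recycled; a typical sequence, assuming the instance can be briefly stopped, is:

 db2set DB2_PARALLEL_IO="*"
 db2stop
 db2start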

9.5.3 Buffer pools

The default size for buffer pools is very small: only 250 pages (~1 MB) on Windows and 1000 pages (~4 MB) on UNIX platforms. The overall buffer size has a great effect on DB2 UDB performance, since it can significantly reduce I/O, the most time-consuming operation. We recommend increasing the default values. However, the total buffer pool size should not be set too high, because there might not be enough memory to allocate the pools. To calculate the maximum buffer size, all other DB2 memory-related parameters, such as the database heap, the agents' memory, and the storage for locks, as well as the operating system and any other applications, should be considered.

Initially, set the total size of the buffer pools to 10% to 20% of available memory; you can monitor the system later and correct it. DB2 Version 8 allows changing buffer pool sizes without shutting down the database. The ALTER BUFFERPOOL statement with the IMMEDIATE option takes effect right away, except when there is not enough reserved space in the database shared memory to allocate the new space. This feature can be used to tune database performance according to periodic changes in usage, for example, switching from daytime interactive use to nighttime batch work.
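For example, a minimal sketch of such a day/night switch, assuming a buffer pool named DATA_BP like the one created later in Example 9-33 (both sizes are illustrative):

 # evening: enlarge the data buffer pool for the batch window (512 MB)
 db2 "alter bufferpool DATA_BP immediate size 131072"

 # morning: shrink it back for interactive work (256 MB)
 db2 "alter bufferpool DATA_BP immediate size 65536"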

Once the total available size is determined, this area can be divided into several buffer pools to improve utilization. Having more than one buffer pool can preserve data in the buffers. For example, suppose that a database has many small, very frequently used tables, which would normally reside in the buffer pool in their entirety and thus be accessible very fast. Now suppose that there is a query which runs against a very large table, uses the same buffer pool, and involves reading more pages than the total buffer size. When this query runs, the pages of the small, frequently used tables are forced out of the buffer pool, making it necessary to re-read them when they are needed again.

As a start, you can create an additional buffer pool for caching data and leave IBMDEFAULTBP for the system catalogs. Creating an extra buffer pool for system temporary data can also be valuable for system performance, especially in an OLTP environment, where the temporary objects are relatively small. An isolated temporary buffer pool is not influenced by the current workload, so it should take less time to find free pages for temporary structures, and it is likely that the modified pages will not be swapped out to disk. In a warehousing environment, operations on temporary table spaces are considerably more intensive, so the buffer pools should be larger, or combined with other buffer pools if there is not enough memory in the system (one pool for caching both data and temporary operations).

Example 9-33 shows how to create buffer pools, assuming that an additional table space DATASPACE for storing data and indexes has already been created and that there is enough memory in the system. You can take this as a starting buffer pool configuration for a 2 GB RAM system.

Example 9-33: Increasing buffer pools

start example
 connect to sample;

 -- creating two buffer pools: 256 MB and 64 MB
 create bufferpool DATA_BP immediate size 65536 pagesize 4k;
 create bufferpool TEMP_BP immediate size 16384 pagesize 4k;

 -- changing the size of the default buffer pool
 alter bufferpool IBMDEFAULTBP immediate size 16384;

 -- binding the table spaces to buffer pools
 alter tablespace DATASPACE bufferpool DATA_BP;
 alter tablespace TEMPSPACE1 bufferpool TEMP_BP;

 -- checking the results
 select substr(bs.bpname,1,20) as BPNAME,
        bs.npages,
        bs.pagesize,
        substr(ts.tbspace,1,20) as TBSPACE
   from syscat.bufferpools bs
   join syscat.tablespaces ts
     on bs.bufferpoolid = ts.bufferpoolid;
end example

The results:

 BPNAME               NPAGES      PAGESIZE    TBSPACE
 -------------------- ----------- ----------- --------------------
 IBMDEFAULTBP               16384        4096 SYSCATSPACE
 IBMDEFAULTBP               16384        4096 USERSPACE1
 DATA_BP                    65536        4096 DATASPACE
 TEMP_BP                    16384        4096 TEMPSPACE1

The CHNGPGS_THRESH parameter specifies the percentage of changed pages at which the asynchronous page cleaners are started. The page cleaners write changed pages from the buffer pool to disk. The default value for the parameter is 60%. When that threshold is reached, some users may experience slower response times. Larger buffer pools mean more modified pages in memory and more work to be done by the page cleaners, as shown in Figure 9-13. To guarantee more consistent response times and a shorter recovery phase, lower the value to 50 or 40 using the following command:

 db2 update db cfg for sample using CHNGPGS_THRESH 40 

Figure 9-13: Visualizing CHNGPGS_THRESH parameter

9.5.4 Large transactions

By default, databases are created with relatively little space for transactional logs: only three log files, each of 250 pages on Windows or 1000 pages on UNIX. A single transaction must fit in the available log space to complete; if it does not, it is rolled back by the system (SQL0964C The transaction log for the database is full). To process transactions that modify large numbers of rows, adequate log space is needed. The current total log space available to transactions can be calculated by multiplying the log file size (database parameter LOGFILSIZ, in 4 KB pages) by the number of primary logs (database parameter LOGPRIMARY). From a performance perspective, a larger log file size is better, because of the cost of switching from one log to another. When log archiving is switched on, the log size also determines the amount of data archived at a time. In this case, a larger log file size is not necessarily better, since it may increase the chance of failure, or cause a delay in archiving or log shipping scenarios. The log size and the number of logs should be balanced.

The following example allocates 400 MB of total log space (20 primary logs × 5120 pages × 4 KB).

Example 9-34: Resizing the transactional log

start example
 db2 update db cfg for sample using LOGFILSIZ 5120
 db2 update db cfg for sample using LOGPRIMARY 20
end example

Locking is the mechanism that the database manager uses to control concurrent access to data in the database by multiple applications. Each database has its own lock list, a structure stored in memory that contains the locks held by all applications concurrently connected to the database. The size of the lock list is controlled by the LOCKLIST database parameter. The default storage for LOCKLIST is 50 pages (200 KB) on Windows and 100 pages (400 KB) on UNIX. On 32-bit platforms, each lock requires 36 or 72 bytes of the lock list, depending on whether or not other locks are held on the object. With the default values, a maximum of 5688 (Windows) or 11377 (UNIX) locks can be allocated, as shown in Figure 9-14.

Figure 9-14: Maximum number of locks available for default settings on UNIX

When the maximum number of lock requests has been reached, the database manager replaces existing row-level locks with table locks (lock escalation). This operation reduces the lock space requirements, because a transaction then holds only one lock on the entire table instead of many locks on individual rows. Lock escalation has a negative performance impact because it reduces concurrency on shared objects: other transactions must wait until the transaction holding the table lock commits or rolls back its work.

Lock escalation can also be forced by the MAXLOCKS database parameter, which defines the maximum percentage of the lock list that can be held by one application. The default value on UNIX is 10 (22 on Windows), which means that if one application requests more than 10% of the total lock space (LOCKLIST), an escalation occurs for the locks held by that application. For example, on UNIX, inserting 1137 rows in one transaction results in lock escalation, because the transaction requests 1138 locks (one per inserted row plus one internal lock), which requires at least 1138 × 36 = 40968 bytes, more than 10% of the global lock memory defined by the default LOCKLIST parameter (10% of 409600 = 40960 bytes).

Initial values for LOCKLIST and MAXLOCKS should be based on the maximum number of applications and the average number of locks requested per transaction (for OLTP systems, start with 512 locks for every application). When setting MAXLOCKS, you should also take into account lock-consuming batch processes that run during daytime hours.
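A minimal sizing sketch, assuming roughly 100 concurrent applications, each holding about 512 locks of 72 bytes in the worst case (all numbers are illustrative):

 # 100 applications x 512 locks x 72 bytes = ~3.6 MB, about 900 4 KB pages
 db2 update db cfg for sample using LOCKLIST 900
 db2 update db cfg for sample using MAXLOCKS 10

To check the current usage of locks, use snapshots such as in Example 9-35.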

Example 9-35: Invoking snapshot for locks on database sample

start example
 db2 get snapshot for locks on sample 
end example

The snapshot collects the requested information at the time the command is issued. In Figure 9-15 you can find sample lock snapshot output: at the time the snapshot was run, two applications were connected to the database SAMPLE, and in total 1151 locks were held on the database. Issuing the get snapshot command later can produce different results, because in the meantime the applications may have committed their transactions and released the locks.

Figure 9-15: Explaining lock snapshot information

To check for lock escalation occurrences, look at the db2diag.log file. The lock escalation message should look like Example 9-36.

Example 9-36: Lock escalation message in db2diag.log file

start example
 2003-07-21-19.05.05.888741   Instance:db2inst1   Node:000
 PID:56408(db2agent (SAMPLE) 0)   TID:1   Appid:*LOCAL.db2inst1.0DB5F2004313
 data management  sqldEscalateLocks Probe:3   Database:SAMPLE

 ADM5502W  The escalation of "1136" locks on table "DB2INST1.TABLE01" to
 lock intent "X" was successful.
end example
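To scan for such messages, a simple search of the diagnostic log is usually sufficient; the path below is the default diagnostic directory for a UNIX instance named db2inst1, so adjust it to your DIAGPATH setting:

 grep -i "escalation" /home/db2inst1/sqllib/db2dump/db2diag.log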

Logical log buffer

The default size of the logical log buffer is eight pages (32 KB), often too small for an OLTP database and not big enough for long-running batch processes. In most cases the log records are written to disk when one of the transactions issues a commit, or when the log buffer is full. Increasing the size of the log buffer may result in more efficient I/O, especially when the buffer is flushed because it has filled up: the log records are written to disk less frequently, and more log records are written each time. Initially, set LOGBUFSZ to 128 or 256 4 KB pages. The log buffer area uses space controlled by the DBHEAP database parameter, so consider increasing that parameter as well.
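A minimal sketch of such a change (the DBHEAP value is an illustrative assumption; size it according to your configuration):

 db2 update db cfg for sample using LOGBUFSZ 256
 db2 update db cfg for sample using DBHEAP 1200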

Later, use the snapshot for applications to check the current usage of log space by transactions, as presented in Example 9-37.

Example 9-37: Current usage of log space by applications

start example
 $db2 update monitor switches using uow on
 $db2 get snapshot for applications on sample | grep "UOW log"

 UOW log space used (Bytes)                 = 478
 UOW log space used (Bytes)                 = 21324
 UOW log space used (Bytes)                 = 110865
end example

Before running the application snapshot, the Unit Of Work monitor switch must be turned on. At the time the snapshot was issued, only three applications were running on the system. The first transaction used 478 bytes of log space, the second 21324, and the last 110865, which is roughly 27 pages, more than the default log buffer size. The snapshot gives only the current values from the moment the command was issued; to get more valuable information about the usage of log space by transactions, run the snapshot many times.

Example 9-38 shows how to get information about log I/O activity.

Example 9-38: Checking log I/O activity

start example
 db2 reset monitor for database sample

 # let the transactions run for a while

 db2 get snapshot for database on sample > db_snap.txt
 egrep -i "commit|rollback" db_snap.txt

 Commit statements attempted                = 23
 Rollback statements attempted              = 2
 Internal commits                           = 1
 Internal rollbacks                         = 0
 Internal rollbacks due to deadlock         = 0

 grep "Log pages" db_snap.txt

 Log pages read                             = 12
 Log pages written                          = 630
end example

Before running the database snapshot, you may have to reset the monitors. The values gathered by the snapshot are accumulated since the last monitor reset or database activation, so wait for a certain period after resetting the counters. For convenience, the snapshot output was directed into a file and then analyzed using the UNIX grep tool. In the example, 630 pages were written during the period, which gives about 630 / (23+2+1) ≈ 24 pages per transaction. Looking at the "Log pages written" value alone, it is not possible to tell the average size of a transaction, because the basic DB2 read or write unit is one page (4 KB): issuing a single small insert forces a flush of 4 KB from the log buffer to disk. A partially filled log page remains in the log buffer and can be written to disk more than once, until it is full; this guarantees that the log files are contiguous.

When setting the value of the log buffer, also look at the ratio between log pages read and log pages written. The ideal is zero log pages read combined with a large number of log pages written. When there are too many log pages read, a bigger LOGBUFSZ can improve performance.

9.5.5 SQL execution plan

When a query is issued against a database, DB2 prepares an execution plan. The execution plan defines the steps necessary to retrieve the requested data. In order to prepare an optimal execution plan, the DB2 optimizer considers many elements, such as configuration parameters, available hardware resources, and the characteristics of the database objects (available indexes, table relationships, number of records, data distribution). The database characteristics are collected manually with the RUNSTATS utility and stored in special system catalog tables. The RUNSTATS command should be executed in the following situations (a sample invocation follows the list):

  • When a table has been loaded with new data

    Recommendation: 

    After loading data into DB2 tables, run RUNSTATS before starting tests.

  • When the appropriate indexes have been created

  • When there have been extensive updates, deletions, and insertions that affect a table and its indexes (for example, 10% to 20% of the table and index data has been affected).

  • When the data has been physically reorganized (by running the REORG utility, or adding new containers)
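For example, a typical command-line invocation that collects basic statistics with distribution and detailed index statistics (the table name is an assumption):

 db2 runstats on table db2inst1.employees with distribution and detailed indexes all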

The RUNSTATS command should be executed against each table in the database. The DB2 Control Center can be very helpful for running statistics on a group of tables.

To run statistics using the Control Center, select the desired tables (to select more than one table, press the Ctrl or Shift key while clicking the table names; to select all tables, click any table name and then press Ctrl+A), right-click the selection, and choose the Run Statistics option, as shown in Figure 9-16.

Figure 9-16: Running RUNSTATS on multiple tables

On the first Tables tab, move all items from the Available list to the Selected list by clicking the >> button. Figure 9-17 presents the sample result of this operation.

Figure 9-17: Selecting tables for the RUNSTATS command

On the Statistics tab, you can specify options for the RUNSTATS command. You can start by collecting basic statistics on all columns and indexes, and distribution of values only for key columns, as presented in Figure 9-18. After setting the RUNSTATS options, you can execute the commands by clicking the Apply button.

Figure 9-18: RUNSTATS command options

DB2 comes with a very powerful query optimization algorithm. This cost-based algorithm attempts to determine the cheapest way to perform a query against the database. Items such as the database configuration, physical layout, table relationships, and data distribution are all considered when finding the optimal access plan for a query. To check the current execution plan, you can use the Explain SQL function (described in "Visual Explain" on page 312).
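From the command line, an access plan can also be captured into the explain tables and formatted with the db2exfmt tool; a minimal sketch, assuming the explain tables do not yet exist, the instance owner is db2inst1, and the query (against the SAMPLE database's STAFF table) is arbitrary:

 # create the explain tables (EXPLAIN.DDL ships in sqllib/misc)
 db2 connect to sample
 db2 -tf /home/db2inst1/sqllib/misc/EXPLAIN.DDL

 # capture a plan and format the most recently explained statement
 db2 "explain plan for select * from staff where name like 'W%'"
 db2exfmt -d sample -1 -o plan.txt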

9.5.6 Configuration advisor

The Configuration Advisor wizard is a GUI tool that can be helpful in preparing an initial DB2 configuration. The wizard requests information about the database, its data, and the purpose of the system, and then recommends new configuration parameters for the database and the instance.

To invoke this wizard from the DB2 Control Center, expand the object tree until you find the database that you want to tune, right-click its icon, and select Configuration Advisor. Through several dialog windows, the wizard collects information about the percentage of memory dedicated to DB2, the type of workload, the number of statements per transaction, the transaction throughput, the trade-off between recovery and database performance, the number of applications, and the isolation level of applications connected to the database. Based on the supplied answers, the wizard proposes configuration changes and gives the option to apply the recommendations or save them to the Task Center for later execution, as shown in Figure 9-19. The result window is presented in Figure 9-20.

Figure 9-19: Scheduling Configuration Advisor recommendations

Figure 9-20: Configuration Advisor recommendations

Initial configuration recommendations can also be acquired through the text-based AUTOCONFIGURE command. Example 9-39 shows a sample execution of the command.

Example 9-39: Autoconfigure command

start example
 db2 autoconfigure using mem_percent 40 tpm 300 num_local_apps 80 isolation CS apply none
 [...]
 Current and Recommended Values for Database Configuration

 Description                           Parameter           Current Value  Recommended Value
 -------------------------------------------------------------------------------------------
 Max appl. control heap size (4KB)     (APP_CTL_HEAP_SZ) = 4096           128
 Max size of appl. group mem set (4KB) (APPGROUP_MEM_SZ) = 30000          9908
 Default application heap (4KB)        (APPLHEAPSZ)      = 256            256
 Catalog cache size (4KB)              (CATALOGCACHE_SZ) = (MAXAPPLS*4)   404
 Changed pages threshold               (CHNGPGS_THRESH)  = 40             60
 Database heap (4KB)                   (DBHEAP)          = 600            1461
 Degree of parallelism                 (DFT_DEGREE)      = 1              1
 Default tablespace extentsize (pages) (DFT_EXTENT_SZ)   = 32             32
 [...]
end example

Table 9-4 lists all AUTOCONFIGURE command parameters.

Table 9-4: Parameters for AUTOCONFIGURE command

 Keyword           Values                           Explanation
 ----------------  -------------------------------  --------------------------------------------
 mem_percent       1-100 (default: 80)              Percentage of memory to dedicate to DB2
 workload_type     simple, mixed, complex           Type of workload: simple for transaction
                   (default: mixed)                 processing, complex for warehousing
 num_stmts         1-1000000 (default: 10)          Number of statements per unit of work
 tpm               1-200000 (default: 60)           Transactions per minute
 admin_priority    performance, recovery, both      Optimize for better performance or better
                   (default: both)                  recovery time
 is_populated      yes, no (default: yes)           Is the database populated with data?
 num_local_apps    0-5000 (default: 0)              Number of connected local applications
 num_remote_apps   0-5000 (default: 10)             Number of connected remote applications
 isolation         RR, RS, CS, UR (default: RR)     Isolation level: Repeatable Read, Read
                                                    Stability, Cursor Stability, Uncommitted Read
 bp_resizeable     yes, no (default: yes)           Are buffer pools resizeable?

9.5.7 Index Advisor

Well-designed indexes are essential to database performance. DB2 UDB comes with a utility called the Index Advisor, which can recommend indexes for specific SQL queries. The Index Advisor can be invoked either with the db2advis command or through the Design Advisor wizard in the Command Center or Control Center. The utility accepts one or more SQL statements and their relative frequencies, known together as a workload.

The Index Advisor is good for:

  • Finding the best indexes for a problem query

  • Finding the best indexes for a specified workload. When specifying the workload, you can use the frequency parameter to prioritize the queries. You can also limit disk space for the target indexes.

  • Testing an index on a workload without having to create the index

Example 9-40 shows a simple db2advis call against a single SQL query; for more options, type db2advis -h at the command line.

Example 9-40: Finding indexes for a particular query

start example
 db2advis -d db2_emp -s "select first_name, last_name, dept_name
    from departments d, employees e
    where d.dept_code = e.dept_code and e.last_name like 'W%'"

 execution started at timestamp 2003-08-01-14.15.00.408000
 recommending indexes...
 Initial set of proposed indexes is ready.
 Found maximum set of [2] recommended indexes
 Cost of workload with all indexes included [0.155868] timerons
 total disk space needed for initial set [   0.018] MB
 total disk space constrained to         [  -1.000] MB
   2  indexes in current solution
  [ 50.3188] timerons  (without indexes)
  [  0.1559] timerons  (with current solution)
  [%99.69] improvement
 Trying variations of the solution set.
   2  indexes in current solution
  [ 50.3188] timerons  (without indexes)
  [  0.1559] timerons  (with current solution)
  [%99.69] improvement
 --
 -- LIST OF RECOMMENDED INDEXES
 -- ===========================
 -- index[1],    0.009MB
    CREATE UNIQUE INDEX IDX030801141500000 ON "DB2INST1"."DEPARTMENTS"
      ("DEPT_CODE" ASC) INCLUDE ("DEPT_NAME") ALLOW REVERSE SCANS ;
    COMMIT WORK ;
    --RUNSTATS ON TABLE "DEPARTMENTS" FOR INDEX "IDX030801141500000" ;
    COMMIT WORK ;
 -- index[2],    0.009MB
    CREATE INDEX IDX030801141500000 ON "DB2INST1"."EMPLOYEES"
      ("LAST_NAME" ASC, "FIRST_NAME" ASC, "DEPT_CODE" ASC) ALLOW REVERSE SCANS ;
    COMMIT WORK ;
    --RUNSTATS ON TABLE "EMPLOYEES" FOR INDEX "IDX030801141500000" ;
    COMMIT WORK ;
 -- ===========================
 -- DB2 Workload Performance Advisor tool is finished.
end example
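To evaluate a whole workload rather than a single statement, db2advis can also read the statements from an input file in which each statement is preceded by a frequency annotation; a minimal sketch, where the file name and the statements are assumptions:

 # workload.sql
 --#SET FREQUENCY 100
 SELECT first_name, last_name FROM employees WHERE last_name LIKE 'W%';
 --#SET FREQUENCY 5
 SELECT dept_code, COUNT(*) FROM employees GROUP BY dept_code;

 # recommend indexes for the workload, limiting them to about 10 MB of disk
 db2advis -d db2_emp -i workload.sql -l 10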

Launching Index Advisor in a GUI environment

The Index Advisor can also be invoked as a GUI tool. From the Control Center, expand the object tree to find the Database folder, right-click the desired database, and select Design Advisor. The wizard guides you through all the necessary steps and also helps construct a workload by looking for recently executed SQL queries or looking through the recently used packages. In order to get accurate recommendations, it is important to have current catalog statistics. The Design Advisor offers an option to collect the required basic statistics, but this increases the total calculation time. Figure 9-21 presents a sample Design Advisor window.

Figure 9-21: The Design Advisor

Detailed usage of the Design Advisor can be found in the following IBM Redbooks:

  • DB2 UDB Evaluation Guide for Linux and Windows, SG24-6934

  • DB2 UDB Exploitation of the Windows Environment, SG24-6893

  • Up and Running with DB2 for Linux, SG24-6899


