DB2 Tools


The sheer number of DB2 add-on tools that are available in the marketplace validates the need for these tools. Most DB2 shops implement one or more add-on tools for DB2. Of these, IBM's QMF is among the most popular. Many more tools from other vendors fill the void for optimizing performance, implementing database changes, and ensuring database recoverability, just to name a few of their capabilities. Table 39.1 provides a rundown of the categories of products.

Table 39.1. Categories of DB2 Products

Abbrev  Tool Category Definition

ALT   Tools that administer the SQL necessary to change DB2 objects without losing authorizations or other dependent objects.

AUD   Tools that read the DB2 logs and report on data modification and database changes. May also re-apply SQL from log images.

CAT   Tools that enable panel-driven (or GUI-based) access to the DB2 Catalog without having to code actual SQL queries.

COM   Tools that reduce data storage requirements using compression algorithms.

C/S   DB2-related client/server tools for building applications, connecting databases, or enabling remote access. Includes middleware and gateways.

DBA   Database administration and analysis tools that enable a DBA to determine when to reorganize table spaces and indexes. Useful for implementing proactive tuning.

DES   Database modeling and design tools such as upper CASE tools, entity-relationship diagramming tools, and tools to enable logical-to-physical model translation.

DSD   Tools that monitor and manage DB2 DASD and space utilization.

EDT   Tools that provide an ISPF (or GUI-based) editor for accessing, manipulating, and modifying data in DB2 tables. Data is typically displayed using a spreadsheet-like interface and can be modified simply by over-typing (instead of issuing SQL statements).

ETL   Tools that extract, transform, and load data from environment to environment (such as from IMS to DB2). Frequently used in data warehousing projects.

IDX   Tools that analyze your index usage to rate index effectiveness based on SQL statement usage.

INT   Tools that manage and implement data integrity (check constraints) and referential integrity (RI).

MIG   Tools that create and administer the requisite SQL to migrate DB2 objects from one DB2 subsystem to another.

MSC   Miscellaneous tools (do not fit into one of the other categories).

NET   Tools that enable DB2 databases to be connected to the Internet, intranet, and the World Wide Web.

OPR   Operational support tools, such as on-line DB2 standards manuals, change control systems, and schedulers.

PC    PC- and workstation-based DBMSs that mimic DB2 execution such that application development chores can be offloaded from the mainframe.

PLN   Tools that analyze and evaluate the access paths for individual SQL statements and SQL in plans and packages. May also provide suggestions for how to improve the SQL.

PM    DB2 performance monitors.

PRF   DB2 and SQL optimization products that enhance database and application performance.

PRG   Tools that assist the application developer, such as lower CASE tools, 4GLs, SQL generation tools, SQL formatting tools, and application testing tools.

QMF   Tools that augment the functionality and/or enhance the performance of QMF. Examples include query compilers and QMF object administration tools.

QRY   Tools that provide an integrated environment for developing and issuing queries against DB2 tables. May be ISPF- or GUI-based. OLAP tools fall under this category.

REP   Tools that store, manage, and enable access to metadata (such as repositories and data dictionaries).

SEC   Security and authorization tools.

UTL   Tools that generate DB2 utility JCL or enhance DB2 utility functions by providing faster, more efficient execution.


These types of add-on tools can significantly improve the efficiency of DB2 application development. Even IBM is beginning to understand the need for better management tools. IBM provides several DBA tools for free with DB2, including:

  • DB2 Control Center: A tool for viewing and managing DB2 databases

  • DB2 Development Center: A tool for creating, installing, and testing DB2 stored procedures (previously called DB2 Stored Procedure Builder)

  • DB2 Installer: A GUI-based installation assistant

  • DB2 Visual Explain: Graphically presents DB2 EXPLAIN output

  • DB2 Estimator: A tool for estimating the performance of DB2 applications

  • DB2 Warehouse Center: Provides a GUI-driven control center for creating and managing DB2-based data warehouses

Of course, IBM has sold other DB2 tools for a fee such as QMF and DB2-PM for quite some time now. Recently, IBM augmented its portfolio of add-on DB2 tools, and they now compete with the primary DB2 tool vendors such as BMC Software and Computer Associates.

In the following sections, each tool category is described, along with a discussion of desired features. In evaluating products, look for features important to your organization. These lists are not comprehensive, but they provide a good starting point for the evaluation process.

Table Altering Tools

DB2 provides the capability to modify the structure of existing objects using the ALTER DDL statement. The ALTER statement, however, is functionally crippled. DBAs often need to alter parameters that can be specified for an object, but that ALTER does not support. For example, you can add columns to an existing table (only at the end), but you cannot remove columns from a table. The table must be dropped and then re-created without the columns you want to remove.
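To illustrate the asymmetry, consider the following sketch. The object and column names are hypothetical; the ALTER succeeds, but the only way to "remove" a column is the drop-and-re-create sequence, which loses all dependent objects:

```sql
-- Supported: adding a column (DB2 always places it at the end of the row)
ALTER TABLE MYUSER.EMP
  ADD BONUS_PCT DECIMAL(5,2);

-- Not supported: removing a column. The table must be dropped and
-- re-created without the unwanted column, losing indexes, views,
-- authorizations, and other dependent objects along the way:
DROP TABLE MYUSER.EMP;
CREATE TABLE MYUSER.EMP
  (EMPNO    CHAR(6)     NOT NULL,
   LASTNAME VARCHAR(15) NOT NULL)
  IN MYDB.MYTS;
```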


Of course, IBM is making inroads into removing some of these obstacles. DB2 Version 8 provides online schema evolution, whereby more types of changes can be implemented with minimal impact to data availability. For more details on online schema changes, refer to Chapter 7, "Database Change Management and Schema Evolution."


Another problem that DBAs encounter in modifying DB2 objects is the cascading drop effect. If a change to a table space mandates its being dropped and re-created (for example, changing the limit keys of a partitioned table space), all dependent objects are dropped when the table space is dropped. This includes the following:

All tables in the table space

All information in SYSCOPY (including image copy information)

All indexes on the tables

Primary and foreign keys

Check constraints

Synonyms and views

Labels and comments

FIELDPROC and EDITPROC assignments

RUNSTATS values

All authorization below the table space level

Ensuring that DDL is issued after the modification to reverse the effects of cascading drops can be a tedious, complex, and error-prone procedure.

Many types of DB2 object alteration cannot be performed using the generic DB2 ALTER statement. Several examples follow:

You cannot change the name of a database, alias, view, column, constraint, table space, or index.

You cannot create a database based on the attributes of an existing database.

You cannot create a table space based on the attributes of an existing table space.

You cannot change the database in which the table space exists.

You cannot change the table space type (for example, changing a simple table space to a segmented or partitioned table space).

You cannot change the SEGSIZE of a segmented table space.

You cannot copy primary and foreign keys using CREATE LIKE; this command creates a new table based only on the columns of another table.

You cannot move a table from one table space to another.

You cannot rearrange column ordering.

You cannot add a column into the middle of other columns; only at the end of the table.

You cannot decrease the length of a column (you can increase length).

You cannot remove columns from a table.

You cannot change the primary key without dropping and re-creating the primary key.

You cannot add a column specified as NOT NULL to a table.

You cannot add any columns to a table defined with an EDITPROC .


You cannot change a table's EDITPROC or a column's VALIDPROC .

You cannot create a view based on another view.

You cannot add columns to, or remove columns from, a view.

You cannot change the SELECT statement on which the view is based.

You cannot create an index based on another index.

You cannot change the index order (ascending or descending).

You cannot create an alias based on another alias.

You cannot change the location of the alias.

You cannot change the table on which the alias is based.

This list provides the justification needed to obtain an alter tool. Such a tool provides an integrated environment for altering DB2 objects. The burden of ensuring that a change to a DB2 object does not cause other implicit changes is moved from the DBA to the tool.

At a minimum, an alter tool should perform the following functions:

  • Maintain tables easily without manually coding SQL.

  • Retain or reapply all dependent objects and security affected by the requested alter if a drop is required.

  • Retain or reapply all statistical information for dropped objects.

  • Navigate hierarchically from object to object, making alterations as it goes.

  • Provide panel-driven, or point-and-click modification showing before and after definitions of the DB2 objects before the changes are applied.

  • Batch requested changes into a work list that can be executed in the foreground or the background.

  • Analyze changes to ensure that the requested alterations do not violate any DB2 DDL rules. For example, if a series of changes is requested and one change causes a subsequent change to be invalid (an object is dropped, for instance), this should be flagged before execution.

  • Control the environment in which changes are made.

  • Provide for rapid restarts with minimal or no manual intervention required in the event the alter script fails.

  • Be capable of monitoring changes as they are applied.

Auditing Tools

An audit is the examination of a practice to determine its correctness. DB2 auditing software therefore should help in monitoring the data control, data definition, and data integrity in the DB2 environment. Several mechanisms provided by DB2 enable the creation of an audit trail, but this trail can be difficult to follow.

The primary vehicle provided by DB2 for auditing is the audit trace. This feature enables DB2 to trace and record auditable activity initiated by specific users. When the DB2 audit trace is activated, the following type of information can be captured to the trace destination:

Authorization failures

Grant and revoke SQL statements

DDL issued against auditable tables

DML issued against auditable tables

Bind requests involving auditable tables

Authorization ID changes requested by the SET CURRENT SQLID statement

Utility executions

An auditable table is any table defined to DB2 with the AUDIT clause of the CREATE TABLE statement. There are three options for table auditing: NONE, CHANGES, and ALL. Specifying AUDIT NONE, which is the default, disables table auditing so that the audit trace does not track that table. Specifying AUDIT CHANGES indicates that the first DELETE, INSERT, or UPDATE statement issued against that table in every application unit of work (COMMIT scope) is recorded. AUDIT ALL records the first DML statement of any type accessing this table in each application unit of work. Note, however, that this information is tracked only if the appropriate audit trace is activated. Refer to Chapter 24, "Traditional DB2 Performance Monitoring," for more information on DB2 audit traces.
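As a sketch, table auditing is specified on the DDL, but nothing is captured until the corresponding audit trace is started. The object names below are hypothetical, and the trace class numbers shown should be verified against the documentation for your DB2 version:

```sql
-- Record the first data modification per unit of work
CREATE TABLE MYUSER.PAYROLL
  (EMPNO  CHAR(6)      NOT NULL,
   SALARY DECIMAL(9,2) NOT NULL)
  IN MYDB.MYTS
  AUDIT CHANGES;

-- Or audit all access (first DML statement of any type per unit of work)
ALTER TABLE MYUSER.PAYROLL AUDIT ALL;

-- Nothing is recorded until the audit trace is activated, for example:
--   -START TRACE(AUDIT) CLASS(4,5) DEST(SMF)
```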

This information is written to the output trace destination specified for the audit trace. DB2 trace records can be written to GTF, SMF, or an OP buffer. After the information has been written to the specified destination, the problem of how to read this information still exists. If you have DB2-PM, you can run the appropriate audit reports, but even these can be insufficient for true auditing.

An audit tool should provide five important features that DB2's audit tracing capability does not. DB2 auditing requires a trace to be activated, and this can quickly become expensive if many tables must be audited. The first feature an auditing tool should provide is the capability to read the DB2 logs, which are always produced, and report on update activity as needed. This reduces overhead because it uses the regular processing features of DB2 rather than an additional trace.

The DB2 audit trace records a trace record only for the first statement in a unit of work. The second feature of the auditing tool is reporting all data modification from the DB2 logs.

The DB2 audit trace facility does not record the specifics of the data modification. The third feature of an auditing tool is reporting who (by authorization ID) makes each change, and also showing a before and after image of the changed data.

The fourth feature the auditing tool should provide is the capability to report on the DB2 audit trace data if so desired.

A fifth feature of a DB2 auditing tool is to access the DB2 logs to create redo (or undo) SQL scripts that can be run to re-apply data modifications that occurred during a specific timespan. Although this feature does not provide a true auditing function, it is a common feature since auditing tools by their very nature must access the DB2 logs.

NOTE

Another name for generating undo and redo SQL from the log is transaction recovery. An undo transaction recovery involves generating undo SQL statements to reverse the effect of the transactions in error. To generate undo SQL, the database log is read to find the data modifications that were applied during a given timeframe and

  • INSERTs are turned into DELETEs

  • DELETEs are turned into INSERTs

  • UPDATEs are turned around to UPDATE to the prior value
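A hypothetical sketch of what generated undo SQL might look like, assuming the log shows one update and one insert against a sample table (names and values are invented for illustration):

```sql
-- Change captured from the log:
--   UPDATE MYUSER.EMP SET SALARY = 55000 WHERE EMPNO = '000010';
-- Generated undo statement (restores the before image from the log):
UPDATE MYUSER.EMP SET SALARY = 52000 WHERE EMPNO = '000010';

-- Change captured from the log:
--   INSERT INTO MYUSER.EMP (EMPNO, SALARY) VALUES ('000020', 41000);
-- Generated undo statement:
DELETE FROM MYUSER.EMP WHERE EMPNO = '000020';
```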

Redo is a similar form of transaction recovery. But instead of generating SQL for the bad transaction that we want to eliminate, we generate the SQL for the transactions we want to save. Then, we do a standard point in time recovery eliminating all the transactions since the recovery point. Finally, we reapply the good transactions captured in the first step.

Unlike the undo process, which creates SQL statements that are designed to back out all of the problem transactions, the redo process creates SQL statements that are designed to reapply only the valid transactions from a consistent point of recovery to the current time. Because the redo process does not generate SQL for the problem transactions, performing a recovery and then executing the redo SQL can restore the table space to a current state that does not include the problem transactions.


Finally, the auditing tool should provide both standard reports and the capability to create site-specific reports (either from the log or from the DB2 audit trace data).

If your shop has strict auditing requirements, an auditing tool is almost mandatory because of DB2's weak inherent auditing capabilities. Additional things to look for in an auditing tool include the following:

  • The tool should consume minimal resources. Before acquiring an auditing tool, determine the estimated I/O and CPU overhead of running the auditing tool to make sure that it will not disrupt your service level agreements for DB2 applications or consume excessive resources.

  • The tool should be able to understand DB2 audit trace data and provide formatted reports of the audit trace data, including reports on DDL, DML, and DCL executions.

  • Online and batch reporting options should be provided.

  • The tool should be able to capture additional information from the DB2 transaction logs.

  • Of high importance, the tool should be able to identify the authid of any user that modifies any DB2 data in any table being audited.

  • Finally, the tool should be able to read the transaction log and generate undo and redo SQL for transaction recovery.

DB2 Catalog Query and Analysis Tools

The DB2 Catalog contains a wealth of information essential to the operation of DB2. Information about all DB2 objects, authority, and recovery is stored and maintained in the DB2 Catalog. This system catalog is composed of DB2 tables and can be queried using SQL. The data returned by these queries provides a base of information for many DB2 monitoring and administrative tasks.

Coding SQL can be a time-consuming process. Often, you must combine information from multiple DB2 Catalog tables to provide the user with facts relevant for a particular task. This can be verified by reexamining the DB2 Catalog queries presented in Chapter 26, "DB2 Object Monitoring Using the DB2 Catalog and RTS."

Add-on tools can ease the burden of developing DB2 Catalog queries. The basic feature common to all DB2 Catalog tools is the capability to request DB2 Catalog information using a screen-driven interface without coding SQL statements. Analysts can obtain rapid access to specific facts stored in the DB2 Catalog without the burden of coding (sometimes quite complex) SQL. Furthermore, procedural logic is sometimes required to adequately query specific types of catalog information.
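For example, even a simple request such as "show each table, its table space, and how many tables share that table space" requires joining two DB2 Catalog tables, which a catalog tool would generate behind the scenes:

```sql
SELECT T.CREATOR, T.NAME, T.TSNAME, S.NTABLES, T.CARDF
  FROM SYSIBM.SYSTABLES     T,
       SYSIBM.SYSTABLESPACE S
 WHERE T.DBNAME = S.DBNAME
   AND T.TSNAME = S.NAME
   AND T.TYPE   = 'T'       -- tables only (exclude views, aliases, etc.)
 ORDER BY T.CREATOR, T.NAME;
```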

Instead of merely enabling data access, many DB2 Catalog tools can do one or more of the following:

  • Create syntactically correct DDL statements for all DB2 objects by reading the appropriate DB2 Catalog tables. These statements are generally executed immediately or saved in a sequential data set for future reference or use.

  • Modify the "updateable" DB2 Catalog statistical columns using an editor interface (that is, a non-SQL interface).

  • Create syntactically correct DCL statements from the DB2 Catalog in the same way that DDL is generated.

  • Perform "drop analysis" on an SQL DROP statement. This analysis determines the effect of the cascading drop by detailing all dependent objects and security that will be deleted as a result of executing the DROP.

  • Provide a hierarchical listing of DB2 objects. For example, if a specific table is chosen, the tool can migrate quickly up the hierarchy to show its table space and database, or down the hierarchy to show all dependent indexes, views, synonyms, aliases, referentially connected tables, and plans.

  • Create and drop DB2 objects, and grant and revoke DB2 security from a screen without coding SQL. Additionally, some tools log all drops and revokes so that they can be undone in the event of an inadvertent drop or revoke execution.

  • Specify the ISOLATION clause to access DB2 data using cursor stability, repeatable read, read stability, or uncommitted read processing.

  • Serve as a launching point for utilities and commands to be executed.

  • Operate on the DB2 Catalog or on a copy of the DB2 Catalog to reduce system-wide contention.
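The ISOLATION option in the list above corresponds to the SQL isolation clause. As a sketch (not any specific product's behavior), a catalog tool might issue its queries with uncommitted read to avoid taking locks on the catalog:

```sql
SELECT NAME, CREATOR, DBNAME, TSNAME
  FROM SYSIBM.SYSTABLES
 WHERE DBNAME = 'MYDB'      -- hypothetical database name
  WITH UR;                  -- uncommitted read: no read locks taken,
                            -- but uncommitted data may be returned
```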

These features aid the DBA in performing his day-to-day duties. Furthermore, a catalog query tool can greatly diminish the amount of time required for a junior DBA to become a productive member of the DBA team.

Compression Tools

A standard tool for reducing DASD costs is the compression utility. This type of tool operates by applying an algorithm to the data in a table so that the data is encoded in a more compact area. By reducing the amount of area needed to store data, DASD costs are decreased. Compression tools must compress the data when it is added to the table and subsequently modified, and then expand the data when it is later retrieved (see Figure 39.1).

Figure 39.1. A DB2 table compression routine at work.



Third-party compression routines are specified for DB2 tables using the EDITPROC clause of the CREATE TABLE statement. The load module name for the compression routine is supplied as the parameter to the EDITPROC clause. A table must be dropped and re-created to apply an EDITPROC.
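A minimal sketch, assuming a hypothetical compression load module named COMPRTN and invented object names:

```sql
-- The EDITPROC parameter names the compression routine's load module.
-- It can be specified only at CREATE time, so an existing table must
-- be unloaded, dropped, re-created, and reloaded to add one.
CREATE TABLE MYUSER.CUST
  (CUSTNO   CHAR(10)    NOT NULL,
   CUSTNAME VARCHAR(40) NOT NULL)
  IN MYDB.MYTS
  EDITPROC COMPRTN;
```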

In general, a compression algorithm increases CPU costs, because additional processing is required to compress each row when it is stored and expand it when it is retrieved, while providing benefits in the areas of decreased DASD utilization and sometimes decreased I/O costs. This trade-off is not beneficial for all tables. For example, if a compression routine saves 30% on DASD costs but increases CPU consumption without decreasing I/O, the trade-off is probably not worthwhile.

Enhancements to DB2 over the years have made most third-party compression tools of little added value. DB2 V2.3 introduced a basic compression routine called DSN8HUFF, although at that time most third-party compression tools still provided more efficient compression algorithms and advanced analysis to determine the costs and benefits of compression for a specific table. This changed dramatically with DB2 V3. The internal compression capabilities of DB2 since V3 are such that DB2 compression generally outperforms third-party compression tools. Even when a third-party compression tool can provide benefit to an organization (perhaps because it offers multiple compression routines geared for different types of data), the return on investment is rarely sufficient, and most shops stick with internal DB2 compression for new tables. The third-party compression tools, however, have not become obsolete. Many organizations refuse to remove software that is working, for any number of reasons (from an "if it ain't broken, don't fix it" mentality to "not enough cycles to implement such a vast change"). Indeed, most shops are too busy with production work to support the additional effort of removing the third-party EDITPROCs and replacing them with internal DB2 compression.

There are other types of compression tools than those that simply compress DB2 table space data. Some tools compress DB2 image copy backup data sets. These are divided into two camps: those that compress DASD backups and those that compress cartridge backups. This type of compression tool can provide the following benefits:

  • Reduced backup storage costs

  • Reduced elapsed time for taking backups because fewer tapes must be loaded

  • Fewer physical cartridges required (for local and offsite storage)

Another type of compression tool is available from several vendors to compress DB2's archive log data sets. By compressing archive logs, you might be able to fit more archive data sets on DASD, thereby improving the performance of a recovery situation.

DB2-Related Client/Server Tools

Many applications these days span multiple computing environments. This phenomenon is known as client/server processing, and it has proven to be quite successful because it provides a flexible, distributed computing environment. DB2 is a large participant in the client/server plans for many shops. Providing efficient access to large amounts of data, DB2 for z/OS can function as the ultimate database server in a client/server environment.

This being the case, there are many tools on the market that can ease the burden of implementing and administering DB2 in a client/server environment. Middleware products and database gateways that sit between the client workstation and the mainframe enable access to DB2 as a server. These products can provide access to DB2 for z/OS, to DB2 on other platforms, and also to other DBMS server products (Oracle, Microsoft SQL Server, Sybase Adaptive Server Enterprise, Informix, and so on). Additionally, third-party ODBC and JDBC drivers are available to ease workstation access to mainframe DB2 data.

Another valid type of client/server tool is a 4GL programming environment that provides seamless access to DB2. These types of products typically split the application workload between the workstation and the server, helping the programmer rapidly develop DB2 client/server applications.

Database Analysis Tools

DB2 provides only minimal intelligence for database analysis. Usually, a database administrator or performance analyst must keep a vigilant watch over DB2 objects using DB2 Catalog queries or a DB2 Catalog tool. This is not an optimal solution, because it relies on human intervention for efficient database organization, opening up the possibility of human error, forgetting to monitor, and misinterpreting analyzed data.

Fortunately, database analysis tools can proactively and automatically monitor your DB2 environment. This monitoring can perform the following functions:

  • Collect statistics for DB2 table spaces and indexes. These statistics can be standard DB2 RUNSTATS information, extended statistics capturing more information (for example, data set extents), or a combination of both.

  • Read the VSAM data sets for the DB2 objects to capture current statistics, read RUNSTATS from the DB2 Catalog, read tables unique to the tool that captured the enhanced statistics, or any combination of these three.

  • Set thresholds, whereby the automatic scheduling of a REORG utility is invoked based on current statistics. Additional alarm capabilities can be available to take corrective action when database problems are encountered or merely to page the DBA on-call when a problem arises.

  • Provide a "Swiss-army knife" toolkit for DBAs. Some features to look for include integrity checking, page zapping to fix problems, compression analysis, historical statistics, and interfaces to DB2 utilities and commands.

  • Provide a series of canned reports detailing the potential problems for specific DB2 objects.

NOTE


IBM has embarked on a campaign to add intelligence into the DB2 engine and its utilities. For example, with V6 the REORG utility was modified such that it can examine statistics to determine whether or not the reorganization should proceed. Furthermore, a subsequent V7 fix provided the capability for DB2 to collect real-time statistics without manual intervention. This trend will likely continue, and DB2 will become more manageable and self-healing.


Database Modeling and Design Tools

Database modeling and design tools do not have to be unique to DB2 design, although some are. Application development should be based on sound data and process models. The use of a tool to ensure this is a good practice.

Database modeling and design tools may be referred to as CASE tools. CASE, or computer-aided software engineering, is the process of automating the application development life cycle. A CASE tool, such as a data modeling tool, supports portions of that life cycle. A comprehensive checklist of features to look for in a CASE tool is presented in Chapter 14, "Alternative DB2 Application Development Methods." Although CASE tools were very popular in the late 1980s and early 1990s, they have not been in vogue since (though some DB2 applications that were developed in a CASE tool environment still run in production).

Many excellent database design and modeling tools are not specifically designed for DB2 but can be used to develop DB2 applications. Tools developed specifically to support DB2 development, however, add another dimension to the application development effort. They can significantly reduce the development timeframe by automating repetitive tasks and validating the models. If your organization decides to obtain a CASE tool that specifically supports DB2, look for one that can do the following:

  • Provide standard features of logical data modeling (such as entity-relationship diagramming and normalization).

  • Create a physical data model geared to DB2. This model should support all features of DB2, such as the capability to depict all DB2 objects, referential integrity, VCAT and STOGROUP-defined table spaces, and capacity planning.

  • Provide an expert system to verify the accuracy of the physical data model and to suggest alternative solutions.

  • Cross-reference the logical model to the physical model, capturing text that supports physical design decisions such as denormalization and the choice of table space type.

  • Automatically generate DB2-standard DDL to fully implement the database defined in the physical data model.

  • Interface with application development tools and data dictionaries available to the organization.

DASD and Space Management Tools

DB2 provides basic statistics for space utilization in the DB2 Catalog, but the in-depth statistics required for both space management and performance tuning are woefully inadequate. The queries presented in Chapter 26 form a basis for DB2 DASD management, but critical elements are missing.

Monitoring the space requirements of the underlying VSAM data sets to maintain historical growth information can be difficult without a space management tool. When data sets go into secondary extents, performance suffers. Without a DASD management tool, you would have to monitor Real Time Stats or periodically examine LISTCAT output to monitor secondary extents. Both can be time-consuming.

Additionally, the manner in which DB2 allocates space can result in the inefficient use of DASD. Often space is allocated but DB2 does not use it. Although the STOSPACE utility, combined with DB2 queries, provides limited out-of-the-box DASD management, this capability is far from robust. A DASD management tool is the only answer for ferreting out the amount of allocated space versus the amount of used space.
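For example, after the STOSPACE utility has run, the allocated space recorded in the DB2 Catalog can be queried, but nothing out-of-the-box reports how much of that allocated space is actually in use:

```sql
-- SPACE is populated by STOSPACE (kilobytes allocated per partition);
-- PQTY and SQTY show the primary and secondary allocation quantities.
SELECT DBNAME, TSNAME, PARTITION, SPACE, PQTY, SQTY
  FROM SYSIBM.SYSTABLEPART
 ORDER BY SPACE DESC;
```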

DASD management tools often interface with other DB2 and DASD support tools such as standard MVS space management tools, database analysis tools, DB2 Catalog query and management tools, and DB2 utility JCL generators.

DB2 Table Editors

The only method of updating DB2 data is with the SQL data manipulation language statements DELETE, INSERT, and UPDATE. Because these SQL statements operate on data a set at a time, multiple rows, or even all of the rows, can be affected by a single SQL statement. Coding SQL statements for every data modification required during the application development and testing phase can be time-consuming.
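For instance, a single UPDATE with a broad predicate, or no predicate at all, changes many rows at once (the table and column names here are hypothetical):

```sql
-- One statement changes every employee in department 'D11'
UPDATE MYUSER.EMP
   SET SALARY = SALARY * 1.05
 WHERE WORKDEPT = 'D11';

-- With no WHERE clause, every row in the table would be modified:
-- UPDATE MYUSER.EMP SET SALARY = SALARY * 1.05;
```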

A DB2 table editing tool reduces the time needed to make simple data alterations by providing full-screen edit capability for DB2 tables. The user specifies the table to edit and is placed in an edit session that resembles the ISPF editor. The data is presented to the user as a series of rows, with the columns separated by spaces. A header line indicates the column names. The data can be scrolled up and down, as well as left and right. To change data, the user simply types over the current data.

This type of tool is ideal for supporting the application development process. A programmer can make quick changes without coding SQL. Also, if properly implemented, a table editor can reduce the number of erroneous data modifications made by beginning SQL users.

CAUTION

Remember that the table editor is issuing SQL in the background to implement the requested changes. This can cause a lag between the time the user updates the data and the time the data is committed. Table editor updates usually are committed only when the user requests that the data be saved or when the user backs out of the edit session without canceling.


Remember too that table editors can consume a vast amount of resources. Ensure that the tool can limit the number of rows to be read into the editing session. For example, can the tool set a filter such that only the rows meeting certain search criteria are read? Can a limit be set on the number of rows to be read into any one edit session? Without this capability, large table space scans can result.

A DB2 table editor should be used only in the testing environment. End users or programmers might request that a table editor be made available for production data modification. This should be avoided at all costs. The data in production tables is critical to the success of your organization and should be treated with great care. Production data modification should be accomplished only with thoroughly tested SQL or production plans.

When a table editor is used, all columns are available for update. Thus, if a table editor is used to change production data, a simple mis-keying can cause unwanted updates. Native SQL should be used if you must ensure that only certain columns are updated.

Tested SQL statements and application plans are characterized by their planned nature. The modification requests were well thought out and tested. This is not true for changes implemented through a table editor.

In addition to simple online browsing and editing of DB2 data using ISPF (or a GUI), the table-editing tool should be able to

  • Mimic the functionality of the ISPF editor.

  • Provide both single row and multiple row-at-a-time editing options.

  • Optionally prompt the user before applying any changes to the actual data.

  • Cancel accumulated changes of any editing session before exiting.

  • Periodically save the changes without exiting the table editor (as an option).

  • Propagate referential integrity changes.

  • Provide the capability to interface with utilities to copy, load, or unload tables.

  • Interface with your DB2 application testing tools.

  • Apply filters to rows before displaying them in an editing session.

  • Display and save SQL UPDATE, INSERT, and DELETE statements for the accumulated changes of an editing session.

  • Issue SQL within an editing session.

  • Interface with your program editor.

  • Compare data in two tables and show any differences.

Sometimes other DB2 tools, such as a DB2 Catalog query tool, will come with an integrated table editor.
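The save and cancel options listed above map naturally onto transaction commit and rollback. A minimal sketch, using Python with SQLite as a stand-in for DB2 (table and data hypothetical):

```python
import sqlite3

# Sketch of table-editor save/cancel semantics: "save" maps to COMMIT,
# "cancel" to ROLLBACK. SQLite stands in for DB2.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno TEXT PRIMARY KEY, deptname TEXT)")
conn.execute("INSERT INTO dept VALUES ('A00', 'HQ')")
conn.commit()

# The editor issues SQL as the user over-types, but nothing is permanent yet.
conn.execute("UPDATE dept SET deptname = 'HEADQUARTERS' WHERE deptno = 'A00'")

# The user cancels the edit session: accumulated changes are rolled back.
conn.rollback()
name = conn.execute("SELECT deptname FROM dept WHERE deptno = 'A00'").fetchone()[0]
print(name)  # still the original value
```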

Extract/Transformation/Load (Data Movement) Tools

At times, multiple database management systems coexist in data processing shops. This is increasingly true as shops embark on client/server initiatives. Additionally, the same data (or a subset thereof) might need to be stored in each of the databases. In a multiple DBMS environment, the movement of data from DBMS to DBMS is a tedious task. The need to move data from one environment to another is increasing with the overwhelming acceptance and implementation of data warehouses.

ETL tools ease the burden because the tool understands the data format and environment of each DBMS it works with. The data movement and warehousing tools that a shop chooses depend on the following factors:

  • How many DBMS products need to be supported?

  • To what extent is the data replicated across the DBMS products?

  • What transformations need to occur as the data is moved from one environment to another? For example, how are data types converted for DBMSs that do not support date, time, and timestamp data (or that support these data types using a different format)? Or, more simply, do codes in one system need to be transformed into text in another (for example, "A" becomes "Active")?

  • Does the data have to be synchronized across DBMS products?

  • Is the data static or dynamic?

  • If it is dynamic, is it updated online, in batch, or both?

The answers to these questions help determine the type of data conversion tool necessary.
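As a small illustration of the transformation question above, a code-to-text conversion applied as rows are moved might look like the following sketch (the mapping and field names are hypothetical):

```python
# Illustrative ETL transform step: mapping codes to text while moving
# rows between systems. Mapping and field names are hypothetical.
STATUS_TEXT = {"A": "Active", "I": "Inactive", "T": "Terminated"}

def transform(row):
    out = dict(row)
    out["status"] = STATUS_TEXT.get(row["status"], "Unknown")
    return out

extracted = [{"id": 1, "status": "A"}, {"id": 2, "status": "T"}]
loaded = [transform(r) for r in extracted]
print(loaded)
```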

Two basic types of conversion tools are popular in the market today:

Extract tools

These tools extract data from external application systems and other databases for population into DB2 tables. This type of tool can extract data from VSAM, IMS, Sybase, Oracle, flat files, or other structures and insert the data into DB2.

Propagation tools

These tools insert data from external applications and other database products into DB2 tables. A propagation tool is similar in function to an extract tool, but propagation tools are active: they constantly capture updates made in the external system, either for immediate application to DB2 tables or for subsequent batch updating. This differs from an extract tool, which captures entire data structures, not data modifications.


Integrity Tools

Referential integrity (RI) has been available in DB2 since the product's early days, but it can be difficult to administer and implement properly. RI tools ease the difficulty by performing one or more of the following functions:

  • Analyzing data for both system- and user-managed referential integrity constraint violations

  • Executing faster than the IBM CHECK utility

  • Enabling additional types of RI to be supported; for example, analyzing primary keys for which no foreign keys exist and deleting the primary key row

Check constraints for data integrity have been available with DB2 since V4. Tools can help implement and maintain check constraints in the following ways:

  • Analyzing data for both system- and user-managed data integrity constraint violations

  • Executing faster than the IBM CHECK utility

  • Enabling additional types of data integrity to be supported; for example, analyzing the compatibility of check constraints and user-defined DEFAULT clauses
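The kinds of analysis described in both lists boil down to SQL checks against the data. A sketch using Python with SQLite standing in for DB2 (table and column names are hypothetical):

```python
import sqlite3

# Sketch of the checks an RI tool runs: orphaned foreign-key rows, and
# primary keys never referenced by any foreign key. SQLite stands in
# for DB2; table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept (deptno TEXT PRIMARY KEY);
CREATE TABLE emp  (empno INTEGER PRIMARY KEY, workdept TEXT);
INSERT INTO dept VALUES ('A00'), ('B01');
INSERT INTO emp VALUES (1, 'A00'), (2, 'Z99');   -- 'Z99' violates RI
""")

# Foreign-key rows with no matching primary key (constraint violations).
orphans = conn.execute("""
    SELECT e.empno FROM emp e
    WHERE NOT EXISTS (SELECT 1 FROM dept d WHERE d.deptno = e.workdept)
""").fetchall()

# Primary keys for which no foreign key exists (candidates for deletion).
unreferenced = conn.execute("""
    SELECT d.deptno FROM dept d
    WHERE NOT EXISTS (SELECT 1 FROM emp e WHERE e.workdept = d.deptno)
""").fetchall()
print(orphans, unreferenced)
```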

DB2 Object Migration Tools

DB2 does not provide a feature to migrate DB2 objects from one subsystem to another. This can be accomplished only by manually storing the CREATE DDL statements (and all subsequent ALTER statements) for future application in another system. Manual processes such as this are error-prone. Also, this process does not take into account the migration of table data, DB2 security, plans, packages, statistics, and so on.

DB2 object migration tools facilitate the quick migration of DB2 objects from one DB2 subsystem to another. They are similar to a table altering tool but have minimal altering capability (some interface directly with an alter tool or are integrated into a single tool). The migration procedure is usually driven by ISPF panels that prompt the user for the objects to migrate.

Migration typically can be specified at any level. For example, if you request the migration of a specific database, you also could migrate all dependent objects and security. A renaming capability typically is provided so that database names, authorization IDs, and other objects are renamed according to the standards of the receiving subsystem. When the parameters of the migration have been specified completely, the tool creates a job stream to implement the requested DB2 objects in the requested DB2 subsystem.

A migration tool reduces the time required by database administrators to move DB2 databases from environment to environment (for example, from test to production). Quicker turnaround results in a more rapid response to user needs, thereby increasing the efficiency of your business.

Typically, migration tools are the second DB2 tool that an organization acquires (right after a DB2 Catalog query product).

Miscellaneous Tools

Many types of DB2 tools are available. The categories in this chapter cover the major types of DB2 tools, but not all tools can be easily pigeonholed. For example, consider a DB2 table space calculator. It reads table DDL and information on the number of rows in the table to estimate space requirements. A space calculator is often provided with another tool, such as a DASD management tool or a database design and modeling tool.
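A space calculator of this kind is simple arithmetic at heart. The following sketch assumes a simplified model (fixed-length rows, a 4KB page, a PCTFREE-style free-space percentage, and a per-row overhead); it illustrates the idea rather than DB2's exact algorithm:

```python
# Rough table space estimate under a simplified, illustrative model:
# fixed row length, 4KB pages, a percentage of each page held free,
# and a fixed per-row overhead. Not DB2's exact formula.
def estimate_pages(row_length, row_count, page_size=4096, pctfree=10,
                   row_overhead=8):
    usable = page_size * (100 - pctfree) // 100
    rows_per_page = usable // (row_length + row_overhead)
    # ceiling division: pages needed to hold all rows
    return -(-row_count // rows_per_page)

pages = estimate_pages(row_length=120, row_count=1_000_000)
print(pages, "pages, roughly", pages * 4096 // (1024 * 1024), "MB")
```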

The number and types of DB2 tools that are available in the market are constantly growing and are limited only by the imagination and ingenuity of the programmers and vendors offering these solutions.

Internet Enabling Tools

The Internet is the hottest technology trend of the past decade. Every organization is looking for ways to increase its competitive advantage by making corporate data available to customers, partners, and employees over the Internet, intranets, and extranets.

A specialized category of tools is available to hook DB2 data to the Web. These tools are referred to as Internet-enabling tools. For more information on the Internet and IBM's tools for connecting the Web to DB2, refer to Chapter 17, "DB2 and the Internet."

Operational Support Tools

Many avenues encompass operational support in a DB2 environment, ranging from standards and procedures to tools that guarantee smoother operation. This section describes tools from several operational support categories.

One type of product delivers online access to DB2 documentation. With this tool, you avoid the cost of purchasing DB2 manuals for all programmers, and DB2 information and error messages are always available online. In addition, analysts and DBAs who dial in to the mainframe from home can reference DB2 manuals online rather than keep printed copies at home. IBM's BookManager is an example of this type of tool.

Another similar type of operational support tool provides online access to DB2 standards and procedures. These tools are commonly populated with model DB2 standards and procedures that can be modified or extended. Tools of this nature are ideal for a shop with little DB2 experience that wants to launch a DB2 project. As the shop grows, the standards and procedures can grow with it.

Products that provide "canned" standards for implementing, accessing, and administering DB2 databases are particularly useful for shops new to DB2. By purchasing an online standards manual, these shops can quickly come up-to-speed with DB2. However, mature DB2 shops can also benefit from these types of products if the third-party vendor automatically ships updates whenever IBM ships a new release of DB2. This can function as cheap training in the new DB2 release. A product containing DB2 standards should fulfill the following requirements:

  • Provide online access via the mainframe or a networked PC environment, so all developers and DBAs can access the manual

  • Be extensible, so additional standards can be added

  • Be modifiable, so the provided standards can be altered to suit prior shop standards (naming conventions, programming standards, and so on)

Tools also exist to enable a better batch interface to DB2. Standard batch DB2 programs run under the control of the TSO terminal monitor program, IKJEFT01. Another operational support tool provides a call-attach interface that enables DB2 batch programs to run as standard MVS batch jobs without the TSO TMP.

DB2, unlike IMS, provides no inherent capability for storing checkpoint information. Tools that store checkpoint information that can be used by the program during a subsequent restart are useful for large batch DB2 applications issuing many COMMITs.
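The checkpoint/restart pattern these tools implement can be sketched as follows, using Python with SQLite standing in for DB2; the job name, checkpoint table, and commit interval are hypothetical. The key point is that the work and the checkpoint row are committed in the same unit of work, so a restart resumes from the last committed position:

```python
import sqlite3

# Sketch of the checkpoint/restart pattern: the batch program records
# its position in a checkpoint table at each COMMIT and, on restart,
# resumes from the stored position. SQLite stands in for DB2; the job
# name, table, and commit interval are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoint (job TEXT PRIMARY KEY, last_key INTEGER)")

def run_batch(keys, commit_every=2):
    row = conn.execute("SELECT last_key FROM checkpoint WHERE job = 'J1'").fetchone()
    start = row[0] if row else 0          # resume point after a restart
    processed = []
    for i, key in enumerate(k for k in keys if k > start):
        processed.append(key)             # ... real work would go here ...
        if (i + 1) % commit_every == 0:
            conn.execute("REPLACE INTO checkpoint VALUES ('J1', ?)", (key,))
            conn.commit()                 # work and checkpoint commit together

    return processed

first = run_batch([1, 2, 3, 4, 5])   # full run; checkpoints after keys 2 and 4
rerun = run_batch([1, 2, 3, 4, 5])   # simulated restart: resumes past key 4
print(first, rerun)
```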

One final type of operational support tool assists in managing changes. These tools are typically integrated into a change control tool that manages program changes. Change control implemented for DB2 can involve version control and plan and package management, and can ensure that timestamp mismatches (SQLCODE -818) are avoided. Some tools can even control changes to DB2 objects.

PC-Based DB2 Emulation Products

Personal computers are pervasive, and most data processing professionals have one on their desk. Most end users do, too. As such, the need to access DB2 from the PC is a real one. However, not everyone needs to do this in a client/server environment.

Sometimes, just simple access from a PC will suffice. For this, a PC query tool can be used. Data requests originate from the PC workstation. The tool sends the requests to the mainframe for processing.

When processing is finished, the data is returned to the PC and formatted. These types of tools typically use a graphical user interface with pull-down menus and point-and-click functionality, features that are generally not available in mainframe products.

Another popular approach to developing DB2 applications is to create a similar environment on the PC. This can be done using a PC DBMS that works like DB2 and other similar PC products that mimic the mainframe (COBOL, IMS/TM, CICS, JCL, and so on). DB2 for Linux, Unix, and Windows is, of course, the most popular PC DBMS in this category, but there are others.

Quite often, tools that can be used in a straight PC environment also can be used in a client/server environment.

Plan Analysis Tools

The development of SQL to access DB2 tables is the responsibility of an application development team. With SQL's flexibility, the same request can be made in different ways. Because some of these ways are inefficient, the performance of an application's SQL could fluctuate wildly unless the SQL is analyzed in-depth by an expert before implementation.

The DB2 EXPLAIN feature provides information about the access paths used by SQL queries by parsing the SQL in application programs and placing encoded output into a DB2 PLAN_TABLE. To gauge efficiency, a DBA must decode the PLAN_TABLE data and determine whether a more efficient access path is available.
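DB2's PLAN_TABLE is not available outside DB2, but the idea of decoding optimizer output can be illustrated with SQLite's roughly analogous EXPLAIN QUERY PLAN (the output format differs entirely from DB2's; the table and index names are hypothetical):

```python
import sqlite3

# Rough analogy to decoding PLAN_TABLE output: SQLite exposes its
# chosen access path through EXPLAIN QUERY PLAN. The format differs
# entirely from DB2's, but the idea of inspecting optimizer output to
# gauge efficiency is the same. Names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, lastname TEXT)")
conn.execute("CREATE INDEX emp_x1 ON emp (lastname)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM emp WHERE lastname = 'JONES'"
).fetchall()
for row in plan:
    print(row)  # the detail column describes the access path chosen
```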

SQL code reviews are required to ensure that optimal SQL design techniques are used. SQL code walkthroughs are typically performed by a DBA, a performance analyst, or someone with experience in SQL coding. This walkthrough must consist of reviews of the SQL statements, the selected access paths, and the program code in which the SQL is embedded. It also includes an evaluation of the RUNSTATS information to ascertain whether production-level statistics were used at the time of the EXPLAIN .

A line-by-line review of application source code and EXPLAIN output is tedious and prone to error, and it can cause application backlogs. A plan analysis tool can greatly simplify this process by automating major portions of the code review process. A plan analysis tool can typically perform the following functions:

  • Analyze the SQL in an application program, describing the access paths chosen in a graphic format, an English description, or both.

  • Issue warnings when specific SQL constructs are encountered. For example, each time a sort is requested (by ORDER BY , GROUP BY , or DISTINCT ), a message is presented informing the user of the requisite sort.

  • Suggest alternative SQL solutions based on an "expert system" that reads SQL statements and their corresponding PLAN_TABLE entries and poses alternate SQL options.

  • Extend the rules used by the "expert system" to capture site-specific rules.

  • Analyze at the subsystem, application, plan, package, or SQL statement level.

  • Store multiple versions of EXPLAIN output and create performance comparison and plan history reports.

Currently, no tool can analyze the performance of the COBOL code in context along with the SQL that is embedded in it. For example, consider an application program that embeds a singleton SELECT inside a loop. The singleton SELECT requests a single row based on a predicate, checking for the primary key of that table. The primary key value is changed for each iteration of the loop so that the entire table is read from the lowest key value to the highest key value.

A plan analysis tool will probably not flag the SQL statement because the predicate value is for the primary key, which causes an indexed access. It could be more efficient to code a cursor, without a predicate, to retrieve every row of the table, and then fetch each row one by one. This method might use sequential prefetch or query I/O parallelism, thereby reducing I/O and elapsed time, and therefore enhancing performance. Only a trained analyst can catch this type of design problem during a code walkthrough. Plan analysis tools also miss other potential problems, such as when the program has two cursors that should be coded as a one-cursor join. Although a plan analysis tool significantly reduces the effort involved in the code review process, it cannot eliminate it.
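The design problem described above can be sketched as follows, using Python with SQLite standing in for DB2 (table and data hypothetical). Both approaches return the same rows, but the looped version issues one SQL request per key while the cursor version issues a single request:

```python
import sqlite3

# Sketch of the design problem above: N singleton SELECTs driven by a
# key loop versus one cursor reading the table in a single statement.
# SQLite stands in for DB2; names and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, lastname TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, "HAAS"), (2, "THOMPSON"), (3, "KWAN")])

# Singleton SELECT inside a loop: one indexed probe per key value.
looped = [conn.execute("SELECT empno, lastname FROM emp WHERE empno = ?",
                       (k,)).fetchone() for k in (1, 2, 3)]

# One cursor without a predicate: a single sequential read of the table.
cursor_read = conn.execute(
    "SELECT empno, lastname FROM emp ORDER BY empno").fetchall()
print(looped == cursor_read)
```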

Following are some required features for a plan analysis tool:

  • It must be capable of interpreting standard DB2 EXPLAIN output and presenting the information in an easy-to-understand (preferably graphical) format.

  • It must automatically scan application source code and PLAN_TABLEs, reporting on the selected access paths and the predicted performance.

  • It must be able to provide a historical record of access paths by program, package, plan, or SQL statement.

Performance Monitors

Performance monitoring and tuning can be one of the most time-consuming tasks for large or critical DB2 applications. This topic was covered in depth in Parts V and VI. DB2 performance monitoring and analysis tools support many features in many ways. For example, DB2 performance tools can operate as follows:

  • In the background mode as a batch job reporting on performance statistics written by the DB2 trace facility

  • In the foreground mode as an online monitor that either traps DB2 trace information using the instrumentation facility interface or captures information from DB2 control blocks as DB2 applications execute

  • By sampling the DB2 and user address spaces as the program runs and by capturing information about the performance of the job independent of DB2 traces

  • By capturing DB2 trace information and maintaining it in a history file (or table) for producing historical performance reports and for predicting performance trends

  • As a capacity planning device by giving the tool statistical information about a DB2 application and the environment in which it will operate

  • As an after-the-fact analysis tool on a PC workstation for analyzing and graphing all aspects of DB2 application performance and system-wide DB2 performance

DB2 performance tools support one or more of these features. The evaluation of DB2 performance monitors is a complex task. Often more than one performance monitor is used at a single site. Vendors who sell suites of performance monitors for other system software also frequently offer DB2 performance monitors. Whenever possible, try to utilize a DB2 performance monitor that not only offers full functionality for DB2 monitoring, but also integrates with your other performance monitors that work with DB2 (such as your monitors for CICS, IMS, and z/OS).

For more information on DB2 performance monitoring and tuning, refer to Parts V and VI.

Products to Enhance Performance

Performance is an important facet of DB2 database administration. Many shops dedicate several analysts to tweaking and tuning SQL, DB2, and its environment to elicit every performance enhancement possible. If your shop falls into this category, several tools on the market enhance the performance of DB2 by adding functionality directly to DB2. These DB2 performance tools can interact with the base code of DB2 and provide enhanced performance. Typically, these products take advantage of known DB2 shortcomings.

For example, products exist to perform the following functions:

  • Enable DSNZPARMs to be changed without recycling DB2; although DB2 as of V7 offers the ability to change some ZPARM values online, that capability is not yet provided for all of the DSNZPARMs.

  • Enhance the performance of reading a DB2 page.

  • Enhance or augment DB2 buffer pool processing.

Care must be taken when evaluating DB2 performance tools. New releases of DB2 might negate the need for these tools because functionality was added or a known shortcoming was corrected. However, this does not mean that you should not consider performance tools. They can pay for themselves after only a short period of time. Discarding the tool when DB2 supports its functionality is not a problem if the tool has already paid for itself in terms of better performance.

CAUTION

Because these tools interact very closely with DB2, be careful when migrating to a new release of DB2 or a new release of the tool. Extra testing should be performed with these tools because of their intrusive nature.


DB2 Programming and Development Tools

Application development efforts often require the population and maintenance of large test beds for system integration, unit, and user testing. A category of testing tools exists to facilitate this requirement. Testing tools enable an application developer or quality assurance analyst to issue a battery of tests against a test base and analyze the results. Testing tools are typically used for all types of applications and are extended to support testing against DB2 tables.

Many other types of tools enhance the DB2 application development effort. These DB2 programming and development tools can perform as follows:

  • Compare two DB2 tables to determine the differences. These tools enable the output from modified programs to be tested to determine the impact of code change on application output.

  • Enable the testing of SQL statements in a program editor as the programmer codes the SQL.

  • Explain SQL statements in an edit session.

  • Generate complete code from in-depth specifications. Some tools even generate SQL. When code generators are used, great care should be taken to ensure that the generated code is efficient before promoting it to production status.

  • Use 4GLs (fourth-generation languages) that interface to DB2 and extend the capabilities of SQL to include procedural functions (such as looping or row-at-a-time processing).

Due to the variable nature of the different types of DB2 programming tools, they should be evaluated case by case.

QMF Enhancement Tools

A special category of tool, supporting QMF instead of DB2, automatically creates COBOL programs from stored QMF queries. QMF provides a vehicle for the ad hoc development, storage, and execution of SQL statements. When an ad hoc query is developed, it often must be stored and periodically executed. This is possible with QMF, but QMF can execute only dynamic SQL. It does not support static SQL. A method of running critical stored queries using static SQL would be beneficial, because static SQL generally provides better performance than dynamic SQL.

QMF enhancement tools convert the queries, forms, and procs stored in QMF into static SQL statements embedded in a COBOL program. The COBOL program does all the data retrieval and formatting performed by QMF, providing the same report as QMF would. However, the report is now created using static SQL instead of dynamic SQL, thereby boosting performance.

Query Tools

DB2 comes bundled with DSNTEP2 and the SPUFI query tool. Most organizations find these inadequate, however, for developing professional, formatted reports or complete applications. They can be inadequate also for inexperienced users or those who want to develop or execute ad hoc queries.

QMF addresses each of these deficiencies. The capability to format reports without programming is probably the greatest asset of QMF. This feature makes QMF ideal for use as an ad hoc query tool for users.

Another important feature is the capability to develop data manipulation requests without using SQL. QMF provides QBE and Prompted Query in addition to SQL. QBE, or Query By Example, is a language in itself. The user makes data manipulation requests graphically by coding keywords in the columns of a tabular representation of the table to be accessed. For example, a QBE request to retrieve the department number and name for all departments that report to 'A00' would look like the construct shown in Figure 39.2.

Figure 39.2. A QBE data retrieval request.

graphics/39fig02.gif


Prompted Query builds a query by prompting the end user for information about the data to be retrieved. The user selects a menu option and Prompted Query asks a series of questions, the answers to which are used by QMF to build DML. Both QBE and Prompted Query build SQL "behind the scenes" based on the information provided by the end user.
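For the Figure 39.2 request, the SQL that QMF would build behind the scenes is a simple SELECT with a predicate on the administering department. The following sketch runs that query against hypothetical sample data, with SQLite standing in for DB2:

```python
import sqlite3

# The SQL QMF would build behind the scenes for the Figure 39.2 QBE
# request: department number and name for departments reporting to
# 'A00'. Sample data is illustrative; SQLite stands in for DB2.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno TEXT, deptname TEXT, admrdept TEXT)")
conn.executemany("INSERT INTO dept VALUES (?, ?, ?)",
                 [("A00", "SPIFFY COMPUTER SERVICE DIV.", "A00"),
                  ("B01", "PLANNING", "A00"),
                  ("D01", "DEVELOPMENT CENTER", "A00"),
                  ("E21", "SOFTWARE SUPPORT", "E01")])

rows = conn.execute(
    "SELECT deptno, deptname FROM dept WHERE admrdept = 'A00'"
).fetchall()
print(rows)
```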

QMF can also be used to build application systems. A QMF application accesses DB2 data in three ways:

  • Using the QMF Callable Interface from an application program

  • Using the QMF Command Interface (QMFCI) in a CLIST to access QMF

  • Using a QMF procedure

Why would you want to call QMF from an application? QMF provides many built-in features that can be used by application programs to reduce development cost and time. For example, QMF can display online reports that scroll not only up and down but also left and right. (Coding left and right scrolling in an application program is not a trivial task.) QMF also can issue the proper form of dynamic SQL, removing the burden of doing so from the novice programmer. Refer to Chapter 12, "Dynamic SQL Programming," for an in-depth discussion of dynamic SQL techniques.

Another benefit of QMF is that you can use inherent QMF commands to accomplish tasks that are difficult to perform with a high-level language such as COBOL. Consider, for example, the following QMF commands:

EXPORT

Automatically exports report data to a flat file. Without this QMF command, a program would have to allocate a data set and read the report line by line, writing each line to the output file.

DRAW

Reads the DB2 Catalog and builds a formatted SQL SELECT, INSERT, UPDATE, or DELETE statement for any table.

SET

Establishes global values for variables used by QMF.


QMF, however, is not the only game in town. Other vendors provide different DB2 table query and reporting tools that can be used to enhance DB2's ad hoc query capabilities. Some of these products are similar in functionality to QMF but provide additional capabilities. They can do the following:

  • Use static SQL rather than dynamic SQL for stored queries

  • Provide standard query formats and bundled reports

  • Provide access to other file formats such as VSAM data sets or IMS databases in conjunction with access to DB2 tables

  • Provide access from IMS/TM (QMF is supported in TSO and CICS only)

  • Execute DB2 commands from the query tool

Tools that operate on workstations and PCs are becoming more popular than their mainframe counterparts. This is because the PC provides an environment that is more conducive to quickly creating a report from raw data. Using point-and-click, drag-and-drop technology greatly eases the report generation process. Responding to this trend, IBM now offers QMF for Windows, in addition to the traditional QMF for ISPF.

Additionally, data warehousing is driving the creation of tools that enable rapid querying along business dimensions. These tools provide OLAP, or on-line analytical processing. For an overview of data warehousing and OLAP please refer to Chapter 45, "Data Warehousing with DB2."

Finally, fourth-generation languages (4GLs) are gaining more and more popularity for accessing DB2 data. Though not a typical type of DB2 add-on tool, these products provide more functionality than a report writing tool, but with a GUI front end that makes them easier to use than 3GL programming languages such as COBOL and C. 4GL tools typically work in one of three ways:

  • Queries are developed using 4GL syntax, which is then converted "behind the scenes" into SQL queries.

  • SQL is embedded in the 4GL code and executed much like SQL embedded in a 3GL.

  • A hybrid of these two methods, in which the executed SQL is either difficult or impossible to review.

In general, you should avoid 4GLs that require a hybrid approach. When a hybrid method is mandatory, exercise extreme caution before using that 4GL. These methods are usually difficult to implement and maintain, and they typically provide poor performance.

If you do use a 4GL to access DB2 data, heed the following cautions:

  • Many 4GLs provide only dynamic SQL access, which is usually an inefficient way to develop entire DB2 applications. Even if the 4GL provides static SQL access, often the overhead associated with the DB2 interface is high. For this reason, use 4GLs to access DB2 data only for ad hoc or special processing. 4GLs are generally an unacceptable method of developing complete DB2 applications.

  • Be wary of using the syntax of the 4GL to join or "relate" DB2 tables. Instead, use views that efficiently join the tables using SQL, then access the views using the 4GL syntax. I was involved in an application tuning effort in which changing a "relate" in the 4GL syntax to a view improved the elapsed time of a 4GL request by a factor of more than 2.5.
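The view-based approach recommended above can be sketched as follows, using Python with SQLite standing in for DB2 (table, column, and view names are hypothetical). The join is coded once in SQL, and the 4GL then issues a simple single-table request against the view:

```python
import sqlite3

# Sketch of the advice above: define the join once as a view, then let
# the 4GL query the view as if it were a single table. SQLite stands
# in for DB2; all names and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept (deptno TEXT PRIMARY KEY, deptname TEXT);
CREATE TABLE emp  (empno INTEGER PRIMARY KEY, lastname TEXT, workdept TEXT);
INSERT INTO dept VALUES ('A00', 'HQ'), ('B01', 'PLANNING');
INSERT INTO emp  VALUES (1, 'HAAS', 'A00'), (2, 'THOMPSON', 'B01');

CREATE VIEW emp_dept AS
    SELECT e.empno, e.lastname, d.deptname
    FROM emp e JOIN dept d ON d.deptno = e.workdept;
""")

# The 4GL now issues a simple single-table request against the view.
rows = conn.execute(
    "SELECT lastname, deptname FROM emp_dept ORDER BY empno").fetchall()
print(rows)
```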

Repositories

A repository stores information about an organization's data assets. Repositories are used to store metadata, or data about data. They are frequently used to enhance the usefulness of DB2 application development and to document the data elements available in the data warehouse.

What Is Metadata?

Metadata is frequently defined as "data about data," but that definition is insufficient. Metadata characterizes data. It is used to provide documentation such that data can be understood and more readily consumed by your organization. Metadata answers the who, what, when, where, why, and how questions for users of the data.


In choosing a repository, base your decision on the metadata storage and retrieval needs of your entire organization, not just DB2. Typically, a repository can perform the following functions:

  • Store information about the data, processes, and environment of the organization.

  • Support multiple ways of looking at the same data. An example of this concept is the three-schema approach, in which data is viewed at the conceptual, logical, and physical levels.

  • Support data model creation and administration. Integration with popular CASE tools is also an important evaluation criterion.

  • Scan the operational environment to generate metadata from operational systems.

  • Store in-depth documentation, as well as produce detail and management reports from that documentation.

  • Support change control.

  • Enforce naming conventions.

  • Generate copy books from data element definitions.

These are some of the more common functions of a data dictionary. When choosing a data dictionary for DB2 development, the following features are generally desirable:

  • The data stores used by the repository are in DB2 tables. This enables DB2 applications to directly read the data dictionary tables.

  • The repository can directly read the DB2 Catalog or views on the DB2 Catalog. This ensures that the repository has current information on DB2 objects.

  • If the repository does not directly read the DB2 Catalog, an interface is provided to ease the population of the repository using DB2 Catalog information.

  • The repository provides an interface to any modeling and design tools used.

This section is a brief overview of repositories; an extended discussion of data dictionaries is beyond the scope of this book.

NOTE

Because the DB2 Catalog stores database metadata, it can be thought of as a crude data dictionary, but it does not fulfill all of the functionality of a true data dictionary or repository product.


Security Tools

DB2 security is provided internal to DB2 with the GRANT and REVOKE data control language components of SQL. Using this mechanism, authorization is granted explicitly and implicitly to users of DB2. Authorization exits enable DB2 to communicate with other security packages such as IBM's RACF and Computer Associates' Top Secret and ACF2. This eases the administrative burden of DB2 security by enabling the corporate data security function to administer groups of users. DB2 authorization is then granted to the RACF groups instead of individual userids. This decreases the volume of security requests that must be processed by DB2.

DB2's implementation of security has several problems. Paramount among these deficiencies is the effect of the cascading REVOKE. If an authority is revoked from one user who previously granted authority to other users, all dependent authorizations are also revoked. For example, consider Figure 39.3. Assume that Bob is a SYSADM. He grants DBADM WITH GRANT OPTION to Ron and Dianne. Ron then grants the same to Rick and Bill, as well as miscellaneous authority to Chris, Jeff, and Monica. Dianne grants DBADM WITH GRANT OPTION to Dale, Carl, and Janet. She also grants miscellaneous authority to Mike and Sue. Rick, Bill, Dale, Carl, and Janet now have the authority to grant authority to other users. What would be the effect of revoking Ron's DBADM authority? Chris, Jeff, and Monica would lose their authority. In addition, Rick and Bill would lose their authority, as would everyone who was granted authority by either Rick or Bill, and so on.

Figure 39.3. DB2 security cascading REVOKEs.

graphics/39fig03.gif
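The cascade in Figure 39.3 can be modeled as a directed graph of grants: revoking a user's authority implicitly revokes the entire subtree of grants that user issued. The following Python sketch (names taken from the example above; a real analysis would read the DB2 Catalog authorization tables) computes every authorization lost when Ron's DBADM is revoked:

```python
# Grant graph from Figure 39.3: grantor -> list of grantees.
# Revoking a user's authority cascades to everyone in the subtree
# of grants that user issued, directly or transitively.
GRANTS = {
    "Bob":    ["Ron", "Dianne"],
    "Ron":    ["Rick", "Bill", "Chris", "Jeff", "Monica"],
    "Dianne": ["Dale", "Carl", "Janet", "Mike", "Sue"],
}

def cascading_revoke(grants, user):
    """Return every grantee whose authority is implicitly revoked
    when `user` loses the authority under which those grants were made."""
    lost, stack = set(), list(grants.get(user, []))
    while stack:
        grantee = stack.pop()
        if grantee not in lost:
            lost.add(grantee)
            # The grantee's own grants are revoked in turn.
            stack.extend(grants.get(grantee, []))
    return lost

print(sorted(cascading_revoke(GRANTS, "Ron")))
# -> ['Bill', 'Chris', 'Jeff', 'Monica', 'Rick']
```

Note that revoking Bob's SYSADM-derived grants would cascade to all 12 users in the figure, which is exactly why an impact analysis before the REVOKE is so valuable.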


This problem can be addressed by a DB2 security add-on tool. These tools typically analyze the effects of a REVOKE before it is issued. For example, the implications of revoking Ron's DBADM authority would be clearly displayed, showing all implicit revokes. The tool can then revoke the authority and optionally preserve all dependent authority, either by generating the GRANT statements needed to reapply the implicitly revoked authorizations or by automatically reapplying them in the background after the REVOKE.
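The "store and reapply" approach might work as in this hypothetical sketch: before the REVOKE, the dependent authorizations are captured and the corresponding GRANT statements are generated for reissue under a surviving authid. The database and table names are the DB2 sample objects, and the dependent-grant list mirrors the Figure 39.3 example; a real tool would read both from the DB2 Catalog.

```python
# Dependent authorizations that would be lost by revoking Ron's DBADM.
# (grantee, authority, held WITH GRANT OPTION?) -- illustrative values;
# a real tool would select these from the DB2 Catalog authorization tables.
DEPENDENT_GRANTS = [
    ("Rick",  "DBADM ON DATABASE DSN8D81A", True),
    ("Bill",  "DBADM ON DATABASE DSN8D81A", True),
    ("Chris", "SELECT ON DSN8810.EMP",      False),
]

def reapply_statements(grants):
    """Generate GRANT statements that restore each dependent
    authorization after the cascading REVOKE has been issued."""
    stmts = []
    for grantee, auth, with_grant in grants:
        stmt = f"GRANT {auth} TO {grantee}"
        if with_grant:
            stmt += " WITH GRANT OPTION"
        stmts.append(stmt + ";")
    return stmts

for s in reapply_statements(DEPENDENT_GRANTS):
    print(s)
```

The generated statements would then be executed under an authid that still holds the necessary authority, leaving the dependent users unaffected by the original REVOKE.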

These tools provide other functions as well. Consider the administrative overhead when DB2 users are hired, quit, or are transferred: security must be added or removed. A good security tool enables a user to issue a GRANT LIKE command, which can copy DB2 authority from one DB2 object to another or from one user to another. Consider two examples.

Suppose that Ron is transferred to another department. A security tool can assign all of Ron's authority to another user before revoking Ron's authority. Or suppose that a new DB2 table is created for an existing DB2 application, and it requires the same users to access its data as can access the other tables in the application. This type of tool enables a user to copy all security from one table to the new table.
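Copying table security, as in the second example, amounts to reading the existing authorization rows and regenerating GRANT statements against the new table. This hypothetical sketch models the rows on the SYSIBM.SYSTABAUTH catalog table, where 'Y' means a privilege is held, 'G' means it is held WITH GRANT OPTION, and blank means not held (simplified here: the grant option is applied per grantee rather than per individual privilege). Object and user names are illustrative.

```python
# Rows modeled on SYSIBM.SYSTABAUTH for the existing table.
# (grantee, selectauth, insertauth, updateauth, deleteauth)
AUTH_ROWS = [
    ("USER1", "Y", "Y", " ", " "),
    ("USER2", "G", " ", "Y", "Y"),
]
PRIVS = ["SELECT", "INSERT", "UPDATE", "DELETE"]

def copy_table_auth(rows, new_table):
    """Generate GRANT statements that replicate each grantee's
    table privileges against `new_table`."""
    stmts = []
    for grantee, *flags in rows:
        held = [p for p, f in zip(PRIVS, flags) if f in ("Y", "G")]
        if not held:
            continue
        stmt = f"GRANT {', '.join(held)} ON {new_table} TO {grantee}"
        if "G" in flags:  # simplification: per-grantee, not per-privilege
            stmt += " WITH GRANT OPTION"
        stmts.append(stmt + ";")
    return stmts

for s in copy_table_auth(AUTH_ROWS, "DSN8810.NEW_TAB"):
    print(s)
```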

There is one other type of DB2 security product. Rather than augment DB2 security, however, this type of product replaces DB2 security with an external package.

The primary benefit is the consolidation of security administration. If your organization uses a security package from another vendor rather than RACF, administration of regular data security and DB2 security can be consolidated into a single unit. A second benefit is that the cascading REVOKE effect is eliminated, because MVS data security packages do not cascade security revocations.

There are some problems with this type of tool, however. For example, these tools do not conform to the rigorous definition of the relational model, which states that the DBMS must control security. Some do not provide all types of DB2 security; for example, INSTALL SYSADM is still required for DB2 installation and for DB2 Catalog and Directory recovery.

Keep in mind, too, that for some DB2 security features, such as multilevel security, it is necessary for DB2 to interface with an external security product.

Utility Enhancement Tools

The IBM DB2 COPY , LOAD , RECOVER , REORG , and UNLOAD utilities are notorious for their inefficiency, sometimes requiring more than 24 hours to operate on very large DB2 tables. These utilities are required to populate, administer, and organize DB2 databases.

Several vendors provide support tools that replace the DB2 utilities and provide the same functionality more efficiently. For example, one vendor claims that its REORG utility executes six to ten times faster than the DB2 REORG utility. These claims must be substantiated for the applications at your organization, but enough inefficiencies are designed into the IBM DB2 utilities to make this claim believable.

Before committing to an alternate utility tool, be sure that it conforms to the following requirements:

  • Does not subvert the integrity of the data in the DB2 tables.

  • Minimally provides the same features as the corresponding DB2 utility. For example, if the DB2 REORG utility can reorganize both indexes and table spaces, the enhanced REORG tool must be capable of doing the same.

  • Does not subvert standard DB2 features, when possible. For example, DB2 image copies are maintained in the DB2 Catalog. The enhanced COPY tool, therefore, should store its image copies there as well.

  • Provides an execution time at least twice as fast as the corresponding DB2 utility. For example, if the DB2 LOAD utility requires 20 minutes to load a table, the enhanced LOAD tool must load the same table in at least 10 minutes. (This should not be a hard-and-fast rule. Sometimes even a moderate increase in processing time is sufficient to cost-justify a third-party utility tool, especially now that IBM is charging extra for their utilities.)

  • Corrects the deficiencies of the standard DB2 utilities, when possible. For example, the DB2 LOAD utility will not load data in sequence by the clustering index. An enhanced tool might provide this capability.

  • When testing utility tools from different vendors, ensure that you are conducting fair tests. For example, always reload or recover prior to testing REORG utilities so that you don't skew your results due to different levels of table space organization. Additionally, always run the tests for each tool on the same object with the same amount of data.

CAUTION

IBM utility I/O is charged to the DB2 subsystem. The third-party tool will most likely charge I/O to the batch utility job.


CAUTION

Third-party utility execution usually cannot be monitored using the -DISPLAY UTILITY command. Some ISVs provide an equivalent display capability for their utilities, though.


CAUTION

Some third-party utility products call an IBM utility product for certain features and functionality. Before deciding to completely replace IBM with another vendor's utilities, be sure you understand the limitations that can occur because the IBM utilities will no longer be available.


One last category of DB2 utility tool is the utility manager. This type of tool provides administrative support for the creation and execution of DB2 utility jobstreams. These utility generation and management tools can do the following:

  • Automatically generate DB2 utility parameters and JCL, with correct workspace assignments

  • Monitor DB2 utility jobs as they execute

  • Automatically schedule DB2 utilities when exceptions are triggered

  • Assist in the scheduling of DB2 utilities to kick off the most important ones first, or to manage the available batch window

  • Restart utilities with a minimum of intervention. For example, if a utility cannot be restarted, the tool automatically issues a -TERM UTIL command and resubmits the utility.
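The exception-triggered generation described in the list above can be sketched as follows: statistics gathered by RUNSTATS drive the decision, and objects exceeding a disorganization threshold get a REORG control statement. The threshold, the statistic, and the object names (here the DB2 sample table spaces) are illustrative; a real utility manager would read the statistics from the DB2 Catalog and wrap each statement in JCL with correct work data set allocations.

```python
# Hypothetical exception-triggered REORG generation. Objects whose
# disorganization statistic exceeds the threshold are scheduled;
# the rest are skipped until the next statistics collection.
REORG_THRESHOLD = 10.0   # percent of mislocated rows (illustrative)

STATS = [
    # (table space, pct of rows far from their clustering position)
    ("DSN8D81A.DSN8S81E", 23.5),
    ("DSN8D81A.DSN8S81D",  4.2),
]

def generate_reorgs(stats, threshold=REORG_THRESHOLD):
    """Return a REORG control statement for each table space
    whose statistic exceeds the threshold."""
    return [
        f"REORG TABLESPACE {ts} SHRLEVEL REFERENCE"
        for ts, pct in stats
        if pct > threshold
    ]

for stmt in generate_reorgs(STATS):
    print(stmt)
```

Driving utility generation from thresholds like this is what turns reactive reorganization into the proactive tuning mentioned earlier in the chapter.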



DB2 Developers Guide (5th Edition)
ISBN: 0672326132
Year: 2004
Pages: 388
