

Tuning the z/OS and OS/390 Environment

Operating system tuning is a complex task best accomplished by extensively trained technicians. All DB2 users, however, should understand the basics of z/OS resource exploitation and the avenues for tuning it. Operating system tuning, as it affects DB2 performance, can be broken down into four areas:

  • Memory use

  • CPU use

  • I/O use

  • Operating system environment parameters

Now turn your attention to each of these four areas. The sections that follow offer various tuning guidelines and strategies along the way.

Tuning Memory Use

How does DB2 utilize available memory? Before answering this question, you need a basic understanding of what memory is and how it is used by z/OS. Memory is the working storage available for programs and the data the programs use as they operate.

Storage is often used as a synonym for memory. MVS stands for Multiple Virtual Storage, which refers to the operating system's capability to manage virtual memory. To manage virtual memory, the operating system uses a large pool of memory, known as virtual storage, to "back up" real storage. (Real storage is also called central storage; expanded storage, on older processors, was a separate memory tier used as a fast paging buffer.)

Real storage is addressable. Programs and their data must be placed in real storage before they can run. Virtual memory management is the reason that multiple address spaces can execute concurrently, regardless of the physical memory they eventually use. This way, the system can process more jobs than can be held in real storage; information is swapped back and forth between virtual storage and real storage, a process known as paging .

There are two types of paging. The first, moving data between virtual and real storage, is inexpensive in terms of resource consumption and occurs regularly. As more real storage is requested, a second type of paging can result: moving portions of memory to disk temporarily. This type is expensive and should be avoided.

TUNING STRATEGY

Consider using storage isolation to fence the DB2 address spaces. Doing so prevents DB2 from paging to disk. Storage isolation must be implemented by systems programmers.


Virtual storage can be broken down further in two ways:

  • Common area versus private area

  • Above the line versus below the line

The common area is the portion of virtual storage addressable from any address space. The private area stores data that is addressable by only an individual address space. A common area and private area exist both above and below the line. But what does that mean?

Above and below the line refers to an imaginary line in virtual storage at the 16-megabyte level. Memory above the line is often called extended storage. In earlier versions of MVS, 16 megabytes was the upper limit for virtual and real storage addressability. Later releases of MVS added addressability above the 16-megabyte line, and z/OS provides addressability up to 16 exabytes. The constraints imposed by the addressing schemes of older systems, however, can cause dense packing of applications into memory below the line. Systems that use memory above the line provide more efficient memory management, as well as relief for systems requiring memory use below the line.

How does DB2 fit into this memory structure? The answer differs quite a bit depending on whether you are running DB2 V8 or a previous release. Let's discuss releases prior to V8 first (see Figure 27.1). DB2 manages memory efficiently, making use of extended storage when possible. A well-tuned DB2 subsystem requires less than 2 megabytes of virtual storage below the line. The factors that affect below-the-line storage are DSMAX and the number of threads using functions (such as AMS) that still run below the 16MB line.

Figure 27.1. DB2 memory use (pre-V8).
graphics/27fig01.gif

z/Architecture, DB2 V8, and 64 Bits
graphics/v8_icon.gif

One of the biggest impacts of DB2 V8 is the requirement to be running a zSeries machine and z/OS v1.3 or greater. DB2 V8 does not support older hardware, nor does it support OS/390. Owing to these architectural requirements, DB2 gains the ability to support large virtual memory, surmounting the 2GB real storage limitation imposed by S/390's 31-bit addressing.


Moving from the 31-bit architecture of past operating systems to a 64-bit architecture allows DB2 to access much more virtual storage. Using z/OS, DB2 can deploy a single large address space of up to 16 exabytes (2^64 bytes). This architecture replaces both hiperspaces and data spaces. Virtual storage management is improved because it is all in one place and simpler, thereby improving the scalability, availability, and performance of your DB2 subsystems.

graphics/v8_icon.gif

Another by-product of this additional storage is that IBM has moved many of DB2's storage areas above the 2GB bar. Refer to Figure 27.2 for an updated look at memory usage by DB2 V8. As of V8, most of the DB2 code now runs above the 2-gigabyte bar. Most of the DSNDBM1 address space, the log manager, parts of DDF, buffer pools, data sharing castout buffers, IRLM locks, the RID pool, sort pools, compression dictionaries, and DBD and OBD objects in the EDM pool have all been moved above the bar.


Figure 27.2. DB2 V8 memory use.

graphics/27fig02.gif


A Rundown of DB2's Memory Structures

Let's examine the primary consumers of memory in a DB2 subsystem and see how we might be able to tune them for better performance.

Buffer Pools

DB2 provides 80 virtual buffer pools and optional hiperpools (pre-V8) for maintaining recently accessed table and index pages in virtual storage. The Buffer Manager component of DB2 manages I/O and the use of buffers to reduce the cost of I/O. If the Buffer Manager can satisfy a GETPAGE request from memory in the buffer pool rather than from disk, performance can increase significantly.

DB2 provides buffer pools as follows:

  • 50 buffer pools for 4KB pages (named BP0 through BP49)

  • 10 buffer pools for 8KB pages (named BP8K0 through BP8K9)

  • 10 buffer pools for 16KB pages (named BP16K0 through BP16K9)

  • 10 buffer pools for 32KB pages (named BP32K and BP32K1 through BP32K9)

The size of a buffer pool is specified to DB2 in pages. As of DB2 V8, the maximum total buffer pool size is 1 terabyte.
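To see how your buffer pools are currently allocated and how they are being used, you can issue the DISPLAY BUFFERPOOL command; the DETAIL output includes getpage, prefetch, and I/O counts that feed directly into the tuning strategies covered in Chapter 28:

 -DISPLAY BUFFERPOOL(ACTIVE) DETAIL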

Tuning DB2 buffer pools is a critical piece of overall DB2 subsystem tuning. Strategies for effective buffer pool tuning are presented in Chapter 28, "Tuning DB2's Components," in the section on DB2 subsystem tuning.

In addition to the buffer pools, DB2 creates a RID pool and a sort pool.

RID Pool

RIDs processed during the execution of list prefetch are stored in the RID pool. Remember that hybrid joins and multiple-index access paths use list prefetch. The RID pool should be increased as your applications' use of list prefetch and multiple-index access paths increases.

The size of the RID pool can be explicitly specified using the MAXRBLK DSNZPARM parameter. The RID pool can range in size from 128K to 10 GB. The default is 8000K. Of course, you can set the RID pool to 0 too, which has the effect of disabling the RID pool, causing DB2 to avoid access paths and join methods that require RID pool storage.

CAUTION

Do not set the RID pool to 0. It is almost always better to allow DB2 to determine when it makes sense to use the RID pool for accessing data than to arbitrarily disable the RID pool.


graphics/v8_icon.gif

As of DB2 V8, 25% of the RID pool is located below the 2-GB bar and 75% is located above the 2-GB bar.


Sort Pool

The sort pool, sometimes called a sort work area, is used when DB2 invokes a sort. Before discussing the sort pool, let's examine the DB2 sorting process, shown in Figure 27.3. The RDS (Relational Data Services) component of DB2 uses a tournament sort technique to perform internal DB2 sorting.

Figure 27.3. How DB2 sorts.

graphics/27fig03.gif


The tournament sort works as follows:

  • Rows to be sorted are passed through a tree structure like the one in Figure 27.3. A row enters the tree at the bottom. It is compared to rows already in the tree, and the lowest values (for ascending sequence) or the highest values (for descending sequence) are moved up the tree.

  • When a row emerges from the top of the tree, it is usually placed in an ordered set of rows in memory. Sometimes, however, a value emerges from the top of the tree but does not fit into the current ordered set because it is out of range.

  • When a row does not fit into the current ordered set, the complete ordered set of rows is written to a logical work file. This ordered set is then called a run.

  • Logical work files are located in the buffer pool. As logical work files grow, sometimes they are written to physical work files. DB2 uses the DSNDB07 database to store physical work files.

  • After all the rows have passed through the tree, the accumulated runs are merged, forming a sorted results set. This set is returned to the requester, completely sorted.

How, then, does the sort pool affect RDS sorting? As the sort pool becomes larger, so does the tree used for the tournament sort. As the tree becomes larger, fewer runs are produced. As fewer runs are produced, less data must be merged and the likelihood of using DSNDB07 diminishes. The result is a more efficient sort process.

You can use the following formula to estimate an efficient sort pool for each process:

 

 32000 * (12 + sort key length + sort data length + 4) 

Of course, each process will have a different optimal sort pool size because the length of the sort key and the length of the data to be sorted will vary.
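For example, assuming a hypothetical sort with a 12-byte sort key and 100 bytes of sort data per row, the formula yields:

 32000 * (12 + 12 + 100 + 4) = 32000 * 128 = 4,096,000 bytes (roughly 4000K)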

Each concurrent sort operation is assigned a sort pool. The sort work area can range in size from a minimum of 240K to a maximum of 128 MB. The size of the sort pool can be explicitly specified using the SRTPOOL DSNZPARM parameter. If the sort pool is not explicitly specified, its default value is 2000K.

TUNING STRATEGY

As with all DB2 parameters, it is better to explicitly specify RID and sort pool sizes than to allow them to default.


EDM Pool

The EDM pool is used to maintain DBDs, cursor tables (for plans), package tables, the authorization cache, and the dynamic statement cache needed by executing SQL statements. The size of the EDM pool is specified in the DSNZPARMs and must be determined before starting DB2. To estimate the size of the EDM pool, you must have the following information:

  • The maximum number of concurrently executing plans and packages

  • The average plan and package size

  • The average cache size for plans

  • The number of concurrently accessed DBDs

  • The average DBD size

For new DB2 subsystems, letting the DB2 installation process use default values to calculate the size of the EDM pool is best. For existing DB2 subsystems, you can arrive at the average plan and package sizes by issuing the following SQL queries. For the average plan size, use this query:

 

 SELECT   AVG(PLSIZE)
 FROM     SYSIBM.SYSPLAN

For the average package size, use this query:

 

 SELECT   AVG(PKSIZE)
 FROM     SYSIBM.SYSPACKAGE

Add the two averages and divide by 2 to arrive at the total average plan and package size.
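If you prefer a single statement, the same figure can be computed by combining the two averages with nested table expressions. This is merely a sketch; running the two queries above and averaging the results by hand works just as well:

 SELECT   (P.AVGPLAN + K.AVGPKG) / 2
 FROM     (SELECT AVG(PLSIZE) AS AVGPLAN
           FROM   SYSIBM.SYSPLAN) AS P,
          (SELECT AVG(PKSIZE) AS AVGPKG
           FROM   SYSIBM.SYSPACKAGE) AS K;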

TUNING STRATEGY

Binding with the ACQUIRE(USE) option results in smaller plan sizes than binding with ACQUIRE(ALLOCATE). Additional code is stored with the plan for ACQUIRE(ALLOCATE). To reduce the amount of storage used by plans and packages in the EDM pool, specify ACQUIRE(USE) at bind time.

However, plan size usually should not be the determining factor for the specification of the ACQUIRE parameter. Instead, follow the guidelines presented in Chapter 13, "Program Preparation."


Another factor influencing the overall size of plans is the authorization cache. You can associate an authid cache for each plan by setting the size in the CACHESIZE parameter of the BIND command.

TUNING STRATEGY

Binding with the CACHESIZE(0) option also results in smaller plan sizes. However, the caching of authids enhances performance. So, once again, plan size should not be the determining factor in setting CACHESIZE either. The default cache size is 1024 bytes, which is probably overkill for many shops. Use the formula specified in Chapter 13 to calculate an appropriate CACHESIZE for each plan instead of relying on the default.


NOTE

Authids are not checked for plans that can be executed by PUBLIC. Avoid specifying a CACHESIZE for these plans.


For the average size of the plan authorization ID cache, use the following query:

 

 SELECT   AVG(CACHESIZE)
 FROM     SYSIBM.SYSPLAN;

Package authorization caching is a system-wide option. Caching is either enabled or disabled for the entire subsystem and a global cache is used. Therefore, package authorization caching does not have an impact on package size.

To arrive at the average DBD size, you must know the average number of columns per table and the average number of tables per database. A general formula for calculating the average DBD size follows:

 

 average DBD size = [(average # of tables per database) x 1K]
                  + [(average # of columns per table) x .5K]
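For example, assuming a hypothetical shop averaging 20 tables per database and 40 columns per table, the formula yields:

 average DBD size = (20 x 1K) + (40 x .5K) = 20K + 20K = 40K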

You can use the following queries to arrive at the average number of tables per database and the average number of columns per table. First, to determine the average number of tables per database, issue the following query:

 

 SELECT   COUNT(*) / COUNT(DISTINCT(DBNAME))
 FROM     SYSIBM.SYSTABLES
 WHERE    TYPE = 'T';

You can use the following query to arrive at the average number of columns per table:

 

 SELECT   AVG(COLCOUNT)
 FROM     SYSIBM.SYSTABLES
 WHERE    TYPE = 'T';

To arrive at the average number of concurrent plans, packages, and DBDs, you would be wise to accumulate a series of DB2 accounting statistics for your peak processing time. Use these figures to estimate the number of concurrent plans.

Determining the average number of concurrent packages is not easy. You must completely understand your particular DB2 implementation to be successful at determining this number. Asking the following questions can help:

  • How many plans use packages instead of simply DBRMs? Issue the following two queries to determine this information:

     

     SELECT   COUNT(DISTINCT PLANNAME)
     FROM     SYSIBM.SYSPACKLIST;

     SELECT   COUNT(*)
     FROM     SYSIBM.SYSPLAN
     WHERE    OPERATIVE = 'Y'
     AND      VALID IN('Y','A');

  • On average, how many versions of a package are permitted to remain in the DB2 Catalog? How many are used?

To determine the average number of concurrent DBDs, you must understand each application's database use. If an application that typically uses three databases is much more active than another that uses 12 databases, you must factor this information into your EDM pool sizing strategy. Obtaining this information can be difficult, so you might need to estimate. A general calculation for the EDM pool size follows:

 

 EDM Pool Size = [(((#CPP) + (#TPP/4)) x PP-AVG) +
                  (((#CPP) + (#TPP/4)) x C-AVG) +
                  ((#DBD) x DBD-AVG) + 50K] x 1.25

Value      Description
#CPP       Number of concurrent plans and packages
#TPP       Total number of plans and packages
#DBD       Total number of concurrently used databases
PP-AVG     Average size of all plans and packages
C-AVG      Average authorization cache size
DBD-AVG    Average DBD size
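To illustrate the formula, assume hypothetical values of 200 concurrent plans and packages (#CPP), 2,000 total plans and packages (#TPP), 20 concurrently used databases (#DBD), an 8K average plan/package size, a 1K average cache size, and a 40K average DBD size:

 EDM Pool Size = [((200 + 500) x 8K) + ((200 + 500) x 1K) + (20 x 40K) + 50K] x 1.25
               = [5600K + 700K + 800K + 50K] x 1.25
               = 7150K x 1.25 = approximately 8940K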


The systems programmer calculates the size of the EDM pool during DB2 installation based on estimates of the values discussed in this section. The installation process for DB2 contains the preceding algorithm. The calculation used by the DB2 installation process is only as good as the information supplied to it. The default values (calculated by DB2 during the installation process) are adequate for most shops if correct estimates were input.

As DB2 use expands, however, the EDM pool should expand proportionally. The size of the EDM pool can be explicitly specified using the EDMPOOL DSNZPARM parameter.

As your DB2 usage patterns change, plan and package sizes can grow, necessitating EDM pool growth. For example, using DEGREE(ANY) instead of DEGREE(1) increases plan and package sizes.

TUNING STRATEGY

Overestimate the size of the EDM pool. Having EDM pool memory available as the number of DB2 plans, packages, and databases increases can be better than reacting to a problem after it occurs. Periodically monitor the number of plans, packages, and databases in conjunction with usage statistics, and increase the EDM pool as your DB2 use increases.


Additionally, as more applications are made operational, or as more concurrent users start to access existing applications, EDM pool usage will increase.

TUNING STRATEGY

Prior to DB2 V8, if your DSNDBM1 address space is storage-constrained, you can move some of the EDM pool into a data space. This is particularly helpful if you use dynamic statement caching. You can move some EDM storage into a data space by specifying a non-zero value for EDMPOOL DATA SPACE SIZE on the DSNTIPC installation panel.

graphics/v8_icon.gif

As of DB2 V8, this option is not available because data spaces are no longer supported. Of course, more memory is available due to z/OS and the ability of the EDM pool to utilize space above the 2GB bar.



TUNING STRATEGY

graphics/v8_icon.gif

Because cached dynamic statements are stored in a separate pool in DB2 V8, re-evaluate your EDM pool storage needs. You might be able to decrease the size of the EDM pool after you move to DB2 V8.



graphics/v8_icon.gif

DB2 V8 divides the EDM pool into separate storage areas as follows:

  • The main EDM pool for managing CTs and PTs in use, SKCTs and SKPTs for the most frequently used applications, and cache blocks for your plans that have caches.

  • The EDM DBD cache for the DBDs in use and DBDs referred to by the SKCTs and SKPTs for the most frequently used applications.

  • The EDM statement cache for the skeletons of the most frequently used dynamic SQL statements, if your system has enabled the dynamic statement cache.

DB2 Working Storage

DB2 working storage is memory used by DB2 as a temporary work area. The best way to estimate the working storage size for DB2 is to separate the number of concurrent DB2 users into users of dynamic SQL and users of static SQL. Dynamic SQL uses more working storage (but possibly less of the EDM pool) than static SQL. Estimate approximately 25KB per static SQL user and 75KB per dynamic SQL user. Additionally, DB2 itself uses about 600K. Therefore, you can estimate DB2 working storage usage by using the following:

 

 (concurrent static SQL users x 25K) + (concurrent dynamic SQL users x 75K) + 600K 
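For example, a hypothetical subsystem supporting 100 concurrent static SQL users and 20 concurrent dynamic SQL users would require roughly:

 (100 x 25K) + (20 x 75K) + 600K = 2500K + 1500K + 600K = 4600K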

TUNING STRATEGY

You cannot explicitly tune the amount of memory used by concurrent static and dynamic SQL. Implicit control over the number of users can be established by the DSNZPARM values specified for IDFORE, IDBACK, and CTHREAD.


DB2 Code

The DB2 code itself requires approximately 4,300KB of storage. This value is inflexible.

IRLM

Locks are maintained in memory by the IRLM. This capability enables DB2 to process a lock request quickly and efficiently without a physical read.

TUNING STRATEGY

If the IRLM start-up parameters specify PC=Y, the locks are stored in the private address space for the IRLM. PC=N stores the locks in ECSA (extended common storage), so this specification is more efficient than PC=Y.

As of DB2 V8, all locks are stored in the private address space for the IRLM, so this option is no longer available.


The IRLM uses approximately 250 bytes per lock. This number is constant whether you are using row locks or page locks, so keep that in mind if you are considering row-level locking. You will likely consume much more storage with row locking because DB2 will probably have to lock more rows than it would pages (depending on the number of rows per page for the table space).
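A quick, hypothetical calculation illustrates the difference. Assume 10,000 pages are locked concurrently and each page holds roughly 20 rows:

 Page-level locking:  10,000 locks x 250 bytes = approximately 2.5MB
 Row-level locking:  200,000 locks x 250 bytes = approximately 50MB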

Open Data Sets

Each open VSAM data set requires approximately 1.8KB for the VSAM control block that is created. Refer to Chapter 5, "Data Definition Guidelines," for a discussion of the CLOSE parameter for DB2 table spaces and indexes and its effect on performance.

TUNING STRATEGY

Use segmented table spaces with multiple tables to reduce the amount of memory used by open data sets. When each table is assigned to a unique table space, DB2 must manage more open data sets: one for each table space and table combination. As the number of tables in a table space increases, DB2 must manage fewer open data sets. (All considerations for multi-table table spaces, as outlined in Chapter 5, still apply.)
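As a rough, hypothetical example, consider 200 tables, each with one index. With one table per table space, DB2 manages 400 data sets; grouping the tables 10 per segmented table space reduces that to 220:

 One table per table space:  (200 + 200) data sets x 1.8K = approximately 720K
 10 tables per table space:  ( 20 + 200) data sets x 1.8K = approximately 396K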


TUNING STRATEGY

The memory cost per open data set, approximately 1.8K, is small in comparison to the performance gains associated with leaving the data sets open to avoid VSAM open and close operations. Favor using CLOSE YES for most of your table spaces and indexes. Doing so leaves data sets open until the maximum number of open data sets is reached. At this point, DB2 chooses the least recently used data sets to close. You might want to consider specifying CLOSE NO for your most critical objects so that other objects are closed before your most critical objects.


Total Memory Requirements

By adding the memory requirements, as specified in the preceding sections, for the EDM pool, buffer pools, RID pool, sort pool, working storage, open data sets, and IRLM for each DB2 subsystem, you can estimate the memory resources required for DB2. If insufficient memory is available, consider limiting the availability of DB2 until more memory can be procured.
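As a purely illustrative example (all figures are hypothetical assumptions, not recommendations), a medium-sized subsystem might estimate its total as follows:

 Buffer pools       200,000 x 4K pages    =  800MB
 EDM pool                                 =    9MB
 RID pool                                 =    8MB
 Sort pools         10 sorts x 2000K      =   20MB
 Working storage                          =    5MB
 DB2 code                                 =    4MB
 IRLM locks         20,000 x 250 bytes    =    5MB
 Open data sets     3,000 x 1.8K          =    5MB
 Total                                    =  approximately 856MB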

TUNING STRATEGY

DB2 uses virtual and real storage. DB2's performance increases as you assign more memory. If you intend to have very large DB2 applications, do not be stingy with memory.


Tuning CPU Use

Tuning CPU use is a factor in reducing DB2 resource consumption and providing an efficient environment. The major factors affecting CPU cost are as follows:

  • Amount and type of I/O

  • Number of GETPAGE requests

  • Number of columns selected in the SQL statement

  • Number of predicates applied per SQL statement

The following paragraphs offer additional information about each of these factors, including suggested tuning strategies.

By reducing physical I/O requests, you decrease CPU consumption. Similarly, the use of sequential prefetch can decrease CPU cost because more data is returned per physical I/O.

TUNING STRATEGY

Encourage the use of sequential prefetch when every (or almost every) row in a table will be accessed. You can do so by coding SELECT statements without predicates, by coding SELECT statements with minimal predicates on columns that are not indexed, or, sometimes, by specifying a large number in the OPTIMIZE clause (for example, OPTIMIZE FOR 1000000 ROWS). Because the OPTIMIZE FOR n ROWS clause was originally designed to reduce the estimated number of rows to be retrieved (not to increase that number), this trick does not always work.
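For instance, a sketch using the sample EMP table (column list chosen only for illustration) that encourages a scan with sequential prefetch might look like this:

 SELECT   EMPNO, LASTNAME, WORKDEPT
 FROM     DSN8610.EMP
 OPTIMIZE FOR 1000000 ROWS;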


Each GETPAGE request causes the Data Manager to request a page from the Buffer Manager, which causes additional CPU use.

TUNING STRATEGY

If possible, serialize data requests in static applications so that requests for the same piece of data are not duplicated. If a program requires the same data more than once, try to arrange the processes that act on that data to be contiguous, such that a single I/O is required instead of multiple I/Os. For example, if an employee's department number is required in three separate parts of a transaction, select the information once and save it for the other two times.


As the number of selected columns increases, DB2 must do more work to manipulate these columns, thereby using excess CPU.

TUNING STRATEGY

Code each SELECT statement (even ad hoc SQL) to return only columns that are absolutely needed.


As your number of predicates increases, DB2 must do more work to evaluate the predicates and ensure that the data returned satisfies the requirements of the predicates.

TUNING STRATEGY

Avoid coding redundant predicates. Use your knowledge of the application data in coding SQL. For example, if you know that employees must have an EDLEVEL of 14 or higher to hold the title of MANAGER, use this knowledge when you're writing SQL statements. The EDLEVEL predicate in the following query should not be coded because it is redundant, given the preceding qualification:

 

 SELECT   EMPNO, LASTNAME
 FROM     DSN8610.EMP
 WHERE    JOB = 'MANAGER'
 AND      EDLEVEL >= 14;

Document the removal of redundant predicates in case policy changes. For example, if managers can have an education level of 10, the EDLEVEL predicate is no longer redundant and must be added to the query again. Because tracking this information can be difficult, you should avoid removing predicates that are currently redundant but that might not always be so.


Tuning I/O

I/O is probably the single most critical factor in the overall performance of your DB2 subsystem and applications. This factor is due to the physical nature of I/O: it is limited by hardware speed. The mechanical functionality of a storage device is slower and more prone to breakdown than the rapid, chip-based technologies of CPU and memory. For this reason, paying attention to the details of tuning the I/O characteristics of your environment is wise.

What is I/O? Simply stated, I/O is a transfer of data by the CPU from one medium to another. I stands for input, or the process of receiving data from a physical storage medium. O stands for output, which is the process of moving data to a physical storage device. In every case, an I/O involves moving data from one area to another.

In the strictest sense of the term, an I/O can be a movement of data from the buffer pool to a working storage area used by your program. This type, however, is a trivial I/O with a lower cost than an I/O requiring disk access, which is the type of I/O you must minimize and tune.

The best way to minimize the cost of I/O is to use very large buffer pools. This way, you can increase the possibility that any requested page is already in memory, thereby tuning I/O by sometimes eliminating it. In general, I/O decreases as the size of the buffer pools increases. This method, however, has drawbacks. Buffer pools should be backed up with real and virtual memory, but your shop might not have extra memory to give DB2. Also, DB2 basically takes whatever memory you give it and almost always can use more.

NOTE

Of course, another way to minimize the cost of I/O is to utilize faster hardware. IBM's Enterprise Storage System (ESS), sometimes referred to as SHARK, can process data requests faster than older disk storage devices (such as 3380 or 3390 DASD units). The majority of improvements in ESS performance come from improvements to the bus architecture, higher parallelism, improved disk interconnection technology, and increased ESCON channel attachments.


Even with large buffer pools, data must be read from the disk storage device at some point to place it in the buffer pools. Tuning I/O, therefore, is wise.

The number of all reads and writes makes up the I/O workload incurred for any single resource. Therefore, the cost of I/O depends on the disk device, the number of pages retrieved per I/O, and the type of write operation.

The characteristics of the disk device that contains the data being read include the speed of the device, the number of data sets on the device, the proximity of the device to the device controller, and concurrent access to the device. You can improve performance by moving DB2 data sets to disk devices with faster retrieval rates. Consider the following estimates for retrieving a single page from different types of devices:

Device         Retrieval Cost
3380           .020 to .028 seconds per page
3390           .015 to .020 seconds per page
Solid State    .004 to .006 seconds per page


Obviously, a solid-state device offers better performance because it lacks the mechanical aspects of 3380 and 3390 DASD units.

The second factor affecting I/O cost is the number of pages retrieved per I/O. As I indicated in the preceding section, sequential prefetch can increase the number of pages read per I/O. Sequential prefetch also functions as a read-ahead engine. Reads are performed in the background, before they are needed and while other useful work is being accomplished. This way, I/O wait time can be significantly reduced.

Refer to the following average response times. (Note that all times are approximate.) A single page being read by sequential prefetch can be two to four times more efficient than a single page read by synchronous I/O.

Device    Sequential Prefetch    Sequential Prefetch (per page)    Synchronous Read
3380      80ms                   2.5ms                             25ms
3390      40ms                   1.5ms                             10ms
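Using the approximate 3390 figures above, the effect on a hypothetical scan of 1,000 pages is dramatic:

 Sequential prefetch:  1,000 pages x 1.5ms = approximately 1.5 seconds
 Synchronous reads:    1,000 pages x 10ms  = approximately 10 seconds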


Better response times can be achieved with modern storage devices. In a document titled "DB2 for OS/390 Performance on IBM Enterprise Storage Server," IBM has published a prefetch rate of 11.8 MB/second with ESS and 5.8 MB/second with RAMAC-3.

The third factor in I/O cost is the type of write operation: asynchronous versus synchronous. DB2 can not only read data in the background but also write data in the background. In most cases, DB2 does not physically externalize a data modification to disk immediately following the successful completion of the SQL DELETE , INSERT , or UPDATE statement. Instead, the modification is externalized to the log. Only when the modified page is removed from DB2's buffers is it written to disk. This process is called an asynchronous, or deferred, write. Synchronous writes, on the other hand, are immediately written to disk. DB2 tries to avoid them, and it should. If you ensure that sufficient buffers are available, synchronous writes can be avoided almost entirely.

Several types of I/O must be tuned. They can be categorized into the following five groups:

  • Application I/O

  • Internal I/O

  • Sort I/O

  • Log I/O

  • Paging I/O

In the sections that follow, you will examine each of these types of I/O.

Application I/O

Application I/O is incurred to retrieve and update application data. As DB2 applications execute, they read and modify data stored in DB2 tables. This process requires I/O.

You can apply the following strategies to tune all five types of I/O covered here, not just application I/O. They are of primary importance, however, for application I/O.

TUNING STRATEGY

Tune I/O by increasing the size of the buffer pools. With larger buffer pools, application data can remain in the buffer pool longer. When data is in the buffer pool, it can be accessed quickly by the application without issuing a physical I/O.
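Buffer pool sizes can be changed dynamically with the ALTER BUFFERPOOL command. The following sketch enlarges a hypothetical application buffer pool, BP1, to 40,000 pages; the appropriate size depends on your workload and the memory available to back it:

 -ALTER BUFFERPOOL(BP1) VPSIZE(40000)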


TUNING STRATEGY

Tune I/O speed by using the fastest disk drives available. For example, replace older 3380 devices with newer, faster 3390 devices, RAMAC, or ESS. Most applications require multiple I/Os as they execute. For each I/O, you can save from 15ms to 40ms with 3390s instead of 3380s. The performance gains can be tremendous for applications requiring thousands (or even millions) of I/Os.


TUNING STRATEGY

For non-SMS users only: Use proper data set placement strategies to reduce disk head contention. To do so, follow these basic rules:

  • Avoid placing a table's indexes on the same disk device as the table space used for the table.

  • Analyze the access pattern for each application. When tables are frequently accessed together, consider placing them on separate devices to minimize contention.

  • Limit shared disk. Putting multiple, heavily accessed data sets from different applications on the same device is unwise. Cross-application contention can occur, causing head movement, undue contention, and I/O waits. Be cautious not only of high-use DB2 tables sharing a single volume, but also of mixing DB2 tables with highly accessed VSAM, QSAM, and other data sets.

  • Place the most heavily accessed table spaces and indexes closest to the disk controller unit. The closer a disk device is on the string to the actual controller, the higher its priority will be. The performance gain from this placement is minimal (especially for 3390 devices), but consider this option when you must squeeze out every last bit of performance.

  • Avoid having table space and index data sets in multiple extents. When the data set consists of more than a single extent, excess head movement can result, reducing the efficiency of I/O.

  • graphics/v8_icon.gif Use data-partitioned secondary indexes (DPSIs) to explicitly partition secondary indexes to match the partitioned table space, and distribute the DPSI and table space partitions over multiple devices.

  • Use the PIECESIZE parameter to explicitly distribute non-partitioned indexes over multiple devices.

  • Favor allocation of data sets in cylinders.


Another factor impacting the efficiency of accessing DB2 application data is partitioning. When data is partitioned, it is more likely that DB2 can utilize query parallelism to read data.

TUNING STRATEGY

Consider partitioning simple and segmented table spaces to take advantage of DB2's parallel I/O capabilities. Although partitioning is not required, partitioning can help to encourage DB2 to use parallelism.


Internal I/O

DB2 requires internal I/Os as it operates. Different types of data must be read and updated by DB2 as applications, utilities, and commands execute. This type of I/O occurs during the following:

  • Recording utility execution information in the DB2 Directory

  • Updating the DB2 Catalog as a result of DCL, DDL, or utility executions

  • Reading the DB2 Catalog and DB2 Directory when certain DB2 commands (for example, -DISPLAY DATABASE ) are issued

  • Retrieving skeleton cursor tables, skeleton plan tables, and DBDs from the DB2 Directory to enable programs to execute

  • Retrieving data from the DB2 Catalog during BIND , REBIND , and dynamic SQL use

  • Miscellaneous DB2 Catalog I/O for plans marked as VALIDATE(RUN) and for other runtime needs

  • Reading the Resource Limit Specification Table

TUNING STRATEGY

Limit activities that incur internal I/O during heavy DB2 application activity. This way, you can reduce the possibility of application timeouts due to the unavailability of internal DB2 resources resulting from contention.


TUNING STRATEGY

To enhance the performance of I/O to the DB2 Catalog, consider placing the DB2 Catalog on a solid-state device that uses memory chips rather than mechanical disk. Although solid-state devices are often expensive, they can reduce I/O cost significantly. A power outage, however, can cause the DB2 Catalog to be unavailable or damaged. For many shops, this risk might be too great to take. You can find additional tuning strategies for the DB2 Catalog and DB2 Directory in Chapter 28, "Tuning DB2's Components."


Sort I/O

Sorting can cause an I/O burden on the DB2 subsystem. To sort very large sets of rows, DB2 sometimes uses physical work files in the DSNDB07 database to store intermediate sort results. DSNDB07 consists of table spaces stored on disk. The use of disk-based work files for sorting can dramatically affect performance.

TUNING STRATEGY

Consider placing DSNDB07 on a solid-state device when applications in your DB2 subsystem require large sorts of many rows or the sorting of a moderate number of very large rows.


TUNING STRATEGY

Tune DSNDB07 because you will probably use it eventually. Be sure that multiple table spaces are defined for DSNDB07 and that they are placed on separate disk devices. Furthermore, ensure that the underlying VSAM data sets for the DSNDB07 table spaces are not using multiple extents.


TUNING STRATEGY

If the cost of sorting is causing a bottleneck at your shop, ensure that you are using the following sorting enhancements:

  • The microcode sort feature can improve the cost of sorting by as much as 50%. Microcode is very efficient software embedded in the architecture of the operating system. The microcode sort can be used only by DB2 V2.3 and higher and only when DB2 is run on one of the following CPU models: ES/9000 Model 190 and above, ES/3090-9000T, and ES/3090 Models 180J, 200J, 280J, and above.

  • Provide for unlimited logical work files based on the size of the buffer pool. This capability can significantly reduce I/O because more sort data can be contained in memory rather than written out to DSNDB07 .

  • Define DSNDB07 in a separate buffer pool and tune it accordingly for sorting. Keep in mind that although most sort operations are sequential, sorting also requires some random processing.


TUNING STRATEGY

Be sure to create DSNDB07 work files appropriately. Define multiple work files of equal size. You should consider allowing these files to go into extents, as well. Secondary extents allow runaway queries to complete. If you would rather have a runaway query fail than have it acquire the storage for sort work files using extents, define the work files without the ability to take extents. If you allow extents, define them on all work files, not just the last one.


Log I/O

Log I/O occurs when changes are made to DB2 data. Log records are written to DB2's active log data sets for each row that is updated, deleted, or inserted. Every modification (with the exception of REORG LOG NO and LOAD LOG NO ) is logged by DB2 to enable data recovery. In addition, when you run the RECOVER utility to restore or recover DB2 table spaces, an active log data set (and sometimes multiple archive log data sets) must be read.

For these reasons, optimal placement of DB2 log data sets on disk is critical.

TUNING STRATEGY

Put your log data sets on your fastest disk devices. For example, a shop using 3380 and 3390 devices should place log data sets on 3390 disk volumes with the DASD fast write feature. DASD fast write is a caching technique that significantly enhances the speed of I/O for DB2 log data sets.

The two types of DB2 log data sets are active logs and archive logs. As the active log data sets are filled, DB2 invokes a process called log offloading to move information from the active logs to the archive logs. Log offloading can have a severe impact on the throughput of a DB2 subsystem.


TUNING STRATEGY

Ensure that your log data sets are on different volumes and on separate channels. Avoid placing more than one active log data set on the same disk volume. Otherwise, the whole reason for having dual active logs is negated and the overall performance of DB2 will be impaired significantly during the log offloading process.


Optimal utilization of tapes and tape drives is critical for an efficient DB2 log offloading process. Recall from Chapter 23, "Locking DB2 Data," that log offloading is the process of writing entries from the active log to the archive log.

TUNING STRATEGY

Consider making the active log the same size as a full cartridge. When the log is offloaded, the archive will utilize a full cartridge, resulting in fewer wasted tapes.

Of course, if you use disk archives then there is no reason to impose a size limit.


Paging I/O

Paging I/Os occur when memory is over-utilized and pages of storage are relocated temporarily to disk. When needed, they will be read from disk back into main storage. This process causes very high overhead.

TUNING STRATEGY

Avoid paging by fencing the DB2 address spaces as suggested in the section titled "Tuning Memory Use" at the beginning of this chapter.


TUNING STRATEGY

Increase the amount of real and virtual storage for your CPU. When you increase the amount of memory at the system's disposal, paging is less frequent.

In addition to the tuning of I/O at the data set level, you must monitor and tune I/O at the disk device level. The overall performance of I/O depends on the efficiency of each disk volume to which DB2 data sets have been allocated.


TUNING STRATEGY

Consistently monitor each disk volume to ensure that contention is minimal. You can do so with a third-party tool designed to report on the usage characteristics of disk devices. In general, if device contention for any disk volume is greater than 30%, an I/O problem exists. Each shop should analyze its disk usage patterns, reducing contention as much as possible given the shop's budgetary constraints. When contention is high, however, consider moving some data sets on the device to other, less active volumes.


Some disk devices offer hardware caching as an option for all data sets stored on the device. In these cases, the actual disk drive can be used to cache data reads. These features are not usually effective for reading DB2 data.

TUNING STRATEGY

Avoid caching for disk volumes containing DB2 application table space and index data sets. The benefits of caching are greatly reduced for most DB2 application processing because of the efficient, asynchronous manner in which DB2 can read data (using sequential prefetch) and write data (using deferred write).


RAMAC Devices

Some of the conventional wisdom regarding data set placement and I/O changes with RAMAC storage devices. A device is not a physical volume; it is a virtual volume spread across multiple physical disks within the RAMAC. For this reason, arm movement is not a concern.

With RAMAC, it is possible that you could place data sets on separate volumes only to have RAMAC place them on the same physical volume. For this reason, consider using SMS to place the data, and use DFDSS to move data sets when contention occurs.

Tuning Various z/OS Parameters and Options

Because z/OS is a complex operating system, it can be difficult to comprehend. In this section, I discuss, in easy-to-understand language, some environmental tuning options for z/OS.

The z/OS environment is driven by the Systems Resource Manager (SRM). The SRM functions are based on parameters coded by systems programmers in the SYS1.PARMLIB library. Three members of this data set are responsible for defining most performance-oriented parameters for MVS: OPT, IPS, and ICS. You can tune the items discussed in this chapter by modifying these members. However, I do not discuss how to set these parameters in this book.

You should not take this type of tuning lightly. z/OS tuning is complex, and a change made to benefit DB2 might affect another z/OS subsystem. All DB2 personnel in your shop (including management, database administration, and DB2, IMS, CICS, and z/OS systems programming) should discuss these types of tuning options before implementing them. Only a trained systems programmer should make these types of changes.

The first item to consider is whether a job is swappable. A swappable job can be temporarily swapped out of the system by MVS. When a job is swapped out, it is not processed. It therefore is not using CPU, cannot request I/O, and generally is dormant until it is swapped back into the system. Almost all of your jobs should be swappable so that MVS can perform as it was designed: maximizing the number of jobs that can be processed concurrently with a minimum of resources.

The DB2 address spaces, however, are non-swappable, so DB2 itself is never swapped out. Therefore, a DB2 application program requesting DB2 functions never has to wait for DB2 because it has been swapped out. The following list outlines which components of your overall environment can be swappable:

DB2            Non-swappable
CICS           Swappable or non-swappable
IMS            Non-swappable
TSO            Swappable
QMF            Swappable
Application    Swappable


TUNING STRATEGY

When a CICS subsystem is being used to access DB2, it should be defined as non-swappable to enhance response time (and thereby increase the performance) of the DB2/CICS transactions.


Usually, an application address space is swapped out so that z/OS can maintain even control over the processing environment. z/OS might determine that a job should be swapped out for the following reasons:

  • Too many jobs are running concurrently for all of them to be swapped in simultaneously. The maximum number of address spaces that can be simultaneously swapped in is controlled by the SRM based on parameters and the workload.

  • Another job needs to execute.

  • A shortage of memory.

  • Terminal wait. A TSO user might be staring at the screen, thinking about what to do next. Online TSO application programs do not need to be swapped in until the user takes another action.

The dispatching priority of an address space is a means of controlling the rate at which the address space can consume resources. A higher dispatching priority for an address space translates into faster performance because resources are more readily available to jobs with higher dispatching priorities. Controlling the dispatching priorities of jobs is an important tuning technique.

Normally, SRM controls the dispatching priority. Your shop may be using the Workload Manager (WLM) to control priorities. Systems programmers assign the dispatching priority of different address spaces. To ensure optimal DB2 performance, arrange the dispatching priorities of your DB2-related address spaces as shown in Figure 27.4. Batch application address spaces are generally dispatched below TSO (Long). Some critical batch jobs could be dispatched higher than TSO (Long).

Figure 27.4. Dispatching priority hierarchy.

graphics/27fig04.gif


TUNING STRATEGY

Increasing the dispatching priority of batch DB2 application jobs that are critical or long-running increases their performance. However, this increase is at the expense of other jobs running with lower dispatching priorities. Tinkering with the dispatching priorities of application jobs is not a good practice unless it is an emergency. The dispatching priority of an address space can be changed "on-the-fly," but only by authorized personnel.


When you're planning for a high amount of batch activity, ensure that an adequate number of initiators is available for the batch jobs. Initiators are essentially servers, under the control of JES, that process jobs as they are queued. In determining whether initiators are available, take the following into account:

  • An initiator is assigned to a job class or classes, specified on the job card of your batch JCL. If an initiator is not assigned to the job class that your DB2 jobs will be using, that initiator will not be used.

  • The number of initiators available for DB2 job classes dictates the number of DB2 batch jobs that can run concurrently from an MVS perspective. The IDBACK DSNZPARM parameter determines the number of background DB2 jobs that can be run concurrently from a DB2 perspective.

    TUNING STRATEGY

    Synchronize the value of IDBACK to the number of initiators for the DB2 job classes at your site. If non-DB2 jobs can be run in DB2 job classes, or if the initiator is available also for non-DB2 job classes, the value of IDBACK should be less than the total number of initiators assigned to DB2 job classes.


  • Jobs are removed from the job queue for execution by an initiator in order of their selection priority. Selection priority is coded on the job card of your JCL (PRTY). Most shops disable the PRTY parameter and place strict controls on the selection priority of jobs and job classes.

    Note that selection priority is different from dispatching priority. Selection priority controls the order in which jobs are queued for processing. Dispatching priority controls the resources available to a job after it is executing.

TUNING STRATEGY

Where initiators are at a premium (for example, fewer initiators than concurrent jobs), ensure that the DB2 jobs with the highest priority are assigned a higher selection priority than other DB2 jobs. This way, you can ensure that DB2 jobs are processed in order from most critical to least critical by the system.


Operating system tuning is an important facet of DB2 tuning. After the z/OS environment has been tuned properly, it should operate smoothly with little intervention (from DB2's perspective). Getting to the optimal z/OS environment, however, can be an arduous task.

Tuning z/OS is only one component of DB2 environment tuning. Tuning the teleprocessing environment, discussed next, is vital in achieving proper online performance.


