Data sharing puts a whole new spin on DB2 performance and tuning. From application selection to postimplementation troubleshooting, problems can arise in several new places. Old performance problems that were acceptable or tolerable in the past are magnified in the data sharing environment. You need new skills in diverse areas to monitor and tune for overall performance.
New hardware and new rules operate in a data sharing environment. Contrary to popular belief, data sharing is not simply an install option! The coupling facility introduced by the parallel sysplex data sharing architecture brings a whole new set of performance factors to watch. The coupling facility, unique to DB2 data sharing, provides many performance benefits over the sharing architectures used by other products, but it must also be cared for. To maintain the consistency and coherency of shared data, DB2 issues a large number of accesses to the coupling facility: LOCK/UNLOCK requests, physical directory reads, cache updates, and reads of buffer-invalidated data. You need to pay close attention to this activity.
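As a rough, back-of-the-envelope sketch of why coupling facility activity matters, the request categories above can be tallied into an aggregate request rate. The categories come from the text; the per-transaction counts and transaction rate below are invented purely for illustration and vary widely by workload.

```python
# Illustrative tally of coupling facility (CF) requests per transaction.
# The request categories come from the text; the counts are hypothetical.
cf_requests_per_txn = {
    "lock_unlock": 20,        # LOCK/UNLOCK requests
    "directory_reads": 5,     # physical directory reads
    "cache_updates": 8,       # cache updates
    "invalidated_reads": 2,   # reads of buffer-invalidated data
}

txn_rate = 500  # transactions per second (assumed)

total_cf_rate = txn_rate * sum(cf_requests_per_txn.values())
print(f"Estimated CF requests/sec: {total_cf_rate}")  # 17500
```

Even with modest per-transaction counts, the aggregate request rate against the coupling facility is large, which is why its configuration and capacity deserve care.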
Ideally, adding processors to the complex would multiply the transaction rate achieved by a single DB2 by the number of available processors. In practice it does not. Because of the additional buffer management and global locking required, DB2 and IRLM processing costs increase, which can reduce the overall attainable transaction rate. Typical data sharing overhead has been around 5 percent to 15 percent after the second member is enabled in the data sharing group. As each additional member is added, the incremental overhead is generally low, but it depends heavily on the amount of sharing among the members.
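The scaling behavior described above can be sketched as a simple model: linear scaling discounted by a per-member overhead. The 5 to 15 percent figure after the second member comes from the text; the incremental overhead per additional member is an assumption chosen only to illustrate the shape of the curve.

```python
# Hedged model of data sharing scaling: linear growth discounted by
# overhead. base_overhead (applied once a second member exists) reflects
# the 5-15% range in the text; incr_overhead per extra member is assumed.
def effective_rate(single_rate, members, base_overhead=0.10, incr_overhead=0.01):
    """Estimate the group transaction rate for a data sharing group."""
    if members <= 1:
        return float(single_rate * members)
    overhead = base_overhead + incr_overhead * (members - 2)
    return single_rate * members * (1 - overhead)

print(effective_rate(1000, 1))  # 1000.0 -- single member, no sharing overhead
print(effective_rate(1000, 2))  # 1800.0 -- not 2000, given 10% assumed overhead
```

The point of the model is not the specific numbers but the observation that throughput grows with each member while falling short of perfect linear scaling, and that the shortfall depends on how much data is actually shared.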
To estimate data sharing performance, it is key to understand the overhead involved and to appreciate the tuning effort required to minimize its impact. First, you must set realistic goals, define performance objectives, and, most importantly, tune your current environment. Keep in mind that bad performers will become worse and that new problems will surface. The key to a successful implementation is educating those involved in the migration and support of the data sharing environment; this makes monitoring, tuning, and troubleshooting much less painful.
Most DB2 performance problems in a data sharing environment are concentrated in two areas: locking and buffer pools. Even so, many of these problems can still be traced to poor application design.
Processing costs for data sharing vary with the degree of data sharing, locking factors, workload characteristics, hardware/software configuration, application design, physical design, and various application options. These costs can be controlled to some degree by application and system tuning. Data sharing costs are a function of the processing required, beyond normal processing, to provide concurrency control for inter-DB2 interest and to maintain data coherency. Hardware/software factors include processor speed, the level of the coupling facility control code (CFCC), coupling facility structure sizes, link configurations, hardware and software maintenance levels, and the number of members in the data sharing group. Workload characteristics include real, false, and XES contention; disk contention; workload dynamics; thread reuse; and application use of lock avoidance.