Data Sharing Components


Most of the hardware components in a data sharing environment are simply those in the parallel sysplex, but they are vital to the operation and success of data sharing. DB2 database administrators must learn about these components, how they are configured, and how they are used. Many performance and availability issues stem from these components.

Coupling Facility

The coupling facility is one of the most important components of a data sharing environment. This specialized piece of hardware with dedicated high-speed links is connected to each DB2 in the data sharing group. The coupling facility is the center for communication and data sharing among the subsystems.

At least two coupling facilities are needed for each data sharing group: one is required for operation, and the other serves mainly for availability and additional capacity. Also, for reasons of both availability and performance, at least one of these coupling facilities should be a dedicated processor that runs only the coupling facility control code.

An S/390 9674 or z900 Model 100 is a dedicated microprocessor used as an external coupling facility: a CPC that runs only as a coupling facility, in a logical partition with dedicated processor resources. This is the optimal configuration because the combined failure of a coupling facility and a connected MVS image, which is more likely when both reside on the same CPC, can lead to extended recovery times. Dedicated hardware also gives you better performance and better connectivity to the coupling facility.

Internal Coupling Facilities (ICFs) are a relatively new option for coupling facility configurations. An IBM 9672 or z900 machine can be configured with an internal coupling facility that uses one or more engines, depending on the generation of the machine. An ICF is an attractive alternative to the dedicated 9674 or z900 Model 100, mainly for economic reasons, but you will still want at least one external coupling facility for high availability.

The coupling facility, one of the most important items in this environment, is often significantly undersized in terms of storage. To start, you should have at least 1GB of storage on each coupling facility. Attempting to support a data sharing group with anything less can be very difficult, if not impossible.

You will need enough storage to hold all the necessary structures, plus enough free space to absorb the structures from the other coupling facility in the group. This headroom lets you rebuild, or maintain duplexed copies of, all the structures from a coupling facility that fails or is taken offline for maintenance.
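
As a rough worked example of this headroom (all structure sizes here are illustrative assumptions, not recommendations): suppose CF01 holds a 64MB lock structure and a 32MB SCA, and CF02 holds group buffer pools totaling 400MB.

   Structures on CF01:                 64MB + 32MB  =  96MB
   Structures on CF02:                 400MB
   Failover requirement for either CF: 96MB + 400MB = 496MB

Each coupling facility therefore needs at least 496MB, plus white space for growth; the 1GB minimum suggested above leaves comfortable headroom.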

The coupling facility contains three structures important for DB2 data sharing: the shared communication area (SCA), the lock structure, and the group buffer pools (GBPs). When configuring coupling facilities, recommended practice is to place the lock and SCA structures in one coupling facility and the GBPs in the other, as shown in Figure 9-2.

Figure 9-2. Structure placement in coupling facilities


Shared Communication Area

The SCA structure, implemented as a list structure, holds database status information, system information, and other information critical to recovery situations. The SCA contains an array of lists that include the following:

  • Database exception table (DBET)

  • BSDS information

  • Stopped page sets

  • LPL entries

  • GRECP entries (Group Buffer Pool Recovery Pending)

  • GCLSN (Global Commit Log Sequence Number)

  • EDM pool invalidations

Lock Structure

The lock structure controls intersystem locking and serialization on records, pages, table spaces, and so on, and it holds all the global locks. The lock structure comprises two parts. The first part, the lock list, contains

  • Names of modified resources

  • Lock status of the resource

  • Modify and retained locks

The second part, the lock hash table, is used for quick intersystem lock-contention detection and contains

  • Owning members of modified resources

  • Lock-status information

The IRLM connects to the lock structure during DB2 member start-up. Sizing the lock structure is not too difficult: normally, you can start with 32MB or more; for above-average workloads (high-volume DML), you could start with 64MB. If the lock structure is too small, lock contention can increase, because the hash table has only a limited number of entries in which locks can be held.
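
For illustration, a lock-structure definition in the CFRM policy (discussed later in this chapter) might look like the following sketch. The group name DSNDSGA matches the policy example shown later; the sizes and preference list are assumptions, and the structure name follows the groupname_LOCK1 convention:

          STRUCTURE NAME(DSNDSGA_LOCK1)
                 INITSIZE(32000)
                 SIZE(64000)
                 PREFLIST(CF01,CF02)
                 REBUILDPERCENT(5)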

Group Buffer Pools

The group buffer pools, known as the cache structures, provide high-performance access to shared data, data coherency, and serialization of updates to data on DASD. A group buffer pool is shared by all the DB2 subsystems that have the corresponding local buffer pool defined: if objects are assigned to local buffer pool BP1 and a GBP1 exists in the coupling facility, those objects can be shared.

Up to 80 virtual, or local, buffer pools are possible, so up to 80 group buffer pools are allowed. Of course, this number is limited by the size of the coupling facility, because the group buffer pools take up space in it. Sizing group buffer pools is more of an art than a science, requiring knowledge of how the data in each buffer pool is accessed and a good breakout of objects into individual pools.

A GBP is allocated the first time it is used, and all the pages in it can be shared. The GBP registers data pages and handles cross-system invalidation of data pages. More information on group buffer pools is provided later in this chapter.
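
DB2 provides commands for monitoring and tuning group buffer pools. The following sketch assumes a member with command prefix -DB1A; the RATIO value is purely illustrative:

 -DB1A DISPLAY GROUPBUFFERPOOL(GBP1) GDETAIL
 -DB1A ALTER GROUPBUFFERPOOL(GBP1) RATIO(5)

The first command reports detailed group-wide statistics for GBP1; the second adjusts its directory-to-data ratio.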

Structures and Policies

The structures mentioned so far are defined to the coupling facility via policies. Various policies are used for various definitions in the parallel sysplex environment. Policies are stored in couple data sets, which hold information about the systems in the sysplex, Cross-System Coupling Facility (XCF) group and member definitions, and general status information. Couple data sets are formatted with the MVS IXCL1DSU utility (found in SYS1.MIGLIB) and must reside on shared DASD so that all members can access them; a sketch of the formatting JCL appears below. After that, we look at a few policies that are key to the operation of a data sharing environment.
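
The following JCL sketch formats a CFRM couple data set with IXCL1DSU; the sysplex name, data set name, volume, and ITEM counts are assumptions for illustration:

 //FMTCFRM  EXEC PGM=IXCL1DSU
 //STEPLIB  DD DSN=SYS1.MIGLIB,DISP=SHR
 //SYSPRINT DD SYSOUT=*
 //SYSIN    DD *
   DEFINEDS SYSPLEX(PLEX01)
     DSN(SYS1.CFRM.CDS01) VOLSER(CDSVOL)
     DATA TYPE(CFRM)
       ITEM NAME(POLICY) NUMBER(5)
       ITEM NAME(CF) NUMBER(4)
       ITEM NAME(STR) NUMBER(50)
 /*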

Coupling Facility Resource Management

The CFRM policy defines the structures to the coupling facility; this is where you define the SCA, lock, and GBP structures. When defining the policies, it is important to leave room for growth of the structures. Plan the sizes carefully, according to the space available in your coupling facilities, and account for failover conditions.

An example of a CFRM policy follows:

 /*-----------------------------------------------------*/
 /* DB2 DATA SHARING GROUP: DSNDSGA / LIST STRUCTURE    */
 /*-----------------------------------------------------*/
          STRUCTURE NAME(DSNDSGA_SCA)
                 INITSIZE(4000)
                 SIZE(10000)
                 PREFLIST(CF01,CF02)
                 REBUILDPERCENT(5)
 /*-----------------------------------------------------*/
 /* DB2 DATA SHARING GROUP: DSNDSGA / CACHE STRUCTURE(S)*/
 /*-----------------------------------------------------*/
          STRUCTURE NAME(DSNDSGA_GBP0)
                 INITSIZE(8000)
                 SIZE(16000)
                 PREFLIST(CF02,CF01)
                 REBUILDPERCENT(5)
          STRUCTURE NAME(DSNDSGA_GBP1)
                 INITSIZE(8000)
                 SIZE(16000)
                 PREFLIST(CF02,CF01)
                 REBUILDPERCENT(5)

Sysplex Failure Management

The SFM policy holds information about the relative importance of each system in the sysplex. During a coupling-facility failure and subsequent structure rebuilds, the REBUILDPERCENT contained in the CFRM policy is compared to the WEIGHT values in the SFM policy to determine whether to rebuild a structure. An example of an SFM policy follows:

 DATA TYPE(SFM)
 DEFINE POLICY NAME(POLICY1) CONNFAIL(NO) REPLACE(YES)
    SYSTEM NAME(*)
       ISOLATETIME(0)
 DEFINE POLICY NAME(POLICY2) CONNFAIL(YES) REPLACE(YES)
    SYSTEM NAME(*)
       ISOLATETIME(0)
       WEIGHT(5)
    SYSTEM NAME(SYS1)
       PROMPT
       WEIGHT(25)
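
As a rough worked example (assuming a two-system sysplex under POLICY2): if SYS1, with WEIGHT(25), loses connectivity to a coupling facility while the other system carries WEIGHT(5), the weighted loss is 25 out of a total of 30, or about 83 percent. Because 83 exceeds the REBUILDPERCENT(5) coded in the CFRM policy above, MVS rebuilds the affected structures in the alternate coupling facility.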

Automatic Restart Manager

The ARM policy is optional but highly recommended. ARM restarts specific work in the event of a failure: it will restart a DB2 member subsystem on the same system or on a different one, as defined in the policy. The sooner you can get DB2 restarted after a hardware failure or abend, the better. It is important for availability that all retained locks, discussed later in this chapter, are resolved quickly; the only way to do this is to restart the failing member, and this is where ARM is critical.
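
A sketch of an ARM policy for a DB2 member follows. All names here are assumptions for illustration: the policy and restart-group names are arbitrary, and the ELEMENT name is typically formed from the data sharing group name and member name (DSNDSGA and DB1A are assumed):

 DATA TYPE(ARM)
 DEFINE POLICY NAME(ARMPOL01) REPLACE(YES)
   RESTART_GROUP(DB2GRP)
     TARGET_SYSTEM(*)
     ELEMENT(DSNDSGADB1A)
       RESTART_ATTEMPTS(3)
       RESTART_METHOD(SYSTERM,STC,'-DB1A STA DB2')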

Links

High-speed links are used to connect coupling facilities to processors running the operating systems (z/OS and OS/390). There are several types:

  • Multimode fiber (50/125 micron), supporting distances up to 1km.

  • Single-mode fiber (9/125 or 10/125 micron), supporting distances up to 3km; its characteristics are the same as those of the fiber used by ESCON XDF.

  • Internal coupling channels, supporting connections between LPARs within a single CPC.

One link between the coupling facility and the processor will suffice, but at least two are recommended for performance and, of course, availability. These links are not the same as XCF links, which may use channel to channel (CTC) and are used for inter-MVS communications. Rather, coupling-facility links are used for high-speed communication between MVS systems and the coupling facility (CF) via XES (Cross-System Extended Services).

Sysplex Timer

The sysplex timer is a required component of the parallel sysplex environment. You must have at least one; two are highly recommended. The timer synchronizes the timestamps of the S/390 and zSeries processors, and these synchronized timestamps are used by the DB2 members in the data sharing group to determine the order of events, regardless of which processor in the sysplex performed them. For example, log record sequence numbers (LRSNs) can be guaranteed to order events for recovery because they are based on the TOD (time-of-day) clock synchronized by the timer.

NOTE

The sysplex timer should be on a separate power source or UPS. If all timers fail, all MVS systems in the sysplex are placed in nonrestartable wait states.


Cross-System Coupling Facility

XCF is a component of z/OS. All DB2 members join an XCF group when they join the data sharing group, and each IRLM joins another XCF group. XCF is used for communication between the IRLMs and XES, for notifying other members to retrieve database or system status, and for communicating information changes made to the SCA. DB2 also uses XCF for some intersystem communication and for processing DB2 commands and utilities within the group. Figure 9-3 shows two XCF groups.

Figure 9-3. XCF groups


XCF services are of three types:

  1. Group services provide a means of requesting information about other members in the same XCF group, via DISPLAY XCF commands (an example follows this list).

  2. Signaling services provide a means of communication within the group.

  3. Status-monitoring services enable members to check their own status and to relay this information to others in the group. These services also allow for monitoring other members in the sysplex.
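
For example, the following operator commands list the XCF groups in the sysplex and then the members of one group (the group name DSNDSGA is an assumption matching the examples in this chapter):

 D XCF,GROUP
 D XCF,GROUP,DSNDSGA,ALL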

Shared Data

In a data sharing environment, many items are shared by all the members in the group. In order to have these available for each member in the group, the items must reside on shared DASD. The following items need to be on shared DASD:

  • MVS catalog

  • DB2 catalog

  • DB2 directory

  • Couple data sets

  • Shared databases

  • ICF user catalog for shared databases

  • Log data sets, for read access by other members (a separate log exists for each member)

  • BSDS data sets, for read access by other members (separate BSDS data sets exist for each member)

  • Work files, required for queries using sysplex query parallelism; shared DASD keeps a DB2 connected to its work files regardless of where that DB2 has to be restarted (the work file database is no longer DSNDB07)

However, not all data must be shared. Some data can be isolated for use by one member subsystem. These nonshared objects, defined as unique to one member, can remain on nonshared DASD if you wish. Figure 9-4 shows how a DB2 can exist in a data sharing group but not share data. To leave an object isolated to one member subsystem, you simply put it in a virtual buffer pool that is not backed by a group buffer pool in the coupling facility; a minimal sketch follows Figure 9-4. Any attempt to access an isolated object from the wrong member results in an unavailable-resource error.

Figure 9-4. Sharing data
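
As a minimal sketch of this technique (database, table space, and buffer pool names are assumptions): if no GBP7 structure is defined in the CFRM policy, any object assigned to buffer pool BP7 remains isolated to the member that uses it:

 CREATE TABLESPACE LOCALTS
        IN LOCALDB
        BUFFERPOOL BP7;  -- no GBP7 exists, so LOCALTS cannot be shared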


NOTE

Not sharing data may be a consideration if the application is not suited for data sharing or the data is not required to be shared among several DB2s.



