Chapter 9. Data Sharing
Data sharing, available in DB2 since version 4, allows an application to run on one or more DB2 subsystems in a parallel sysplex environment. The applications can read and write to the same data concurrently. Prior to data sharing, DDF was used to access data on other subsystems, or other, more creative means were used, such as replication between subsystems.
The subsystems that can share data must belong to a data sharing group. The subsystems in the group are known as members. Up to 32 members are allowed in a data sharing group. Only members in the group can share data, and a member can belong to only one group.
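The membership rules above (at most 32 members per group, and a member belonging to exactly one group) can be sketched as a small model. This is purely illustrative, not a DB2 API; the group and member names are invented, following the naming conventions commonly seen in DB2 examples.

```python
# Illustrative sketch only -- models the data sharing membership rules:
# up to 32 members per group, and a member may belong to only one group.

MAX_MEMBERS = 32  # DB2 allows up to 32 members in a data sharing group


class Member:
    def __init__(self, name):
        self.name = name
        self.group = None  # a member belongs to at most one group


class DataSharingGroup:
    def __init__(self, name):
        self.name = name
        self.members = []

    def add_member(self, member):
        if member.group is not None:
            raise ValueError(
                f"{member.name} already belongs to group {member.group.name}")
        if len(self.members) >= MAX_MEMBERS:
            raise ValueError(
                f"group {self.name} already has {MAX_MEMBERS} members")
        self.members.append(member)
        member.group = self


# Hypothetical names: a group "DSNDB0G" with one member subsystem "DB1A"
group = DataSharingGroup("DSNDB0G")
db1a = Member("DB1A")
group.add_member(db1a)
```

Attempting to add `DB1A` to a second group would raise an error, mirroring the rule that a subsystem can belong to only one data sharing group.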
DB2 data sharing operates in a parallel sysplex environment, which is a cluster of z/OS systems that can communicate with one another. Some important components allow for this communication to occur and ensure the consistency and coherency of the data being shared. All members in a group share the same DB2 catalog, directory, and user data, as shown in Figure 9-1.
Figure 9-1. Data sharing
DB2 data sharing has many benefits:
Improved price/performance by using S/390 microprocessor technology.
Increased capacity, with more processing power and a higher degree of inter-transaction parallelism available.
Continuous availability through the ability to mask unplanned outages, to schedule planned outages, and to keep running even if a member is lost.
Incremental, or horizontal, growth by adding processors without any disruption.
Configuration flexibility, with the ability to start/stop members as required and separate subsystems by function, such as batch, ad hoc, and OLTP.
Ability to split large queries across all CPCs (Central Processor Complexes) with sysplex query parallelism.
Flexibility in scheduling existing workloads by cloning a CICS region on another MVS image, removing the restriction that a CICS application can run on only one MVS system.
Increased throughput, because applications can run concurrently on several subsystems.
Reduced need for distributed processing, because applications no longer have to use DRDA to communicate in order to share data, eliminating DRDA overhead for this purpose.
Ability to have affinity and nonaffinity workloads in the same group, run workloads on different processors, or run a workload on a particular processor.
A Shared Data Architecture (SDA), based on coupling technology, which uses high-speed coupling facility channels to reduce system-to-system communication and supplies multiple paths to the data for higher availability. Workload routing is dynamic and based on capacity, not location; data does not need to be partitioned for growth, and SDA does not rely on node-to-node communication for resources.
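The idea of routing work by capacity rather than location can be illustrated with a small sketch. In a real parallel sysplex this decision is made by z/OS Workload Manager, not application code; the member names and utilization figures below are invented for illustration.

```python
# Illustrative sketch of capacity-based (not location-based) routing.
# In a real sysplex, z/OS Workload Manager makes this decision; the
# member names and utilization figures here are hypothetical.

def route(members):
    """Pick the member with the most unused capacity."""
    return min(members, key=lambda m: m["utilization"])

members = [
    {"name": "DB1A", "utilization": 0.85},
    {"name": "DB2A", "utilization": 0.40},
    {"name": "DB3A", "utilization": 0.65},
]

# Any member can reach the shared data, so the transaction simply
# goes to whichever member is least busy.
target = route(members)
```

Because every member can access all of the shared data, the router never has to ask where the data lives, only which member has spare capacity.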