This section describes the various types of z/OS address spaces and their interrelationships. The control region is the core of an IMS subsystem, running in one z/OS address space. Each control region uses many other address spaces that provide additional services to the control region, and in which the IMS application programs run. Some IMS applications and utilities run in separate, standalone regions, called batch regions. Batch regions are separate from an IMS subsystem and its control region and have no connection with it. For more information, see "IMS Batch Environment" on page 33.

IMS Control Region

The IMS control region is a z/OS address space that can be initiated through a z/OS START command or by submitting a job control language (JCL)[2] job.
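For illustration only, a control region started by JCL submission might use a job like the following sketch. The job and procedure names are hypothetical and installation-defined; the alternative is an operator entering a z/OS START command such as S IMSCTL:

```
//IMSCTLJ  JOB  ...                Hypothetical job to start the control region
//*
//CTL      EXEC IMSCTL            Invoke the installation's control region procedure
```

In practice the procedure invoked here is the one generated by the IMS system definition process for the environment type chosen.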
The IMS control region provides the central point of control for an IMS subsystem. The IMS control region:
The IMS control region also provides all logging, restart, and recovery functions for the IMS subsystem. The terminals, message queues, and logs are all attached to this region. Fast Path (one of the IMS database types) database data sets are also allocated by the IMS control region. A z/OS type-2 supervisor call (SVC) routine is used for switching control information, messages, and database data between the control region and all other regions, and back. Four different types of IMS control regions can be defined using the IMS system definition process. You choose the one you want depending on which IMS functions you need. The four types of IMS control regions support the four IMS environments. These environments are discussed in more detail in "IMS Environments" on page 29.

IMS Environments

Each of the IMS environments is a distinct combination of hardware and programs that supports distinct processing goals. The four IMS environments are:
IMS DB/DC Environment

The DB/DC environment has both IMS TM and IMS DB installed and has the functionality of the entire IMS product. The processing goals of the DB/DC environment are to:
As shown in Figure 4-1 on page 30, the DB/DC control region provides access to the: Figure 4-1. Structure of a Sample IMS DB/DC Environment
Related Reading:
IMS DBCTL Environment

The DBCTL environment has only IMS DB installed. The processing goals of the DBCTL environment are to:
DBCTL can provide IMS database functions to batch message programs (BMP and JMP application programs) connected to the IMS control region, and to application transactions running in CICS regions, as shown in Figure 4-2 on page 32. Figure 4-2. Structure of a Sample IMS DBCTL Environment
When a CICS system connects to IMS using the database resource adapter (DRA), each CICS system has a predefined number of connections with IMS. Each of these connections is called a thread. Although threads are not jobs from the perspective of IMS, each thread appears to the IMS system to be another IMS dependent region. When a CICS application issues a DL/I call to IMS, the DL/I processing runs in one of these dependent regions. When a DB/DC environment is providing access to IMS databases for a CICS region, it is referred to in some documentation as providing DBCTL services, though it might, in fact, be a full DB/DC environment and not just a DBCTL environment.

IMS DCCTL Environment

The DCCTL environment is an IMS Transaction Manager subsystem that has no database components. A DCCTL environment is similar to the "DC" component of a DB/DC environment. The primary difference is that a DCCTL control region owns no databases and does not service DL/I database calls. The processing goals of the DCCTL environment are to:
As shown in Figure 4-3 on page 34, the DCCTL system, in conjunction with the IMS External Subsystem Attach Facility (ESAF), provides a transaction manager facility to external subsystems (for example, DB2 UDB for z/OS). Most IMS customers use a DB/DC environment as a transaction manager front end for DB2 UDB for z/OS. Figure 4-3. Structure of a Sample IMS DCCTL Environment
In a DCCTL environment, transaction processing and terminal management are identical to transaction processing and terminal management in a DB/DC environment.

IMS Batch Environment

The IMS batch environment consists of a batch region (a single address space) where an application program and IMS routines reside. The batch job that runs the batch environment is initiated with JCL, like any operating-system job. There are two types of IMS batch environments: DB Batch and TM Batch. These environments are discussed in "DB Batch Environment" and in "TM Batch" on page 35.

DB Batch Environment

In the DB Batch environment, IMS application programs that use only IMS DB functions can run in a separate z/OS address space that is not connected to an IMS online control region. These batch applications are typically very long-running jobs that perform large numbers of database accesses, or applications that do not perform synchronization-point processing to commit the work. DB Batch applications can access only full-function databases, which are explained in "Implementation of IMS Databases" on page 62. In a DB Batch environment, the JCL is submitted through TSO or a job scheduler; however, all of the IMS code used by the application resides in the address space in which the application is running. The job executes an IMS batch region controller that then loads and calls the application. Figure 4-4 on page 35 shows an IMS batch region. Figure 4-4. Structure of an IMS DB Batch Environment
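A DB Batch job of the kind described above might look like the following sketch. The program, PSB, and data set names are hypothetical; DFSRRC00 is the IMS region controller, the program that loads and calls the application:

```
//DBBATCH  JOB  ...
//STEP1    EXEC PGM=DFSRRC00,
//         PARM='DLI,MYPGM,MYPSB'          Region type, application program, PSB
//STEPLIB  DD   DSN=IMS.SDFSRESL,DISP=SHR  IMS code resides in this address space
//IEFRDER  DD   DSN=MY.IMS.LOG,DISP=(NEW,CATLG)  The batch region's own IMS log
//* ...plus DD statements for the database data sets, which this
//* address space opens and reads directly
```

The exact positional parameters and DD statements vary by installation; this sketch shows only the overall shape of such a job.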
The batch address space opens and reads the IMS database data sets directly. Attention: If multiple programs, whether running under the control of an IMS control region or in other batch regions, need to access databases at the same time, you must take steps to ensure data integrity. See Chapter 9, "Data Sharing" on page 119 for more information about how the data can be updated safely by multiple applications.

The batch region controller writes its own separate IMS log. In the event of a program failure, it might be necessary to take manual action (for example, submit jobs to run IMS utilities) to recover the databases to a consistent point. With online dependent application regions, this recovery is done automatically by the IMS control region. You can also use DBRC to track the IMS logs and ensure that the correct recovery action is taken in the event of a failure. An application can be written so that it runs in both a batch address space and a BMP address space without change. You can vary the execution environment of a program between batch and BMP address spaces to lengthen the run time, to support the need of other applications to access the data at the same time, or to run your procedures for recovering from application failures.

TM Batch

IMS TM supports a batch region for running TM batch application programs. Using TM Batch, you can either take advantage of the IMS Batch Terminal Simulator for z/OS or access an external subsystem through the IMS External Subsystem Attach Facility (ESAF). One example of an external subsystem is DB2 UDB for z/OS. You can connect to DB2 UDB for z/OS in an IMS TM batch environment in one of two ways. You can use the SSM parameter on the TM batch-region execution JCL and specify the actual name of the batch program on the MBR parameter. Alternatively, you can code the DDITV02 DD statement on the batch-region execution JCL and specify the name of the DB2 UDB for z/OS module, DSNMTV01, on the MBR parameter.
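The two attachment options just described might be coded as in this sketch, where TMBATCH is a hypothetical installation procedure for the TM batch region, and MYPGM and SSMEMBER are hypothetical names:

```
//* Option 1: MBR names the application program; SSM names the
//* external subsystem member that identifies DB2 UDB for z/OS.
//OPT1     EXEC TMBATCH,MBR=MYPGM,SSM=SSMEMBER
//*
//* Option 2: MBR names the DB2 module DSNMTV01; the DDITV02 DD
//* statement supplies the DB2 connection information.
//OPT2     EXEC TMBATCH,MBR=DSNMTV01
//DDITV02  DD   DSN=MY.DDITV02.PARMS,DISP=SHR
```

Only the SSM, MBR, and DDITV02 names come from the description above; everything else in the sketch is illustrative.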
TM Batch does not provide DL/I database capabilities. Related Reading:
IMS Separate Address Spaces

The IMS control region has separate address spaces that provide some of the IMS subsystem services. These regions are automatically started by the IMS control region as part of its initialization, and the control region does not complete initialization until these regions have started and connected to it. All separate address spaces (except for DBRC) are optional, depending on the IMS features used. For DL/I, separate address space options can be specified at IMS initialization.

DBRC Region

The DBRC region provides all access to the DBRC recovery control (RECON) data sets. The DBRC region also generates batch jobs for DBRC (for example, for archiving the online IMS log). Every IMS control region must have a DBRC region because DBRC is needed, at a minimum, for managing the IMS logs.

DL/I Separate Address Space

The DL/I separate address space (DLISAS) performs most data set access functions for IMS DB (except for the Fast Path DEDB databases). The DLISAS allocates full-function database data sets and also contains some of the control blocks associated with database access and some database buffers. For a DBCTL environment, the DLISAS is required and always present. For a DB/DC environment, you have the option of having IMS database accesses performed by the control region or having the DB/DC region start the DLISAS. For performance and capacity reasons, use the DLISAS. The DLISAS is not present in a DCCTL environment because the Database Manager functions are not present.

Dependent Regions

IMS provides address spaces for the execution of system and application programs that use IMS services. These address spaces are called dependent regions. The dependent regions are started by the submission of JCL to the operating system. The JCL is submitted as a result of a command issued to the IMS control region, through automation, or by a regular batch job submission.
After the dependent regions are started, the application programs are scheduled and dispatched by the IMS control region. In all cases, the z/OS address space executes an IMS control region program. The application program is then loaded and called by the IMS code. Up to 999 dependent regions can be connected to one IMS control region, made up of any combination of the following dependent region types:
Table 4-1 describes the support for dependent regions by IMS environment type.
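As described above, a dependent region's JCL is often submitted as a result of a command issued to the IMS control region. For example, an operator (or automation) might use the IMS /START REGION command; the member name IMSMSG1 here is hypothetical:

```
/START REGION IMSMSG1
```

This causes the control region to submit the JCL member of that name, after which the new region connects back to the control region and waits for work.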
Message Processing Region

Message processing regions (MPRs) run applications that process messages that come into IMS TM as input (for example, from terminals or online programs). MPRs can be started by IMS submitting the JCL as a result of an IMS command. The address space does not automatically load an application program but waits until work becomes available. Priority settings determine which MPR runs the application program. When IMS determines that an application is to run in a particular MPR, the application program is loaded into that region and receives control. The application processes the message and any further messages for that transaction that are waiting to be processed. Then, depending on options specified on the transaction definition, the application either waits for further input, or another application program is loaded to process a different transaction.

IMS Fast Path Region

An IMS Fast Path (IFP) region runs application programs to process messages for transactions that have been defined as Fast Path transactions. Fast Path applications are very similar to the applications that run in an MPR. Like MPRs, IFP regions can be started by the IMS control region submitting the JCL as a result of an IMS command. The difference between MPRs and IFP regions is in the way IMS loads and dispatches the application program and handles the transaction messages. To allow for this different processing, IMS imposes restrictions on the length of the application data that can be processed in an IFP region as a single message. IMS uses a user-written exit routine (or the IBM-supplied sample) to determine whether a transaction message should be processed in an IFP region and, if so, in which IFP region. The IMS Fast Path facility that processes messages is called the expedited message handler (EMH).
The EMH speeds the processing of messages by having the applications loaded and waiting for input messages and, if a message is suitable, dispatching it directly to the IFP region, bypassing the IMS message queues. IFP regions can also be used for other types of work besides running application programs; for example, they can run Fast Path utility programs. For further discussion of using these regions for other types of work, see the IMS Version 9: Installation Volume 2: System Definition and Tailoring.

Batch Message Processing Region

Unlike MPR or IFP regions, a BMP region is not usually started by the IMS control region, but by submitting a batch job, for example from TSO or by a job scheduler. The batch job then connects to an IMS control region that is defined in the execution parameters. Two types of applications can run in BMP regions:
BMP regions have access to the IMS full-function and Fast Path databases, provided that the control region has the Database Manager component installed. BMP regions can also read and write z/OS sequential files, with integrity, using the IMS GSAM access method (see "GSAM Access Method" on page 107). BMP regions can also be used for other types of work besides running application programs; for example, they can run jobs that, in the past, ran as batch update programs. The advantage of converting batch jobs to run in BMP regions is that the jobs can then run alongside a transaction environment, and these BMP applications can run concurrently instead of sequentially. For a further discussion of using these regions for other types of work, see the IMS Version 9: Installation Volume 2: System Definition and Tailoring.

Java Dependent Regions

Two IMS dependent regions provide a Java Virtual Machine (JVM) environment for Java or object-oriented COBOL applications:

Java message processing (JMP) regions
Java batch processing (JBP) regions
Figure 4-5 shows a Java application that is running in a JMP or JBP region. JDBC or IMS Java hierarchical interface calls are passed to the IMS Java layer, which converts them to DL/I calls. Figure 4-5. JMP or JBP Application That Uses the IMS Java Function
JMP and JBP regions can run applications written in Java, object-oriented COBOL, or a combination of the two. Related Reading: For more information about writing Java applications for IMS, see Chapter 18, "Application Programming in Java" on page 311 or IMS Version 9: IMS Java Guide and Reference.

Common Queue Server Address Space

Common Queue Server (CQS) is a generalized server that manages data objects on a z/OS coupling facility on behalf of multiple clients. CQS is used by IMS shared queues and by the Resource Manager address space in the Common Service Layer. CQS uses the z/OS coupling facility as a repository for data objects. Storage in a coupling facility is divided into distinct objects called structures. Authorized programs use structures to implement data sharing and high-speed serialization. The coupling facility stores and arranges the data according to list structures. Queue structures contain collections of data objects that share the same name, known as queues. Resource structures contain data objects organized as uniquely named resources. CQS receives, maintains, and distributes data objects from shared queues on behalf of multiple clients. Each client has its own CQS to access the data objects on the coupling facility list structure. IMS is one example of a CQS client that uses CQS to manage both its shared queues and shared resources. CQS runs in a separate address space that can be started by the client (IMS). The CQS client must run on the same z/OS image as the CQS address space. CQS is used by IMS DCCTL and IMS DB/DC control regions if they are participating in sysplex sharing of IMS message queues or resource structures. IMS DBCTL can also use CQS and a resource structure if it is using the IMS coordinated online change function. Clients communicate with CQS using CQS requests that are supported by CQS macro statements.
Using these macros, CQS clients can communicate with CQS and manipulate client data on shared coupling facility structures. Figure 4-6 shows the communications and the relationship between clients, CQSs, and the coupling facility. Figure 4-6. Client Systems, CQS, and a Coupling Facility
Related Reading: For complete information about CQS, see IMS Version 9: Common Queue Server Guide and Reference.

Common Service Layer

The IMS Common Service Layer (CSL) is a collection of IMS system address spaces that provide the infrastructure needed for systems management tasks. The IMS CSL reduces the complexity of managing multiple IMS systems by providing you with a single-image perspective of an IMSplex. An IMSplex is one or more IMS subsystems that can work together as a unit. Typically, these subsystems:
The CSL address spaces include Operations Manager (OM), Resource Manager (RM), and Structured Call Interface (SCI). They are briefly described in the following sections. Related Reading: For a further discussion of IMS in a sysplex environment, see:
For a detailed discussion of IMS in a sysplex environment, see:
Operations Manager Address Space

The Operations Manager (OM) controls the operations of an IMSplex. OM provides an application programming interface (the OM API) through which commands can be issued and responses received. With a single point of control (SPOC) interface, you can submit commands to OM. The SPOC interfaces include the TSO SPOC, the REXX SPOC API, and the IMS Control Center. You can also write your own application to submit commands. Related Reading: For a further discussion of OM, see "Operations Manager" on page 497.

Resource Manager Address Space

The Resource Manager (RM) is an IMS address space that manages global resources and IMSplex-wide processes in a sysplex on behalf of RM's clients. IMS is one example of an RM client. Related Reading: For a further discussion of RM, see "Resource Manager" on page 498.

Structured Call Interface Address Space

The Structured Call Interface (SCI) allows IMSplex members to communicate with one another. The communication between IMSplex members can happen within a single z/OS image or among multiple z/OS images. Individual IMS components do not need to know where the other components reside or which communication interface to use. Related Reading: For a further discussion of SCI, see "Structured Call Interface" on page 498.

Internal Resource Lock Manager

The internal resource lock manager (IRLM) is delivered as an integral part of IMS, but you do not have to install or use it unless you need to perform block-level or sysplex data sharing. IRLM is also the required lock manager for DB2 UDB for z/OS. The IRLM address space is started before the IMS control region with the z/OS START command. If the IMS startup parameters specify IRLM, the IMS control region connects to the IRLM that is specified at startup and does not complete initialization until the connection is successful. Typically, one IRLM address space runs on each z/OS system to service all IMS subsystems that share the same set of databases.
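The start sequence described above might look like the following z/OS console commands; the procedure names IRLMPROC and IMSCTL are hypothetical, and the annotations after each command are explanatory only:

```
S IRLMPROC     <- start the IRLM address space before the IMS control region
S IMSCTL       <- the control region then connects to the IRLM named in its
                  startup parameters before completing initialization
```

If the connection to IRLM fails, the control region waits rather than completing initialization, as noted above.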
For more information on data sharing in a sysplex environment, see:
Recommendation: Do not use the same IRLM address space for IMS and DB2 UDB for z/OS, because the tuning requirements of IMS and DB2 are different and conflicting. The IRLM code is delivered with both IMS and DB2 UDB for z/OS and interacts closely with both. Therefore, you might want to install the IRLM code for IMS and DB2 UDB for z/OS separately (that is, in separate SMP/E zones) so that you can maintain release and maintenance levels independently. Installing the IRLM code separately can be helpful if you need to install prerequisite maintenance on IRLM for one database product, because doing so does not affect the use of IRLM by the other product.