Chapter 4. IMS and z/OS
This chapter describes how IMS subsystems are implemented on a z/OS system and how IMS uses some of the facilities that are a part of the z/OS operating system.
In This Chapter:
    "How IMS Relates to z/OS"
    "Structure of IMS Subsystems"
How IMS Relates to z/OS
IMS is a large application that runs on z/OS. There is a symbiotic relationship between IMS and z/OS: both are tailored to make the most efficient use of the hardware and software.
IMS runs as a z/OS subsystem and uses several address spaces: one controlling address space, several address spaces that provide IMS services, and several that run IMS application programs. z/OS address spaces are sometimes called regions, and the two terms are used interchangeably in this book.
The various components of an IMS system are explained in more detail in "Structure of IMS Subsystems."
Structure of IMS Subsystems
This section describes the various types of z/OS address spaces and their interrelationships.
The control region is the address space that provides the central point of control for an IMS subsystem.
Some IMS applications and utilities run in separate, standalone regions, called batch regions. Batch regions are not connected to an IMS control region.
IMS Control Region
The IMS control region is a z/OS address space that can be initiated through a z/OS START command or by submitting a job control language (JCL) job.
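As a sketch, the two methods look like this (the procedure name IMS910 and the job statement details are assumptions, not IMS-supplied defaults):

```jcl
//* Method 1: as a started task, from the z/OS console:
//*    S IMS910
//*
//* Method 2: as a submitted batch job that executes the same
//* cataloged procedure:
//IMSCTL   JOB  (ACCT),'IMS CONTROL REGION',CLASS=A
//IMS      EXEC IMS910
```

In practice, the control region normally runs as a started task so that it can be managed directly with operator commands.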
The IMS control region provides the central point of control for an IMS subsystem. The IMS control region:
- Manages the telecommunications network and the IMS message queues
- Schedules the application programs that run in the dependent regions
- Manages access to the IMS databases
The IMS control region also provides all logging, restart, and recovery functions for the IMS subsystems. The terminals, message queues, and logs are all attached to this region. Fast Path (one of the IMS database types) database data sets are also allocated by the IMS control region.
A z/OS type 2 supervisor call (SVC) routine is used to route control information, messages, and database data between the control region and the other regions.
Four different types of IMS control regions can be defined using the IMS system definition process; which type you choose depends on the IMS functions you need. The four types of IMS control regions support the four IMS environments, which are discussed in more detail in "IMS Environments" on page 29.
Each of the IMS environments is a distinct combination of hardware and programs that supports distinct processing goals. The four IMS environments are DB/DC, DBCTL, DCCTL, and batch.
IMS DB/DC Environment
The DB/DC environment has both IMS TM and IMS DB installed and provides the functionality of the entire IMS product.
As shown in Figure 4-1 on page 30, the DB/DC control region provides access to the terminal network, the IMS message queues, and the IMS databases.
Figure 4-1. Structure of a Sample IMS DB/DC Environment
IMS DBCTL Environment
The DBCTL environment has only IMS DB installed; it provides IMS database services without the IMS Transaction Manager.
DBCTL can provide IMS database functions to batch message programs (BMP and JMP application programs) connected to the IMS control region, and to application transactions running in CICS regions, as shown in Figure 4-2 on page 32.
Figure 4-2. Structure of a Sample IMS DBCTL Environment
When a CICS system connects to IMS using the DRA, each CICS system has a predefined number of connections with IMS. Each of these connections is called a thread. Although threads are not jobs from the perspective of IMS, each thread appears to the IMS system to be another IMS dependent region. When a CICS application issues a DL/I call to IMS, the DL/I processing runs in one of these dependent regions.
When a DB/DC environment is providing access to IMS databases for a CICS region, it is referred to in some documentation as providing DBCTL services, though it might, in fact, be a full DB/DC environment and not just a DBCTL environment.
IMS DCCTL Environment
The DCCTL environment is an IMS Transaction Manager subsystem that has no database components.
As shown in Figure 4-3 on page 34, the DCCTL system, in conjunction with the IMS External Subsystem Attach Facility (ESAF), provides a transaction manager facility to external subsystems (for example, DB2 UDB for z/OS). Most IMS customers use a DB/DC environment as a transaction manager front end for DB2 UDB for z/OS.
Figure 4-3. Structure of a Sample IMS DCCTL Environment
In a DCCTL environment, transaction processing and terminal management are identical to transaction processing and terminal management in a DB/DC environment.
IMS Batch Environment
The IMS batch environment consists of a batch region (a single address space) where an application program and IMS routines reside. The batch job that runs the batch environment is initiated with JCL, like any operating-system job.
There are two types of IMS batch environments: DB Batch and TM Batch. These environments are discussed in "DB Batch Environment" and in "TM Batch" on page 35.
DB Batch Environment
In the DB Batch environment, IMS application programs that use only IMS DB functions can be run in a separate z/OS address space that is not connected to an IMS online control region. These batch applications are typically long-running jobs that perform large numbers of database accesses.
Another aspect of a DB Batch environment is that the JCL is submitted through TSO or a job scheduler. However, all of the IMS code used by the application resides in the address space in which the application is running. The job executes an IMS batch region controller that then loads and calls the application. Figure 4-4 on page 35 shows an IMS batch region.
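A minimal sketch of such a job follows. DFSRRC00 is the IMS region controller; the program name MYPGM, the PSB name MYPSB, and the data set names are assumptions:

```jcl
//DLIJOB   JOB  (ACCT),'DB BATCH',CLASS=A
//* The PARM string names the region type (DLI), the application
//* program, and the PSB it runs with
//STEP1    EXEC PGM=DFSRRC00,PARM='DLI,MYPGM,MYPSB'
//STEPLIB  DD   DSN=IMS.SDFSRESL,DISP=SHR      IMS code
//IMS      DD   DSN=IMS.PSBLIB,DISP=SHR        PSB library
//         DD   DSN=IMS.DBDLIB,DISP=SHR        DBD library
//IEFRDER  DD   DSN=IMS.BATCH.LOG,DISP=(NEW,CATLG),
//              UNIT=SYSDA,SPACE=(CYL,(10,10))  IMS log for this job
```

Because all of the IMS code runs inside this single address space, the job needs access to the IMS libraries and its own log data set.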
Figure 4-4. Structure of an IMS DB Batch Environment
The batch address space opens and reads the IMS database data sets directly.
If multiple programs, either running under the control of an IMS control region or in other batch regions, need to access databases at the same time, then you must take steps to ensure data integrity. See Chapter 9, "Data Sharing" on page 119 for more information about how the data can be updated by multiple applications in a safe manner.
The batch region controller also writes its own IMS log data set, which DBRC can track and which is used for database recovery and backout if the job fails.
An application can be written so that it can run in both a batch address space and a BMP address space without change. You can vary the execution environment between batch and BMP address spaces depending on the expected run time, the need of other applications to access the data at the same time, and your procedures for recovering from application failures.
TM Batch Environment
IMS TM supports a batch region for running TM batch application programs. Using TM Batch, you can either take advantage of the IMS Batch Terminal Simulator for z/OS or access an external subsystem through the IMS External Subsystem Attach Facility (ESAF). One example of an external subsystem is DB2 UDB for z/OS.
You can connect DB2 UDB for z/OS in an IMS TM batch environment in one of two ways. You can use the SSM parameter on the TM batch-region execution JCL and specify the actual name of the batch program on the MBR parameter. Alternatively, you can code the DDITV02 DD statement on the batch-region execution JCL and specify the DB2 subsystem information in the data set that the DD statement references.
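The two options can be sketched as follows (DLIBATCH is the IMS-supplied batch procedure; the names DB2A, MYPGM, MYPSB, and IMS.PROCLIB(DB2SSM) are assumptions):

```jcl
//* Option 1: SSM and MBR parameters on the execution JCL
//TMBATCH  EXEC DLIBATCH,MBR=MYPGM,PSB=MYPSB,SSM=DB2A
//*
//* Option 2: a DDITV02 DD statement that points to a member
//* containing the DB2 subsystem information
//DDITV02  DD   DSN=IMS.PROCLIB(DB2SSM),DISP=SHR
```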
TM Batch does not provide DL/I database capabilities.
IMS Separate Address Spaces
The IMS control region has separate address spaces that provide some of the IMS subsystem services.
These regions are automatically started by the IMS control region as part of its initialization, and the control region does not complete initialization until these regions have started and connected to the IMS control region. All separate address spaces (except for DBRC) are optional, depending on the IMS features used. For DL/I, separate address space options can be specified at IMS initialization.
DBRC Region
The DBRC region provides all access to the DBRC recovery control (RECON) data sets. The DBRC region also generates batch jobs for DBRC (for example, for archiving the online IMS log). Every IMS control region must have a DBRC region because it is needed, at a minimum, for managing the IMS logs.
DL/I Separate Address Space
The DL/I separate address space (DLISAS) contains the code that manages the IMS full-function databases. The full-function database data sets are opened by, and most full-function database processing runs in, this address space.
For a DBCTL environment, the DLISAS is required and always present.
For a DB/DC environment, you can choose whether full-function database access is performed by the control region itself or by a DLISAS that the control region starts. For performance and capacity reasons, use a DLISAS.
DLISAS is not present for a DCCTL environment because the Database Manager functions are not present.
IMS Dependent Regions
IMS provides address spaces for the execution of system and application programs that use IMS services. These address spaces are called dependent regions.
The dependent regions are started by the submission of JCL to the operating system. The JCL is submitted as a result of a command issued to the IMS control region, through automation, or by a regular batch job submission.
After the dependent regions are started, the application programs are scheduled and dispatched by the IMS control region. In all cases, the z/OS address space executes an IMS control region program. The application program is then loaded and called by the IMS code.
Up to 999 dependent regions can be connected to one IMS control region, made up of any combination of the following dependent region types: message processing regions (MPRs), IMS Fast Path (IFP) regions, batch message processing (BMP) regions, Java message processing (JMP) regions, and Java batch processing (JBP) regions.
Table 4-1 describes the support for dependent regions by IMS environment type.
Table 4-1. Support for Dependent Region Type by IMS Environment
Message Processing Region
Message processing regions (MPRs) run applications that process messages that come into IMS TM as input (for example, from terminals or online programs). MPRs can be started by IMS submitting the JCL as a result of an IMS command. The address space does not automatically load an application program but waits until work becomes available.
Priority settings determine which MPR runs the application program. When IMS determines that an application is to run in a particular MPR, the application program is loaded into that region and receives control. The application processes the message and any further messages for that transaction that are waiting to be processed.
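As a sketch, an MPR's JCL is typically stored as a member of a job library and started with an IMS command such as /START REGION IMSMSG1. The member name IMSMSG1, the IMSID IMSA, and the class values are assumptions; DFSMPR is the IMS-supplied MPR procedure:

```jcl
//IMSMSG1  JOB  (ACCT),'MPR 1',CLASS=A
//* CL1-CL4 name the message classes this region will process;
//* IMS schedules into the region only transactions in these classes
//MPR      EXEC DFSMPR,CL1=001,CL2=002,CL3=003,CL4=004,IMSID=IMSA
```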
IMS Fast Path Region
An IMS Fast Path (IFP) region runs application programs to process messages for transactions that have been defined as Fast Path transactions.
Fast Path applications are very similar to the applications that run in an MPR. Like MPRs, the IFP regions can be started by the IMS control region submitting the JCL as a result of an IMS command. The difference between MPRs and IFP regions is in the way IMS loads and dispatches the application program and handles the transaction messages. To allow for this different processing, IMS imposes restrictions on the length of the application data that can be processed in an IFP region as a single message.
IMS uses a separate Fast Path facility, the Expedited Message Handler (EMH), to process these messages; EMH bypasses the normal IMS message queues to reduce the overhead of each transaction.
IFP regions can also be used for other types of work besides running application programs. IFP regions can be used for Fast Path utility programs. For further discussion on using these regions for other types of work, see the IMS Version 9: Installation Volume 2: System Definition and Tailoring.
Batch Message Processing Region
Unlike MPR or IFP regions, a BMP region is not usually started by the IMS control region, but is started by submitting a batch job, for example by a user from TSO or by a job scheduler. The batch job then connects to an IMS control region that is defined in the execution parameters.
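A BMP job can be sketched as follows. IMSBATCH is the IMS-supplied BMP procedure; the member, PSB, and IMSID names are assumptions:

```jcl
//BMPJOB   JOB  (ACCT),'NIGHTLY BMP',CLASS=A
//* MBR names the application program, PSB its program specification
//* block, and IMSID the IMS control region to connect to
//BMP      EXEC IMSBATCH,MBR=MYBMP,PSB=MYPSB,IMSID=IMSA
```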
Two types of applications can run in BMP regions: transaction-oriented BMP applications, which can access the IMS message queues for input and output, and batch-oriented BMP applications, which do not access the message queues.
BMP regions have access to the IMS full-function and Fast Path databases, provided that the control region has the Database Manager component installed. BMP regions can also read and write to z/OS sequential files, with integrity, using the IMS GSAM access method (see "GSAM Access Method" on page 107).
BMP regions can also be used for other types of work besides running application programs. BMP regions can be used for jobs that, in the past, were run as batch update programs. The advantage of converting batch jobs to run in BMP regions is that the batch jobs can now run alongside the online transaction workload, and these BMP applications can run concurrently instead of sequentially. For a further discussion of using these regions for other types of work, see the IMS Version 9: Installation Volume 2: System Definition and Tailoring.
Java Dependent Regions
Two IMS dependent regions provide a Java Virtual Machine (JVM) environment for Java or object-oriented COBOL applications:
Java message processing (JMP) regions
Java batch processing (JBP) regions
Figure 4-5 shows a Java application that is running in a JMP or JBP region. JDBC or IMS Java hierarchical interface calls are passed to the IMS Java layer, which converts them to DL/I calls.
Figure 4-5. JMP or JBP Application That Uses the IMS Java Function
JMP and JBP regions can run applications written in Java, object-oriented COBOL, or a combination of the two.
Related Reading: For more information about writing Java applications for IMS, see Chapter 18, "Application Programming in Java" on page 311 or IMS Version 9: IMS Java Guide and Reference .
Common Queue Server Address Space
Common Queue Server (CQS) is a generalized server that manages data objects on a z/OS coupling facility on behalf of multiple clients.
CQS uses the z/OS coupling facility as a repository for data objects. Storage in a coupling facility is divided into distinct objects called structures. Authorized programs use structures to implement data sharing and high-speed serialization. The coupling facility stores and arranges the data according to list structures. Queue structures contain collections of data objects that share the same queue name, known as queues.
CQS receives, maintains, and distributes data objects from shared queues on behalf of multiple clients. Each client has its own CQS to access the data objects on the coupling facility list structure. IMS is one example of a CQS client that uses CQS to manage both its shared queues and shared resources.
CQS runs in a separate address space that can be started by the client (IMS). The CQS client must run on the same z/OS image where the CQS address space is running.
CQS is used by IMS DCCTL and IMS DB/DC control regions if they are participating in sysplex sharing of IMS message queues or resource structures. IMS DBCTL can also use CQS and a resource structure if it is using the IMS coordinated online change function.
Clients communicate with CQS using a set of CQS client requests, which are implemented as assembler macro interfaces.
Figure 4-6. Client Systems, CQS, and a Coupling Facility
Related Reading: For complete information about CQS, see IMS Version 9: Common Queue Server Guide and Reference .
Common Service Layer
The IMS Common Service Layer (CSL) is a collection of IMS system address spaces that provide the infrastructure needed for systems management tasks in an IMSplex.
The IMS CSL reduces the complexity of managing multiple IMS systems by providing a single-image perspective of the IMSplex.
The CSL address spaces include Operations Manager (OM), Resource Manager (RM), and Structured Call Interface (SCI). They are described in the sections that follow.
Related Reading: For a further discussion of IMS in a sysplex environment, see:
Operations Manager Address Space
The Operations Manager (OM) controls the operations of an IMSplex. OM provides an application programming interface (the OM API) through which commands can be issued and responses received. With a single point of control (SPOC) interface, you can submit commands to OM. The SPOC interfaces include the TSO SPOC, the REXX SPOC API, and the IMS Control Center. You can also write your own application to submit commands.
Related Reading: For a further discussion of OM, see "Operations Manager" on page 497.
Resource Manager Address Space
The Resource Manager (RM) is an IMS address space that manages global resources and IMSplex-wide processes in a sysplex on behalf of RM's clients. IMS is one example of an RM client.
Related Reading: For a further discussion of RM, see "Resource Manager" on page 498.
Structured Call Interface Address Space
The Structured Call Interface (SCI) allows IMSplex members to communicate with one another. The communication between IMSplex members can occur within a single z/OS image or among multiple z/OS images.
Related Reading: For a further discussion of SCI, see "Structured Call Interface" on page 498.
Internal Resource Lock Manager
The internal resource lock manager (IRLM) is delivered as an integral part of IMS, but you do not have to install or use it unless you need to perform block-level or sysplex data sharing. IRLM is also the required lock manager for DB2 UDB for z/OS.
The IRLM address space is started before the IMS control region with the z/OS START command. If the IMS start-up parameters specify IRLM, the IMS control region connects to the IRLM that is specified on startup and does not complete initialization until the connection is successful.
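The sequence can be sketched as follows; the IRLM procedure name IRLMPROC and the IRLM name IRLA are assumptions:

```jcl
//* 1. Start IRLM from the z/OS console before starting IMS:
//*       S IRLMPROC
//* 2. The IMS control region start-up parameters then request IRLM,
//*    for example:
//*       IRLM=Y,IRLMNM=IRLA
//* IMS initialization waits until the connection to IRLA succeeds.
```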
Typically, one IRLM address space runs on each z/OS system to service all IMS subsystems that share the same set of databases. For more information about data sharing in a sysplex environment, see Chapter 9, "Data Sharing" on page 119.
Do not use the same IRLM address space for IMS and DB2 UDB for z/OS because the tuning requirements of IMS and DB2 are different and conflicting. The IRLM code is delivered with both IMS and DB2 UDB for z/OS and can be installed from either product; run a separate IRLM instance for each.