2.5 Log streams


System Logger exploiters write data to log streams, which can be thought of as simply a collection of data. Log stream storage spans interim storage and offload data sets, where the type of interim storage used depends on the type of log stream.

As discussed previously, the IXCMIAPU utility and optionally IXGINVNT services, are used to define, update, and delete both CF-Structure based and DASD-only log streams. Before choosing which type of log stream (CF-Structure or DASD-only) is best suited to your application, you should consider a couple of attributes of the log data that affect the decision:

  • The location and concurrent activity of writers and readers to a log stream's log data.

  • The volume (data flow) of log stream data.

We will discuss these considerations in the following sections.

It is also important to understand how the parameters specified in the log stream definition will impact storage usage and performance. Some parameters have different meanings or exist only for one type of log stream as indicated in Table 2-3 on page 22 (note that recommendations still might be different for each type of log stream).

Table 2-3: Parameters by log stream type

  Parameter        Definition same for both log stream types?
  --------------   -------------------------------------------
  NAME             same
  RMNAME           same
  DESCRIPTION      same
  DASDONLY         different
  STRUCTNAME       only CF-Structure based
  MAXBUFSIZE       only DASD-only
  STG_DUPLEX       only CF-Structure based
  DUPLEXMODE       only CF-Structure based
  LOGGERDUPLEX     only CF-Structure based
  STG_DATACLAS     same
  STG_MGMTCLAS     same
  STG_STORCLAS     same
  STG_SIZE         different
  LS_DATACLAS      same
  LS_MGMTCLAS      same
  LS_STORCLAS      same
  LS_SIZE          same
  AUTODELETE       same
  RETPD            same
  HLQ              same
  EHLQ             same
  HIGHOFFLOAD      different
  LOWOFFLOAD       different
  LIKE             same
  MODEL            same
  DIAG             same
  OFFLOADRECALL    same

2.5.1 CF-Structure based log streams

CF-Structure based log streams can contain data from multiple systems, allowing System Logger applications to merge data from multiple systems throughout the sysplex. These log streams use CF structures for interim storage. Aged log data is then offloaded to offload data sets for more permanent storage. Figure 2-3 shows how a CF-Structure based log stream spans the two levels of storage.

Figure 2-3: Storage levels for a CF-Structure based log stream

When an exploiter writes a log block to a CF-Structure based log stream, System Logger first writes the data to the CF structure; the log block is then duplexed to another storage medium (chosen based on several factors to be discussed later). When the CF structure space allocated for the log stream reaches the installation-defined high threshold, System Logger moves log blocks from the CF structure to offload data sets, so that the CF space for the log stream is available to hold new log blocks. From an exploiter's point of view, the actual location of the log data in the log stream is transparent; however, there are configurations that can affect system performance (such as the use of staging data sets).

There are a number of things you must take into account before and after you have decided to use CF-Structure based log streams. If you decide you should use them, then you should also have a basic understanding of how they work. We spend the rest of this section addressing these points.

When should my application use CF-Structure based log streams?

Earlier we discussed some factors to be taken into consideration when deciding what type of log stream your application should use:

  • The location and concurrent activity of readers and writers to a log stream's log data.

    Will there be more than one concurrent log writer and/or log reader to the log stream from more than one system in the sysplex? For instance, if your System Logger exploiter merges data from multiple systems in the sysplex and you expect the log stream to be connected and be written to by these systems at the same time, CF-Structure based log streams are the only option.

  • The volume (data flow) of log stream data.

    Will there be large volumes of log data recorded to the log stream? Since DASD-only log streams always use staging data sets, high volume writers of log data may be throttled back by the I/O required to record each log block sequentially to the log stream's staging data sets. Note that even CF-Structure based log streams may use staging data sets for duplexing, depending on the environment System Logger is running in and the parameters specified for the log stream in the System Logger policy.

In addition to these, you should consider any advice given by the exploiter; some exploiters may not recommend DASD-only log streams (APPC/MVS, for example).

Requirement: 

To use a CF-Structure based log stream, you must have access to a CF, even for single system scope applications.

Setting up for and defining CF structures

If you have decided that DASD-only log streams are more appropriate for your applications, skip to 2.5.2, "DASD-only log streams" on page 44 now as the remainder of this section discusses the use of CF-Structure log streams.

If you are reading this, you've determined that a CF-Structure based log stream suits the needs of your exploiter, and now you need to set up for them. CF-Structure based log streams require a CF structure, which must be defined in the CFRM policy as well as in the System Logger policy. There are a few questions and concepts that merit further discussion, after which we'll discuss specific definition parameters.

How many CF structures do I need?

There are some general recommendations to keep in mind when determining how many CF structures you will need in the sysplex, as well as your log stream configuration:

  • You should always refer to the System Logger exploiter recommendations.

  • It is a good idea to group log streams of similar type (active or funnel-type) and characteristics together, because of the way System Logger allocates space within the CF structures. When you have more than one log stream using a single CF structure, System Logger divides the structure storage equally among the log streams that have at least one connected System Logger application.

    For example, if an installation assigns three log streams to a single structure, but only one log stream has a connected application, then that one log stream can use the entire CF structure. When an exploiter connects to the second log stream, System Logger dynamically divides the structure evenly between the two log streams. When another exploiter connects to the third log stream, System Logger allocates each log stream a third of the CF space.

    The block size and write rate of the log streams should also be similar (for more information on how this can impact you, see "The entry-to-element ratio" on page 26). Having this information can help you understand and plan how much CF space you have for each log stream and predict how often log stream data will be written to DASD as the CF space becomes filled.

  • Another important consideration is assuring that peer recovery can work where possible. Successful peer recovery requires planning your System Logger configuration such that multiple systems in the sysplex connect to the same CF structure (they don't have to be connected to the same log stream, just the same structure). If there is no peer connection available to perform recovery for a failed system, recovery is delayed until either the failing system re-IPLs, or another system connects to a log stream in the same CF structure to which the failing system was connected. Where possible, you should always plan for peer recovery. For more information, see "Peer and same system log stream recovery" on page 69.

  • In general, try to keep the number of log streams per structure as small as possible; we recommend 10 or fewer. This is related to the LOGSNUM parameter (discussed further in "Defining CF-Structure based log streams" on page 32), which specifies the maximum number of log streams for each structure. The main reason to hold to this recommendation is that System Logger connect and log stream recovery processing (which affect the restart time of System Logger applications) has been optimized to provide parallelism at the CF structure level. Therefore, the more structures you use, the greater the parallelism you get during log stream connect and rebuild processing. For example, if you have one structure with four log streams defined to it, System Logger has to process connect requests to each sequentially. If you divide those log streams among multiple structures (for instance, four structures, each containing one log stream), System Logger can process the connect requests to each log stream in parallel.

  • System Logger has different numbers of subtasks to carry out different processes.

    For example, System Logger has one task per system to allocate or delete offload data sets. This means that if an allocation or delete request is delayed, all other allocation or delete requests on that system will queue for that task. This is one of the reasons why we recommend sizing your offload data sets large enough to avoid very frequent allocation of new data sets.

    On the other hand, System Logger has one offload thread per log stream. So, even if you have ten log streams in the one structure, it would still be possible for all of them to be going through the offload process at the same time.

    However, the attribute that should impact your decision of whether to use DASD-only or CF-Structure log streams is the number of tasks to process IXGCONN requests. There is one connect task per structure, but only one allocation task for all staging data sets. So, let's say you were restarting after a system failure, and all the exploiters are trying to connect back to their log streams. If you are using CF-Structure log streams, and had ten structures, System Logger could process ten connect requests in parallel. However, if all the log streams were DASD-only log streams, System Logger would only process one connect request at a time, obviously elongating the time it would take for all the exploiters to get connected back to their log streams again.

The entry-to-element ratio

One consideration when planning for CF structure usage and log stream placement is ensuring CF storage is used in an efficient manner. To better understand how System Logger attempts to regulate storage usage requires that we discuss entries and elements.

The important CF structure definition parameters for this discussion are MAXBUFSIZE and AVGBUFSIZE (see "Defining CF-Structure based log streams" on page 32 for details and a complete definition for each). MAXBUFSIZE is used to determine the size of the elements System Logger will use, either 256 bytes (if MAXBUFSIZE is less than or equal to 65276) or 512 bytes (if MAXBUFSIZE is greater than 65276). AVGBUFSIZE (or, after System Logger recalculates this value, the effective AVGBUFSIZE) is then used to determine the entry-to-element ratio, so that the ratio is defined as 1 entry per the number of elements required to hold an average log block written to the structure. For example, if the element size is 256 bytes (because you specified a MAXBUFSIZE of 65276 or less), and you specify an AVGBUFSIZE of 2560, the initial entry-to-element ratio would be 1:10.

This is a critical point to consider when planning which log streams should reside in the same structure. Entries and elements are created from the pool of storage assigned to a CF structure, based on the entry-to-element ratio currently in use. Let's say, for example, a ratio of 1 entry to 10 elements yields a pool of 1000 entries and 10000 elements. The entries are placed in a pool that can be used by any log stream, and the elements are divided evenly among all the log streams currently connected to the structure, as shown in Figure 2-4 on page 27.

Figure 2-4: Entries and elements divided among connected log streams

Every 30 minutes, System Logger queries the CF structure to determine if the entry-to-element ratio should be dynamically altered. If there is more than a 10% difference between the existing ratio setting and the current "in-use" ratio, System Logger will attempt to alter the ratio (at least 20% of the entries and elements need to be in use for System Logger to issue the request, and if more than 90% of either are in use, the request may not be honored). The ratio alteration will result in an effective AVGBUFSIZE being used instead of the defined value; the effective AVGBUFSIZE can be seen in the IXCMIAPU LIST STRUCTURE report.

You should note that this is not an exact science, as the alteration can be affected by temporary spikes in the number of writes or block size; however, over time the value used should closely reflect the average "in-use" ratio. The exception case is where log streams with significantly different characteristics are defined in the same structure. In this case, the best option is to separate the log streams into different structures.

Let's look at an example of how a log stream writing a different average log block size and a significantly different number of writes per second can impact CF structure efficiency. In Figure 2-5 on page 28, there are three log streams defined to a structure with an entry-to-element ratio of 1:10.

Figure 2-5: Mixing different profile log streams in the same structure

In this example, log streams A and B are similar in both the entry-to-element ratio and number of writes per second; log stream C writes a much smaller log block size and with a greater frequency. Remember, however, they are all given an equal number of elements. Assuming the log streams continue to display the characteristics shown in Figure 2-5, let's examine the results as time passes.

At time 0 as shown in Figure 2-6 on page 28, the three log streams, all defined to the same CF structure, have been allocated an equal number of elements (1/n of the available pool) and they all have access to the pool of 3000 entries.

Figure 2-6: Entry/element usage at time=0

Figure 2-7 shows at time=2 the beginning of a usage problem. Notice that while log streams A and B are a good pair for this structure, log stream C has already used 33% of the entry pool, but only about 7% of the total element pool.

Figure 2-7: Entry/element usage at time=2

Figure 2-8 shows that after 4 seconds, log stream C is using 66% of all entries, far more than its "fair" share. It will soon run out of entries in the structure, and yet it still has 6000 empty elements. By contrast, log streams A and B are using their fair share and are good candidates to reside in the same structure. In fact, in this example, an offload would be initiated for all the log streams in the structure once 90% of entries have been used. This type of offload moves data from all the log streams in the structure and is more disruptive than a normal offload that is kicked off because the HIGHOFFLOAD threshold for a single log stream has been reached. (For more information about offload processing, refer to 2.6, "Offload processing" on page 57.)

Figure 2-8: Entry/element usage at time=4

Every 30 minutes System Logger would attempt to alter the entry-to-element ratio in this example, decreasing the pool of available elements and increasing the number of entries for a ratio of around 1 entry to 7 elements (assuming the log stream characteristics in this example stay relatively static). This ratio change would not fix any problems though; assuming we start at time=0 again, it is easy to see that at a future time we will encounter element-full conditions for log streams A and B as they are still using 10 elements per entry and the existing entry-full condition for log stream C will continue to occur. The only way to efficiently resolve this situation would be to move log stream C to a different structure, with a smaller AVGBUFSIZE specified.

Defining CF structures - CFRM policy

Before using a CF structure defined in the System Logger policy, it must first be defined in the CFRM policy using the IXCMIAPU utility. For a complete list of parameters and an explanation of how to define the CFRM policy, see topic C.2.2 in z/OS MVS Setting Up a Sysplex, SA22-7625. In this section we address the parameters critical to System Logger.

It should also be noted that use of any System-Managed Duplexing-related keyword implies that the CFRM CDS has been formatted using the SMDUPLEX(1) keyword.
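As a hypothetical illustration (the sysplex name, data set name, volume, and item counts below are placeholders only), a CFRM CDS that supports System-Managed Duplexing would be formatted with the IXCL1DSU utility using an SMDUPLEX item, along the lines of:

    DEFINEDS SYSPLEX(PLEX1)
             DSN(SYS1.CFRM.CDS01) VOLSER(CDSVOL)
             CATALOG
      DATA TYPE(CFRM)
        ITEM NAME(POLICY) NUMBER(8)
        ITEM NAME(CF) NUMBER(4)
        ITEM NAME(STR) NUMBER(64)
        ITEM NAME(SMDUPLEX) NUMBER(1)

If your CFRM CDS was formatted without the SMDUPLEX item, it must be reformatted and brought into use before any System-Managed Duplexing-related keywords can take effect.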

Note 

It is a good idea to follow the System Logger application recommendations for CF structure values. You can view recommendations for those applications covered in this book in each of their respective sections.

The CFRM policy keywords that are of interest from a System Logger perspective are:

NAME

Must be the same here as specified on the DEFINE STRUCTURE keyword in the System Logger policy.

SIZE, INITSIZE

SIZE is the largest size the structure can be increased to without updating the CFRM policy. It is specified in 1 KB units.

INITSIZE is the initial amount of storage to be allocated for the structure in the CF. The number is also specified in units of 1 KB. INITSIZE must be less than or equal to SIZE. The default for INITSIZE is the SIZE value.

You should always refer to the System Logger application recommendations for sizing your CF structures. See the application chapters in this book for recommendations for some of the more common System Logger applications.

IBM provides a Web-based tool to help you determine initial values to use for SIZE and INITSIZE; it is called the CF Sizer and can be found at:

http://www.ibm.com/servers/eserver/zseries/cfsizer

Before using the CF Sizer tool, you will need to know some information about the type of log streams that will be connected to the CF structure and the log stream definitions you will use. We will discuss some of the parameters requested by CF Sizer in this chapter. For further information, see topic 9.4.3. in z/OS MVS Setting Up a Sysplex, SA22-7625.

ALLOWAUTOALT

ALLOWAUTOALT(YES) allows the CF structure size and its entry-to-element ratio to be altered automatically by XES when the CF or the structure is constrained for space.

This parameter should always be set to NO (the default) for System Logger CF structures.

As described previously, System Logger has functions for altering the entry-to-element ratio of a structure. Having two independent functions (XES and System Logger) both trying to adjust the ratio can lead to inefficient and unexpected results.

System Logger also manages the usable space within the CF structure and provides for data offloading to offload data sets. If XES were to keep increasing the structure size, it is possible that the log stream would never get to its high offload threshold until the structure had reached its maximum size as defined on the SIZE parameter in the CFRM policy. System Logger is designed to manage the space within the structure size you give it—letting XES adjust the structure size negates this function.

Finally, because System Logger is constantly offloading data from the structure to DASD, it is likely that, on average, System Logger CF structures will appear to be only half full. This makes them prime candidates for XES to steal space from should the CF become storage-constrained. Specifying ALLOWAUTOALT(NO) protects the structure from this processing.

Refer to the section entitled "Define the Coupling Facility Structures Attributes in the CFRM Policy Couple Data Set" in z/OS MVS Setting Up a Sysplex, SA22-7625, for a discussion of what can happen if you enable ALLOWAUTOALT for a System Logger structure.

DUPLEX

The DUPLEX parameter is used to specify if and when the CF structure data will be duplexed. Use of this parameter implies that the sysplex is enabled for System-Managed Duplexing. There are three options for DUPLEX:

  • ENABLED: When DUPLEX(ENABLED) is specified for a structure in the CFRM active policy, the system will automatically attempt to initiate duplexing (either user-managed or system-managed) for that structure as soon as it is allocated.

  • ALLOWED: When DUPLEX(ALLOWED) is specified for a structure, the structure is eligible for user-managed or System-Managed Duplexing, however the duplexing must be initiated by a connector or by the operator; that is, the structure will not automatically be duplexed by XES.

  • DISABLED: Specifies that neither user-managed nor System-Managed Duplexing can be used for the specified structure.

Note 

Use of DUPLEX(ENABLED) or DUPLEX(ALLOWED) will impact how System Logger duplexes log data in interim storage. For more information on System Logger duplexing of interim storage, see 2.8.1, "Failure independence" on page 65.
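As a sketch of how these keywords come together, the CFRM policy input to IXCMIAPU for a System Logger structure might look like the following. The structure name LOG_TEST_001 matches the examples later in this chapter; the policy name, sizes, and CF names in the preference list are placeholder assumptions, and the CF statements themselves are assumed to be defined elsewhere in the same policy:

    DATA TYPE(CFRM) REPORT(YES)
    DEFINE POLICY NAME(CFRM01) REPLACE(YES)
      STRUCTURE NAME(LOG_TEST_001)
                SIZE(32768)
                INITSIZE(16384)
                ALLOWAUTOALT(NO)
                DUPLEX(ALLOWED)
                PREFLIST(FACIL01,FACIL02)

Remember that SIZE and INITSIZE are specified in 1 KB units, so this sketch describes a structure that starts at 16 MB and can grow to 32 MB without a CFRM policy update.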

Defining CF structures - System Logger policy

Now that you have successfully defined your CF structures to the CFRM policy, you can define them to the System Logger policy using either the IXCMIAPU utility or the IXGINVNT macro. The following parameters are used to define CF structures to the System Logger policy:

STRUCTNAME

Specifies the name of the CF structure you are defining. STRUCTNAME must match the structure name as defined in the CFRM policy, and the structure name as specified on the corresponding log stream definitions.

For CF-Structure based log streams, this is the structure that will be used as interim storage before data is offloaded to offload data sets.

LOGSNUM

Specifies the maximum number of log streams that can be allocated in the CF structure being defined. logsnum must be a value between 0 and 512.

As we discussed previously in "How many CF structures do I need?" on page 25, the value specified for logsnum should ideally be no higher than 10.

MAXBUFSIZE

Specifies the size, in bytes, of the largest log block that can be written to log streams allocated in this structure. The value for MAXBUFSIZE must be between 1 and 65532 bytes. The default is 65532 bytes.

The MAXBUFSIZE is used to determine what element size will be used; if the value specified is less than or equal to 65276, System Logger will use 256 byte elements. If it is over 65276, 512 byte elements will be used. See "The entry-to-element ratio" on page 26 for more information.

Unless you specifically need a buffer size larger than 65276, we recommend that you specify MAXBUFSIZE=65276.

AVGBUFSIZE

Specifies the average size, in bytes, of log blocks written to all the log streams using this CF structure. AVGBUFSIZE must be between 1 and the value for MAXBUFSIZE. The default value is 1/2 of the MAXBUFSIZE value.

System Logger uses the average buffer size to control the initial entry-to-element ratio for the structure. See "The entry-to-element ratio" on page 26 for more information.

Tip 

Starting with z/OS V1R3, it is no longer necessary to delete and redefine the log streams defined to a CF structure if you wish to move them to another structure. See 2.5.3, "Updating log stream definitions" on page 54 for more information.
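To illustrate how these parameters fit together, the following IXCMIAPU statements would define the structure to the System Logger policy; the LOGSNUM, MAXBUFSIZE, and AVGBUFSIZE values are assumptions for the sketch, not recommendations for any particular exploiter:

    DATA TYPE(LOGR) REPORT(YES)
    DEFINE STRUCTURE NAME(LOG_TEST_001)
           LOGSNUM(5)
           MAXBUFSIZE(65276)
           AVGBUFSIZE(4096)

With MAXBUFSIZE(65276) the element size is 256 bytes, so AVGBUFSIZE(4096) results in an initial entry-to-element ratio of 1:16.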

Defining CF-Structure based log streams

Now that you have planned your log stream configuration and defined the necessary CF structures, it is time to define the log streams. This is done using the IXCMIAPU utility or the IXGINVNT service. Let's take a look at the different parameters and how they impact System Logger operations.

NAME

Specifies the name of the log stream that you want to define. The name can be made up of one or more segments separated by periods, up to the maximum length of 26 characters.

NAME is a required parameter with no default. Besides being the name of the log stream, it is used as part of the offload data set and staging data set names:

  • offload data set example: <hlq>.logstreamname.<seq#>

  • staging data set example: <hlq>.logstreamname.<system>

Note that if the log stream name combined with the HLQ is longer than 33 characters, the data component of the offload data sets (and possibly the staging data sets) may contain a system-generated low level qualifier, for example:

    IXGLOGR.A2345678.B2345678.C2345678.A0000000 (Cluster)
    IXGLOGR.A2345678.B2345678.C2345678.IE8I36RM (Data)

RMNAME

Specifies the name of the resource manager program associated with the log stream. RMNAME must be 8 alphanumeric or national ($,#,or @) characters, padded on the right with blanks if necessary. You must define RMNAME in the System Logger policy before the resource manager can connect to the log stream. See the System Logger chapter in z/OS MVS Assembler Services Guide, SA22-7605, for information on writing a resource manager program to process a log stream.

DESCRIPTION

Specifies user-defined data describing the log stream. DESCRIPTION must be 16 alphanumeric or national ($,#,@) characters, underscore (_) or period (.), padded on the right with blanks if necessary.

DASDONLY

Specifies whether the log stream being defined is a CF or a DASD-only log stream. This is an optional parameter, with the default being DASDONLY(NO).

Since we are reviewing CF-Structure based log streams in this section, DASDONLY(NO) would be used to indicate that we do not want a DASD-only log stream.

STRUCTNAME

Specifies the name of the CF structure associated with the log stream being defined.

The CF structure must have already been defined to both the CFRM policy and System Logger policy as discussed in "Setting up for and defining CF structures" on page 24.

STG_DUPLEX

Specifies whether this log stream is a candidate for duplexing to DASD staging data sets.

If you specify STG_DUPLEX(NO), which is the default, log data for a CF-Structure based log stream will be duplexed in System Logger-owned dataspaces, making the data vulnerable to loss if your configuration contains a single point of failure.

If you specify STG_DUPLEX(YES), log data for a CF log stream will be duplexed in staging data sets when the conditions defined by the DUPLEXMODE parameter are met. This method ensures that log data is protected from a system or CF failure.

For more information about duplexing of log data by System Logger, refer to 2.8.1, "Failure independence" on page 65.

Note 

Even if you do not plan on using staging data sets, we recommend that you plan for them as there are some failure scenarios under which they would still be used to maintain System Logger's failure independence. If you have installed the PTF for APAR OA03001 you can specify STG_DATACLAS, STG_MGMTCLAS, STG_STORCLAS and STG_SIZE parameters even with STG_DUPLEX(NO). Without the PTFs installed, you cannot specify any of those parameters with STG_DUPLEX(NO)—in this case, an ACS routine is necessary to associate SMS classes with the staging data sets.

DUPLEXMODE

Specifies the conditions under which the log data for a CF log stream should be duplexed in DASD staging data sets.

If you specify DUPLEXMODE(COND), which is the default, the log data will be duplexed in staging data sets only if a system's connection to the CF-Structure based log stream contains a single point of failure and is therefore vulnerable to permanent log data loss.

If you specify DUPLEXMODE(UNCOND), the log data for the CF-Structure based log stream will be duplexed in staging data sets, regardless of whether the connection is failure independent.

LOGGERDUPLEX

Specifies whether Logger will continue to provide its own log data duplexing if the structure containing the log stream is being duplexed using System-Managed Duplexing and the two structure instances are failure-isolated from each other.

  • LOGGERDUPLEX(UNCOND) indicates that System Logger should use its own duplexing of the log data regardless of whether System-Managed Duplexing is being used for the associated structure.

  • LOGGERDUPLEX(COND) indicates that System Logger will only use its own duplexing if the two structure instances are in the same failure domain. If the two instances are failure-isolated from each other, System Logger will not duplex the log data to either a staging data set or a dataspace.

CDS requirement: 

The active primary TYPE=LOGR CDS in the sysplex must be formatted at a HBB7705 or higher level in order to support the use of the LOGGERDUPLEX keyword. Otherwise, the define request will fail. See "LOGR CDS format levels" on page 14 for more information.

STG_DATACLAS

Specifies the name of the SMS data class that will be used when allocating the DASD staging data sets for this log stream.

If you specify STG_DATACLAS(NO_STG_DATACLAS), which is the default, the data class is defined through SMS ACS routine processing. An SMS value specified on the STG_DATACLAS parameter, including NO_STG_DATACLAS, always overrides one specified on a model log stream used on the LIKE parameter.

Whether it is preferable to have the DATACLAS association in the LOGR policy, or in the SMS ACS routines depends on your installation software management rules.

Either way, you need to provide a DATACLAS where SHAREOPTIONS (3,3) has been specified for the allocation of the staging data sets. SHAREOPTIONS (3,3) is required to allow you to fully share the data sets across multiple systems within the GRS complex. If your system is running in a monoplex configuration, SHAREOPTIONS (1,3), which is the default on the dynamic allocation call, is allowed.

See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS.

Important: 

Do not change the Control Interval Size attributes for staging data sets to be anything other than 4096. System Logger requires the Control Interval Size for staging data sets be set to 4096. If staging data sets are defined with characteristics that result in a Control Interval Size other than 4096, System Logger will not use them. Operations involving staging data sets defined with a Control Interval Size other than 4096 will fail to complete successfully.

STG_MGMTCLAS

Specifies the name of the SMS management class used when allocating the staging data sets for this log stream.

If you specify STG_MGMTCLAS(NO_STG_MGMTCLAS), which is the default, the management class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the STG_MGMTCLAS parameter, including NO_STG_MGMTCLAS, always overrides one specified on a model log stream used on the LIKE parameter.

For information on defining SMS management classes, see Chapter 5 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

STG_STORCLAS

Specifies the name of the SMS storage class used when allocating the DASD staging data sets for this log stream.

If you specify STG_STORCLAS(NO_STG_STORCLAS), which is the default, the storage class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the STG_STORCLAS parameter, including NO_STG_STORCLAS, always overrides one specified on a model log stream used on the LIKE parameter.

For information on defining SMS storage classes, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402, Chapter 6.

STG_SIZE

Specifies the size, in 4 KB blocks, of the DASD staging data set for the log stream being defined. (Note that the size of the CF structures is specified in 1 KB blocks.)

When specified, this value will be used to specify a space allocation quantity on the log stream staging data set allocation request. It will override any size characteristics specified in the data class (via STG_DATACLAS) or in a data class that gets assigned via a DFSMS ACS routine.

If you omit STG_SIZE for a CF-Structure based log stream, System Logger does one of the following, in the order listed, to allocate space for staging data sets:

  • Uses the STG_SIZE of the log stream specified on the LIKE parameter, if specified.

  • Uses the maximum CF structure size for the structure to which the log stream is defined. This value is obtained from the value defined on the SIZE parameter for the structure in the CFRM policy.

Note that if both the STG_DATACLAS and STG_SIZE are specified, the value for STG_SIZE overrides the space allocation attributes for the data class specified on the STG_DATACLAS value.

Of all the staging data set-related parameters, STG_SIZE is the most important. It is important to tune the size of the staging data sets when you first set up the log stream, and then re-check them every time you change the size of the associated CF structure and/or the number of log streams in that structure.
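As a simple sizing sketch, assume the structure is defined with SIZE(32768) in the CFRM policy, as in the earlier CFRM example, that is, 32768 KB. Because STG_SIZE is expressed in 4 KB blocks, a staging data set roughly large enough to hold the entire structure would be 32768 / 4 = 8192 blocks. A hypothetical definition fragment (the log stream name is a placeholder) might therefore specify:

    DEFINE LOGSTREAM NAME(TEST.CF.STREAM)
           STRUCTNAME(LOG_TEST_001)
           STG_DUPLEX(YES) DUPLEXMODE(COND)
           STG_SIZE(8192)

Treat such a value only as a starting point, and refine it using the sizing guidance referenced in the note that follows.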

Note 

Poor sizing of the DASD staging data sets can result in poor System Logger performance and inefficient use of resources. For more information on using the STG_SIZE parameter, see "Estimating log stream sizes" on page 264.

LS_DATACLAS

Specifies the name of the SMS data class that will be used when allocating the DASD offload data sets for this log stream.

If you specify LS_DATACLAS(NO_LS_DATACLAS), which is the default, the data class is defined through SMS ACS routine processing. An SMS value specified on the LS_DATACLAS parameter, including NO_LS_DATACLAS, always overrides one specified on a model log stream used on the LIKE parameter.

Whether it is preferable to have the DATACLAS association in the LOGR policy, or in the SMS ACS routines depends on your installation software management rules.

Either way, you need to provide a DATACLAS where SHAREOPTIONS (3,3) has been specified for the allocation of the offload data sets. SHAREOPTIONS (3,3) is required to allow you to fully share the data sets across multiple systems within the GRS complex. If your system is running in a monoplex configuration, SHAREOPTIONS (1,3), which is the default on the dynamic allocation call, is allowed.

See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS.

Recommendation: 

To ensure optimal I/O performance, we recommend you use a CISIZE of 24576 bytes for the offload data sets. The data class you specify on the LS_DATACLAS parameter should be defined to use this CISIZE.

LS_MGMTCLAS

Specifies the name of the SMS management class to be used when allocating the offload data sets.

If you specify LS_MGMTCLAS(NO_LS_MGMTCLAS), which is the default, the management class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the LS_MGMTCLAS parameter, including NO_LS_MGMTCLAS, always overrides one specified on a model log stream used on the LIKE parameter.

For information on defining SMS management classes, see Chapter 5 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

LS_STORCLAS

Specifies the name of the SMS storage class to be used when allocating the offload data sets.

If you specify LS_STORCLAS(NO_LS_STORCLAS), which is the default, the storage class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the LS_STORCLAS parameter, including NO_LS_STORCLAS, always overrides one specified on a model log stream used on the LIKE parameter.

For information on defining SMS storage classes, see Chapter 6 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

LS_SIZE

Specifies the size, in 4 KB blocks, of the log stream offload DASD data sets for the log stream being defined.

When specified, this value will be used to specify a space allocation quantity on the log stream offload data set allocation request. It will override any size characteristics specified in an explicitly specified data class (via LS_DATACLAS) or a data class that gets assigned via a DFSMS ACS routine.

The smallest valid LS_SIZE value is 16 (64 KB), however in practice you would be unlikely to want such a small offload data set.

The largest size that a log stream offload data set can be defined for System Logger use is slightly less than 2 GB. If the data set size is too large, System Logger will automatically attempt to reallocate it smaller than 2 GB. The largest valid LS_SIZE value is 524287.

If you omit LS_SIZE, System Logger does one of the following, in the order listed, to allocate space for offload data sets:

  • Uses the LS_SIZE of the log stream specified on the LIKE parameter, if specified.

  • Uses the size defined in the SMS data class for the log stream offload data sets.

  • Uses dynamic allocation rules as specified in the ALLOCxx member of SYS1.PARMLIB for allocating data sets, if SMS is not available. Unless you specify otherwise in ALLOCxx, the default allocation amount is two tracks—far too small for most System Logger applications.

We recommend you explicitly specify an LS_SIZE for each log stream even if you are using an SMS data class, as the value specified in the data class is unlikely to be appropriate for every log stream.

Note 

Poor sizing of the offload data sets can result in System Logger performance issues, particularly during offload processing. For more information on using the LS_SIZE parameter, see "Sizing offload data sets" on page 264.

AUTODELETE

Specifies when System Logger can physically delete log data.

If you specify AUTODELETE(NO), which is the default, System Logger physically deletes an offload data set only when both of the following are true:

  • All the log data in the data set has been marked for deletion by a System Logger application using the IXGDELET service or by an archiving procedure like CICS/VR for CICS log streams or IFBEREPS, which is shipped in SYS1.SAMPLIB, for LOGREC log streams.

  • The retention period for all the data in the offload data set has expired.

If you specify AUTODELETE(YES), System Logger automatically deletes log data whenever data is either marked for deletion (using the IXGDELET service) or the retention period for all the log data in the offload data set has expired.

Be careful when using AUTODELETE(YES) if the System Logger application manages log data deletion using the IXGDELET service. With AUTODELETE(YES), System Logger may delete data that the application expects to be accessible. If you specify AUTODELETE(YES) with RETPD(0), data is eligible for deletion as soon as it is written to the log stream.

For more information on AUTODELETE and deleting log data, see 2.7, "Deleting log data" on page 62.

Important: 

Consult your System Logger application recommendations before using either the AUTODELETE or RETPD parameters. Setting these parameters inappropriately can result in significant performance and usability issues.

RETPD

Specifies the retention period, in days, for log data in the log stream. The retention period begins when data is written to the log stream, not the offload data set. Once the retention period for all the log blocks in an offload data set has expired, the data set is eligible for physical deletion. The point at which System Logger physically deletes the data set depends on what you have specified on the AUTODELETE parameter and RETPD parameters:

  • RETPD=0 - System Logger physically deletes expired offload data sets whenever offload processing is invoked, even if no log data is actually moved to any offload data sets.

  • RETPD>0 - System Logger physically deletes expired offload data sets only when an offload data set fills and a new one gets allocated for the log stream.

System Logger will not process a retention period or delete data for log streams that are not connected and being written to by an application.

For example, a RETPD=1 would specify that any data written today would be eligible for deletion tomorrow. RETPD=10 would indicate data written today is eligible for deletion in 10 days.

The value specified for RETPD must be between 0 and 65,536.

For more information on RETPD and deleting log data, see 2.7, "Deleting log data" on page 62.

HLQ

Specifies the high level qualifier for both the offload and staging data sets.

If you do not specify a high level qualifier, or if you specify HLQ(NO_HLQ), the log stream will have a high level qualifier of IXGLOGR (the default value). If you specified the LIKE parameter, the log stream will have the high level qualifier of the log stream specified on the LIKE parameter. The value specified for HLQ overrides the high level qualifier for the log stream specified on the LIKE parameter.

HLQ and EHLQ are mutually exclusive and cannot be specified for the same log stream definition.

EHLQ

Specifies the extended high level qualifier for both the offload and staging data sets. The EHLQ parameter was introduced in z/OS 1.4, and log streams specifying this parameter can only be connected to by systems running this level of z/OS or higher.

EHLQ and HLQ are mutually exclusive and cannot be specified for the same log stream definition.

When the EHLQ parameter is not explicitly specified on the request, the resulting high level qualifier to be used for the log stream data sets will be based on whether the HLQ or LIKE parameters are specified. If the HLQ parameter is specified, then that value will be used for the offload data sets. When no high level qualifier is explicitly specified on the DEFINE LOGSTREAM request, but the LIKE parameter is specified, then the high level qualifier value being used in the referenced log stream will be used for the newly defined log stream. If the EHLQ, HLQ, and LIKE parameters are not specified, then the default value "IXGLOGR" will be used.

When EHLQ=NO_EHLQ is specified or defaulted to, the resulting high level qualifier will be determined by the HLQ value from the LIKE log stream or using a default value.

See Example 2-1 for usage examples.

Example 2-1: Usage of EHLQ keyword

1) Assume the OPERLOG log stream (NAME=SYSPLEX.OPERLOG) is defined with
   "EHLQ=MY.OWN.PREFIX". The offload data set would be allocated as:

      MY.OWN.PREFIX.SYSPLEX.OPERLOG.suffix

   where the suffix is provided by System Logger.

2) Assume the OPERLOG log stream (NAME=SYSPLEX.OPERLOG) is attempted to be
   defined with "EHLQ=MY.PREFIX.IS.TOO.LONG". Even though the EHLQ value is
   less than the maximum 33 characters, an offload data set cannot be
   allocated with an overall name greater than 44 characters. In this
   example, the define request would fail:

      MY.PREFIX.IS.TOO.LONG.SYSPLEX.OPERLOG.suffix

   The above data set name is not valid because it is too long.

CDS requirement: 

The active primary TYPE=LOGR CDS must be formatted at a HBB7705 or higher level in order to specify the EHLQ keyword. Otherwise, the request will fail. See "LOGR CDS format levels" on page 14 for more information.

HIGHOFFLOAD

Specifies what percentage of the elements in the CF portion of the log stream can be used before offload processing is invoked. When the threshold is reached, System Logger begins offloading data from the CF structure to the offload data sets.

The default HIGHOFFLOAD value is 80%. If you omit the HIGHOFFLOAD parameter or specify HIGHOFFLOAD(0) the log stream will be defined with the default value.

The HIGHOFFLOAD value specified must be greater than the LOWOFFLOAD value.

For more information on offload thresholds and the offload process, see 2.6, "Offload processing" on page 57.

Note 

For the recommended HIGHOFFLOAD values for various System Logger applications, refer to the chapter in this book that describes the subsystem you are interested in or consult the application's manuals.

LOWOFFLOAD

Specifies the point at which System Logger stops offloading CF log data to offload data sets. When offload processing has offloaded enough data that only this percentage of the elements in the CF portion of the log stream are being used, offload processing will end.

If you specify LOWOFFLOAD(0), which is the default, or omit the LOWOFFLOAD parameter, System Logger uses the 0% usage mark as the low offload threshold.

The value specified for LOWOFFLOAD must be less than the HIGHOFFLOAD value.

Note 

For the recommended LOWOFFLOAD values for various System Logger applications, refer to the chapter in this book that describes the subsystem you are interested in or consult the application's manuals.

MODEL

Specifies whether the log stream being defined is a model, exclusively for use with the LIKE parameter to set up general characteristics for other log stream definitions.

If you specify MODEL(NO), which is the default, the log stream being defined is not a model log stream. Systems can connect to and use this log stream. The log stream can also be specified on the LIKE parameter, but is not exclusively for use as a model.

If you specify MODEL(YES), the log stream being defined is only a model log stream. It can be specified only as a model for other log stream definitions on the LIKE parameter.

Programs cannot connect to a log stream name that is defined as a model (MODEL(YES)) using an IXGCONN request.

No offload data sets are allocated for a model log stream.

The attributes of a model log stream are syntax checked at the time of the request, but not verified until another log stream references the model log stream on the LIKE parameter.

Model log streams can be thought of as a template for future log stream definitions. They are defined in the System Logger policy and will show up in output from a "D LOGGER,L" command or an IXCMIAPU LIST LOGSTREAM report. Some applications (such as CICS) use model log streams to allow installations to set up a log stream definition containing common characteristics that will be used as a basis for additional log streams that will be defined dynamically by the application (using the IXGINVNT service).

See Example 2-2 on page 40 for an example of how a model log stream is used.

Example 2-2: MODEL and LIKE usage

The following statements will define a model log stream:

   DATA TYPE(LOGR) REPORT(YES)
    DEFINE LOGSTREAM NAME(CFSTRM1.MODEL) MODEL(YES)
       STG_DUPLEX(NO) LS_SIZE(1000) HLQ(LOGHLQ) DIAG(YES)
       HIGHOFFLOAD(70) LOWOFFLOAD(10)
       AUTODELETE(YES)

with these attributes:

   LOGSTREAM NAME(CFSTRM1.MODEL) STRUCTNAME() LS_DATACLAS()
             LS_MGMTCLAS() LS_STORCLAS() HLQ(LOGHLQ) MODEL(YES) LS_SIZE(1000)
             STG_MGMTCLAS() STG_STORCLAS() STG_DATACLAS() STG_SIZE(0)
             LOWOFFLOAD(10) HIGHOFFLOAD(70) STG_DUPLEX(NO) DUPLEXMODE()
             RMNAME() DESCRIPTION() RETPD(0) AUTODELETE(NO) OFFLOADRECALL(YES)
             DASDONLY(NO) DIAG(YES) LOGGERDUPLEX(UNCOND) EHLQ(NO_EHLQ)

Here is a LIKE log stream definition based on the above model:

   DATA TYPE(LOGR) REPORT(YES)
    DEFINE LOGSTREAM NAME(CFSTRM1.PROD) LIKE(CFSTRM1.MODEL)
       STRUCTNAME(LOG_TEST_001)
       STG_DUPLEX(YES) DUPLEXMODE(UNCOND) LOGGERDUPLEX(COND)
       EHLQ(LOG.EHLQ) OFFLOADRECALL(NO)
       LOWOFFLOAD(20)

that is defined with the following attributes:

   LOGSTREAM NAME(CFSTRM1.PROD) STRUCTNAME(LOG_TEST_001) LS_DATACLAS()
             LS_MGMTCLAS() LS_STORCLAS() HLQ(NO_HLQ) MODEL(NO) LS_SIZE(1000)
             STG_MGMTCLAS() STG_STORCLAS() STG_DATACLAS() STG_SIZE(0)
             LOWOFFLOAD(20) HIGHOFFLOAD(70) STG_DUPLEX(YES) DUPLEXMODE(UNCOND)
             RMNAME() DESCRIPTION() RETPD(0) AUTODELETE(NO) OFFLOADRECALL(NO)
             DASDONLY(NO) DIAG(YES) LOGGERDUPLEX(COND) EHLQ(LOG.EHLQ)

Notice that while we inherited values from the MODEL log stream (HIGHOFFLOAD, LS_SIZE, etc.), the values specified in the LIKE log stream definition will override them (EHLQ, STG_DUPLEX, LOWOFFLOAD, etc.).

LIKE

Specifies the name of another log stream defined in the System Logger policy. The characteristics of this log stream (such as storage class, management class, high level qualifier and so on) will be copied for the log stream you are defining if those characteristics are not explicitly coded on the referencing log stream. The parameters explicitly coded on this request, however, override the characteristics of the MODEL log stream specified on the LIKE parameter.

See Example 2-2 on page 40 for an example of how a model log stream is used.

DIAG

Specifies whether or not dumps or additional diagnostics should be provided by System Logger for certain conditions.

If you specify DIAG(NO), which is the default, this indicates that no special System Logger diagnostic activity is requested for this log stream, regardless of the DIAG specifications on the IXGCONN, IXGDELET and IXGBRWSE requests.

If you specify DIAG(YES), this indicates that special System Logger diagnostic activity is allowed for this log stream and can be obtained when the appropriate specifications are provided on the IXGCONN, IXGDELET, or IXGBRWSE requests.

We recommend that you specify DIAG(YES), as the additional diagnostics it provides give IBM service more information to debug problems. Note that specifying it will cause a slight performance impact when certain problems occur, because System Logger will then collect additional information; the rest of the time, specifying DIAG(YES) has no effect on performance.

OFFLOADRECALL

Indicates whether offload processing is to recall the current offload data set if it has been migrated.

If you specify OFFLOADRECALL(YES), offload processing will attempt to recall the current offload data set.

If you specify OFFLOADRECALL(NO), offload processing will skip recalling the current offload data set and allocate a new one.

If you use this option, you need to also consider how quickly your offload data sets get migrated, and how likely they are to be recalled. If they are likely to be migrated after just one offload, you should attempt to set LS_SIZE to be roughly equivalent to the amount of data that gets moved in one offload. There is no point allocating a 1 GB offload data set, and then migrating it after you only write 50 MB of data into it. Similarly, if the data set is likely to be recalled (for an IXGBRWSE, for example), there is no point having a 1 GB data set clogging up your DASD if it only contains 50 MB of data. If the offload data sets will only typically contain 50 MB of data, then size them to be slightly larger than that.

How your log stream is used is a factor when deciding what value to code for OFFLOADRECALL. In general, if you can't afford the performance hit of having to wait for an offload data set to be recalled, we suggest you code OFFLOADRECALL(NO).
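Pulling the parameters in this section together, a complete CF-Structure based log stream definition submitted through IXCMIAPU might look like the following sketch; the log stream name and all of the values shown are illustrative assumptions, not recommendations for any particular exploiter:

    DATA TYPE(LOGR) REPORT(YES)
    DEFINE LOGSTREAM NAME(TEST.CF.STREAM)
           STRUCTNAME(LOG_TEST_001)
           STG_DUPLEX(YES) DUPLEXMODE(COND) LOGGERDUPLEX(COND)
           STG_SIZE(8192) LS_SIZE(8192)
           HIGHOFFLOAD(80) LOWOFFLOAD(0)
           RETPD(0) AUTODELETE(NO)
           OFFLOADRECALL(NO) DIAG(YES)
           HLQ(IXGLOGR)

Always check the chapters for your specific exploiter before settling on values such as HIGHOFFLOAD, LOWOFFLOAD, RETPD, and AUTODELETE.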

Duplexing log data for CF-Structure based log streams

Because the availability of log data is critical to many applications to allow successful recovery from failures, System Logger provides considerable functionality to ensure that log data managed by it will always be available if needed. When an application writes data to the log stream, it is duplexed while in interim storage until the data is "hardened" by offloading to the offload data sets. Duplexing ensures that the data is still available even if the CF containing the log stream were to fail.

For CF-Structure based log streams, System Logger always writes log data to the CF structure first. It will then duplex the data to some other medium (either another CF structure, staging data sets, or local buffers) depending on the values of several parameters in your log stream's definition as shown in Figure 2-9 on page 43. The following cases correspond to those in Figure 2-9 on page 43:

Figure 2-9: System Logger duplexing combinations

System-Managed Duplexing is not being used:

  • Case 1: There is a failure-dependent connection between the connecting system and the structure, or, the CF containing the structure is volatile.

  • Case 2: There is a failure-independent connection between the connecting system and the structure, and, the CF containing the structure is non-volatile.

Note 

For a description of a failure dependent connection, refer to "System Logger recovery" on page 65.

System-Managed Duplexing is being used:

There are two instances of the structure.

  • Case 3: There is a failure-dependent connection between the connecting system and the composite structure view[1], or, the structure CF state is volatile.

  • Case 4: There is a failure-independent connection between the connecting system and the composite structure view, and a failure dependent relationship between the two structure instances, and the structure CF state is non-volatile.

  • Case 5: There is a failure-independent connection between the connecting system and the composite structure view, and a failure independent relationship between the two structure instances, and the structure CF state is non-volatile.

It is possible that System Logger could start out duplexing to one medium (say, local buffers) and some event (CF becomes volatile, losing a CF, update of the above parameters in the log stream definition, and so on) could cause a transition from one duplexing medium to another. For example, a log stream defined with STG_DUPLEX(YES), DUPLEXMODE(COND) and LOGGERDUPLEX(COND) to a structure falling under the category of case 5 could lose connectivity to one of the CFs and fall back to a simplex mode case 2 (assuming the remaining CF/system view is failure independent), and System Logger would begin duplexing to its local buffers. For more information on how CF state changes can affect System Logger duplexing, see "Structure state changes" on page 71.

You can view the storage medium(s) System Logger is duplexing to by entering the "D LOGGER,C,LSN=logstreamname,DETAIL" system command. See Example 2-3 for sample output. Note that the location(s) listed on the DUPLEXING: line are where the copy of the data is held. For example, in Example 2-3, the OPERLOG structure is being duplexed using System-Managed Duplexing, however the CFs are not failure isolated from System Logger, so System Logger actually has three copies of the data in this case—one in the CF structure (not shown in the command), a second in the duplex copy of that structure (shown by the keyword STRUCTURE), and a third copy in the staging data set (shown by the keyword STAGING DATA SET).

Example 2-3: Example output of display command showing duplex mediums

 IXG601I   14.20.30  LOGGER DISPLAY 555
 CONNECTION INFORMATION BY LOGSTREAM FOR SYSTEM #@$3
 LOGSTREAM                  STRUCTURE        #CONN  STATUS
 ---------                  ---------        ------ ------
 SYSPLEX.OPERLOG            LOG_TEST_001     000002 IN USE
   DUPLEXING: STRUCTURE, STAGING DATA SET
     JOBNAME: SICA      ASID: 0082
       R/W CONN: 000001 / 000000
       RES MGR./CONNECTED: *NONE*  / NO
       IMPORT CONNECT: NO
     JOBNAME: CONSOLE   ASID: 000A
       R/W CONN: 000000 / 000001
       RES MGR./CONNECTED: *NONE*  / NO
       IMPORT CONNECT: NO

2.5.2 DASD-only log streams

DASD-only log streams are single system in scope; that is, only one system in the sysplex can be connected at a given time, although multiple applications from the same system can be connected simultaneously. These log streams use local buffers in System Logger's dataspace for interim storage. Data is then offloaded to offload data sets for longer term storage. Figure 2-10 shows how a DASD-only log stream spans the two levels of storage.

Figure 2-10: DASD-only log stream spanning two levels of storage

When an application writes a log block to a DASD-only log stream, System Logger first writes the data to local buffers; the log block is then always duplexed to staging data sets. When the staging data set subsequently reaches the installation-defined high threshold, System Logger moves the log blocks from local buffers to offload data sets and deletes those log blocks from the staging data set. From an application point of view, the actual location of the log data in the log stream is transparent.

There are many considerations you must take into account before and after you have decided to use DASD-only log streams. If you decide your System Logger application should use them, then you should also have a basic understanding of how they work. We'll spend the rest of this section addressing these points.

If you have determined that all your log streams should use CF-Structure log streams, you should skip to 2.5.3, "Updating log stream definitions" on page 54 now.

When should my application use DASD-only log streams?

Earlier we discussed some factors to be taken into consideration when deciding what type of log stream your application should use:

  • The location and concurrent activity of writers and readers to a log stream's log data.

  • The volume (data flow) of log stream data.

In addition to these, you should consider any advice given by the application; for example, applications that need multiple systems to connect to the log stream, such as APPC/MVS, may require that the log stream is kept in a CF. In general though, DASD-only log streams can be used when:

  • There is no requirement to have more than one system in the sysplex accessing the log stream at the same time.

  • There are lower volumes of log data being written to the log stream.

Note 

A DASD-only log stream is single system in scope. This means that even though there can be multiple connections to it from a single system in the sysplex, there cannot be multiple systems connected to the log stream at the same time.

Implementing DASD-only log streams

There are many parameters to consider when defining a log stream. We cover the specifics of each in this section; note that some of this material repeats information from the CF-Structure based log stream section; any differences are indicated in Table 2-3 on page 22.

Defining DASD-only log streams

DASD-only log streams are also defined using the IXCMIAPU utility or the IXGINVNT service. Let's take a look at the different parameters and how they impact System Logger operations.
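
Before looking at each parameter individually, the following is a minimal sketch of the IXCMIAPU control statements that define a DASD-only log stream. The log stream name, sizes, and high level qualifier shown here are illustrative assumptions only; substitute values appropriate for your installation and application:

  DATA TYPE(LOGR) REPORT(YES)
   DEFINE LOGSTREAM NAME(DOSTRM1.TEST)
      DASDONLY(YES) MAXBUFSIZE(65532)
      STG_SIZE(5000) LS_SIZE(4000)
      HIGHOFFLOAD(80) LOWOFFLOAD(0)
      HLQ(LOGHLQ) RETPD(0) AUTODELETE(NO)

These statements would be supplied in the SYSIN of an IXCMIAPU job, as in the other examples in this chapter.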

NAME

Specifies the name of the log stream that you want to define. The name can be made up of one or more segments separated by periods, up to the maximum length of 26 characters.

NAME is a required parameter with no default. Besides being the name of the log stream, it is used as part of the offload data set and staging data set names:

  • offload data set example: <hlq>.logstreamname.<seq#>

  • staging data set example: <hlq>.logstreamname.<sysplex_name>

Note that if the log stream name combined with the HLQ is longer than 33 characters, the data component of the offload data sets (and possibly the staging data sets) may contain a system-generated low level qualifier, for example:

    IXGLOGR.A2345678.B2345678.C2345678.A0000000 (Cluster)
    IXGLOGR.A2345678.B2345678.C2345678.IE8I36RM (Data)

RMNAME

Specifies the name of the resource manager program associated with the log stream. RMNAME must be 8 alphanumeric or national ($,#,or @) characters, padded on the right with blanks if necessary. You must define RMNAME in the System Logger policy before the resource manager can connect to the log stream. See the System Logger chapter in z/OS MVS Assembler Services Guide, SA22-7605, for information on writing a resource manager program to back up a log stream.

DESCRIPTION

Specifies user-defined data describing the log stream. DESCRIPTION must be 16 alphanumeric or national ($,#,@) characters, underscore (_) or period (.), padded on the right with blanks if necessary.

DASDONLY

Specifies whether the log stream being defined is a CF or a DASD-only log stream. This is an optional parameter with the default being DASDONLY(NO).

Since we are reviewing DASD-only log streams in this section, DASDONLY(YES) would be used to indicate that we do want a DASD-only log stream.

MAXBUFSIZE

Specifies the size, in bytes, of the largest log block that can be written to the DASD-only log stream being defined in this request.

The value for MAXBUFSIZE must be between 1 and 65,532 bytes. The default is 65,532 bytes.

This parameter is only valid with DASDONLY(YES).

There are some additional considerations for MAXBUFSIZE if you plan on migrating the log stream to a CF structure at some point. See "Migrating a log stream from DASD-only to CF-Structure based" on page 56.

Important: 

Remember, STG_xx parameters only apply to staging data sets; not to offload data sets. LS_xx parameters are used to associate SMS constructs with offload data sets.

STG_DATACLAS

Specifies the name of the SMS data class that will be used when allocating the DASD staging data sets for this log stream.

If you specify STG_DATACLAS(NO_STG_DATACLAS), which is the default, the data class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the STG_DATACLAS parameter, including NO_STG_DATACLAS, always overrides one specified on a model log stream used on the LIKE parameter.

Remember to use SHAREOPTIONS(3,3) when defining your SMS data class!

For information on defining SMS data classes, see Chapter 7 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.


Important: 

Do not change the Control Interval Size attributes for staging data sets to be anything other than 4096. System Logger requires that the Control Interval Size for staging data sets be set to 4096. If staging data sets are defined with characteristics that result in a Control Interval Size other than 4096, System Logger will not use those data sets to keep a duplicate copy of log data. Operations involving staging data sets defined with a Control Interval Size other than 4096 will fail to complete successfully. The DASD-only log stream will be unusable until this is corrected.

STG_MGMTCLAS

Specifies the name of the SMS management class used when allocating the staging data set for this log stream.

If you specify STG_MGMTCLAS(NO_STG_MGMTCLAS), which is the default, the management class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the STG_MGMTCLAS parameter, including NO_STG_MGMTCLAS, always overrides one specified on a model log stream used on the LIKE parameter.

For information on defining SMS management classes, see Chapter 5 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

STG_STORCLAS

Specifies the name of the SMS storage class used when allocating the DASD staging data set for this log stream.

If you specify STG_STORCLAS(NO_STG_STORCLAS), which is the default, the storage class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the STG_STORCLAS parameter, including NO_STG_STORCLAS, always overrides one specified on a model log stream used on the LIKE parameter.

For information on defining SMS storage classes, see Chapter 6 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

STG_SIZE

Specifies the size, in 4 KB blocks, of the DASD staging data set for the log stream being defined.

When specified, this value will be used as the space allocation quantity on the log stream staging data set allocation request. It overrides any size characteristics specified in an explicitly specified data class (via STG_DATACLAS) or a data class assigned via a DFSMS ACS routine.

Note that if both the STG_DATACLAS and STG_SIZE are specified, the value for STG_SIZE overrides the space allocation attributes for the data class specified on the STG_DATACLAS value.

If you omit STG_SIZE for a DASD-only log stream, System Logger does one of the following, in the order listed, to allocate space for staging data sets:

  • Uses the STG_SIZE of the log stream specified on the LIKE parameter, if specified.

  • Uses the size defined in the SMS data class for the staging data sets.

  • Uses dynamic allocation rules (as defined in the ALLOCxx member of Parmlib) for allocating data sets if the SMS ACS routines do not assign a data class.

Note 

Sizing DASD staging data sets incorrectly can cause System Logger performance issues. For more information on using the STG_SIZE parameter, see "Sizing interim storage" on page 265.
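
As a purely illustrative aid to the arithmetic (not a sizing recommendation), STG_SIZE is expressed in 4 KB blocks, so STG_SIZE(2500) requests a staging data set of approximately 2500 x 4 KB = 10 MB, and STG_SIZE(25600) requests approximately 100 MB. On a definition, the parameter might be coded as in the following extract (the log stream name is assumed for illustration):

   DEFINE LOGSTREAM NAME(DOSTRM1.TEST) DASDONLY(YES)
      STG_SIZE(2500)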

LS_DATACLAS

Specifies the name of the SMS data class that will be used when allocating offload data sets.

If you specify LS_DATACLAS(NO_LS_DATACLAS), which is the default, the data class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the LS_DATACLAS parameter, including NO_LS_DATACLAS, always overrides one specified on a model log stream used on the LIKE parameter.

System Logger uses VSAM linear data sets for offload data sets. They require a control interval size (CISIZE) between 4096 and 32768 bytes, expressed in increments of 4096; the default is 4096.

Remember to specify SHAREOPTIONS(3,3) when defining your SMS data class.

For information on defining SMS data classes, see Chapter 7 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

Recommendation: 

If you want to ensure optimal I/O performance, we recommend you use a CISIZE of 24576 bytes. You can specify a DFSMS data class that is defined with a control interval size of 24576 on the LS_DATACLAS parameter of a log stream definition to have offload data sets allocated with control interval sizes of 24576.
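
For example, if your storage administrator has defined an SMS data class with a 24576-byte control interval size (here assumed to be named LOGR24K, purely for illustration), you could request it on the log stream definition as in the following extract:

   DEFINE LOGSTREAM NAME(DOSTRM1.TEST) DASDONLY(YES)
      LS_DATACLAS(LOGR24K)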

LS_MGMTCLAS

Specifies the name of the SMS management class to be used when allocating the offload data sets.

If you specify LS_MGMTCLAS(NO_LS_MGMTCLAS), which is the default, the management class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the LS_MGMTCLAS parameter, including NO_LS_MGMTCLAS, always overrides one specified on a model log stream used on the LIKE parameter.

For information on defining SMS management classes, see Chapter 5 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

LS_STORCLAS

Specifies the name of the SMS storage class to be used when allocating the offload data sets.

If you specify LS_STORCLAS(NO_LS_STORCLAS), which is the default, the storage class is defined by standard SMS processing. See z/OS DFSMS: Using Data Sets, SC26-7410 for more information about SMS. An SMS value specified on the LS_STORCLAS parameter, including NO_LS_STORCLAS, always overrides one specified on a model log stream used on the LIKE parameter.

For information on defining SMS storage classes, see Chapter 6 in z/OS DFSMSdfp Storage Administration Reference, SC26-7402.

LS_SIZE

Specifies the size, in 4 KB blocks, of the log stream offload DASD data sets for the log stream being defined.

When specified, this value will be used to specify a space allocation quantity on the log stream offload data set allocation request. It will override any size characteristics specified in an explicitly specified data class (via LS_DATACLAS) or a data class that gets assigned via a DFSMS ACS routine.

The smallest valid LS_SIZE value is 16 (64 KB), however in practice you would be unlikely to want such a small offload data set.

The largest size that a log stream offload data set can be defined for System Logger use is slightly less than 2 GB. If the data set size is too large, System Logger will automatically attempt to reallocate it smaller than 2 GB. The largest valid LS_SIZE value is 524287.

If you omit LS_SIZE, System Logger does one of the following, in the order listed, to allocate space for offload data sets:

  • Uses the LS_SIZE of the log stream specified on the LIKE parameter, if specified.

  • Uses the size defined in the SMS data class for the log stream offload data sets.

  • Uses dynamic allocation rules as specified in the ALLOCxx member of SYS1.PARMLIB for allocating data sets, if SMS does not assign a data class. Unless you specify otherwise in ALLOCxx, the default allocation amount is two tracks—far too small for most System Logger applications.

We recommend you explicitly specify an LS_SIZE for each log stream even if you are using an SMS data class, as the value specified in the data class may not be suited to your log stream.

Note 

Sizing offload data sets incorrectly can cause System Logger performance issues. For more information on using the LS_SIZE parameter, see "Sizing offload data sets" on page 264.
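
Like STG_SIZE, LS_SIZE is expressed in 4 KB blocks. As a purely illustrative example (not a recommendation), LS_SIZE(51200) requests offload data sets of approximately 51200 x 4 KB = 200 MB, while the maximum value of 524287 corresponds to just under 2 GB. On a definition, the parameter might be coded as in the following extract (the log stream name is assumed for illustration):

   DEFINE LOGSTREAM NAME(DOSTRM1.TEST) DASDONLY(YES)
      LS_SIZE(51200)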

AUTODELETE

Specifies when System Logger can physically delete log data.

If you specify AUTODELETE(NO), which is the default, System Logger physically deletes an offload data set only when both of the following are true:

  • All the log data in the data set has been marked for deletion by a System Logger application using the IXGDELET service or by an archiving procedure like CICS/VR for CICS log streams or IFBEREPS, which is shipped in SYS1.SAMPLIB, for LOGREC log streams.

  • The retention period for all the data in the offload data set has expired.

If you specify AUTODELETE(YES), System Logger automatically deletes log data whenever data is either marked for deletion (using the IXGDELET service) or the retention period for all the log data in a data set has expired.

Be careful when using AUTODELETE(YES) if the System Logger application manages log data deletion using the IXGDELET service. With AUTODELETE(YES), System Logger may delete data that the application expects to be accessible. If you specify AUTODELETE=YES with RETPD=0, data is eligible for deletion as soon as it is written to the log stream.

For more information on AUTODELETE and deleting log data, see 2.7, "Deleting log data" on page 62.

Important: 

Consult your System Logger application recommendations before using either the AUTODELETE or RETPD parameters.

RETPD

Specifies the retention period, in days, for log data in the log stream. The retention period begins when data is written to the log stream, not the offload data set. Once the retention period for an entire offload data set has expired, the data set is eligible for physical deletion.

The point at which System Logger physically deletes the data set depends on what you have specified on the AUTODELETE parameter and RETPD parameters:

  • RETPD=0 - System Logger physically deletes expired offload data sets whenever offload processing is invoked, even if no log data is actually moved to any offload data sets.

  • RETPD>0 - System Logger physically deletes expired offload data sets only when an offload data set fills and it switches to a new one for a log stream.

System Logger will not process a retention period or delete data for log streams that are not connected and being written to by an application.

For example, a RETPD=1 would specify that any data written today would be eligible for deletion tomorrow. RETPD=10 would indicate data written today is eligible for deletion in 10 days.

The value specified for RETPD must be between 0 and 65,536.

For more information on RETPD and deleting log data, see 2.7, "Deleting log data" on page 62.
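
As an illustration of how RETPD and AUTODELETE are often combined (always check your application's recommendations first), the following extract of a definition keeps log data for at least seven days and leaves deletion under the application's control, so data is physically deleted only after the application has marked it for deletion and the retention period has expired:

   DEFINE LOGSTREAM NAME(DOSTRM1.TEST) DASDONLY(YES)
      RETPD(7) AUTODELETE(NO)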

HLQ

Specifies the high level qualifier for both the offload and staging data sets.

If you do not specify a high level qualifier, or if you specify HLQ(NO_HLQ), the log stream will have a high level qualifier of IXGLOGR (the default value). If you specified the LIKE parameter, the log stream will have the high level qualifier of the log stream specified on the LIKE parameter. The value specified for HLQ overrides the high level qualifier for the log stream specified on the LIKE parameter.

HLQ and EHLQ are mutually exclusive and cannot be specified for the same log stream definition.

EHLQ

Specifies the extended high level qualifier for both the offload and staging data sets. The EHLQ parameter was introduced in z/OS 1.4, and log streams specifying this parameter can only be connected to by systems running that level of z/OS or higher.

EHLQ and HLQ are mutually exclusive and cannot be specified for the same log stream definition.

When the EHLQ parameter is not explicitly specified on the request, the resulting high level qualifier to be used for the log stream data sets will be based on whether the HLQ or LIKE parameters are specified. If the HLQ parameter is specified, then that value will be used for the offload data sets. When no high level qualifier is explicitly specified on the DEFINE LOGSTREAM request, but the LIKE parameter is specified, then the high level qualifier value being used in the referenced log stream will be used for the newly defined log stream. If the EHLQ, HLQ, and LIKE parameters are not specified, then the default value "IXGLOGR" will be used.

When EHLQ=NO_EHLQ is specified or defaulted to, the resulting high level qualifier will be determined by the HLQ value from the LIKE log stream or using a default value.

See Example 2-4 for usage examples.

CDS requirement: 

The active primary TYPE=LOGR CDS must be formatted at an HBB7705 or higher level in order to specify the EHLQ keyword. Otherwise, the request will fail. See "LOGR CDS format levels" on page 14 for more information.

Example 2-4: Use of EHLQ

start example
 1) Assume the OPERLOG log stream (NAME=SYSPLEX.OPERLOG) is defined with
    "EHLQ=MY.OWN.PREFIX" specified. The log stream data set would be allocated as:

       MY.OWN.PREFIX.SYSPLEX.OPERLOG.suffix

    where the suffix is up to an eight-character field provided by System Logger.

 2) Assume the OPERLOG log stream (NAME=SYSPLEX.OPERLOG) is attempted to be defined with
    "EHLQ=MY.PREFIX.IS.TOO.LONG". Even though the EHLQ value is less than the maximum
    33 characters, an offload data set cannot be allocated with an overall name greater
    than 44 characters. In this example, the define request would fail:

       MY.PREFIX.IS.TOO.LONG.SYSPLEX.OPERLOG.suffix

    The above data set name is not valid because it is too long.
end example

HIGHOFFLOAD

Specifies the point at which offload processing should start for the log stream. This is specified in terms of percent full for the staging data set. For example, if HIGHOFFLOAD is set to 80, offload processing will start when the staging data set reaches 80% full.

The default HIGHOFFLOAD value is 80%. If you omit the HIGHOFFLOAD parameter or specify HIGHOFFLOAD(0), the log stream will be defined with the default value.

The HIGHOFFLOAD value specified must be greater than the LOWOFFLOAD value.

For more information on offload thresholds and the offload process, see 2.6, "Offload processing" on page 57.

Note 

For System Logger application recommended HIGHOFFLOAD values, see the chapter in this book that describes the subsystem you are interested in, or consult the application's manuals.

LOWOFFLOAD

Specifies the point at which offload processing will end. This is specified in terms of percent full for the staging data set. For example, if LOWOFFLOAD is set to 10, offload processing will end when enough data has been moved so that the staging data set is now only 10% full.

If you specify LOWOFFLOAD(0), which is the default, or omit the LOWOFFLOAD parameter, System Logger continues offloading until the staging data set is empty.

The value specified for LOWOFFLOAD must be less than the HIGHOFFLOAD value.

For more information on offload thresholds and the offload process, see 2.6, "Offload processing" on page 57.

Note 

For System Logger application recommended LOWOFFLOAD values, see the chapter in this book that describes the subsystem you are interested in or consult the application's manuals.
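
To put the two thresholds together with a purely illustrative example: for a DASD-only log stream defined with STG_SIZE(5000) (a staging data set of roughly 20 MB), HIGHOFFLOAD(80), and LOWOFFLOAD(0), offload processing starts when about 16 MB of log data has accumulated in the staging data set and continues until the staging data set is empty. The relevant extract of such a definition would look like this (the log stream name and sizes are assumptions for illustration):

   DEFINE LOGSTREAM NAME(DOSTRM1.TEST) DASDONLY(YES)
      STG_SIZE(5000)
      HIGHOFFLOAD(80) LOWOFFLOAD(0)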

MODEL

Specifies whether the log stream being defined is a model, exclusively for use with the LIKE parameter to set up general characteristics for other log stream definitions.

If you specify MODEL(NO), which is the default, the log stream being defined is not a model log stream. Systems can connect to and use this log stream. The log stream can also be specified on the LIKE parameter, but is not exclusively for use as a model.

If you specify MODEL(YES), the log stream being defined is only a model log stream. It can be specified only as a model for other log stream definitions on the LIKE parameter.

Programs cannot connect to a log stream name that is defined as a model (MODEL(YES)) using an IXGCONN request.

No log stream offload data sets are allocated on behalf of a model log stream.

The attributes of a model log stream are syntax checked at the time of the request, but not verified until another log stream references the model log stream on the LIKE parameter.

Model log streams can be thought of as templates for future log stream definitions. They are defined in the System Logger policy and will show up in output from a "D LOGGER,L" command or an IXCMIAPU LIST LOGSTREAM report. Some applications (such as CICS) use model log streams to allow installations to set up a common group of characteristics on which the application can base the log streams it defines using the LIKE parameter.

See Example 2-5 for usage information.

LIKE

Specifies the name of a log stream defined in the System Logger policy. The characteristics of this log stream (such as storage class, management class, high level qualifier and so on) will be copied for the log stream you are defining only if those characteristics are not explicitly coded on the referencing log stream. The parameters explicitly coded on this request, however, override the characteristics of the MODEL log stream specified on the LIKE parameter.

See Example 2-5 for usage information.

Example 2-5: MODEL and LIKE usage

start example
 The following JCL will define a model log stream:

 DATA TYPE(LOGR) REPORT(YES)
    DEFINE LOGSTREAM NAME(CFSTRM1.MODEL) MODEL(YES)
       STG_DUPLEX(NO) LS_SIZE(1000) HLQ(LOGHLQ) DIAG(YES)
       HIGHOFFLOAD(70) LOWOFFLOAD(10)
       AUTODELETE(YES)

 with these attributes:

 LOGSTREAM NAME(CFSTRM1.MODEL) STRUCTNAME() LS_DATACLAS()
           LS_MGMTCLAS() LS_STORCLAS() HLQ(LOGHLQ) MODEL(YES) LS_SIZE(1000)
           STG_MGMTCLAS() STG_STORCLAS() STG_DATACLAS() STG_SIZE(0)
           LOWOFFLOAD(10) HIGHOFFLOAD(70) STG_DUPLEX(NO) DUPLEXMODE()
           RMNAME() DESCRIPTION() RETPD(0) AUTODELETE(YES) OFFLOADRECALL(YES)
           DASDONLY(NO) DIAG(YES) LOGGERDUPLEX(UNCOND) EHLQ(NO_EHLQ)

 Here is a LIKE log stream definition based on the above model:

 DATA TYPE(LOGR) REPORT(YES)
    DEFINE LOGSTREAM NAME(CFSTRM1.PROD) LIKE(CFSTRM1.MODEL)
       STRUCTNAME(LOG_TEST_001)
       STG_DUPLEX(YES) DUPLEXMODE(UNCOND) LOGGERDUPLEX(COND)
       EHLQ(LOG.EHLQ) OFFLOADRECALL(NO)
       LOWOFFLOAD(20) AUTODELETE(NO)

 that is defined with the following attributes:

 LOGSTREAM NAME(CFSTRM1.PROD) STRUCTNAME(LOG_TEST_001) LS_DATACLAS()
           LS_MGMTCLAS() LS_STORCLAS() HLQ(NO_HLQ) MODEL(NO) LS_SIZE(1000)
           STG_MGMTCLAS() STG_STORCLAS() STG_DATACLAS() STG_SIZE(0)
           LOWOFFLOAD(20) HIGHOFFLOAD(70) STG_DUPLEX(YES) DUPLEXMODE(UNCOND)
           RMNAME() DESCRIPTION() RETPD(0) AUTODELETE(NO) OFFLOADRECALL(NO)
           DASDONLY(NO) DIAG(YES) LOGGERDUPLEX(COND) EHLQ(LOG.EHLQ)

 Notice that while we inherited values from the MODEL log stream (HIGHOFFLOAD, LS_SIZE, etc.), the values specified in the LIKE log stream definition override them (EHLQ, STG_DUPLEX, LOWOFFLOAD, etc.).
end example

DIAG

Specifies whether or not dumps or additional diagnostics should be provided by System Logger for certain conditions.

If you specify DIAG(NO), which is the default, this indicates that no special System Logger diagnostic activity is requested for this log stream, regardless of the DIAG specifications on the IXGCONN, IXGDELET and IXGBRWSE requests.

If you specify DIAG(YES), this indicates that special System Logger diagnostic activity is allowed for this log stream and can be obtained when the appropriate specifications are provided on the IXGCONN, IXGDELET, or IXGBRWSE requests.

We recommend that you specify DIAG(YES), as the additional diagnostics it provides give IBM service more information with which to debug problems. Note that specifying it causes a slight performance impact when certain problems occur, because System Logger then collects additional information; the rest of the time, DIAG(YES) has no effect on performance.

OFFLOADRECALL

Indicates whether offload processing is to recall the current offload data set if it has been migrated.

If you specify OFFLOADRECALL(YES), offload processing will attempt to recall the current offload data set.

If you specify OFFLOADRECALL(NO), offload processing will skip recalling the current offload data set and allocate a new one.

If you use this option, you also need to consider how quickly your offload data sets get migrated, and how likely they are to be recalled. If they are likely to be migrated after just one offload, you should attempt to set LS_SIZE to be roughly equivalent to the amount of data that gets moved in one offload. There is no point allocating a 1 GB offload data set, and then migrating it after you only write 50 MB of data into it. Similarly, if the data set is likely to be recalled (for an IXGBRWSE, for example), there is no point having a 1 GB data set clogging up your DASD if it only contains 50 MB of data. If the offload data sets will typically contain only 50 MB of data, then size them to be slightly larger than that.

How your log stream is used is a factor when deciding what value to code for OFFLOADRECALL. In general, if you cannot afford the performance hit of having to wait for an offload data set to be recalled, we suggest you code OFFLOADRECALL(NO).

Duplexing log data for DASD-only log streams

When an application writes data to the log stream, it is duplexed while in interim storage until the data is offloaded. Duplexing prevents log data from being lost as the result of a single point of failure.

For DASD-only log streams, System Logger uses local buffers in System Logger's dataspace for interim storage. It then duplexes the data simultaneously to staging data sets. Unlike CF-Structure based log streams, you have no control over this processing; System Logger always uses this configuration for DASD-only log streams.

2.5.3 Updating log stream definitions

Log stream definitions are updated using the IXCMIAPU utility or the IXGINVNT service. When and how you can update the log stream definition varies depending on the level of the LOGR CDS you are using.

If you are running with an HBB6603 or lower LOGR CDS, most log stream attributes cannot be updated if there is any type of log stream connection, whether it is active or "failed-persistent". Use of the log stream in question needs to be quiesced before submitting the update requests. The exception to this is the RETPD and AUTODELETE parameters, which can be updated at any time, with the values taking effect when the next offload data set switch occurs (that is, when System Logger allocates a new offload data set). For CF-Structure based log streams, changing the CF structure a log stream resides in is a cumbersome process that involves deleting the log stream and redefining it to the new structure.

If you are running with an HBB7705 or higher LOGR CDS, System Logger allows updates to be submitted at any time for offload and connection-based log stream attributes. The updates become "pending updates" and are shown as such in the IXCMIAPU LIST LOGSTREAM report. The updates are then committed at different times, depending on which parameter is being changed. Log streams without any connections will have their updates go into effect immediately. Table 2-4 on page 54 shows which parameters can be changed, and when the change takes effect.

Table 2-4: System Logger logstream attribute dynamic update commit outline

Logstream attribute    Last disconnect or       Switch to new       CF structure
                       first connect to         offload data set    rebuild
                       logstream in sysplex
---------------------  -----------------------  ------------------  ------------
RETPD                  yes                      yes                 no
AUTODELETE             yes                      yes                 no
LS_SIZE                yes                      yes                 yes
LS_DATACLAS            yes                      yes                 yes
LS_MGMTCLAS            yes                      yes                 yes
LS_STORCLAS            yes                      yes                 yes
OFFLOADRECALL          yes                      yes (1)             yes
LOWOFFLOAD             yes                      yes (1)             yes
HIGHOFFLOAD            yes                      yes (1)             yes
STG_SIZE               yes                      no                  yes
STG_DATACLAS           yes                      no                  yes
STG_MGMTCLAS           yes                      no                  yes
STG_STORCLAS           yes                      no                  yes
STG_DUPLEX (CF)        yes                      no                  no
DUPLEXMODE (CF)        yes                      no                  no
LOGGERDUPLEX (CF)      yes                      no                  no
MAXBUFSIZE (DO)        yes                      no                  n/a

Notes:

1 - These attributes are only committed during switch to new offload data set activity for DASD-only log streams. They are not committed at this point for CF-Structure based log streams.

yes - Indicates the attribute is committed during the activity listed in the column heading.

no - Indicates the attribute is not committed during the activity listed in the column heading.

(CF) - Indicates the attribute is only applicable to CF-Structure based log streams.

(DO) - Indicates the attribute is only applicable to DASD-only log streams.

Example 2-6 on page 55 contains a sample job that updates some log stream attributes.

Example 2-6: Example update job and output

start example
 Example UPDATE LOGSTREAM request:

 DATA TYPE(LOGR) REPORT(YES)
    UPDATE LOGSTREAM NAME(SYSPLEX.OPERLOG)
       DUPLEXMODE(UNCOND)
       OFFLOADRECALL(YES)

 Example IXCMIAPU report output showing pending updates:

 LOGSTREAM NAME(SYSPLEX.OPERLOG) STRUCTNAME(LOG_TEST_001) LS_DATACLAS(LOGR24K)
           LS_MGMTCLAS() LS_STORCLAS() HLQ(NO_HLQ) MODEL(NO) LS_SIZE(1024)
           STG_MGMTCLAS() STG_STORCLAS() STG_DATACLAS(LOGR4K) STG_SIZE(0)
           LOWOFFLOAD(0) HIGHOFFLOAD(80) STG_DUPLEX(YES) DUPLEXMODE(COND)
           RMNAME() DESCRIPTION() RETPD(2) AUTODELETE(YES) OFFLOADRECALL(NO)
           DASDONLY(NO) DIAG(NO) LOGGERDUPLEX(COND) EHLQ(IXGLOGR)
       PENDING CHANGES:
           OFFLOADRECALL(YES)
           DUPLEXMODE(UNCOND)
end example

This support (again, HBB7705 LOGR CDS required) also introduced the ability to dynamically change the CF structure the log stream was defined to. This can be done without first deleting the log stream. Note that the log stream cannot have any active or "failed persistent" connections for the update to be honored.
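
For example, assuming the log stream has no active or "failed persistent" connections and the target structure has already been defined to the System Logger policy with a DEFINE STRUCTURE request, the structure change could be requested with an update similar to the following (the structure name LOG_TEST_002 is an assumption used only for illustration):

  DATA TYPE(LOGR) REPORT(YES)
   UPDATE LOGSTREAM NAME(CFSTRM1.PROD)
      STRUCTNAME(LOG_TEST_002)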

Migrating a log stream from DASD-only to CF-Structure based

It is possible to migrate a DASD-only log stream to a CF-Structure based one without deleting and redefining it; after the migration, the log stream is a CF-Structure based log stream and operates as such. The migration is done using the IXCMIAPU utility to update the DASD-only log stream definition to refer to a CF structure by specifying the STRUCTNAME parameter, as shown in Example 2-7 on page 56.

Example 2-7: Migrating a DASD-only log stream

start example
 Assume we have a DASD-only log stream defined called DOSTRM1.PROD.

 We would then submit the following job to update the log stream to be defined in the
 CF structure called LOG_TEST_001 (assuming the request is valid; that is, the
 MAXBUFSIZE is acceptable and there are no connections):

 DATA TYPE(LOGR) REPORT(YES)
  UPDATE LOGSTREAM NAME(DOSTRM1.PROD)
     STRUCTNAME(LOG_TEST_001)

 We can then use the D LOGGER,L command to verify the log stream has been migrated:

 IXG601I   16.36.43  LOGGER DISPLAY 533
 INVENTORY INFORMATION BY LOGSTREAM
 LOGSTREAM                  STRUCTURE        #CONN  STATUS
 ---------                  ---------        ------ ------
 DOSTRM1.PROD               LOG_TEST_001     000000 AVAILABLE
end example

Note 

A DASD-only log stream automatically duplexes log data to DASD staging data sets. When you upgrade a DASD-only log stream to a CF-Structure based log stream, you will still get duplexing to staging data sets unless you specify otherwise. To get a CF-Structure based log stream that duplexes to local buffers rather than to DASD staging data sets, you must specify STG_DUPLEX(NO) on the UPDATE LOGSTREAM request that performs the upgrade.

For the update to be honored, all connections (active and "failed persistent") must be disconnected. Before updating the log stream definition, you should be familiar with all the concepts discussed in 2.5.1, "CF-Structure based log streams" on page 23. Also, the CF structure you intend to migrate the DASD-only log stream to must meet some requirements:

  • When defining DASD-only log streams, plan ahead for possible future upgrades by matching the MAXBUFSIZE value for the DASD-only log stream with the MAXBUFSIZE value of the structure you would assign it to on an upgrade request. The MAXBUFSIZE value on the DASD-only log stream definition must be the same size as, or smaller than, the MAXBUFSIZE value for the structure.

  • On the UPDATE request to upgrade a DASD-only log stream, specify a structure with a MAXBUFSIZE value that is as large as, or larger than, the DASD-only log stream MAXBUFSIZE value.

Restriction: 

You cannot issue an UPDATE request to reduce the MAXBUFSIZE value on a DASD-only log stream definition. You also cannot specify the MAXBUFSIZE parameter on an UPDATE request for a structure definition.

Note 

It is not possible to migrate from a CF-Structure based log stream to a DASD-only log stream without deleting and re-defining the log stream.

2.5.4 Deleting log stream and CF structure definitions

Both log streams and CF structures are deleted using either the IXCMIAPU utility or the IXGINVNT service. This section discusses how to use these tools to delete the System Logger construct, while 2.7, "Deleting log data" on page 62 will detail how log data within a log stream is disposed of.

Deleting log streams

The DELETE LOGSTREAM command requests that an entry for a log stream (complete with all associated staging and offload data sets) be deleted from the System Logger policy. The process is the same for both DASD-only and CF-Structure based log streams.

There is only one parameter on the DELETE LOGSTREAM request:

NAME

Specifies the name of the log stream you want to delete from the System Logger policy.

Example 2-8 shows a sample DELETE LOGSTREAM request.

Example 2-8: Sample log stream delete statements

start example
 DATA TYPE(LOGR) REPORT(YES)
  DELETE LOGSTREAM NAME(CFSTRM2.PROD)
end example

You cannot delete a log stream while there are any active connections to it. You also cannot delete a log stream that has a "failed-persistent" connection if System Logger is unable to resolve the connection. Remember that once you delete the log stream, all the data in any offload data sets associated with the log stream will also be gone. So, if you need that data, it is your responsibility to copy it from the log stream before you issue the DELETE.

Deleting CF structures

The DELETE STRUCTURE specification requests that an entry for a CF structure be deleted from the System Logger policy. Note that the structure will still be defined to the CFRM policy; this request only removes it from the System Logger policy.

There is only one parameter necessary on the DELETE STRUCTURE request:

NAME

Specifies the name of the CF structure you are deleting from the System Logger policy.

Example 2-9 shows a sample DELETE STRUCTURE request.

Example 2-9: Sample CF structure delete

start example
 DATA TYPE(LOGR) REPORT(YES)
  DELETE STRUCTURE NAME(SYSTEM_OPERLOG)
end example

You cannot delete a CF structure from the System Logger policy if there are any log stream definitions still referring to it.

[1] The composite structure view means that XES first determines the relationship between the two instances of a System-Managed Duplexed structure and provides the composite view to the connector:

Structure instance 1    Structure instance 2      Composite view to
                        (duplexed copy of 1)      connecting system
----------------------  ------------------------  ---------------------
failure independent     failure independent       failure independent
failure independent     failure dependent         failure independent
failure dependent       failure independent       failure independent
failure dependent       failure dependent         failure dependent


