9.2 Diagnostic information collection and analysis

Diagnostic information collection is fundamental to the problem determination and problem source identification (PD/PSI) process, especially when the nature of a problem is difficult to identify from its visible symptoms alone. Generally you can obtain first-hand information from the diagnostic log files that mature software products such as DB2 UDB and WebSphere Application Server provide. If the information in the diagnostic log files is not sufficient to find the root cause and resolve the problem, further information may be required for debugging purposes; for example, taking a trace to obtain details about the application execution path, or dumping stack traceback information to find the failing function.

Besides the diagnostic information that can be gathered through the mechanisms built into the products, you can also add user-generated diagnostic output to an application for a specific purpose. This kind of supplemental diagnostic information often helps to locate the problematic code, sometimes down to the failing line of source code. The technique is most commonly used during application development, but it is also widely adopted in production environments by well-instrumented applications.

Diagnostic information collection is most useful for identifying and resolving functional issues, but it can also help with performance investigations in many situations. For example, you can use diagnostic information to identify which step or steps of a long-running application contribute the most time, then focus on those steps to find the underlying reasons for the long execution time and make adjustments accordingly.
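As a simple illustration of the user-generated diagnostics and step timing described above, the following minimal Java sketch (the class name StepTiming and the method doStep are hypothetical placeholders) writes timestamped messages and elapsed times to System.out and System.err; as discussed in 9.2.2, WebSphere Application Server redirects these streams to the JVM logs.

import java.util.Date;

public class StepTiming {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        try {
            doStep();   // the application step under investigation
            long elapsed = System.currentTimeMillis() - start;
            // Progress and timing messages go to System.out (the JVM SystemOut.log).
            System.out.println(new Date() + " doStep completed in " + elapsed + " ms");
        } catch (Exception e) {
            // Failures and stack traces go to System.err (the JVM SystemErr.log).
            System.err.println(new Date() + " doStep failed: " + e);
            e.printStackTrace();
        }
    }

    private static void doStep() throws Exception {
        // Placeholder for the real application logic being timed.
        Thread.sleep(100);
    }
}

In a production application you would typically guard such output with a configuration switch so that it can be turned off when it is not needed.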

In this section we introduce the log files available for both DB2 UDB and WebSphere Application Server V5. We also introduce how to activate the trace facilities built into these products, and discuss some general methods for analyzing the information obtained in these ways, giving you clues from the diagnostic information to resolve the problem.

Note 

Operating system (OS) diagnostics also often help with problem determination and performance bottleneck identification. We do not cover how to analyze OS diagnostics in this section; for more information, refer to OS-specific documentation.

9.2.1 DB2 UDB V8 diagnostic information collection and analysis

DB2 UDB provides comprehensive information and a variety of methods to assist you in problem determination and root cause identification. You can obtain basic information about a problem from the return code that DB2 UDB reports for your SQL operations or administrative commands, acquire further details from the diagnostic log files, and then take the appropriate actions to resolve the problem.

You can start your investigation from the reported SQLCODE, as DB2 UDB messages are always returned in the form CCCnnnnnS. The CCC identifies the DB2 component returning the message, nnnnn is a four- or five-digit message number, and S is a severity indicator.

The following are some of the message component identifiers that you might encounter when using DB2 UDB:

  • SQL: Database Manager messages

  • DB2: Command Line Processor messages

  • CLI: Call Level Interface messages

  • DBA: Control Center and Database Administration Utility messages

  • SQJ: Embedded SQLJ in Java messages

The easiest way to get more details about a DB2 return code is to use the DB2 command line processor, as shown below:

 Kanaga:/home/db2inst1 >db2 "? SQL0289" 

In general, the output of this command includes the basic meaning of the return code, an explanation of why the code is issued, and the recommended user response if the code is not purely informational.

Besides investigating the return code, DB2 UDB also has a built-in first-failure data capture (FFDC) mechanism that is very helpful for troubleshooting. FFDC is a general term applied to the set of diagnostic information that DB2 captures automatically when errors occur. This information reduces the need to reproduce errors to get diagnostic information. The DIAGPATH parameter, specified in the database manager configuration, gives the fully qualified path to the FFDC storage directory. The DIAGLEVEL and NOTIFYLEVEL configuration parameters control the level of detail of the information you receive in the logs.

The information captured by FFDC includes the following.

Administration Notification Logs

When significant events occur, DB2 writes information to the administration notification log. The information is intended for use by database and system administrators. Many notification messages provide additional information to supplement the SQLCODE that is provided. The type of event and the level of detail of the information gathered are determined by the NOTIFYLEVEL configuration parameter.

db2diag.log

Diagnostic information about errors is recorded in this text log file. This information is used for problem determination and is intended for DB2 customer support. The level of detail of the information is determined by the DIAGLEVEL configuration parameter.

Dump files

For some error conditions, extra information is logged in external binary files named after the failing process ID. These files are intended for use by DB2 customer support.

Trap files

The database manager generates a trap file if a DB2 process receives a signal or exception (raised by the operating system as a result of a system event) that is recognized by the DB2 signal handler. A trap file is generated in the DB2 diagnostic directory.

Core files (UNIX only)

When DB2 terminates abnormally, the operating system generates a core file. The core file is a binary file that contains information similar to the DB2 trap files. Core files may also contain the entire memory image of the terminated process.

Messages files

Some DB2 utilities, such as BIND, LOAD, EXPORT, and IMPORT, provide an option to write a messages file to a user-defined location. These files report the progress, success, or failure of the utility that was run. You should take advantage of this option to ensure that you have as much information as possible available if a problem occurs.

Of the files above, the diagnostic log file db2diag.log is the most important and the one most often used for problem determination. Trap files, dump files, and core files are generally required by DB2 support staff in complicated troubleshooting situations. If you are interested in greater detail about these files, such as their naming conventions and how to generate them manually, refer to the DB2 Information Center, which is reachable at the URL below:

http://publib.boulder.ibm.com/infocenter/db2help/index.jsp

DB2 UDB V8 also provides a very useful tool named db2support to help you collect diagnostic information such as the files described above, configuration files, DB2 product version information, operating system information, and so forth. We discuss this utility in more detail later in this section.

In addition to the FFDC information listed above, DB2 UDB also provides a trace facility that allows you to obtain further details about the process runtime information for debugging purposes.

The following provides more information about the db2diag.log file, db2support utility and DB2 traces.

DB2 diagnostic log file db2diag.log

This file is commonly used in complex problem determination scenarios. You can find this file in the DB2 diagnostic directory, defined by the DIAGPATH parameter in the database manager configuration. By default the directory is defined as follows:

 UNIX: $HOME/sqllib/db2dump 

Here $HOME is the DB2 instance owner's home directory. For example, if your instance owner is db2inst1 and its home directory is /home/db2inst1, the default location for the db2diag.log should be /home/db2inst1/sqllib/db2dump.

 Windows: <INSTALL PATH>\SQLLIB\<DB2INSTANCE> 

Here INSTALL PATH represents the directory where DB2 is installed. For example, for the default instance named DB2 on Windows, the db2diag.log would be found at C:\Program Files\SQLLIB\DB2\db2diag.log.

The database manager configuration also controls how much information is logged to the db2diag.log through the use of the diagnostic level, or DIAGLEVEL parameter. Valid values can range from 0 to 4, as shown below:

  • 0 - No messages

  • 1 - Severe error messages only

  • 2 - All error messages

  • 3 - All error and warning messages (default)

  • 4 - All error, warning, informational, and internal diagnostic messages

The default diagnostic level of 3 is usually sufficient for problem determination. Setting it to 4 may cause performance issues due to the large amount of data recorded into the file. You should adjust the setting according to the problem you encounter, the amount of data you require to investigate, and the runtime environment where the problem occurs.

The following shows an example of db2diag.log content. For demonstration purposes, we try to create a table space on a non-existent device on the AIX platform, as shown below:

 db2 "create tablespace aaa managed by database using(device '/dev/raaa' 50M)" 

The following error messages are then written to db2diag.log, as shown in Example 9-1.

Example 9-1: DB2 diagnostic log file db2diag.log

start example
 2003-11-17-14.24.04.887990   Instance:db2inst1   Node:000
 PID:49060(db2agent (SAMPLE) 0)   TID:1   Appid:*LOCAL.db2inst1.0ADF97222248
 oper system services  sqloopenp Probe:20   Database:SAMPLE

 errno:
 0x2FF189D0 : 0x0000000D                                 ....

 PID:49060 TID:1 Node:000 Title: Path/Filename
 /dev/raaa

 2003-11-17-14.24.04.925625   Instance:db2inst1   Node:000
 PID:49060(db2agent (SAMPLE) 0)   TID:1   Appid:*LOCAL.db2inst1.0ADF97222248
 buffer pool services  sqlbDMSAddContainerRequest Probe:820   Database:SAMPLE

 DIA8701C Access denied for resource "", operating system return code was "".
 ZRC=0x840F0001

 2003-11-17-14.24.04.931528   Instance:db2inst1   Node:000
 PID:49060(db2agent (SAMPLE) 0)   TID:1   Appid:*LOCAL.db2inst1.0ADF97222248
 buffer pool services  sqlbDMSAddContainerRequest Probe:820   Database:SAMPLE

 Error acquiring container 0 (/dev/raaa) for tbsp 3.  Rc = 840F0001
 ...
end example

From the above diagnostic log snippet, it is not difficult to see that access to the device was denied, which is why the table space creation failed. The log also contains information about when the message was generated, the instance name, partition number, process name and ID, thread ID, application ID, the DB2 UDB component, function, and internal probe involved, the database name, and so forth.

Diagnostic information collection utility db2support

When it comes to collecting information for a DB2 problem, the most important DB2 utility to run is db2support. This utility is designed to automatically collect all related DB2 UDB and system diagnostic information available (including the information described in the previous pages). It also offers an optional interactive "Question and Answer" session to help collect information for problems that you may want additional assistance investigating. Using db2support avoids possible user errors, as you do not need to manually type commands such as get dbm cfg or list history all for <db_name>. Also, you do not require instructions on what commands to run or what files to collect, which makes information gathering for problem determination quicker.

This command has been included in the DB2 product on Linux, OS/2®, Windows, and UNIX since Version 7 Fix Pack 4, and it is continually being enhanced. Executing db2support -h displays the complete list of options that the utility supports. The following basic invocation is usually sufficient for collecting most of the information required to debug a problem (note that if the -c option is used, the utility establishes a connection to the database):

 db2support <output path> -d <database name> -c -g -s 

If further information is required, review the extra options that could help. The output is conveniently collected and stored in a compressed ZIP archive, db2support.zip, so it can be transferred and extracted easily on any system. Example 9-2 on page 346 shows an example of using the db2support utility.

Example 9-2: Using db2support to collect diagnostic information

start example
 Kanaga:/home/db2inst1 >mkdir log
 Kanaga:/home/db2inst1 >db2support ./log -d sample -c -g -s

              _______   D B 2 S u p p o r t    ______

 This program generates information about a DB2 server, including
 information about its configuration and system environment. The output
 of this program will be stored in a file named 'db2support.zip', located
 in the directory specified on the application command line. If you are
 experiencing problems with DB2, it may help if this program is run while
 the problem is occurring.
 ...
 Output file is "/home/db2inst1/log/db2support.zip"
 Time and date of this collection: "Mon Nov 17 14:58:06 PST 2003 PST"
 ...
 ...
 db2support is now complete.
 An archive file has been produced: "db2support.zip"
end example

Be aware that the output shown above is incomplete; details have been removed because it is shown only for demonstration purposes.

DB2 traces

If the first failure data captured is insufficient to diagnose the problem, and if the problem you are experiencing is recurring or reproducible, then taking DB2 traces sometimes allows you to capture additional information.

In general, activating a trace incurs additional processing and the amount of information gathered grows rapidly, so tracing has a global effect on the behavior of a DB2 instance. The degree of performance degradation depends on the type of problem and on how many resources are being used to gather the trace information. When you take a trace, capture only the error situation and avoid any other activities whenever possible, if the trace facility supports that. Also use the smallest scenario possible to reproduce the problem, as this reduces the performance impact.

DB2 UDB V8 provides different trace solutions for different problem situations. Some of the available DB2 traces are listed below:

  • db2trc

    The db2trc facility lets you trace DB2 internal events, record information about operations, dump the trace data to a file, and format the information into a readable form.

    Typically you will use the trace facility only when directed by DB2 Customer Support or by your technical support representative.

  • GUI trace

    This is helpful for problem determination with the GUI tools. For example, you can use the db2cctrc command to take a trace of the DB2 Control Center.

  • db2drdat

    This allows the user to capture the DRDA data stream exchanged between a DRDA Application Requestor (AR) and the DB2 UDB DRDA Application Server (AS). Although this tool is most often used for problem determination, by determining how many sends and receives are required to execute an application, it can also be used for performance tuning in a client/server environment.

  • CLI trace

    The DB2 CLI and ODBC drivers offer comprehensive tracing facilities. By default, these facilities are disabled and use no additional computing resources. When enabled, the trace facilities generate one or more text log files whenever an application accesses the appropriate driver. This trace facility is also helpful when using DB2 Legacy JDBC Drivers, as the CLI layer is also involved in that case.

  • asntrc

    This trace facility assists you in the troubleshooting of replication related problems. It logs program flow information from Capture, Apply, and Replication Alert Monitor programs.

  • JDBC Trace

    The DB2 JDBC drivers offer comprehensive tracing facilities, which have been continually enhanced, especially with the introduction of the DB2 Universal JDBC Driver. The activation methods for the DB2 Legacy JDBC Driver and the DB2 Universal JDBC Driver are different. For the DB2 Legacy JDBC Driver, because the CLI layer is involved, tracing can be activated by updating the CLI configuration, either with the UPDATE CLI CFG command or by editing the db2cli.ini file directly. Example 9-3 shows an example of using the JDBC trace with the legacy JDBC driver.

    Example 9-3: Using JDBC Trace for the Legacy JDBC Driver

    start example
     Kanaga:/home/db2inst1 > db2 update cli cfg for section common using JDBCTrace 1 JDBCTracePathName /home/db2inst1/jdbc/trc JDBCFlush 1
     Kanaga:/home/db2inst1 >db2 get cli cfg for section common

      Section: common
      -------------------------------------------------
        JDBCFlush=1
        JDBCTracePathName=/home/db2inst1/jdbc/trc
        JDBCTrace=1

     /* Running your java application which uses the legacy DB2 JDBC Driver, the sample application shown below is TbRead which is shipped with DB2 UDB V8, it could be found under $HOME/sqllib/samples/java directory. */

     Kanaga:/home/db2inst1/jdbc >java TbRead
     Kanaga:/home/db2inst1/jdbc >cd trc
     Kanaga:/home/db2inst1/jdbc/trc >ls -l
     total 1289
     -rw-r--r--   1 db2inst1 db2grp1       8269 Nov 19 09:30 44182_1_Finalizer.trc
     -rw-r--r--   1 db2inst1 db2grp1     648987 Nov 19 09:30 44182_1_main.trc
     Kanaga:/home/db2inst1/jdbc/trc >head -n 40 44182_1_main.trc
     jdbc.app.DB2Driver -> DB2Driver() (2003-11-19 09:30:28.295)
     | Loaded db2jdbc from java.library.path
     | DB2Driver: JDBC 2.0, BuildLevel: s031027
     jdbc.app.DB2Driver <- DB2Driver() [Time Elapsed = 0.0020] (2003-11-19 09:30:28.297)
     DB2Driver - connect(jdbc:db2:sample)
     jdbc.app.DB2Connection -> connect(sample, info, DB2Driver: JDBC 2.0 s031027, 0, false) (2003-11-19 09:30:28.3)
     | 10: conArg =
     | 10: connectionHandle = 1
     jdbc.app.DB2Connection <- connect() [Time Elapsed = 0.432] (2003-11-19 09:30:28.732)
     jdbc.app.DB2Connection -> setAutoCommit2(false) (2003-11-19 09:30:28.916)
     | 10: Connection handle = 1
     jdbc.app.DB2Connection <- setAutoCommit2() returns 0 [Time Elapsed = 0.039] (2003-11-19 09:30:28.955)
     jdbc.app.DB2Connection -> createStatement() (2003-11-19 09:30:28.955)
     | jdbc.app.DB2Statement -> DB2Statement(con, 1003, 1007) (2003-11-19 09:30:28.956)
     | | jdbc.app.DB2Statement -> checkResultSetType(1003, 1007) (2003-11-19 09:30:28.956)
     | | jdbc.app.DB2Statement <- checkResultSetType() [Time Elapsed = 0.0] (2003-11-19 09:30:28.956)
     | | 10: Peak statements = 1
     | | 10: Statement Handle = 1:1
     | jdbc.app.DB2Statement <- DB2Statement() [Time Elapsed = 0.0] (2003-11-19 09:30:28.956)
     jdbc.app.DB2Connection <- createStatement() [Time Elapsed = 0.0010] (2003-11-19 09:30:28.956)
     jdbc.app.DB2Statement -> executeQuery(SELECT deptnumb, location FROM org WHERE deptnumb < 25) (2003-11-19 09:30:29.024)
     | 10: Statement Handle = 1:1
     | jdbc.app.DB2Statement -> getStatementType(SELECT deptnumb, location FROM org WHERE deptnumb < 25) (2003-11-19 09:30:29.024)
     | jdbc.app.DB2Statement <- getStatementType() returns STMT_TYPE_QUERY (24) [Time Elapsed = 0.0010] (2003-11-19 09:30:29.025)
     | jdbc.app.DB2Statement -> execute2(SELECT deptnumb, location FROM org WHERE deptnumb < 25) (2003-11-19 09:30:29.025)
     | | 10: StatementHandle = 1:1
     | | 10: SQLExecDirect - returnCode = 0
     | | 10: rowCount = 0
     | jdbc.app.DB2Statement <- execute2() [Time Elapsed = 0.013] (2003-11-19 09:30:29.038)
     | jdbc.app.DB2Statement -> getResultSet() (2003-11-19 09:30:29.038)
     | | 10: Statement Handle = 1:1
     | | jdbc.app.DB2ResultSetTrace -> DB2ResultSet(stmt, nCols=0) (2003-11-19 09:30:29.047)
     | | | 10: numCols = 2
     | | jdbc.app.DB2ResultSetTrace <- DB2ResultSet() [Time Elapsed = 0.0] (2003-11-19 09:30:29.047)
     | | jdbc.app.DB2ResultSetTrace -> DB2ResultSetTrace(stmt,0) (2003-11-19 9:30:29.47)
     ...
    end example

From the above example, it is not difficult to find details such as the driver being used, the time spent obtaining the connection, the SQL statements executed, and so on. If you encounter problems when using the DB2 Legacy JDBC Driver, the JDBC trace is generally very helpful for pointing out where the problem exists.

Due to the importance of the DB2 Universal JDBC Driver, we discuss its trace in a separate subsection below.

DB2 Universal JDBC Driver JDBC Trace

Before jumping into the JDBC trace, make sure that the DB2 Universal JDBC Driver is correctly installed. For information about installing it, refer to the topic Installing the DB2 Universal JDBC Driver in the DB2 Information Center at the URL below:

http://publib.boulder.ibm.com/infocenter/db2help/index.jsp

In addition, make sure that the right DataSource class is chosen for your runtime environment. The DB2 Universal JDBC Driver provides the following DataSource implementations:

  • com.ibm.db2.jcc.DB2SimpleDataSource

    This implementation does not support connection pooling. You can use this implementation with Universal Type 2 Driver or Universal Type 4 Driver.

  • com.ibm.db2.jcc.DB2DataSource

    This implementation supports connection pooling. You can use this implementation only with Universal Type 2 Driver. With this implementation, connection pooling is handled internally and is transparent to the application.

  • com.ibm.db2.jcc.DB2ConnectionPoolDataSource

    This implementation supports connection pooling. You can use this implementation with both Universal Type 2 Driver and Universal Type 4 Driver, but XA is not supported. It is the factory for PooledConnection objects. An object that implements this interface will typically be registered with a naming service that is based on the Java Naming and Directory Interface (JNDI). With this implementation, you must manage the connection pooling yourself, either by writing your own code or by using a tool such as WebSphere Application Server.

  • com.ibm.db2.jcc.DB2XADataSource

    This implementation supports distributed transactions and connection pooling. You can use this implementation with Universal Type 2 Driver and XA support, but be aware that Universal Type 4 Driver is not supported by this implementation. With this implementation, you must manage the distributed transactions and connection pooling yourself, either by writing your own code or by using a tool such as WebSphere Application Server.

Note 

The com.ibm.db2.jcc.DB2BaseDataSource class is the abstract data source parent class for all the DB2 DataSource implementations of the DB2 Universal JDBC Driver discussed above.

You can also use the DriverManager interface to get a connection with the DB2 Universal JDBC Driver; the related class is com.ibm.db2.jcc.DB2Driver. Using DriverManager to connect to a data source reduces portability, because the application must identify a specific JDBC driver class name and driver URL, which are specific to a JDBC vendor and driver implementation. If your applications need to be portable among data sources, it is highly recommended to use the DataSource interface, typically obtained through a JNDI lookup, as in the sketch below.
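The following minimal sketch is intended to run inside the application server (for example, from a servlet or session bean) where the JNDI environment is available; the resource reference name jdbc/SampleDS is hypothetical.

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PortableConnectionHelper {
    // Obtains a connection without naming any driver class or driver URL,
    // which is what keeps the application portable across data sources.
    public static Connection getSampleConnection() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/SampleDS");
        return ds.getConnection();
    }
}

The actual database, host, and credentials are supplied by the data source definition in the application server, not by the application code.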

Starting the trace when using the DataSource interface to get the connection

If the DataSource interface is used to connect to a data source, use one of the following methods to start the trace:

  • Method 1: Invoke the DB2BaseDataSource.setTraceLevel method to set the type of tracing that you need. The default trace level is TRACE_ALL. Then invoke the DB2BaseDataSource.setJccLogWriter method to specify the trace destination and turn the trace on.

  • Method 2: Invoke the javax.sql.DataSource.setLogWriter method to turn the trace on. With this method, TRACE_ALL is the only available trace level.

After a connection is established, you can turn the trace off or back on, change the trace destination, or change the trace level with the DB2Connection.setJccLogWriter method. To turn the trace off, set the logWriter value to null.

The logWriter property is an object of type java.io.PrintWriter. If your application cannot handle java.io.PrintWriter objects, you can use the traceFile property to specify the destination of the trace output. To use the traceFile property, set the logWriter property to null, and set the traceFile property to the name of the file to which the driver writes the trace data. This file and the directory in which it resides must be writable. If the file already exists, the driver overwrites it.
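To make Method 1 concrete, the following minimal Java sketch turns the trace on for a DB2SimpleDataSource. The host name, port, database, credentials, and trace file path are hypothetical; the trace level constants are assumed to be those listed later in this section, defined on com.ibm.db2.jcc.DB2BaseDataSource, and setJccLogWriter is assumed to accept a java.io.PrintWriter, as described above.

import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.Connection;
import com.ibm.db2.jcc.DB2BaseDataSource;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class JccDataSourceTrace {
    public static void main(String[] args) throws Exception {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setServerName("kanaga.example.com");   // hypothetical host
        ds.setPortNumber(50000);                  // hypothetical port
        ds.setDatabaseName("sample");
        ds.setDriverType(4);                      // Universal Type 4 connectivity

        // Method 1: choose what to trace, then set the destination to turn the trace on.
        ds.setTraceLevel(DB2BaseDataSource.TRACE_ALL);
        PrintWriter logWriter = new PrintWriter(new FileWriter("/tmp/jcc_trace.log"), true);
        ds.setJccLogWriter(logWriter);

        Connection con = ds.getConnection("db2inst1", "password");
        // ... reproduce the failing scenario here; trace entries go to /tmp/jcc_trace.log
        con.close();
        logWriter.close();
    }
}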

Starting the trace when using the DriverManager interface to get the connection

If the DriverManager interface is used to connect to a data source, use the following method to start the trace: Invoke the DriverManager.getConnection method with the traceLevel property set in the info parameter or url parameter for the type of tracing that you need. The default trace level is TRACE_ALL. Then invoke the DriverManager.setLogWriter method to specify the trace destination and turn the trace on.
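The following minimal sketch shows one way to apply this with the DB2 Universal JDBC Driver; the URL, credentials, and trace file path are hypothetical, and the log writer is set before the connection is requested so that the connection itself is captured in the trace.

import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class JccDriverManagerTrace {
    public static void main(String[] args) throws Exception {
        // Load the DB2 Universal JDBC Driver (class name as given earlier).
        Class.forName("com.ibm.db2.jcc.DB2Driver");

        // Specify the trace destination and turn the DriverManager trace on.
        PrintWriter logWriter = new PrintWriter(new FileWriter("/tmp/jcc_dm_trace.log"), true);
        DriverManager.setLogWriter(logWriter);

        // Pass the traceLevel property in the info parameter; -1 corresponds to TRACE_ALL.
        Properties info = new Properties();
        info.setProperty("user", "db2inst1");       // hypothetical credentials
        info.setProperty("password", "password");
        info.setProperty("traceLevel", "-1");

        Connection con = DriverManager.getConnection(
            "jdbc:db2://kanaga.example.com:50000/sample", info);
        // ... reproduce the failing scenario here
        con.close();
        logWriter.close();
    }
}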

An example is available in the section Example of tracing under the DB2 Universal JDBC Driver in the DB2 Information Center. It shows how to take a JDBC trace programmatically when using the DB2 Universal JDBC Driver.

When using the DataSource interface in the WebSphere Application Server V5 environment, it is very convenient to configure the trace properties in the WebSphere Application Server Administrative Console. The trace properties are part of the custom properties of the DB2 data source. For how to configure custom properties for a DB2 data source in the WebSphere Application Server Administrative Console, refer to "The steps to create and configure DB2 Data Source" on page 144. The following are the parameters related to configuring the DB2 Universal JDBC Driver trace.

  • traceFile

    Specifies the name of a file into which the DB2 Universal JDBC Driver writes trace information. The data type of this property is String. The traceFile property is an alternative to the logWriter property for directing the output trace stream to a file.

  • traceFileAppend

    Specifies whether to append to or overwrite the file that is specified by the traceFile property. The data type of this property is boolean. The default is false, which means that the file that is specified by the traceFile property is overwritten.

  • traceLevel

    Specifies what to trace. The data type of this property is int. You can specify one or more of the following traces with the traceLevel property; to specify more than one, add the corresponding values together (a bitwise OR of the constants).

    • TRACE_NONE = 0

    • TRACE_CONNECTION_CALLS = 1

    • TRACE_STATEMENT_CALLS = 2

    • TRACE_RESULT_SET_CALLS = 4

    • TRACE_DRIVER_CONFIGURATION = 16

    • TRACE_CONNECTS = 32

    • TRACE_DRDA_FLOWS = 64

    • TRACE_RESULT_SET_META_DATA = 128

    • TRACE_PARAMETER_META_DATA = 256

    • TRACE_DIAGNOSTICS = 512

    • TRACE_SQLJ = 1024

    • TRACE_ALL = -1

    • TRACE_XA_CALLS (Universal Type 2 Driver for DB2 UDB for Linux, UNIX and Windows only)

After configuring the trace properties for the DB2 data source, you can test the connection to the data source in the WebSphere Application Server Administrative Console and then check the file specified by the traceFile property; the content corresponding to the levels set by traceLevel is written into that file. For a detailed example of using the DB2 Universal JDBC Driver trace in the WebSphere Application Server V5 environment, refer to "Connectivity scenario" on page 359.

If you want more information about the other kinds of traces available in DB2 UDB V8, such as db2trc, the CLI trace, and so forth, the DB2 Information Center is a very good resource.

Besides investigating log files and traces for problem determination, SQL statement access plan analysis is also very useful for identifying performance bottlenecks. For example, you can use an event monitor to find the typical or resource-consuming SQL statements, then use the SQL Explain utilities to analyze specific statements further. For more information about the DB2 SQL Explain facility, refer to Chapter 7, "SQL Explain facility," in Administration Guide: Performance, IBM DB2 Universal Database, Version 8, SC09-4821.

9.2.2 WAS V5 diagnostic information collection and analysis

Generally, a typical J2EE application runtime environment consists of multiple tiers, such as Web servers, application servers, and database servers, as well as the components that connect them, such as the Web server plug-in, resource adapters, and so on. Because WebSphere Application Server is the J2EE runtime platform, its troubleshooting or problem determination encompasses a wide range of tasks that might need to be performed at any tier or in any component of the runtime environment. To assist you in identifying in which component or tier a problem exists, WebSphere Application Server provides a variety of diagnostic logs and tools to make problem determination easier.

Note 

As the IBM HTTP Server (IHS) commonly coexists with the WebSphere Application Server, we also discuss diagnostic information analysis about the IHS in this section.

As a starting point for resolving functional problems, we begin with component availability verification, so that we gain a basic understanding of whether all the related tiers and components are functioning normally. After that we introduce the basic diagnostic information available in the WebSphere Application Server environment and some useful tools for problem determination.

Component availability verification

It is possible that the problem you are studying does not originate in the application server or its containers. For example, when you make a request to an entity EJB, the request fails if the supporting database resources are not available at that moment. Or you might find that a requested page cannot be found because the Web server plug-in was not successfully loaded by the Web server process. Making sure that all the related tiers and components are functioning normally is the basis for deeper problem troubleshooting.

Table 9-1 gives some simple sample methods for verifying component availability. Be aware that the methods provided here may not work in your environment if you have changed the default configuration, and that they do not guarantee that a component is working normally. In either case, you may have environment-specific methods to verify component availability.

Table 9-1: Simple methods for component availability verification

  • Load Balancer (provides intelligent workload dispatching among backend Web servers): Ping the related IP address and the cluster address.

  • IBM HTTP Server (takes the role of Web server): Access the Web server with the default configuration at http://ipaddress, or http://localhost if local.

  • Web Server Plug-in for WebSphere Application Server (routes requests from the Web server to the application server): Check the plug-in log file, http_plugin.log by default.

  • WebSphere Application Server (provides the runtime platform for J2EE applications): Check the log files and traces; more details are covered later in this section.

  • Web Container (provides the runtime environment for application Web modules; part of WAS): Check the availability of the HTTP transport port, 9080 by default; use netstat to see whether it is in listening status.

  • EJB Container (provides the runtime environment for application EJB modules; part of WAS): Use a simplified Java client to access an EJB.

  • DB2 UDB (takes the role of database): Use the DB2 Command Line Processor to access the related tables.

Diagnostic information analysis

WebSphere Application Server provides comprehensive diagnostic information to assist you in problem determination. For example, the console messages available in the WebSphere Status pane of the WebSphere Application Server Administrative Console provide runtime messages and help you identify WebSphere configuration problems. WebSphere Application Server also provides general purpose logs, such as the JVM logs, the process logs, and the IBM service log. In addition, a diagnostic trace is available for obtaining more detail about component runtime behavior. You can configure these log files and the trace in the WebSphere Application Server Administrative Console, as shown in Figure 9-2.

Figure 9-2: Logging and tracing configuration in WAS Administrative Console

Furthermore, there are also log files for IBM HTTP Server and the Web server plug-in, which can help you determine whether the problem is related to IHS or the plug-in.

The following provides more details about these logs and the trace.

JVM logs

The JVM logs are created by redirecting the System.out and System.err streams of the JVM to independent log files. WebSphere Application Server writes formatted messages to the System.out stream. In addition, applications and other code can write to these streams using the print() and println() methods defined by the streams. Some JDK built-ins such as the printStackTrace() method on the Throwable class can also write to these streams. Typically, the System.out log is used to monitor the health of the running application server. The System.out log can also be used for problem determination, but it is recommended to use the IBM Service log and the advanced capabilities of the Log Analyzer instead. The System.err log contains exception stack trace information that is useful when performing problem analysis.

Since each application server represents a JVM, there is one set of JVM logs for each application server and all of its applications, located by default in the installation_root/logs/server_name directory. In the case of a WebSphere Application Server Network Deployment configuration, JVM logs are also created for the deployment manager and each node manager, since they also represent JVMs. The default setting for the JVM logs is listed below:

 System.out Stream: ${SERVER_LOG_ROOT}/SystemOut.log System.err Stream: ${SERVER_LOG_ROOT}/SystemErr.log 

See the WebSphere Variables page in the WebSphere Application Server Administrative Console for the definition of SERVER_LOG_ROOT. By default, it is the directory installation_root/logs/server_name.

Process logs

The process logs are created by redirecting the stdout and stderr streams of the process to independent log files. Native code, including the Java Virtual Machine (JVM) itself, writes to these files. As a general rule, WebSphere Application Server does not write to these files. However, these logs can contain information relating to problems in native code or diagnostic information written by the JVM.

As with JVM logs, there is a set of process logs for each application server, since each JVM is an operating system process, and in the case of a WebSphere Application Server Network Deployment configuration, there is also a set of process logs for the deployment manager and each node manager. The default setting for the process logs is listed below:

 Stdout File Name: ${SERVER_LOG_ROOT}/native_stdout.log Stderr File Name: ${SERVER_LOG_ROOT}/native_stderr.log 

You could refer to the JVM logs above for the SERVER_LOG_ROOT definition.

IBM Service logs

The IBM service logs contain both the WebSphere Application Server messages that are written to the System.out stream and some special messages that contain extended service information that is important when analyzing complex problems. There is one service log for all WebSphere Application Server JVMs on a node, including all application servers. The IBM Service log is maintained in a binary format and requires a special tool to view. This viewer, the Log Analyzer, provides additional diagnostic capabilities. The default setting for the IBM service logs is listed below:

 File Name: ${LOG_ROOT}/activity.log 

See the WebSphere Variables page in the WebSphere Application Server Administrative Console for the definition of LOG_ROOT. By default, it is the directory installation_root/logs.

Diagnostic trace

You can use a trace to obtain detailed information about the execution of WebSphere Application Server components, including application servers, clients, and other processes in the environment. Trace files show the time and sequence of methods called by WebSphere Application Server base classes, and you can use these files to pinpoint the failure. By default, the trace output is stored in ${SERVER_LOG_ROOT}/trace.log, and trace details can be found in this file after the trace is activated. Refer to the JVM logs above for the SERVER_LOG_ROOT definition.

Attention: 

Tracing is very demanding on system resources. Remember to turn the trace off once you have finished the trace task.

IBM HTTP Server logs

IBM HTTP Server (IHS) maintains log files that help you monitor fulfilled requests and encountered errors. You can use the IHS Administration Server to configure the log file settings, or change httpd.conf directly if you are familiar with it. By default, the log files of IHS on the Windows platform can be found at:

 Access Log File: <IHS_INSTALLATION_ROOT>\logs\access.log Error Log File: <IHS_INSTALLATION_ROOT>\logs\error.log 

For Linux and UNIX-based platforms:

 Access Log File: <IHS_INSTALLATION_ROOT>/logs/access_log Error Log File: <IHS_INSTALLATION_ROOT>/logs/error_log 

Here IHS_INSTALLATION_ROOT represents where the IHS is installed.

Web Server Plug-in logs

If you are having problems with the HTTP plug-in component (the component that sends requests from your HTTP server to WebSphere Application Server), you can find clues by reviewing the plug-in log file. By default, it is located at install_dir/logs/http_plugin.log. Look up any error or warning messages in the message table for possible causes of the problem. You can also change the LogLevel for the plug-in log to a higher level, for example Trace, to obtain further details about the plug-in operation. This is configured in the file installation_root/config/plugin-cfg.xml (check HttpServer/conf/httpd.conf for the actual plug-in configuration file location).

Besides the log files and traces discussed above, WebSphere Application Server V5 also provides a variety of other log files. For example, StartServer.log and StopServer.log, located under <WAS_INSTALL_ROOT>/logs/<server_name> (where the variable WAS_INSTALL_ROOT represents the home directory in which WAS is installed), help you find out when the application server was started or stopped and whether the start or stop activity was successful. As another example, the First Failure Data Capture (FFDC) tool preserves the information generated by a processing failure and returns control to the affected engines. The data captured by FFDC is saved under the directory <WAS_INSTALL_ROOT>/logs/ffdc and is intended primarily for use by IBM service personnel. For a more complete description of WebSphere Application Server message logs and traces, refer to the section Monitoring and Troubleshooting in the WebSphere Application Server InfoCenter.

Using troubleshooting tools

There are a number of troubleshooting tools bundled with the WebSphere Application Server product. These tools are designed to help you isolate the source of problems. Some of these tools are discussed below.

Log Analyzer

The Log Analyzer takes one or more service or activity logs, merges all of the data, and displays the entries. Based on its symptom database, the tool analyzes and interprets the event or error conditions in the log entries to help you diagnose problems. The Log Analyzer has a special feature enabling it to download the latest symptom database from the IBM Web site. Besides using the Log Analyzer to view the service or activity logs, you can also dump WebSphere Application Server diagnostic trace output in the Log Analyzer format and then use the Log Analyzer to analyze the trace. The Log Analyzer is invoked by the command waslogbr.bat on Windows systems or waslogbr on UNIX systems.

If no graphical interface is available for the Log Analyzer, the service or activity log can be viewed with an alternative tool, showlog. This utility dumps the service or activity log to a file or to stdout in text format.

Collector

The Collector tool gathers information about your WebSphere Application Server installation and packages it in a Java archive (JAR) file that assists you in determining and analyzing the problem. You can also send it to IBM Customer Support for further help when requested. Information in the JAR file includes logs, property files, configuration files, operating system and Java data, and the presence and level of each software prerequisite.

For Linux and UNIX-based platforms, the Collector tool is invoked with the collector.sh command. For Windows platforms, the corresponding command is collector.bat.

WebSphere Application Server products include an enhancement to the Collector tool beginning with Version 5.0.2, known as the collector summary option. Run the Collector tool with the -Summary option to produce a lightweight text file. You can use the collector summary option to retrieve basic configuration and prerequisite software level information when starting a conversation with IBM Support.

In addition to the tools introduced above, there are also a variety of tools available in WebSphere Application Server to help you obtain more information for problem analysis and for performance monitoring and tuning. For example, you can use the name space dump utility (dumpNameSpace) to dump the contents of a name space accessed through a name server, or use Tivoli Performance Viewer to monitor the current running status of an application server. For more information, refer to the WebSphere Application Server InfoCenter.


