Database Access

Overview

At this point in this book, you have been exposed to much of the Java language, and should be getting comfortable with Java's syntax and capabilities. However, the primary activity in all of your RPG code is accessing data. Almost all AS/400 RPG applications revolve around database access. They are "based" on "data." Clearly, then, to put Java to use, whether on the client or on the server, you need to know how Java programs can access DB2/400 data.



Database Access in RPG IV

Processing database files has not changed much from RPG III to RPG IV. The operations are still the same (CHAIN, SETLL, SETGT, READ, READP, READE, READPE, WRITE, UPDATE, DELETE, LOCK, UNLOCK, FEOD, OPEN, and CLOSE), although the longer names are now spelled correctly! However, the syntax of the F-spec for declaring database files is different, since F-spec has been redesigned. Here is what the F-spec for a database file looks like in RPG IV:

678901234567890123456789012345678901234567890123456789012 
FFilename++IPEASFRlen+LKlen+AIDevice+.Keywords+++++++++++ 
FProddtl UF E K Disk

This declares an updatable, full-procedural, externally described, keyed file named PRODDTL. Note the keyword area from column 44 on. This area allows RPG IV to do away with continuation specifications. Rather, keywords are used to specify additional information, and you can continue coding keywords in positions 44 through 80 on as many subsequent F-spec lines as needed (these lines will have positions 7 through 43 blank). The valid keywords for database files include BLOCK, COMMIT, EXTIND, IGNORE, INCLUDE, INFDS, INFSR, PREFIX, RENAME, RECNO, and USROPN. Some you probably recognize from RPG III, while others such as INCLUDE and PREFIX are new for RPG IV.

V5R1 of RPG IV offers some exciting new capability via its new EXTFILE and EXTMBR F-spec keywords. With these keywords, you can specify the library-qualified file name and the explicit member to open, respectively. In both cases, the name can be given as a quote-delimited literal or as a field name. For the latter, when not using USROPN to defer opening of the file until runtime, the field's value must be available before the automatic open at program initialization, either by initializing it with the INZ keyword or by passing the value in as an entry parameter. If no library is given for EXTFILE, then *LIBL is used. For EXTMBR, the special names *ALL and *FIRST are allowed. For explicit library, file, and member name literals or field values, the name must be in uppercase to work properly.

When processing the file, the op-codes now use optional extenders instead of the overloaded column 53 in RPG III. For example, READ(N) means read with no lock. There is also a new E extender that simplifies error processing. When specified, you can do away with result indicators and instead simply test if the %ERROR built-in function equals one (an error) or zero (no error), and use %STATUS to get the file status.

Some RPG programs today still use the RPG cycle to process files. However, this is generally considered an unstructured and obsolete form of database access. If you are writing ILE RPG IV modules in which you specify the NOMAIN keyword on the H-spec, you do not even have the option of using the RPG cycle because it is not permitted. (Note that VisualAge for RPG does not support the RPG cycle at all.)

RPG's built-in database support is sometimes called direct record access. It is noteworthy in the database industry because most relational database access is done through some form of SQL (Structured Query Language) statements, as you will learn.



RPG Database Access Under Commitment Control

If you are conducting sensitive multiple-database transactions, where each transaction consists of multiple database updates, you might also use commitment control by following these steps:

  • Prepare for commitment control using the CL commands CRTJRN (Create Journal), CRTJRNRCV (Create Journal Receiver) and STRJRNPF (Start Journal Physical File).
  • Start commitment control for your application using the CL command STRCMTCTL (Start Commitment Control).
  • Declare the files on the F-spec as being under commitment control, using the COMMIT keyword. You can specify an optional runtime flag for dynamically controlling the use of commitment control.
  • Open the files, either automatically or by specifying the USROPN keyword on the F-spec and using the OPEN operation code.
  • Work with the files' records using the usual RPG operation codes.
  • Commit the files' changes using the COMMIT operation code, or cancel the files' changes if any error occurs, using the ROLBK (Roll Back) op-code.
  • End commitment control using the CL command ENDCMTCTL (End Commitment Control).

Commitment control can be scoped to either the job or an ILE activation group. It allows you to treat multiple database operations as a single atomic unit of work: either they all succeed, or none of them do. This is the fundamental way in which complex transactions ensure database integrity: you do not want "partial" transactions to corrupt the state of your data.
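The commit/rollback semantics themselves are universal, even though DB2/400 ties them to journals and CL commands. As an illustration only, using Python's self-contained sqlite3 module (not an AS/400 tool) and a made-up account table, here is the all-or-nothing behavior of a transaction:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT, balance INTEGER)")
con.execute("INSERT INTO account VALUES ('A', 100), ('B', 100)")
con.commit()

# A transfer is two updates; both must succeed or neither should.
try:
    con.execute("UPDATE account SET balance = balance - 50 WHERE name = 'A'")
    raise RuntimeError("simulated failure between the two updates")
    # The matching credit never runs:
    # con.execute("UPDATE account SET balance = balance + 50 WHERE name = 'B'")
except RuntimeError:
    con.rollback()    # the ROLBK equivalent: the partial debit is undone

rows = list(con.execute("SELECT name, balance FROM account ORDER BY name"))
print(rows)
```

After the rollback, both balances are intact; had both updates run, con.commit() would have made them permanent together, which is exactly the COMMIT op-code's role.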



Accessing Data with SQL

SQL is a database industry-standard method of accessing a relational database (one containing records and fields). The current standard is ANSI SQL99. Your SQL code and skills are transferable to other databases and other systems. As you will see, these skills are also directly portable to Java.

The SQL language has a well-defined syntax for creating and manipulating databases. If you use it to create databases on the AS/400, those databases are identical and interchangeable with database files created with DDS. If you have an existing database created with traditional DDS, you can still access that database with SQL's database manipulation statements, as though the database were created with SQL statements, and vice versa. (Note that there are some differences.) One restriction, however, is that SQL cannot be used to access multiple-record-format logical files. SQL assumes one format per file and has no syntax for qualifying format names when reading or writing a database.

SQL uses terminology that is different from what you might be used to: rows are records, columns are fields, and tables are files.

When using SQL, you do not use RPG's record-oriented style to access your data. Rather, you use statements where you specify record filter criteria to retrieve, update, or delete all rows meeting those criteria. In RPG, you would code a loop to read through all records (hence record-level access) you wish to view or update, while in SQL you first retrieve all pertinent rows using a filter. You then iterate through each row. Here is an example of a filter:

SELECT * FROM CUSTOMER WHERE STATE='PA'

This retrieves records from the CUSTOMER table where the STATE field is equal to PA. Note that for each row, all (*) the columns will be returned. Alternatively, you could specify an explicit list of comma-separated field names.

The WHERE clause in this statement is called the predicate. It can get as complex as required, using boolean logic and numerous operators to explicitly identify the exact records you are looking for, and functions to apply to the matching rows. For example, you can specify the COUNT column function to count and report the number of matching rows, or the AVG column function to compute and report the average value of a column across all applicable rows. There is support for retrieving from multiple tables, for doing "total" calculations, and more.

A SELECT statement can have an ORDER BY clause to sort the returned rows. It can have a GROUP BY clause to apply column functions to sub-groups of the rows. It can also have a HAVING clause to further filter the rows in the result based on applying a function such as AVG and comparing the output of it. The following is from Database Design and Programming for DB2/400 by Paul Conte (29th Street Press, 1996), a book we recommend:

SELECT ShpCity, COUNT(*), AVG(Discount)
  FROM Customer
  WHERE Discount IS NOT NULL
  GROUP BY ShpCity
  HAVING AVG(Discount) > .01

Rather than returning rows of data from the database, this statement returns calculated information based on that data. You get back information about the customers who have an average discount greater than 0.01. This information is grouped by the customer's city, so you'll get back one row of information per city that has customers meeting the criteria. The information you get per row is the name of the city, the number of customers in that city, and the average discount for those customers, as specified on the SELECT statement. Note that customers with no discount value will not be included in the calculations.
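A query like this runs on any SQL engine. As an illustration only, here is the same kind of GROUP BY/HAVING statement tried against Python's sqlite3 module (chosen purely because it needs no server) with a made-up Customer table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customer (ShpCity TEXT, Discount REAL)")
con.executemany("INSERT INTO Customer VALUES (?, ?)",
                [("Pittsburgh", 0.05), ("Pittsburgh", 0.03),
                 ("Erie", 0.00), ("Erie", None),
                 ("Scranton", 0.02)])

query = """SELECT ShpCity, COUNT(*), AVG(Discount)
             FROM Customer
             WHERE Discount IS NOT NULL
             GROUP BY ShpCity
             HAVING AVG(Discount) > .01"""

# One row per qualifying city: name, customer count, average discount.
rows = sorted(con.execute(query))
for city, count, avg in rows:
    print(city, count, avg)
```

Erie drops out: its NULL-discount customer is excluded by the WHERE clause, and its remaining average (0.00) fails the HAVING test.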

As you explore SQL, you will be amazed at the power of the SELECT statement for doing queries and calculations. The result of a query is a temporary result set containing all of the returned rows, and there are SQL statements for iterating through each row. (In embedded SQL, FETCH moves you to the next row and places that row's column values into specified host fields.) To update records, there is no need for iteration. You simply specify the filter criteria for the affected records and the new value for the field or fields you want to change, like this:

UPDATE CUSTOMER SET RATE=2 WHERE STATE='PA'

In this case, all records in the CUSTOMER database file where the STATE field is PA have their RATE field changed to two. Is this easier than writing loops?

Deleting records is similar:

DELETE FROM CUSTOMER WHERE STATE='WY' AND RATE=2

This deletes all records from the CUSTOMER database where the STATE is WY and the RATE field has a value of two. For these types of "apply to all records of this criteria" operations, SQL can offer some coding advantages. In fact, it is similar to creating a logical file view over the database, on the fly. There is also a way to update and delete the current record being processed in a result set that comes from a SELECT statement. This is called positioned update and delete.

To insert records into a database you use the INSERT INTO statement, like this:

INSERT INTO CUSTOMER (CUSTOMER, STATE, RATE) VALUES('Bobs Bait', 'MA', 1)

This inserts a single row into the CUSTOMER table, with values specified for the columns CUSTOMER, STATE, and RATE. There is also syntax to insert multiple rows into the database in a single statement.
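You can watch these set-at-a-time semantics in action on any SQL engine. The sketch below uses Python's sqlite3 (again, just a convenient stand-in, with a made-up CUSTOMER table) to run the chapter's INSERT, UPDATE, and DELETE statements and report how many rows each one touched:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CUSTOMER (CUSTOMER TEXT, STATE TEXT, RATE INTEGER)")

# INSERT INTO adds rows; naming the columns makes the statement self-documenting.
con.execute("INSERT INTO CUSTOMER (CUSTOMER, STATE, RATE) "
            "VALUES ('Bobs Bait', 'MA', 1)")
con.executemany("INSERT INTO CUSTOMER VALUES (?, ?, ?)",
                [("Acme", "PA", 1), ("Keystone", "PA", 3), ("Teton", "WY", 2)])

# One UPDATE touches every row matching the predicate -- no read loop needed.
cur = con.execute("UPDATE CUSTOMER SET RATE=2 WHERE STATE='PA'")
print("updated:", cur.rowcount)

# DELETE works the same way: one statement, all matching rows.
cur = con.execute("DELETE FROM CUSTOMER WHERE STATE='WY' AND RATE=2")
print("deleted:", cur.rowcount)
```

The UPDATE reports two rows changed and the DELETE reports one row removed, with no RPG-style READ/UPDATE loop anywhere in sight.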

Interactive SQL versus embedded SQL

SQL consists of statements such as SELECT, UPDATE, DELETE, and INSERT. On the AS/400, these statements can be run interactively, with the result shown immediately, via the STRSQL (Start SQL) CL command. This places you in a shell for submitting SQL statements. You can also code your SQL statements into a source member and run them from there via the RUNSQLSTM (Run SQL Statements) CL command.

Another option is to embed SQL statements directly into your RPG, COBOL, C, C++, PL/I, or FORTRAN source. Even REXX supports embedded SQL. In these cases, you use special syntax to distinguish between your native source and your SQL source. All SQL source is bracketed by EXEC SQL and END-EXEC, for example:

C/Exec SQL
C+ UPDATE CUSTOMER
C+ SET RATE=2
C+ WHERE STATE='PA'
C/End-Exec

All SQL statements are coded on C-specs. To continue an SQL statement over multiple lines, use a plus sign in position 7, as shown in the example above with "C+". The SQL statements entered are free-format and case-insensitive.

In embedded SQL, you do not use a SELECT statement directly to retrieve a set of rows. Rather, you have to declare a cursor by using the DECLARE CURSOR statement, and specify SELECT there as a statement clause. Then, you can OPEN the cursor (this reads the records from the database that meet the SELECT criteria), FETCH the individual records from the result set (one at a time), process the columns of the currently fetched row, and CLOSE the cursor.

Rather than declaring a file and processing file records as you do in native RPG code, with SQL, you declare a cursor on the file and process the resulting set of records produced by the cursor. The DECLARE CURSOR statement requires you to name the cursor, and this name is subsequently used on the OPEN, FETCH, and CLOSE statements.
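The DECLARE/OPEN/FETCH/CLOSE rhythm of embedded SQL maps directly onto the cursor objects most dynamic SQL APIs expose. A sketch of the pattern with Python's sqlite3 (used here only because it is self-contained; the table is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CUSTOMER (NAME TEXT, STATE TEXT)")
con.executemany("INSERT INTO CUSTOMER VALUES (?, ?)",
                [("Acme", "PA"), ("Keystone", "PA"), ("Teton", "WY")])

# DECLARE CURSOR + OPEN: define the SELECT and produce its result set.
cur = con.cursor()
cur.execute("SELECT NAME FROM CUSTOMER WHERE STATE='PA' ORDER BY NAME")

# FETCH: pull one row at a time until the result set is exhausted.
names = []
row = cur.fetchone()
while row is not None:
    names.append(row[0])
    row = cur.fetchone()

cur.close()    # CLOSE the cursor when done
print(names)
```

The loop stops when the fetch returns nothing more, just as an embedded-SQL FETCH sets the "no more rows" condition at the end of the cursor.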

When you embed SQL source in RPG source, you no longer use the RPG compiler command. Instead, the compilation is done using the appropriate SQL preprocessor command, such as CRTSQLRPGI for RPG IV. This preprocessor works in two steps. First, it processes the embedded SQL into native RPG statements and an SQL access plan. Then, it compiles the generated intermediate RPG using the traditional compiler, such as CRTRPGMOD. (See the DB2 UDB for AS/400 SQL Programming manual for more details.)

Static SQL versus dynamic SQL

SQL statements can be either static or dynamic. SQL statements you embed in a programming language such as RPG are static. Indeed, static statements can only be embedded in another language and precompiled. Dynamic statements are issued in several ways: interactively, in their own source member, or via APIs. Dynamic statements can be prepared and executed dynamically (depending, say, on runtime information the program has), while static statements are hard-coded into the source program.

Although the statements themselves are hard-coded for static SQL, you can still use program variables to supply the parameter values to the statement dynamically. To do this, you specify your program variable name right in the SQL statement, but prefix it with a colon. However, the boolean logic for the filter is still hard-coded. For example, while you can specify a variable for PA in the earlier examples (e.g., WHERE STATE=:MYVAR), you cannot turn STATE= into a variable. For this, you can use embedded SQL's EXECUTE IMMEDIATE statement, which submits a character field containing a dynamic statement to the database.

Thus, you can programmatically build up an entire statement, such as an UPDATE or SELECT, based, perhaps, on user-supplied information. Because the preprocessor has no way of knowing the content of that statement field, no optimization is possible on it. It is truly dynamic. This might be slow if you execute statements many times, so there is a way to speed this up. You can first use PREPARE to tell the database about your statement, and then call EXECUTE on it to run it. Calling EXECUTE many times on the same statement can speed things up, as the cycles to verify the statement and build an access path are done only once at PREPARE time.
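The PREPARE-once, EXECUTE-many idea survives in every dynamic SQL API as parameterized statements. As an illustration (Python's sqlite3, which keeps a prepared-statement cache under the covers; table and values are made up), the statement text is fixed once and only the parameter values change per execution:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CUSTOMER (NAME TEXT, STATE TEXT, RATE INTEGER)")
con.executemany("INSERT INTO CUSTOMER VALUES (?, ?, ?)",
                [("Acme", "PA", 1), ("Teton", "WY", 1)])

# The statement text is fixed, so it need be verified and planned only once...
stmt = "UPDATE CUSTOMER SET RATE=? WHERE STATE=?"

# ...then executed repeatedly with different parameter values.
for rate, state in [(2, "PA"), (3, "WY")]:
    con.execute(stmt, (rate, state))

rows = sorted(con.execute("SELECT STATE, RATE FROM CUSTOMER"))
print(rows)
```

Each execution reuses the same access plan with new values, which is precisely why PREPARE plus repeated EXECUTE beats re-submitting the full statement text every time.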

What are these dynamic statements that you can pass into the EXECUTE IMMEDIATE statement? You will see a list soon, but they are functionally (and often syntactically) the same as those you can statically define in your RPG source.

Dynamic SQL gives you flexibility, allowing you to decide at runtime which SQL statement to run. If your program operates on a specific file, but allows the user or caller to decide what fields will be used in the query, update, delete, or insert action, you should choose dynamic over static SQL.

Dynamic SQL versus CLI APIs

As an alternative to running SQL statements dynamically from RPG using the EXECUTE statement, you could use APIs supplied by the system. These APIs are based on the standard X/Open SQL CLI (Call Level Interface) specification, and are similar to Microsoft's ODBC (Open Database Connectivity), which is also based on (and extends) the evolving X/Open standard. Basically, APIs allow you to pass in SQL statements to be executed, although a few APIs perform other functions. For example, there are APIs for connecting and disconnecting from a local or remote database. There are also APIs for querying information about the database itself, such as lists of files and stored procedures.

CLI APIs are designed to be more database-vendor independent than embedded SQL (a more "standard" standard). A CLI program can potentially be written to easily target multiple vendor databases by dynamically connecting to the database, querying database capability information, and passing in SQL statements on the fly. CLI APIs give you the most flexibility. While dynamic statements allow you to decide at runtime which SQL statement to run, CLI APIs go further by allowing you to decide at runtime which local or remote file to operate against.

If your program allows the user to decide the file and the query or manipulation statements to run, you would choose CLI over dynamic SQL. Furthermore, if your program is designed to operate against multiple databases (such as DB2, Oracle, Sybase, and Microsoft) via end-user control, you will need to use CLI.

Table 13.1 summarizes the various "flavors" of SQL.

Table 13.1: The SQL Alternatives

SQL Statement Type    Run Interactively?    Embed in RPG Source?    Invoke via CLI API?
Static                No                    Yes                     No
Dynamic               Yes                   Yes                     Yes

The embedded and CLI flavors compare as follows:

  • Embedded static: best performance; easiest to code; but hard-coded file and field names.
  • Embedded dynamic: need to use prepared statements to get performance; hard-coded file names, but dynamic field names.
  • CLI APIs: similar to embedded dynamic in performance; hardest to program; but total flexibility in choice of file and field names.

Local versus distributed SQL

SQL can be used by a program to access a database either locally (on the same system) or remotely (on a system other than where the program resides). The latter is called distributed SQL. Distributed SQL involves the same set of statements already discussed, plus a few unique statements for accessing remote data, such as CONNECT for connecting to the remote database (something you must do before processing SQL statements for that database).

There are two types of distributed access, as dictated by IBM's Distributed Relational Database Architecture (DRDA, or "Dr. DA"). The first, called Remote Unit of Work (RUW), allows you to access a single database on a remote system. The second, called Distributed Unit of Work (DUW), allows you to access multiple databases on one or more systems, and synchronize transactions (as delimited by COMMIT and ROLLBACK) across the multiple databases. You might know this as "two-phase commit."

Note that ODBC programs, VisualAge RPG, and Java programs running on the client accessing a DB2/400 database on a single AS/400 server do not need to use distributed SQL, despite what you might expect. Rather, they support direct database access as though the database were local, using a database server program that runs on the AS/400 accepting these remote (client) database requests. You only need to use distributed SQL from the client if you want to access databases across multiple AS/400 servers, or if you want to access DB2/400 data by going through another database such as DB2/NT.

That is, when using distributed SQL embedded in your RPG program, for instance, the compiled SQL statements are not stored locally by the precompiler as they are for non-distributed SQL. Rather, they are stored on the remote system where the database exists. How are they stored? By using a package object. You identify to the database that your embedded-SQL RPG source is using distributed SQL by specifying a remote or local database name on the RDB (Relational Database) parameter of the CRTSQLRPGI preprocessor command. At that time, you also name the package you wish to create from your embedded SQL statements on the SQLPKG (SQL Package) parameter. This package object is merely a persistent way to store the processed SQL statements (the access plan) from your RPG source.

SQL terminology

As you have seen, SQL uses its own terminology that differs from the native AS/400 terminology. The differences are summarized in Table 13.2.

Table 13.2: AS/400 Versus RPG Versus SQL Terminology

AS/400 Term              RPG Term           SQL Term
DB2/400                  n/a                Database
Library                  n/a                Collection
Library + objects        n/a                Schema
Physical file            Disk file          Table
Logical file (no key)    Disk file          View
Logical file (with key)  Keyed disk file    Index
Record format            Format             Metadata
Record                   Data structure     Row
Field                    Field              Column

Why the new terms? Because this is a database- and operating system-independent standard language. In SQL, you have one or more databases, each with one or more collections. Each collection has one or more tables, and each table has one or more columns. On the AS/400, there is just one database (DB2/400), while in the rest of the world there are many databases. This makes common terminology important. Using this terminology, DB2/400 has collections (libraries) of tables (physical files), and has views (logical files) as well as indexes (keyed logical files). Each table contains one or more columns (fields). When you read from the database, you read rows (records) of data.

Views are like traditional logical files (and they are implemented as logical files). They allow you to filter one or more files. You can access a view from your program as though it were a physical file. However, views provide significantly more filtering capability than logical-file DDS can offer.

Indexes are also implemented as logical files but, unlike views, you cannot access them from your program as though they were physical files. Rather, an index is a tool that you can use to define a permanent access path to your database file for performance reasons. Your code still refers to the original file, but the system recognizes when it is appropriate for an existing index to be used. This saves the overhead of creating a temporary access path on the fly. Mind you, indexes do require some additional system overhead in terms of maintenance, so there is a tradeoff to be considered.
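This difference between views and indexes is easy to see on any SQL engine. In the sketch below (Python's sqlite3 as a stand-in for DB2/400; table, view, and index names are made up), the view is queried as though it were a file, while the index never appears in the query at all; the optimizer simply chooses it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CUSTOMER (NAME TEXT, STATE TEXT, RATE INTEGER)")
con.executemany("INSERT INTO CUSTOMER VALUES (?, ?, ?)",
                [("Acme", "PA", 2), ("Teton", "WY", 1), ("Keystone", "PA", 3)])

# A view is a stored filter you can read as though it were a physical file.
con.execute("CREATE VIEW PACUST AS "
            "SELECT NAME, RATE FROM CUSTOMER WHERE STATE='PA'")
names = sorted(con.execute("SELECT NAME FROM PACUST"))
print(names)

# An index is a permanent access path; queries still name the base table.
con.execute("CREATE INDEX CUSTSTATE ON CUSTOMER (STATE)")
plan = list(con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM CUSTOMER WHERE STATE='PA'"))
print(plan)  # the plan typically shows the CUSTSTATE index being used
```

Your code never refers to CUSTSTATE; the system spots that the existing access path fits the predicate, which is exactly the role an SQL index plays on the AS/400.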

SQL sublanguages: DDL versus DML

You have seen the various flavors of SQL statements, but you have not yet seen what SQL statements actually do. SQL is divided into two sublanguages: DDL (Data Definition Language) and DML (Data Manipulation Language). DDL statements create and alter entire tables or files; they are the SQL equivalent of DDS, defining your database rather than your data. DML statements, in contrast, manipulate the actual data: reading, updating, inserting, and deleting rows or records.

Table 13.3 lists common DDL statements, while Table 13.4 lists common DML statements. As you get into Java and its database support, you will use these often, so they are important to know.

Table 13.3: Common DDL Statements

Statement                  Description
CREATE SCHEMA/COLLECTION   Create a new schema/collection (library).
CREATE TABLE               Create a new table (file).
CREATE INDEX               Create a new index (keyed logical file).
CREATE VIEW                Create a new view (non-keyed logical file).
ALTER TABLE                Add/delete/change a column (field definition), add constraints, add/delete a key.
DROP XXX                   Delete a collection/schema, table, index, view, or package.

Table 13.4: Common DML Statements

Statement        Description
DECLARE CURSOR   Declare a cursor (embedded SQL only).
OPEN             Open a cursor (embedded SQL only).
FETCH            Read the next row of an opened cursor (embedded SQL only).
SELECT           Read one or more rows (CLI only).
INSERT           Insert one or more rows.
UPDATE           Update one or more rows.
DELETE           Delete one or more rows.
CLOSE            Close a cursor (embedded SQL only).
COMMIT           Commit previous transactions (e.g., INSERT, UPDATE, DELETE), with commitment control.
ROLLBACK         Undo previous transactions, with commitment control.

What you need for SQL on the AS/400

To develop SQL applications on the AS/400, including embedded SQL in high-level languages like RPG, you must have the IBM product "DB2 Query Manager and SQL Development Kit for OS/400," program number 5716-ST1. You do not need this product to run SQL applications, only to develop them. It includes a Query Manager, the SQL preprocessors, the interactive SQL utility, and the SQL statement processor. This product is not required to use the CLI APIs, either; they come free with the operating system. Nor do you need it for Java database access, although you might find the interactive SQL utility very handy for learning SQL.



Why the Introduction to SQL?

As you get into Java and its database support, you will be using all these concepts, so they are important to know. SQL has undeniably become the industry standard for database access and is supported by every major relational database vendor. If you are not using it already, you almost certainly will be.

When the Java engineers went searching for a standard, portable database access language, they didn't look very far! SQL was the answer, and as you will see, knowing SQL is essential to accessing relational data from Java. There is an alternative to SQL for accessing DB2/400 data from Java, however, which is covered in Appendix A. If you use it, though, you are tied completely to OS/400 and to DB2/400. If that is okay (we don't mind!), you might prefer to jump straight to Appendix A, versus reading further here.



Accessing Data with Java

Java's built-in support for relational database access is patterned after Microsoft's ubiquitous ODBC standard. ODBC is a C-language set of APIs that abstracts out database access. It allows you to write database-accessing code once and easily target multiple database vendors.

ODBC is a successful attempt by Microsoft to design a standard set of APIs to which programs can code, independent of the database vendor. It is based on the CLI API standard, not embedded SQL, so code written with it can connect to any vendor's database.

Java's database support is patterned after ODBC, but is written entirely in Java rather than C. Like ODBC, it has a framework for database access in which:

  • Developers write their database-vendor-neutral code using language-supplied syntax and support.
  • The database vendor supplies a piece to be "plugged in" to the infrastructure, allowing database-neutral code to access that particular vendor's product.

Java's framework is called JDBC. (It is actually a trademarked name, but it is often referred to as Java Database Connectivity). JDBC involves the following:

  • A Java package (java.sql) with classes and methods for database connectivity, manipulation, queries, and more.
  • A Java JDBC database driver manager, similar to ODBC's driver manager, that comes with the language.
  • Database-vendor-supplied JDBC database drivers, which are unique to each database vendor. These "snap in" to allow a single JDBC application to access multiple databases with minimal code changes. The Java-supplied driver manager essentially passes on all SQL statements to the snapped-in database driver supplied by the vendor. While JDBC is database-neutral, you can still exploit unique database vendor functions, if desired.
  • A Java-supplied JDBC/ODBC bridge to allow, in the short term, Java access to any database via its ODBC database driver. You will only need and want to use this, however, if your database vendor is one of the few left that does not yet supply a pure Java JDBC driver. This bridge will often be slow, and might only be as portable as the ODBC DLLs it requires.

The key is that you need to write your Java code to the java.sql package, not the database vendor APIs. The classes and interfaces in this package pass on the requests to the underlying database on your behalf, as shown in Figure 13.1. This way, you can easily swap in the particular database with minimal impact to your code.


Figure 13.1: The Java JDBC framework

Database vendors can supply any of four types of JDBC drivers:

  • Type 1: JDBC-ODBC bridge. This is actually supplied with the JDK, and can be used to connect to any existing ODBC driver.
  • Type 2: JDBC native driver. This is supplied by the database vendor, and under the covers, it uses Java Native Interface (JNI, which is the Java-supplied way of calling C functions from Java) to call existing C APIs for that particular database. It must be run on the same system as the database, but if the local database can access remote databases, the Java JDBC wrapper will inherit this capability.
  • Type 3: JDBC net drivers. A pure Java implementation of the JDBC interfaces, this is for use in remote clients. It requires a piece of code running on the server (also supplied by the database vendor) to "listen" for requests sent by the client driver.
  • Type 4: JDBC thin drivers. This is also a pure Java implementation of the JDBC interfaces, for use in remote clients. The difference from type 3 is that there is no listener code required on the server, as the client communicates directly with the database engine on the server.


JDBC Drivers for DB2/400

For DB2/400, two JDBC drivers are available. The first is a type-2 driver that comes with the OS/400 JDK. It is a Java JDBC wrapper to the DB2/400 CLI (Call Level Interface) APIs that are part of the operating system. Java code you write using this driver can only run on OS/400 itself, but it is very efficient. Even though it must run on OS/400, it can be used to access databases on other AS/400s. Because this driver comes with the JDK, there is nothing unique you have to do to install it, other than install the JDK (which is on your stack tape, as of V4R2 or higher). The JDK product number is 5769-JV1 for Version 4 releases.

The second driver is a type-4 driver that comes with the AS/400 Toolbox for Java, which is a collection of pure Java classes offering access to many OS/400 services, including data via this JDBC driver. All the Toolbox classes are designed to run either on OS/400 itself or on any client that has a JVM. No prerequisites are needed on any client (and a Web browser is a valid client), other than TCP/IP, which is built into most operating systems these days.

If you are writing client code to connect to OS/400 remotely, this is the driver for you. Because it is Java, it will also run fine on OS/400 itself, but because it uses TCP/IP to talk to the database engine, it is not as efficient as the type-2 driver that comes with the JDK.

To use this driver, you must ensure the Toolbox jt400.jar file is on your CLASSPATH statement. See the installation instructions for the AS/400 Toolbox for Java for details. You can get the Toolbox and documentation by going to www.ibm.com/iseries/java and following the appropriate link. It also comes with your system as of V4R2 or higher, but it must be installed. The Toolbox product number is 5769-JC1 for Version 4 releases, and 5722-JC1 for Version 5 releases.

The jt400.jar file is installed into your Integrated File System, but can then be copied to your workstation, placed on your classpath there, and used to remotely connect to DB2/400, either for production or for testing purposes. You will find it in the IFS directory, /QIBM/ProdData/HTTP/Public/jt400/lib. Of course, it can also be used for code running on the AS/400 itself. For this, set your classpath there to point to it. The details to do this are discussed in Appendix A. Briefly, you create a file named .profile ("dot profile") in your IFS home directory, and put the following statement in it:

export -s CLASSPATH=.:/QIBM/ProdData/HTTP/Public/jt400/lib/jt400.jar

The Toolbox comes with IBM's WebSphere Development Tools for the iSeries, and it is placed on your classpath automatically.

Appendix A also mentions an alternative direct-record-access option for database access that the Toolbox also supplies. Once the Toolbox has been installed, it is time to start writing your Java database code.



Java JDBC Programming

The java.sql package that comprises JDBC is Java's equivalent to CLI. Recall that CLI is a set of APIs for executing dynamic SQL statements; JDBC, correspondingly, is a set of classes for executing dynamic SQL statements.

The java.sql package that comes with Java consists of a number of public interfaces. (Recall that these are like abstract classes-only method signatures are supplied, not method implementations.) Your code, however, uses them as though they were fully implemented classes because each database-vendor-supplied JDBC driver simply implements each of these interfaces in its own classes. However, by using factory methods, your code only works with the interface names, not the vendor's class names. The main interfaces in the java.sql package are listed in Table 13.5.

Table 13.5: The Main Interfaces in JDBC's java.sql Package

The java.sql Interface

Description

Driver

The main interface, this supplies the connection method for accessing a database. It is the one interface your code never sees directly.

Connection

This represents a specific session with a specific database.

Statement

This is used to execute explicit SQL statements.

PreparedStatement

This is used to prepare repeatedly executed SQL statements and execute them. It extends Statement.

ResultSet

Returned from an SQL query or stored procedure call, this contains the list of returned rows (records) and methods for traversing them, and extracting columns (fields).

CallableStatement

This is used to call stored procedures. It extends PreparedStatement.

ResultSetMetaData

This is used to dynamically determine the column (field) definitions for a ResultSet. It saves hard-coding field attributes.

DatabaseMetaData

This is used to dynamically determine information about the database itself. That is, it finds information about conventions and limits of the database (DB2/400, in your case), as well as information stored in the database's catalog, such as a list of "registered" stored procedures.

In addition to these interfaces that the JDBC driver-provider writes classes to implement, a DriverManager class supplied by Java starts the whole process. It contains static methods for registering a particular driver and connecting to that driver's database. The latter method is defined to return a Connection interface object, and in reality returns an instance of the JDBC provider's class that implements that interface. Indeed, it is by calling the methods that are defined to return interfaces that you get objects for all the JDBC provider's classes. This is how your code is shielded from explicitly coding the names of the JDBC provider's classes. This way, switching from JDBC driver to JDBC driver is easy.

To write JDBC code, you of course have to import the java.sql package, which contains the interfaces and the DriverManager class. You then follow the steps in Table 13.6 to work with the database.

Table 13.6: Steps to Writing JDBC Code

Step

Description

Example

1. Register the driver.

Tell Java about your JDBC driver.

DriverManager.registerDriver(new com.ibm.as400.access.AS400JDBCDriver());

2. Connect to the database.

Open a connection. You are allowed more than one.

Connection conn = DriverManager.getConnection( "jdbc:as400://MYSYSTEM");

3. Create statement objects.

Use factory methods to return the appropriate statement object.

Statement stmt = conn.createStatement(); // or PreparedStatement pstmt = conn.prepareStatement("...");

4. Run SQL statements.

Use methods in the statement object to run the SQL statement.

ResultSet rs = stmt.executeQuery("SELECT ..."); int res = stmt.executeUpdate("INSERT ..."); // or pstmt.executeQuery()/executeUpdate();

5. Process the results.

For queries, walk the returned result set.

while (rs.next()) { String column1 = rs.getString(1); }

6. Close the statements and the connection.

Use the close method to free up resources as soon as you are done with them.

rs.close(); stmt.close(); conn.close();

For every JDBC statement you execute, you have to monitor for SQLException, which every method call can throw. The following sections examine the steps in Table 13.6 in more detail.
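Putting the six steps together, here is a minimal sketch. The system name MYSYSTEM and the CUSTOMER file are placeholders, and the driver class is loaded by name with Class.forName so the sketch compiles without jt400.jar on the CLASSPATH (a technique discussed later in this chapter):

```java
import java.sql.*;

// A minimal end-to-end sketch of the six JDBC steps from Table 13.6.
// MYSYSTEM and the CUSTOMER file are placeholder names.
public class JdbcSteps
{
 // Helper to build the Toolbox connection URL for a given system.
 public static String buildUrl(String systemName)
 {
  return "jdbc:as400://" + systemName;
 }

 public static void main(String args[])
 {
  try
  {
   // Step 1: register the driver (loaded dynamically by name)
   Class.forName("com.ibm.as400.access.AS400JDBCDriver");
   // Step 2: connect to the database
   Connection conn = DriverManager.getConnection(buildUrl("MYSYSTEM"));
   // Step 3: create a statement object
   Statement stmt = conn.createStatement();
   // Step 4: run the SQL statement
   ResultSet rs = stmt.executeQuery("SELECT * FROM CUSTOMER");
   // Step 5: process the results
   while (rs.next())
    System.out.println(rs.getString(1));
   // Step 6: close everything, result set first
   rs.close();
   stmt.close();
   conn.close();
  }
  catch (ClassNotFoundException exc)
  {
   System.out.println("Driver not on classpath: " + exc.getMessage());
  }
  catch (SQLException exc)
  {
   System.out.println("SQL error: " + exc.getMessage());
  }
 }
} // end class JdbcSteps
```

Note that Class.forName alone is enough to register the driver, since JDBC driver classes register themselves in a static initializer.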

Steps 1 and 2 Registering and connecting

The only difference between the Toolbox JDBC driver and the AS/400 Java JDBC driver, from a programming point of view, is the parameters passed to register the JDBC driver and to subsequently connect to the database. Here is how to register each driver:

DriverManager.registerDriver(new 
 com.ibm.as400.access.AS400JDBCDriver()); // Toolbox driver 
DriverManager.registerDriver(new 
 com.ibm.db2.jdbc.app.DB2Driver()); // Native AS/400 Java driver

Here is how to connect using each driver:

Connection conn =
 DriverManager.getConnection("jdbc:as400://system"); // Toolbox 
Connection conn =
 DriverManager.getConnection("jdbc:db2"); // Native

You must tell Java the database driver you wish to work with. This is the role of the registerDriver method of DriverManager. It takes an instance of a JDBC Driver class as input, and records its address. You need only do this once-when your Java program first starts up, say. You can register more than one JDBC driver. When you subsequently use the getConnection method of DriverManager to start a connection, the DriverManager class passes the URL (Uniform Resource Locator) given as a parameter to each registered driver in turn, and the driver that recognizes that URL is asked to perform the connection. Thus, each JDBC driver is designed to recognize its own unique URL format, since URLs provide a unique-to-the-world way of identifying something.

So, you register a driver once, but can connect to it more than once. Why connect more than once to the same driver? You might consider this if your code is running on the AS/400 itself, and you want to have one connection per user for security and scalability reasons.

Table 13.6 shows that after registering your driver, you connect to the database using the getConnection method, passing a URL that identifies the driver to connect to, as well as the AS/400 system to connect to. For the Toolbox driver, this is the TCP/IP host name of the system, while for the AS/400 Java JDBC driver, this is the *LOCAL entry from the WRKRDBDIRE table. For either driver, with code running on the AS/400, you can just supply the special name localhost to represent the current system you are running on (or let it default to this for the native driver). In addition to this URL, the getConnection method optionally takes two more parameters for the user ID and password. If you specify these, your connection will run under this profile. If you do not specify them, what happens depends on whether your code is running on the AS/400 itself or remotely on a client. On the AS/400, it simply uses the user ID and password of the current job; in other words, it is equivalent to specifying the special value *current for both. For the client, it prompts the user for a user ID and password.

Let's look at a class that does the connection and registration for you, shown in Listing 13.1. It uses the Toolbox driver, but also shows in the comments how to use the native driver instead. It supports three constructors: one takes a system name, user ID, and password; one takes only a system name and defaults or prompts for the user ID and password; and one takes nothing and defaults or prompts for the system name, user ID, and password. Whether the code defaults to the current job or prompts for the missing information depends on whether it is running on the AS/400 itself or on a client machine.

Listing 13.1: A Class for Registering a JDBC Driver and Connecting to the AS/400

import java.sql.*; 
public class DB2400 
{ 
 private static boolean registered = false; 
 private Connection conn = null;
 
 public DB2400(String systemName, String userId, String password) 
 throws SQLException
 {
 if (!registered) // register driver if not already
 {
 DriverManager.registerDriver( 
 new com.ibm.as400.access.AS400JDBCDriver()); 
 //new com.ibm.db2.jdbc.app.DB2Driver()); 
 registered = true;
 }
 String connectURL = "jdbc:as400";
 // String connectURL = "jdbc:db2"; 
 if (systemName != null) 
 connectURL += "://" + systemName; 
 if (userId != null)
 conn = DriverManager.getConnection(connectURL,userId,password); 
 else 
 conn = DriverManager.getConnection(connectURL);
 } // end constructor 
 public DB2400(String systemName) throws SQLException 
 {
 this(systemName, null, null); 
 } 
 public DB2400() throws SQLException 
 {
 this(null, null, null); 
 } 
 public void disconnect() 
 { 
 try 
 { 
 conn.close(); 
 } catch (SQLException exc) {} 
 }
 public Connection getConnection() 
 {
 return conn;
 }
} // end class DB2400

You see that the registration need only be done once per JVM session, while you may have multiple connections per session. Thus, the class uses a static variable to tell if you have registered yet, ensuring it is only ever done once.

There is a problem if you want to use the native JDBC driver: you won't be able to compile your code on Windows, if that is where you are doing your development. Why? Because the native driver class will not be found on your CLASSPATH. To get around this, you can use a mapped drive to the AS/400 and add a CLASSPATH entry for the .jar file containing the AS/400 system's Java classes, or use another technique. One alternative is to register the driver by specifying a parameter on the java command, like this:

java "-Djdbc.drivers=com.ibm.db2.jdbc.app.DB2Driver" MyApp

This is an easy way to do the registration for the native driver, and is also valid for the Toolbox driver. You could supply a CL program for users to run, and specify this value on the JAVA CL command. You can also preset this parameter inside VisualAge for Java on the properties for your main class.

A second way to do this is to dynamically load the appropriate driver after determining if you are running on an AS/400 or not. This is the best solution for two reasons. First, it allows you to run the same code on either the AS/400 or a client, and picks the best driver for each situation on the fly. Second, by dynamically loading the driver class versus explicitly instantiating it, you avoid the compiler verifying the class exists, so you can compile on Windows, yet deploy and run on the AS/400 (or Windows).

How can you determine if you are running on an AS/400? By using the getProperty static method of the System class and passing it the string os.name. This property is hard-coded by each JVM. For AS/400 Java, it is set to OS/400, so you can simply test for this. How do you dynamically load a class in Java, versus using new to instantiate it? By using the forName static method of the Class class, which loads the class into memory at runtime, and then using the newInstance method in it to instantiate an object from the class.

You do not need to understand it fully, but Listing 13.2 gives a revised version of the DB2400 class, which registers and subsequently loads the appropriate JDBC driver depending on which operating system you are running on. Only the first constructor is shown, as the rest of the class is unchanged from Listing 13.1. The changes are shown in bold. Notice how all JDBC driver classes implement the generic Driver interface, and so an object is cast to this. Also notice how the system name is set to localhost if none is given and you are running on the AS/400, as is required there. This is a good class to tuck away and keep!

Listing 13.2: A Better Class for Registering and Connecting to the Appropriate JDBC Driver

import java.sql.*; 
public class DB2400 
{ 
 private static boolean registered = false; 
 private static boolean nativeDriver = false; 
 private Connection conn = null;
 
 public DB2400(String systemName, String userId, String password) 
 throws SQLException
 {
 String connectURL = null; 
 if (!registered) // register driver if not already 
 {
 String osName = System.getProperty("os.name"); 
 String driverName = null; 
 if (osName.equals("OS/400"))
 {
 driverName = "com.ibm.db2.jdbc.app.DB2Driver"; 
 connectURL = "jdbc:db2"; 
 nativeDriver = true;
 }
 else 
 {
 driverName = "com.ibm.as400.access.AS400JDBCDriver"; 
 connectURL = "jdbc:as400";
 } 
 try {
 Driver driver = (Driver)
 (Class.forName(driverName).newInstance()); 
 DriverManager.registerDriver(driver); 
 } catch (Exception exc) { 
 throw new SQLException(exc.getMessage()); } 
 registered = true;
 } 
 // connect to the database 
 if (systemName != null) 
 connectURL += "://" + systemName;
 else if (nativeDriver) 
 connectURL += "://localhost"; 
 if (userId != null) 
 conn = DriverManager.getConnection( 
 connectURL,userId,password); 
 else 
 conn = DriverManager.getConnection(connectURL); 
 } // end constructor
} // end class DB2400

The registration and connection are done in the constructor, and the caller can subsequently call getConnection to get the resulting Connection object. Note that both the registration and the connection can throw an SQLException if anything goes wrong (for example, the driver is not found, the system is not found, or the password is not valid). The constructor simply percolates these to the caller, so that anyone instantiating it will have to monitor for SQLException. Here is an example of this, from the main method of the DB2400 class, which is for testing purposes:

public static void main(String args[]) 
{
 System.out.println("Testing DB2/400 connection..."); 
 try {
 DB2400 me = new DB2400(); 
 System.out.println("Connected ok!"); 
 } catch (SQLException exc)
 { 
 System.out.println("Error: " + exc.getMessage());
 }
 System.exit(0); 
}

You have to use System.exit(0) because the AS/400 Toolbox for Java spawns non-daemon threads. When running on the AS/400, this code will connect to the current system with the current job's user ID and password. When running on the client, this code will pop up a dialog box, as shown in Figure 13.2.


Figure 13.2: A prompt dialog for the AS/400 Toolbox for Java connection

The Toolbox will pop up other dialogs if the user types in invalid information or if the user's password has expired. Note that the Toolbox uses QUSER under the covers, so you have to ensure this user ID is enabled and its password has not expired. Also make sure you have run STRHOSTSRV *ALL on your system to start the host servers, and that you have applied the latest cumulative PTFs, especially for earlier releases of OS/400.

The actual syntax for the JDBC connection to DB2/400 is as follows:

jdbc:as400://systemName/defaultSchema;properties 
jdbc:db2://systemName/defaultSchema;properties

The system name identifies which AS/400 you are going to connect to. The optional defaultSchema is the default AS/400 library to access in subsequent operations that do not specify a qualified file name. If no default is specified here, the default library list of your user ID is used to find the file. (Yes, "schema" equals "library" in the JDBC terminology for the AS/400.) The optional list of properties is a semicolon-separated list of property=value specifications.

A long list of properties is supported by the DB2/400 JDBC drivers, summarized in Table 13.7. These properties can be specified as part of the getConnection URL, or they can be specified using a Properties object (from java.util) that is passed as the second parameter on the getConnection method call. The Properties class extends Hashtable, and its put and get methods take String keys and values.
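Here is a sketch of the Properties approach. The property names come from Table 13.7; the system name, user ID, and password are placeholders, and actually connecting requires the Toolbox driver to be registered as shown earlier:

```java
import java.sql.*;
import java.util.Properties;

public class PropsConnect
{
 // Gather connection properties into a java.util.Properties object.
 public static Properties connectionProps(String user, String password)
 {
  Properties props = new Properties();
  props.put("user", user);
  props.put("password", password);
  props.put("naming", "system");   // use library/file qualification
  props.put("errors", "full");     // detailed server error messages
  return props;
 }

 public static void main(String args[]) throws SQLException
 {
  Properties props = connectionProps("FRED", "secret");
  // Same effect as coding the properties into the URL itself:
  Connection conn = DriverManager.getConnection(
   "jdbc:as400://MYSYSTEM", props);
 }
} // end class PropsConnect
```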

Table 13.7: JDBC Driver Connection Properties

Property

Values

AS/400

Toolbox

user

User name to use in the sign-on

yes

yes

password

Password to use in the sign-on

yes

yes

naming

sql or system: lib.file or library/file
Default: sql

yes

yes

access

all (all SQL statements allowed)
read call (SELECT and CALL allowed)
read only (only SELECT allowed).
Default: all

yes

yes

errors

basic (normal AS/400 server error messages)
full (detailed AS/400 server error messages)
Default: basic

no

yes

blocking

true or false
For result set retrieval
Default: true

yes

no

block criteria

0 (no blocking)
1 (if FOR FETCH ONLY)
2 (unless FOR UPDATE)
Default: 2

no

yes

block size

0/8/16/32/64/128/256/512
Rows per block
Default: 32

yes

yes

data compression

true or false
Whether to compress the result set data
Default: false

no

yes

trace

true or false
For simple problem determination
Default: false

yes

yes

transaction isolation

none
read committed
read uncommitted
repeatable read
serializable
For commitment control
Default: none

yes

yes

escape processing

true or false
Whether to recognize SQL escape sequences;
can be faster if you turn it off
Default: true

yes

no

libraries

Library used as default for unqualified statements
A single name for the native driver, a comma-separated list for the Toolbox
Default: *LIBL

yes

yes

translate binary

true or false
Treats BINARY and VARBINARY data values as
CHAR and VARCHAR
Default: false

yes

yes

prompt

true or false
Whether a GUI prompt should be shown if no user
ID and password given
Default: true

no

yes

date format

mdy
dmy
ymd
usa
iso
eur
jis
Default: job's date format value

v4r5

yes

date separator

/
-
.
,
" " (space)
Default: job's date separator value.

v4r5

yes

decimal separator

.
,
Default: job's decimal separator

v4r5

yes

time format

hms
usa
iso
eur
jis
Default: job's value

v4r5

yes

time separator

:
.
,
" " (space)
Default: job's time separator value

v4r5

yes

prefetch

true or false
Prefetches rows for SELECT statements
Default: true

no

yes

extended dynamic

true or false
Caches SQL statements in package on server
Default: false

no

yes

package

For extended dynamic, name of package

no

yes

package library, package cache, package clear, package add, package error

For extended dynamic: the library of the package, whether to cache the package in memory, whether to clear the package when full, whether to add to existing packages, and how to handle package-related errors (except, warning, or none)
Default (package error): warning

no

yes

remarks

sql or system
Where to get metadata remarks
Default: system

no

yes

sort

hex
job
language
table
To sort result sets
Default: job

no

yes

sort language, sort weight

For sort=language: the three-character ID of the language to use (sort language), and how to handle case, shared or unique (sort weight)
Default (sort weight): shared

no

yes

sort table

For sort=table, the qualified name of a sort sequence table

no

yes

data truncation

true or false
Throw exceptions if data has to be truncated when writing to the database
Default: false

no

yes

proxy

server hostname:port
Middle tier where proxy server is running.

no

yes

secondary URL

JDBC URL for middle-tier redirection.
Allows access to another JDBC driver, hence another database.

no

yes

secure

true or false
Whether to use SSL for connection
Default: false

no

v4r4

lob threshold

Maximum kilobytes that can be retrieved for a large object

no

v4r4

As you see, the user ID and password for the AS/400 connection can be specified in one of three ways:

  • Specify it in the connection URL as ";user=uuuuuuuu;password=pppppppp".
  • Specify it in the getConnection method call as the second and third parameters.
  • Do not specify it at all, in which case the user will automatically be prompted for it when the Toolbox driver is used on the client, or the job's values will be used when using either driver on the AS/400 itself.
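For example, the first option embeds the values directly in the URL string. The urlWithAuth helper and all three values below are our own illustrative placeholders:

```java
public class AuthUrl
{
 // Build a Toolbox URL with the user ID and password embedded.
 public static String urlWithAuth(String system, String user, String password)
 {
  return "jdbc:as400://" + system
   + ";user=" + user + ";password=" + password;
 }

 public static void main(String args[])
 {
  System.out.println(urlWithAuth("MYSYSTEM", "FRED", "secret"));
  // prints jdbc:as400://MYSYSTEM;user=FRED;password=secret
 }
} // end class AuthUrl
```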

Another important property is naming. The default naming convention is naming=sql, but you can also specify naming=system. Using the latter to, for example, qualify a table reference on an SQL SELECT statement, you would specify COLLECTION/TABLE, as in MYLIB/MYFILE. Using naming=system also means that an unqualified table name reference, as in TABLE, is equivalent to *LIBL/TABLE.

As you know already, your library list is used to find the table (that is, the default library list of the user ID that is specified for the connection). If you are not concerned with portability, this is fine. However, if you are thinking about portability to other databases, you might want to leave the default of naming=sql. This makes your qualified name syntax consistent with all other databases. This convention is COLLECTION.TABLE, as in MYLIB.MYFILE.

It sounds like a painless enough change-just use a dot instead of a slash. Be careful, however! This also changes where the system looks for unqualified table names. Now, for TABLE, the system will not look in the library list for file TABLE. Instead, it will only look in the library with the same name as the user ID specified in the connection. Therefore, using naming=sql implies that you will fully qualify all your table references, or specify a default schema.
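The difference between the two conventions amounts to the separator and the lookup rule. A small helper (our own, purely illustrative; MYLIB and MYFILE are placeholders) shows the two qualification syntaxes side by side:

```java
public class Qualify
{
 // Build a qualified table name under either naming convention.
 public static String qualify(String library, String table,
                              boolean systemNaming)
 {
  // naming=system uses a slash; naming=sql uses a dot
  return library + (systemNaming ? "/" : ".") + table;
 }

 public static void main(String args[])
 {
  System.out.println(qualify("MYLIB", "MYFILE", true));  // MYLIB/MYFILE
  System.out.println(qualify("MYLIB", "MYFILE", false)); // MYLIB.MYFILE
 }
} // end class Qualify
```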

Some of the properties can also be specified, or at least queried, by calling methods on the resulting Connection object. Specifically, there are methods for setting and querying commitment-control-related properties, and there are setReadOnly and isReadOnly methods. You might find it valuable to use setReadOnly instead of access=read only, since it is more portable. In either case, making an entire connection read-only is a good idea to ensure no attempt is made to update the database, and possibly to help the database engine with performance tuning. The alternative is to specify the FOR READ ONLY clause on your SELECT statements, but that only prevents positioned updates and deletes, not explicit UPDATE and DELETE statements.

Step 3 Creating SQL statement objects

To use JDBC, you must use SQL statements. The java.sql package includes two classes for creating instances to represent SQL statements. When you execute an SQL statement, the database performs two steps:

  1. Prepare the statement by effectively compiling it.
  2. Run the prepared statement.

The first step can be explicitly done in advance for multiple-use statements, which saves time in the second and subsequent execution of the statement. Thus, the two flavors of SQL statement classes in JDBC are:

  • Statement. Use this class to execute statements on the fly. If you have an SQL statement that is to be run only once, this is the class to use. Its executeXXX methods take any string object assumed to contain a valid dynamic SQL statement.
  • PreparedStatement. Use this class to prepare a statement that will be executed multiple times. Note that it allows substitution variables, or markers, in the form of question marks (?), so that the same statement can be reused multiple times with different values. See the SQL documentation for where you are allowed to use these markers, but essentially they can be used in SELECT statements for the value of a column name (as in WHERE STATE=?), in the SET clause of an UPDATE statement, and in the VALUES clause of an INSERT statement. They cannot be used to substitute in the table or column names themselves, however. That would change the access path of the statement, so it cannot be turned into a variable. You must use separate statements for this. If you cannot hard-code these values, you will have to use a Statement object built up dynamically just prior to execution, instead of a PreparedStatement object.

To instantiate instances of these classes, you do not use the new operator. Instead, you use the createStatement or prepareStatement methods of your previously allocated Connection object to create these objects for you, for example:

Statement stmt = conn.createStatement(); 
PreparedStatement pstmt = 
 conn.prepareStatement("SELECT * FROM CUSTOMER WHERE CUSTNO=?");

Notice how Statement objects do not specify the statement to execute at allocation time, whereas prepared statements do. This is an indication of the longevity of prepared statements. At the time prepared statements are created, they are verified and "compiled" at the DB2/400 server. You must couch your prepareStatement method call in a try/catch block for SQLException, and pay attention to the exception-it usually implies that you have specified an improper SQL statement, or you have not yet connected to the database. It might also be that you are preparing an update statement, and you do not have update authority to the database. Generally, the exception getMessage text is enough to figure it out. However, SQLException also has getErrorCode and getSQLState methods to help programmatically determine the cause of the error.
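As a sketch of that monitoring, the describe helper below (our own, not part of JDBC) pulls all three pieces of diagnostic information out of an SQLException:

```java
import java.sql.*;

public class SqlErrors
{
 // Format the three pieces of diagnostic information in an SQLException.
 public static String describe(SQLException exc)
 {
  return exc.getMessage()
   + " [SQLSTATE=" + exc.getSQLState()
   + ", code=" + exc.getErrorCode() + "]";
 }

 // Typical use around prepareStatement; conn is an open Connection.
 public static PreparedStatement prepare(Connection conn, String sql)
 {
  try
  {
   return conn.prepareStatement(sql);
  }
  catch (SQLException exc)
  {
   System.out.println("Prepare failed: " + describe(exc));
   return null;
  }
 }
} // end class SqlErrors
```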

Having created an instance of Statement or PreparedStatement, you can subsequently execute that statement. Steps 4 and 5, executing an SQL statement and processing the results, are the interesting part, so we'll defer an example until we cover these.

Step 4 Running the SQL statements

Prior to running your prepared statements (PreparedStatement class objects), you have to specify the values for all the "markers" (question marks) in the statement. To do this, use the overloaded setXXX(int, value) method in the PreparedStatement class. The exact method to use depends on the type of the value you want to substitute, which, in turn, depends on the type of the database field or column whose value you are supplying. For example, if you are specifying a value for a decimal field, you would use setBigDecimal. For character database field values, you would use setString.

All of the setXXX methods take two parameters: the relative one-based position of the marker to substitute, and the value to be substituted. These methods are the functional opposite of the getXXX methods you will see in the next section. For example, to set the substitution value for the SQL statement "SELECT * FROM CUSTOMER WHERE CUSTNAME=?" you might specify this:

pstmt.setString(1,"George Farr");

The number 1 for the first parameter indicates you are substituting for the first question mark. You must substitute for all the question marks before executing the prepared statement. These substitutions stay in effect either for the life of the PreparedStatement object, or until reset, or until cleared via clearParameters. The setXXX methods include those listed in Table 13.8. The "SQL Types" column in that table refers to constants defined in the java.sql.Types class.

Table 13.8: PreparedStatement set Methods for Marker Substitution

Method

SQL Type

DDS Type

setBigDecimal

NUMERIC

Decimal

setBoolean

SMALLINT

binary (4,0)

setByte

SMALLINT

binary (4,0)

setBytes

VARBINARY

character, CCSID(65535)

setDate

DATE

Date

setDouble

DOUBLE

float, FLTPCN(*DOUBLE)

setFloat

FLOAT

float, precision depending on value

setInt

INTEGER

binary (9,0)

setLong

INTEGER

binary (9,0)

setNull

NULL

for null-capable fields (ALWNULL)

setShort

SMALLINT

binary (4,0)

setString

CHAR, VARCHAR

character, VARLEN character

setTime

TIME

Time

setTimestamp

TIMESTAMP

Timestamp

setObject

Any

Any type; specified by one of the java.sql.Types constants
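The following sketch shows a prepared statement being reused with different marker values. The second customer name is a placeholder, and markerCount is a helper of our own, not part of JDBC (for simplicity, it would also count a question mark inside a quoted literal):

```java
import java.sql.*;

public class Markers
{
 // Count the parameter markers in an SQL statement. (Simplified: a
 // question mark inside a quoted literal would also be counted.)
 public static int markerCount(String sql)
 {
  int count = 0;
  for (int idx = 0; idx < sql.length(); idx++)
   if (sql.charAt(idx) == '?')
    count++;
  return count;
 }

 // Reusing one prepared statement with different marker values;
 // conn is an open Connection, CUSTOMER a placeholder file.
 public static void lookupTwice(Connection conn) throws SQLException
 {
  PreparedStatement pstmt = conn.prepareStatement(
   "SELECT * FROM CUSTOMER WHERE CUSTNAME=?");
  pstmt.setString(1, "George Farr");    // first execution
  ResultSet rs1 = pstmt.executeQuery();
  pstmt.clearParameters();
  pstmt.setString(1, "Second Name");    // second execution
  ResultSet rs2 = pstmt.executeQuery();
 }
} // end class Markers
```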

To actually execute an SQL statement, you call one of two methods on the object:

  • The executeQuery method if you are running an SQL SELECT statement:

    ResultSet rs = stmt.executeQuery(String sqlStmt); // Statement 
    ResultSet rs = pstmt.executeQuery(); // PreparedStatement
    
    This method call sends the statement to the database and returns a ResultSet object. This object has methods for traversing it, which are covered in the next section.
  • The executeUpdate method if you are running an SQL INSERT, UPDATE, DELETE statement, or SQL DDL (Data Definition Language) statements like CREATE TABLE:

    int res = stmt.executeUpdate(String sqlStmt); // Statement 
    int res = pstmt.executeUpdate(); // PreparedStatement
    
    This method sends the statement to the database, where it is run. It then returns a count of the affected rows, or zero for DDL statements.

There is also a generic execute method that can return multiple-result sets. This is exclusive to stored procedure calls, however, and is covered later in this chapter.

You use the SELECT statement with the executeQuery method whether you are retrieving one row or multiple rows. The former is just a special case of the latter. For example, the following retrieves all the rows (and all the columns because of the * character) from the MYFILE file:

SELECT * FROM MYFILE

The following, on the other hand, retrieves a single row from MYFILE-one where the unique key field has a value of A:

SELECT * FROM MYFILE WHERE KEYFIELD = 'A'

This is equivalent to RPG's CHAIN op-code.

What SQL statements can you execute in your execute and executeUpdate methods? Any dynamic one. However, you will primarily use a SELECT statement via executeQuery to read data, and INSERT, UPDATE, and DELETE to manipulate data.
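Here is a sketch of an INSERT through a prepared statement. The CUSTOMER file and its two-column layout are assumptions for illustration, and insertSql is a helper of our own:

```java
import java.sql.*;

public class InsertRow
{
 // Build an INSERT statement with one marker per column.
 public static String insertSql(String table, int columns)
 {
  StringBuffer sql = new StringBuffer("INSERT INTO " + table + " VALUES (");
  for (int idx = 0; idx < columns; idx++)
   sql.append(idx == 0 ? "?" : ", ?");
  return sql.append(")").toString();
 }

 // Insert one row; conn is an open Connection, CUSTOMER a placeholder.
 public static int addCustomer(Connection conn, int custno, String name)
  throws SQLException
 {
  PreparedStatement pstmt =
   conn.prepareStatement(insertSql("CUSTOMER", 2));
  pstmt.setInt(1, custno);
  pstmt.setString(2, name);
  int rows = pstmt.executeUpdate(); // count of affected rows
  pstmt.close();
  return rows;
 }
} // end class InsertRow
```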

Step 5 Retrieving the information in result sets

After executing an SQL SELECT statement by using the executeQuery method, you have a populated ResultSet object. It contains all the rows that met the criteria specified in the SELECT statement. You iterate sequentially through the result set using the next method, which bumps the implicit cursor ahead by one row.

It is important to note that initially, the cursor is positioned before the first row; you must do one next call to read the first row. The next call will return true as long as there is another row to advance to. When it returns false, you have reached the end of the rows (that is, the end of the file). For example, to iterate through a multiple-row SELECT statement's result set, you code the following:

ResultSet rs = stmt.executeQuery("SELECT * FROM MYFILE"); 
while ( rs.next() ) // while more
 // do processing

For single-row SELECT statements, you are expecting exactly one returned row, so your logic might look like this:

ResultSet rs = stmt.executeQuery(
 "SELECT * FROM MYFILE WHERE KEYFIELD='A'"); 
if ( !rs.next() ) // no rows?
 // issue error message 
else
 // do processing

How do you extract the values for each column of the current row? The trick is to know the following:

  • The relative position of the column (the first column is position 1) or the explicit name of the column (the field's DDS name).
  • The data type of that column, and its equivalent Java data type.

With this information, you can declare a variable of the appropriate Java data type and assign it by using the appropriate getXXXX method of the ResultSet class. There is one get method for each data type, and it returns the specified column's value of the current row as that data type. All methods support two versions: one in which the first parameter is an integer representing the column position to retrieve, and the other in which the first parameter is a string representing the name of the column to retrieve. These are the functional opposite of the PreparedStatement class's setXXX methods discussed previously, and are listed in Table 13.9.

Table 13.9: The get Methods for Retrieving Column Data from a Result Set

getXXX Method

SQL Types

Description

getBigDecimal (int/String,int)

DECIMAL, NUMERIC[1]

The second parameter is scale (decimal digits); returns a BigDecimal object

getBoolean

NUMERIC[1]

Returns a boolean value of true for nonzero; otherwise false

getByte

SMALLINT[1]

Returns a byte value (one byte, signed)

getBytes

BINARY, VARBINARY

Returns a byte array

getBinaryStream

BINARY, VARBINARY

Returns a java.io.InputStream object

getAsciiStream

CHAR, VARCHAR, BINARY, VARBINARY

Returns a java.io.InputStream object

getUnicodeStream

CHAR, VARCHAR, BINARY, VARBINARY

Returns a java.io.InputStream object

getDate

CHAR, VARCHAR, DATE, TIMESTAMP

Returns a java.sql.Date object

getDouble

FLOAT, DOUBLE[1]

Returns a double value

getFloat

REAL[1]

Returns a float value

getLong

INTEGER[1]

Returns a long value

getShort

SMALLINT[1]

Returns a short value

getString

any

Returns a String object

getTime

CHAR, VARCHAR, TIME, TIMESTAMP

Returns a java.sql.Time object

getTimestamp

CHAR, VARCHAR, DATE, TIMESTAMP

Returns a java.sql.Timestamp object

getObject

any

Java decides what java.sql.Types type the column is and creates the appropriate default object.

[1] The recommended type(s) are shown, but this method can actually be used for SQL types SMALLINT, INTEGER, REAL, FLOAT, DOUBLE, DECIMAL, NUMERIC, CHAR, and VARCHAR (and TINYINT, BIGINT, BIT, and LONGVARCHAR, although DB2/400 does not support these SQL types). For more information on data types, see the section "More on data types" later in this chapter.

The Date, Time, and Timestamp classes in java.sql all extend the java.util.Date class discussed in Chapter 8. The Date and Time classes only write the date or time portion in JDBC, versus both values, which their parent java.util.Date holds. Also, Time and Timestamp add nanoseconds, which the SQL standard requires.

As previously mentioned, you can either specify the name or the one-based position of the column. It is recommended that, if your SELECT statement uniquely identifies the columns (as in SELECT COL1, COL2 FROM TABLE), you should use the name again on the getXXX method call. If you specify that you want all columns (as in SELECT * FROM TABLE), use the column numbers because the names are probably not known. If you know the column name but want to specify the column number, you can use the ResultSet method findColumn(String). It is always more efficient to use this once outside the result-set processing loop, and then use the column number versions of the get methods inside the loop. The numeric version is faster than the name version of these methods.

Notice the getBinaryStream, getAsciiStream, and getUnicodeStream methods. They are intended for very large fields, such as bitmap images. (Actually, getUnicodeStream has been deprecated in favor of the newer getCharacterStream method discussed later in this chapter.) These methods let you retrieve such values in smaller, fixed-size blocks, rather than as a single block with the getBytes or getString methods. To use them, you must read the returned java.io.InputStream object immediately; otherwise, you risk losing it on the next rs.getXXX or rs.next call. The following example is adapted from the JDBC Guide: Getting Started document included in the JDK documentation:

java.sql.Statement stmt = con.createStatement(); 
ResultSet r = stmt.executeQuery("SELECT x FROM Table2"); 
// Now retrieve the column 1 results in 4K chunks 
// (IOException and SQLException handling omitted for brevity): 
byte[] buff = new byte[4096];
while (r.next())
{
 java.io.InputStream fin = r.getAsciiStream(1); 
 for (;;)
 {
 int size = fin.read(buff); 
 if (size == -1) // at end of stream 
 break;
 // Send the newly filled buffer to some ASCII output stream: 
 output.write(buff, 0, size);
 }
}

If your database uses null values for unset fields (DDS keyword ALWNULL), then the code that processes the result set might deem it important to know if the value for a retrieved column is null or not. To test this, use the ResultSet method wasNull. It returns true if the last column read via a getXXX method call is null.

Recall that when using SQL in RPG programs, your processing for SELECT involves using DECLARE CURSOR and specifying the SELECT statement as a clause of this declaration (using the FOR clause). Thus, in RPG, you name your cursor, whereas in JDBC it is implicit. The ResultSet class and its next and getXXX methods are equivalent to RPG SQL's FETCH NEXT and FETCH NEXT INTO statements. Using JDBC, there might be times when you want to process an UPDATE or DELETE on the current row, instead of specifying row-selection criteria. These are called positioned updates and deletes. To do this, you need some way in your subsequent UPDATE or DELETE statement to specify the result set and the current row-position within it.

Keep in mind that the result sets are actually implemented and stored in the database. Your Java program only requires memory for one row (or one block of rows if blocking is specified and applicable). Because the result sets are maintained in the database, they are given names by the database, even when you do not name them in JDBC. To retrieve that name so you can specify it on your UPDATE or DELETE statement, use the ResultSet method getCursorName. Here is an example:

while (rs.next()) 
{
 ... 
 stmt.executeUpdate("DELETE FROM MYFILE WHERE CURRENT OF " + 
 rs.getCursorName());
 ... 
}

There is also a setCursorName method, if you choose to explicitly name the result set. This method is part of the Statement class, however, not the ResultSet class. When you do a positioned update or delete as shown here, you must use a different Statement object than the one used to produce the result set.

A simple query example: Query PDM options

Listing 13.3 is the simplest of examples. It is raw code in a main method that simply connects to the AS/400, prepares a "query all" statement for the PDM user-defined options file QAUOOPT in library QGPL, and then executes that statement and displays the two columns of each retrieved row in the result set.

Listing 13.3: A Class for Querying and Displaying the PDM User-Defined Options File

import java.sql.*; 
public class QueryPDMOptions 
{
 public static void main(String args[]) 
 { 
 System.out.println("Welcome to QueryPDMOptions"); 
 try
 {
 System.out.println("Connecting to AS/400..."); 
 DB2400 db2400 = new DB2400(); //prompt for sys,Id,pwd 
 Connection conn = db2400.getConnection(); 
 System.out.println("Preparing statements..."); 
 PreparedStatement queryAll = 
 conn.prepareStatement(
 "SELECT OPTION, COMMAND FROM QGPL.QAUOOPT");
 System.out.println("Doing query..."); 
 System.out.println(); 
 ResultSet rs = queryAll.executeQuery(); 
 while (rs.next())
 System.out.println(rs.getString(1) + " " + rs.getString(2)); 
 System.out.println(); System.out.println("Closing everything..."); 
 rs.close();
 queryAll.close();
 conn.close(); 
 System.out.println("Done."); 
 } catch (SQLException exc)
 {
 System.out.println("SQL Error......: " +
 exc.getMessage()); 
 System.out.println("SQL Error State: " + 
 exc.getSQLState()); 
 System.out.println("SQL Error Code.: " + 
 exc.getErrorCode());
 }
 System.exit(0); 
 } //end main
}

Notice the DB2400 class from Listing 13.2 does the registration and connection. The getConnection method does not specify a system, user ID, or password, so when running on Windows, you get prompted for this information. The QAUOOPT file has only two fields, each of type character, so this query is easy. There are no markers to replace, and you simply use the getString method to retrieve each column's data inside the ResultSet processing loop. If you run this simple class either on Windows or AS/400, you get the following output:

Welcome to QueryPDMOptions 
Connecting to AS/400... 
Preparing statements... 
Doing query...
 
AP ADDPRJLIBL PRJ(&ZP) GRP(&ZG) SCAN(&ZH) SCHPTH(&ZS)
 C CALL &O/&N
CC CRTBNDC PGM(&L/&N) SRCFILE(&L/&F) OUTPUT(*PRINT) DBGVIEW(*ALL) 
DEFINE(TRACE)
CD STRDFU OPTION(2) 
CE CALL QCODE/EVFCFDBK PARM('37' 'Y' 'OS400' ' 
CODEEDIT "&L/&F(&N)"')
CL CHGCURLIB CURLIB(&N) 
CM STRSDA OPTION(2) SRCFILE(&L/&F) ??SRCMBR() 
CP CRTPGM2 PGM(&L/&N) MODULE(*PGM) ACTGRP(*DFTACTGRP) 
CS STRSDA OPTION(1) SRCFILE(&L/&F) ??SRCMBR() 
DL DSPLIBL
DM DSPMSG
DO DMPOBJ OBJ(&L/&N) OBJTYPE(&T) 
EA EDTOBJAUT OBJ(&L/&N) OBJTYPE(&T)
GO GO &L/&N
IM IMPPART ??OBJ(&L/&F) ??OBJTYPE(*FILE) ??MBR(&N) ??PART(&N) 
??LANG(&S) TEXT(&X) 
IO IMPPART ??OBJ(&L/&N) ??OBJTYPE(&T) ??PRJ() ??GRP() ??TYPE(&S) 
??PART(&N)
JL DSPJOBLOG
MK CRTCMOD MODULE(&L/&N) SRCFILE(&L/C) OUTPUT(*PRINT) 
DBGVIEW(*SOURCE) DEFINE(TRACE NONUNIX NONRS6000 ILE AS400) 
PL WRKPARTPDM PRJ(&ZP) GRP(&ZG) TYPE(*ALL) PART(*ALL) LANG(*ALL) 
PARTL(&N)
RP RMVPRJLIBL
SL SBMJOB ??CMD(SAVLIB LIB(&N)) 
SM SBMJOB ??CMD(SAVOBJ OBJ(&F) LIB(&L) OBJTYPE(*FILE) FILEMBR((&F (&N))))
SO SBMJOB ??CMD(SAVOBJ OBJ(&N) LIB(&L)) 
SP WRKSPLF
TD STRSDA OPTION(3) TSTFILE(&L/&N) 
WS WRKSBMJOB
 
Closing everything... 
Done.

A more detailed example: Working with PDM options

The previous example shows how to use the JDBC syntax for connecting and querying data. It's reasonably easy, but it's not the type of robust Java code you probably want to write. Rather, it is better to put everything that deals with accessing the file via JDBC into a separate class. This way, you have only one place to change if the database changes or you decide to use something other than JDBC. Also, it can make it much easier to write an application that uses the returned data in many places.

Let's look at an example of reading and updating a database, and how you might design the classes to encapsulate that. It again uses the PDM user-defined options database, as everyone has that on their systems. That file is encapsulated in a class named PDMOptions, and its constructor takes a previously created DB2400 object to prepare some statements. Then, methods are supplied for reading the database and populating an internal list, for returning the elements of that list as an Enumeration, for adding a record to the database, for updating a record in the database, and for deleting a record in the database. Although the file is not keyed, in practice no two records can have the same two-character option, so enforcing this uniqueness is left to the client code, which does so.

To encapsulate one record, you need two other classes. The first, named PDMOptionKey here, contains a String object for the option. This class has the usual hashCode and equals methods to allow it to be stored in vectors and hashtables, and have searches done on those. The second, PDMOption, includes a PDMOptionKey object and a String object for the command part of the record. This class has an equals method so that searches can be done for a command, and some helper methods for displaying and printing the record contents. So, the PDMOptions class will create a vector of PDMOption objects on a query, and it will expect a PDMOption object as input to the add, update, and delete methods. These will search the vector for matching records, via the indexOf method from Vector.

First, Listing 13.4 shows the PDMOptionKey class. Notice how it supplies a static verify method, so code that takes a new option as user input can test that it is valid. It is a very good idea to encapsulate such error checking in the class for a field, so that it is reusable everywhere.

Listing 13.4: The PDMOptionKey Class, Encapsulating the Option Part of a PDM Option Record

public class PDMOptionKey 
{ 
 private String option;
 
 public PDMOptionKey(String option) 
 {
 this.option = option;
 } 
 public String getOption() 
 {
 return option;
 } 
 public void setOption(String option) 
 {
 this.option = option;
 } 
 public boolean equals(Object other) 
 { 
 if (other instanceof PDMOption) 
 return option.equalsIgnoreCase(
 ((PDMOption)other).getKey().getOption()); 
 else if (other instanceof PDMOptionKey) 
 return option.equalsIgnoreCase(
 ((PDMOptionKey)other).getOption()); 
 else
 return false;
 } 
 public int hashCode() 
 {
 // Use the uppercased option so hashCode agrees with equalsIgnoreCase
 return option.toUpperCase().hashCode();
 }
 public static String verify(String newOption) 
 {
 String errMsg = null; 
 if (newOption.length() == 0) 
 errMsg = "No option specified"; 
 else if (newOption.length() > 2) 
 errMsg = "Option more than 2 characters"; 
 else if (Character.isDigit(newOption.charAt(0))) 
 errMsg = "First character of option must not be numeric"; 
 return errMsg;
 }
 public String toString() 
 {
 if (option.length() == 2) 
 return option;
 else 
 return " " + option;
 } 
} // end PDMOptionKey

Next, Listing 13.5 is the PDMOption class.

Listing 13.5: The PDMOption Class, Encapsulating Both Parts of a PDM Option Record

import java.io.*; 
public class PDMOption 
{ 
 private PDMOptionKey key; 
 private String command;
 
 public PDMOption(PDMOptionKey key, String command) 
 {
 this.key = key; 
 this.command = command.trim(); 
 } 
 public PDMOptionKey getKey() 
 {
 return key; 
 } 
 public String getCommand() 
 {
 return command; 
 } 
 public void setCommand(String command) 
 {
 this.command = command; 
 } 
 public boolean equals(Object other) 
 { 
 if (other instanceof PDMOption) 
 return command.equals(((PDMOption)other).getCommand()); 
 else if (other instanceof PDMOptionKey) 
 return getKey().equals(other);
 else
 return false;
 } 
 public String toString() 
 { 
 return key.toString() + " " + command; 
 }
 public void display() 
 {
 System.out.println("-------------"); 
 System.out.println("Option......: " + key);
 System.out.println("Command.....: " + command);
 }
 public void print(PrintWriter writer) 
 { 
 writer.println("-------------"); 
 writer.println("Option......: " + key);
 writer.println("Command.....: " + command);
 writer.println(); writer.flush();
 }
}

Finally, Listing 13.6 is the PDMOptions class. (Note that the listing doesn't show all the constructors, since they are just variants of the main one shown.) The helper methods containsKey and getKeyIndex test whether a given PDMOption object exists in the file and return its index position in the file. You don't actually have to check the file to do this; you just have to check the internal Vector list, using the indexOf method. You pass it the key part of the option, and Java calls the equals method on this key object for every element in the vector, until one returns true. That is why the PDMOptionKey class's equals method has to recognize a PDMOption object as the parameter and compare keys (the option field) appropriately.

The PDMOptionKey object is used instead of the PDMOption object in the call to indexOf because you want to find a match on the option part, not the command part. You wouldn't need this special version of equals that expects a PDMOption object or PDMOptionKey object if you used Hashtable instead of Vector to store the list of returned rows. However, you will find Hashtable a poor choice for this because its elements method returns the items in a different order than they were placed in the list. Vector preserves the order.
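A quick standalone illustration of that ordering guarantee (the class and sample values are ours, not from the CD-ROM):

```java
import java.util.Enumeration;
import java.util.Vector;

public class OrderDemo
{
    public static void main(String args[])
    {
        Vector list = new Vector();
        list.addElement("CC");   // elements come back in the order added...
        list.addElement("AP");
        list.addElement("WS");
        StringBuffer sb = new StringBuffer();
        for (Enumeration e = list.elements(); e.hasMoreElements(); )
            sb.append(e.nextElement()).append(' ');
        System.out.println(sb.toString().trim()); // prints "CC AP WS"
        // ...whereas Hashtable's elements method makes no such promise.
    }
}
```

For an editable list shown to the user, that predictability matters.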

Listing 13.6: The PDMOptions Class

 import java.sql.*;
 import java.util.*;
 import java.io.*;
 public class PDMOptions
 {
 protected DB2400 As400Conn = null;
 protected Connection conn = null;
 protected PreparedStatement queryAll, addRecord,
 updateRecord, deleteRecord;
 protected Vector optionList = null;
 protected String fileName, librName;
 
 public PDMOptions(DB2400 As400Conn, String library,
 String filename)
 throws SQLException
{
 this.As400Conn= As400Conn;
 conn = As400Conn.getConnection();
 this.fileName = filename;
 this.librName = library;
 String libFile = library + "." + filename;
 try
 {
 queryAll = conn.prepareStatement(
 "SELECT OPTION, COMMAND FROM " + libFile);
 Statement lockStatement = conn.createStatement();
 lockStatement.executeUpdate(
 "LOCK TABLE " + libFile + " IN SHARE MODE");
 addRecord = conn.prepareStatement(
 "INSERT INTO " + libFile + " VALUES(?,?)");
 updateRecord = conn.prepareStatement (
 "UPDATE " + libFile +
 " SET COMMAND = ? WHERE OPTION = ?");
 deleteRecord = conn.prepareStatement (
 "DELETE FROM " + libFile + " WHERE OPTION = ?");
 } catch(SQLException exc)
 {
 if (queryAll != null) queryAll.close();
 if (addRecord != null) addRecord.close();
 if (updateRecord != null) updateRecord.close();
 if (deleteRecord != null) deleteRecord.close();
 queryAll=addRecord=updateRecord=deleteRecord=null;
 throw(exc); // rethrow
 }
 }
 
 public int getAll()
 {
 optionList = new Vector();
 try
 {
 ResultSet rs = queryAll.executeQuery();
 while (rs.next())
 {
 PDMOptionKey nextKey = new PDMOptionKey(
 rs.getString(1));
 PDMOption nextValue = new PDMOption(nextKey,
 rs.getString(2));
 optionList.addElement(nextValue);
 }
 rs.close();
 } catch (SQLException exc) {}
 return optionList.size();
 }
 public Enumeration elements()
 {
 return optionList.elements();
 }
 public boolean containsKey (PDMOption option)
 {
 return (getKeyIndex(option) != -1);
 }
 public int getKeyIndex(PDMOption option)
 {
 return optionList.indexOf(option.getKey());
 }
 public String addOption(PDMOption option)
 {
 String errMsg = null;
 if (getKeyIndex(option) != -1)
 return "Option " + option.getKey() + " already exists";
 try {
 addRecord.setString(1,option.getKey().getOption());
 addRecord.setString(2,option.getCommand());
 int rowsAffected = addRecord.executeUpdate();
 if (rowsAffected == 1) // better!
 optionList.addElement(option);
 else
 errMsg = "No record added";
 } catch (SQLException e) {errMsg = e.getMessage();}
 return errMsg;
 }
 
 public String updateOption (PDMOption option)
 {
 String errMsg = null;
 int matchIndex = getKeyIndex(option);
 if (matchIndex == -1)
 return "No matching option found";
 try {
 updateRecord.setString(1,option.getCommand());
 updateRecord.setString(2,option.getKey().getOption());
 int rowsAffected = updateRecord.executeUpdate();
 if (rowsAffected == 1) // better!
 optionList.setElementAt(option, matchIndex);
 else
 errMsg = "No record updated";
 } catch (SQLException e) {errMsg = e.getMessage();}
 return errMsg;
 }
 public String deleteOption(PDMOption option)
 {
 String errMsg = null;
 int matchIndex = getKeyIndex(option);
 if (matchIndex == -1)
 return "No matching option found";
 try {
 deleteRecord.setString(1,option.getKey().getOption());
 int rowsAffected = deleteRecord.executeUpdate();
 if (rowsAffected == 1) // better!
 optionList.removeElementAt(matchIndex);
 else
 errMsg = "No record deleted";
 } catch (SQLException e) {errMsg = e.getMessage();}
 return errMsg;
 }
} // end class

Due to space constraints, Listing 13.6 doesn't show everything in the class. For example, there is an important close method that closes all the statement objects. There is also support via one of the constructors for a read-only mode, which does not lock the table or prepare the non-query statements.

You can see in the code we do show that, when this object is constructed, all of the potentially often-repeated SQL statements are prepared (SELECT, INSERT, UPDATE, and DELETE). Also, the LOCK TABLE command is used to lock the file member (which in SQL is always just the first one), since the file could be in use in an edit session, and you can't afford to have the data change underneath you while you work on a local copy of it.

The intended use of this class is to first call the getAll method, then call the addOption, updateOption, and deleteOption methods as many times as desired, and finally call the close method. The add, update, and delete methods first update the database and, if successful, update the local Vector list, so it is kept in sync. Note that these methods all take a PDMOption object, which you might create from option (opt) and command (cmd) strings like this:

PDMOption newOption = new PDMOption(new PDMOptionKey(opt), cmd);

This self-contained class for working with a PDM options file is now ready to be used in one or more applications. To test it, we created a full "work with" GUI for remotely editing any PDM options file. All the code for this is on the CD-ROM included with this book, in Chapter 13's PDMOptions subdirectory.

If you test-drive this application, first copy your PDM options file and open that copy, as this application really does change the file! That is, it actually works, so be careful. Start the application by typing java PDMOptionsWindow. Figure 13.3 shows the main window after opening a file.

click to expand
Figure 13.3: A GUI application that uses the PDMOptions class for database I/O

Note that a JTable holds the list. Chapter 12 did not describe how to code these, but you can read about them in the comments in files OptionsTable.java and OptionsTableModel.java on the CD-ROM. The GUI we wrote simply stores and shows the list of PDMOption objects you get by calling getAll and elements of the PDMOptions class. It then uses the appropriate methods, with the selected objects, for processing the add, copy, update, delete, and import options (for the add option, a new PDMOption object is created).

This example provides a reasonably sized application that works with remote data, yet all database I/O has been encapsulated inside a single class (PDMOptions), rather than being distributed throughout the application. Further, the next application you write can use the same class. As well as encapsulating the I/O in one class, this example encapsulates a single record in one class, and the key part of that record in another (which the first contains). Often, we even go further and encapsulate each field in its own class and then put instances of each in the record class and the key class. This allows us to put the error checking for the fields in one place. However, since the fields in this file are so simple, the basic String class can be used to hold them.

Reading rows a few at a time

A popular question is: "Can I read the database xx rows at a time, as I am used to with my subfile processing?" After all, your own files probably contain far more than the few dozen records in the PDM options file. The answer is yes, but you have to code this yourself: simply write your result-set processing to handle only the next n records at a time. For example, you might have a method named readNextNRows:

public short readNextNRows(short n) throws SQLException 
{ 
 // Assumes rs is a ResultSet instance variable of the enclosing class
 short readRows = 0; // return how many rows actually read
 for (boolean more = rs.next();
 more && readRows < n; 
 readRows++, more = rs.next())
 {
 // process the current row 
 }
 return readRows; 
}

It is not always necessary to bother with this, though. Instead, you may take a shortcut by placing the result-set processing in a separate thread, so that it runs in the background and can be interrupted by the user. In most cases, this effectively removes the need to only retrieve n records at a time because the user is not forced to wait for all records to be read before continuing. Chapter 11 discusses this concept of threads. Simply place the database retrieval code in a class that implements Runnable, and have the run method do your database retrieval. Also, include a private boolean variable that can be set to true to stop any operation.
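Here is a minimal sketch of that pattern. To keep it self-contained, the retrieval loop is simulated with a counter; in real code, the for loop would be your while (rs.next()) loop. The class name is ours:

```java
public class RetrievalThread implements Runnable
{
    private volatile boolean stopped = false; // set from the GUI thread
    private int rowsRead = 0;

    public void run()
    {
        // Simulated retrieval loop; real code would call rs.next() here
        // and process one row per iteration.
        for (int row = 0; row < 1000 && !stopped; row++)
            rowsRead++;
    }

    public void stop()        { stopped = true; }
    public int  getRowsRead() { return rowsRead; }

    public static void main(String args[]) throws InterruptedException
    {
        RetrievalThread worker = new RetrievalThread();
        Thread t = new Thread(worker);
        t.start();   // retrieval happens in the background
        t.join();    // a GUI would not block like this, of course
        System.out.println("Rows read: " + worker.getRowsRead());
    }
}
```

The GUI's Stop button simply calls stop, and the loop notices the flag on its next iteration.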

Putting your database access code into a thread is important if the access is done on the client and from a GUI, because otherwise your GUI will be locked waiting for the operation to complete, and users won't know what is going on. For this reason, the PDM options example is threaded by creating a PDMOptionsThreaded class and duplicating all the database access methods from PDMOptions in it (constructor, getAll, addOption, updateOption, deleteOption, close). These end up calling the corresponding methods in the PDMOptions class, but they do so from a thread.

The example uses a common trick in GUI applications for this: it pops up a status dialog showing progress and provides a Stop button so the user can stop the action. (We don't burden you with all the code required to do this, but if you are interested, you will find it on the CD-ROM in the PDMOptionsThreaded directory of Chapter 13.) The basic steps are to first create (but not show) the dialog, then start the thread running, and then show the dialog. This last step won't return control to you until the dialog is closed, which the thread does as its last step.

In the example, a dialog class named DBAccessStatusDialog shows the name of the operation being performed, along with a Stop push button. A JProgressBar gives the idea of something happening, as shown in Figure 13.4.


Figure 13.4: The status dialog shown while accessing the database

If you do decide to process your database access in a thread, you should look at two methods in the Statement, PreparedStatement, and CallableStatement classes that will be of use:

  • The cancel() method can be called by another thread to cancel a long-running query prior to executeQuery returning.
  • The setQueryTimeout(int seconds) method can set an upper limit, in seconds, on how long to wait for the SQL query to end. The default is zero, meaning no time limit.

Another potentially useful method is setMaxRows(int max) to set an upper limit on how many records to be retrieved in a result set. Again, the default is zero, no limit.

Handling warnings

There are cases where JDBC (actually SQL) issues warnings, not errors, due to data truncation, implicit commits, and so on. To be thorough, or for SQL statements that you know could result in SQL warnings, you can check for warnings after executing an SQL statement (via execute, executeQuery, or executeUpdate) using the method getWarnings in the Statement class. This returns the first warning in the form of an instance of the SQLWarning exception class. This object has a getNextWarning method for retrieving additional warning messages, if any. You should always call your Statement or PreparedStatement object's clearWarnings method before executing to be sure that any subsequent warnings are actually issued for that statement execution:

stmt.clearWarnings(); // stmt == Statement object 
stmt.execute(sqlString); // execute sqlString SQL statement 
SQLWarning warning = stmt.getWarnings(); 
while (warning != null)
{ 
 System.out.println("Warning: " + warning.getMessage());
 warning = warning.getNextWarning(); 
}

Rather than checking the warnings at the statement object level, you can check them at the Connection level itself, to retrieve all the warnings posted for a number of statement executions. Note that the method names and usage are identical:

conn.clearWarnings(); // conn == Connection object 
// execute one or more statements 
SQLWarning warning = conn.getWarnings(); 
while (warning != null) 
{
 // examine or log the warning here 
 warning = warning.getNextWarning();
}

If this is still not enough flexibility, you can even check them on the ResultSet object, again using the identical methods.

Truncation warnings actually have their own Java exception class, DataTruncation, which inherits from the SQLWarning class. In fact, SQLWarning also inherits from SQLException, so you get access to all the methods in all these classes for DataTruncation exceptions. The DataTruncation class includes methods such as getDataSize to return how many bytes should have been processed, and getTransferSize to return how much data actually was processed (on a read or write). Another useful method is getIndex, which returns the column number of the truncated field. If you suspect a truncation warning, you can use the inherited getSQLState method of your SQLWarning object to check for truncation state 01004, and then cast your object to a DataTruncation object to access the unique methods in that class:

if (warning.getSQLState().equals("01004")) 
{ 
 DataTruncation t = (DataTruncation)warning; 
 int colIdx = t.getIndex(); 
 int truncd = t.getTransferSize(); 
 System.out.println("Column " + colIdx + " truncated at "
 + truncd + " bytes");
}

This exception can happen if you use the setMaxFieldSize(int) method on a Statement or PreparedStatement object. This method places an upper limit on how many bytes will be read or written to the database for any column.

Step 6: Close, close, close

There's nothing new here, just a gentle reminder to always close your result set, statement, and connection objects as soon as you are done with them! They hold valuable database resources, and you don't want them held any longer than absolutely necessary. (Any currently open ResultSet is closed implicitly when you execute another statement with the same Statement or PreparedStatement object.)
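One defensive way to follow this advice is a small helper that closes everything in reverse order of creation, tolerating nulls and individual failures. This is our own sketch, not code from the CD-ROM:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcCleanup
{
    // Close result set, then statement, then connection. A failure
    // closing one object should not prevent closing the others.
    public static void closeAll(ResultSet rs, Statement stmt, Connection conn)
    {
        try { if (rs   != null) rs.close();   } catch (SQLException exc) {}
        try { if (stmt != null) stmt.close(); } catch (SQLException exc) {}
        try { if (conn != null) conn.close(); } catch (SQLException exc) {}
    }
}
```

Call it from a finally block so the cleanup runs even when an SQLException is thrown partway through your processing.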

More on data types

When using SQL, whether embedded in RPG code, through CLI APIs, or through JDBC, you use predefined SQL data types. These ensure consistency and portability from one database vendor to another. When using SQL in RPG, these data types are mapped to and from RPG-native data-type variables by the database.

The operation is similar for JDBC. You have seen how the getXXX methods in the ResultSet class allow you to equate an SQL column's data type to a Java basic data type or a Java object data type (like java.math.BigDecimal). You have also seen how the setXXX methods in the PreparedStatement class allow you to equate a Java basic data type or Java object to an SQL column's data type. The conversion between the AS/400's EBCDIC values and Java's Unicode values happens automatically for you, including codepage conversions.

Table 13.10 summarizes the SQL data types and their AS/400 DDS and RPG equivalents. (The data-type mapping shown is merely the preferred or most common mapping. You can easily map to and from various types-as you saw, for example, in the setXXX methods.)

Table 13.10: SQL, DDS, and Java Data Types

SQL Type                       DDS Type (Len)                           Default                              Java Type
BIGINT[1]                      B=Binary (9,0)                           B (9,0)                              int
BINARY[2] (n)                  A=Character (n) CCSID(65535)             A (1) CCSID(65535)                   String
BIT[1]                         B=Binary (4,0)                           B (4,0)                              short
CHAR (n)                       A=Character (n)                          A (1)                                String
DATE                           L=Date (10)                              L                                    java.sql.Date
DECIMAL (n,m)                  P=Packed (n,m)                           P (5,0)                              java.math.BigDecimal
DOUBLE (n)                     F=Float FLTPCN(*DOUBLE) (x,x-1)          F (17,16) *DOUBLE                    double
FLOAT (n) n=1-14               F=Float FLTPCN(*SINGLE) (x,x-1)          F (17,16) *DOUBLE                    float
FLOAT (n) n=25-53              F=Float FLTPCN(*DOUBLE) (x,x-1)          F (17,16) *DOUBLE                    double
INTEGER                        B=Binary (9,0)                           B (9,0)                              int
LONGVARBINARY[1]               A (n=max size) VARLEN CCSID(65535)       A (n=max size) VARLEN CCSID(65535)   String
LONGVARCHAR[1]                 A (n=max size) VARLEN                    A (n=max size) VARLEN                String
NUMERIC (n,m)                  S=Zoned (n,m)                            S (5,0)                              java.math.BigDecimal
REAL                           F=Float FLTPCN(*SINGLE) (8,7)            F (8,7) *SINGLE                      float
SMALLINT                       B=Binary (4,0)                           B (4,0)                              short
TIME                           T=Time (8)                               T                                    java.sql.Time
TIMESTAMP                      Z=Timestamp (26)                         Z                                    java.sql.Timestamp
TINYINT[1]                     B=Binary (4,0)                           B (4,0)                              short
VARBINARY[2] (n) ALLOCATE(m)   A=Character (n) CCSID(65535) VARLEN(m)   A (1) CCSID(65535) VARLEN            String
VARCHAR (n) ALLOCATE(m)        A=Character (n) VARLEN(m)                A (1) VARLEN                         String

[1] These data types are not actually supported by JDBC for DB2/400 (as you will see by perusing the SQLTypes.html documentation file in the Toolbox). However, they are mapped to reasonably equivalent SQL types, which is reflected here. The BIGINT type is actually supported in V4R5 or higher.
[2] DB2 for AS/400 does not actually support BINARY and VARBINARY SQL types; however, the JDBC for DB2/400 driver does tolerate them simply by equating them to DB2/400's CHAR/VCHAR CCSID(65535) values, which is perfectly reasonable.
 

In addition to those in Table 13.10, DB2/400 supports three SQL types that JDBC does not explicitly support: GRAPHIC, VARGRAPHIC, and LONGVARGRAPHIC for "pure DBCS" fields. These map to the G (Graphic) data type in DDS, with VARLEN where appropriate. For Java, you should map to Java's String class type.

Null value considerations in JDBC

As you probably know, both DB2/400 and SQL allow "null-capable" fields, which you can explicitly set to null to identify an un-set value. This replaces the old method of using your own hard-coded "special" values to indicate un-set values. Of course, this implies that your program needs a way to indicate that a column should be set to null on a write, and to detect that it is null on a read. RPG IV has this now, and of course JDBC has it as well.

In Java's JDBC, you can call the ResultSet method wasNull for the previously read column (via getXXX method) of the current row. It will return true if that column value is null.

To write a null value to a field, use a PreparedStatement and the setNull method. This method takes the integer position of the marker as its first parameter, and the SQL type of the column that marker represents as its second. For example:

pstmt.setNull(1,Types.INTEGER);

Further, any of the setXXX methods that take an object accept a null parameter for that object, and will write an SQL null to the database for that column in that row.
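Putting those two rules together, a hypothetical helper for writing a nullable command column might look like this (the parameter position 2 and the VARCHAR type are assumptions for illustration only):

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class NullAwareWriter
{
    // Writes an SQL null when the Java value is null, otherwise the string.
    // Position 2 and Types.VARCHAR are illustrative assumptions.
    public static void setCommand(PreparedStatement pstmt, String command)
        throws SQLException
    {
        if (command == null)
            pstmt.setNull(2, Types.VARCHAR);
        else
            pstmt.setString(2, command);
    }
}
```

Centralizing this check keeps the null-handling rule in one place, in the spirit of the field-encapsulation classes shown earlier.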

Commitment control in JDBC

JDBC supports commitment control, of course, because it is a major part of the SQL standard. To use commitment control with JDBC, you first must specify a transaction isolation level. By default, this is none, but you can specify it either as a property on the connect statement, or later via a call to the Connection object's setTransactionIsolation method. The property values correspond to the COMMIT parameter on the CRTSQLxxx preprocessor commands, while the setTransactionIsolation parameter is a constant integer defined in the java.sql.Connection class. The former is a DB2/400-friendly way to do it, while the latter is a database-portable way. The transaction isolation affects how database objects, like selected rows, are locked for the life of a transaction (one or more SQL statements executed between calls to commit), relative to other concurrent transactions. The transaction isolation options are listed in Table 13.11. (See the DB2/400 SQL reference documentation for more information on these.)

Table 13.11: Commitment-Control Transaction Isolation Options

Transaction Isolation Property   setTransactionIsolation Constant   CRTSQLxxx COMMIT Parameter
none                             TRANSACTION_NONE                   *NONE or *NC
Read committed                   TRANSACTION_READ_COMMITTED         *CS
Read uncommitted                 TRANSACTION_READ_UNCOMMITTED       *CHG or *UR
Repeatable read                  TRANSACTION_REPEATABLE_READ        *RS
Serializable                     TRANSACTION_SERIALIZABLE           *RR

Once you choose and specify a transaction isolation level, you must next decide whether to do explicit commit/rollback calls, or use the default setting of autoCommit. In the default mode, all SQL statements are automatically committed after their last result set row has been read (for executeQuery) or the statement has been executed (for executeUpdate). A statement is also implicitly committed when you execute another statement, if it has not been already. You can, and probably will, turn off autoCommit mode by calling setAutoCommit(false) on your connection object. In this case, you explicitly code your own calls to the commit or rollback methods on your connection object.
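With autoCommit off, the explicit style looks like this sketch (the table and column names are invented for illustration; the point is that both updates succeed or neither does):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TransferSketch
{
    // Run two updates as a single transaction.
    public static boolean transfer(Connection conn) throws SQLException
    {
        conn.setAutoCommit(false); // we will commit or roll back ourselves
        Statement stmt = conn.createStatement();
        try
        {
            stmt.executeUpdate(
                "UPDATE ACCOUNTS SET BALANCE = BALANCE - 100 WHERE ID = 1");
            stmt.executeUpdate(
                "UPDATE ACCOUNTS SET BALANCE = BALANCE + 100 WHERE ID = 2");
            conn.commit();   // make both changes permanent together
            return true;
        }
        catch (SQLException exc)
        {
            conn.rollback(); // undo any partial work
            return false;
        }
    }
}
```

The catch block is the whole reason to turn autoCommit off: without it, the first UPDATE could be committed even when the second one fails.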

Retrieving database metadata in JDBC

One thing you are sure to miss in Java is externally described files. That is, you cannot simply code the equivalent of an F-spec and have all the file's field names automatically defined in your program. In terms of this function, Java goes back a number of years in usability, into a world where you have to hard-code field names, types, lengths, and so on into your program, and manually keep those in sync with the database. There is no comforting level-check support to ensure that you have done that job properly. And you cannot rely on clever "impact analysis" tools to identify programs affected by a field definition change. Furthermore, because Java has no externally described language for user interfaces, such as display files offer, you lose the benefits of reference fields as well. This is not Java's fault! It is just that this exceptional productivity and quality advantage is unique to the AS/400 and DB2/400. So what's a poor portable language to do?

Don't despair. There is something Java can do, and it has done it. JDBC, and in fact SQL in general, provides a way to at least query this information at runtime, so you do not have to hard-code it. This information is known as metadata, and includes everything you need to know about a field or column definition. Of course, using this dynamic approach results in a performance hit, but if you are really concerned about maintenance and integrity, you might decide that it is worth the sacrifice. Actually, if you are writing a truly dynamic application where you cannot predict the file or fields that will be used, you will have to use this technique.

You need to use the ResultSet class's getMetaData method, which returns a ResultSetMetaData object containing information about the columns of the result set. To get the information out of it, you use the isXXX and getXXX methods, including those listed in Table 13.12. As you can see, all the methods except getColumnCount take an integer parameter to identify the column number (one-based) to return the information about. If you want to try this out, look at the class TestMetaData on the CD-ROM, which queries and displays this information for the file CUSDATA in library QPDA.

Table 13.12: The ResultSetMetaData Methods

Method                     Description
-------------------------  -----------------------------------------------------------------
isCaseSensitive(int)       Always true for text types such as CHAR, and always false for numeric types like DECIMAL.
isNullable(int)            True if null-capable (ALWNULL).
getColumnCount()           The number of columns.
getColumnDisplaySize(int)  For decimal columns, the total length plus two for point and sign.
getColumnName(int)         The column name (DDS field name).
getColumnType(int)         The column data type, from java.sql.Types.
getColumnTypeName(int)     Same as above, but as a readable string.
getPrecision(int)          The total length (including decimals).
getScale(int)              Decimal positions.

The idea behind metadata is that you can query this information, rather than hard-coding it. That allows your Java code to automatically handle any changes in your database definition.
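As a brief illustration of that idea, the sketch below builds a one-line description of every column in any result set, so nothing about the file needs to be hard-coded. The helper names are ours; main exercises only the formatting helper, since no database is attached here:

```java
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class ColumnInfo {
    // Build a one-line description from the pieces ResultSetMetaData returns
    // (illustrative helper; the format is our own choice).
    static String describe(String name, String typeName, int precision, int scale) {
        return name + " " + typeName + "(" + precision
                    + (scale > 0 ? "," + scale : "") + ")";
    }

    // Dump every column of a result set; works against any query, so the
    // program adapts automatically when the database definition changes.
    static void dumpColumns(ResultSet rs) throws SQLException {
        ResultSetMetaData md = rs.getMetaData();
        for (int i = 1; i <= md.getColumnCount(); i++)   // columns are one-based
            System.out.println(describe(md.getColumnName(i), md.getColumnTypeName(i),
                                        md.getPrecision(i), md.getScale(i)));
    }

    public static void main(String[] args) {
        // No database here, so just demonstrate the formatting helper.
        System.out.println(describe("BALANCE", "DECIMAL", 9, 2));
    }
}
```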

Another class, named DatabaseMetaData, offers a vast amount of information about the database itself (that is, DB2/400). An instance of this is returned by the Connection object's getMetaData method. The information returned by the large number of methods in this class can be categorized as in Table 13.13. The last two categories are the ones you might need. The others are only needed for completely portable, database-neutral, and dynamic applications, such as a generic database query tool.

Table 13.13: The DatabaseMetaData Methods

  • Database support information (e.g., supportsFullOuterJoins(), getDateTimeFunctions()). The SQL functionality this database (DB2/400, in our case) supports.
  • Database terminology information (e.g., getCatalogTerm(), getProcedureTerm()). What terminology this database uses for common items.
  • Database name and version information (e.g., getDatabaseProductVersion(), getDriverVersion()). The version of this database or JDBC driver.
  • Cataloged information (e.g., getProcedures(), getTables(), getSchemas()). The database-related objects from the system.
  • Specific information (e.g., getColumns(), getProcedureColumns(), getCrossReference(), getExportedKeys()/getImportedKeys(), getIndexInfo()). Explicit information about a given database object.

The method you might need to use the most is getColumns, if you want to retrieve the field description (TEXT column in DDS) for a column. This method returns a ResultSet object containing the metadata for every column (by default) in the specified table.

For every column in the file, you get one row in the result set. The row contains 17 pieces of information (one piece in each column of the result set), much of it useful: (1) catalog name, (2) schema name, (3) table name, (4) column name, (5) data type, (6) type name, (7) column size, (8) buffer length, (9) decimal digits, (10) numeric precision radix, (11) nullable, (12) remarks, (13) column definition, (14) SQL data type, (15) SQL date/time substitution, (16) character octet length, and (17) ordinal position. Phew! To get the field description, use column 12, the REMARKS column, like this:

DatabaseMetaData dbmd = conn.getMetaData(); 
ResultSet rs = dbmd.getColumns(null, "QPDA", "QCUSDATA", null); 
while (rs.next())
{
   String description = rs.getString(12);
   // ... use the description ...
}

This returns the TEXT value, not the COLHDG value. There is no metadata method for retrieving this, as COLHDG is an AS/400-unique value. However, the DDS compiler will default the TEXT value to the COLHDG value if TEXT is not specified.



The Good Stuff: Stored Procedures, Triggers, and RI

As part of the evolving SQL standard, new database constructs have been defined, including these three:

  • Constraints, which are rules about key validation, also called Referential Integrity or RI.
  • Triggers, which are programs called when records are added, changed, or deleted.
  • Stored procedures, which are programs called directly via SQL statements.

These are all part of an effort to allow you to move your database rules out of each of your programs and into the database itself. This is a very important concept. It means that you can increase the number of ways in which your users can access your data, without compromising the data's integrity.

Currently, much of the input validation for a database is done by your RPG program logic. This logic is often duplicated across multiple programs and is subject to multiple maintenance problems. (ILE can help here by allowing you to code a single, reusable *MODULE or *SRVPGM object that all your programs can share.) It also means that, to allow access to the data from, say, a Java client or even a Java server program, you have to either duplicate that logic again, or code your Java programs to call your RPG programs in order to manipulate the data. By centralizing all these validation rules into the database itself, you can ensure that, no matter how your data is accessed, it will remain valid.

With DB2/400 (and the rest of the DB2 family), stored procedures and triggers can be written in any high-level language, and can include embedded SQL. Further, stored procedures can also be written in SQL itself. (The SQL stored procedure language adds basic HLL constructs such as looping and variable assignment.) Or, as of V4R5, stored procedures can be written in Java!

For constraints and triggers, there is nothing to say in JDBC except that if they exist, then they will be executed when you access your data from JDBC. For stored procedures, however, you need to learn how to call them from JDBC.



Stored Procedure Calls in JDBC

Stored procedures are simply programs on your server that you can call via SQL statements. This means, ultimately, that the database itself actually calls the program. Stored procedures can be passed parameters, those parameters can be updated, and the new values made available to the calling program. Stored procedures can also be coded to return an SQL result set to the calling program. They are often used by enterprising programmers for two reasons:

  • They provide a standard (that is, portable) means of calling any existing non-interactive program on the server, even if that program was not designed as a stored procedure.
  • They provide a terrific way to improve database access performance compared to multiple remote SQL statements.

To call an AS/400 program object as a stored procedure, use the class CallableStatement, which extends PreparedStatement. You acquire an object instance of this via the Connection class's prepareCall method. When you call an AS/400 program object as a stored procedure, you can pass in parameters to the program as follows:

  • Input-only. Read, but not updated, by the called program.
  • Output-only. Updated, but not read, by the called program.
  • Input-output. Read and updated by the called program.

Further, your stored procedure program can optionally return a result set if it is used to retrieve a list of rows from the database. Some databases support a return code from a stored procedure call, but DB2/400 does not. Instead, simply update one of the output or input/output parameters.

To invoke a stored procedure program from JDBC, first create a CallableStatement object by calling prepareCall, specifying an SQL CALL statement as the parameter, and specifying a marker (a question mark) for each parameter:

CallableStatement proc1 = 
 conn.prepareCall("CALL MYLIB/MYPROG(?, ?, ?)");

The syntax of the program name is lib/program for system naming, or lib.program for SQL naming. The resulting object can be executed multiple times. For each execution, do the following:

  • Specify the parameter data types and values for each input-capable (input or input-output) parameter, using the setXXX methods that your CallableStatement object inherits from the PreparedStatement class:

    proc1.setInt(1, 1); 
    proc1.setString(2, parm2String);

    The first parameter is the relative number of the marker you are setting, and the second parameter is the value to set it to. These values remain until you change them or clear them via proc1.clearParameters().

  • Specify the parameter data types for each output-capable (output or input-output) parameter, using the registerOutParameter method:

    proc1.registerOutParameter(2, java.sql.Types.CHAR);

    The first parameter is the relative number of the marker you are registering, and the second parameter is the SQL type of the parameter. For input and output parameters, you must do this and the previous step, and the types specified for both must be consistent.

  • Call the stored procedure program using the execute or executeQuery methods that your CallableStatement object inherits from the Statement class. Use the former only if your stored procedure returns multiple result sets. Use the latter if it returns one result set or no result set (the result set will simply be empty):

    ResultSet rs = proc1.executeQuery();
  • Retrieve the values of each output-capable parameter using the appropriate getXXX method if the stored procedure program does not return a result set:

    String returnedValue = proc1.getString(2);
  • Process the result set as usual if the stored procedure does return a result set:

    while (rs.next())
     System.out.println("Returned value: " + rs.getString(2));
    

One last thing to note: You cannot just call any program object this way, without first registering the program in the DB2/400 SQL catalog. This is done using the SQL statement CREATE PROCEDURE. This need only ever be done once, either from your Java program by executing a Statement object with this command, or through interactive SQL on the AS/400 itself.
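As a sketch of that one-time registration, you might build and execute the CREATE PROCEDURE statement as shown below. The program name, parameter list, and LANGUAGE clause here are placeholders of our own, not taken from the text; adjust them to match your actual program object:

```java
import java.sql.Connection;
import java.sql.Statement;

public class RegisterProc {
    // Build the one-time CREATE PROCEDURE registration statement for an
    // existing *PGM object. Library, program, and parameter names are
    // hypothetical placeholders.
    static String createProcedureSql(String lib, String pgm) {
        return "CREATE PROCEDURE " + lib + "." + pgm +
               " (IN PARM1 INTEGER, INOUT PARM2 CHAR(10), OUT PARM3 CHAR(10))" +
               " LANGUAGE RPGLE" +
               " EXTERNAL NAME " + lib + "." + pgm +
               " PARAMETER STYLE GENERAL";
    }

    // With a live connection, registration would be a single executeUpdate call:
    static void register(Connection conn) throws Exception {
        Statement stmt = conn.createStatement();
        stmt.executeUpdate(createProcedureSql("MYLIB", "MYPROG"));
        stmt.close();
    }

    public static void main(String[] args) {
        // No database here, so just show the statement text that would be run.
        System.out.println(createProcedureSql("MYLIB", "MYPROG"));
    }
}
```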

Your program can both return a result set and update parameters for maximum flexibility. In fact, your program can return multiple result sets! This is handled in JDBC by using the execute method and iterating through the result sets via the methods getResultSet and getMoreResults that the CallableStatement class inherits from the Statement class.
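A sketch of that iteration pattern follows. The execute/getResultSet/getMoreResults loop is standard JDBC; the stand-in CallableStatement, built with a reflection proxy so the sketch can run without a database, is ours and pretends to return two result sets:

```java
import java.lang.reflect.Proxy;
import java.sql.CallableStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MultiResults {
    // Walk every result returned by a stored procedure call, counting the
    // result sets. Real code would read rows from each rs before moving on.
    static int countResultSets(CallableStatement proc) throws SQLException {
        int sets = 0;
        boolean isRS = proc.execute();
        while (isRS || proc.getUpdateCount() != -1) {
            if (isRS) {
                ResultSet rs = proc.getResultSet();
                // ... process rs.next() rows here ...
                sets++;
            }
            isRS = proc.getMoreResults();   // advance to the next result
        }
        return sets;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in CallableStatement pretending to return two result sets.
        int[] remaining = { 2 };
        CallableStatement fake = (CallableStatement) Proxy.newProxyInstance(
            CallableStatement.class.getClassLoader(),
            new Class<?>[] { CallableStatement.class },
            (p, m, a) -> {
                switch (m.getName()) {
                    case "execute":        return remaining[0] > 0;
                    case "getMoreResults": return --remaining[0] > 0;
                    case "getUpdateCount": return -1;  // no update counts pending
                    default:               return null;
                }
            });
        System.out.println(countResultSets(fake) + " result sets");
    }
}
```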

There is a good example of using stored procedures, including a sample RPG stored procedure program, in the IBM redbook Accessing the AS/400 with Java, SG24-2152.



What's New in JDBC 2.0

What you have read about in this chapter to this point is JDBC as it was originally designed, known as JDBC 1.0. Coincident with JDK 1.2.0, Sun made available JDBC 2.0 (built into JDK 1.2.0). This adds some exciting new functionality to JDBC 1.0, summarized briefly here. For more details on both the 1.0 and the 2.0 specifications, look at the HTML file index.html in the JDK subdirectory docs/guide/jdbc.

Data type enhancements

In JDBC 2.0, the following SQL type constants are added to the java.sql.Types class:

  • BLOB (Binary Large Objects). Often used to store graphics and other multimedia files.
  • CLOB (Character Large Objects). Often used to store documents.
  • ARRAY. An SQL array.
  • DISTINCT. A user-defined type (UDT), based on a built-in data type.
  • STRUCT. A UDT that can have any attributes.
  • REF. An SQL reference object.
  • JAVA_OBJECT. Used to store Java objects in your database.

To support these new types, there are new setXXX methods for substituting the markers in PreparedStatement objects and for setting the input, output, or both parameters of stored procedures in CallableStatement objects. These are setBlob, setClob, setArray, and setRef. There are also new corresponding getXXX methods in the ResultSet and CallableStatement classes. Note that DISTINCT types are really existing built-in types, so appropriate existing setXXX/getXXX methods are sufficient. Also, STRUCT types simply use the existing setObject/getObject methods.

To support the new get and set methods, new classes have been defined in java.sql named Blob, Clob, Array, and Ref. These are actually interfaces, not classes, but the JDBC driver suppliers implement these and supply real classes. If you decide to use these new types, have a look at the JavaDoc for help.

ResultSet enhancements

The new methods in Table 13.14 are added to the ResultSet class. As you can see, some significant capability has been added to result sets. You can scroll any direction, jump to any row, easily insert/update/delete the current row in the database, refresh the current row's contents from the database, and work with new data types. Phew!

Table 13.14: The ResultSet Enhancements in JDBC 2.0

  • absolute, relative (new). Moves the cursor to an explicit one-based row number, or a given number of rows forward or backward (using a negative number), respectively.
  • afterLast, beforeFirst (new). Moves the cursor to just after the last row or just before the first row, respectively.
  • cancelRowUpdates (new). Cancels updates made to the row via the new updateXXX methods but before updateRow.
  • deleteRow, updateRow, insertRow (new). Deletes or updates the current row from the database, or inserts the current row into the database.
  • first, last (new). Positions the cursor at the first or last row.
  • getArray, getBlob, getCharacterStream, getClob, getRef (new). Supports the new data types, as mentioned.
  • getBigDecimal (updated). No second parameter is needed for the scale anymore.
  • getDate (updated). You can now specify a GregorianCalendar object as the second parameter, which is used to construct a millisecond value if the database does not store time zone information.
  • getObject (updated). You can now specify a Map object as the second parameter.
  • getFetchDirection, setFetchDirection (new). Returns or sets the fetch direction: FETCH_FORWARD, FETCH_REVERSE, or FETCH_UNKNOWN.
  • getFetchSize, setFetchSize (new). Returns or sets the fetch size for this result set. This is a hint of the block size to the database.
  • getStatement (new). Returns the Statement object that produced this result set.
  • getType (new). Returns the type of the result set: TYPE_FORWARD_ONLY, TYPE_SCROLL_INSENSITIVE, or TYPE_SCROLL_SENSITIVE.
  • isAfterLast, isBeforeFirst, isFirst, isLast (new). Returns true if the cursor is after the last row, before the first row, on the first row, or on the last row, respectively.
  • previous (new). Moves the cursor to the previous row.
  • refreshRow (new). Refreshes the current row from the database.
  • rowDeleted, rowInserted, rowUpdated (new). Returns true if the row has been deleted, inserted, or updated.
  • updateXXX (new). Just like all the getXXX methods, but these update the values in the result set. The database is not updated until updateRow or insertRow is called.
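To see several of these methods working together, here is a sketch of a positioned update. The column name and value are invented for illustration, and the stand-in ResultSet (a reflection proxy that records the calls made to it) is ours, so the sequence can run without a database:

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ScrollUpdateDemo {
    // The JDBC 2.0 positioned-update sequence: jump straight to a row,
    // change a column value in the result set, then write it back.
    static void fixRow(ResultSet rs) throws SQLException {
        rs.absolute(2);                       // jump to row 2 (one-based)
        rs.updateString("COMMAND", "WRKOBJ"); // change the value in the result set...
        rs.updateRow();                       // ...and push the change to the database
    }

    public static void main(String[] args) throws Exception {
        // A real scrollable, updatable result set would come from a statement
        // created with:
        //   conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
        //                        ResultSet.CONCUR_UPDATABLE)
        List<String> calls = new ArrayList<>();
        ResultSet fake = (ResultSet) Proxy.newProxyInstance(
            ResultSet.class.getClassLoader(), new Class<?>[] { ResultSet.class },
            (p, m, a) -> {
                calls.add(m.getName());
                return m.getReturnType() == boolean.class ? true : null;
            });
        fixRow(fake);
        System.out.println(calls);
    }
}
```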

Something new: batch updates

We think batch updates are pretty cool. The Statement, PreparedStatement, and CallableStatement classes all now support a new method named executeBatch. This method is similar to execute, but rather than simply executing the one statement represented by the statement object, it executes all the statements that have been specified with the addBatch method of the statement object. The following example is from the JDBC 2.0 specification:

conn.setAutoCommit(false); 
Statement stmt = conn.createStatement(); 
stmt.addBatch("INSERT INTO employees VALUES (1000, 'Joe Jones')"); 
stmt.addBatch("INSERT INTO departments VALUES (260, 'Shoe')"); 
stmt.addBatch("INSERT INTO emp_dept VALUES (1000, 260)"); 
// submit a batch of update commands for execution 
int[] updateCounts = stmt.executeBatch();

This example was for unprepared statements, while the following is for prepared statements:

conn.setAutoCommit(false); 
PreparedStatement stmt = conn.prepareStatement(
 "INSERT INTO employees VALUES (?, ?)"); 
stmt.setInt(1, 2000); stmt.setString(2, "Kelly Kaufmann");
stmt.addBatch();
stmt.setInt(1, 3000); stmt.setString(2, "Bill Barnes");
stmt.addBatch();
// submit the batch for execution 
int[] updateCounts = stmt.executeBatch();

Only DDL and DML SQL statements that do not return result sets are valid candidates for batch execution. The batched statements are executed until the first one fails. If one fails, executeBatch throws a BatchUpdateException; its getUpdateCounts method returns an integer array with one element for each statement that executed successfully before the failure, so the number of elements in the array tells you how many batch statements succeeded.
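A sketch of that error handling follows; the table names echo the earlier example, and the summary helper is ours. The main method exercises only the summary logic, since no database is attached:

```java
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.Statement;

public class BatchDemo {
    // Summarize how far a failed batch got: getUpdateCounts() returns one
    // count per statement that executed before the failure.
    static String summarize(BatchUpdateException exc) {
        return exc.getUpdateCounts().length + " statements succeeded before the failure";
    }

    // The full pattern, for use with a live connection.
    static void runBatch(Connection conn) throws Exception {
        conn.setAutoCommit(false);
        Statement stmt = conn.createStatement();
        stmt.addBatch("INSERT INTO employees VALUES (1000, 'Joe Jones')");
        stmt.addBatch("INSERT INTO emp_dept VALUES (1000, 260)");
        try {
            int[] counts = stmt.executeBatch();  // one count per successful statement
            System.out.println(counts.length + " statements succeeded");
            conn.commit();
        } catch (BatchUpdateException exc) {
            System.out.println(summarize(exc));
            conn.rollback();
        }
    }

    public static void main(String[] args) {
        // Exercise just the summary logic, without a database.
        System.out.println(summarize(new BatchUpdateException("boom", new int[] { 2, 1 })));
    }
}
```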

Optional JDBC 2.0 enhancements

A new package named javax.sql has been designed as well, to support some advanced functionality, such as database connection pools and interfaces to other optional Java initiatives like the Java Naming and Directory Interface (JNDI) and the Java Transaction Service (JTS). This package is not part of the core JDK as of this writing. Instead, it is separately downloadable from the JavaSoft Web site. It is only of value if a database vendor implements it. You will probably see advanced implementations as part of the Enterprise JavaBeans offerings, such as in IBM's WebSphere Application Server product, Advanced and Enterprise editions.



What's New in JDBC 3.0

With the release of the Java 2 SDK, Standard Edition, version 1.4, Sun released version 3.0 of JDBC. This third version of JDBC adds yet more new functionality. The following is from the JDK documentation for JDBC 3.0:

"The JDBC 3.0 API, comprised of packages java.sql and javax.sql, provides universal data access from the Java programming language. Using the JDBC 3.0 API, you can access virtually any data source, from relational databases to spreadsheets and flat files. JDBC technology also provides a common base on which tools and alternative interfaces can be built.

New features include the ability to set savepoints in a transaction, to keep result sets open after a transaction is committed, to reuse prepared statements, to get metadata about the parameters to a prepared statement, to retrieve keys that are automatically generated, and to have multiple result sets open at one time. There are two new JDBC data types, BOOLEAN and DATALINK, with the DATALINK type making it possible to manage data outside of a data source. This release also establishes the relationship between the JDBC Service Provider Interface and the Connector architecture."



Embedded SQL in Java: SQLJ

You have seen the Java-supplied and industry-supported way of accessing data: JDBC. This has proven popular ever since Sun (with help from IBM, Oracle, and others) defined it and made it available in JDBC 1.0, with JDK 1.1.0. However, some have found it a bit unnatural for programmers accustomed to accessing their data via embedded SQL in other languages. To this end, Oracle began work on an embedded SQL specification for Java. Today, it is available not as part of the standard JDK, but from database vendors who wish to offer some added value. Even though it is not part of the JDK, it is still reasonably portable, as more and more database vendors support it. IBM supports it across its DB2 family, for example.

Embedded SQL in Java is called SQLJ. It is very similar to embedded SQL in other languages, but we think it is still valuable for you to know JDBC even if you choose to use SQLJ. Further, you can mix both SQLJ statements and JDBC calls in the same program.

SQLJ allows you to embed SQL statements directly inside Java source code, prefixed by the special string #sql. Such source code must exist in files that end in .sqlj versus .java. They are preprocessed into .java files by the SQLJ preprocessor (called a translator), which also produces a .ser file, known as a profile. This contains information about the embedded SQL statements, such as the database schema and the particular data that is accessed. As of V4R5, this is subsequently used to produce a database plan.

The manual Application Programming Guide and Reference for Java (S/390), SC26-9018, lists the following differences between JDBC and SQLJ:

  • SQLJ follows the static SQL model, and JDBC follows the dynamic SQL model.
  • SQLJ source programs are smaller than JDBC because much of the JDBC code is generated.
  • SQLJ does more data type checking during the program preparation process.
  • SQLJ allows use of embedded Java host expressions inside SQL statements versus separate JDBC calls for variable binding.
  • SQLJ executes statements under the authorization ID of the plan or package owner, checked at bind time. By contrast, JDBC does not know the authorization ID until runtime and so no checking is done until then.

Although this is all true, JDBC is quite easy to use and is getting better. We find the added simplicity of SQLJ to be offset by the fact it is yet another thing to learn (again, knowing JDBC anyway helps), and it requires more complexity to compile and run because of the extra step needed to preprocess the .sqlj file into a .java file, and ultimately a database plan. Obviously, we haven't cornered the market on this opinion, though, or SQLJ would not be so popular! So, it's your choice.

SQLJ is available as of V4R4 of OS/400, but that release only converts the embedded SQL statements into JDBC. As of V4R5, it actually creates database access plans, so it is in fact a better performer than JDBC because it is truly embedded static SQL versus dynamic SQL.

The following is a very brief introduction to SQLJ. The Java documentation for the AS/400 Developer Kit for Java offers more information. Follow the links for the ADK from www.ibm.com/java. Also, see www.sqlj.org for a reference manual and user guide.

Writing embedded SQL in Java

Your .sqlj files will all start by importing the JDBC and SQLJ packages, like this:

import java.sql.*; 
import sqlj.runtime.*; 
import sqlj.runtime.ref.*;

You must also declare a connection context, often right after the import statements:

#sql context MyCtx;

Think of this as defining a new class, which represents a database connection. However, to instantiate an instance of this class, you need to do a traditional JDBC driver registration and connection first, passing the Connection object to the context constructor, as follows:

DB2400 db2400 = new DB2400(); 
Connection conn = db2400.getConnection(); 
conn.setAutoCommit(false); // required for sqlj 
MyCtx ctx = new MyCtx(conn);

To subsequently work with embedded SQL, for the most part, you simply embed your SQL statements by prefixing them with #sql and using braces. You can also use the traditional :variable syntax to bind Java variables to variable parts of the SQL statements, for example:

try {
 String state = "PA"; 
 #sql { DELETE FROM CUSTOMER WHERE STATE=:state }; 
} catch (SQLException exc) {
 System.out.println("Failed: " + exc.getMessage());
}

Notice that you must still put all your statements inside try/catch blocks for SQLException. Further, you should subsequently use the Connection's commit method to commit your transactions, and ultimately close the Connection object.

By default, the SQL statements are associated with your connection context. However, you can create multiple connection-context objects. If so, you must identify which one you want to use for each SQL statement, like this:

#sql [ctx] { DELETE FROM CUSTOMER WHERE STATE=:state };

Note that the :variable can actually be a Java expression if you want to compute the value at runtime.

Where things get more interesting is with queries via the SELECT statement (multiple row queries, not single row SELECT INTO queries, which can simply be embedded). For this, you need to first declare an iterator, which is very similar to a cursor in embedded SQL inside RPG. This is also done at the top of the file, after the import statements and before the class definition, and looks like this:

#sql iterator MyIter(String name, BigDecimal balance);

This also effectively declares a new class, which will have methods matching the variable names you declare as parameters. You declare one variable per column you will be accessing in your database, and the methods are used to retrieve the data for that column in the current result set row, as you will see. The names are arbitrary. They do not need to match the column names, but they should be similar for easy maintenance. To create an instance of your iterator class, you embed a SELECT statement:

MyIter myIterObj; 
#sql myIterObj = { SELECT NAME, BALANCE FROM CUSTOMER };

This actually does the query, and under the covers, gets you a result set. To walk that result set, use the supplied next method in the iterator object, and for each element, use the methods you identified in the declaration to get each column's data:

while (myIterObj.next()) 
{
 String name = myIterObj.name(); 
 BigDecimal balance = myIterObj.balance(); 
}
myIterObj.close();

Got it? Note that you must close the iterator object yourself, too.

If you don't want to use column names, there is a variant of iterators known as positioned iterators that we leave to you to explore. Further, if you wish to do positioned updates or deletes to the current result set row, you use updatable iterators.

There is a lot more detail to be discussed with SQLJ, but we do not cover it here in the same detail as JDBC. Rather, we leave the detailed reading to you, if you decide to use it. However, a full example of what is covered here is shown in Listing 13.7. It is the same example as in Listing 13.3, redone with SQLJ instead of JDBC.

Listing 13.7: A SQLJ Program for Querying the PDM Options File

import java.sql.*;
import sqlj.runtime.*;
import sqlj.runtime.ref.*;

#sql context PDMctx;  // connection context declaration
#sql iterator PDMiter (String option, String command);

public class QryPDMOptions
{
    public static void main(String args[])
    {
        System.out.println("Welcome to QryPDMOptions");
        try
        {
            System.out.println("Connecting to AS/400...");
            DB2400 db2400 = new DB2400();
            Connection conn = db2400.getConnection();
            conn.setAutoCommit(false);
            System.out.println("Preparing context...");
            PDMctx connCtx = new PDMctx(conn);
            System.out.println("Doing query...");
            PDMiter iter;
            #sql [connCtx] iter =
                { SELECT OPTION, COMMAND FROM QGPL.QAUOOPT };
            System.out.println("Displaying results...");
            while (iter.next())
            {
                String option = iter.option();
                String command = iter.command().trim();
                System.out.println(option + " " + command);
            }
            System.out.println("Closing everything...");
            conn.commit();
            iter.close();
            conn.close();
            System.out.println("Done.");
        } catch (SQLException exc) {
            System.out.println("SQL Error......: " + exc.getMessage());
        }
        System.exit(0);
    } // end main
}

Preparing and compiling embedded SQL in Java

Having coded a .sqlj file, you run the translator to get the runnable class and its required profile:

java sqlj.tools.Sqlj QryPDMOptions.sqlj

For this to work, you must ensure your CLASSPATH includes the following:

/QIBM/ProdData/Java400/ext/runtime.zip 
/QIBM/ProdData/Java400/ext/translator.zip

With this successfully done, you are ready to run your Java class as usual:

java QryPDMOptions

We did this on our AS/400 and got the same results as for Listing 13.3. (Here's a tip for working inside QSHELL: use redirection to capture output, as in java QryPDMOptions > pdm.out.)



Summary

This chapter, admittedly, covered lots of different topics, including the following:

  • A brief discussion of RPG and SQL database access and commitment control.
  • The architecture of JDBC for database access in Java: Java supplies a driver manager, and database vendors supply their own JDBC database drivers. You code the Java-supplied JDBC classes only; the database drivers "snap in" underneath them.
  • The steps needed to use JDBC classes in Java programs to access and manipulate DB2/400 data: load the driver, connect to the database, create statement objects, execute the statements, process the results, and finally close the necessary objects.
  • An overview of what is new in JDBC 2.0 and 3.0, the new generations of JDBC.
  • An overview of SQLJ, IBM and Oracle's support for embedded SQL inside Java.

You might not be an expert yet at programming database access in Java, but you are certainly on your way!



References

For more information on the subjects discussed in this chapter, you are referred to one of the many competent references, including these:

  • Accessing the AS/400 with Java. IBM redbook: SG24-2152-00.
  • AS/400 Client/Server Performance Using the Windows Clients. IBM redbook: SG24-4526-01.
  • AS/400 Java home page: www.ibm.com/iseries/java.
  • Conte, Paul. 1997. Database Design and Programming for DB2/400. Duke Press, ISBN 1882419065.
  • DB2 Product Family home page: www.ibm.com/software/data/db2.
  • Fisher, Maydene, et al. 1999. JDBC API Tutorial and Reference, Second Edition: Universal Data Access for the Java 2 Platform. Addison-Wesley, ISBN 0201433281.
  • Hamilton, Cattell, and Fisher. 1997. JDBC Database Access with Java-A Tutorial and Annotated Reference. Addison-Wesley, ISBN 0201309955.




Java for RPG Programmers, 2nd Edition
ISBN: 193118206X
Year: 2002
Pages: 191