A rowset is a container for tabular data, encapsulating a number of rows that have been retrieved from a data source. Rowsets were introduced as a standard extension in JDBC 2.0. The javax.sql.RowSet interface extends the java.sql.ResultSet interface with support for additional capabilities, such as using a data source to populate the rowset and assigning listeners that can be notified when the rowset is manipulated.
WebLogic comes with its own implementation of a disconnected rowset, sometimes called a cached rowset. This rowset can be populated in two ways: either from an existing ResultSet or using an SQL query that is executed against a data source. The rowset then can be sent to some remote client, updated at the client's end, and later returned to the server and synchronized with the underlying database. Because the population of the rowset and the synchronization of the rowset data with the database typically will occur in separate transactions, WebLogic provides a number of nifty optimistic concurrency features that you can employ to control the synchronization. You also can export the rowset and its metadata to an XML document and schema, respectively. Likewise, XML data can be imported into the rowset. WebLogic's rowset implementation can be found in the weblogic.jdbc.rowset package.
To illustrate the features of WebLogic's cached rowsets, we shall refer to a database table, stockitems, which is defined as follows:
CREATE TABLE stockitems (
  id integer primary key,
  name varchar(50),
  available integer
)
Moreover, we assume that you've configured a data source with the JNDI name myds.
5.4.1 Creating and Populating Rowsets
A disconnected rowset is manufactured from a RowSetFactory instance. The following example shows how to create a WLCachedRowSet instance:
import weblogic.jdbc.rowset.RowSetFactory;
import weblogic.jdbc.rowset.WLCachedRowSet;
//...
RowSetFactory factory = RowSetFactory.newInstance( );
WLCachedRowSet rs = factory.newCachedRowSet( );
Once you have created an empty cached rowset, you need to populate it with data. Often, this can be done using the rows of an existing ResultSet object, obtained from an earlier JDBC call:
javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("myds");
java.sql.Connection con = ds.getConnection( );
java.sql.Statement st = con.createStatement( );
ResultSet resultSet = st.executeQuery("select id,name,available from stockitems");
rs.populate(resultSet);
con.close( );
Another way to populate the rowset is to configure a data source and then issue an SQL query that returns one or more rows:
javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("myds");
rs.setDataSource(ds);
rs.setCommand("select id,name,available from stockitems");
rs.execute( );
In this case, when you invoke the execute( ) method on the rowset, WebLogic uses the configured data source to obtain a connection from the pool, and then uses the SQL query to load the rowset with the returned rows. Later in this chapter, in Section 5.4.6, we shall see how to import data from an XML document into a rowset.
Once a rowset is populated, it no longer holds on to any open cursors or connections with the database. The rowset is disconnected from the database, and provides an in-memory copy of the data held in the database at the time of population. No locks are held, either by the database or by the rowset.
5.4.2 Manipulating Rowsets
Because the RowSet interface extends the standard ResultSet interface, you can iterate over the rows using the next( ) method, and use the appropriate getXXX( ) methods to read the columns of each row. The following example illustrates this straightforward approach:
rs.beforeFirst( );
while (rs.next( )) {
  pw.println("Got " + rs.getInt("id") + ": " +
             rs.getString("name") + ":" + rs.getInt("available"));
}
Inserting rows into an existing rowset is equally easy. The following example shows how to insert rows into our cached rowset:
rs.moveToInsertRow( );      // move to the special "insert" row
rs.updateInt("id", count++);
rs.updateString("name", "carrots");
rs.updateInt("available", 3+count);
rs.insertRow( );            // mark the row for insertion

rs.updateInt("id", count++);
rs.updateString("name", "pears");
rs.updateInt("available", 3+count);
rs.insertRow( );            // mark the row for insertion

rs.acceptChanges( );
rs.moveToCurrentRow( );     // return to the current position in the rowset
Because the rowset represents an in-memory copy of data held in the database, any updates you make to the rowset occur in memory. For this reason, you must invoke the acceptChanges( ) method on the rowset to flush any rowset changes back to the database. In the preceding example, the synchronization is handled by the rowset implementation, which issues the appropriate SQL INSERT commands to the underlying database.
Because the cached rowset implements the java.io.Serializable interface, you can send it as an RMI parameter or return value. If you do send it to a remote client, typically the client will not need to invoke the acceptChanges( ) method. Instead, it will modify the rowset and send it back to the server, which will perform a more advanced synchronization.
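The following sketch outlines this round trip, assuming a hypothetical server-side facade (the StockItemService class and its method names are illustrative only; the data source JNDI name myds is the one used throughout this section):

import javax.naming.InitialContext;
import javax.sql.DataSource;
import weblogic.jdbc.rowset.RowSetFactory;
import weblogic.jdbc.rowset.WLCachedRowSet;

public class StockItemService {

  // Build a rowset on the server and return it to the remote caller;
  // the rowset is serialized together with its data and metadata.
  public WLCachedRowSet fetchStockItems( ) throws Exception {
    DataSource ds = (DataSource) new InitialContext( ).lookup("myds");
    WLCachedRowSet rowSet = RowSetFactory.newInstance( ).newCachedRowSet( );
    rowSet.setDataSource(ds);
    rowSet.setCommand("select id,name,available from stockitems");
    rowSet.execute( );
    // Note: the rowset metadata may still need table and primary key
    // information before client updates can be synchronized (see below).
    return rowSet;
  }

  // Accept a rowset that the client has modified offline and
  // synchronize its pending changes with the database.
  public void saveStockItems(WLCachedRowSet modified) throws Exception {
    // Re-attach a data source on the server before synchronizing.
    modified.setDataSource((DataSource) new InitialContext( ).lookup("myds"));
    modified.acceptChanges( );
  }
}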
To update or delete existing rows within a rowset, you often need to manipulate the metadata associated with the rowset. In general, when you populate the rowset using an SQL query, the rowset implementation relies on the ResultSetMetaData interface to learn about the tables and column names of the data in the rowset. Unfortunately, not all JDBC drivers will supply the metadata for the set of rows returned after issuing the SQL query. For instance, a JDBC driver may return an empty string when asked for the table name that is associated with a particular column. Without the table name, the rowset can be used only in a read-only fashion. In such cases, you often will need to step in and fill in some of the gaps in the rowset metadata. For example, suppose you wish to update the second row loaded into the rowset:
// Updates the second row of the rowset
rs.beforeFirst( );
rs.next( );
rs.next( );
rs.updateString("name", "peanuts");
rs.updateRow( );       // always call updateRow after an update
rs.acceptChanges( );
Even though this code looks pretty bland, you will get the following exception when you run it against our setup:
java.sql.SQLException: Unable to determine the table name for column: 'id'. Please ensure that you've called WLRowSetMetaData.setTableName to set a table name for this column.
A similar situation arises if you try to delete rows from the rowset:
rs.beforeFirst( );
rs.next( );            // Find the row to delete
rs.deleteRow( );       // Mark it for deletion
rs.next( );            // Find the next one to delete
rs.deleteRow( );       // Mark it for deletion
rs.acceptChanges( );   // Perform the deletions
The reason for this is clear: sadly, our JDBC driver has taken the liberty of supplying incomplete metadata for the rowset. To rectify this situation, and to ensure that your code works across JDBC drivers, you need to manipulate the rowset metadata so that any in-memory row updates or deletes can be synchronized properly with the database. The following code shows how to retrieve the metadata for the rowset, set the table name for all columns in the rowset, and identify the primary key column:
WLRowSetMetaData metaData = (WLRowSetMetaData) rs.getMetaData( );
metaData.setTableName("stockitems");
metaData.setPrimaryKeyColumn("id", true);
Given these changes to the rowset metadata, the earlier row updates and deletes will now succeed. Note that when you set a column to be the primary key, that column is made read-only. Thus, if you need to manipulate the primary key column, you must call the setReadOnly( ) method to make it read-write:
metaData.setReadOnly("id", false);
Fortunately, you also can minimize the amount of metadata hacking by invoking the executeAndGuessTableName( ) method on the WLCachedRowSet instance. Not only does this method populate the cached rowset, but it also parses the SQL query and extracts the table name (the first word following the SQL FROM keyword within the query). The executeAndGuessTableNameAndPrimaryKeys( ) method, in addition, looks up the table in java.sql.DatabaseMetaData and tries to determine the table's primary key columns. The following code sample shows how to avoid metadata hacking and eliminate the earlier SQLException:
rs.setDataSource(ds);
rs.setCommand("select id,name,available from stockitems");
rs.executeAndGuessTableNameAndPrimaryKeys( );
// find the row you want to update, then:
rs.updateString("name", "peanuts");
rs.updateRow( );
rs.acceptChanges( );
Finally, if the SQL query used to populate a rowset uses a column alias, you need to update the rowset metadata and specify the name of the actual column that should be associated with the alias so that the rowset implementation knows which table column to update. Here is how you specify the true column name for the alias:
metaData.setWriteColumnName("aliasName", "tableColumnName");
5.4.3 Transactions and Rowsets
WebLogic's rowsets can participate in transactions, just like any other JDBC resource. If the rowset needs to participate in a distributed transaction, you must ensure that the rowset is assigned an XA-aware data source. In this case, any optimistic conflicts that arise during synchronization will cause the distributed transaction to roll back. If the rowset is not being used within a distributed transaction, the rowset will use a simple local transaction. During the synchronization, the rowset implementation will first execute a setAutoCommit(false) on the connection obtained from the pool, execute any SQL statements needed to reflect the in-memory changes to the rowset, and finally invoke commit( ) on the same connection. If an optimistic conflict occurs in this scenario, the local transaction rolls back and, of course, the database remains unchanged.
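As a rough sketch, a server-side component could wrap the synchronization in a JTA transaction; the XA-aware data source JNDI name myxads below is hypothetical:

import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
//...
InitialContext ctx = new InitialContext( );
UserTransaction tx =
    (UserTransaction) ctx.lookup("javax.transaction.UserTransaction");
tx.begin( );
try {
  rs.setDataSource((DataSource) ctx.lookup("myxads"));  // XA-aware data source
  rs.acceptChanges( );   // enlists in the distributed transaction
  // ... other transactional work (JMS, EJBs, more JDBC) could go here
  tx.commit( );          // an optimistic conflict rolls back the whole transaction
} catch (Exception e) {
  tx.rollback( );
  throw e;
}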
5.4.4 Synchronizing Rowsets
WebLogic's RowSet objects implement the java.io.Serializable interface. This means you can pass a rowset as an argument to an EJB method, or define an EJB method that returns a rowset populated with data from a data source. For example, you could define an EJB method that returns all items that need to be restocked, using a RowSet object as the container for the data. Now imagine a supplier that remotely updates the rowset. When the rowset is sent back to the EJB so that it can update its stock availability, it must synchronize the data in the rowset with the actual data in the database. Of course, you have no guarantee that the database hasn't changed since the rowset was last populated.
WebLogic provides a number of optimistic concurrency schemes that it can use to synchronize the rowset. Some of these schemes are reminiscent of the optimistic concurrency setting for EJBs, discussed in Chapter 11. Optimistic concurrency requires WebLogic to conditionally update the database, based on SQL that matches the rowset data against the current database data. By default, the rowset implementation performs a consistency check against all of the columns that were read into the rowset. In the rest of this section, we look at the various optimistic concurrency schemes for rowsets using a number of SQL updates as examples. The same discussion holds for any calls to the deleteRow( ) method on the rowset as well.
You need to access the rowset metadata and set the optimistic concurrency policy before you synchronize the rowset changes with the database. As usual, you must invoke the acceptChanges( ) method on the WLCachedRowSet instance to execute the synchronization. The constants that define the different optimistic concurrency schemes are available in the WLRowSetMetaData interface. Here is an example:
try {
  WLRowSetMetaData metaData = (WLRowSetMetaData) rs.getMetaData( );
  metaData.setOptimisticPolicy(WLRowSetMetaData.VERIFY_READ_COLUMNS);
  rs.acceptChanges( );
} catch (OptimisticConflictException e_oc) {
  // Handle this in whichever way is suited to your application
}
If the rowset implementation detects that the underlying data in the database has changed since the time the rowset was populated, the synchronization attempt fails with an OptimisticConflictException, as indicated earlier. The choice of optimistic concurrency scheme will depend on the needs of your application. Your decision will be a simple trade-off between data consistency and the probability of an optimistic conflict.
Assume that you've read a single row from the stockitems table, updated the value of the available field, and now need to persist these changes back to the database. The following code sample illustrates this scenario:
// Assume that for id=1, name="melons", available=4
rs.setCommand("select id,name,available from stockitems where id=1");
rs.executeAndGuessTableNameAndPrimaryKeys( );
rs.next( );
rs.updateInt("available", 10);
rs.updateRow( );
rs.acceptChanges( );
Now we are in a position to examine the different optimistic concurrency schemes.
5.4.4.1 VERIFY_READ_COLUMNS
This option is the default optimistic concurrency policy. Any update or delete issued during synchronization must ensure that none of the columns read into the rowset has changed in the database. This scheme provides you with the strongest consistency check during synchronization. Refer to the VERIFY_AUTO_VERSION_COLUMNS policy for a cheaper alternative. For our example, this means that the rowset implementation will synchronize the update by issuing the following SQL:
UPDATE stockitems SET available=10 WHERE id=1 AND name='melons' AND available=4
5.4.4.2 VERIFY_AUTO_VERSION_COLUMNS
This optimistic concurrency policy assumes that the underlying database table has an additional version column, of integer type. It uses the specified version column to check for consistency, and automatically increments the version column as part of the database update. The next example shows how to enable this scheme. It assumes that a version column called version is present in the stockitems table:
WLRowSetMetaData metaData = (WLRowSetMetaData) rs.getMetaData( );
metaData.setOptimisticPolicy(WLRowSetMetaData.VERIFY_AUTO_VERSION_COLUMNS);
metaData.setAutoVersionColumn("version", true);
rs.acceptChanges( );
Thus, if the version column was set to 1 when the rowset was populated, the rowset update will be synchronized using the following SQL:
UPDATE stockitems SET available=10, version=2 WHERE id=1 AND version=1
In this way, auto-versioning provides you with a very strong consistency check, at little cost. However, any other clients that update the stockitems table must also update the version column appropriately.
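For instance, a client that bypasses the rowset and issues its own SQL against the table would have to bump the version column itself; a representative update might look like this:

UPDATE stockitems SET available=7, version=version+1 WHERE id=1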
5.4.4.3 VERIFY_VERSION_COLUMNS
This policy behaves just like the VERIFY_AUTO_VERSION_COLUMNS policy, except that the rowset implementation does not increment the version column. Choose this optimistic concurrency policy, for instance, if you have assigned a database trigger that automatically increments the version column when the row is updated. In our example, the rowset implementation will use the following SQL to synchronize the update:
UPDATE stockitems SET available=10 WHERE id=1 AND version=1
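For illustration, an Oracle-style trigger that maintains the version column might look like the following; treat the exact syntax as an assumption tied to your particular database:

CREATE OR REPLACE TRIGGER stockitems_version
BEFORE UPDATE ON stockitems
FOR EACH ROW
BEGIN
  -- increment the version column on every update to the row
  :NEW.version := :OLD.version + 1;
END;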
5.4.4.4 VERIFY_MODIFIED_COLUMNS
This optimistic concurrency policy ensures that only the updated columns are checked for consistency. Of course, the primary key columns also are matched. This scheme ignores any nonprimary key columns that were not updated. For our example, this means that the rowset will be synchronized using the following SQL:
UPDATE stockitems SET available=10 WHERE id=1 AND available=4
5.4.4.5 VERIFY_SELECTED_COLUMNS
This optimistic concurrency policy matches the primary key columns and enforces the consistency checks on a selected number of columns only. This scheme ignores any nonprimary key columns and columns not selected for consistency checks. You must call the setVerifySelectedColumn( ) method to explicitly select those columns that you want verified during synchronization. The next example shows how to force the verification of the available column:
WLRowSetMetaData metaData = (WLRowSetMetaData) rs.getMetaData( );
metaData.setOptimisticPolicy(WLRowSetMetaData.VERIFY_SELECTED_COLUMNS);
metaData.setVerifySelectedColumn("available", true);
rs.acceptChanges( );
This results in the following SQL being used to synchronize the rowset update:
UPDATE stockitems SET available=10 WHERE id=1 AND available=4
5.4.4.6 VERIFY_NONE
This optimistic concurrency policy matches only the primary key columns. No consistency checks are made, and the rowset implementation blindly overwrites changes to the table without verifying whether the underlying data has changed. For our example, this means the following SQL is used to synchronize the rowset update:
UPDATE stockitems SET available=10 WHERE id=1
5.4.4.7 Caveats
Regardless of the optimistic concurrency policy you choose, the following caveats always hold true during synchronization:
The easiest way to debug synchronization problems is to view the SQL that is being generated. Call the WLRowSetMetaData.setVerboseSQL( ) method to enable this debugging capability. When this debugging is enabled for the rowset, the rowset implementation sends the generated SQL to System.out during synchronization.
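For example, assuming the method takes a boolean flag to switch the behavior on and off:

WLRowSetMetaData metaData = (WLRowSetMetaData) rs.getMetaData( );
metaData.setVerboseSQL(true);   // echo the generated SQL to System.out
rs.acceptChanges( );            // watch the server console during synchronization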
5.4.4.8 Group deletes and JDBC batching
WebLogic provides two ways to optimize how a rowset synchronizes its in-memory updates with the database:
Group deletes
Normally, when you delete multiple rows from a rowset, the rowset implementation emits a separate SQL DELETE statement for each deleted row during synchronization. By enabling group deletes, you can specify how many deleted rows should be folded into a single SQL DELETE statement. For the two deletions in the earlier example, and depending on your optimistic policy, the rowset implementation will generate the following SQL command:
// Deletes two items in a single delete
DELETE FROM stockitems WHERE (id=1 AND version = 1) OR (id=2 AND version = 1)
The following code enables this feature, and sets the group delete size for the rowset:
metaData.setGroupDeletes(true);
metaData.setGroupDeleteSize(42);
JDBC batching
WebLogic's rowset implementation also supports JDBC 2.0 statement batching. Instead of sending each individual statement to the JDBC driver, the rowset implementation sends a collection of statements in a batch. To enable statement batching, you should invoke the setBatchInserts( ), setBatchDeletes( ), or setBatchUpdates( ) method on a WLCachedRowSet instance. By default, statement batching is disabled. These methods must be invoked before you synchronize any rowset changes using the acceptChanges( ) method.
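A minimal sketch, assuming each of these setters takes a boolean flag:

rs.setBatchInserts(true);
rs.setBatchUpdates(true);
rs.setBatchDeletes(true);
rs.acceptChanges( );   // pending inserts, updates, and deletes are now sent in batches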
For Oracle JDBC drivers, the rowset implementation handles statement batching slightly differently. Here, batch updates automatically generate an additional SELECT query, which is used to determine whether an optimistic conflict occurred. Furthermore, batch deletes in Oracle are executed as a group delete.
5.4.5 Rowsets Spanning Multiple Tables
The data in a rowset could be populated from a join of a number of tables. For example, suppose your database also includes the following table:
CREATE TABLE itemprice (
  id integer primary key,
  price double
)
In that case, you could populate the rowset with the following SQL query:
rs.setCommand("select i.id,i.name,p.price from stockitems as i join itemprice as p on i.id = p.id"); rs.execute( );
When a rowset spans multiple tables, any in-memory updates and deletes, as well as the optimistic concurrency policy, all work slightly differently.
Often, you will need a much simpler delete policy. For example, if you have a one-to-many relationship between two tables, you may want to delete only the records in the table on the "many" side of the relationship. To achieve this delete behavior, you simply need to call the WLRowSetMetaData.setWriteTableName( ) method. The following example shows how a deleteRow( ) causes only the related record in the itemprice table to be deleted during synchronization:
WLRowSetMetaData metaData = (WLRowSetMetaData) rs.getMetaData( );
metaData.setTableName("price", "itemprice");   // Help the metadata
metaData.setWriteTableName("itemprice");       // Only touch the itemprice table
rs.beforeFirst( );
rs.next( );        // Delete the first row
rs.deleteRow( );
rs.acceptChanges( );
The setWriteTableName( ) method effectively marks any column that doesn't belong to the write table as read-only.
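Assuming this marking is reflected through the standard ResultSetMetaData.isReadOnly( ) accessor, a quick way to inspect it after setting the write table is a sketch like this:

WLRowSetMetaData metaData = (WLRowSetMetaData) rs.getMetaData( );
metaData.setWriteTableName("itemprice");
for (int col = 1; col <= metaData.getColumnCount( ); col++) {
  // report which columns the rowset will allow you to write
  System.out.println(metaData.getColumnName(col) +
      (metaData.isReadOnly(col) ? " is read-only" : " is writable"));
}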
5.4.6 XML and Rowsets
The data in a rowset can be exported to an XML document. In addition, the rowset metadata can be exported to an XML schema that describes the structure of the XML document. This provides an alternative way to serialize a rowset. An XML document and schema also can be used to construct a rowset, thereby providing an alternative way to populate a rowset. Use the writeXML( ) method on a WLCachedRowSet instance to export the rowset to an XML document, and use the writeXMLSchema( ) method on the WLRowSetMetaData instance to export the metadata to an XML schema. These methods use WebLogic's XML Streaming API, described later in Chapter 18. The following example shows how to create a schema and XML output file from a rowset:
import java.io.FileOutputStream;
import weblogic.xml.stream.XMLOutputStream;
import weblogic.xml.stream.XMLOutputStreamFactory;
//...
XMLOutputStreamFactory xof = XMLOutputStreamFactory.newInstance( );
XMLOutputStream xos = null;

FileOutputStream os = new FileOutputStream("/tmp/stockitems.xsd");
xos = xof.newDebugOutputStream(os);
metaData.writeXMLSchema(xos);
xos.close( );
os.close( );

os = new FileOutputStream("/tmp/stockitems.xml");
xos = xof.newDebugOutputStream(os);
rs.writeXML(xos);
xos.close( );
os.close( );
Here is an example of the kind of XML generated to represent the state of the rowset. The element names are derived from the table and column names; only the first row is shown, and the exact document structure may vary with the WebLogic release:

<stockitems>
  <id>1</id>
  <name>carrots</name>
  <available>52</available>
</stockitems>
Populating a rowset from an XML document and schema is equally easy. Use the loadXMLSchema( ) method to load the rowset metadata, and use the loadXML( ) method to load the data into the rowset itself:
import java.io.FileInputStream;
import weblogic.xml.stream.XMLInputStream;
import weblogic.xml.stream.XMLInputStreamFactory;
//...
XMLInputStreamFactory xif = XMLInputStreamFactory.newInstance( );
WLCachedRowSet rs = factory.newCachedRowSet( );

// load the metadata information from the XML schema
WLRowSetMetaData metaData = (WLRowSetMetaData) rs.getMetaData( );
FileInputStream fis = new FileInputStream("/tmp/stockitems.xsd");
XMLInputStream xis = xif.newInputStream(fis);
metaData.loadXMLSchema(xis);
xis.close( );
fis.close( );

// load the rowset data from the XML file
fis = new FileInputStream("/tmp/stockitems.xml");
xis = xif.newInputStream(fis);
rs.loadXML(xis);
xis.close( );
fis.close( );
Table 5-7 explains how the JDBC types are mapped to XML schema types.
JDBC type | XML schema type
---|---
BIGINT | xsd:long
BINARY | xsd:base64Binary
BIT | xsd:boolean
BLOB | xsd:base64Binary
BOOLEAN | xsd:boolean
CHAR | xsd:string
DATE | xsd:dateTime
DECIMAL | xsd:decimal
DOUBLE | xsd:decimal
FLOAT | xsd:float
INTEGER | xsd:int
LONGVARBINARY | xsd:base64Binary
LONGVARCHAR | xsd:string
NUMERIC | xsd:integer
REAL | xsd:double
SMALLINT | xsd:short
TIME | xsd:dateTime
TIMESTAMP | xsd:dateTime
TINYINT | xsd:byte
VARBINARY | xsd:base64Binary
VARCHAR | xsd:string
Table 5-8 describes how XML schema types are mapped to equivalent JDBC types.
XML schema type | JDBC type
---|---
base64Binary | BINARY
boolean | BOOLEAN
byte | SMALLINT
dateTime | DATE
decimal | DECIMAL
double | DOUBLE
float | FLOAT
hexBinary | BINARY
int | INTEGER
integer | NUMERIC
long | BIGINT
short | SMALLINT
string | VARCHAR