Here's another task that could take too long.
This task can be accomplished with optimistic locking and fetching into an array, but there's a better way.
Assume you have a table with an indexed primary or unique key. Assume further that the DBMS optimizes the MAX and MIN functions, and supports some version of the non-standard SQL-extension SET ROWCOUNT command (each of the Big Eight does). Here's the transaction to use:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SQLSetStmtAttr(...SQL_ATTR_MAX_ROWS,25,...)   /* SET ROWCOUNT equivalent */
SELECT * FROM Table1
  WHERE unique_field > :last_unique_field
  ORDER BY unique_field
for (n=1; n<=25; ++n) {
  FETCH ...
  }
ROLLBACK
NB: last_unique_field = value of unique_field in the last row fetched in the loop
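The transaction above can be sketched in runnable form. This is an illustration only, not the book's code: it uses Python with SQLite, substituting LIMIT for the SET ROWCOUNT / SQL_ATTR_MAX_ROWS row cap, and the table, column, and data are invented to match the names in the listing.

```python
import sqlite3

# Sample table standing in for Table1; unique_field is the indexed unique key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (unique_field INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO Table1 VALUES (?, ?)",
    [(i, f"row {i}") for i in range(1, 61)],   # 60 rows -> 3 screens of 25/25/10
)

PAGE = 25
last_unique_field = 0          # start before the first key
pages = []
while True:
    # Seek past the last key fetched; LIMIT plays the SET ROWCOUNT role here.
    rows = conn.execute(
        "SELECT unique_field, payload FROM Table1 "
        "WHERE unique_field > ? ORDER BY unique_field LIMIT ?",
        (last_unique_field, PAGE),
    ).fetchall()
    if not rows:
        break                  # empty fetch = end of table
    pages.append(rows)
    last_unique_field = rows[-1][0]   # remember the last unique_field fetched

print([len(p) for p in pages])   # → [25, 25, 10]
```

Because each "screenful" is a fresh short statement rather than a held-open cursor, no locks outlive the fetch loop, which is the point of the trick.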
This is another example of a READ COMMITTED transaction. The second time this is run, the value of last_unique_field will be the same as it was for the twenty-fifth row the previous time the transaction was run, so it's possible to go forward indefinitely. It's trivial to conceive of going both forward and backward, detecting end, and so on. The only essential matter is that, because the transaction ends after each screenful, Non-repeatable Reads and Phantoms are inevitable in such scroll cursor emulations. This scan trick works even if you're updating, but you should avoid changing the unique key. (Note that this functionality may be provided automatically by a higher-level interface; see our discussion of cursor rowsets in Chapter 12, "ODBC." Cursor rowsets act similarly, but may lock too much.)
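One way the backward direction mentioned above might look (an assumption, not the book's code): seek below the first key on the current screen, scan the index in descending order, then reverse the fetched page so it still reads top-to-bottom. Again Python with SQLite stands in for the ODBC calls, and the names are invented.

```python
import sqlite3

# Same sample table as before: 60 rows keyed 1..60.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (unique_field INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO Table1 VALUES (?)", [(i,) for i in range(1, 61)])

def page_back(first_unique_field, page=25):
    """Fetch the screenful that precedes the row with key first_unique_field."""
    rows = conn.execute(
        "SELECT unique_field FROM Table1 "
        "WHERE unique_field < ? ORDER BY unique_field DESC LIMIT ?",
        (first_unique_field, page),
    ).fetchall()
    return [r[0] for r in reversed(rows)]   # restore ascending display order

# If the current screen starts at key 26, the previous screen is keys 1..25;
# an empty result detects the beginning of the table.
print(page_back(26)[:3], page_back(26)[-1])   # → [1, 2, 3] 25
```

The same pattern with MIN/MAX probes (e.g. comparing against SELECT MIN(unique_field)) detects the ends of the table cheaply on a DBMS that optimizes those functions, as the text assumes.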