Data-Change Statements

The SQL Standard defines an "SQL-data-change statement" as any of: INSERT, UPDATE, DELETE. If, and only if, a data change involves a large number of rows, you can speed up the ODBC execution of the transaction.

One way to improve data-change performance is to pass several data-change statements at once, to reduce the overhead of network transmissions. Here are three common ways to pass multiple data-change statements:

  • If you're using ODBC 3.x, call the function SQLBulkOperations, which allows array passing. This function is not available in standard SQL/CLI, nor in all implementations.

  • If you're using standard SQL:1999, enclose multiple statements in a compound statement block, for example:

     SQLExecDirect(
       hstmt,
       "BEGIN \
          UPDATE Table1 SET column1 = 5; \
          UPDATE Table2 SET column1 = 6; \
        END",
       SQL_NTS);

Portability

Ingres, InterBase, MySQL, and Oracle don't support this feature.


  • Some DBMSs allow INSERTs to contain multiple clauses, as required by the SQL Standard. Take advantage of the feature, like this:

     SQLExecDirect(
       hstmt,
       "INSERT INTO Table1 VALUES \
          (5, 6, 7), \
          (5, 7, 8), \
          (5, 8, 9)",
       SQL_NTS);

Portability

Ingres, Informix, InterBase, Microsoft, Oracle, and Sybase don't support this feature.

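As an illustration of the first option, here is a sketch of how SQLBulkOperations might add the same three rows shown above by binding arrays of column values. The handle name hstmt, the column types, and the assumption that a cursor is already open on Table1 (for example, via a SELECT on its three columns) are all assumptions for the sake of the sketch:

```c
/* Sketch only: bulk insert via SQLBulkOperations (ODBC 3.x).
   Assumes hstmt has an open cursor on Table1's three integer columns. */
SQLINTEGER c1[3] = {5, 5, 5};
SQLINTEGER c2[3] = {6, 7, 8};
SQLINTEGER c3[3] = {7, 8, 9};
SQLLEN ind1[3] = {0, 0, 0}, ind2[3] = {0, 0, 0}, ind3[3] = {0, 0, 0};

SQLSetStmtAttr(hstmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)3, 0);
SQLBindCol(hstmt, 1, SQL_C_LONG, c1, 0, ind1);
SQLBindCol(hstmt, 2, SQL_C_LONG, c2, 0, ind2);
SQLBindCol(hstmt, 3, SQL_C_LONG, c3, 0, ind3);
SQLBulkOperations(hstmt, SQL_ADD);  /* adds all three rows in one call */
```

Because all three rows travel in one call, the network round trips are reduced in the same way as the multiple-row INSERT shown above.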

Another way to improve performance is to set auto-commit mode based on the number of data-change statements you're executing. If you find that most of your transactions contain two or more data-change statements, turn auto-commit mode off with:

 SQLSetConnectAttr(
   ..., SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF, ...);

If most of your transactions contain only one data-change statement, then auto-commit mode should be on (providing, of course, that there's no reason to believe a ROLLBACK will be necessary). Although auto-commit mode is ugly, it is ODBC's default. Also, short transactions are slightly faster when auto-commit is on, because the client doesn't need to send a separate COMMIT message to the server.
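With auto-commit off, a multi-statement transaction looks something like the following sketch. The connection handle hdbc is assumed to be already connected, and hstmt already allocated:

```c
/* Sketch only: a two-statement transaction with auto-commit off,
   committed explicitly once at the end. */
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT,
                  (SQLPOINTER)SQL_AUTOCOMMIT_OFF, 0);
SQLExecDirect(hstmt, (SQLCHAR *)"UPDATE Table1 SET column1 = 5", SQL_NTS);
SQLExecDirect(hstmt, (SQLCHAR *)"UPDATE Table2 SET column1 = 6", SQL_NTS);
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);  /* one COMMIT for both changes */
```

The point is that only one COMMIT message crosses the network, instead of one per statement.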

A third way to speed up data changes is to put UPDATE and DELETE statements inside fetch loops, in order to make changes to the row at the "current" cursor position (that is, the last row fetched). The requisite statements here are:

 UPDATE ... WHERE CURRENT OF <cursor>
 DELETE ... WHERE CURRENT OF <cursor>

WARNING

Programs that use the WHERE CURRENT OF versions of UPDATE or DELETE (the positioned UPDATE/DELETE statements) can have trouble with locks. If you use these statements, you'll also need to concern yourself with transaction isolation modes, cursor sensitivity, and, above all, timing (see Chapter 15, "Locks"). You don't want to enter a fetch loop that displays rows on the screen and waits for a user to change what's displayed, do you? Well, maybe you do. But in that case, you should use row locators.
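A minimal sketch of a positioned UPDATE follows. It assumes two statement handles on the same connection (hstmt1 drives the cursor, hstmt2 executes the UPDATE); the cursor name, table, and SET expression are illustrative assumptions:

```c
/* Sketch only: positioned UPDATE inside a fetch loop.
   hstmt1 and hstmt2 are assumed handles on the same connection. */
SQLSetCursorName(hstmt1, (SQLCHAR *)"Cursor1", SQL_NTS);
SQLExecDirect(hstmt1,
  (SQLCHAR *)"SELECT column1 FROM Table1 FOR UPDATE OF column1", SQL_NTS);
while (SQLFetch(hstmt1) != SQL_NO_DATA) {
  /* change the row at the current cursor position */
  SQLExecDirect(hstmt2,
    (SQLCHAR *)"UPDATE Table1 SET column1 = column1 + 1 \
                WHERE CURRENT OF Cursor1", SQL_NTS);
}
SQLCloseCursor(hstmt1);
```

Keep the loop tight: as the warning above says, the locks acquired by the cursor may be held until the loop (and the enclosing transaction) ends.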


After you've executed a data-change statement, you can call SQLRowCount to find out how many rows were affected. Unfortunately, to support SQLRowCount a DBMS must track and pass more information than is necessary for the performance of the data change itself. Some DBMSs allow you to turn counting off, and you should do so if your program doesn't make use of SQLRowCount.
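If you do use it, the call is straightforward. A sketch, assuming hstmt is an allocated statement handle:

```c
/* Sketch only: checking how many rows a DELETE affected. */
SQLLEN row_count;
SQLExecDirect(hstmt,
  (SQLCHAR *)"DELETE FROM Table1 WHERE column1 = 5", SQL_NTS);
SQLRowCount(hstmt, &row_count);
if (row_count == 0) {
  /* no rows matched; perhaps warn the user */
}
```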

The Bottom Line: Data-Change Statements

Pass multiple data-change statements at once whenever possible, to reduce the overhead of network transmissions.

Set auto-commit mode based on the number of data-change statements in your transactions. For one data change, auto-commit should be on. For multiple data changes, auto-commit should be off.

Put UPDATE and DELETE statements inside fetch loops and make changes to the row at the current cursor position, but read Chapter 15, "Locks" before you do so.

Turn counting off if your program doesn't make use of SQLRowCount.

SQL Performance Tuning
ISBN: 0201791692
Year: 2005
Pages: 125