Updating a Data Store


After the data has been retrieved and modified by your application, you need to resynchronize it with the data store. Keep in mind that because we're dealing with the disconnected model here, the data adapter handles updates in a batch fashion. In other words, when its overloaded Update method is called, it looks for all changed rows in the given DataSet, DataTable, or array of DataRow objects, and attempts to run the InsertCommand, UpdateCommand, or DeleteCommand associated with the data adapter as appropriate. Of course, as you learned on Day 5, "Changing Data," the data adapter determines how a row was changed by looking at the value of its RowState property. Note that for a data adapter to synchronize the data in a DataSet or DataTable with a data store, the AcceptChanges method must not have been called beforehand, because doing so resets the RowState property to Unchanged.
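
For instance, the interplay between RowState and AcceptChanges can be seen in the following minimal sketch; the ds and da objects and the Titles table's Price column are assumed from the examples later today:

 DataTable dt = ds.Tables["Titles"];
 dt.Rows[0]["Price"] = 29.99m;   // RowState is now DataRowState.Modified
 // da.Update(dt) at this point would execute the UpdateCommand for the row

 dt.AcceptChanges();             // RowState is reset to Unchanged
 // da.Update(dt) would now skip the row entirely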

In the remainder of today's lesson, you'll learn how the Update method accomplishes this and how you can customize its behavior for your particular scenarios.

The Update Process

As you just learned, the data adapter exposes the InsertCommand, UpdateCommand, and DeleteCommand properties that reference command objects called by the Update method as it traverses the changed rows. Behind the scenes, the parameters exposed by the command object are populated with values from the row being inspected. These values are then passed to the command that is executed. It's important to remember that this process is row-oriented, so if you change 15 rows in a DataTable and then pass it to the Update method of a data adapter, 15 commands will be executed against the data store, one for each row.

Tip

One implication of this process is that if you want to make the same change to many rows, you shouldn't retrieve them into a DataTable, change each row, and then call the Update method. It's far more efficient to change all the rows in a single statement with an appropriate WHERE clause using the ExecuteNonQuery method, as the sketch below illustrates.
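
For example, a minimal sketch of the set-based approach might look like this; the connection object (con), the publisher code, and the 20% price cut are hypothetical:

 SqlCommand cmd = new SqlCommand(
   "UPDATE Titles SET Price = Price * 0.8 WHERE Publisher = @publisher", con);
 cmd.Parameters.Add(new SqlParameter("@publisher", SqlDbType.NChar, 5));
 cmd.Parameters["@publisher"].Value = "SAMS1";

 con.Open();
 int rowsAffected = cmd.ExecuteNonQuery();  // one statement, many rows
 con.Close();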


An interesting aspect of this architecture is its flexibility: you have complete control over how the modifications are actually applied, because each command object can simply reference a statement or stored procedure that does the work. This enables you to write commands that modify multiple tables in the data store based on the data in a single DataTable. In addition, you can create multi-function stored procedures that are referenced by, for example, both the UpdateCommand and the InsertCommand. To do so, let the stored procedure determine whether the row is already in the table; a single stored procedure can then handle both the insert and the update of a row in the Titles table, as shown in Listing 12.6.

Listing 12.6 Multi-function stored procedures. This SQL Server stored procedure can be used both to insert and update a row in the Titles table.
 CREATE PROCEDURE usp_SaveTitle
     @ISBN         [nvarchar](10) = null,
     @Description  [nvarchar](2048) = null,
     @Title        [nvarchar](100) = null,
     @Author       [nvarchar](250) = null,
     @Price        [money] = null,
     @PubDate      [smalldatetime] = null,
     @Publisher    [nchar](5) = null,
     @CatID        [uniqueidentifier] = null
 AS
 IF EXISTS (SELECT * FROM Titles WHERE ISBN = @ISBN)
 BEGIN
   UPDATE Titles
     SET Description = @Description,
     Title = @Title,
     Author = @Author,
     Price = @Price,
     PubDate = @PubDate,
     Publisher = @Publisher,
     CatID = @CatID
   WHERE ISBN = @ISBN
   RETURN 1
 END
 ELSE
 BEGIN
   INSERT INTO Titles (ISBN, Description, Title,
    Author, Price, PubDate, Publisher, CatID)
     VALUES (
     @ISBN,
     @Description,
     @Title,
     @Author,
     @Price,
     @PubDate,
     @Publisher,
     @CatID
     )
   RETURN 0
 END
 GO

The procedure in Listing 12.6 uses the Transact-SQL EXISTS clause to determine whether the row already exists based on the primary key column (ISBN). If so, an UPDATE statement is performed; if not, an INSERT is performed. Note that the procedure returns 1 if the row was updated and 0 if the row was inserted.

From within your .NET code, you can then reference the procedure once and associate it with both command objects as shown in Listing 12.7.

Listing 12.7 Modifying data. This code sets up the data adapter to insert, update, and delete rows from the database based on the DataSet .
 SqlCommand cmSave = new SqlCommand("usp_SaveTitle", con);
 cmSave.CommandType = CommandType.StoredProcedure;
 SqlCommand cmDelete = new SqlCommand("usp_DeleteTitle", con);
 cmDelete.CommandType = CommandType.StoredProcedure;

 SqlDataAdapter da = new SqlDataAdapter();
 da.UpdateCommand = cmSave;
 da.InsertCommand = cmSave;
 da.DeleteCommand = cmDelete;

 // Configure save parameters
 cmSave.Parameters.Add(new SqlParameter(
   "@isbn", SqlDbType.NVarChar, 10, "isbn"));
 cmSave.Parameters.Add(new SqlParameter(
   "@description", SqlDbType.NVarChar, 2048, "Description"));
 cmSave.Parameters.Add(new SqlParameter(
   "@title", SqlDbType.NVarChar, 100, "Title"));
 cmSave.Parameters.Add(new SqlParameter(
   "@author", SqlDbType.NVarChar, 250, "Author"));
 cmSave.Parameters.Add(new SqlParameter(
   "@price", SqlDbType.Money, 4, "Price"));
 cmSave.Parameters.Add(new SqlParameter(
   "@pubDate", SqlDbType.DateTime, 4, "PubDate"));
 cmSave.Parameters.Add(new SqlParameter(
   "@publisher", SqlDbType.NChar, 5, "Publisher"));
 cmSave.Parameters.Add(new SqlParameter(
   "@catId", SqlDbType.UniqueIdentifier, 8, "CatId"));

 // Configure delete parameters
 cmDelete.Parameters.Add(new SqlParameter(
   "@isbn", SqlDbType.NVarChar, 10, "isbn"));

 // Synchronize the data
 da.Update(dsTitles);

You'll notice in Listing 12.7 that the cmSave SqlCommand object is instantiated and configured and then used to populate both the UpdateCommand and InsertCommand properties of the SqlDataAdapter . Because the structure of the stored procedure used to delete a row is so different from that used to insert or update a row, a separate command object is used. You'll also notice that in this case, the empty constructor of the SqlDataAdapter is used because the SelectCommand needn't be populated. This is because only the Update method is called in this listing. When the Update method is called, one of the two stored procedures will be executed for each modified, inserted, or deleted row.

Note

If the DataSet contains multiple tables, you should specify the source table mapping that the Update method should use. If none is specified, it uses the default table mapping and consequently updates the first table in the DataSet. Of course, as we discussed earlier, you can also simply pass the name of the DataTable to process if no table mapping exists for it, as sketched in the snippet that follows.
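
For example, assuming the DataSet contains a table named Titles, either of the following calls would target it explicitly:

 // Update only the Titles table within the DataSet
 da.Update(dsTitles, "Titles");

 // Or pass the DataTable directly
 da.Update(dsTitles.Tables["Titles"]);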


One of the other key points to notice in Listing 12.7 is the last argument passed to the constructor of the SqlParameter objects. This argument specifies the SourceColumn property that identifies the column in the row from which to populate the parameter. This is what allows the data adapter to associate the data in the row being updated with the parameters of the command object. In addition, an overloaded version of the constructor enables you to pass in the SourceVersion as well. This property maps to a value from the DataRowVersion enumeration discussed on Day 5 and specifies which version (Current, Default, Original, or Proposed) to use for the column. You would use the Original version of the column, for example, to ensure that the WHERE clause in an UPDATE statement modifies the same row that was returned from the data store.

You can also control when exceptions are thrown by this process. By default, the ContinueUpdateOnError property of the data adapter is set to False, so the first command that results in an error throws a SqlException or OleDbException, as appropriate. This means that not all the rows that were modified, inserted, or deleted might have been processed by the Update method. When the property is set to True, no exception is thrown; the data adapter simply updates the RowError and RowState properties of the offending DataRow object. This allows all the changes to be attempted against the data store. As you'll see in the next section, it also means that you must programmatically determine whether errors have occurred.

Finally, you can control which rows are processed by the Update method, and in which order, by using the GetChanges method. For example, to make sure that the inserts are processed before the updates and deletes, you could use code like this:

 da.Update(dsTitles.GetChanges(DataRowState.Added));
 da.Update(dsTitles.GetChanges(DataRowState.Modified));
 da.Update(dsTitles.GetChanges(DataRowState.Deleted));

Keep in mind that you can also pass combinations of DataRowState values to the GetChanges method to send different subsets of the rows to Update, as shown next.
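
Because DataRowState is a flags enumeration, a sketch of combining states might look like this, assuming dsTitles contains both kinds of changes:

 // Process added and modified rows in one call
 da.Update(dsTitles.GetChanges(DataRowState.Added | DataRowState.Modified));

 // Then process the deletions
 da.Update(dsTitles.GetChanges(DataRowState.Deleted));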

Handling Events

During the process of updating a data store, the data adapter raises the RowUpdating and RowUpdated events. The former fires immediately before the appropriate command is executed for a row, whereas the latter fires immediately after the command has been executed. The event handlers receive different event argument types derived from the RowUpdatingEventArgs and RowUpdatedEventArgs classes in the System.Data.Common namespace. For example, the event handler for the RowUpdating event for an OleDbDataAdapter receives an OleDbRowUpdatingEventArgs object, whereas the handler for the RowUpdated event receives an OleDbRowUpdatedEventArgs object. Tables 12.3 and 12.4 show the primary properties exposed by each.

Table 12.3. Row updating event argument properties. These are the properties inherited by the event argument types passed into the event handler for the RowUpdating event.
Command: Gets or sets a reference to the command used to perform the update.
Errors: Returns the exception generated when the command executes.
Row: Returns the DataRow sent to the update statement.
StatementType: Returns a value from the StatementType enumeration that identifies which type of statement is being processed (Select, Insert, Update, Delete).
Status: Gets or sets a value from the UpdateStatus enumeration that indicates how the command will behave. Can be set to Continue, ErrorsOccurred, SkipAllRemainingRows, or SkipCurrentRow.
TableMapping: Returns the DataTableMapping used by the command.
Table 12.4. Row updated event argument properties. These are the properties inherited by the event argument types passed into the event handler for the RowUpdated event.
Command: Returns a reference to the command used to perform the update.
Errors: Returns the exception generated when the command executes.
RecordsAffected: Returns the number of rows modified by the command.
Row: Returns the DataRow sent to the update statement.
StatementType: Returns a value from the StatementType enumeration that identifies which type of statement is being processed (Select, Insert, Update, Delete).
Status: Gets or sets a value from the UpdateStatus enumeration that indicates how the command will behave. Can be set to Continue, ErrorsOccurred, SkipAllRemainingRows, or SkipCurrentRow.
TableMapping: Returns the DataTableMapping used by the command.

As you can see from Tables 12.3 and 12.4, you can view all the information that will be used to update a row before it's processed and, in the RowUpdating event handler, even change the command to be used on the fly. The Status property is especially interesting because you can use it to affect the behavior of the rest of the Update method by skipping the current row or all remaining rows. In the RowUpdated event, you can also view the exception generated by the command even if the ContinueUpdateOnError property is set to True.
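
As a sketch, a RowUpdating handler might use the Status property to skip deletions while letting other changes through; the handler name is hypothetical and the da adapter is assumed:

 private void BeforeRowUpdate(object sender, OleDbRowUpdatingEventArgs e)
 {
     // Skip any deletions, but continue processing the remaining rows
     if (e.StatementType == StatementType.Delete)
     {
         e.Status = UpdateStatus.SkipCurrentRow;
     }
 }

 da.RowUpdating += new OleDbRowUpdatingEventHandler(this.BeforeRowUpdate);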

Although not needed frequently, the RowUpdated event handler can be used, for example, to log all the changes made to the data for tracking purposes or to manually cascade changes into other DataSet objects. As with other events, the technique used to capture it is to create a method to act as the event handler and then add the handler to the invocation list for the event, as shown in the following code snippet:

 // Handle the RowUpdated event
 private void TitleChanged(object sender, OleDbRowUpdatedEventArgs e)
 {
     Trace.WriteLine("Primary key " + e.Row[0].ToString()
      + " was " + e.StatementType.ToString() + " on "
      + DateTime.Now.ToLongTimeString());
 }

 da.RowUpdated += new OleDbRowUpdatedEventHandler(this.TitleChanged);
Using a Disconnected Update Model

Now that you've learned the basic way in which updates are processed by the data adapter, it's time to apply this knowledge to a programming pattern that you can use in applications that work with disconnected data.

The basic idea in this disconnected model, of course, is that your application will retrieve rows from the data store and cache them in the DataSet . The DataSet will then be modified by the presentation code over time and will eventually be re-synchronized using a data adapter. As you'll learn in more detail next week, in multi-tier applications, the synchronization process would eventually occur in a method of the data access tier that accepts the modified DataSet . This method is responsible for creating the appropriate data adapter, passing the DataSet to the Update method, and returning rows that caused errors. As a result, the basic pattern that a method in the data access tier could implement is shown in Listing 12.8.

Listing 12.8 Implementing the disconnected update. This method shows the skeleton of a method that would be used to synchronize a DataSet with the data store.
 public virtual DataSet SaveMyData(DataSet ds)
 {
     // Make sure the DataSet has some changed data
     if (ds == null || ds.HasChanges() == false)
     {
         // Can simply return null
         return null;
     }

     // Create the connection object
     // Create the data adapter (da)
     // Create the data adapter commands and configure their parameters,
     //  making sure to map their SourceColumns and SourceVersions
     // Associate the commands with the data adapter
     // Set the ContinueUpdateOnError property to true

     try
     {
         // Call the Update method, passing in the entire DataSet
         da.Update(ds);

         // Check for errors and return rows that were in error
         if (ds.HasErrors)
         {
             DataSet dsErrors;
             dsErrors = ds.GetChanges(DataRowState.Modified |
                DataRowState.Deleted);
             return dsErrors;
         }
         return null;
     }
     catch (Exception e)
     {
         // An exception occurred, probably in the connection
         // Wrap and throw a specific exception
         throw;
     }
 }

What you'll first notice in Listing 12.8 is that the SaveMyData method checks to make sure that the DataSet is instantiated and has changes that can be processed. If there are changes, the method proceeds to create the connection, command, and data adapter objects that will be needed, much as is shown in Listing 12.7. Note that in this model, the method sets the ContinueUpdateOnError property to True to make sure that all the changed rows actually get a chance to be synchronized with the data store. Within the try block, the DataSet is then passed to the Update method.

Note

It's important at this point to pass in the entire DataSet rather than simply the results of the GetChanges method. This is the case because the Update method will modify the properties of the DataTable and DataRow objects in the DataSet, and we need to be able to read those changes when the method returns. If you pass the result of GetChanges directly to the Update method, copies of the rows are created and the caller won't be able to retrieve the changes.


Because the ContinueUpdateOnError property is set to True, no exception will be thrown when the Update method is called unless the implicit opening of the connection object fails. As a result, the catch block need only wrap the exception that was generated and throw it back to the caller. Once the Update method returns, the data adapter will have successfully synchronized the rows that it could, setting their RowState properties to Unchanged. Each row that failed will have its RowState either set to Modified (for modified rows that failed) or Deleted (for rows that couldn't be deleted from the data store). Both of these types of rows are then returned from the method in the dsErrors DataSet using the GetChanges method.

Note

The algorithm shown here is generic and assumes that you'll be updating the lone DataTable in the DataSet. Of course, you could augment this method to accept the name of the table you want to update and even create multiple data adapters to update each of the tables in turn. As we'll discuss shortly, this method also assumes that each row is treated individually and not within the context of a transaction.


The caller can use the SaveMyData method like so:

 DataSet myErrors;
 myErrors = SaveMyData(ds.GetChanges());

 if (myErrors != null)
 {
     foreach (DataRow r in myErrors.Tables[0].GetErrors())
     {
         // Inspect the r.RowError and r.RowState properties
     }
 }

Notice that the caller passes to SaveMyData only the changed rows in order to save resources, and then catches any errors in the myErrors DataSet. The tables in the myErrors DataSet can then be traversed and each row inspected using the RowState and RowError properties, which will be set to the original RowState and contain the error message from the data store, respectively.

The client could also, of course, bind the myErrors DataSet to a grid in order to visually present the rows that caused errors to the user. The user could then make additional changes or decide not to reprocess the row. Those rows that are changed a second time could then be sent back to the SaveMyData method, and the others could be merged with the original DataSet using the Merge method.
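
As a rough sketch, that flow might look like the following; the dataGrid control is a hypothetical Windows Forms grid, and ds is the original DataSet:

 // Let the user review and correct the failed rows
 dataGrid.DataSource = myErrors.Tables[0].DefaultView;

 // Later, resubmit the corrected rows...
 DataSet dsRemaining = SaveMyData(myErrors);

 // ...and merge any rows that weren't reprocessed back into the original
 if (dsRemaining != null)
 {
     ds.Merge(dsRemaining);
 }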

Isolation and Concurrency

In the previous section, you learned how the data adapter processes updates and how you might write data access methods to handle changes in a DataSet. However, that discussion didn't address the issues of isolation and concurrency for those updates.

Isolating Changes

For example, as noted in the previous section, the algorithm shown in Listing 12.8 assumes that each row will succeed or fail individually. As you learned on Day 9, "Using Connections and Transactions," the data adapter can isolate and treat all the changed rows it encounters as a logical unit of work using transactions. If you want to make sure that either all the changes succeed or all of them fail, you would change the algorithm in Listing 12.8 as shown in Listing 12.9.

Listing 12.9 Implementing the disconnected update with a transaction. This method shows the skeleton of a method that would be used to synchronize a DataSet with the data store as a single unit of work.
 public virtual void SaveMyData(DataSet ds)
 {
     // Make sure the DataSet has some changed data
     if (ds == null || ds.HasChanges() == false)
     {
         // Can simply return
         return;
     }

     // Create the connection object (con)
     // Create the data adapter (da)
     // Create the data adapter commands and configure their parameters
     // Associate the commands with the data adapter
     // Set the ContinueUpdateOnError property to false

     try
     {
         con.Open();
         SqlTransaction trans = con.BeginTransaction();
         // Set the Transaction property of the command objects

         try
         {
             // Call the Update method, passing in the DataSet
             da.Update(ds);

             // Commit the changes
             trans.Commit();
             return;
         }
         catch (Exception e)
         {
             // Rollback the transaction
             trans.Rollback();
             // Wrap and throw the exception
         }
     }
     catch (Exception e)
     {
         // Could not open the connection or start the transaction
         // Wrap and throw the Exception
     }
     finally
     {
         con.Close();
     }
 }

In Listing 12.9, the SaveMyData method is marked as void rather than returning a DataSet that contains the rows that had errors. This is because when the first error is encountered, the method will throw an exception and roll back the transaction. Notice that the connection must be opened explicitly and the BeginTransaction method must be called to initiate the local transaction. You must then set the Transaction property of each of the command objects associated with the data adapter accordingly. Within the nested try block, the Update method is called and if it succeeds without throwing an exception, the transaction is committed. If an exception occurs, the transaction can be rolled back and the exception wrapped and thrown back to the caller.

Tip

When you throw the exception after rolling back the transaction, you should embed the primary key value or row number that caused the error in the message. This will allow the client to more easily process the error; see the sketch that follows this tip.
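
As a rough sketch, the inner catch block in Listing 12.9 might do something like the following, assuming the key of the failing row was captured into a hypothetical failedIsbn variable (for example, in a RowUpdated event handler):

 catch (Exception e)
 {
     // Rollback the transaction
     trans.Rollback();

     // Wrap and throw, embedding the key that caused the failure
     throw new ApplicationException(
       "Update failed for ISBN " + failedIsbn + ": " + e.Message, e);
 }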


The outer try block contains a finally block that closes the connection, whether the connection or transaction had problems or the Update method threw an exception. Note that this algorithm relies on the ContinueUpdateOnError property being set to False. As a result, an exception is thrown when the first error is encountered, and the error returned from the data store is placed in the Message property of the exception object.

Handling Concurrency Issues

The previous example dealt with isolating the changes but didn't address the issue of concurrency or how changes from multiple users are handled. In a nutshell, concurrency isn't handled directly by the data adapter, so it must be handled either through local transactions or the commands you use with the data adapter to apply changes to the data store.


In general, concurrency falls into two camps: optimistic concurrency and pessimistic concurrency. The difference between the two is that optimistic concurrency strives to allow as many users to simultaneously work with the data as possible, whereas pessimistic concurrency tries to ensure that when a user updates a row, the row is in exactly the same state as when it was first read. The tradeoff is that optimistic concurrency creates fewer conflicts at the cost of potentially losing changes made by other clients because it employs a last-in-wins strategy. Pessimistic concurrency, on the other hand, ensures that each client's changes won't be summarily overwritten at the cost of causing more conflicts to occur.

Typically, the disconnected model implies that you use a form of optimistic concurrency because it isn't realistic (and would cause a lot of conflicts) to assume that a row remains totally unchanged while the data is cached on a client somewhere. To use optimistic concurrency, the WHERE clause of the statement that executes as a result of the UpdateCommand should contain only the primary key columns. This is the case in Listings 12.6 and 12.7, where the UPDATE statement in the usp_SaveTitle stored procedure considers only the ISBN in the WHERE clause and where only the values from the current row version are passed to the stored procedure.

Note

The implicit assumption made in Listing 12.6 as well as Listing 12.10 (discussed shortly) is that the ISBN is immutable. In other words, once the title is added to the table, the ISBN can't be changed. This can be enforced on SQL Server using a trigger and within the DataSet by setting the ReadOnly property of the DataColumn to True .


That having been said, you can use two techniques to implement pessimistic concurrency with a data adapter. The first is to rely on a local transaction with an isolation level that restricts other users from changing the data while it's locked. In other words, you would start a transaction with an isolation level such as RepeatableRead before invoking the Fill method. You would then make your changes and use the same connection and transaction with the data adapter when calling the Update method. This technique puts the burden of locking the rows on the data store. Although this ensures that the rows remain locked and can't be changed by other clients, it violates the cardinal design rule of transaction usage: Never allow user input in the middle of a transaction. In other words, always make transactions as short as possible so as not to lock out other clients. It also, of course, isn't very useful in applications in which you don't hold continuous connections to the data store, such as ASP.NET applications that use ADO.NET. For these reasons, this technique is recommended for only a few specialized scenarios.
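
A rough sketch of this first technique with the SQL Server provider follows; it assumes the data adapter's commands are configured as in Listing 12.7 and, as just discussed, it isn't appropriate for most applications:

 SqlConnection con = new SqlConnection(connect);
 con.Open();
 SqlTransaction trans = con.BeginTransaction(IsolationLevel.RepeatableRead);

 SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Titles", con);
 da.SelectCommand.Transaction = trans;
 da.Fill(dsTitles);

 // ... the user edits the data here, while the rows remain locked ...

 // Reuse the same connection and transaction for the update
 da.UpdateCommand.Transaction = trans;  // likewise for Insert/DeleteCommand
 da.Update(dsTitles);
 trans.Commit();
 con.Close();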

The second form of pessimistic concurrency relies on the configuration of the WHERE clause in the UpdateCommand of the data adapter. By including all the columns in the WHERE clause and passing the values that were originally retrieved from the data store to the command, the command can ensure that the row is in exactly the same state as when it was retrieved. For example, if you wanted to create a stored procedure in SQL Server that uses pessimistic concurrency to update a row in the Titles table, the procedure might look like the one in Listing 12.10.

Listing 12.10 Pessimistic concurrency. This stored procedure could be used to update a row in the Titles table and ensure that the updated row is the same as when it was retrieved.
 CREATE PROCEDURE usp_UpdTitle
     @ISBN                  [nvarchar](10),
     @Description           [nvarchar](2048) = null,
     @Title                 [nvarchar](100) = null,
     @Author                [nvarchar](250) = null,
     @Price                 [money] = null,
     @PubDate               [smalldatetime] = null,
     @Publisher             [nchar](5) = null,
     @CatID                 [uniqueidentifier] = null,
     @original_Description  [nvarchar](2048) = null,
     @original_Title        [nvarchar](100) = null,
     @original_Author       [nvarchar](250) = null,
     @original_Price        [money] = null,
     @original_PubDate      [smalldatetime] = null,
     @original_Publisher    [nchar](5) = null,
     @original_CatID        [uniqueidentifier] = null
 AS
 IF EXISTS (SELECT * FROM Titles WHERE ISBN = @ISBN)
 BEGIN
   UPDATE Titles
     SET Description = @Description,
     Title = @Title,
     Author = @Author,
     Price = @Price,
     PubDate = @PubDate,
     Publisher = @Publisher,
     CatID = @CatID
   WHERE ISBN = @ISBN
   AND Description = @original_Description
   AND Title = @original_Title
   AND Author = @original_Author
   AND Price = @original_Price
   AND PubDate = @original_PubDate
   AND Publisher = @original_Publisher
   AND CatID = @original_CatID
   IF @@rowcount = 0
     RAISERROR('The row has been changed',14,1)
 END
 ELSE
   RAISERROR('That ISBN no longer exists',14,1)
 GO

You'll remember from Day 2, "Getting Started," that the Data Adapter Configuration Wizard contains options that enable you to specify the concurrency level. If you choose pessimistic concurrency, the wizard will create a procedure very much like the one shown in Listing 12.10. In this case, the procedure simply checks to make sure that the ISBN exists and, if so, updates it with the parameters passed to the procedure. The WHERE clause contains all the original values passed in to the procedure. If the row can't be found (that is, it has been deleted), the RAISERROR statement is issued, which will result in a row error in the data adapter. Likewise, if the row can't be updated because one of the columns doesn't match, an error is raised.

The ADO.NET code that configures the data adapter would then have to change to include both the current values that are used in the SET statement and the original values used in the WHERE clause, as shown in Listing 12.11.

Listing 12.11 Configuring pessimistic concurrency. This code snippet shows how you would add parameters to the command used to update the Titles table using pessimistic concurrency.
 SqlCommand cmSave = new SqlCommand("usp_UpdTitle", con);
 cmSave.CommandType = CommandType.StoredProcedure;

 cmSave.Parameters.Add(new SqlParameter(
   "@isbn", SqlDbType.NVarChar, 10, "isbn"));
 cmSave.Parameters.Add(new SqlParameter(
   "@description", SqlDbType.NVarChar, 2048, "Description"));
 cmSave.Parameters.Add(new SqlParameter(
   "@title", SqlDbType.NVarChar, 100, "Title"));
 cmSave.Parameters.Add(new SqlParameter(
   "@author", SqlDbType.NVarChar, 250, "Author"));
 cmSave.Parameters.Add(new SqlParameter(
   "@price", SqlDbType.Money, 4, "Price"));
 cmSave.Parameters.Add(new SqlParameter(
   "@pubDate", SqlDbType.DateTime, 4, "PubDate"));
 cmSave.Parameters.Add(new SqlParameter(
   "@publisher", SqlDbType.NChar, 5, "Publisher"));
 cmSave.Parameters.Add(new SqlParameter(
   "@catId", SqlDbType.UniqueIdentifier, 8, "CatId"));

 cmSave.Parameters.Add(new SqlParameter(
   "@original_description", SqlDbType.NVarChar, 2048, "Description"));
 cmSave.Parameters["@original_description"].SourceVersion =
   DataRowVersion.Original;
 cmSave.Parameters.Add(new SqlParameter(
   "@original_title", SqlDbType.NVarChar, 100, "Title"));
 cmSave.Parameters["@original_title"].SourceVersion =
   DataRowVersion.Original;
 cmSave.Parameters.Add(new SqlParameter(
   "@original_author", SqlDbType.NVarChar, 250, "Author"));
 cmSave.Parameters["@original_author"].SourceVersion =
   DataRowVersion.Original;
 cmSave.Parameters.Add(new SqlParameter(
   "@original_price", SqlDbType.Money, 4, "Price"));
 cmSave.Parameters["@original_price"].SourceVersion =
   DataRowVersion.Original;
 cmSave.Parameters.Add(new SqlParameter(
   "@original_pubDate", SqlDbType.DateTime, 4, "PubDate"));
 cmSave.Parameters["@original_pubDate"].SourceVersion =
   DataRowVersion.Original;
 cmSave.Parameters.Add(new SqlParameter(
   "@original_publisher", SqlDbType.NChar, 5, "Publisher"));
 cmSave.Parameters["@original_publisher"].SourceVersion =
   DataRowVersion.Original;
 cmSave.Parameters.Add(new SqlParameter(
   "@original_catId", SqlDbType.UniqueIdentifier, 8, "CatId"));
 cmSave.Parameters["@original_catId"].SourceVersion =
   DataRowVersion.Original;

As you can see from Listing 12.11, each parameter must be added to the collection. Because the SourceVersion property defaults to Current , the property needn't be set for the parameters that represent the new values to be updated. However, all the original values need to have their SourceVersion property set to Original .

Note

Although one of the signatures for the overloaded constructor of the SqlParameter object enables you to specify the SourceVersion (so you could avoid coding two statements for each parameter), it also requires several other arguments that are easier to leave at their default values.


The obvious downside to using this form of pessimistic concurrency is that both the statement that performs the update against the data store and the ADO.NET code used to configure the data adapter become longer and more complex. Some data stores, however, provide a quicker means of determining whether the row has changed since it was retrieved. In SQL Server, for example, you can add a column with the timestamp data type to a table. This column is automatically updated to a unique binary value each time a row is inserted or updated in the table. The result is that you can select this column when the DataSet is populated and then pass it back to the UpdateCommand of the data adapter. The stored procedure or inline SQL can then compare the value you originally retrieved with the value in the database. If they don't match, the row has been changed. Using a timestamp or other equivalent saves you from having to pass two parameters for each column and configuring the UpdateCommand with all those parameters.

Note


Although the data type is called timestamp, it has no correlation with a time or date and isn't human readable. A better way to think of timestamp columns is with the term row version. In fact, SQL Server 2000 includes a rowversion synonym for the timestamp data type, which is preferred because it will eventually replace timestamp in future versions of SQL Server. You should also specify the same column name for the timestamp (rowversion) column in all your tables for consistency; for example, calling it rowVersion. As you would expect, you can have only one timestamp column per table.


Using the rowversion column, the UPDATE statement in the usp_UpdTitle stored procedure would be rewritten as follows:

 UPDATE Titles
   SET Description = @Description,
   Title = @Title,
   Author = @Author,
   Price = @Price,
   PubDate = @PubDate,
   Publisher = @Publisher,
   CatID = @CatID
 WHERE ISBN = @ISBN
 AND rowVersion = @rowVersion

When you configure the data adapter, you would then only need to add the current parameters and the rowVersion parameter like so:

 cmSave.Parameters.Add(new SqlParameter("@rowVersion",
   SqlDbType.Timestamp, 8, "rowVersion"));
Refreshing Rows

The final issue that needs to be addressed in this context is re-retrieving data from the data store when an update succeeds or fails. As we discussed previously, the UpdatedRowSource property of a command object controls how result sets or output parameters are processed from the UpdateCommand as rows in a DataTable are updated by a data adapter. This property can be set to one of the values from the UpdateRowSource enumeration, as shown in Table 12.5.

Table 12.5. Values of the UpdateRowSource enumeration. These values determine how a row is refreshed in the event that the command returns data.
Both: Both output parameters and the first returned row are mapped to the row being processed.
FirstReturnedRecord: The first row in the result set is mapped to the row being processed.
None: Result sets and output parameters are ignored.
OutputParameters: Output parameters are mapped to the row being processed.

The default is Both, so if the command returns a result set, it will be mapped to the DataRow that was updated. This can be useful in pessimistic concurrency situations: if the row has changed, you'd want to refresh the row in the DataTable so that the client is aware of the changes. To add this capability to the procedure shown in Listing 12.10, you would simply set the UpdatedRowSource property of the command object to either Both or FirstReturnedRecord and then modify the IF statement that checks whether the row was successfully updated, like so:

 IF @@rowcount = 0
 BEGIN
     SELECT * FROM Titles WHERE ISBN = @ISBN
     RAISERROR('The row has been changed',14,1)
 END

This capability can also be useful in conjunction with the InsertCommand in the event the data store assigns default or auto-incrementing values to one or more columns in the table. This is the case, for example, with SQL Server IDENTITY columns. In this way, the server-generated data will be placed into the row.
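
For example, a sketch of picking up a server-generated IDENTITY value with the SQL Server provider might look like this; the Orders table, its OrderID IDENTITY column, and the CustomerID column are hypothetical:

 SqlCommand cmInsert = new SqlCommand(
   "INSERT INTO Orders (CustomerID) VALUES (@customerId); " +
   "SELECT OrderID, CustomerID FROM Orders WHERE OrderID = SCOPE_IDENTITY()",
   con);
 cmInsert.Parameters.Add(new SqlParameter(
   "@customerId", SqlDbType.NChar, 5, "CustomerID"));

 // Map the first returned row back onto the DataRow that was inserted
 cmInsert.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;
 da.InsertCommand = cmInsert;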

Command Builders

Some providers (SqlClient and OleDb included) also implement command builder classes that can be used to automatically generate insert, update, and delete commands that work against a single table in the data store. This is typically accomplished by the command builder object examining the SelectCommand and then asking the data store for additional information.

To use a command builder, you can simply instantiate it and pass it the data adapter that will be used, as follows:

 OleDbDataAdapter da = new OleDbDataAdapter(sqlStatement, con);
 OleDbCommandBuilder builder = new OleDbCommandBuilder(da);

At this point, nothing occurs. However, the command builder sets up an event handler, so when the Update method of the data adapter is called, it will contact the data store and build the commands that will be referenced by the InsertCommand , UpdateCommand , and DeleteCommand properties of the data adapter. After the initial Update , these commands are associated with the data adapter and won't have to be re-created.

The commands can also be inspected using the GetInsertCommand , GetDeleteCommand , and GetUpdateCommand methods of the command builder. You might do this, for example, to modify the default properties set for the command. If the SelectCommand changes after the commands are built, you need to call the RefreshSchema method to rebuild the commands based on the new SelectCommand .
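
For example, a quick sketch of inspecting and adjusting the generated commands might look like this:

 // Examine the SQL that the command builder generated
 Console.WriteLine(builder.GetUpdateCommand().CommandText);

 // Tweak a default, such as the timeout on the generated delete command
 builder.GetDeleteCommand().CommandTimeout = 60;

 // If the SelectCommand changes later, rebuild the commands
 builder.RefreshSchema();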

As you might expect, there are three main drawbacks to using this approach:

  1. The commands that are generated use inline SQL and therefore don't take advantage of the performance of database constructs such as stored procedures.

  2. For the command builder to build the statements, it must retrieve additional schema information from the data store. This entails an additional roundtrip to the data store, which should be avoided at all costs (another cardinal rule: Minimize round trips to the data store), especially in the multi-user Web applications for which ADO.NET was designed.

  3. Command builders work only when the SelectCommand encapsulates a relatively simple statement that it can parse to determine which table to create the commands for. Command builders wouldn't be effective, for example, if the SelectCommand executed a stored procedure that included a JOIN statement.

For all these reasons, you should avoid command builders in most situations.
