Data Adapters

 

In ADO.NET, the data adapter object acts as a two-way bridge between a data source and the DataSet object. The DataSet is a disconnected container of data, and the adapter takes care of filling it and submitting its data back to a particular data source. Viewed from an abstract point of view, a data adapter is similar to a command and represents another way of executing a command against the data source.

Note 

In a certain way, the concepts of command, data reader, and data adapter are the results of the ADO Recordset split. Born as a simple COM wrapper around an SQL result set, the ADO Recordset soon became a rather bloated object incorporating three types of cursors: read-only, disconnected, and server. Compared to ADO, the ADO.NET object model is simpler overall and, more important, made of simpler objects. Instead of providing a big monolithic object such as the Recordset, ADO.NET supplies three smaller and highly specialized objects: the command, the data reader, and the DataSet. The data reader is generated only by a direct query command; the DataSet is generated only by a data adapter. To complete the comparison, note that ADO.NET has no native support for database server cursors.

The big difference between commands and data adapters lies in the way each returns the retrieved data. A query command returns a read-only cursor: the data reader. The data adapter performs its data access, grabs all the data, and packs it into an in-memory container: the DataSet or DataTable. Under the hood, the data adapter is just an extra layer of abstraction built on top of the command/data reader pair. Internally, in fact, the data adapter uses a command to query and a data reader to walk its way through the records and fill a user-provided DataSet.
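To see the equivalence, here is a minimal hand-written sketch of what a fill operation boils down to internally (schema and mapping handling omitted; the query and names are illustrative):

```csharp
// Roughly what adapter.Fill(ds) does under the hood:
// run a command, walk the resulting data reader, copy rows into a table.
SqlCommand cmd = new SqlCommand("SELECT * FROM Employees", conn);
conn.Open();
SqlDataReader reader = cmd.ExecuteReader();
DataTable dt = new DataTable("Employees");
dt.Load(reader);   // consumes the reader row by row
conn.Close();
```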

Like commands and data readers, data adapters are specific to each data provider. So expect to find a data adapter class for SQL Server, one for Oracle, and so on. To come to grips with data adapters, let's examine the SQL Server adapter.

The SqlDataAdapter Class

By definition, a data adapter is a class that implements the IDataAdapter interface. However, looking at the actual implementation of the adapters in the supported providers, you can see that multiple layers of code are used. In particular, all data adapter classes inherit from a base class named DbDataAdapter and implement the IDbDataAdapter interface. The relationship is shown in Figure 8-1.


Figure 8-1: The hierarchy of data adapters and implemented interfaces.

Programming the SQL Server Data Adapter

Table 8-1 shows the properties of the SqlDataAdapter class (that is, the data adapter class for SQL Server).

Table 8-1: Properties of the SqlDataAdapter Class

Property

Description

AcceptChangesDuringFill

Indicates whether rows inserted during a fill operation should be committed. True by default.

AcceptChangesDuringUpdate

Indicates whether changed rows processed during a batch update operation should be committed. True by default. Not supported in ADO.NET 1.x.

ContinueUpdateOnError

Indicates whether, in case of row conflicts, the batch update continues or an exception is generated.

DeleteCommand

Gets or sets a statement or stored procedure to delete records from the database during batch update. Is a member of the IDbDataAdapter interface.

FillLoadOption

Indicates how retrieved values will be applied to existing rows. Not supported in ADO.NET 1.x.

InsertCommand

Gets or sets a statement or stored procedure to insert new records in the database during batch update. Is a member of the IDbDataAdapter interface.

MissingMappingAction

Determines the action to take when a table or column in the source data is not mapped to a corresponding element in the in-memory structure. Is a member of the IDataAdapter interface.

MissingSchemaAction

Determines the action to take when source data does not have a matching table or column in the corresponding in-memory structure. Is a member of the IDataAdapter interface.

ReturnProviderSpecificTypes

Indicates whether provider-specific types should be used to create table layouts to contain result sets during a fill operation. Not supported in ADO.NET 1.x.

SelectCommand

Gets or sets a statement or stored procedure to select records from the database. During batch update, it is used to download metadata; in a query operation, it is used to select records. Is a member of the IDbDataAdapter interface.

TableMappings

Gets a collection that provides the mappings between a source table and an in-memory table. Is a member of the IDataAdapter interface.

UpdateBatchSize

Indicates the size of the blocks of records submitted at a time during the batch update. Set to 1 by default. Not supported in ADO.NET 1.x.

UpdateCommand

Gets or sets a statement or stored procedure to update records in the database during batch update. Is a member of the IDbDataAdapter interface.

One thing is essential to know about a data adapter: It is a two-way channel used to read data from a data source into a memory table and to write in-memory data back to a data source. The data source used in both cases is likely to be the same, but it's not necessarily the same. These two operations, known as fill and update, can be clearly identified in the preceding list of properties.

The four xxxCommand members of the IDbDataAdapter interface are used to control how in-memory data is written to the database during an update operation. This is not entirely true of SelectCommand: although SelectCommand plays a role in the batch update process, it is the key member in performing the fill operation. The MissingXXX properties, the TableMappings collection, and, in ADO.NET 2.0, FillLoadOption and ReturnProviderSpecificTypes indicate how data read out of the data source is mapped onto client memory.

Once loaded in memory, the (disconnected) data is available for client-side updates performed by a Windows Forms application or an ASP.NET page. Client updates consist of adding new rows and deleting or updating existing ones. A batch update is the data provider procedure that, triggered by the client application, posts all the pending in-memory changes back to a data source. In carrying out this procedure, a set of database management system (DBMS) specific commands is required for the three basic operations: insert, update, and delete. The InsertCommand, UpdateCommand, and DeleteCommand properties are SqlCommand objects that do just this.
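As an illustration, a hand-configured UpdateCommand might look like the following sketch. The statement, table, and column names are hypothetical; the fourth argument of Parameters.Add binds each parameter to the source column of the in-memory table from which its value is read at update time:

```csharp
SqlDataAdapter adapter = new SqlDataAdapter();
SqlCommand update = new SqlCommand(
    "UPDATE Employees SET lastname=@lastname WHERE employeeid=@employeeid",
    conn);
// Bind each parameter to the DataTable column it reads from during Update
update.Parameters.Add("@lastname", SqlDbType.NVarChar, 20, "lastname");
update.Parameters.Add("@employeeid", SqlDbType.Int, 4, "employeeid");
adapter.UpdateCommand = update;
```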

Important 

ADO.NET batch updates consist of a series of commands sequentially submitted to the database by means of the data adapter. As a developer, you fire the batch update process with a single command. Bear in mind that, conceptually, ADO.NET batch updates don't equate to a series of queries submitted in a single command. "Batch update" doesn't really mean that a batch of commands and data is moved to the DBMS and executed there.

Batch update is a powerful approach, but it's not particularly suited to ASP.NET applications. The difficulty lies in the fact that Web applications work over a stateless protocol such as HTTP. So to make the whole scheme work well, you should cache the in-memory table in the session, which is not something all applications can afford. On the other hand, batch update saves you a lot of coding and can easily be configured to serve complex update scenarios. Using a batch update, though, doesn't necessarily give you significant performance advantages, because each update requires its own command in ADO.NET 1.x. In ADO.NET 2.0, you can instead group multiple updates in a single command through the new UpdateBatchSize property.
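Enabling batched updates in ADO.NET 2.0 can be sketched as follows. Note that, with a batch size other than 1, the update commands should not attempt to return per-row results, which you express through the UpdatedRowSource property; the table name here is illustrative:

```csharp
adapter.UpdateBatchSize = 10;   // submit changes in blocks of 10 rows
// Batched commands can't refresh each row from post-statement results
adapter.UpdateCommand.UpdatedRowSource = UpdateRowSource.None;
adapter.Update(ds, "Employees");
```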

Table 8-2 lists the methods of the data adapter objects.

Table 8-2: Methods of the SqlDataAdapter Class

Method

Description

Fill

Populates an in-memory table with rows read from the source.

FillSchema

Configures an in-memory table so that the schema matches the schema in the data source.

GetFillParameters

Returns the parameters the user set on the query statement.

Update

Updates the data source based on the current content of the specified in-memory table. It works by calling the respective INSERT, UPDATE, or DELETE statements for each inserted, updated, or deleted row, respectively, in the table.

The data adapter uses the SelectCommand property to retrieve schema and data from the data source. The connection object associated with the SelectCommand does not need to be open. If the connection is closed before the reading occurs, it is opened to retrieve data and then closed. If the connection is open when the adapter works, it remains open.
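The following sketch illustrates this behavior; the connection string and query are placeholders:

```csharp
SqlConnection conn = new SqlConnection(connString);
SqlDataAdapter adapter = new SqlDataAdapter(
    "SELECT employeeid, lastname FROM Employees", conn);
DataSet ds = new DataSet();
// No explicit Open/Close: the adapter finds the connection closed,
// opens it to read data and schema, and closes it again before returning.
adapter.Fill(ds);
```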

Filling a DataSet Using a Data Adapter

A data adapter object uses the Fill method to populate an in-memory object with data retrieved through a query. The in-memory structure is a DataSet or DataTable object. As we'll see more clearly in a moment, the DataSet is the in-memory counterpart of a DBMS database. It might contain multiple tables (that is, multiple DataTable objects) and set up relationships and constraints between tables. Each table, in turn, is made of a number of columns and rows.

Filling a DataSet object ultimately means filling one of its tables. The data adapter can create a new table for each result set generated by the query. The table mapping code decides how. (If the table exists already, it is updated.) Mapping a result set to a DataSet is a process articulated in two phases: table mapping and column mapping. During the first step, the data adapter determines the name of the DataTable that will contain the rows in the current result set. Each DataTable is given a default name that you can change at will.

Note 

Although you can fill a DataTable with any kind of data from any existing source, the name of the table doesn't necessarily have to reflect the name of a database table, even when the data comes out of a database query. The DataTable's name serves only to identify the object. Changing the name of a DataTable has no impact on the name of the database table that might have been used to fill it.

The default name of the DataTable depends on the signature of the Fill method that was used for the call. For example, let's consider the following two Fill calls:

DataSet ds = new DataSet();
adapter.Fill(ds);
adapter.Fill(ds, "MyTable");

In the first call, the name of the first result set generated by the query defaults to "Table". If the query produces multiple result sets, additional tables will be named Table1, Table2, and so on, appending a progressive index to the default name. In the second call, the first result set is named MyTable and the others are named after it: MyTable1, MyTable2, and so forth. The procedure is identical; what really changes in the two cases is the base name.
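For example, with a query that returns two result sets, the default naming scheme plays out as in this sketch (query text is illustrative):

```csharp
SqlDataAdapter adapter = new SqlDataAdapter(
    "SELECT * FROM Employees; SELECT * FROM Products", conn);
DataSet ds = new DataSet();
adapter.Fill(ds);
// ds.Tables[0].TableName is "Table"; ds.Tables[1].TableName is "Table1"
```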

The names of the tables can be changed at two different moments. You can change them after the DataSet has been populated or, when using table mapping, you can define settings that will be used to name the tables upon creation. You define a table mapping on a data adapter object by using the TableMappings property.

Note 

You can also use the Fill method to populate a single DataTable. In this case, only the first result set is taken into account, and only one mapping phase occurs: column mapping:

DataTable dt = new DataTable();
adapter.Fill(dt);

The preceding code shows how to use the Fill method to populate a DataTable.

Loading Options

In ADO.NET 2.0, you can better control the way data is loaded into the various data tables during a fill operation. By setting the FillLoadOption property, you indicate how rows already in a DataTable combine with rows being loaded. The FillLoadOption property accepts a value from the LoadOption enumeration. Table 8-3 describes the feasible values.

Table 8-3: Values from the LoadOption Enumeration

Value

Description

OverwriteChanges

Updates the current and original versions of the row with the value of the incoming row.

PreserveChanges

Default option. Updates the original version of the row with the value of the incoming row.

Upsert

Updates the current version of the row with the value of the incoming row.

In each case, the description indicates the behavior when the primary key of a row in the incoming data matches the primary key of an existing row.

OverwriteChanges addresses the need to initialize tables with fresh data. PreserveChanges, on the other hand, is useful when you are synchronizing existing in-memory data with the current state of the database. In this case, you want to preserve any changes you entered on the client, that is, the current values you're working with and plan to submit back to the database later. At the same time, you might want to update the values in the DataSet that represent the original values read from the database. Finally, Upsert simply overwrites the current value, leaving the original value intact.
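In code, you pick the desired behavior before refreshing the data, as in this sketch (the table name is illustrative):

```csharp
// Refresh the table but keep any pending client edits (current values);
// only the original values are aligned with the database.
adapter.FillLoadOption = LoadOption.PreserveChanges;
adapter.Fill(ds, "Employees");
```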

It is important to note that in-memory rows maintain two distinct values: current and original. The current value is the value that you receive when you read the content of a cell. The original value is the last committed value stored in the cell. When you assign a value to a newly created row, you set the current value; the original value is null. The assigned value must be committed to become an effective part of the row. You commit a row by invoking the AcceptChanges method (which will be discussed in more detail later). When this happens, the current value is duplicated as the original value and the overall state of the row changes to Unchanged: the row has no pending changes.

A DataSet populated with a fill operation presents all rows as committed, with current and original values coinciding. At least, this is the default behavior, which you can alter by setting the AcceptChangesDuringFill property. Once the data is downloaded to the client, the client application can work with it and enter changes, as shown here:

DataTable table = _data.Tables[0];
DataRow row = table.Rows[0];
row["firstname"] = "Lucy";

The assignment simply alters the current value of the row; the original value remains set to null, or to whatever it was before the assignment. To make "Lucy" become the effective original value of the row, you have to explicitly accept, or commit, the change:

// Accept all pending (uncommitted) changes on the row
row.AcceptChanges();

Uncommitted changes are important because only pending, uncommitted changes are taken into account during a batch update operation. To read the current value of a row field, you do as follows:

Response.Write(row["firstname"].ToString()); 

To read the original value, you resort to the following:

Response.Write(row["firstname", DataRowVersion.Original].ToString()); 

Figure 8-2 shows the output of the sample page that illustrates the FillLoadOption property and the adapter's Fill method.

Figure 8-2: Examining the effect of the various load options.

As you can see, the Upsert option replaces the current value, leaving the original intact.

Note 

In ADO.NET 1.x, the default behavior is OverwriteChanges; if AcceptChangesDuringFill is false, the actual behavior you get is Upsert. You never preserve client changes in ADO.NET 1.x. In ADO.NET 2.0, the value of AcceptChangesDuringFill is taken into account only for rows added, not for existing rows that get updated by the fill operation.

The DataSet is an empty container that a data adapter fills with the results of a query. But what about the number and structure of the child tables? The number of tables depends on the number of result sets. The structure of the tables depends on the table-mapping mechanism.

The Table-Mapping Mechanism

The .NET data provider assigns a default name to each result set generated by the query. The default name is Table or the name specified by the programmer in the call to Fill. The adapter searches its TableMappings collection for an entry that matches the default name of the result set being read. If a match is found, the data adapter reads the mapped name. Next, it attempts to locate in the DataSet a DataTable object with the name specified in the mapping, as shown in Figure 8-3.

Figure 8-3: Mapping a result set onto a DataSet object.

If the result set named Table has been mapped to Employees, a table named Employees is searched in the DataSet. If no such DataTable object exists, it gets created and filled. If such a DataTable exists in the DataSet, its content is merged with the contents of the result set.

The TableMappings property represents a collection object of type DataTableMappingCollection. Each contained DataTableMapping object defines a pair of names: a source table name and an in-memory table name. Here's how to configure a few table mappings:

DataSet ds = new DataSet();
DataTableMapping dtm1, dtm2, dtm3;
dtm1 = adapter.TableMappings.Add("Table", "Employees");
dtm2 = adapter.TableMappings.Add("Table1", "Products");
dtm3 = adapter.TableMappings.Add("Table2", "Orders");
adapter.Fill(ds);

It goes without saying that the default names you map onto your own names must coincide with the default names originated by the call to the Fill method. In other words, suppose you change the last line of the previous code snippet to the following:

adapter.Fill(ds, "MyTable"); 

In this case, the code won't work any longer because the default names will now be MyTable, MyTable1, and MyTable2. For these names, the TableMappings collection has no entries defined. Finally, bear in mind that you can have any number of table mappings. The overall number of mappings doesn't necessarily have to be related to the expected number of result sets.

The Column-Mapping Mechanism

If table mapping ended here, it wouldn't be such a big deal. In fact, if your goal is simply to give a mnemonic name to your DataSet tables, you can use the following code; the final effect is exactly the same.

DataSet ds = new DataSet();
adapter.Fill(ds);
ds.Tables["Table"].TableName = "Employees";
ds.Tables["Table1"].TableName = "Products";

The mapping mechanism, though, has another, rather interesting, facet: column mapping. Column mapping establishes a link between a column in the result set and a column in the mapped DataTable object. Column mappings are stored in the ColumnMappings collection property defined in the DataTableMapping class. The following code shows how to create a column mapping:

DataSet ds = new DataSet();
DataTableMapping dtm1;
dtm1 = adapter.TableMappings.Add("Table", "Employees");
dtm1.ColumnMappings.Add("employeeid", "ID");
dtm1.ColumnMappings.Add("firstname", "Name");
dtm1.ColumnMappings.Add("lastname", "FamilyName");
adapter.Fill(ds);

Figure 8-4 extends the previous diagram (Figure 8-3) and includes details of the column-mapping mechanism.

Figure 8-4: How the table and column mappings control the population of the DataSet.

In the preceding code, the source column employeeid is renamed to ID and placed in a DataTable named Employees. The name of the column is the only attribute you can change at this level. Bear in mind that all this mapping takes place automatically within the body of the Fill method. When Fill terminates, each column in the source result set has been transformed into a DataTable column object: an instance of the DataColumn class.

Missing Mapping Action

The Fill method accomplishes two main operations. First, it maps the source result sets onto in-memory tables. Second, it fills the tables with the data fetched from the physical data source. While accomplishing either of these tasks, the Fill method could raise some special exceptions. An exception is an anomalous situation that needs to be specifically addressed in code. When the adapter can't find a table or column mapping, or when a required DataTable or DataColumn can't be found, the data adapter throws a kind of lightweight exception.

Unlike real exceptions that must be resolved in code, this special breed of data adapter exceptions has to be resolved declaratively by choosing an action from a small set of allowable options. Data adapters raise two types of lightweight exceptions: missing mapping actions and missing schema actions.

A missing mapping action is required in two circumstances that can occur while the data adapter is collecting data to fill the DataSet: when a default name is not found in the TableMappings collection, or when a column name is not available in the table's ColumnMappings collection. The data adapter's MissingMappingAction property lets you customize the behavior of the data adapter in the face of such exceptions. Allowable values for the property come from the MissingMappingAction enumeration and are listed in Table 8-4.

Table 8-4: The MissingMappingAction Enumeration

Value

Description

Error

An exception is generated if a missing column or table is detected.

Ignore

The unmapped column or table is ignored.

Passthrough

Default option. It adds the missing table or column to the structure.

Unless you explicitly set the MissingMappingAction property prior to filling the data adapter, the property assumes a default value of Passthrough. As a result, missing tables and columns are added using the default name. If you set the MissingMappingAction property to Ignore, any unmapped table or column is simply ignored. No error is detected, but there will be no content in the target DataSet for the offending result set (or one of its columns). If the MissingMappingAction property is set to Error, the adapter simply throws an exception whenever a missing mapping is detected.
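The Ignore setting can be sketched as follows; here, only the explicitly mapped result set ends up in the DataSet (mapping names are illustrative):

```csharp
// Fill only the explicitly mapped result set; skip anything unmapped.
adapter.MissingMappingAction = MissingMappingAction.Ignore;
adapter.TableMappings.Add("Table", "Employees");
adapter.Fill(ds);   // unmapped result sets produce no DataSet content
```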

Once the data adapter is done with the mapping phase, it takes care of actually populating the target DataSet with the content of the selected result sets. Any required DataTable or DataColumn object that is not available in the target DataSet triggers another lightweight exception and requires another declarative action: the missing schema action.

Missing Schema Action

A missing schema action is required if the DataSet does not contain a table with the name that has been determined during the table-mapping step. Similarly, the same action is required if the DataSet table does not contain a column with the expected mapping name. MissingSchemaAction is the property you set to indicate the action you want to be taken in case of an insufficient table schema. Allowable values for the property come from the MissingSchemaAction enumeration and are listed in Table 8-5.

Table 8-5: The MissingSchemaAction Enumeration

Value

Description

Error

Generates an exception if a missing column or table is detected.

Ignore

Ignores the unmapped column or table.

Add

The default option. Completes the schema by adding any missing item.

AddWithKey

Also adds primary key and constraints.

By default, the MissingSchemaAction property is set to Add. As a result, the DataSet is completed by adding any constituent item that is missing (DataTable or DataColumn). Bear in mind, though, that the schema information added in this way for each column is very limited: it simply includes name and type. If you want extra information, such as the primary key, autoincrement, read-only, and allow-null settings, use the AddWithKey option instead.

Note that even if you use the AddWithKey option, not all available information about the column is loaded into the DataColumn. For example, AddWithKey marks a column as autoincrement but does not set the related seed and step properties. Also, the default value for the source column, if any, is not automatically copied. Only the primary key is imported; any additional indexes you might have set in the database are not. As for the other two options, Ignore and Error, they work exactly as they do with the MissingMappingAction property.
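A minimal sketch of requesting key information during the fill follows (table name is illustrative):

```csharp
adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
adapter.Fill(ds, "Employees");
// The table now carries the primary key read from the database
DataColumn[] key = ds.Tables["Employees"].PrimaryKey;
```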

Prefilling the Schema

MissingMappingAction and MissingSchemaAction resolutions are not as expensive as real exceptions, but they still affect your code. Put another way, filling a DataSet that already contains all the needed schema information results in faster code. The advantage of this approach is more evident if your code happens to repeatedly fill an empty DataSet with a fixed schema. In this case, using a global DataSet object prefilled with schema information helps to prevent all those requests for recovery actions. The FillSchema method ensures that all the required objects are created beforehand:

DataTable[] FillSchema(DataSet ds, SchemaType mappingMode); 

FillSchema takes a DataSet and adds as many tables to it as needed by the query command associated with the data adapter. The method returns an array with all the DataTable objects created (only schema, no data). The mapping-mode parameter can be one of the values defined in the SchemaType enumeration. The SchemaType enumeration values are listed in Table 8-6.

Table 8-6: The SchemaType Enumeration

Value

Description

Mapped

Apply any existing table mappings to the incoming schema. Configure the DataSet with the transformed schema. Recommended option.

Source

Ignore any table mappings on the data adapter. Configure the DataSet using the incoming schema without applying any transformations.

The Mapped option describes what happens when mappings are defined. Source, on the other hand, deliberately ignores any mappings you might have set. In this case, the tables in the DataSet retain their default name and all the columns maintain the original name they were given in the source tables.
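Putting it together, a typical pattern can be sketched like this; the mapping names are illustrative:

```csharp
DataSet ds = new DataSet();
adapter.TableMappings.Add("Table", "Employees");
// Create tables, columns, and key information up front,
// honoring any table mappings...
adapter.FillSchema(ds, SchemaType.Mapped);
// ...so the subsequent fill triggers no missing-schema actions.
adapter.Fill(ds);
```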

How Batch Update Works

Batch update consists of the submission of an entire set of changes to the database. The batch update basically replays the user actions that produced the changes, but with the database rather than the DataSet as the target. Batch update assumes that the application enters its changes to the DataSet in an offline manner. In a multiuser environment, this might pose design problems if users concurrently access the same data on the server that you're editing offline. When you post your changes on a record that another person has modified in the meantime, whose changes win out?

Data Conflicts and Optimistic Lock

The possibility of data conflicts represents a design issue, but it isn't necessarily a problem for the application. Batch update in a multiuser environment creates conflicts only if the changes you enter are somehow implied by the original values you have read. In such a case, if someone else has changed the rows in the time elapsed between your fetch and the batch update, you might want to reconsider or reject your most recent updates. Conflicts detected at update time might introduce significant overhead that could make the batch update solution much less attractive. In environments with a low degree of data contention, batch updates can be effective because they allow for disconnected architectures, higher scalability, and considerably simpler coding.

To submit client changes to the server, use the data adapter's Update method. Data can be submitted only on a per-table basis. If you call Update without specifying any table name, a default name of Table is assumed. If no table exists with that name, an exception is raised:

adapter.Update(ds, "MyTable"); 

The Update method prepares and executes a tailor-made INSERT, UPDATE, or DELETE statement for each inserted, updated, or deleted row in the specified table. Rows are processed according to their natural order, and the row state determines the operation to accomplish. The Update method has several overloads and returns an integer, which represents the number of rows successfully updated.

When a row being updated returns an error, an exception is raised and the batch update process is stopped. You can prevent this from happening by setting the ContinueUpdateOnError property to true. In this case, the batch update terminates only when all the rows have been processed. Rows for which the update completed successfully are committed and marked as unchanged in the DataSet. For other rows, the application must decide what to do and restart the update if needed.
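A sketch of the error-tolerant variant follows; after the update, rows that failed can be enumerated and inspected (table name is illustrative):

```csharp
adapter.ContinueUpdateOnError = true;
int updated = adapter.Update(ds, "Employees");
if (ds.HasErrors)
{
    // Failed rows keep their pending state and expose the error text
    foreach (DataRow row in ds.Tables["Employees"].GetErrors())
        Response.Write(row.RowError + "<br />");
}
```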

Command Builders

The data adapter provides a group of command properties (InsertCommand, DeleteCommand, and UpdateCommand) to let the programmer control and customize the way in which in-memory updates are submitted to the database server. These properties represent a quantum leap from ADO, in which update commands were SQL commands silently generated by the library. If you don't quite see the importance of this change, consider that with ADO.NET you can use stored procedures to perform batch updates and even work with non-SQL data providers.

The commands can also be generated automatically and exposed directly to the data-adapter engine. Command builder objects do that for you. A command builder object (for example, the SqlCommandBuilder class) cannot be used in all cases. The automatic generation of commands can take place only under certain circumstances. In particular, command builders do not generate anything if the data is obtained by joining columns from more than one table or if calculated or aggregate columns are detected. Command builders are extremely helpful and code-saving only when they are called on to deal with single-table updates. How can a command builder generate update statements for a generic table? This is where a fourth command property, SelectCommand, fits in.

A command builder employs SelectCommand to obtain all the metadata necessary to build the update commands. To use command builders, you must set SelectCommand with a query string that contains the primary key and a few column names. Only those columns will be used for insertions and updates, and the key fields will be used to uniquely identify the rows to update or delete. Note that the command text of SelectCommand runs in a provider-specific way that makes it return only metadata and no rows.

The association between the data adapter and the command builder is established through the builder's constructor, as shown in the following code:

SqlCommand cmd = new SqlCommand();
cmd.CommandText = "SELECT employeeid, lastname FROM Employees";
cmd.Connection = conn;
adapter.SelectCommand = cmd;
SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

The builder requests metadata and generates the commands the first time they are required, and then caches them. Each command is exposed through a particular method: GetInsertCommand, GetUpdateCommand, and GetDeleteCommand. Note that using the command builder does not automatically set the corresponding command properties on the data adapter.
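Continuing the preceding snippet, you could inspect the generated statements and, if needed, copy them onto the adapter yourself:

```csharp
// Inspect the auto-generated statements (built on first request, then cached)
Response.Write(builder.GetUpdateCommand().CommandText);

// Copy them onto the adapter explicitly if you want them set there
adapter.InsertCommand = builder.GetInsertCommand();
adapter.UpdateCommand = builder.GetUpdateCommand();
adapter.DeleteCommand = builder.GetDeleteCommand();
```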

Note 

The behavior of data adapters and command builders for other managed providers does not differ in a relevant way from what we described here for the SQL Server .NET data provider.

 


Programming Microsoft ASP.NET 2.0 Core Reference
ISBN: 0735621764
Year: 2004
Pages: 112
Authors: Dino Esposito
