Design of the Data Mapping Application Block


The design goals for the Data Mapping Application Block were to:

  • Provide best practices for developing data access logic components.

  • Decouple the mapping of stored procedure parameters to the properties of an entity class (like a DataSet's DataFields) from the business logic that needs to be implemented for a specific operation.

  • Provide capabilities to map stored procedure parameters to values that are not known to the system until runtime.

  • Provide support for Insert, Update, and Delete transactions against multiple tables in an entity class or DataSet.

  • Provide support for caching business entities in a Data Access Logic Component.

  • Facilitate setting command properties like command timeouts and transaction levels through configuration instead of code.

In short, the design goals for the Data Mapping Application Block were to provide an API that makes it easy to create data access logic components that subscribe to the best practices as promoted by the Microsoft patterns & practices team and to make those data access logic components highly configurable. Examples of configurable properties include whether or not database operations should occur in the scope of a transaction, whether the business data should be cached by the data access logic component, and which stored procedure parameters map to which fields in the business entity.

The vision is that this will free developers to concentrate on adding business logic instead of worrying about operational issues. This is not to say that you never need to be concerned with such matters; rather, they should be extremely easy to configure and modify, and changes should not require recoding. Figure A.1 shows the primary classes that are responsible for providing the core functionality for the Data Mapping Application Block.

Figure A.1. Design of the Data Mapping Application Block


The DatabaseWrapper Class

The DatabaseWrapper acts as a service layer that encapsulates the Data Access Application Block. This service layer takes advantage of the Data Access Application Block's support for obtaining typed DataSets and propagating the changes in those DataSets to the backend database. The DatabaseWrapper is core to the functionality of the Data Mapping Application Block because it is in this class that configuration information is used to determine how to map stored procedure parameters to DataFields (that is, perform the data mapping), determine the value for a CommandTimeout, and obtain and use information about the database transaction isolation level.

The idea behind the DatabaseWrapper is simply to make it easy to retrieve typed DataSets from a data source and update a data source given the typed DataSet. It hides any complexities about creating the configured database provider, setting TableMappings for the DataSet properly, mapping DataColumns and non-DataSet columns to stored procedure parameters, and handling transactions. Table A.1 lists the public methods and properties for the DatabaseWrapper class.

Table A.1. DatabaseWrapper's Methods and Properties

  • GetDataSet (Public static): Creates the correct database provider from the configured information for the DataSet and uses it to load and return a typed DataSet.

  • GetDataReader (Public static): Creates the correct database provider from the configured information for the DataSet and returns a DataReader.

  • GetScalar (Public static): Creates the correct database provider from the configured information for the DataSet and performs an ExecuteScalar.

  • Execute (Public static): Creates the correct database provider from the configured information for the DataSet and performs an ExecuteNonQuery.

  • FillTable (Public static): Creates the correct database provider from the configured information for the DataSet and uses it to load a single DataTable in the DataSet.

  • FillTables (Public static): Performs the FillTable operation for multiple tables in a DataSet.

  • PutDataSet (Public static): Passes the changes made to a typed DataSet through to the underlying data source by instantiating the correct database provider, mapping DataTable columns and non-DataSet columns to stored procedure parameters, and invoking transactions for DataTables if necessary. In short, it updates a DataSet while respecting all relationships among the tables in the DataSet.


The value that the DatabaseWrapper provides is most evident when it is used to propagate the changes that occur in a typed DataSet back to the data source from where it originated. Chapter 3 provided a fair amount of detail about the UpdateDataSet method that is used for updating a DataSet. And while the UpdateDataSet method is very powerful, it could be easier to use if it were able to use information read from configuration to determine which stored procedures to call and which fields map to specific stored procedure parameters.

For example, in Chapter 3, Listing 3.16 illustrated using the UpdateDataSet method to propagate a single insert and update in a typed DataSet back to the database. There was a bit of code that was specific to the name of the stored procedure to call for an insert, the name of the stored procedure to call for an update, and the names of the DataFields in the DataSet that should be mapped to the stored procedures' parameters. Such information can certainly be viewed as configuration information, and by placing its retrieval behind the data service layer, Listing 3.16 can be simplified to look more like the code shown in Listing A.1. The DatabaseWrapper class contains a method named PutDataSet that reads the configuration information set for a particular typed DataSet and invokes the various methods that must be called to update the DataSet by using the Data Access Application Block.

Listing A.1. Using PutDataSet with the DatabaseWrapper Class

[C#]

// Create a new CustomersDS typed DataSet.
CustomersDS customerDataSet = new CustomersDS();

// Load the Customers DataTable in the typed DataSet.
int divId = 1;
DatabaseWrapper.GetDataSet(customerDataSet, divId);

// Explicitly set the value for the LastUpdate parameter
// to a value not known until runtime.
customerDataSet.Customers.ExtendedProperties.Add("LastUpdate", DateTime.Now);

// Modify an existing customer.
if (customerDataSet.Customers.Count > 0)
{
    customerDataSet.Customers[0].CustomerName = "Len Fenster";
}

// Add a new customer.
CustomersDataRow customersRow =
    customerDataSet.Customers.AddCustomersRow(divId, "CompanyXYZ", "John Doe");

// Submit the DataSet, capturing the number of rows affected.
int rowsAffected = DatabaseWrapper.PutDataSet(customerDataSet);

[Visual Basic]

' Create a new CustomersDS typed DataSet.
Dim customerDataSet As CustomersDS = New CustomersDS()

' Load the Customers DataTable in the typed DataSet.
Dim divId As Integer = 1
DatabaseWrapper.GetDataSet(customerDataSet, divId)

' Explicitly set the value for the LastUpdate parameter
' to a value not known until runtime.
customerDataSet.Customers.ExtendedProperties.Add("LastUpdate", DateTime.Now)

' Modify an existing customer.
If customerDataSet.Customers.Count > 0 Then
    customerDataSet.Customers(0).CustomerName = "Len Fenster"
End If

' Add a new customer.
Dim customersRow As CustomersDataRow = _
    customerDataSet.Customers.AddCustomersRow(divId, "CompanyXYZ", "John Doe")

' Submit the DataSet, capturing the number of rows affected.
Dim rowsAffected As Integer = DatabaseWrapper.PutDataSet(customerDataSet)

This becomes even more important when two-sweeps operations and transaction support are needed for a database operation.

Two-Sweeps Operations

When a DataSet contains multiple tables and foreign-key relationships between those tables, determining the order in which Create, Update, and Delete operations need to be performed on the DataTables can get fairly complicated. It is important to update in the proper sequence to reduce the chance of violating referential integrity constraints. To prevent data integrity errors from being raised, the best practice is to update the data source in the following sequence.

1. Delete records in child tables.

2. Delete records in the parent table.

3. Insert and update records in the parent table.

4. Insert and update records in the child tables.

For example, suppose you have developed an application that lets you modify both the Order and the OrderDetail information. One DataSet can be used that contains an Orders table and an OrderDetails table, with a parent-child (hierarchical) relationship between the tables. When updating this DataSet, it is important to follow this sequence.

1. Delete records in the OrderDetails table.

2. Delete records in the Orders table.

3. Insert and update records in the Orders table.

4. Insert and update records in the OrderDetails table.

This process is called a two-sweeps update procedure. The Data Access Application Block does not implement a two-sweeps update procedure when UpdateDataSet is called. Currently the Data Access Application Block allows for update, insert, and delete commands for each table as it is updated; however, to support a two-sweeps update, the block would need information about the commands for all tables prior to the update of any one table.

The Data Mapping Application Block maintains this kind of information about all the tables in a DataSet in its configuration data. The DatabaseWrapper recursively navigates down through the child tables in a DataSet to perform the proper commands in two sweeps, as long as there is no circular relationship between the tables.
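The recursive navigation can be sketched with plain collections: a depth-first walk over the parent-child relationships, emitting child tables before their parents for the delete sweep and parents before children for the insert/update sweep. This is an illustrative sketch only, not the block's actual implementation; the DepthFirst helper and the table names are assumptions, and like the block itself it assumes no circular relationships.

```csharp
using System;
using System.Collections.Generic;

class TwoSweepsDemo
{
    // children[t] lists the direct child tables of table t.
    public static List<string> DepthFirst(
        string table, Dictionary<string, List<string>> children, bool childrenFirst)
    {
        var order = new List<string>();
        void Visit(string t)
        {
            if (!childrenFirst) order.Add(t);      // sweep 2: parent before its children
            if (children.TryGetValue(t, out var kids))
                foreach (var c in kids) Visit(c);  // assumes no circular relationships
            if (childrenFirst) order.Add(t);       // sweep 1: children before their parent
        }
        Visit(table);
        return order;
    }

    static void Main()
    {
        // Orders is the parent of OrderDetails, as in the example above.
        var children = new Dictionary<string, List<string>>
        {
            ["Orders"] = new List<string> { "OrderDetails" }
        };

        // Sweep 1: deletes, child tables first. Sweep 2: inserts/updates, parents first.
        Console.WriteLine(string.Join(",", DepthFirst("Orders", children, true)));  // OrderDetails,Orders
        Console.WriteLine(string.Join(",", DepthFirst("Orders", children, false))); // Orders,OrderDetails
    }
}
```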

Transaction Support

A common requirement when an application executes multiple operations against a database is that all of the operations must succeed or the database must roll back to its state before the operations began. This all-or-nothing requirement is called a transaction. Transactions ensure the integrity of a database system's state.

In most circumstances, the root of the transaction is the business process rather than a data access logic component or a business entity component. The reason is that business processes typically require transactions that span multiple business entities, not just a single business entity. However, situations can arise where transactional operations may be needed on a single business entity without the assistance of a higher-level business process. If such requirements exist and there is no possibility that the operation will be part of a larger business process that initiates the transaction, then manual transactions are an acceptable solution. Manual transactions allow explicit control of the transaction boundary with explicit instructions to begin and end the transaction. When manual transactions are implemented in data access logic components, the following recommendations need to be considered.

  • Where possible, transaction processing should be performed in stored procedures using statements like BEGIN TRANSACTION, COMMIT TRANSACTION, and ROLLBACK TRANSACTION.

  • If stored procedures cannot be used and the data access logic components will not be called from a business process, ADO.NET can be used to control transactions programmatically. This is less efficient than using explicit transactions in stored procedures because manual transactions in ADO.NET take at least as many round trips to the data store as there are operations to execute in the transaction, in addition to trips that begin and end the transaction.

The Data Access Application Block supports the use of transactions through overloaded methods that accept a class that has implemented the IDbTransaction interface. The Data Mapping Application Block takes advantage of this functionality by creating an IDbTransaction, beginning the transaction, and passing it into a call to the Data Access Application Block when the particular database operation is configured to be wrapped in a transaction. A rollback or commit is performed depending on whether any exceptions were thrown during the execution of the operation.

Thus, the code needed to wrap database operations in a transaction for a specific database operation does not need to be written in a data access logic component. The transaction level can be configured at the DataSet level, and when a database operation is performed for that DataSet, it will be in the scope of a transaction.
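The commit-on-success, rollback-on-exception pattern described above can be sketched as follows. This is a minimal sketch of the control flow only, not the block's actual code: FakeTransaction stands in for a real IDbTransaction so the example can run anywhere, and the Execute helper name is an assumption.

```csharp
using System;

// FakeTransaction stands in for IDbTransaction so the control flow can run anywhere.
class FakeTransaction
{
    public bool Committed;
    public bool RolledBack;
    public void Commit() => Committed = true;
    public void Rollback() => RolledBack = true;
}

static class TransactionSketch
{
    // Run the operation inside the transaction: commit on success, roll back on any exception.
    public static int Execute(FakeTransaction tx, Func<int> operation)
    {
        try
        {
            int rows = operation();
            tx.Commit();
            return rows;
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }

    static void Main()
    {
        var ok = new FakeTransaction();
        Execute(ok, () => 3);
        Console.WriteLine(ok.Committed);    // True

        var bad = new FakeTransaction();
        try { Execute(bad, () => throw new Exception("operation failed")); }
        catch (Exception) { /* expected */ }
        Console.WriteLine(bad.RolledBack);  // True
    }
}
```

In the block itself, the same flow operates on a real IDbTransaction obtained from the configured database provider, with the isolation level coming from configuration.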

The DataMapper Class

The DataMapper class is at the core of the Data Mapping Application Block. It is an abstract base class whose intent is to make it easy for developers to create data access logic components for managing Create, Retrieve, Update, and Delete (CRUD) operations on the business entities for an application. Data access logic components are recommended for accessing business data because they abstract the semantics of the underlying data store and data access technology, and they provide a simple programmatic interface for retrieving and performing operations on business entities.

Data access logic components provide the logic required to access specific business data, while generic database providers (also known as data access helper components) centralize the data access API development and data connection configuration, and help to reduce code duplication. Implementing data access logic components allows all the data access logic for a given business entity to be encapsulated in a single central location, making the application easier to maintain or extend. Data access logic components should

  • Expose methods for inserting, deleting, updating, and retrieving data.

  • Use a database provider to centralize connection management and all code that deals with a specific data source.

  • Implement queries and data operations as stored procedures (if supported by the data source) to enhance performance and maintainability.

One of the largest uses of data access logic components is to perform the mappings and transformations needed between business entities and a relational data store. The primary intent of the Data Mapping Application Block is to move this mapping from being code driven to configuration driven.

With the Data Mapping Application Block, you create a business data access logic component by deriving a class from the abstract DataMapper base class and overriding the abstract DataSetType function. As the base class, the DataMapper is responsible for managing the mappings, transactions, and caching for the business data. As an application block, it takes advantage of Enterprise Library configuration capabilities to read its settings from configuration, thus letting you reap the benefits of mapping, transactions, and caching in a "codeless" manner. Listing A.2 demonstrates how to create a data access logic component for working with a Customers DataSet.

Listing A.2. Creating a Customers Data Access Logic Component

[C#]

public class CustomersMapper : DataMapper
{
    protected override DataSet DataSetType()
    {
        return new CustomersDS();
    }

    public CustomersDS GetCustomers()
    {
        return (CustomersDS)GetObject();
    }
}

[Visual Basic]

Public Class CustomersMapper
    Inherits DataMapper

    Protected Overrides Function DataSetType() As DataSet
        Return New CustomersDS()
    End Function

    Public Function GetCustomers() As CustomersDS
        Return CType(GetObject(), CustomersDS)
    End Function
End Class

You'll notice that there is no code for setting transactions, caching, or mapping fields to parameters. As you will see later, this is as true for inserts, updates, and deletes as it is for reads from the data source. The base DataMapper class does this work for the derived class.

Table A.2 lists the methods and properties for the DataMapper class.

Table A.2. DataMapper's Methods and Properties

  • DataSetType (Protected, abstract): Must be overridden to return a new strongly typed DataSet of the proper type. The base DataMapper class calls this method to instantiate and return the proper typed (or generic) DataSet.

  • Initialize (Protected): Performs initialization logic for a derived DataMapper.

  • ExplicitParameters (Protected): Helper function that gets and sets the ExplicitParameters for all DataTables in a DataSet.

  • get_ExplicitParameters (Protected): Helper function that gets the ExplicitParameters for a specific DataTable in a DataSet.

  • set_ExplicitParameters (Protected): Helper function that sets the ExplicitParameters for a specific DataTable in a DataSet.

  • GetObject (Protected): Returns a new typed DataSet as defined by DataSetType, filled with data from the appropriate data source.

  • PutObject (Protected): Inserts, updates, and deletes rows in the data source according to the changes made to the typed DataSet.

  • RemoveObject (Protected): Removes an entire typed DataSet. Iterates through every row of every table and calls Delete, which will not cause AcceptChanges to occur on the DataSet.

  • GetReader (Protected): Creates the correct database provider based on the configured information for the DataSet and returns a DataReader.

  • GetScalar (Protected): Creates the correct database provider based on the configured information for the DataSet and performs an ExecuteScalar.

  • Execute (Protected): Creates the correct database provider based on the configured information for the DataSet and performs an ExecuteNonQuery.

  • FillTable (Protected): Allows for lazy loading by letting DataTables be populated independently of the entire DataSet.

  • TransactionType (Protected): Gets and sets the type of transaction. The default is IsolationLevel.Unspecified, which is interpreted as no transaction.

  • CacheSettings (Protected): Gets the current CacheSettings for the derived DataMapper and allows them to be programmatically overridden.

  • FlushDataSetFromCache (Protected): Flushes the DataSet from the cache (if present). Useful when the DataSet is cached but operations are performed that do not automatically keep the cache in sync (e.g., Execute).


Caching Data

The guidance documented in the section Caching in the Data Services Layer in Chapter 3 of the Caching Architecture Guide for .NET Framework Applications[3] states that because of the relatively high performance cost of opening a connection to a database and querying the data stored in it, data elements are excellent caching candidates and are commonly cached. Additionally, DataSets are excellent caching candidates because:

[3] Found at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/CachingArch.asp.

  • They are serializable and as such can be stored in caches either in the same process or in another process.

  • They can be updated without needing to reinsert them into the cache. Because a reference to a DataSet can be cached, an application can update the data without the reference in the cache needing to change.

  • They store formatted data that is easily read and parsed by the client. Caching frequently used DataSet objects in an application often results in improved performance.

A design goal for the Data Mapping Application Block in Chapter 9 was to provide the ability to cache data in a data access logic component and to make caching a configurable setting. It also became important to allow multiple expiration policies to be configured for a data access logic component, so that cached data can expire per the settings an application needs, and to allow CacheItemRefreshActions to be set, so that actions can be taken in the application when data expires. The classes shown in Figure A.2 highlight the design for caching in the Data Mapping Application Block.

Figure A.2. Classes Used to Provide Caching Support in the Data Mapping Application Block


The CacheSettings object contains information that allows the DataMapper to know whether it should cache its data. The CacheSettings object also encapsulates a collection of CacheExpirationPolicies and a CacheRefreshAction (aka CacheRemovalCallback).

To let users set properties for the CacheExpirationPolicies at deployment time, I needed to create a wrapper class around each of the existing implementations of ICacheItemExpiration, because a node based solely on an interface cannot be instantiated. Therefore, I created an abstract base class named CacheExpiration that encapsulates an ICacheItemExpiration, and all CacheItemExpirationPolicies derive from this class. When configuration data is read in for use by the DataMapper, it indicates whether the DataMapper should cache its data, and if so, with which expiration policies and RefreshAction.

If a DataMapper is configured to cache its data, it checks the cache first to ascertain whether the data exists. The key for caching the data is a combination of the DataSet's name and the list of parameters that were used to retrieve it. This allows a separate copy of each variant of the retrieved data to be cached. If the data exists in the cache, it is returned. Otherwise, it is retrieved from the data source. If caching is enabled, the retrieved data is added to the cache with the appropriate expiration policies and RefreshAction as per the configuration settings. The data is then returned.
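The key composition might look like the following rough sketch. The exact key format the block uses is not documented here, so the BuildCacheKey helper and its output format are assumptions; the point is only that the DataSet name plus the retrieval parameters uniquely identify each cached variant.

```csharp
using System;

static class CacheKeyDemo
{
    // Hypothetical key format: the DataSet name plus the retrieval parameters.
    public static string BuildCacheKey(string dataSetName, params object[] parameters)
        => dataSetName + "(" + string.Join(",", parameters) + ")";

    static void Main()
    {
        // Each distinct parameter list yields a distinct key,
        // so each variant of the retrieved data is cached separately.
        Console.WriteLine(BuildCacheKey("CustomersDS", 1));         // CustomersDS(1)
        Console.WriteLine(BuildCacheKey("CustomersDS", 2, "East")); // CustomersDS(2,East)
    }
}
```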

The guidance from the patterns & practices team suggests that you think about which data should be cached. The guidance states that only nontransactional data that is static or semi-static should be cached. Caching too much data or the wrong type of data can sometimes be worse than not caching any data at all. Semi-static data is the most interesting case, because a design is needed for how to handle cached data that gets updated, inserted, or deleted. I have chosen the safest route by ensuring the least amount of data "staleness": I remove cached data if it is modified in any way. Another approach would be to update both the cache and the data source as data is updated, inserted, and deleted.

The DataMappingProviderFactory Class and the IDataMappingProvider Interface

As previously stated, when data mapping needs to be performed in a data access logic component, a developer only needs to derive a class from the base DataMapper class. Under the covers, the DataMapper calls the DatabaseWrapper, which uses a DataMappingFactory to obtain a DataMappingCollection from a data mapping provider. Although developers do not need to call the DataMappingFactory directly in their data access logic components, they can still use it when information about the data mappings of one data access logic component is needed from another.

For example, if the data mappings for an Orders data access logic component need to be known when performing an operation in a Customers data access logic component, the DataMappingFactory can be used to obtain the DataMappingCollection that contains the mapping information for all data access logic components.

All data mapping providers should implement the IDataMappingProvider interface to provide support for the DataMappingCollection. The DataMappingProviderFactory class uses configuration information to create an instance of an IDataMappingProvider. Like the application blocks that ship with Enterprise Library, a factory also exists that provides static methods that wrap around the public methods of the DataMappingProviderFactory class. This factory is named DataMappingFactory (no Provider in the name). Either class can be used to obtain either a named instance of a DataMappingProvider or one that is configured as the default DataMappingProvider.

Both the DataMappingFactory and the DataMappingProviderFactory classes expose a method named GetDataMappingProvider that returns an instance of a DataMappingProvider. The GetDataMappingProvider method has two overloads: one expects a string that names a DataMappingProvider that has been configured for the application, and the other takes no arguments. The first overload returns the named DataMappingProvider, and the second returns the one configured as the default DataMappingProvider for the application. Figure A.3 shows the relationship between the DataMappingFactory, the DataMappingProviderFactory, the IDataMappingProvider interface, and the DataMappingCollection class. The Data Mapping Application Block includes an implementation of the IDataMappingProvider interface by way of the DataSetMappingProvider.
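Assuming the class and method names above, the two overloads would be called along these lines (a sketch only; "OrdersMappingProvider" is a hypothetical configured provider name):

```csharp
// Obtain the default provider, or a named one, from the static factory facade.
IDataMappingProvider defaultProvider = DataMappingFactory.GetDataMappingProvider();
IDataMappingProvider namedProvider =
    DataMappingFactory.GetDataMappingProvider("OrdersMappingProvider");
```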

Figure A.3. DataMapping Factories and Providers in the Data Mapping Application Block


The DataSetMappingProvider

The Data Mapping Application Block includes a data mapping provider named DataSetMappingProvider that uses configuration data maintained for the various classes in a DataSet to aid in its data mapping. The configuration data for this provider is contained in an object named the DataSetMappingDataCollection, which is hierarchical in nature and is intentionally meant to resemble the hierarchy of a collection of DataSets. Figure A.4 depicts the hierarchy for the objects contained in the DataSetMappingDataCollection.

Figure A.4. DataSetMappingDataCollection Object Hierarchy


The DataSetMappingDataCollection's root level contains a collection of DataSetMapping objects. A DataSetMapping object represents mapping information for a strongly typed DataSet. It contains information about the DatabaseInstance to which it is bound, the transaction IsolationLevel for wrapping transactions around data commands, a CacheSettings object that holds information about how the DataSet may be cached, a DataCollection of DataTableMappings, and a DataCollection of SelectCommandMappings.

The CacheSettings object represents the settings to use for caching a DataSet. It contains a CacheItemRefreshAction, a DataCollection of CacheItemExpirations, and a CacheItemPriority setting. These settings allow a concrete DataMapper to pass the arguments it needs to the Caching Application Block and have the appropriate refresh, expirations, and priorities set.

DataTableMappings simply contain a DataCollection of CommandMappings and a table name. CommandMappings contain information about the type of CommandStatement (Select, Insert, Update, Delete, or InsertUpdate) it uses, the name of the stored procedure to which it is bound, the CommandTimeout period (if any), and a DataCollection of CommandParameterMappings. CommandParameterMappings simply contain the name of a stored procedure parameter and the SourceColumn from the DataTable to which it should be mapped (if any).
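The hierarchy just described might be rendered in configuration along these lines. This is a hypothetical sketch meant only to illustrate the shape of the data; the actual element and attribute names are defined by the block's configuration schema and are not reproduced here.

```xml
<!-- Hypothetical shape only; real names come from the block's configuration schema. -->
<dataSetMapping name="CustomersDS"
                databaseInstance="EnterpriseLibraryData"
                isolationLevel="ReadCommitted">
  <cacheSettings enabled="true" priority="Normal">
    <!-- One of possibly several configured expiration policies. -->
    <slidingTimeExpiration window="00:05:00" />
  </cacheSettings>
  <dataTableMapping tableName="Customers">
    <commandMapping commandStatement="InsertUpdate"
                    storedProcName="UpdateCustomers"
                    commandTimeout="30">
      <commandParameterMapping parameter="@CustomerName" sourceColumn="CustomerName" />
      <!-- No sourceColumn: an explicit parameter supplied at runtime. -->
      <commandParameterMapping parameter="@LastUpdate" />
    </commandMapping>
  </dataTableMapping>
  <selectCommandMapping storedProcName="GetCustomers" />
</dataSetMapping>
```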




Effective Use of Microsoft Enterprise Library: Building Blocks for Creating Enterprise Applications and Services
ISBN: 0321334213
Year: 2004
Pages: 103
Authors: Len Fenster
