The design goals for the Data Mapping Application Block were to:

- Provide an API that makes it easy to create data access logic components that follow the best practices promoted by the Microsoft patterns & practices team.
- Make those data access logic components highly configurable, so that operational concerns such as transactions, caching, and parameter mapping can be changed without recoding.
In short, the design goals for the Data Mapping Application Block were to provide an API that makes it easy to create data access logic components that subscribe to the best practices promoted by the Microsoft patterns & practices team, and to make those data access logic components highly configurable. Examples of configurable properties include whether database operations should occur in the scope of a transaction, whether the business data should be cached by the data access logic component, and which stored procedure parameters map to which fields in the business entity. The vision is that this frees developers to concentrate on adding business logic instead of worrying about operational issues. This is not to say that you don't need to be concerned about such matters; rather, configuring and modifying them should be extremely easy and should not necessitate recoding when changes occur. Figure A.1 shows the primary classes that are responsible for providing the core functionality for the Data Mapping Application Block.

Figure A.1. Design of the Data Mapping Application Block

The DatabaseWrapper Class

The DatabaseWrapper acts as a service layer that encapsulates the Data Access Application Block. This service layer takes advantage of the Data Access Application Block's support for obtaining typed DataSets and propagating the changes in those DataSets to the backend database. The DatabaseWrapper is core to the functionality of the Data Mapping Application Block because it is in this class that configuration information is used to determine how to map stored procedure parameters to DataFields (that is, perform the data mapping), determine the value for a CommandTimeout, and obtain and use information about the database transaction isolation level. The idea behind the DatabaseWrapper is simply to make it easy to retrieve typed DataSets from a data source and to update a data source given the typed DataSet.
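As a rough illustration of what "mapping from configuration" means, the following Python sketch (all names are hypothetical, and this is not the block's actual C# API) shows how a configuration table can drive the binding of entity fields to stored procedure parameters, so that no mapping code needs to be written by hand:

```python
# Illustrative sketch only: configuration data maps stored procedure
# parameter names to source column names, so a wrapper can bind parameter
# values without hand-written mapping code. All names are hypothetical.

PARAMETER_MAPPINGS = {
    "CustomersInsert": {              # stored procedure name
        "@CustomerID": "CustomerID",  # parameter -> source column
        "@CompanyName": "CompanyName",
    },
}

def build_parameter_values(proc_name, row):
    """Resolve stored procedure parameter values from a row dictionary
    using the configured parameter-to-column mappings."""
    mappings = PARAMETER_MAPPINGS[proc_name]
    return {param: row[column] for param, column in mappings.items()}

row = {"CustomerID": "ALFKI", "CompanyName": "Alfreds Futterkiste"}
values = build_parameter_values("CustomersInsert", row)
# values == {"@CustomerID": "ALFKI", "@CompanyName": "Alfreds Futterkiste"}
```

Changing which column feeds which parameter then becomes a configuration edit rather than a code change, which is precisely the design goal stated above.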
It hides the complexities of creating the configured database provider, properly setting TableMappings for the DataSet, mapping DataColumns and non-DataSet columns to stored procedure parameters, and handling transactions. Table A.1 lists the public methods and properties for the DatabaseWrapper class.
The value that the DatabaseWrapper provides is most evident when it is used to propagate the changes that occur in a typed DataSet back to the data source from which it originated. Chapter 3 provided a fair amount of detail about the UpdateDataSet method that is used for updating a DataSet. And while the UpdateDataSet method is very powerful, it would be easier to use if it could read from configuration which stored procedures to call for each table and which fields map to specific stored procedure parameters. For example, in Chapter 3, Listing 3.16 illustrated using the UpdateDataSet method to propagate a single insert and a single update from a typed DataSet back to the database. A fair bit of that code was specific to the name of the stored procedure to call for an insert, the name of the stored procedure to call for an update, and the names of the DataFields in the DataSet that should be mapped to the stored procedure's parameters. Such information can certainly be viewed as configuration information, and by placing the retrieval of this information behind the data service layer, Listing 3.16 could be simplified to look more like the code shown in Listing A.1. The DatabaseWrapper class contains a method named PutDataSet that reads the configuration information that is set for a particular typed DataSet and invokes the various methods that must be called to update the DataSet by using the Data Access Application Block.

Listing A.1. Using PutDataSet with the DatabaseWrapper Class
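Listing A.1 itself is C# code against the block's API and is not reproduced here. The following Python sketch (all names hypothetical) only illustrates the underlying idea: configuration, rather than code, chooses which stored procedure to invoke for each kind of change in a table.

```python
# Hedged sketch of the idea behind a PutDataSet-style method: configuration
# supplies the insert/update/delete command for each table, so callers no
# longer name stored procedures in code. All names are illustrative.

TABLE_COMMANDS = {  # hypothetical configuration for one typed DataSet
    "Orders": {"insert": "OrdersInsert",
               "update": "OrdersUpdate",
               "delete": "OrdersDelete"},
}

def commands_for_changes(table_name, changed_rows):
    """Pick the configured stored procedure for each changed row based on
    its row state ('added', 'modified', or 'deleted')."""
    state_to_command = {"added": "insert",
                       "modified": "update",
                       "deleted": "delete"}
    config = TABLE_COMMANDS[table_name]
    return [config[state_to_command[state]] for state, _ in changed_rows]

changes = [("added", {"OrderID": 1}), ("modified", {"OrderID": 2})]
procs = commands_for_changes("Orders", changes)
# procs == ["OrdersInsert", "OrdersUpdate"]
```

Renaming a stored procedure, or pointing an update at a different one, then touches only configuration, never the data access logic component itself.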
This becomes even more important when two-sweeps operations and transaction support are needed when performing the database operation.

Two-Sweeps Operations

When a DataSet contains multiple tables and foreign-key relationships between those tables, determining the order in which Create, Update, and Delete operations need to be performed on the DataTables can get fairly complicated. It is important to update in the proper sequence to reduce the chance of violating referential integrity constraints. To prevent data integrity errors from being raised, the best practice is to update the data source in the following sequence.

1. Delete records in the child tables.
2. Insert, update, and delete records in the parent tables.
3. Insert and update records in the child tables.
For example, suppose you have developed an application that lets you modify both the Order and the OrderDetail information. A single DataSet can contain both an Orders table and an OrderDetails table, with a parent-child (hierarchical) relationship between the tables. When updating this DataSet, it is important to follow this sequence: delete the removed OrderDetails records first, then insert, update, and delete Orders records, and finally insert and update OrderDetails records.
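The ordering for such a parent-child DataSet can be sketched as follows (Python, purely illustrative; the table relationships shown are hypothetical): deletes walk the hierarchy child-first, while inserts and updates walk it parent-first.

```python
# Illustrative sketch of two-sweeps ordering: deletes run child-first and
# inserts/updates run parent-first, so referential integrity constraints
# are never violated mid-update. Relationships here are hypothetical.

CHILDREN = {"Orders": ["OrderDetails"], "OrderDetails": []}

def two_sweeps_order(root):
    """Return the operation order: child deletes first, then parent-first
    inserts/updates."""
    parent_first = []

    def walk(table):  # depth-first, parents before children
        parent_first.append(table)
        for child in CHILDREN[table]:
            walk(child)

    walk(root)
    child_first = list(reversed(parent_first))
    return ([("delete", t) for t in child_first] +
            [("insert/update", t) for t in parent_first])

order = two_sweeps_order("Orders")
# order == [("delete", "OrderDetails"), ("delete", "Orders"),
#           ("insert/update", "Orders"), ("insert/update", "OrderDetails")]
```

A real implementation would also need to detect circular relationships between tables, which this sketch omits; as noted below, the block performs the two sweeps only when no circular relationship exists.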
This process is called a two-sweeps update procedure. The Data Access Application Block does not implement a two-sweeps update procedure when UpdateDataSet is called. Currently the Data Access Application Block allows update, insert, and delete commands for each table as it is updated; however, to support a two-sweeps update, the block would need information about the commands for all tables prior to the update of any one table. The Data Mapping Application Block keeps this kind of information about all the tables in a DataSet in its configuration data. The DatabaseWrapper recursively navigates down through the child tables in a DataSet to execute the proper commands and perform a two-sweeps update, as long as there is no circular relationship between the tables.

Transaction Support

A common requirement when an application executes multiple operations against a database is that all of the operations must succeed or the database must roll back to its state before the operations began. This all-or-nothing requirement is called a transaction. Transactions ensure the integrity of a database system's state. In most circumstances, the root of the transaction is the business process rather than a data access logic component or a business entity component, because business processes typically require transactions that span multiple business entities, not just a single business entity. However, situations can arise where transactional operations are needed on a single business entity without the assistance of a higher-level business process. If such requirements exist and there is no possibility that the operation will be part of a larger business process that initiates the transaction, then manual transactions are an acceptable solution. Manual transactions allow explicit control of the transaction boundary with explicit instructions to begin and end the transaction.
When manual transactions are implemented in data access logic components, the following recommendations need to be considered.
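Whatever recommendations are followed, manual transaction code itself has a simple begin/commit/rollback shape. Here is a minimal, runnable sketch using Python's standard sqlite3 module (illustrative only; the block itself works with ADO.NET IDbTransaction objects):

```python
# Minimal manual-transaction sketch: begin, execute every operation,
# commit only if all succeed, otherwise roll everything back and re-raise.
import sqlite3

def run_in_transaction(connection, operations):
    """Execute all (statement, params) pairs inside one transaction."""
    cursor = connection.cursor()
    cursor.execute("BEGIN")
    try:
        for statement, params in operations:
            cursor.execute(statement, params)
        cursor.execute("COMMIT")
    except Exception:
        cursor.execute("ROLLBACK")
        raise

# Autocommit connection: the helper controls transaction boundaries itself.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY)")
try:
    run_in_transaction(conn, [
        ("INSERT INTO Orders VALUES (?)", (1,)),
        ("INSERT INTO Orders VALUES (?)", (1,)),  # primary-key violation
    ])
except sqlite3.IntegrityError:
    pass  # the first insert was rolled back along with the failed one
```

Because the second insert fails, the first is rolled back with it and the table remains empty, which is exactly the all-or-nothing behavior a transaction guarantees.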
The Data Access Application Block supports the use of transactions through overloaded methods that accept a class that implements the IDbTransaction interface. The Data Mapping Application Block takes advantage of this functionality by creating an IDbTransaction, beginning the transaction, and passing it into a call to the Data Access Application Block when the particular database operation is configured to be wrapped in a transaction. A rollback or commit is performed depending on whether any exceptions were thrown during the execution of the operation. Thus, the code needed to wrap a specific database operation in a transaction does not need to be written in a data access logic component. The transaction level can be configured at the DataSet level, and when a database operation is performed for that DataSet, it will occur in the scope of a transaction.

The DataMapper Class

The DataMapper class is at the core of the Data Mapping Application Block. It is an abstract base class whose intent is to make it easy for developers to create data access logic components for managing Create, Retrieve, Update, and Delete (CRUD) operations on the business entities for an application. Data access logic components are recommended for accessing business data because they abstract the semantics of the underlying data store and data access technology, and they provide a simple programmatic interface for retrieving and performing operations on business entities. Data access logic components provide the logic required to access specific business data, while generic database providers (also known as data access helper components) centralize the data access API development and data connection configuration, and help reduce code duplication. Implementing data access logic components allows all the data access logic for a given business entity to be encapsulated in a single central location, making the application easier to maintain or extend.
Data access logic components should:
One of the largest uses of data access logic components is to perform the mappings and transformations needed between business entities and a relational data store. The primary intent of the Data Mapping Application Block is to move this mapping from being code driven to being configuration driven. With the Data Mapping Application Block, you create a business data access logic component by deriving a class from the abstract DataMapper base class and overriding the abstract DataSetType function. As the base class, the DataMapper is responsible for managing the mappings, transactions, and caching for the business data. As an application block, it takes advantage of Enterprise Library's configuration capabilities to read its settings from configuration, thus letting you reap the benefits of mapping, transactions, and caching in a "codeless" manner. Listing A.2 demonstrates how to create a data access logic component for working with a Customers DataSet.

Listing A.2. Creating a Customers Data Access Logic Component
You'll notice that there is no code for setting transactions, caching, or mapping fields to parameters. As you will see later, this is as true for inserts, updates, and deletes as it is for reads from the data source. The base DataMapper class does this work for the derived class. Table A.2 lists the methods and properties for the DataMapper class.
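The derive-and-override pattern just described can be sketched outside of C# as well. The following Python sketch (all names are illustrative, not the block's real members) shows its shape: the abstract base class owns the shared machinery, and a concrete component only declares which typed DataSet it manages.

```python
# Illustrative sketch of the derive-and-override pattern: the base class
# carries the shared mapping/caching/transaction machinery (stubbed here),
# and a subclass only identifies its typed DataSet. Names are hypothetical.
from abc import ABC, abstractmethod

class DataMapperSketch(ABC):
    @abstractmethod
    def dataset_type(self):
        """Return the name of the typed DataSet this component manages."""

    def get_dataset(self, **criteria):
        # A real base class would look up configuration (mappings, caching,
        # transactions) keyed by the dataset type; stubbed out here.
        return {"dataset": self.dataset_type(), "criteria": criteria}

class CustomersMapper(DataMapperSketch):
    def dataset_type(self):
        return "CustomersDataSet"

mapper = CustomersMapper()
result = mapper.get_dataset(customer_id="ALFKI")
# result == {"dataset": "CustomersDataSet",
#            "criteria": {"customer_id": "ALFKI"}}
```

The design choice mirrored here is that everything operational lives in the base class, so each new business entity costs only a tiny subclass.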
Caching Data

The guidance documented in the section "Caching in the Data Services Layer" in Chapter 3 of the Caching Architecture Guide for .NET Framework Applications[3] states that because of the relatively high performance costs of opening a connection to a database and querying the data stored in it, data elements are excellent caching candidates and are commonly cached. Additionally, DataSets are excellent caching candidates because:
A design goal for the Data Mapping Application Block in Chapter 9 was to provide the ability to cache data in a data access logic component and to make the caching of data a configurable setting. With this capability, it also became important to allow multiple expiration policies to be configured for a data access logic component, so that cached data can expire per the settings an application needs, and to allow CacheItemRefreshActions to be set so that actions can be taken in the application when data expires. The classes shown in Figure A.2 highlight the design for caching in the Data Mapping Application Block.

Figure A.2. Classes Used to Provide Caching Support in the Data Mapping Application Block

The CacheSettings object contains information that lets the DataMapper know whether it should cache its data. The CacheSettings object also encapsulates a collection of CacheExpirationPolicies and a CacheRefreshAction (aka CacheRemovalCallback). To let users set properties for the CacheExpirationPolicies at deployment time, I needed to create a wrapper class around each of the existing implementations of ICacheItemExpiration, because a node based solely on an interface cannot be instantiated. Therefore, I created an abstract base class named CacheExpiration that encapsulates an ICacheItemExpiration, and all CacheItemExpirationPolicies derive from this class. When configuration data is read in for use by the DataMapper, it indicates whether the DataMapper should cache its data, and if so, with which expiration policies and RefreshAction. If a DataMapper is configured to cache its data, it checks the cache first to ascertain whether the data exists. The key for caching the data is a combination of the DataSet's name and the list of parameters that were used to retrieve it. This allows a separate copy of each of the different variants that can occur for the retrieval of this data to be cached.
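The caching behavior described here can be sketched as follows (Python, purely illustrative; the block itself delegates this work to the Caching Application Block): the key combines the DataSet name with the retrieval parameters, reads check the cache first, and any modification simply evicts the cached copy.

```python
# Illustrative cache-aside sketch with the remove-on-modify policy: each
# distinct (DataSet, parameters) retrieval caches under its own key, and
# inserts/updates/deletes evict the cached copy so stale data is never
# served. All names are hypothetical.

cache = {}

def make_cache_key(dataset_name, parameters):
    """Build a deterministic key from the DataSet name and the ordered
    retrieval parameter values."""
    return dataset_name + ":" + "|".join(repr(p) for p in parameters)

def load_from_source(key):
    return {"loaded": key}  # stand-in for querying the database

def get_data(dataset_name, parameters):
    """Return cached data when present; otherwise load it and cache it."""
    key = make_cache_key(dataset_name, parameters)
    if key not in cache:
        cache[key] = load_from_source(key)
    return cache[key]

def modify_data(dataset_name, parameters):
    """Evict the cached copy on insert/update/delete (remove-on-modify)."""
    cache.pop(make_cache_key(dataset_name, parameters), None)
    # ... the change would be written through to the data source here ...

get_data("CustomersDataSet", ["ALFKI"])     # cached under its own key
get_data("CustomersDataSet", [])            # a different variant, cached separately
modify_data("CustomersDataSet", ["ALFKI"])  # only that variant is evicted
```

Note how the parameter list is part of the key: retrieving all customers and retrieving one customer are cached, and evicted, independently.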
If the data exists in the cache, it is returned. Otherwise, it is retrieved from the data source. If caching is enabled, the retrieved data is added to the cache with the appropriate expiration policies and RefreshAction per the configuration settings; the data is then returned. The guidance from the patterns & practices team suggests that you think carefully about which data should be cached: only nontransactional data that is static or semi-static should be cached. Caching too much data or the wrong type of data can sometimes be worse than not caching any data at all. The semi-static data is the most interesting, because a design needs to be applied for how to handle cached data that gets updated, inserted, or deleted. I have chosen to take the safest route by ensuring the least amount of data "staleness": I remove cached data if it is modified in any way. Another approach would be to update both the cache and the data source as data is updated, inserted, and deleted.

The DataMappingProviderFactory Class and the IDataMappingProvider Interface

As previously stated, when data mapping needs to be performed in a data access logic component, a developer only needs to derive a class from the base DataMapper class. Under the covers, the DataMapper calls the DatabaseWrapper, which uses a DataMappingFactory to obtain a DataMappingCollection from a data mapping provider. Although developers are abstracted away from needing to call the DataMappingFactory directly in their data access logic components, they can still use it in situations where information about the data mappings of one data access logic component is needed from another.
For example, if the information about the data mappings for an Orders data access logic component needed to be known when performing an operation in a Customers data access logic component, the DataMappingFactory can be used to obtain the DataMappingCollection that contains the mapping information for all data access logic components. All data mapping providers should implement the IDataMappingProvider interface to provide support for the DataMappingCollection. The DataMappingProviderFactory class uses configuration information to create an instance of an IDataMappingProvider. As with the application blocks that ship with Enterprise Library, a factory also exists that provides static methods that wrap the public methods of the DataMappingProviderFactory class. This factory is named DataMappingFactory (no Provider in the name). Either class can be used to obtain a named instance of a DataMappingProvider or the one that is configured as the default DataMappingProvider. Both the DataMappingFactory and the DataMappingProviderFactory class expose a method named GetDataMappingProvider that returns an instance of a DataMappingProvider. The GetDataMappingProvider method has two overloads: one expects a string that represents the name of a DataMappingProvider that has been configured for the application, and the other does not expect any arguments. The first overload returns the named DataMappingProvider, and the second returns the DataMappingProvider that has been configured as the default DataMappingProvider for the application. Figure A.3 shows the relationship between the DataMappingFactory, the DataMappingProviderFactory, the IDataMappingProvider interface, and the DataMappingCollection class. The Data Mapping Application Block includes an implementation of the IDataMappingProvider interface by way of the DataSetMappingProvider.

Figure A.3. DataMapping Factories and Providers in the Data Mapping Application Block
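The two overloads can be sketched as a single lookup with an optional name (Python; the configuration shape and names are hypothetical, not the block's real implementation):

```python
# Illustrative sketch of the factory lookup: one path returns a named
# provider, and the parameterless path returns whichever provider is
# configured as the default. All names here are hypothetical.

PROVIDERS = {"DataSetProvider": {"kind": "dataset-mapping"}}
DEFAULT_PROVIDER_NAME = "DataSetProvider"

def get_data_mapping_provider(name=None):
    """Mirror the two overloads: return the named provider, or the
    configured default when no name is supplied."""
    if name is None:
        name = DEFAULT_PROVIDER_NAME
    return PROVIDERS[name]

# With this configuration, the default path and the named path resolve
# to the same provider instance.
assert get_data_mapping_provider() is get_data_mapping_provider("DataSetProvider")
```

Keeping the default's name in configuration means an application can swap providers at deployment time without touching any caller.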
The DataSetMappingProvider

The Data Mapping Application Block includes a data mapping provider named DataSetMappingProvider that uses configuration data maintained for the various classes in a DataSet to aid in its data mapping. The configuration data for this provider is contained in an object named the DataSetMappingDataCollection, which is hierarchical in nature and is intentionally meant to resemble the hierarchy of a collection of DataSets. Figure A.4 depicts the hierarchy of the objects contained in the DataSetMappingDataCollection.

Figure A.4. DataSetMappingDataCollection Object Hierarchy

The DataSetMappingDataCollection's root level contains a collection of DataSetMapping objects. A DataSetMapping object represents the mapping information for a strongly typed DataSet. It contains information about the DatabaseInstance to which it is bound, the transaction IsolationLevel for wrapping transactions around data commands, a CacheSettings object that holds information about how the DataSet may be cached, a DataCollection of DataTableMappings, and a DataCollection of SelectCommandMappings. The CacheSettings object represents the settings to use for caching a DataSet. It contains a CacheItemRefreshAction, a DataCollection of CacheItemExpirations, and a CacheItemPriority setting. These settings allow a concrete DataMapper to pass the arguments it needs to the Caching Application Block and have the appropriate refresh actions, expirations, and priorities set. DataTableMappings simply contain a DataCollection of CommandMappings and a table name.
CommandMappings contain information about the type of CommandStatement (Select, Insert, Update, Delete, or InsertUpdate) each uses, the name of the stored procedure to which it is bound, the CommandTimeout period (if any), and a DataCollection of CommandParameterMappings. CommandParameterMappings simply contain the name of a stored procedure parameter and the SourceColumn from the DataTable to which it should be mapped (if any).
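To make the hierarchy concrete, here is the same shape as nested data (Python; every key and value is hypothetical, not the block's real configuration schema): DataSet mappings contain table mappings, which contain command mappings, which contain parameter mappings.

```python
# Illustrative sketch of the DataSetMappingDataCollection hierarchy as
# nested data. All keys and values are hypothetical.

dataset_mappings = {
    "CustomersDataSet": {
        "database_instance": "Northwind",       # hypothetical instance name
        "isolation_level": "ReadCommitted",
        "tables": {
            "Customers": {
                "commands": {
                    "Insert": {
                        "stored_procedure": "CustomersInsert",
                        "command_timeout": 30,
                        "parameters": {"@CustomerID": "CustomerID"},
                    },
                },
            },
        },
    },
}

def parameter_source_column(dataset, table, command, parameter):
    """Walk the hierarchy to find the source column for a parameter."""
    cmd = dataset_mappings[dataset]["tables"][table]["commands"][command]
    return cmd["parameters"][parameter]

col = parameter_source_column("CustomersDataSet", "Customers",
                              "Insert", "@CustomerID")
# col == "CustomerID"
```

Every operational detail the appendix discusses, from isolation level to parameter mapping, hangs off one level of this tree, which is why a single configuration file can drive the whole block.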