Using the Data Access Application Block's API

One of the reasons that the Data Access Application Block has been among the most popular application blocks produced by the Microsoft patterns & practices team is that it is very easy to use. It was important that this remain the case for the Enterprise Library version of the Data Access Application Block as well. The DatabaseFactory, database providers, and database command wrappers that were discussed in the previous section all play a role in achieving the design goal of a simple and consistent development API.

Developing an application to use Enterprise Library's Data Access Application Block involves two types of activities: configuring the application with the appropriate database providers, and writing code that calls the API exposed by the Data Access Application Block to retrieve and update data in a database. This section provides detailed information on both of these tasks. First it details how to configure a database provider so that you can use it in an application, and then it specifies which database provider methods you should use to retrieve and update information and why. The section concludes with an overview of data access logic components and why they are recommended as the typical primary controller for managing database providers in a distributed application.

Configuring a Database Provider

Once the database command wrapper and database provider classes exist, a database provider can be returned by the Data Access Application Block by adding its configuration information to the rest of the configuration information for the block. By default, the configuration information for the Data Access Application Block is kept separate from the rest of the configuration information for an application.
The default setting in the metaconfiguration data (the configuration data that describes information about an application's configuration) is for the configuration data to be stored in an XML file named dataConfiguration.config. You can change this location to a different file, the application domain configuration file (app.config or web.config), the Windows Registry, or a relational database. For details about modifying the metaconfiguration data for an application, see Chapter 1.

The first step in using the Data Access Application Block in an application is to add the block to the overall configuration for the application. Although this can be accomplished manually, the easiest and least error-prone way to do it is with the Enterprise Library Configuration Tool (see Chapter 2). To add the Data Access Application Block using the Configuration Tool, open the application domain configuration file for the application in the tool, right-click on the application node, and select New > Data Access Application Block. Figure 3.4 shows the resulting configuration hierarchy. A new configuration section is added to the nodes underneath the settings for the Configuration Application Block, and a configuration hierarchy has been created to hold the settings for the Data Access Application Block.

Figure 3.4. Hierarchy after Adding the Data Access Application Block to an Application
Configuring a database provider so that it can be used by the Data Access Application Block involves adding several elements to the configuration for the block: the database type, the connection string, and the database instance.

Adding a Database Type

The database type specifies information about a database provider so that it can be created by the Data Access Application Block. You must add an XML element to the configuration data that specifies a logical name and fully qualified assembly name for a database provider. To add a database type to the Data Access Application Block, right-click the Database Types configuration node and select New > Database Type. The Property pane for the new configuration node will contain properties for the logical name of this database type and its TypeName. The TypeName is the fully qualified assembly name for a database provider. Click the ellipsis button in this property to display the TypeSelector dialog, which shows only the classes that the Configuration Tool recognizes as subclasses of the abstract base Database class. Figure 3.5 illustrates adding a database type that represents the custom XmlFileDatabase database provider to the configuration for the Data Access Application Block.

Figure 3.5. Adding the XmlFileDatabase Database Type to the Data Access Application Block

There is no limit to the number of database types that you can add to the configuration for an application. Because Enterprise Library automatically added a database type for the SqlDatabase database provider when the block was added to the application's overall configuration, adding the XmlFileDatabase database type results in two database types for this application. Listing 3.6 illustrates the section of the configuration data that has been modified due to the additional database type (the listing has been formatted for legibility).

Listing 3.6. Resulting Configuration from Adding a New Database Type
Adding a Connection String

The connection string specifies the physical location of a database. A database provider uses the information contained in the connection string to open the particular data source; for a database provider to get past the point of opening a connection to the data source, a valid connection string must be supplied. The .NET managed data provider interprets how to handle the connection string it is given. For example, because the .NET managed provider for XML files does not actually work with a relational database, the provider has been written to expect the value of the database parameter to contain the name of the file that it will use to store and retrieve data. The .NET managed provider for SQL Server, on the other hand, expects parameters that contain values for the name of the database server, the name of the database, whether a trusted connection is to be used, and optionally a user ID and password (if the trusted connection is set to false). Oracle's provider expects these same parameters plus information that represents Oracle package information.

To add a connection string, right-click on the Connection Strings configuration node and, unless you need to create a connection string for Oracle, select New > Connection String. If you do need to create a connection string for Oracle, select New > Oracle Connection String; this adds a special configuration node for dealing with Oracle package information. Adding a connection string automatically adds configuration nodes for the database, server, and Trusted Connection parameters. If you don't need these parameters, you can remove them by right-clicking on the node and selecting Remove. For the XmlFileDatabase, the only parameter that is needed is the database parameter; therefore, I have removed all the other configuration nodes.
Figure 3.6 shows the resulting connection string after these nodes have been removed and the name of the XML file has been set for the database parameter.

Figure 3.6. Setting the Connection String for the XmlFileDatabase

Adding a Database Instance

A database instance is the logical representation of a database: it couples a database type with a connection string. This is needed because a particular database provider can be used to access more than one database (e.g., accessing multiple SQL Server databases only requires that the SqlDatabase provider be added to the configuration as a database type once). To add a database instance, right-click on the Database Instances configuration node and select New > Database Instance. The resulting properties for this node will be prepopulated with values based on the connection strings and database types that have already been added to the configuration for this block. To change these settings, choose a new value from the drop-down box for a particular property. For example, Figure 3.7 illustrates changing the new database instance named PortalConfiguration from using the SQL Server database type to using the XML file database type. This change has the effect of creating and returning an XmlFileDatabase database provider instead of a SqlDatabase database provider whenever the CreateDatabase method is called for the PortalConfiguration database instance.

Figure 3.7. Changing the Properties for a Database Instance

Listing 3.7 depicts the configuration information that results from making these changes for a new database provider to be supported by the Data Access Application Block. Hopefully, it is evident from this listing that it is far easier and less error-prone to use the Enterprise Library Configuration Tool to make these changes than it would be to create or modify this data manually. The listing has been formatted for legibility.

Listing 3.7. Resulting Configuration Data for the Data Access Application Block
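As a rough orientation for readers who do not have the listing in front of them, configuration data of this kind has approximately the following shape. The element and attribute names below are approximations, not the exact schema: the authoritative version is whatever the Configuration Tool actually writes to dataConfiguration.config, and the type strings shown here are illustrative.

```xml
<!-- Hedged sketch of dataConfiguration.config; exact element and
     attribute names may differ from what the Configuration Tool emits. -->
<dataConfiguration>
  <databaseTypes>
    <databaseType name="Sql Server"
        type="Microsoft.Practices.EnterpriseLibrary.Data.Sql.SqlDatabase, Microsoft.Practices.EnterpriseLibrary.Data" />
    <databaseType name="XmlFileDatabase"
        type="XmlFileDatabase.XmlFileDatabase, XmlFileDatabase" />
  </databaseTypes>
  <connectionStrings>
    <connectionString name="PortalConfigurationConnection">
      <parameters>
        <!-- For the XmlFileDatabase, only the database parameter is
             needed; it names the backing XML file. -->
        <parameter name="database" value="PortalConfiguration.xml" />
      </parameters>
    </connectionString>
  </connectionStrings>
  <instances>
    <!-- The instance couples a database type with a connection string. -->
    <instance name="PortalConfiguration"
        type="XmlFileDatabase"
        connectionString="PortalConfigurationConnection" />
  </instances>
</dataConfiguration>
```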
Once the configuration information for a database instance has been successfully saved, that database instance can be referenced in code for performing data access and updates against a particular data store. Technically, the name of the database instance can be used even before the configuration information is saved for an application; however, the application will then fail at runtime because it will not be able to find the configuration information for the referenced database instance.

Developing with the Data Access Application Block

For almost all situations, using the DatabaseFactory and a known database instance is the starting point for performing any data access or update procedure. Once the database providers have been configured for an application, they can be used to access and update data in the data source for which they have been configured. Many methods exist that allow database providers to retrieve data from a backend database or add, delete, or modify the data in the database. The following sections detail how to use the Data Access Application Block to retrieve data from a database and update the data in it.

Retrieving Data

Database providers expose many public methods that can be used for accessing data from a backend data source. The method that should be used for a particular situation depends on many details; one of them is simply the amount or type of data that needs to be accessed. For example, you can use the ExecuteScalar method to retrieve a single value, and the ExecuteNonQuery method to retrieve multiple values via the output parameters of a stored procedure. Listing 3.8 illustrates how these two methods differ by calling a stored procedure to get the unique identifier for a customer with the ExecuteScalar method, and then using this identifier to obtain more details about that customer with the ExecuteNonQuery method.

Listing 3.8. Using the ExecuteScalar and ExecuteNonQuery Methods for Retrieving Data
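A minimal sketch of this pattern follows (it is not the book's listing). The database instance name, the stored procedure names, and the parameter names and sizes are all illustrative assumptions; the hedged point is simply the difference between the two calls: ExecuteScalar returns one value, while ExecuteNonQuery returns values through output parameters read back from the command wrapper.

```csharp
using System;
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerReader
{
    public static void GetCustomerDetails()
    {
        // Create the database provider for a configured instance
        // ("Customers" is a hypothetical instance name).
        Database db = DatabaseFactory.CreateDatabase("Customers");

        // ExecuteScalar: retrieve a single value (the customer's ID).
        DBCommandWrapper getId =
            db.GetStoredProcCommandWrapper("GetCustomerId");
        getId.AddInParameter("CustomerName", DbType.String, "Northwind Traders");
        int customerId = Convert.ToInt32(db.ExecuteScalar(getId));

        // ExecuteNonQuery: retrieve multiple values through the
        // stored procedure's output parameters.
        DBCommandWrapper getDetails =
            db.GetStoredProcCommandWrapper("GetCustomerDetails");
        getDetails.AddInParameter("CustomerId", DbType.Int32, customerId);
        getDetails.AddOutParameter("CompanyName", DbType.String, 50);
        getDetails.AddOutParameter("ContactName", DbType.String, 50);
        db.ExecuteNonQuery(getDetails);

        // The output values are read back from the command wrapper.
        string company = (string)getDetails.GetParameterValue("CompanyName");
        string contact = (string)getDetails.GetParameterValue("ContactName");
    }
}
```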
In addition to the ExecuteScalar and ExecuteNonQuery methods for accessing particular data values, there are several other methods you can use to retrieve multiple rows of data. These methods can be categorized by the approach used for retrieving the data: connected access or disconnected access.

Connected Access

No discussion about accessing data would be complete without mentioning DataReaders. DataReaders provide noncached, read-only, forward-only access to data. Results are returned as the query executes and are stored in the network buffer until they are requested using the Read method. DataReaders are often used to increase an application's performance, because data can be retrieved as soon as it is available and typically only one row of data is stored in memory at any one time, which reduces system overhead. It is important to note, however, that until a DataReader is closed, its connection is exclusively locked; this prevents any other commands from being executed on that same connection. When the DataReader is of no more use, it must be closed for the database connection to be released. The Data Access Application Block handles connection management whenever possible by using the CommandBehavior.CloseConnection value to automatically close a connection when its DataReader is closed.

A test conducted to determine the best data access method for performance and scalability showed that DataReaders were ideal for retrieving a single record from a database.[5] As more records were retrieved, the DataReader's performance advantage over methods like using a DataSet continued to grow. The tests determined that the overhead associated with creating a DataSet object was responsible for the worse performance and the increase in memory overhead.
However, while the tests confirmed that a DataReader is a better choice than a DataSet for applications that require optimized read-only, forward-only data access, they also showed that because the DataReader holds a database connection while an application reads data, it limits scalability if the connection is held long enough for contention to occur. Other mechanisms, like DataSets or custom data transfer objects (DTOs), only need to hold a connection while they are being populated; once they are filled, the connection can be closed and returned to the pool. This is important because when a delay was introduced into these tests, contention for the database caused the DataSet to outperform the DataReader. The tests concluded that DataReaders are an excellent solution if there is a need to retrieve multiple rows from a database, use them once, and then discard them. The most important detail to remember when using a DataReader is that the data should be used immediately and the DataReader should be closed as soon as possible.

Listing 3.1 illustrated using a DataReader to retrieve multiple rows of customer data from a database and display the data in tabulated form. Note how the DataReader was used immediately to populate the DataGrid and then closed. The DataReader did not explicitly cache the data, manipulate it with a DataSet or custom object, or pass it to other components. In other words, the listing illustrates how to display the results as quickly as possible for a single use.

Disconnected Access

When data needs to be passed through multiple tiers of an application, interacted with dynamically, cached locally, or processed extensively, a more disconnected and stateful approach to working with data is typically needed. There are many different approaches that can be used to work with data in a disconnected fashion; each has its own benefits and liabilities.
Three of the most popular ways of working with disconnected data in a .NET environment are generic DataSets, typed DataSets, and custom business entity components or data transfer objects. The following gives an overview of the benefits and liabilities of each of these approaches and how they are supported by Enterprise Library's Data Access Application Block.

A DataSet is a disconnected, in-memory representation of relational data. Because it holds no persistent connection to outside resources, it is ideal for packaging, exchanging, caching, persisting, and loading data. A DataSet object contains zero or more DataTable objects, each of which represents a table in a relational database. A DataTable contains zero or more DataColumns and DataRows, respectively representing the columns and rows of a relational database table. Additionally, a DataSet may have DataRelation objects that associate tables with one another. The following are the advantages of using a generic DataSet for accessing data in a disconnected manner.
The following are some of the disadvantages of using generic DataSets for disconnected access to data.
Listing 3.9. Accessing Data Using a Generic DataSet
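In lieu of the listing itself, the essence of generic DataSet access can be sketched as follows. The table and column names are assumptions; the point is that everything is late-bound through string indexers, which is at the heart of the disadvantages noted above.

```csharp
using System.Data;

public class GenericDataSetAccess
{
    public static string GetFirstCustomerName(DataSet customers)
    {
        // Generic DataSets are accessed late-bound, via string
        // indexers; a mistyped table or column name fails only at
        // runtime, not at compile time.
        DataRow firstRow = customers.Tables["Customers"].Rows[0];
        return (string)firstRow["CustomerName"];
    }
}
```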
The Data Access Application Block makes it easy to return a generic DataSet object from a backend database. Once a database provider has been instantiated, returning a DataSet is as simple as calling the ExecuteDataSet method with the name of a stored procedure and a parameter array, as illustrated in Listing 3.10. This method uses dynamic parameter discovery to automatically determine the parameters for the stored procedure. Alternatively, a database command wrapper could have been used to explicitly handle the parameters for the database commands and pass them to the ExecuteDataSet method.

Listing 3.10. Calling ExecuteDataSet to Return a DataSet
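A hedged sketch of such a call is shown below; the instance name "Customers" and the stored procedure name are assumptions. With dynamic parameter discovery, the supplied values are mapped in order onto the procedure's discovered parameters.

```csharp
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerQueries
{
    public static DataSet GetCustomers(int divisionId)
    {
        Database db = DatabaseFactory.CreateDatabase("Customers");

        // Dynamic parameter discovery: the block interrogates the
        // stored procedure's metadata and maps the supplied values,
        // in order, onto its parameters.
        return db.ExecuteDataSet("GetCustomersByDivision", divisionId);
    }
}
```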
Typed DataSets

A typed DataSet is a DataSet that is associated with an XML schema and contains strongly typed methods, properties, and type definitions based on that schema. Typed DataSets exhibit approximately the same instantiation and marshalling performance as generic DataSets. However, because of the strong typing, using a typed DataSet instead of a generic DataSet to access data allows for a few additional benefits.
Listing 3.11. Accessing Data Using a Typed DataSet
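For comparison with the generic DataSet approach, typed access looks roughly like the following. The typed DataSet class (here a hypothetical CustomersDS with a Customers table and a CustomerName column) would have been generated from an XML schema.

```csharp
// Assumes a schema-generated typed DataSet class CustomersDS with a
// Customers DataTable exposing a CustomerName column.
public class TypedDataSetAccess
{
    public static string GetFirstCustomerName(CustomersDS customers)
    {
        // Strongly typed: table and column names are compile-time
        // checked properties rather than string indexers, so typos
        // are caught by the compiler and IntelliSense can help.
        return customers.Customers[0].CustomerName;
    }
}
```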
The following are some of the disadvantages of accessing data as a typed DataSet (in addition to those listed for a generic DataSet).
Even with these disadvantages, the gains in developer productivity from using typed DataSets often make them a much more attractive way to represent disconnected data than other means. Additionally, the Data Access Application Block makes populating typed DataSets with data extremely easy. The LoadDataSet method accepts a DataSet object and a list of names for the DataTables of that DataSet, and it populates those DataTables with data from the backend database. This works a bit differently from the ExecuteDataSet method, because the ExecuteDataSet method returns a new DataSet with generic tables named Table, Table1, Table2, and so on; LoadDataSet allows a typed DataSet to be passed to and populated by the Data Access Application Block. Listing 3.12 shows how to populate a new CustomersDS DataSet by calling the LoadDataSet method. The example then uses the typed properties of the DataSet to retrieve the name of the first customer in the Customers DataTable. In this example, only the Customers DataTable is populated, even if there are other DataTables in the DataSet, because only the name of the Customers DataTable is passed to the LoadDataSet method.

Listing 3.12. Calling LoadDataSet to Populate a Typed DataSet
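A sketch of such a call follows. The instance name, stored procedure name, and the CustomersDS typed DataSet are assumptions, and the exact LoadDataSet overload may differ; the shape of the call is the point.

```csharp
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerLoader
{
    public static CustomersDS LoadCustomers()
    {
        Database db = DatabaseFactory.CreateDatabase("Customers");
        DBCommandWrapper cmd =
            db.GetStoredProcCommandWrapper("GetCustomers");

        CustomersDS customers = new CustomersDS();

        // Populate only the Customers DataTable; any other tables in
        // the typed DataSet stay empty because they are not named here.
        db.LoadDataSet(cmd, customers, "Customers");
        return customers;
    }
}
```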
Custom Business Entity Components

Custom business entity components and data transfer objects are another way that disconnected data can be represented and passed through the multiple tiers of a distributed application. These are custom classes designed to represent the data in a backend database; however, they do not derive from the DataSet class. They can be built entirely from scratch or with a tool, like the XML Schema Definition tool (xsd.exe) that ships with Visual Studio, which creates a class from an XML schema. Many tools also exist today that can automate the creation of these custom classes given the data model for a database. Custom business entities typically contain the following.
The following are some advantages of using a custom entity to represent disconnected data.
Listing 3.13. Accessing Data Using a Custom Business Entity
Listing 3.14. Updating Data with a Custom Business Entity
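The shape of a custom entity and a method that populates it can be sketched as follows. The entity, stored procedure, and column names are hypothetical; the design point, consistent with the discussion below, is that the entity itself knows nothing about data access, and that a DataReader is used to populate it without the overhead of an intermediate DataSet.

```csharp
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

// A hypothetical custom business entity: plain state and properties,
// with no dependency on DataSet or on the data access technology.
public class Customer
{
    private int customerId;
    private string customerName;

    public int CustomerId
    {
        get { return customerId; }
        set { customerId = value; }
    }

    public string CustomerName
    {
        get { return customerName; }
        set { customerName = value; }
    }
}

public class CustomerDalc
{
    public static Customer GetCustomer(int customerId)
    {
        Database db = DatabaseFactory.CreateDatabase("Customers");
        DBCommandWrapper cmd =
            db.GetStoredProcCommandWrapper("GetCustomer");
        cmd.AddInParameter("CustomerId", DbType.Int32, customerId);

        // Populate the entity directly from a DataReader and close it
        // promptly (the using block disposes, and thus closes, it).
        using (IDataReader reader = db.ExecuteReader(cmd))
        {
            if (!reader.Read())
                return null;

            Customer customer = new Customer();
            customer.CustomerId =
                reader.GetInt32(reader.GetOrdinal("CustomerId"));
            customer.CustomerName =
                reader.GetString(reader.GetOrdinal("CustomerName"));
            return customer;
        }
    }
}
```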
The main disadvantage associated with using custom entity components as compared to generic and typed DataSets is a possible decrease in developer productivity. This is due to the fact that code must be written to handle many of the features that DataSets provide automatically. Specifically, the following are some of these features.
Custom components or data transfer objects are not uncommon. In fact, they are used to represent data in every Enterprise Library application block. The most common form of data that must be represented in any application block is configuration data; every application block in Enterprise Library uses a data transfer object to represent the configuration data for a particular runtime object. These data transfer objects can be recognized by the name of the runtime object for which they hold data suffixed with the word Data. For example, the data transfer object that holds the configuration data for database providers is named DatabaseProviderData. One of the private member variables that this class encapsulates holds information about the connection string for a database provider; that variable's class is named ConnectionStringData. Typically, the most efficient means of populating the private member variables for a data transfer object is to use a DataReader. There is usually no need to incur the performance overhead of using a DataSet in addition to populating the separate member variables contained by a data transfer object.

Updating Data

Just as the Data Access Application Block provides different methods for retrieving data, so too does it provide multiple methods for updating data. Determining when to use a particular method is fairly straightforward. If DataSets are not being used to represent the data that needs to be updated, then the ExecuteNonQuery method is the right choice. If, however, generic or typed DataSets are being used, then the UpdateDataSet method can take advantage of a DataSet's capabilities for keeping track of the changes that have occurred in the DataSet and update the database accordingly.

ExecuteNonQuery

The previous section, Retrieving Data, stated that the ExecuteNonQuery method can be used for retrieving multiple values via the output parameters of a stored procedure.
It does not, however, need to be used solely for retrieving data. It can also be used to update data without the need for a DataSet or custom entity component. This makes the ExecuteNonQuery method very useful in situations where there is no need to work with data in a disconnected or distributed manner and a simple database command to update data needs to be executed against the database. The ExecuteNonQuery method will execute the database command that is provided to it and return the number of records affected. Listing 3.15 shows how to use the ExecuteNonQuery method by providing it with the name of a stored procedure that will add a customer record to the database. Dynamic parameter discovery is used for this example; however, explicit parameter handling could just as easily have been used.

Listing 3.15. Using ExecuteNonQuery to Update Data
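A hedged sketch of such an update follows; the instance name, procedure name, and parameter list are assumptions. With dynamic parameter discovery, the values are mapped in order onto the discovered stored procedure parameters, and the return value is the number of rows affected.

```csharp
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerWriter
{
    public static int AddCustomer(int divisionId,
                                  string companyName,
                                  string customerName)
    {
        Database db = DatabaseFactory.CreateDatabase("Customers");

        // No DataSet or entity involved: a direct command execution.
        // Returns the number of records affected by the update.
        return db.ExecuteNonQuery("AddCustomer",
                                  divisionId, companyName, customerName);
    }
}
```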
UpdateDataSet

DataSets are an excellent way to keep track of changes that have occurred in an application, and they are very useful when submitting changes back to a database. This is especially true when an application must work with the data in a disconnected mode, because DataSets were designed with functionality to handle optimistic concurrency. There are four main approaches to managing optimistic concurrency with DataSets.
Typically, when an optimistic concurrency violation occurs, an exception such as DBConcurrencyException is thrown. If this occurs and the update is not in a transaction, the database state will be such that the rows that passed the optimistic concurrency validation are committed, while the rows that failed the validation are not updated.

If generic or typed DataSets are used to represent a disconnected view of data, the Data Access Application Block's UpdateDataSet method provides the best option for updating the backend database with the changes that have occurred in the DataSet. This method propagates all of the insertions, deletions, and changes for a particular DataTable in a DataSet back to the underlying database. To accomplish this, the UpdateDataSet method relies on the DataAdapter that is exposed for a particular .NET managed data provider. The tables in the section "Design of the Data Access Application Block" showed that every concrete database provider must provide an implementation for the GetDataAdapter method, which must return that .NET managed data provider's implementation of the IDataAdapter interface. For example, the SqlDatabase database provider returns the SqlDataAdapter that is included with ADO.NET.

When the UpdateDataSet method is called, the DataAdapter is retrieved via the GetDataAdapter method, and the underlying DataAdapter's Update(DataSet, TableName) method is called to perform the update to the data source. Thus, it is the underlying DataAdapter that performs the ultimate update, and it is the DataAdapter's responsibility to throw an exception if it encounters an issue handling a specific command. For example, if a record is removed from a DataTable in a DataSet but the DataAdapter has not been given enough information to perform a delete in the backend database, it is the DataAdapter's responsibility to throw or propagate an exception.
The Data Access Application Block then propagates this exception back to the caller of the UpdateDataSet method. A new enumeration, UpdateBehavior, has been added to this version of the Data Access Application Block. UpdateBehavior provides a caller of the UpdateDataSet method with more control over what should happen if an exception occurs during an update. For example, you can specify that all updates should be rolled back if an exception occurs or that updates that have already been successfully committed should be allowed to remain in the database. The following are the possible values for an UpdateBehavior.
In Listing 3.16, a CustomersDS DataSet is populated by calling LoadDataSet. A change is then made to one of the Customers rows and a new Customers row is added to the Customers DataTable for this typed DataSet. The UpdateDataSet method is called to propagate these changes back to the database. Since no rows were removed, a null value (Nothing in VB.NET) is supplied for the Delete command in the call to the UpdateDataSet method. Also, because the UpdateBehavior is set to Standard, if an exception occurs during the update, all records that have been successfully added or modified will remain but no further updates will occur.

Listing 3.16. Using UpdateDataSet to Update Data
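The core of such a call can be sketched as follows. The procedure names and source column names are assumptions, and the exact signature of GetStoredProcCommandWrapperWithSourceColumns is inferred from the surrounding description rather than quoted from the API, so treat this strictly as a sketch.

```csharp
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerUpdater
{
    public static void SaveChanges(DataSet customers)
    {
        Database db = DatabaseFactory.CreateDatabase("Customers");

        // Map each discovered stored procedure parameter to a source
        // column of the Customers DataTable (names are assumptions).
        DBCommandWrapper insert =
            db.GetStoredProcCommandWrapperWithSourceColumns(
                "AddCustomer",
                new string[] { "DivisionId", "CompanyName", "CustomerName" });
        DBCommandWrapper update =
            db.GetStoredProcCommandWrapperWithSourceColumns(
                "UpdateCustomer",
                new string[] { "CustomerId", "CompanyName", "CustomerName" });

        // No delete command is supplied (null) because no rows were
        // removed. UpdateBehavior.Standard: stop at the first failure,
        // but leave already-committed rows in place.
        db.UpdateDataSet(customers, "Customers",
                         insert, update, null,
                         UpdateBehavior.Standard);
    }
}
```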
The UpdateBehavior enumeration is a welcome addition to this version of the Data Access Application Block. If you need a more granular level of control than the UpdateBehavior enumeration provides, you can modify the code in the Data Access Application Block. One way to accomplish this is to handle the RowUpdating event in addition to the RowUpdated event that is already handled; the RowUpdating event is raised just before a DataAdapter begins an update to a row. Additional overloads for the UpdateDataSet method must be added that allow delegates to be passed in for the RowUpdated and RowUpdating events so that action can be taken when they occur. The RowUpdatedEventArgs and RowUpdatingEventArgs objects carry properties that allow a consumer to check data like the Command, StatementType, Status, and Errors. The Errors property, for example, returns the exception that was generated by the .NET managed data provider when the command was executed. This approach actually allows for more possibilities than just checking for exceptions: a consumer of a particular database provider can perform any custom action; it isn't limited to checking for nulls and throwing exceptions. Listing 3.17, for example, shows how a RowUpdating event handler can be used to capture information about records before they are updated at the data source.

Listing 3.17. Adding a RowUpdating Event Handler
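A handler of the kind described can be sketched against the underlying SqlDataAdapter directly (the column name is an assumption); wiring it through the block itself would require the additional UpdateDataSet overloads discussed above.

```csharp
using System.Data;
using System.Data.SqlClient;

public class UpdateAuditor
{
    public static void Attach(SqlDataAdapter adapter)
    {
        adapter.RowUpdating +=
            new SqlRowUpdatingEventHandler(OnRowUpdating);
    }

    // Raised just before the DataAdapter sends the command for a row,
    // so the row's pending values can be inspected or logged.
    private static void OnRowUpdating(object sender,
                                      SqlRowUpdatingEventArgs e)
    {
        if (e.StatementType == StatementType.Update)
        {
            // Compare the original and proposed values for auditing.
            object original = e.Row["CustomerName", DataRowVersion.Original];
            object proposed = e.Row["CustomerName", DataRowVersion.Current];
            // ... capture the before/after values as needed ...
        }
    }
}
```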
The GetStoredProcCommandWrapperWithSourceColumns method is another new function that is intended to be used in conjunction with the UpdateDataSet method. Prior to the advent of this method, dynamic parameter discovery could not be used with the UpdateDataSet method. There was no way to take advantage of dynamic parameter discovery and also specify that the values for the parameters of a stored procedure should come from the data contained in a DataSet. Rather, parameters had to be explicitly handled using the AddInParameter or AddParameter methods of a database command wrapper. The GetStoredProcCommandWrapperWithSourceColumns method allows the names of DataFields in the DataTables of a DataSet to be specified as the source for the values of a stored procedure's parameters. It does this by iterating through the parameters that are dynamically discovered for the stored procedure and mapping the corresponding DataParameter's SourceColumn property to the name of a DataField. Listing 3.16 illustrated this. The stored procedure for adding new customer data to the database is named AddCustomer and it accepts three parameters: one that represents a company division ID, one that represents the company name for a customer, and one that represents the customer's name. The GetStoredProcCommandWrapperWithSourceColumns method was used to tell the Data Access Application Block that it should get the values for these parameters from the DivisionId, CompanyName, and CustomerName DataFields respectively. This is straightforward if the data always comes only from the DataFields. There are often situations, however, where some data must come from a DataSet and other data must come from outside of a DataSet. For example, some stored procedures may require a UserId parameter to be present to determine if a user is authorized to modify order information. However, from an object-oriented perspective, it may not make sense to have a UserId DataField in the DataSet. 
Identification and authentication parameters that represent values like SessionId, UserId, or the current date and time often need to be explicitly set; they are elements of data that exist for data entitlement and auditing reasons, but they are more global in nature than the data that typically exists in a DataSet. When using the GetStoredProcCommandWrapperWithSourceColumns method, the order in which the names of the DataFields are added dictates the order in which they will be mapped to the parameters of the stored procedure. Therefore, if some values are not to be mapped to DataFields in a DataSet, then an empty string should be supplied for that position. This sets that DataParameter's SourceColumn property to an empty string, which tells the DataAdapter not to map the parameter; it has the effect of "leaving room" for these explicit, non-DataSet parameters. After the database command wrapper has been created, but before the UpdateDataSet method is called, the parameter collection should be modified with the values for the explicit parameters.

An example of how to do this is also shown in Listing 3.16. The database command wrapper for the Update command is returned from the call to the GetStoredProcCommandWrapperWithSourceColumns method, and three of the four stored procedure parameters are mapped to DataFields in the DataSet. The second parameter, LastUpdate, is not mapped to a DataField; an empty string is substituted in place of the name of a DataField for this parameter. After the database command wrapper is returned from the GetStoredProcCommandWrapperWithSourceColumns method, the value for this field can be explicitly set. In Listing 3.16, I have set the value for this parameter equal to the current DateTime. When the UpdateDataSet method is called, all parameter values have either been explicitly set or mapped to DataFields.
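The "leave room, then set explicitly" technique can be sketched as follows. The procedure and column names are assumptions, and the SetParameterValue call is the part of the command wrapper API this description presumes, so verify both against the block's actual API before relying on them.

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class OrderUpdateCommands
{
    public static DBCommandWrapper BuildUpdateCommand(Database db)
    {
        // Map three parameters to DataFields; leave the second slot
        // (LastUpdate) unmapped by supplying an empty string, so the
        // DataAdapter will not map it from the DataSet.
        DBCommandWrapper update =
            db.GetStoredProcCommandWrapperWithSourceColumns(
                "UpdateCustomer",
                new string[] { "CustomerId", "", "CompanyName", "CustomerName" });

        // Now set the unmapped, non-DataSet parameter explicitly,
        // before UpdateDataSet is called.
        update.SetParameterValue("LastUpdate", DateTime.Now);
        return update;
    }
}
```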
Transaction Support

Often an application must perform multiple database operations at one time and requires that either all of the operations are successfully committed or none of them are. This type of requirement is known as a transaction, and it is used to ensure the integrity of a database system's state. The Data Access Application Block supports the use of manual transactions for all database operations.[6] Every one of a database provider's methods that can be used to access or update data in a database is overloaded with a signature that accepts an object that implements the IDbTransaction interface.
The IDbTransaction interface is exposed from a .NET managed data provider and enables transactions by providing a set of objects that create a connection to the database, begin a transaction, commit or abort the transaction, and finally close the connection. To execute multiple database operations in a single transaction, an object that is specific to a particular database and that supports the IDbTransaction interface can be created by calling BeginTransaction on an open Connection. This object can then be passed to all of the methods that must participate in the transaction. Finally, the transaction can be committed or rolled back depending on factors like whether or not an exception occurred. Listing 3.18 provides an example of performing multiple database operations in a single transaction. In this example, data must be added to both the Order and OrderDetails tables or it must not be added to either of them.

Listing 3.18. Performing Database Operations in a Transaction
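The pattern that Listing 3.18 demonstrates can be sketched as follows. This is a hedged illustration rather than the book's listing: the stored procedure names AddOrder and AddOrderDetails, and their parameter values, are assumptions, and the Enterprise Library 1.x Database API (DatabaseFactory, GetConnection, and the ExecuteNonQuery overload that accepts an IDbTransaction) is presumed available.

```csharp
// Assumed context: using System.Data;
// and using Microsoft.Practices.EnterpriseLibrary.Data;
Database db = DatabaseFactory.CreateDatabase();
DBCommandWrapper insertOrder =
    db.GetStoredProcCommandWrapper("AddOrder", 1001, "ALFKI");
DBCommandWrapper insertDetail =
    db.GetStoredProcCommandWrapper("AddOrderDetails", 1001, "Widget", 5);

using (IDbConnection connection = db.GetConnection())
{
    connection.Open();
    IDbTransaction transaction = connection.BeginTransaction();
    try
    {
        // Both commands run in the same transaction, so the inserts
        // into Order and OrderDetails succeed or fail as a unit.
        db.ExecuteNonQuery(insertOrder, transaction);
        db.ExecuteNonQuery(insertDetail, transaction);
        transaction.Commit();
    }
    catch
    {
        // Any failure rolls back both operations before rethrowing.
        transaction.Rollback();
        throw;
    }
}
```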
Data Access Logic Components

Earlier in this section I discussed how generic DataSets, typed DataSets, and custom business entity components can all be used as different ways to represent data in a distributed application. These classes, though, are meant as a way to represent data, not to access it. DataSets and custom entity classes should have no knowledge of the underlying database or how to access it. Unfortunately, too often the code to populate them is scattered throughout many different places in an application. It is not unusual to have to modify the logic that surrounds populating a DataSet with data; if the code to accomplish this is dispersed throughout an application, making such changes becomes tedious and error-prone. A better design is to create separate classes, known as data access logic components (DALCs), which have the responsibility for populating the DataSets and entity classes with data and propagating the changes in them back to the database. Data access logic components are a recommended best practice for accessing business data, because they abstract the semantics of the underlying data store and data access technology and provide a simple programmatic interface for retrieving and performing operations on data. Data access logic components typically provide methods to perform Create, Retrieve, Update, and Delete (CRUD) operations relating to a specific business entity in an application (e.g., customers). Centralizing these types of operations allows for greater reuse not only in one application but among many. By maintaining this code in data access logic components, you can achieve reuse in different types of applications with different types of clients. Data access logic components should not be designed to invoke other data access logic components. Avoiding such a design helps keep the path to data predictable, thereby improving application maintainability. They should also not initiate heterogeneous transactions.
Each data access logic component should be designed to work with a single data source; therefore, there should not be a scenario in which a data access logic component is the root for a heterogeneous transaction. It is appropriate, however, for a data access logic component to control a transaction that involves multiple updates in a single data source. The use of data access logic components does not negate the need for database providers; quite the contrary. They are complementary constructs. Most enterprise-level applications should contain multiple data access logic components, and they can benefit from a database provider's capability for managing database connections, executing commands, caching parameters, and so on. Data access logic components provide the logic required to access specific business data, while database providers supply a centralized interface for data access development and data connection configuration. Figure 3.8 is from the patterns & practices publication Architecture for .NET: Designing Applications and Services,[7] and it illustrates how data access logic components should work together with database providers (these are referred to as data access helpers in this publication) to perform database operations.
Figure 3.8. Data Access Logic Components
Data access logic components can also implement business logic before and after using a database provider to control database operations. They should also be used to control the mappings that are needed between the business entity and the database; that is, the logic described previously for mapping the fields in a DataSet to the parameters in a stored procedure should be encapsulated in a data access logic component. Listing 3.19 provides an example of a data access logic component, named CustomersDALC, that manages customer information in a database. The data managed by this class is represented by a typed DataSet named CustomersDS. The methods shown in this listing use a database provider to return this DataSet and to propagate changes to the customers back to the database.

Listing 3.19. Returning a Customer's DataSet from a Data Access Logic Component
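A sketch of the kind of class Listing 3.19 describes follows. It is illustrative, not the book's listing: the CustomersDS typed DataSet is assumed to exist, the stored procedure and column names are invented, and the Enterprise Library 1.x Database API is presumed.

```csharp
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

// A data access logic component that centralizes all database access
// for customer data; callers work only with the CustomersDS DataSet.
public class CustomersDALC
{
    private Database db = DatabaseFactory.CreateDatabase();

    // Populate the typed DataSet through the database provider.
    public CustomersDS GetCustomers()
    {
        CustomersDS customers = new CustomersDS();
        DBCommandWrapper cmd =
            db.GetStoredProcCommandWrapper("GetAllCustomers");
        db.LoadDataSet(cmd, customers, "Customers");
        return customers;
    }

    // Propagate DataSet changes back to the database; the DataField
    // mapping logic stays encapsulated inside this component.
    public int UpdateCustomers(CustomersDS customers)
    {
        DBCommandWrapper insertCmd =
            db.GetStoredProcCommandWrapperWithSourceColumns(
                "InsertCustomer",
                new string[] { "CustomerId", "CustomerName" });
        DBCommandWrapper updateCmd =
            db.GetStoredProcCommandWrapperWithSourceColumns(
                "UpdateCustomer",
                new string[] { "CustomerId", "CustomerName" });
        DBCommandWrapper deleteCmd =
            db.GetStoredProcCommandWrapperWithSourceColumns(
                "DeleteCustomer",
                new string[] { "CustomerId" });

        return db.UpdateDataSet(customers, "Customers",
            insertCmd, updateCmd, deleteCmd, UpdateBehavior.Standard);
    }
}
```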
One of the recommendations for designing data access logic components is to expose functionality that is common across all data access logic components in a separately defined interface or base class. However, if I go much further into how to design data access logic components in an application, I will quickly stray beyond the scope of the features that the Data Access Application Block is intended to provide. This section on data access logic components is intended to convey the point that using database providers in a "utility" class is generally not an ideal design, especially for distributed applications. Rather, data access logic components should be used to control database providers when accessing and updating data that is specific to a certain type of business entity (e.g., customers). Chapter 9 covers how to create a separate application block that adheres to the promoted best practices for designing data access logic components.
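As a minimal illustration of the interface-or-base-class recommendation mentioned above, the common contract might be factored out along the following lines. This interface and its member names are my own sketch; they are not types provided by the Data Access Application Block.

```csharp
using System.Data;

// An illustrative common contract for data access logic components.
// Each implementation (e.g., CustomersDALC) manages one business entity.
public interface IDataAccessLogicComponent
{
    // Retrieve all rows for the entity the component manages.
    DataSet GetAll();

    // Propagate DataSet changes back to the data source and return
    // the number of rows affected.
    int Update(DataSet changes);
}
```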