Data Access


The Microsoft .NET Framework introduces a new set of libraries that help you build robust data-centric applications. As you’ll see in Application #25, these ADO.NET libraries do not necessarily replace legacy ADO for the COM developer. Rather, they represent a significant evolution of ADO for data access from within managed code. In short, ADO.NET is the API of choice for applications built using Microsoft Visual Basic .NET.

ADO was built for data access against relational databases that operate mainly in a connected, client-server environment. Although support for working with disconnected data and XML does exist, significant features are lacking. For example, the central object in ADO is the Recordset. It looks like a single table of data; if you want it to contain data from multiple database tables, you must use a JOIN query. The central object in ADO.NET is the DataSet. It stores data as a disconnected hierarchy of collections that represent the relational structure of its data source (for example, Microsoft SQL Server). Moreover, a DataSet can be easily shared with other applications because its data and relational structure serialize to XML and XML Schema, respectively. A Recordset object, on the other hand, requires COM marshalling.

Although ADO.NET might seem more complex at first, the applications in this chapter will show that this next-generation ADO is a significant improvement that is well worth the effort required of you to make the transition.

Application #12 Use a DataSet and DataView

This sample introduces you to DataSet and DataView, two classes that are at the heart of ADO.NET. You’ll learn how to connect to a database, retrieve data, use the DataView class’s sorting and filtering capabilities to display ordered subsets of the data, and bind the DataView object to a DataGrid control. Figure 3-1 shows a screen shot of Application #12.

Figure 3-1: Application #12.

The sample application connects to the Northwind database and displays product information in a DataGrid control. You can sort the data in ascending or descending order on the primary key or secondary key. Controls are also available to filter the product information based on the first letter of the product name and the number of units in stock.

Building Upon…

Application #4: Try/Catch/Finally

New Concepts

ADO.NET was designed for data manipulation in a disconnected, distributed, XML-aware environment. The DataSet class is the principal abstraction of this architecture. The DataView class builds on it. Both reside in the System.Data namespace.

The DataSet class represents an in-memory, disconnected copy of data, with familiar class members that are consistent with a relational database model (for example, DataTable, DataRow, DataColumn, and PrimaryKey). It’s both autonomous and aware. A DataSet object doesn’t care where it came from or where it’s going, but it remembers whence it came and is able to keep track of certain changes to its data.

The bridge between a DataSet object and a database is one of the concrete subclasses of the DataAdapter class. The DataAdapter.Fill method uses the SQL SELECT statement to retrieve data and populate the DataSet object. The DataAdapter class, in turn, employs a DataReader object to do the actual work of streaming the rows into the DataSet object. You’ll learn more about the highly efficient DataReader class later in this chapter.

The DataSet class enjoys a close affiliation with XML. In fact, relational data can be abstracted and thus promoted to first-class .NET members via XML Schema mapping. This chapter, however, is concerned only with the generic, untyped DataSet object. Typed DataSet objects are an advanced topic and are covered later in this book.

Although you can use the DataTable.Select method to filter and sort the DataRowCollection object exposed by the DataTable.Rows property, a DataView object is typically preferred because it’s easier to use, more flexible, and dynamically reflects any changes made to its underlying data source—that is, the DataTable object from which the DataView object was derived. With the DataView object, you can display multiple, dynamically updated views of the same disconnected data source, making it ideal for robust data-binding applications.

Code Walkthrough

The code that follows takes you through the main steps for working with a DataSet object and DataView object, including connecting to a database, retrieving data and filling a DataSet, creating a DataView for sorting and filtering purposes, and data binding to a DataGrid control to display the data.

Connecting to a Database

Most of the samples in this book use a SqlConnection object to connect to Microsoft SQL Server or the Microsoft Data Engine (MSDE). The SqlConnection class is optimized for use with SQL Server. The OleDbConnection class is also available for OLE DB–supported data sources.

The SqlConnection class is easy to use. Its second constructor takes a connection string. Two connection string constants are provided:


"Server=(local)NetSDK;DataBase=Northwind;" &_

Therefore, creating a connection to the database is as simple as setting strConnection to one of these connection strings and typing


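Spelled out, that might look like the following. This is a minimal sketch; the constant name SQL_CONNECTION_STRING and the variable names are assumptions that follow the sample's conventions:

```vb
' Assumes: Imports System.Data.SqlClient
Dim strConnection As String = SQL_CONNECTION_STRING ' one of the two constants
Dim cnnNW As New SqlConnection(strConnection)

' Open the connection only when needed, and close it as soon as possible.
cnnNW.Open()
```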
Creating and Filling a DataSet

The code to create and fill a DataSet object resides in the Load event of the form. It is wrapped in a Try/Catch block, which is not shown here for the sake of brevity. Following the instantiation of the SqlConnection class, a SqlDataAdapter object is created, passing the SQL SELECT statement and connection object to one of its constructors:

"SELECTProductName,UnitPrice,UnitsInStock,UnitsOnOrder " _
& "FROMproducts",cnnNW)

The Fill method is then called to populate the DataSet object (instantiated earlier): daProducts.Fill(dsProducts, PRODUCT_TABLE_NAME). Because a DataSet object can hold multiple DataTable objects, you should name each one. In this case, the name is Products. This naming is, however, not required. As with any .NET collection, you can access its members by a key (Products) or a zero-based index—for example, DataSet.Tables(0).


Manually create a DataTable object so that you can work more easily with any kind of data. A DataTable doesn’t have to originate from a DataSet. You can instantiate it directly, add DataColumn and DataRow objects, and then fill the DataTable object in a variety of ways. For example, sorting an array has traditionally been programmatically difficult. Instead, you could populate a custom DataTable by iterating through an array. Then create a DataView and easily sort and filter its contents!
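To illustrate the tip above, here is one way you might populate a DataTable from an array and then sort it via a DataView. This is a sketch; the array contents and variable names are purely illustrative:

```vb
' Assumes: Imports System.Data
' Build a DataTable manually from a string array, then let a
' DataView do the sorting instead of writing a sort routine.
Dim names() As String = {"Chai", "Aniseed Syrup", "Boston Crab Meat"}

Dim tbl As New DataTable("Products")
tbl.Columns.Add("ProductName", GetType(String))

Dim name As String
For Each name In names
    Dim row As DataRow = tbl.NewRow()
    row("ProductName") = name
    tbl.Rows.Add(row)
Next

' The table's default view can be sorted and filtered directly.
Dim dv As DataView = tbl.DefaultView
dv.Sort = "ProductName ASC"
```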

Creating and Working with a DataView Object

As mentioned earlier, the DataSet object is the underlying data source for a DataView object. (You cannot create a DataView object from a DataReader object.) To create a DataView object, you can either instantiate it directly or access a DataView object initialized to default settings via the DefaultView property of the DataTable class. You can then set the RowFilter and Sort properties to filter and sort a view of the underlying data:


The RowFilter property is a string value that takes the form of a column name followed by an operator and a value to filter on. This is similar to a SQL WHERE clause. In the previous code, DEFAULT_FILTER is set to “ProductName like ‘%’”. This is equivalent to applying no filter at all.


There are numerous ways to construct a filter expression. See the Expression property of the DataColumn class in the .NET Framework software development kit (SDK) for more information.


Filter expressions build on each other. If you want to reuse a DataView object that already has a filter applied, remember to reset the RowFilter property to its default (an empty string) prior to applying a new filter. If you do not, your resultset will not be what you intended. This is a common mistake that can trip up even experienced developers.

The Sort property works in a similar fashion. It’s a string in the form of one or more column names each followed by ASC or DESC, for ascending or descending order, respectively. The default sort order is ascending, so you need only specify DESC. In this case, DEFAULT_SORT is “UnitsInStock ASC, UnitsOnOrder ASC”.
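Putting the RowFilter and Sort properties together might look like this. The sketch assumes the dsProducts DataSet and Products table from earlier in the walkthrough; the filter value shown is illustrative:

```vb
' Create a DataView over the Products table, then filter and sort it.
Dim dvProducts As New DataView(dsProducts.Tables("Products"))

' Similar to a SQL WHERE clause: products whose names start with C.
dvProducts.RowFilter = "ProductName LIKE 'C%'"

' One or more column names, each optionally followed by ASC or DESC.
dvProducts.Sort = "UnitsInStock ASC, UnitsOnOrder ASC"
```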

The sample application contains three Button Click event handlers that demonstrate how to apply a variety of filter and sort expressions. In each case, the principles are the same. It’s left to you to explore this code further.

Data Binding to a DataGrid Control

The last step in the process of displaying a custom view of your disconnected data source is to data bind the DataView object to a control—in this case, a DataGrid control. Although the DataGrid control is a highly customizable and complex control, it’s surprisingly easy to bind it to a data source and display its contents. You need only to set its DataSource property to an object that implements the IList or IListSource interface and the data is displayed (albeit in an unformatted manner you will typically want to change—but more on that in a later chapter). Here’s the line of code relevant to our application:


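In essence, the binding is a single assignment. The control and view names below are assumptions:

```vb
' One assignment is all that's required; the grid renders the view.
grdProducts.DataSource = dvProducts
```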

This has been a quick overview of the DataSet and DataView classes. You’ve learned some foundational ADO.NET concepts, such as how to

  • Connect to a database using a SqlConnection object
  • Retrieve data and fill a DataSet object by using a SqlDataAdapter object
  • Create a DataView object, and set its Sort and RowFilter properties
  • Data bind to a DataGrid control

Subsequent chapters will explore these and related concepts in more detail, building on what you have learned here.

Application #13 Use Stored Procedures

This sample shows you how to programmatically create stored procedures and then use them to retrieve data from SQL Server (or MSDE). The use of Microsoft Access queries is also demonstrated. A variety of stored procedures will be used. One requires an input parameter, another does not; and a third stored procedure makes use of input parameters, output parameters, and a return value. Figure 3-2 shows a screen shot of Application #13.

Figure 3-2: Application #13.

The sample application is divided into a series of tabbed demonstrations that each build on the other as you proceed from left to right. In the first tab, you’ll create several custom stored procedures for the SQL Server or MSDE version of the Northwind database. The next three tabs demonstrate the use of these new stored procedures. The final tab shows you how to use an existing Microsoft Access query to generate a sales report. (The .mdb file containing the Access version of the Northwind database is included with the sample.)

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

New Concepts

Stored procedures are, in most cases, preferable to ad hoc SQL statements for a variety of reasons—such as data-tier logic encapsulation and performance gains. Discussion of these advantages is, however, beyond the scope of this chapter. It’s assumed that you’re already convinced of their importance. Additionally, you should already have a basic understanding of how stored procedures work and the SQL syntax involved with creating and using them. What remains is to show you how to work with them using ADO.NET.

Recall from the previous section that filling a DataSet object involves the following steps when using ad hoc SQL statements to retrieve data from SQL Server (or MSDE):

  1. Connect to a database using a SqlConnection object.
  2. Create a SqlDataAdapter object, passing in the SQL SELECT statement and the connection object.
  3. Call the DataAdapter.Fill method, passing in the DataSet object.

    When using stored procedures, there are two additional steps:

  4. Set the SqlCommand.CommandType property to CommandType.StoredProcedure. (The default, CommandType.Text, is for ad hoc SQL.)
  5. If parameters are used, add SqlParameter objects to the SqlParameterCollection object exposed by the SqlCommand.Parameters property.

Although the SqlParameter class is indeed new, you’ve already implicitly used the SqlCommand class in the previous section when you passed the SQL SELECT statement to the SqlDataAdapter constructor. The data adapter used this statement to initialize a SqlCommand object that it exposes via its SelectCommand property. The SqlCommand class represents any SQL statement—whether it’s ad hoc SQL text or a stored procedure. (A third, rarely used, option named TableDirect is also supported but not covered further.) It has a variety of methods to execute SQL statements. You’ll get exposure to these methods in the upcoming section concerning the ADO.NET workhorse, the SqlDataReader class.
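The five steps can be sketched as follows. The stored procedure and parameter names here are assumptions for illustration, not the sample's actual procedures:

```vb
' Assumes: Imports System.Data and Imports System.Data.SqlClient
Dim scmd As New SqlCommand("GetProductsByCategory", cnnNW)
scmd.CommandType = CommandType.StoredProcedure      ' Step 4

Dim prm As New SqlParameter()                       ' Step 5
prm.ParameterName = "@CategoryID"
prm.SqlDbType = SqlDbType.Int
prm.Value = 1
scmd.Parameters.Add(prm)

Dim sda As New SqlDataAdapter(scmd)
Dim dsProducts As New DataSet()
sda.Fill(dsProducts, "Products")                    ' Steps 1-3 as before
```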

Code Walkthrough

Here we’ll take a look at what’s involved with the two additional steps previously mentioned for using stored procedures.

Calling a Parameterless Stored Procedure

The simplest use of a stored procedure involves no parameters. This is demonstrated in the No Params tab of the sample application. In the button’s Click event handler, you’ll find the following code:

Dim sda As New SqlDataAdapter("[Ten Most Expensive Products]", scnnNorthwind)
sda.SelectCommand.CommandType = CommandType.StoredProcedure
sda.Fill(dsProducts, "Products")

This code is identical to that which you saw in the previous section, with the following exceptions:

  • The first argument to the data adapter’s constructor is not a SQL statement, but rather it’s the name of the stored procedure you want to execute. It is, in fact, the CommandText setting for the SqlCommand object exposed by the data adapter.
  • The CommandType is explicitly set to StoredProcedure to indicate to the SqlCommand object what the CommandText refers to.

When you run the sample, the data is displayed in a generic gray DataGrid set to its default formatting. The other two samples involving a DataGrid use custom formatting set with DataGridTableStyle and DataGridColumnStyle objects. These are introduced in Chapter 4.

Adding an Input Parameter

A much more common scenario involves a stored procedure that uses an input parameter. Adding one is quite easy. The code in the Click event handler for the button on the Input Param tab shows how to do this. The first six lines declare and initialize the required variables. For demonstration purposes, the SqlCommand object is explicitly instantiated and passed to the SqlDataAdapter:


SqlParameter object properties representing the input parameter are then set. Notice that the ParameterName property requires the @ symbol. (ADO.NET does not add this for you.) The SQL data types are conveniently encapsulated in a SqlDbType enumeration. In this case, you’re passing a value of type Int. (You cannot use .NET Framework data types to set this property.) The actual value is taken from the selected item in the ComboBox control.

.ParameterName = "@CategoryID"

Once the parameter properties are set, you must add the SqlParameter object to the SqlParameterCollection object exposed by the SqlCommand.Parameters property:


The preceding code could also be simplified and reduced to the following:


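The long form and its shorthand equivalent might look like this. The sketch assumes the scmd command object and a cboCategories ComboBox whose selected item holds the category ID:

```vb
' Long form: set properties individually, then add to the collection.
Dim prm As New SqlParameter()
With prm
    .ParameterName = "@CategoryID"
    .SqlDbType = SqlDbType.Int
    .Value = CInt(cboCategories.SelectedItem)
End With
scmd.Parameters.Add(prm)

' Short form: the Add overload creates, configures, and adds in one call.
scmd.Parameters.Add("@CategoryID", SqlDbType.Int).Value = _
    CInt(cboCategories.SelectedItem)
```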
Using an Output Parameter and Return Value

You’ll often encounter scenarios in which you want to execute a SQL SELECT statement and get back additional values that aren’t a part of the main resultset. You can do this in the sample application by using the All Types tab. The code for this tab uses a stored procedure created when you click the button in the Create Sprocs tab. The SQL statement used to programmatically create the stored procedure when you first run the application is as follows:


Notice in the third line that the @AveragePrice parameter is followed by the OUT (or OUTPUT) keyword. Its value is then set in the first SELECT statement. To retrieve this value from the database, you write the following code:


The only difference between this code and the short notation in the previous section is the addition of a ParameterDirection enumeration value. The default is Input, so normally you don’t need to set the parameter direction.

If you want to retrieve a SQL RETURN value from a stored procedure that is other than one of the five SQL Server return codes, use the ParameterDirection.ReturnValue enumeration value. This will work only with an integer data type.


This code will store the return value from the SELECT statement following the RETURN keyword in the stored procedure.
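Configured in code, the output and return-value parameters might look like this. The @AveragePrice name comes from the stored procedure above; the SqlDbType choices and the return-value parameter name are assumptions:

```vb
' Output parameter: the direction must be set explicitly
' (the default is ParameterDirection.Input).
Dim prmAvg As New SqlParameter("@AveragePrice", SqlDbType.Money)
prmAvg.Direction = ParameterDirection.Output
scmd.Parameters.Add(prmAvg)

' Return value: the name is your choice; the direction is what matters.
' Return values are always integers.
Dim prmReturn As New SqlParameter("ReturnValue", SqlDbType.Int)
prmReturn.Direction = ParameterDirection.ReturnValue
scmd.Parameters.Add(prmReturn)
```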

Accessing the output and return parameter values is as easy as setting them. First, retrieve the parameter from the SqlParameterCollection object as you would when working with any other .NET collection: either by ordinal or key (parameter name). The Value property returns an Object type, so you must explicitly cast it to the desired type. Although this is straightforward, the syntax can get a little tricky if you want to apply string formatting, as in this code from the Click event:


You can learn more about string formatting in other chapters.
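Reading the values back out might look like the following sketch. The label control, the casts, and the "C" (currency) format specifier are assumptions for illustration:

```vb
' After the query executes (and, for a DataReader, after it is
' closed), pull the values from the parameter collection and cast.
Dim avgPrice As Decimal = CDec(scmd.Parameters("@AveragePrice").Value)
Dim rowCount As Integer = CInt(scmd.Parameters("ReturnValue").Value)

lblResults.Text = String.Format("{0} rows; average price {1:C}", _
    rowCount, avgPrice)
```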

Executing Microsoft Access Queries

Thus far, you have mostly been working with classes in the System.Data.SqlClient namespace, optimized for use with SQL Server. Using Microsoft Access from ADO.NET requires a different data access provider, the classes for which are found in the System.Data.OleDb namespace. In the code for the final tabbed example, you'll notice that all classes used to connect to the database and retrieve data are in this namespace. Aside from the very different connection string ("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=..\Northwind.mdb"), the code is almost identical to the previous examples.
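For example, running an Access query might look like this sketch. It follows the same pattern as the SqlClient examples; the query name shown is one of the saved queries that ships with the Access Northwind database, but the variable names are assumptions:

```vb
' Assumes: Imports System.Data and Imports System.Data.OleDb
Dim cnnAccess As New OleDbConnection( _
    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=..\Northwind.mdb")

' Saved Access queries are invoked like stored procedures.
Dim daSales As New OleDbDataAdapter("[Sales by Category]", cnnAccess)
daSales.SelectCommand.CommandType = CommandType.StoredProcedure

Dim dsSales As New DataSet()
daSales.Fill(dsSales, "Sales")
```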


This topic has shown you how to use stored procedures from ADO.NET. You learned how to call a stored procedure, use input and output parameters, and retrieve custom return values. These are core foundational tasks you’ll likely use repeatedly as you develop .NET applications.

Application #14 Use a SqlDataReader

When performance is your chief concern, especially with large amounts of data, use a DataReader class. This abstract class is built for speed and roughly approximates the ADO Recordset when the latter uses a forward-only, read-only firehose cursor. The DataReader class is often used in Web applications, where performance concerns are paramount. In desktop applications, it finds more limited use because rich client interfaces that consume relational data often require the more robust functionality of the DataSet class. A more detailed comparison between the DataReader class and the DataSet class is offered later in this topic.

The sample application shows you how to use a SqlDataReader class, a concrete subclass of the DataReader class that is optimized for use with SQL Server. This class is found in the System.Data.SqlClient namespace. The sample application creates and executes a new stored procedure against the Northwind database. Two sets of results from the Products table are then displayed in a TextBox control. A Label control indicates the number of products in both resultsets. Figure 3-3 shows a screen shot of Application #14.

Figure 3-3: Application #14.

Building Upon…

Application #3: String Manipulation

Application #4: Try/Catch/Finally

Application #13: Use Stored Procedures

New Concepts

With ADO.NET, there are two main ways to retrieve data from a database. If you want to work with data offline—that is, in a disconnected manner—the DataSet class is your only choice. The connected world is served by the DataReader class, which communicates directly with the database. In fact, the DataReader class is used behind the scenes by the DataAdapter class to fill a DataSet object. One could say that the DataReader class is the real workhorse of ADO.NET.


Although the DataReader object does resemble the ADO Recordset object, you should keep two main differences in mind. First, ADO.NET has no support for server-side cursors. They aren’t necessary because of the architecture of the DataTable class, which allows access to its rows in a Collection object via a key or index. Second, ADO.NET simplifies things by providing no equivalent to the Recordset.MoveNext method, a chief source of frustration for ADO developers in the past. Rather, the DataReader.Read method automatically advances the cursor to the next row, if one exists.

Because of the lightweight, connected nature of the DataReader class, you need to be aware of some restrictions and potential pitfalls in using it:

  • Only one DataReader object can be used for any given connection object.
  • The connection object cannot be used for any other purpose while the DataReader object is open. If you want to reuse the connection, you must first call the DataReader.Close method. In fact, you should always explicitly call the Close method instead of depending on the garbage collector. An open DataReader object ties up the existing connection, which is an expensive resource.


    If you want to tie the life of the connection to the DataReader object, use the CommandBehavior.CloseConnection enumeration when calling the ExecuteReader method. This will close the connection automatically when the DataReader.Close method is invoked. This is especially applicable to n-tier applications in which the calling tier doesn’t have direct access to the underlying connection.

  • The first row of data is not available until you call the Read method. (This is roughly equivalent to the ADO Recordset.MoveFirst method.)
  • If your stored procedure uses a return or output parameter, the parameter won’t be available until after the DataReader object is closed. In this case, it’s helpful to think of the stored procedure as a function. The code retrieving the data might have executed, but until the function ends, nothing is returned.
  • There is no equivalent to the ADO Recordset.RecordCount property. In the following “Code Walkthrough” section you’ll learn how to obtain the count using a batched query. Of course, you could also use a counter while iterating through the resultset.
  • The data is retrieved forward-only. Once it is streamed in, you cannot go back through the rows in the DataReader unless you’ve stored the data in an object that supports this, such as an ArrayList object or a DataTable object.
  • You cannot use a DataReader object for updates. The data is read-only.
  • DataReader objects have no inherent support for XML serialization with a corresponding schema. You can call ExecuteXmlReader and retrieve an XML stream, but only if the T-SQL FOR XML clause is used in the query (SQL Server 2000 only). If you want to retrieve relational data that can be readily serialized as XML, you should use a DataSet object.
  • A DataReader object cannot be used for data binding (that is, it doesn’t support the IList or IListSource interface). For true data binding, you would first have to iterate through the DataReader object and then add the values to an object that supports one of these interfaces.

Performance Tips

If you elect to use a DataReader object over a DataSet object, your reasons are almost certainly performance-oriented. As such, you should be aware of a few things you can do to ensure that the DataReader object works optimally:

  • Retrieve values in their native type instead of using the Item collection followed by an explicit cast. For example, sdr.GetDecimal(1) is more efficient than CDec(sdr.Item("UnitPrice")).
  • If you want to use the Item collection, access the values by their index instead of by key (column name).


    Although accessing values by index is faster than by key (because the cost of looking up the column by name is avoided), the performance advantage might be offset by maintenance issues and the greater potential for developer error. When keys are used, changes to the order of the columns in the resultset do not affect your code. When accessing values by index, however, you run the risk of your application breaking if someone changes the ordering of columns in the database table or the SELECT statement. Additionally, the code is less readable when indices are used. For most applications, it's probably wise to use keys to avoid these pitfalls. Note, however, that certain DataReader methods, such as GetValue, GetDataTypeName, and IsDBNull, accept only an index. Exercise caution when using these methods.

  • If you need only the data in the first column in the first row—for example, in a SELECT COUNT(*) statement—use SqlCommand.ExecuteScalar instead of ExecuteReader.
  • When retrieving large rows, consider using the CommandBehavior.SequentialAccess enumeration.

Comparing the DataSet and DataReader Objects

Space limitations do not permit an extensive comparison of the DataSet object and DataReader object. Table 3-1, however, summarizes the major differences.

Table 3-1: Differences Between DataSet and DataReader

DataSet                                    DataReader
Supports data binding                      Does not support data binding
Fully supports XML serialization           Limited serialization, only with SQL Server 2000 and ExecuteXmlReader
Access data in any direction               Access data in a forward-only manner
Supports data updates                      Data is read-only
Can be cached to improve performance       Cannot be cached
Data can be easily sorted and filtered     No support for sorting and filtering

Despite the vast differences between these two objects, it’s often unclear which of the two is best for any given scenario. This is especially true for Web applications. The performance advantage is the number one reason given for choosing a DataReader object over a DataSet object, but in a recent MSDN article titled “Best Practices for Using ADO.NET,” the authors made it clear that the performance gains might be negligible in most scenarios:

The DataAdapter uses the DataReader when filling a DataSet. Therefore, the performance gained by using the DataAdapter [sic—DataReader] instead of the DataSet is that you save on the memory that the DataSet would consume and the cycles it takes to populate the DataSet. This performance gain is, for the most part, nominal so you should base your design decisions on the functionality required.

In other words, for all but the most performance-critical scenarios, the feature-rich DataSet object is most likely your best choice.

Code Walkthrough

The code in this sample application, most of which resides in the button’s Click event handler, is minimal and straightforward. Once a connection is established, the existing GetProducts stored procedure, if it exists, is dropped and then re-created. The stored procedure contains several SQL SELECT statements and a return value:


To execute the stored procedure, the SqlCommand object is reused as follows. The variable sdr has already been declared as a SqlDataReader. Notice that the connection is tied to the life of the DataReader object by using the CloseConnection enumeration.

.CommandText = "GetProducts"
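In full, reusing the command might look like this sketch, which follows the walkthrough's variable names:

```vb
With scmd
    .CommandText = "GetProducts"
    .CommandType = CommandType.StoredProcedure
End With

' CloseConnection ties the connection's lifetime to the reader:
' closing the reader closes the connection as well.
sdr = scmd.ExecuteReader(CommandBehavior.CloseConnection)
```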

With the SqlDataReader object in hand, you can now call the Read method and iterate through the first resultset that it contains—in-stock products—building an output string for display later. The Read method returns a Boolean value: True if another row exists, and False if not. The high-performance StringBuilder object, instantiated earlier, is used instead of traditional string concatenation.




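Such a loop might be sketched as follows, with the column positions as described in the text (the tab-and-newline layout is illustrative):

```vb
' Iterate the first resultset: in-stock products.
Dim sb As New System.Text.StringBuilder()
While sdr.Read()
    sb.Append(sdr.GetString(0))                 ' ProductName
    sb.Append(ControlChars.Tab)
    sb.Append(sdr.GetDecimal(1).ToString("C"))  ' UnitPrice as currency
    sb.Append(ControlChars.Tab)
    sb.Append(sdr.GetInt16(2).ToString)         ' UnitsInStock
    sb.Append(ControlChars.CrLf)
End While
```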
Notice that the values are accessed using the typed accessor method (GetString, GetDecimal, and so on) that corresponds to the underlying data type. The product name, unit price, and units in stock are contained in the first (index 0), second, and third columns, respectively. One of the ToString overloads is used to format the decimal value as currency.

To jump to the resultset returned by the second SELECT statement in the stored procedure, use the NextResult method. The code is similar to what you’ve just seen, except that the values are accessed, for the sake of demonstration, through the Item property using the column name:


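A sketch of that second loop, assuming the same sb and sdr variables, might read:

```vb
' Advance to the second resultset and access values through the
' Item property by column name.
sdr.NextResult()
While sdr.Read()
    sb.Append(sdr.Item("ProductName").ToString)     ' explicit Item
    sb.Append(ControlChars.Tab)
    sb.Append(CDbl(sdr("UnitPrice")).ToString("C")) ' implicit Item, with cast
    sb.Append(ControlChars.CrLf)
End While
```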
Notice that use of the Item property can be explicit or implicit. Typically, you will omit Item, as in the second line. Also, because the DataReader object returns a value of type Object when you're not using one of the typed accessor methods, you must explicitly cast it to the proper type. In the case of the unit price, you would cast it to a Double to take advantage of the currency formatting available through its ToString method.

Access the final set of results in the same way. In this case, you are merely retrieving the total number of in-stock products:


lblNumProducts.Text = "There are a total of " _
    & sdr.GetInt32(0).ToString & " in-stock products and "

An alternative approach to retrieving the equivalent of the ADO Recordset.RecordCount property is to use a return value, as demonstrated in these closing statements of the Click event handler:


lblNumProducts.Text &= scmd.Parameters("ReturnValue").Value.ToString & _
    " discontinued products."

Recall that the Close method must be invoked prior to accessing the values in any output or return parameters.


This topic has shown you how to use a SqlDataReader object to retrieve and process data returned by a stored procedure containing batched SQL queries. Along the way, you learned that DataReader is a lightweight, high-performance class that efficiently serves the connected world. Being lightweight, however, it’s not nearly as feature rich as the DataSet class. You should carefully weigh your application’s current and future needs when considering whether to use a DataReader or a DataSet. When in doubt, do some performance testing to determine whether the inherent limitations of the DataReader class are worth the potential performance gains.

Application #15 Bind Data to a ComboBox

One of the most common Windows Forms controls is the ComboBox control. It’s normally used to display data in a drop-down style control that is either editable (the DropDownStyle property set to DropDown) or noneditable (the DropDownStyle property set to DropDownList). A third, infrequently used style permits cycling through values without the drop-down effect (the DropDownStyle property set to Simple).

The ComboBox supports complex data binding, as opposed to the simple data binding covered in the next topic. Although it’s complex under the hood, you’ll see it’s actually easy to implement. In fact, this topic is a slight departure from previous topics because there are few new concepts. Most of the discussion takes place in the “Code Walkthrough” section.

The sample application shows you how to bind a ComboBox control to a variety of data sources. Figure 3-4 shows a screen shot of Application #15.

Figure 3-4: Application #15.

Building Upon…

Application #1: Use Arrays

Application #4: Try/Catch/Finally

Application #14: Use a SqlDataReader

New Concepts

Controls that permit binding to more than one data element—for example, multiple rows in two columns of a DataSet object—require complex data binding. These controls, such as ComboBox or DataGrid, don’t use the Binding class (covered in the next topic). Rather, they expose a DataSource property and other properties that, in turn, map to properties of the data source. For example, the ComboBox control uses the DisplayMember and ValueMember properties to set what the user sees for each item as well as the item’s underlying value. Thus, implementing complex data binding is often as easy as setting a few properties of the control being bound. The “Code Walkthrough” section will explain this further.


For a .NET type to qualify as a data source for complex data binding, it must implement or derive from the IList interface. Thus, arrays and most of the collection classes qualify.

Code Walkthrough

Each subheading that follows maps to an example in the sample application.


One of the easiest ways to data bind a ComboBox control is to use an array. The code amounts to two lines:


The .NET data binding mechanism treats each item in the data source as an Object data type. By default, it will set both the text and value of each ComboBox item to the ToString equivalent of the Object.
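Those two lines might look like the following. The array contents and control name are assumptions:

```vb
' Bind a ComboBox to a simple string array: declare the source,
' then assign it to DataSource. The framework does the rest.
Dim titles() As String = {"Beginner", "Intermediate", "Advanced"}
cboDemo.DataSource = titles
```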


As far as a ComboBox control is concerned, an ArrayList object is no different from an Array object. The twist with the simple ArrayList example is that it’s filled by iterating through a SqlDataReader:

"SELECT LastName + ', ' + FirstName As FullName " & _





As you can see, the procedure for data binding to an ArrayList object is identical to that for an Array object. If you don’t require each item in the ComboBox control to have an underlying value different from the displayed text, simply set the DataSource property and the .NET Framework does the rest.
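A fuller sketch of the pattern just described follows. The connection string, control name, and variable names are assumptions, not the book's listing, and an Imports System.Data.SqlClient statement is assumed:

```vb
' Sketch; the connection string and cboArrayList are illustrative.
Dim cnn As New SqlConnection( _
    "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True")
Dim cmd As New SqlCommand( _
    "SELECT LastName + ', ' + FirstName As FullName FROM Employees", cnn)
Dim alNames As New ArrayList()

cnn.Open()
Dim dr As SqlDataReader = cmd.ExecuteReader()
While dr.Read()
    alNames.Add(dr("FullName"))
End While
dr.Close()
cnn.Close()

' Data binding is then a single assignment:
cboArrayList.DataSource = alNames
```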


If you want to data bind to an ArrayList object as well as associate each item in the ComboBox control with an underlying value that is different from its display text, things get a little trickier. A typical implementation of this scenario is an ArrayList collection of custom objects that expose properties that are then mapped to the data binding mechanism via the DisplayMember and ValueMember properties.

This third example creates an ArrayList collection of custom Category objects. The Category class is implemented in the Category.vb class file. It exposes two public properties, ID and Name:
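The class body is missing from this extraction; a sketch consistent with the description is shown below. The backing fields and the constructor are assumptions:

```vb
' Sketch of the Category class (Category.vb); the constructor and
' backing-field names are assumptions.
Public Class Category
    Private m_ID As Integer
    Private m_Name As String

    Public Sub New(ByVal id As Integer, ByVal name As String)
        m_ID = id
        m_Name = name
    End Sub

    ' The members must be Public properties (not fields) for
    ' DisplayMember/ValueMember mapping to work.
    Public ReadOnly Property ID() As Integer
        Get
            Return m_ID
        End Get
    End Property

    Public ReadOnly Property Name() As String
        Get
            Return m_Name
        End Get
    End Property
End Class
```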





Several things are worth noting here. First, the properties must be declared as Public. Second, they must be properties, not fields. The reason for this is not documented. It basically boils down to an internal design decision by the .NET Framework team. It can be a source of frustration, however, because most classes in the .NET Framework treat fields and properties in the same way. Third, the properties don’t have to be of type String. In this example, the ID property is an Integer. Recall that when the data binding occurs, the ToString equivalent of the object is displayed or used as the underlying value.

The code for this example is similar to that for a simple ArrayList:

"SELECT CategoryID, CategoryName " & _





.ValueMember = "ID"
.DisplayMember = "Name"

The main difference is that, in addition to the DataSource property, the DisplayMember and ValueMember properties are also set. These are String values that map to the appropriate property of the objects contained in the data source.
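Putting the pieces together, the binding might look like this sketch, assuming a ComboBox named cboCustomObjects and an ArrayList named alCategories filled with Category objects:

```vb
' Sketch; the control and ArrayList names are illustrative.
With cboCustomObjects
    .DataSource = alCategories
    .ValueMember = "ID"
    .DisplayMember = "Name"
End With
```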

DataTable and DataView

The final two examples are very similar. The SQL SELECT statement for both examples is shown in the following code. (The data source is acquired in the frmMain_Load event handler.)

"SELECT ProductID, ProductName " & _

Additionally, for both examples, the DataSource, DisplayMember, and ValueMember properties are set. The following code is for the DataTable example:

.DisplayMember = "ProductName"
.ValueMember = "ProductID"

As the data source is a DataTable object, it contains DataRow objects. You would normally access values in these objects using the following syntax: DataRow.Item("ColumnName"), or the shorthand DataRow("ColumnName"). Thus, it makes sense that the DisplayMember and ValueMember properties use the respective DataRow column names to map the values being bound. The same concept holds true when the data source is a DataView object.


This topic has shown you how to data bind a ComboBox control. You learned that there are two types of data binding in Windows Forms: simple and complex. The nomenclature belies the level of difficulty required to implement it. In fact, you’ll find complex data binding to be quite simple to code. Acquire a data source that implements the IList interface, set a few properties, and let the .NET Framework do the rest. The next topic explores simple data binding in support of data navigation.

Application #16 Data Binding with Navigation

A feature often required for data-driven applications is the ability to step forward and backward through a set of records. Data navigation, as it is often called, requires that all controls involved with displaying the information be kept in sync with the current record. This topic will show you how to utilize data navigation in your Windows Forms applications.

The sample application is quite simple. Four TextBox controls display employee information from the Northwind database. You can use the navigation controls to step forward and backward through each record, or to jump to the beginning or end of the set of records. A Label indicates the number of the current record. Figure 3-5 shows a screen shot of Application #16.

Figure 3-5: Application #16.

This topic is the first of three topics that build upon each other, culminating in a master-details Windows Form application. You’ll see the user interface evolve as functionality is added. For example, in Figure 3-5, notice that the Sales To Date value is not formatted as currency. Also, the Hire Date would be best displayed in a shorter version, without the time. Custom formatting handlers are covered in the next topic, with the addition of two new fields to the interface.

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

New Concepts

In the previous topic, you learned about complex data binding. Ironically, this is often easier to implement than simple data binding.

Simple Data Binding and Windows Forms

With simple data binding you can bind any property of any control to almost any Framework type that contains data. Controls that expose one or two properties suitable for data binding, such as the Text and Tag properties of a TextBox, use simple data binding. Implementing this requires that you create a separate Binding object for each mapping. These objects are contained in the ControlBindingsCollection object that each control exposes through its DataBindings property.


After the controls are data bound, they must be kept synchronized as the user steps through the master list of records. Unlike the legacy ADO RecordSet, many possible .NET data sources, including DataSet objects, have no concept of a current record. Instead, this support is provided by the CurrencyManager class, one of the two implementations of the abstract BindingManagerBase class.

Despite the name, CurrencyManager has nothing to do with money or finances. Rather, it presides over all Binding objects associated with one particular data source. When the value of its Position property changes, the CurrencyManager object notifies all its associated Binding objects that they should update the data they are passing to their respective bound controls with the values at the new position in the data source. The Position property is read/write, so you can set it to advance or retreat through the data source. To obtain the item at the current position, access the read-only Current property.


A CurrencyManager object exists for each data source used for simple data binding. As such, there can be more than one CurrencyManager object for any given control, either because the control is bound to more than one data source or because the control is a container for other controls that collectively use multiple data sources.

This is where the BindingContext object steps in: to encapsulate, as a collection, one or more CurrencyManager objects. To obtain a particular CurrencyManager object, simply pass the data source to the BindingContext property. Figure 3-6 illustrates how all of this works together.

Figure 3-6: Simple data binding for a Windows form.

Notice that the Sales To Date TextBox control has a different CurrencyManager than the Last Name TextBox control. This is because each control is bound to a separate DataTable object within a single DataSet object (as you will see). Also notice that each CurrencyManager object was obtained via the form’s BindingContext object. This is not required, however. All controls expose a BindingContext object. You could just as easily have obtained the same CurrencyManager object as follows:
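A sketch of that alternative, using the variable names that appear in the walkthrough:

```vb
' Any bound control's BindingContext yields the same CurrencyManager
' for a given data source.
cmEmployee = CType(txtLastName.BindingContext(dtEmployee), CurrencyManager)
```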


Code Walkthrough

At a high level, the steps involved with implementing data binding with navigation are as follows:

  • Create Binding objects for each control using simple data binding.
  • Retrieve the CurrencyManager object for the data source that represents the set of records, and add a PositionChanged event handler.
  • Implement the PositionChanged handler to synchronize the controls.

The DataSet containing the employee records contains two DataTable objects: dtEmployee for general employee info and dtSales for employee sales data. Although the sales data could have gone in the same DataTable object as the employee info, it’s worth demonstrating how to keep data current from multiple sources.

Creating the Bindings

Once the data source is created, you can proceed to create the Binding objects for each simple-bound control. To do this, simply call the Add method of the control’s DataBindings collection, passing in the control property name, the data source, and the name of the DataColumn object that contains the actual data.

txtLastName.DataBindings.Add("Text", dtEmployee, "LastName")
txtLastName.DataBindings.Add("Tag", dtEmployee, "EmployeeID")
txtFirstName.DataBindings.Add("Text", dtEmployee, "FirstName")
txtHireDate.DataBindings.Add("Text", dtEmployee, "HireDate")
txtSalesToDate.DataBindings.Add("Text", dvSales, "SalesToDate")

Notice that txtLastName is the only control with two bindings. The Tag property stores the ID of the current employee. You’ll see how this is used to keep the Sales To Date values synchronized.

Working with the CurrencyManager

The last block of code in the CreateBindings method obtains a CurrencyManager object for dtEmployee and then adds a handler for the PositionChanged event. A private class-level variable, cmEmployee, has already been declared:
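The listing is missing here; a sketch consistent with the description follows. The handler name is an assumption:

```vb
' Sketch; cmEmployee is the class-level CurrencyManager variable.
cmEmployee = CType(Me.BindingContext(dtEmployee), CurrencyManager)
AddHandler cmEmployee.PositionChanged, AddressOf cmEmployee_PositionChanged
```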


With this code in place, all you need to do to navigate the employee records is change the value of the Position property of the CurrencyManager object when a navigation button is clicked (or a key is pressed, assuming you implement the Form.KeyDown event handler and the Form.KeyPreview property is set to True). For example, NextRecord increases the Position property by 1. This same method is used by the Next button and the right-arrow key:
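A sketch of NextRecord, whose name comes from the text:

```vb
Private Sub NextRecord()
    ' The CurrencyManager clamps Position to a valid range, so
    ' advancing past the last record is harmless.
    cmEmployee.Position += 1
End Sub
```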


When the position changes, the CurrencyManager object fires the PositionChanged event and causes all its associated simple-bound controls to update. In this case, txtLastName, txtFirstName, and txtHireDate are all bound to dtEmployee. The CurrencyManager causes them to be updated automatically without further effort on your part.

If this was all you had, there would be no need to implement a PositionChanged event handler. In most scenarios, however, you need this handler to call other methods to update controls that are not under the aegis of cmEmployee or that use complex data binding—for example, a DataGrid control. In the sample application, lblRecordNumber and txtSalesToDate fall into this category.

The PositionChanged event handler is quite simple:
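A sketch of the handler follows. Its signature matches the PositionChanged event; ShowTotalSales is named in the text below, while ShowRecordNumber is an assumed name for the method that updates lblRecordNumber:

```vb
Private Sub cmEmployee_PositionChanged(ByVal sender As Object, _
    ByVal e As System.EventArgs)
    ShowRecordNumber()
    ShowTotalSales()
End Sub
```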


As you would expect, the handler merely calls methods to update the remaining controls that need to be kept in sync. These methods are as follows:

lblRecordNumber.Text = "Record " & _
    cmEmployee.Position + 1 & " of " & dtEmployee.Rows.Count

dvSales.RowFilter = "EmployeeID = " & txtLastName.Tag.ToString

In ShowTotalSales, you can see that the Tag property is used to filter a DataView object.


As Figure 3-6 illustrates, you could also obtain the CurrencyManager for dtSales to keep txtSalesToDate current. There is more than one way to implement data navigation.


This topic has taught you how to implement data binding with navigation. Although the concepts behind simple data binding might not seem so simple, it all boils down to the following main points:

  • There is a CurrencyManager object for each data source.
  • A BindingContext object manages the CurrencyManager object or objects.
  • A CurrencyManager object synchronizes all controls that are simple-bound to its associated data source.
  • A PositionChanged handler can be implemented to tie in all the remaining controls.

The PositionChanged handler is really the key to keeping everything synchronized. When this event fires, you can call any method you need and update simple or complex controls using a variety of data sources. In the final topic of this three- part series, “Application #18: Build a Master-Details Windows Form,” you’ll see how this handler is used to keep current two DataGrid controls that are hierarchically related to one another.

Application #17 Custom Data Binding Format Handlers

In the previous topic, you saw how to implement data binding with navigation controls. For the purpose of taking a building-block approach, the controls lacked any formatting. Instead, the .NET data-binding mechanism used reflection to get the underlying type of the Object being bound and then called its default ToString method to render the value. This topic will show you how to implement custom format handlers—methods that are essential to most data-binding applications.

The sample application builds upon the previous topic. Two new controls are added so that you can learn how to handle Boolean and Null values in addition to Currency and Date values. Figure 3-7 shows a screen shot of Application #17.

Figure 3-7: Application #17.

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

Application #16: Data Binding with Navigation

New Concepts

This topic revolves around the Binding class. The .NET Framework documentation states that this class “represents the simple binding between the property value of an object and the property value of a control.” In the previous topic, you saw Binding objects used implicitly when simple data binding was set up for each control. For example, consider the following:

txtFirstName.DataBindings.Add("Text", dtEmployee, "FirstName")

This code binds the Text property of the First Name TextBox control to the FirstName field in the data source.

This sample application works more closely with Binding objects, instantiating them directly and adding handlers for the Format event. This event is raised by the Binding object when:

  • The property of a control is bound to a data value
  • The CurrencyManager.Position property changes
  • A DataView object is filtered or sorted, if this is the data source

Code Walkthrough

The code required to use custom data-binding format handlers is straightforward. In the sample application’s CreateBindings method, you’ll find the following code to establish a binding and formatting handler for the Sales To Date TextBox control. Code for the other controls is similar.

Dim dbnSalesToDate As New Binding("Text", dtSales, "SalesToDate")

Make sure you add the custom format handler before adding the Binding object to the control's DataBindings collection. No design-time or run-time errors are generated if you get this wrong; the data in the control simply fails to format properly, and you might not notice until you step to another record for the first time.

Thus, when the Format event is raised by the Binding object for this control, the MoneyToString delegate is invoked:
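A sketch of MoneyToString, reconstructed from the description that follows; the Decimal cast and the "c" currency format code are assumptions consistent with that description:

```vb
Private Sub MoneyToString(ByVal sender As Object, _
    ByVal e As ConvertEventArgs)
    ' Cast the bound value to Decimal to reach the currency format
    ' code, then reassign the formatted string to e.Value.
    e.Value = CType(e.Value, Decimal).ToString("c")
End Sub
```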


The data that is bound to the control is passed to this handler in a ConvertEventArgs object. Simply access its Value property, cast it to the appropriate type to have access to the type’s special format codes (covered in Chapter 4, Application #32), and then reassign the Value property.

The same methodology is carried through for all the other controls. For example, to format the occasional NULL or empty string values that might exist in the data source for the Region TextBox control, you use the following code:

e.Value = "[N/A]"

CheckBox control formatting is handled by the following simple subroutine:
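The subroutine is missing from this extraction; a sketch follows. The handler name and the DBNull-to-False mapping are assumptions:

```vb
Private Sub DBNullToBoolean(ByVal sender As Object, _
    ByVal e As ConvertEventArgs)
    ' A CheckBox cannot display DBNull; substitute an unchecked state.
    If TypeOf e.Value Is DBNull Then
        e.Value = False
    End If
End Sub
```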


Finally, the employee’s hire date can be formatted as follows:
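A sketch of the hire-date handler; "d" is the short-date format code, which drops the time portion as described earlier in this topic:

```vb
Private Sub DateToString(ByVal sender As Object, _
    ByVal e As ConvertEventArgs)
    e.Value = CType(e.Value, DateTime).ToString("d")
End Sub
```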


In all cases, the code and concepts are very similar.


This has been a short look at custom data-binding format handlers. The code and concepts are less involved and easier to understand than previous topics in this chapter.

Although not covered in this chapter, it’s worth examining the Parse event, which is essentially the opposite of the Format event. The Parse event fires when data is read from the control and pushed back into its data source. Thus, you can allow the user to enter data in a format different from the format in which it will be stored.

You can see that the .NET Framework provides virtually unlimited flexibility for formatting data in your applications. Simply add a handler to the Binding object of a control, and then write code to display the data in any way you see fit.

Application #18 Build a Master-Details Windows Form

This is the final topic in a three-part series that includes data binding with navigation and custom format handlers. This sample application shows you how to build a master-details Windows Form. This type of interface is commonly used to navigate through a main resultset and allow the user to drill down into a particular record to view it, or related data, in greater detail. Figure 3-8 shows a screen shot of Application #18.

Figure 3-8: Application #18.

The upper portion of the application displays the master set of records. On the left, you can step through information from the Employees table of the Northwind database. Orders associated with each employee are displayed in the DataGrid control to the right. The product details for any specific order—selected by clicking its row in the Orders table—are displayed in the lower DataGrid control.

Building Upon…

Application #4: Try…Catch…Finally

Application #12: Use a DataSet and DataView

Application #16: Data Binding with Navigation

Application #17: Custom Data-Binding Format Handlers

New Concepts

There are no new concepts for this topic, as it builds entirely upon concepts presented in the previous two topics in this series. As a brief review, here are the main steps you take to implement a master-details application:

  1. Create one or more data sources.
  2. Create Binding objects for each simple-bound control, adding custom formatting handlers if desired.
  3. Create a PositionChanged event handler for the CurrencyManager.

Implementing a master-details application essentially amounts to adding two new custom methods to the PositionChanged event handler for data binding the DataGrid controls. The “Code Walkthrough” section will explain this in further detail.

Code Walkthrough

We begin the walkthrough by looking at the pivotal event handler in an application of this type. Here is a good place to put any methods you want invoked when the user steps to a different record:
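A sketch of the handler follows. BindOrdersGrid and BindOrderDetailsGrid are named in the text below; the other two method names carry over from the previous topics and are assumptions:

```vb
Private Sub cmEmployee_PositionChanged(ByVal sender As Object, _
    ByVal e As System.EventArgs)
    ShowRecordNumber()
    ShowTotalSales()
    BindOrdersGrid()
    BindOrderDetailsGrid()
End Sub
```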


When comparing this with the previous topic in this series, you’ll notice the addition of the BindOrdersGrid and BindOrderDetailsGrid methods. The former method displays all orders for the current employee in the upper right DataGrid control. The relevant code is as follows:

dvOrders.RowFilter = "EmployeeID = " & txtLastName.Tag.ToString

.CaptionText = "Orders"

The previously created DataView, named dvOrders, contains the orders for every employee in the database. Thus, to show only the orders for the current employee, we set the RowFilter property of the DataView object by using the EmployeeID value bound to the Tag property of the Last Name TextBox control. All that is left to do is assign the DataView object to the DataSource property.

The BindOrderDetailsGrid method is invoked next. The pertinent code in this subroutine is similar to what you’ve just seen:

dvOrderDetails.RowFilter = "OrderID = " & strCurrentOrderID

.CaptionText = "Order# " & strCurrentOrderID

As with the data source for the master DataGrid control, dvOrderDetails contains all order details in the database. To show only the details for the currently selected order, retrieve the Order ID from the DataRowView collection by using the DataGrid.CurrentRowIndex property. Then, use this value to create a statement for the RowFilter property.

If you were to build and run the application at this point, you’d see only partial functionality. The master-details relationship is set up between the two DataGrid controls. However, it works only for the first default record as you step through the employees. What is missing is the ability to click on a row in the master DataGrid control and have the order details change in the lower DataGrid control.

To implement this, you need to handle the CurrentCellChanged event raised by the master DataGrid control:

ByVal e As System.EventArgs) Handles grdOrders.CurrentCellChanged


This event fires when the user directly clicks a different cell or clicks the frame of the DataGrid control next to the row. To provide good feedback to the user, you should use the Select method to highlight the currently selected row.
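A sketch of the handler body, assuming grdOrders is the master DataGrid; the rebinding call matches the method named earlier:

```vb
Private Sub grdOrders_CurrentCellChanged(ByVal sender As Object, _
    ByVal e As System.EventArgs) Handles grdOrders.CurrentCellChanged
    ' Rebind the details grid to the newly selected order.
    BindOrderDetailsGrid()
    ' Highlight the full row so the selection is obvious to the user.
    grdOrders.Select(grdOrders.CurrentRowIndex)
End Sub
```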


You might think it would be more appropriate to handle the DataGrid.Click event. However, this fires only when the frame next to the row is clicked. Clicking in the cells will not raise the event and will not give you the master-details effect that you probably want.


This concludes a fast-paced, three-part series culminating in building a master-details Windows Forms application. In this final topic, you learned that establishing a master-details relationship between two DataGrid controls involves:

  • Adding a couple of methods to bind the DataGrid controls to a filtered DataView object.
  • Handling the CurrentCellChanged event.

In short, once the admittedly more complex foundation of data binding with navigation has been laid, it’s not difficult to build on it.

You might also consider exploring a variation of the master-details user interface. This involves a single DataGrid control that is used to display both the orders and the order details, as in Figure 3-9.

Figure 3-9: A close-up view of the Orders DataGrid when DataRelation objects are used in the data source.

Expanding the order row reveals a link to the Order Details table as shown in Figure 3-10. Clicking the link causes the same DataGrid control to display the order details.

Figure 3-10: Rows from the OrderDetails DataTable as displayed in the Orders DataGrid.

Additionally, controls are added to the upper right corner of the DataGrid control to allow the user to return to the parent row, or to show or hide the parent row information. Although this type of layout requires the user to take an additional step in order to view details of a record, it does take up considerably less space.

To experiment with this implementation, uncomment the following line of code in the CreateDataSet method, which establishes a data relation between the Orders and Order Details DataTable objects:


Application #19 Use a Typed DataSet

In the first application in this chapter, you were introduced to the concept of a DataSet, a core object of ADO.NET. In this next topic, you’ll be adding to your knowledge of a generic, or untyped, DataSet its customizable subclass—known as a typed DataSet. You’ll learn how it differs from the untyped DataSet from which it derives, and why you might want to use a typed DataSet instead of its generic parent. You’ll also learn about the SqlCommandBuilder class and how it can be used in conjunction with the SqlDataAdapter class to update the data source of the DataSet object, pushing into SQL Server any changes that the DataSet object has been tracking.

The sample application shows you how to create and fill a typed DataSet object with products from the Northwind database, displaying them in a ListBox control. You can also add a new product, change the name of an existing product, or delete a product. Changes are tracked by the DataSet object until the time that you save them to the data source. The two buttons below the ListBox control allow you to repopulate the list from the database (using the SqlDataAdapter.Fill method) or to reconcile the DataSet object’s changes with the database (using the SqlDataAdapter.Update method). Figure 3-11 shows a user updating the DataSet object in the sample application.

Figure 3-11: Application #19.

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

New Concepts

Recall that a DataSet object represents an in-memory, disconnected, and serializable copy of data. It contains various data-related collections that model a relational database, such as those for DataTable, DataRelation, DataColumn, and Constraint objects. A DataSet object is thus a robust and powerful object that will likely form the heart of your data-tier implementation.

A limitation of the untyped DataSet object, however, is that it uses late binding. To access values, the runtime engine uses an index or field name to look up and retrieve the object or value. Recall the syntax for accessing data in an untyped DataSet object:

DataSetObjectName.Tables("<TableName>" _
    or TableIndex).Rows(RowIndex)("<FieldName>" or FieldIndex)

An example of this format is
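The example itself is missing here; an illustrative one, assuming a DataSet variable named ds in which the product name is the second column of the first table:

```vb
Dim strProductName As String = CStr(ds.Tables(0).Rows(0)(1))
```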


You could also use field names instead of indices, or a mix of the two—for example:
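Illustrative equivalents of the same lookup, using names and then a mix of names and ordinals (the table and column names are assumptions):

```vb
Dim strByName As String = _
    CStr(ds.Tables("Products").Rows(0)("ProductName"))
Dim strMixed As String = CStr(ds.Tables("Products").Rows(0)(1))
```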


As you probably already know, late-bound access carries with it some disadvantages, including weakly typed variables and a performance degradation. This leads us to one of the main advantages of using a typed DataSet object. As the name implies, it strongly types the table and column information, making this information available through properties instead of via collection-based syntax.

Adding a Typed DataSet to Your Project

Microsoft Visual Studio .NET makes it very easy to add a typed DataSet to your project. Just follow these steps:

  1. In the Solution Explorer, right-click your project, point to Add, and then click Add New Item.
  2. In the Add New Item dialog box, from the Templates pane select DataSet. Give it a name, and then click Open. You now have the design surface of an XML Schema file.
  3. Open Server Explorer, and expand SQL Servers | <your server and instance> | Northwind.
  4. Expand the Tables, Views, or Stored Procedures node. Click and drag any of these database objects to the DataSet object’s design surface. The XML Schema will be created based on the structure of a table and its relations, whether it be an actual table in the database or a table of results that were generated by a query.
  5. Right-click the design surface, and make sure Generate DataSet is checked.
  6. Press Ctrl+S to save the schema and automatically generate a typed DataSet class based on the schema. You are now ready to program against it!
  7. If you want to actually view the typed DataSet class, in the Solution Explorer click on the project that contains the DataSet and then click the Show All Files icon. Then expand the typed DataSet icon in the project to view its code-behind files.

A typed Dataset class is thus defined as a class that extends its base DataSet class with members created from data-source information contained in an XML schema (.xsd file). In other words, a typed DataSet class provides all the inherent functionality of the untyped DataSet class, with the additional benefit of data objects that are first-class members. This structure provides the following benefits:

  • Design-Time: An improved development experience

    • The syntax for working with strongly typed DataSet class members provides type checking. (Make sure Option Strict is On.)
    • Type mismatch errors are caught at compile time instead of at run time.
    • Tables and columns (but not rows) can be accessed directly by name instead of by a collection-based lookup. Index or field name guesswork is virtually eliminated, and coding errors are reduced.
    • Statement completion is ensured, as is IntelliSense support.
  • Run-Time: Better performance. In fact, code that accesses typed DataSet objects can often run twice as fast as code that accesses untyped DataSet objects using string-based collection lookups. However, the gap is narrowed considerably, if not eliminated, if you use a DataColumn object and proper type conversion code against an untyped DataSet object.

The syntax for working with a typed DataSet object takes the following form:
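The form is missing from this extraction; reconstructed from the surrounding discussion, the general pattern is roughly:

```vb
' General form (placeholder names, not runnable as-is):
' DataSetObjectName.TableName(RowIndex).FieldName
```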


Therefore, if you were to use a typed DataSet to access the same data as in the preceding example, the syntax would read:
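A sketch, using the typed table and column names that appear later in this topic:

```vb
Dim strProductName As String = tdsNorthwind.ProductsTDS(0).ProductName
```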


The “Code Walkthrough” section will afford you a much closer look at the syntax.

The DataSet object is disconnected from its data source. As such, when you make changes to rows in one or more of its DataTable objects, the changes are not reflected in the database until you push them back in. A DataSet object simply keeps track of all the changes and waits for you to propagate them back to the source. You can even invoke the GetChanges method to create a second DataSet object that contains only the changes made to the data thus far.

In the first application in this chapter, you learned that the SqlDataAdapter class is responsible for managing data between Microsoft SQL Server and a DataSet object. For example, to populate an empty DataSet object, you call the data adapter’s Fill method. This involves a SQL SELECT statement. Similarly, to reconcile the changes in the DataSet object with SQL Server, you call the Update method.

As you know, database changes can involve inserting, deleting, and updating records. Each operation requires a different SQL statement. Being merely the bridge between the connected and disconnected realms of the ADO.NET world, the SqlDataAdapter object does not automatically generate these statements, leaving you with two options:

  • Manually set InsertCommand, DeleteCommand, and UpdateCommand properties of the data adapter to whatever you want.
  • Instantiate the SqlCommandBuilder class, and let it do the work for you.

If the SelectCommand property of the data adapter has been assigned to an instance of the SqlCommand class—and if this command object has its CommandText property set (for example, its SELECT statement)—the SqlCommandBuilder can infer the INSERT, DELETE, and UPDATE statements. For many scenarios, this approach is adequate. Just be aware that complex updates will likely require you to manually set the commands.

Code Walkthrough

When the Windows Form loads, the FillTypedDataSet method is invoked and the ListBox control is populated. The code to fill a generic DataSet object should look familiar to you by now. The difference for a typed DataSet object is that you instantiate the DSTypedProducts class instead of the DataSet class. (The variable tdsNorthwind has already been declared.)
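A sketch of those lines, consistent with the adapter name and table name used later in this topic; the adapter is assumed to be configured elsewhere:

```vb
' sdaTDS is the SqlDataAdapter; the DataTable name must be passed to Fill.
tdsNorthwind = New DSTypedProducts()
sdaTDS.Fill(tdsNorthwind, "ProductsTDS")
```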


Remember to pass the DataTable object’s name to the Fill method if you’re going to access the results using the strongly typed syntax. When the Fill method is called and the DataTable object is left unspecified, the data adapter will create a new DataTable object simply named Table. In other words, it will not fill the strongly typed table. No errors are generated. You are simply left scratching your head as to why you have an empty DataTable object. Accessing the DataTable object by using an ordinal-based collection lookup can also lead to confusion because the ordinal is off by the number of tables in your schema. When a typed DataSet object is instantiated, its n child DataTable objects are also created. Thus, although you thought the results would be in tds.Tables(0), they’re actually in tds.Tables(0 + n).

The next line of code creates the SqlCommandBuilder object that will automatically generate the INSERT, UPDATE, and DELETE statements used later when we call the SqlDataAdapter.Update method.
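The line is missing here; a sketch follows. The variable name is illustrative:

```vb
' Passing the adapter lets the builder infer INSERT, UPDATE, and DELETE
' statements from the adapter's SelectCommand.
Dim scbProducts As New SqlCommandBuilder(sdaTDS)
```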


All that is required is instantiation. The data adapter will maintain a reference to it as we work with the DataSet.

To populate the ListBox control, iterate through the Rows collection of the typed DataTable object:

Dim dt As DSTypedProducts.ProductsTDSDataTable = tdsNorthwind.ProductsTDS
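The loop itself is missing from this extraction; a sketch consistent with the description follows. The ListBox name is an assumption:

```vb
' Sketch; lstProducts is an illustrative control name.
lstProducts.Items.Clear()
For Each drProduct As DataRow In dt.Rows
    ' Skip rows the DataSet has marked as deleted.
    If drProduct.RowState <> DataRowState.Deleted Then
        lstProducts.Items.Add(drProduct("ProductName"))
    End If
Next
```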


Notice that you add to the list only those products that haven’t been deleted from the DataSet object. RowState is one mechanism the DataSet object uses to keep track of the changes.

As mentioned earlier, the sample application allows you to add, delete, or modify a product. Let’s take a closer look at the code needed to change a product’s name. (Adding and deleting a product are similar and will not be covered further.) The key lines of code are as follows:
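A sketch based on the description that follows; the typed row class name and control names are assumptions:

```vb
' Ordinal lookup in the Rows collection, cast to the typed row class.
Dim drProduct As DSTypedProducts.ProductsTDSRow = _
    CType(tdsNorthwind.ProductsTDS.Rows(lstProducts.SelectedIndex), _
    DSTypedProducts.ProductsTDSRow)
drProduct.ProductName = txtProductName.Text
```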


The first, rather lengthy, line of code initializes a strongly typed DataRow variable. The index of the selected item is used for an ordinal-based lookup in the Rows collection of the typed DataTable object. All that is left to do is assign a new value to the ProductName property. ProductName is one of the fields in the DataTable object. It has been converted to a property for the typed DataSet object.
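In sketch form, with the typed row class name and control names assumed:

```vb
' Ordinal lookup using the index of the selected ListBox item.
Dim row As DSTypedProducts.ProductsTDSRow = _
    tdsNorthwind.ProductsTDS(lstProducts.SelectedIndex)

' ProductName is exposed as a strongly typed property on the row.
row.ProductName = txtProductName.Text
```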

The last bit of code we’ll look at is in the btnSaveToDataSource_Click event handler. A single line of code is all that is required to push all changes made to the DataSet back into SQL Server:

sdaTDS.Update(tdsNorthwind, "ProductsTDS")

For your own instruction, I encourage you to try an update after commenting out the creation of the SqlCommandBuilder object, discussed earlier. Also, make changes to the DataSet object and then see what happens to those changes when you click the two buttons below the ListBox control.


Typed DataSet objects are a marvelous feature of ADO.NET. They’re easy to create, and they offer many advantages over their untyped ancestor. During the development of your application, your life is made easier with statement completion and IntelliSense support for the typed data members. The improved syntax is also more intuitive. Design-time type checking pays off at run time, when type mismatch errors are all but eliminated. And performance gains can be considerable if you’re used to doing string-based collection lookups. You should give strong consideration to widespread use of typed DataSet objects when building your next data-driven application.

Application #20 Build a Data Entry Form

This data-entry application ties together in one convenient package numerous basic ADO.NET concepts you’ve been introduced to thus far. The interface consists of several TextBox controls and two ComboBox controls, with buttons to add, save, and delete a record. Figure 3-12 shows a screen shot of Application #20.

click to expand
Figure 3-12: Application #20.

Building Upon…

Application #4: Try/Catch/Finally

Application #14: Use a SqlDataReader

Application #15: Bind Data to a ComboBox

New Concepts

There are no new concepts for this topic. It should be considered a review of several basic ADO.NET concepts that have already been covered.

Code Walkthrough

The code that follows shows you how to use a SqlDataReader object to populate various controls with data and then manipulate the data.

Populating the List Controls

When the form first loads, it calls the following methods in its Load event handler:


The code for all three methods is almost identical because the controls all derive from ListControl. As such, the first part of this walkthrough will deal with PopulateCategoryCombo.

The PopulateCategoryCombo method contains code for using a SqlDataReader object to fill a ComboBox control with categories from the Northwind database:

strSQL = "SELECT CategoryID, CategoryName FROM Categories"



Following the creation of a T-SQL SELECT statement, the SqlConnection and SqlCommand classes are instantiated. The connection is then opened, and a SqlDataReader variable (declared earlier) is used to receive the return from ExecuteReader. A CommandBehavior enumeration is used to conveniently close the connection when the data reader is closed.
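The steps just described might be sketched like this; the connection string variable is an assumption:

```vb
Dim cnn As New SqlConnection(strConnection)
Dim cmd As New SqlCommand(strSQL, cnn)
cnn.Open()

' CloseConnection ties the connection's lifetime to the reader:
' closing dr later also closes cnn.
dr = cmd.ExecuteReader(CommandBehavior.CloseConnection)
```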




After clearing the ComboBox control’s ObjectCollection to make way for a new, updated set of items, a Do/While loop is used to iterate through the contents of the data reader and add new ListItem objects to the ObjectCollection. Although the System.Web.UI.WebControls namespace contains a ListItem class that represents a data item within a databound list control, there is no equivalent for a Windows Forms ComboBox class. Therefore, in this application, ListItem is a custom class that functions in an almost identical manner. Its constructor takes a string value and an integer value that are used, respectively, to display the item and give it an additional value (akin to the DisplayMember and ValueMember properties if you were data binding instead of simply filling the ObjectCollection):



mName = ""




When dr.Read returns False, the application breaks out of the Do/While loop and then closes the data reader.
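The loop just described might be sketched as follows; the ComboBox name is an assumption, and the custom ListItem constructor signature is taken from the description above:

```vb
cboCategory.Items.Clear()
Do While dr.Read()
    ' Display text first, then the associated ID value.
    cboCategory.Items.Add(New ListItem( _
        dr("CategoryName").ToString(), CInt(dr("CategoryID"))))
Loop
dr.Close()   ' also closes the connection (CommandBehavior.CloseConnection)
```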

Populating the Remaining Form Elements

One line of code that is different among these three methods is contained in the PopulateProductList subroutine:


This sets the selected ListBox item to the first product so that the user interface does not initialize with a lot of empty fields. When this is set, its SelectedIndexChanged event is fired, the handler for which contains code to initialize the rest of the application:

Mode = "Update"

PopulateForm is thus one of the key methods in this application. It gets called when the form is loaded and every time the user selects a new product from the ListBox control. The code in this method is straightforward:


strSQL = "SELECT ProductID, ProductName, QuantityPerUnit, UnitPrice, " & _
    "UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued, " & _
    "SupplierID, CategoryID " & _
    "FROM Products " & _
    "WHERE ProductID = " & objListItem.ID



chkDiscontinued.Checked = CType(dr.Item("Discontinued"), Boolean)

First a SELECT command is created that uses the product ID from the selected ListBox item in the WHERE criteria. Notice how the ID value is retrieved. The SelectedItem property returns the selected ListItem object. It is stored as a generic object, so it must be cast to the custom ListItem type before the ID property is accessible.

Next the data is retrieved and placed into a SqlDataReader, and the control properties are set to the various field values. The ComboBox controls use a custom SetSelectedItem helper method to cause the ComboBox to reflect the supplier and category associated with this product:

Private Sub SetSelectedItem(ByVal cbo As ComboBox, ByVal strID As String)
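One plausible implementation of this helper, purely as a sketch (the loop body is an assumption; the sample’s actual implementation may differ):

```vb
Private Sub SetSelectedItem(ByVal cbo As ComboBox, ByVal strID As String)
    ' Walk the items until the ListItem whose ID matches is found,
    ' then make it the selected item.
    For i As Integer = 0 To cbo.Items.Count - 1
        If CType(cbo.Items(i), ListItem).ID.ToString() = strID Then
            cbo.SelectedIndex = i
            Exit For
        End If
    Next
End Sub
```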


Finally the data reader is closed.

Inserting a New Product

The AddProduct method called in the btnSave_Click event handler has ADO.NET code similar to what you have just walked through:

strSQL = "INSERT Products VALUES (" & _
    ScrubInput(txtProductName.Text) & ", " & _
    CType(cboSupplier.SelectedItem, ListItem).ID & ", " & _
    CType(cboCategory.SelectedItem, ListItem).ID & ", " & _
    ScrubInput(txtQtyPerUnit.Text) & ", " & _
    txtUnitPrice.Text & ", " & _
    txtUnitsInStock.Text & ", " & _
    txtUnitsOnOrder.Text & ", " & _
    txtReorderLevel.Text & ", " & _
    CType(IIf(chkDiscontinued.Checked, "1", "0"), String) & ")"




Notice the ScrubInput helper function that is used to prepare certain entries. ScrubInput escapes single quotes, removes double quotes, replaces an empty string with NULL, and trims any white space:

strValue = strValue.Replace("'", "''")
strValue = strValue.Replace("""", "")

If strValue.Trim() = "" Then
    Return "NULL"
Else
    Return "'" & strValue.Trim() & "'"
End If

The code for updating and deleting a product is similar to that for inserting a new product, so this ends the “Code Walkthrough” section.


This has been a quick review of some basic ADO.NET code and concepts involved with building a simple data-entry form. The SqlCommand object was used exclusively to execute all T-SQL statements. Although a DataSet object was not involved, you could certainly elect to build the data-entry form using a DataSet object instead of a SqlDataReader object. Both options are open to you, and the right choice depends on the scenario you’re trying to support.

Application #21 Build a Three-Tier Data Form

For most business applications, a well-known “best practice” is to organize your code into three tiers. Although there are differences of opinion about how a tier is defined and what goes where exactly, the typical model is as follows:

  • An upper or presentation tier for the user-interface code
  • A middle or business tier to house business rules and data-access logic
  • A lower tier that contains the data storage—for example, a database

A three-tier architecture has a number of advantages, such as scalability and code reuse. For example, when the data-access logic is not tightly coupled with the presentation tier, it can easily be reused to serve many types of user interfaces, from a rich desktop client to a thin Web client.

This sample application will demonstrate how to build a simple master-details data form that follows this model. The presentation tier comprises a Windows Forms application that contains two DataGrid controls in a master-details relationship and buttons to update and refresh the data. Database requests from the presentation tier are handled by the middle tier, which contains a data-access layer. The term layer is defined as a logical grouping of code according to function. Multiple layers can exist within a tier. Because of this application’s lack of complexity, a business rules layer is not needed. Figure 3-13 shows a screen shot of Application #21.

click to expand
Figure 3-13: Application #21.

Building Upon…

Application #4: Try/Catch/Finally

Application #7: Object-Oriented Features

Application #12: Use a DataSet and DataView

Application #18: Build a Master-Details Windows Form

Application #19: Use a Typed DataSet

New Concepts

Several minor ADO.NET concepts are introduced in this sample and will be treated in the “Code Walkthrough” section. The majority of the code, however, should look familiar. What might be conceptually new to you is the way the application is set up in Visual Studio .NET.

The solution contains two projects, a Windows application and a class library named MiddleTier. The class library was added by right-clicking the solution in the Solution Explorer, pointing to Add, and then selecting New Project. Although both projects are indeed in the same solution, you must add a reference to the class library’s DLL in the Windows application project or the application will not be able to access these middle-tier members.

If you right-click the MiddleTier project in the Solution Explorer and then click Properties, you’ll see that the root namespace is set to MiddleTier. This means the fully qualified name of all the code in this class library begins with MiddleTier. Additional namespaces can be used to further organize the code. You’ll see that the data access code is wrapped inside a block starting with Namespace DataAccessLayer and ending with End Namespace. (If logic for business rules were added, you could place it inside a Namespace BusinessLayer/End Namespace statement.) As a result, when accessing a method from the presentation tier, the fully qualified name is MiddleTier.DataAccessLayer.ClassName.MethodName. This makes the code more organized and readable.
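The layout can be sketched like this; the class body shown is illustrative only:

```vb
' In the MiddleTier class library (root namespace: MiddleTier)
Namespace DataAccessLayer

    Public Class DALBase
        ' Connection handling, GetDataSet, SaveChanges, and so on.
    End Class

End Namespace

' From the presentation tier, members are reached through the
' fully qualified name:
' Dim dal As New MiddleTier.DataAccessLayer.DALBase()
```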

Finally, the middle-tier code comprises three classes: a data-access base class and two more specialized classes that derive from it. The DALBase class contains functions for getting a new SqlConnection object, returning a DataSet object to the presentation tier, and pushing DataSet changes back to the database. The other two classes, Suppliers and Products, are similar. Each has a constructor that creates a new SqlDataAdapter object, a read-only property that makes the SqlDataAdapter available to the base class, and a GetData method for filling a DataTable that is then added to the DataSet object returned by the base class.

Code Walkthrough

What follows is a discussion of the code in the presentation and middle tiers.

Presentation Tier

At the outset, the data-access layer base class is instantiated with class scope so that it can be shared by the rest of the class members:


Then, in the form’s Load event handler, several methods are called to retrieve a DataSet object from the middle tier and subsequently bind each DataTable object in the DataSet object to its respective DataGrid control. In the GetDataSet method, you’ll find the following code:


First, a method on the data-access layer by the same name is called and its return value is set to a local DataSet variable. Next, DataView objects are created to facilitate the master-details relationship using the RowFilter property, as you’ve seen in a previous topic. Finally, the user is prevented from adding a new supplier by setting the AllowNew property.


Unlike when binding to a DataTable object, the DataView object also allows you to set restrictions on what the user can do in the DataGrid control. By setting the AllowNew, AllowEdit, and AllowDelete properties, you can affect how the DataGrid control behaves.
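A sketch of the binding setup; the DataSet, table, grid, and filter variable names are assumptions:

```vb
Dim dvSuppliers As New DataView(dsNorthwind.Tables("Suppliers"))
Dim dvProducts As New DataView(dsNorthwind.Tables("Products"))

' Show only the products for the currently selected supplier.
dvProducts.RowFilter = "SupplierID = " & strSupplierID

' Prevent the user from adding suppliers through the grid.
dvSuppliers.AllowNew = False

grdSuppliers.DataSource = dvSuppliers
grdProducts.DataSource = dvProducts
```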

The Save button Click event handler illustrates a couple of good points. The first line of code checks to see whether any changes have been made to the DataSet object. If not, there is no reason to proceed with calling the middle tier:


If there are changes to be pushed back into the database, a Try/Catch block is used to wrap code that calls the data-access layer:


Notice that when the DataSet object is passed to the middle tier, GetChanges is invoked. This method creates a new DataSet object consisting only of the changes made since it was last loaded or since AcceptChanges was called. This is an important method to use, as it can greatly reduce your network traffic. Also, to provide feedback to the user, a results message is returned by the SaveChanges method and displayed using the MessageBox class. Finally, the presentation tier can receive error messages from the middle tier because SaveChanges throws a new exception when an error is encountered, as you will see shortly.
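In outline, the handler might look like the following sketch; the dal variable, the dsNorthwind DataSet, and the SaveChanges signature are assumptions drawn from the description:

```vb
If dsNorthwind.HasChanges() Then
    Try
        ' Send only the changed rows across the wire.
        Dim strResult As String = _
            dal.SaveChanges(dsNorthwind.GetChanges())
        MessageBox.Show(strResult)
    Catch exp As Exception
        ' Errors rethrown by the middle tier surface here.
        MessageBox.Show(exp.Message)
    End Try
End If
```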

Middle Tier

The base class for the data-access layer has two primary methods: GetDataSet and SaveChanges. In each, the Suppliers and Products classes are first instantiated:


Looking at the constructor for the Products class, you see that it contains familiar code for creating a SqlDataAdapter and setting the SELECT, INSERT, UPDATE, and DELETE commands that are used for transferring data back and forth between the DataSet object and the database:


.CommandText = "SELECT * FROM ProductsDAL"



With instances of these two classes in hand, their respective GetData methods are then called to retrieve two DataTable objects, which are then added to a new DataSet object to form a complete package that can easily be passed back to the presentation tier:



Before returning the DataSet object, however, a DataRelation object is added to the DataSet object to set up a parent-child relationship between the Suppliers and Products tables:




The last line of code is for instructional purposes only—specifically, to highlight the Rule enumeration. This enumeration is used to indicate the action that is taken when an ADO.NET ForeignKeyConstraint is enforced. The default value is Cascade, which means that related rows will also be updated or deleted. Thus, in this case the statement is not necessary because by default the DeleteRule (as well as the UpdateRule) is set to Cascade. If you wanted different behavior, you could use one of the other three Rule enumeration values: None, SetDefault, and SetNull.
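A sketch of the relation setup (variable and relation names assumed; adding a DataRelation also creates a ForeignKeyConstraint on the child table):

```vb
Dim rel As New DataRelation("SuppliersProducts", _
    dsNorthwind.Tables("Suppliers").Columns("SupplierID"), _
    dsNorthwind.Tables("Products").Columns("SupplierID"))
dsNorthwind.Relations.Add(rel)

' Instructional only: Cascade is already the default DeleteRule.
' The ordinal here assumes this is the table's only constraint.
CType(dsNorthwind.Tables("Products").Constraints(0), _
    ForeignKeyConstraint).DeleteRule = Rule.Cascade
```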

Realize that without this DataRelation object there is no hierarchical relationship between the two tables in the DataSet object. In such a case, if the user were to delete a supplier from the master DataGrid control—which in turn marks them for deletion in its underlying DataSet object—any products associated with that supplier will remain in the DataGrid control and the DataSet object. Then, when the user attempts to save the changes back to the database, orphaned products will exist that will violate foreign-key constraints set in the lower data tier, causing an exception of type SqlException to be thrown. Try this for yourself. Run the application, click a supplier, look at its associated products in the lower DataGrid control, and then press Delete. Now comment the DataRelation code and repeat these steps. Notice that the products in the lower DataGrid control remain.

The SaveChanges function begins by declaring an integer variable for storing the number of rows changed when SqlDataAdapter.Update is called. Also declared is a SqlTransaction variable. This class represents a T-SQL transaction that is made in a SQL Server database. Use of a transaction is highly recommended. It ensures that all or none of the changes are pushed back to the database. In other words, if an error is encountered at any time during the update, the transaction can be rolled back and any partial changes can be undone. A two-phase transaction is used here: one phase to submit the changes to the database, and another to commit them if all goes well.

The update code is wrapped in a Try/Catch block. After obtaining and opening a new connection, a transaction is started by invoking the BeginTransaction method of the SqlConnection object:


Because the Products table is the child, its changes should be pushed to the database first. The SqlDataAdapter exposed by the Products class is accessed, and the Products table is passed to the Update method. Recall that the Update method will use the commands already generated by its SqlCommandBuilder object to queue the changes in the database for the final commit:
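The overall pattern might be sketched as follows; the member names are assumptions, and the sketch presumes the base class enlists each adapter’s commands in the transaction:

```vb
Dim intNumRowsChanged As Integer
Dim trans As SqlTransaction

Dim cnn As SqlConnection = GetNewConnection()   ' assumed base-class helper
cnn.Open()
trans = cnn.BeginTransaction()

Try
    ' Child rows first, then parent rows.
    intNumRowsChanged = products.Adapter.Update(ds.Tables("Products"))
    intNumRowsChanged += suppliers.Adapter.Update(ds.Tables("Suppliers"))

    ' Phase two: commit only if both updates succeeded.
    trans.Commit()
Catch exp As Exception
    ' Undo any partial changes and notify the caller.
    trans.Rollback()
    Throw New Exception("The changes could not be saved: " & exp.Message)
Finally
    cnn.Close()
End Try
```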


Immediately following this line of code is similar code for the Suppliers table. Notice that the number of rows changed by this second update is added to the intNumRowsChanged variable to record the final number of rows changed:


If no exceptions have been thrown thus far, the transaction can be committed and a results message passed back to the presentation tier:

Return intNumRowsChanged.ToString & " changes were made to the database."

However, if an error is encountered, the Catch block contains the following code to roll back the transaction and then throw a new exception that can be caught by the calling method:

Throw New Exception("The changes could not be saved due to the " & _
    "following error: " & exp.Message)

Finally, no matter what happens, the connection needs to be closed:



This sample has shown you how to build a simple master-details Windows application that is architected according to a three-tier model. The user interface resides in the presentation tier and does not have direct access to the database. Instead, it calls the middle tier, which acts as a broker between the lowest and highest tiers. Organizing and structuring your code in this way has many advantages. Using namespaces to further organize your code takes it one step further, making your code more usable, more readable, and easier to maintain.

Application #22 Build an Offline Application

This application builds upon the applications shown for data-entry forms and data binding with navigation by adding the ability to work offline as well as undo changes. By offline we mean that the application doesn’t need to be connected to a database to persist changes between application executions. You’ll see that this offline capability introduces several challenges and pitfalls. Figure 3-14 shows a screen shot of Application #22.

click to expand
Figure 3-14: Application #22.

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

Application #16: Data Binding with Navigation

Application #19: Use a Typed DataSet

Application #20: Build a Data Entry Form

Application #21: Build a Three-Tier Data Form

New Concepts

The new concepts for this application center around how a DataSet object supports change tracking and offline operations using XML. At the beginning of this chapter, you learned that a DataSet object is an in-memory copy of data and the data’s structure, or schema. When you query a database and store the results in a DataSet object, the object no longer needs to stay connected to the database. From this point, the DataSet object doesn’t care where it came from. It’s an intelligent, autonomous data cache. It inherently supports offline operations.

You need to understand two main aspects of the disconnected nature of the DataSet object. First, recall from an earlier topic that changes made to data in a DataSet object do not affect the object’s data source until you call the SqlDataAdapter.Update method. How does the DataSet object keep track of changes? It uses a DataRowState enumeration. A DataRow object with no changes pending has the Unchanged enumeration value assigned to it. DataRow objects with changes pending are marked as Modified. Other enumeration values are Detached, Added, and Deleted. Note that these are all indicators of pending changes; that is, changes awaiting reconciliation with the data source (for example, a database). Until you reload the DataSet object or invoke either the AcceptChanges or RejectChanges method, the modifications are tracked using this DataRowState enumeration in combination with an internal copy of the original DataRow object. Another related enumeration exposed by both the DataRow and DataRowView classes is DataRowVersion. This enumeration allows you to access the Current, Original, Proposed, or Default version of a DataRow object. The importance of this change-tracking concept will be underscored in the “Code Walkthrough” section, where you’ll learn why use of the DataView class, with its RowStateFilter property, is essential if you must support the ability to persist and undo changes.

The second important aspect of the disconnected nature of the DataSet object is its exposure of numerous methods that support deserializing (reading) and serializing (writing) its data and schema to XML. Among these are ReadXml and WriteXml. Both have numerous overloads to work with Stream, XmlReader, and XmlWriter objects; TextReader and TextWriter objects; and other transport mechanisms. These methods also take an optional enumeration of type XmlReadMode that influences the deserialization or serialization process. Although the XmlReadMode enumeration values are mostly different between the two methods, one that is shared by both is DiffGram.

A DiffGram represents an XML format used by DataSet objects to track changes. Recall that a DataSet object automatically keeps track of all changes made to its data elements since it was last loaded or its AcceptChanges method was called. These changes can be serialized as a DiffGram, which takes the following form:


The <DataInstance> element is just a placeholder for the actual DataSet object (or DataTable object) that owns the DiffGram. This element contains children for each DataRow object, whether it has been modified or not. In other words, this is the data in its current state. Data elements that have been modified are identified by a diffgr:hasChanges annotation. This annotation can have three values: inserted, modified, or descent. The first value indicates a new row, the second value indicates an edited row, and the third value indicates a row for which one or more children from a parent-child relationship have been modified. The <diffgr:before> element contains the original version of the row. Finally, there is a <diffgr:errors> element that contains any errors for that row.
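In skeletal form, with element content elided, a DiffGram looks roughly like this:

```xml
<diffgr:diffgram xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1">
  <DataInstance>
    <!-- current state of every row; changed rows carry a
         diffgr:hasChanges annotation -->
  </DataInstance>
  <diffgr:before>
    <!-- original versions of modified and deleted rows -->
  </diffgr:before>
  <diffgr:errors>
    <!-- per-row error information, if any -->
  </diffgr:errors>
</diffgr:diffgram>
```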

The following DiffGram contains several changes to the DataSet object used in this application. First, Customer 1 (ID ALFKI) was deleted. It does not appear as a child of the <CustomersDataSet> element, but it is present as a child of the <diffgr:before> element. Next, the Contact Name for Customer 2 (ID ANATR) was changed to Pablo Taco. The original Customer 2 data is also listed in the <diffgr:before> section. Finally, a new Customer 92 (ID PADAH) was added. This customer appears as a child of the <CustomersDataSet> element but not under <diffgr:before> because it was obviously not in the original DataSet object. Customers 3 through 89, as well as some of the fields, have been omitted for brevity. Also, DiffGram annotations are in bold.








You are now ready for the code walkthrough.

Code Walkthrough

When the application loads, a copy of the Northwind Customers table is created (if it doesn’t already exist); simple data bindings are cleared (if they exist) and then added to each TextBox control; and customer data is retrieved from the database and displayed. The rest of this section will discuss the code used for adding and deleting customer records, canceling modifications, and finally, saving the DataSet object to an XML file and loading the DataSet object from an XML file.

Adding a Record

The code for adding a record resides in the btnAdd_Click event handler. Although you could add a row directly to the DataTable, the syntax is easier if you go through the DataView, which will impart the change to its underlying DataTable:

drv("CustomerID") = "[ID]"
drv("CompanyName") = "[CompanyName]"

As the CustomerID and CompanyName fields do not accept NULL, a default value for each is provided. Following this, the DataRowView.EndEdit method is invoked. It is important to call EndEdit because if the user adds a row and then navigates elsewhere, the row is left in a Detached RowState. Leaving the row in a Detached RowState means that the DataRow has been created but is not yet a member of a DataRowCollection. A DataRow in this state can cause problems, so you should call EndEdit to append it to the DataRowCollection, which also marks it as a pending addition.
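Pulled together, the add operation might be sketched as (the dvCustomers DataView name is an assumption):

```vb
' AddNew returns a DataRowView for the new, detached row.
Dim drv As DataRowView = dvCustomers.AddNew()

' These two columns do not accept NULL, so supply placeholders.
drv("CustomerID") = "[ID]"
drv("CompanyName") = "[CompanyName]"

' Append the row to the DataRowCollection and mark it as a
' pending addition.
drv.EndEdit()
```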


The CurrencyManager also exposes methods for ending and canceling edits. Although using these methods might appear to work in certain situations, the .NET Framework documentation gives the following cautionary advice: “This property [sic] was designed to be used by complex-bound controls, such as the DataGrid control, to cancel edits. Unless you are creating a control that requires this same functionality, it is not recommended that you use this method.”

Once the new row has been added, all that remains to be done is update the record number indication and move the position of the CurrencyManager to the new record so that the user can enter the data.


The code in UpdateRecordNum contains a single line:

lblPosition.Text = cmCustomers.Position + 1 & " of " & dvCustomers.Count

You might think it would be acceptable to use the Count property of the DataTable object’s DataRowCollection instead of the Count property of the DataView object. The problem is that changes to the DataTable object are pending. Therefore, any additions or deletions will not be reflected in the total count. The DataView object, however, conveniently makes available a RowStateFilter property, the default for which is CurrentRows. This ensures that the user sees the current version of the rows; that is, the version that reflects the pending changes.

Further modifications to the record are handled automatically. When the user begins editing any item in the DataRow object, its BeginEdit method is implicitly called. Likewise, when the user navigates to another record, EndEdit is automatically invoked and the DataSet object records the pending changes.

Deleting a Record

Deleting a record can be done using three different methods. If you’re not concerned about the DataSet object tracking changes (and thus having the ability to undo them), invoke Remove (if you have a reference to a specific DataRow object) or RemoveAt (if you have only the row’s index in the DataRowCollection). These methods permanently remove the row. They are equivalent to calling Delete, which merely marks a row for pending deletion, and then calling AcceptChanges.
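The options side by side, as a sketch with assumed variable names:

```vb
' Permanent removal; cannot be undone with RejectChanges:
dtCustomers.Rows.Remove(row)        ' given a DataRow reference
dtCustomers.Rows.RemoveAt(intIndex) ' given only the row's index

' Marks the row for pending deletion; tracked and reversible:
dvCustomers(cmCustomers.Position).Delete()
```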

This application supports change tracking and undoing, so in the btnDelete_Click event handler, you’ll find the following code:


Here is another example of the importance of working through the DataView object instead of with the DataTable object directly. It’s reasonable to think you should be able to invoke Delete on the DataRow object and see the same results as when invoking it on the DataRowView object. However, for reasons not entirely clear at the time of this writing, when using the DataTable object, clicking the Delete button will only cause the current record to be marked for deletion. Subsequent clicks do nothing, unless you manually advance to the next record (and even this appears to function inconsistently). Perhaps there is a way around this pitfall that permits interaction directly with the DataTable object. Regardless, if you use the DataView object instead, everything works as you would expect.

Cancelling Modifications

Two buttons in this application permit modification cancellations: Cancel Edit and Cancel All. The Click event handler for the first button contains code to cancel edit operations on the current record:


This will undo any modifications made to the current record since EndEdit was called (either explicitly or implicitly). As mentioned earlier, it’s best to invoke CancelEdit on the DataRowView object, not CancelCurrentEdit on the CurrencyManager object. The CustomerID TextBox control is also disabled when editing is cancelled. This control is editable only when adding a new customer.

The code for handling the Cancel All button is slightly different. Here, the DataTable.RejectChanges method is invoked to undo all pending changes to all DataRow objects. (There is no equivalent method for the DataView class.)


Notice that you invoke CancelEdit (or EndEdit) prior to RejectChanges; if you didn’t, modifications to the current record would not be undone. This is because the record is still in edit mode, and thus the changes have a DataRowVersion enumeration value of Proposed instead of Current (that is, pending).
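The sequence, sketched with assumed variable names:

```vb
' Take the current record out of edit mode first, so its changes
' move out of the Proposed DataRowVersion...
dvCustomers(cmCustomers.Position).CancelEdit()

' ...then undo every pending change in the table.
dtCustomers.RejectChanges()
```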

Saving to and Loading from an XML File

The application allows you to save changes to the database or to an XML file. In the btnSave_Click event handler, you’ll find the following code:


MsgBox("No changes to the DataSet were detected. Save aborted.", _

As you would expect by now, EndEdit is called before doing anything else so that the changes are marked as pending. Next, the DataSet object is checked for any changes. If any exist, an appropriate method is called based on the checked state of the Use XML File CheckBox control. If you’re saving to a database, using the GetChanges method is advised so that only the changes get propagated back to the database, thus minimizing network traffic.

The SaveCustomersToXml method requires only one line of code to persist the DataSet object to XML:


An overload that accepts an XmlWriteMode enumeration is used to write the DataSet object in the DiffGram format. Similarly, in the LoadCustomersFromXml method, you’ll find the following code to load the DataSet object from the XML file:
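Both calls, side by side, might be sketched as follows; the file name is an assumption:

```vb
' Persist the DataSet, preserving original values and row states:
dsCustomers.WriteXml("Customers.xml", XmlWriteMode.DiffGram)

' Reload it later, restoring the same pending-change information:
dsCustomers.ReadXml("Customers.xml", XmlReadMode.DiffGram)
```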


Experiment with the application to see how this works. Run it, make changes to various parts of it, check Use XML File, and then click Save. Then make some other changes, and click Refresh. You’ll see the recent changes overwritten by the data from the XML file.


This is the longest discussion in this chapter. A fair amount of conceptual ground had to be covered to facilitate an adequate understanding of the code behind an offline application. Along the way, you were alerted to several pitfalls concerning the DataTable and CurrencyManager classes that could easily become a source of frustration. The bottom line is to remember the two most important points of this topic: avoid using the CurrencyManager methods, and when displaying data, in general think DataView and not DataTable. If you adhere to these rules of thumb, building a robust offline application should be smooth sailing.

Application #23 Use T-SQL to Create Database Objects

This sample application shows you how to create database objects by executing T-SQL statements from ADO.NET. The emphasis will be on constructing T-SQL statements to create a database and then add various database objects to it, such as a table, stored procedure, and view. The table will then be populated, a DataSet object filled with its contents, and the results displayed in a DataGrid control. Figure 3-15 shows a screen shot of Application #23.

click to expand
Figure 3-15: Application #23.

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

New Concepts

The new concepts are entirely related to SQL Server (or MSDE, which is a lite version of SQL Server and has the same internal architecture). The two main ADO.NET methods used in this application, SqlCommand.ExecuteNonQuery and SqlDataAdapter.Fill, have already been seen in previous topics. The former executes all T-SQL statements that create the new database and its associated objects. Fill is used only in the last step, to retrieve the data from the new database table and place it in a DataSet object.

As for SQL Server, you need to understand how it keeps track of its objects, both at the server and individual database levels. At the server level, the master database is all-important—every SQL Server instance has one. It stores all data that affects the operation of that instance, such as user accounts, other databases, and configuration settings. To create a database from code, you first need to check whether it already exists. You do this by querying the master..sysdatabases table. Objects residing in a particular database are found in the sysobjects table for that database.

Code Walkthrough

You’ll now see how to query the sysdatabases and sysobjects tables and then use DROP, CREATE, and other T-SQL commands from ADO.NET.

Creating a Database

Although it’s not required, you should find out whether the target SQL Server instance already contains a database of the same name. If you elect not to run this check, an exception of type SqlException could be thrown during the creation attempt. The following code, in the application’s CreateDatabase method, demonstrates the syntax for writing a readable T-SQL statement that drops the App23Demo database if it exists and then creates it:

 "IF EXISTS (" & _
 "SELECT * " & _
 "FROM master..sysdatabases " & _
 "WHERE Name = 'App23Demo')" & vbCrLf & _
 "DROP DATABASE App23Demo" & vbCrLf & _
 "CREATE DATABASE App23Demo"

The T-SQL keyword EXISTS specifies a subquery that tests for the existence of a row. In this case, you’re looking for the row in the sysdatabases table that contains information about the App23Demo database. EXISTS returns a Boolean value, so if the return value is True, use the DROP keyword to delete the database. In either case, you then use the CREATE DATABASE statement to create the App23Demo database.

The use of the carriage-return/line-feed constant (vbCrLf) is not necessary, but it can make your T-SQL statements more readable if you Trace.Write them for debugging purposes. If you omit the carriage-return/line-feed constants, make sure you add appropriate spaces at the end of each line to separate the various T-SQL statements.

Executing this and all other T-SQL statements in this sample that do not retrieve data is done using the following code:
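A minimal sketch of this code, assuming the connection is an open SqlConnection named cnn and the T-SQL is held in a string named strSQL:

 Dim cmd As New SqlCommand(strSQL, cnn)
 cmd.ExecuteNonQuery()

ExecuteNonQuery is the right choice here because these statements return no result set; it simply reports the number of rows affected.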


At this point in the chapter, these lines require no further comments. For the remaining T-SQL steps, this code is used to execute the statements.

Creating a Table

You are now going to be working at the database level and, thus, with the App23Demo..sysobjects table. In the btnCreateTable_Click event handler, you’ll find the following T-SQL statement:

 "USE App23Demo" & vbCrLf & _
 "IF EXISTS (" & _
 "SELECT * " & _
 "FROM sysobjects " & _
 "WHERE Name = 'NW_Seafood' " & _
 "AND Type = 'u')" & vbCrLf & _
 "DROP TABLE NW_Seafood" & vbCrLf & _
 "CREATE TABLE NW_Seafood (" & _
 "  ProductID Int NOT NULL," & _
 "  ProductName NVarChar(40) NOT NULL," & _
 "  QuantityPerUnit NVarChar(20) NOT NULL," & _
 "  UnitPrice Money NOT NULL," & _
 "  PRIMARY KEY (ProductID))"

The first line directs the server to use the App23Demo database for all subsequent commands. Following this line is the now-familiar use of the EXISTS keyword with a subquery against the sysobjects table. Notice that the object TYPE is added to the WHERE criteria. The value u indicates that it is a user table. It’s wise to include the type in your criteria because it’s possible for database objects of different types to share a name.

There are a number of statements involved with creating the NW_Seafood table. When using CREATE TABLE, you pass information about the columns the table will contain, as well as any constraints and other objects that define the table. In this sample, you see the column name, data type, and whether the table can contain NULL values. Lastly, a constraint is added to indicate that the ProductID column is the primary key.


An easy way to create a table and populate it with data is to use the SELECT INTO statement. For example, to create the table used in this sample, you could execute this T-SQL statement:
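A hedged reconstruction of such a statement, assuming the NW_Seafood columns used in this sample and Northwind's Seafood category (CategoryID 8):

 SELECT ProductID, ProductName, QuantityPerUnit, UnitPrice
 INTO NW_Seafood
 FROM Northwind..Products
 WHERE CategoryID = 8

SELECT INTO creates the target table from the shape of the result set, which is what makes it a one-step create-and-populate operation.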


Keep in mind, however, that you’ll then need to use ALTER TABLE to add constraints and other defining objects.

Creating a Stored Procedure and View

The T-SQL statement for creating a stored procedure looks similar to that for creating a table. In the btnCreateSP_Click event handler, you’ll find the following code:

 "USE App23Demo" & vbCrLf & _
 "IF EXISTS (" & _
 "SELECT * " & _
 "FROM sysobjects " & _
 "WHERE Name = 'AddSeafood' " & _
 "AND Type = 'p')" & vbCrLf & _

The only difference worth mentioning between the preceding code and that used to create a table is that instead of type='u', you’ll find type='p' (for procedure). Also, PROCEDURE is substituted for TABLE.

The T-SQL statement for creating a view is almost identical to the code just shown and requires no further comment:

 "USE App23Demo" & vbCrLf & _
 "IF EXISTS (" & _
 "SELECT * " & _
 "FROM sysobjects " & _
 "WHERE Name = 'GetSeafood' " & _
 "AND Type = 'v')" & vbCrLf & _

Displaying the Data

To display the data in the DataGrid control, you must first populate the table with some products. You’ve created a stored procedure just for this purpose, AddSeafood, and this is executed in the btnPopulate_Click event handler by using the T-SQL statement EXECUTE App23Demo.dbo.AddSeafood.

Following this, in the btnDisplay_Click event handler, there is code for querying the previously created View and filling a DataSet object:

 "USE App23Demo" & vbCrLf & _
 "SELECT * " & _

da.Fill(dsSeafood, "Seafood")

The DataGrid control is then bound to the DataSet object for display:

.CaptionText = "Northwind Seafood"

You’ll also find DataGrid formatting code in this sample, but this is discussed in the next chapter.


This sample has given you an introduction to executing T-SQL statements that create a database and associated objects, all from ADO.NET. You learned that information about databases is contained in the master..sysdatabases table and that information about a database object is contained in that database’s sysobjects table. You also learned that the EXISTS keyword can be used with a subquery to check for the existence of a database object before creating one anew.

Should you want to learn more about using T-SQL to create database objects, consider studying one of the lengthy scripts included with the .NET Framework. For example, you can find the instnwnd.sql script in C:\Program Files\Microsoft Visual Studio .NET\FrameworkSDK\Samples\Setup. By reverse engineering this script, you’ll learn a lot about database object creation using T-SQL.

Application #24 Load Images from and Save Images to a Database

Sometimes you need to store images in a database instead of as physical files. This sample application, shown in Figure 3-16, will show you how to build a Windows Forms interface that allows you to do the following:

  • Browse for an image on your hard disk
  • Load the selected image into a PictureBox control for viewing
  • Save an image displayed in the PictureBox control to the database
  • Select an image from a ListBox control, and load it from the database
  • Delete an image

    click to expand
    Figure 3-16: Application #24.

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

Application #19: Use a Typed DataSet

New Concepts

The new concepts in this topic center around the abstract Stream class and how it’s used to convert an image file to and from the Image data type that SQL Server uses to store images. Be sure not to confuse the Image data type with the word image, as if to imply that only images can be stored therein. Rather, the Image data type can store anything as variable-length binary data.

A byte array is used to send data to an Image field. Thus, the main question is: How does one convert an image file—whether a JPEG, Bitmap, or other format— into an array of bytes? There are several ways to accomplish this in .NET. One of the easiest ways is to use a concrete implementation of the Stream class. A stream in .NET is essentially an abstraction of a sequence of bytes, whether these bytes came from a file, a TCP/IP socket, a database, or wherever. Stream classes allow you to work with binary data, reading and writing back and forth between streams and data structures (such as a byte array).

Once the image is converted to a byte array, it’s saved to a database by using coding practices you’re already familiar with at this point in the book. The steps required to retrieve an image from a database essentially amount to reversing the process.

The “Code Walkthrough” section will explain these steps in further detail. You might also peek ahead to Chapter 8, which provides more in-depth information about working with streams.

Code Walkthrough

You’ll first see how to browse for images on your hard disk and display them in a PictureBox control. Following this, you’ll see code for reading and deleting images from, and saving images to, a database.

Browsing For and Displaying an Image

The first task is to find an image on your hard disk. To do this, use an OpenFileDialog object in conjunction with a standard Button control. In the btnBrowse_Click event handler, you can see how this is done. The first few lines of code merely set properties of the OpenFileDialog object.

.InitialDirectory = "C:\"
.Filter = "All Files|*.*|Bitmaps|*.bmp|GIFs|*.gif|JPEGs|*.jpg"
.FilterIndex = 2

The Filter property takes a pipe-delimited list of description/pattern pairs that determines the valid file types that can be accessed through the dialog box. Among other properties, you can also set FilterIndex to the file type that you want to appear by default in the dialog box’s Files Of Type menu. The index is not zero-based, so a FilterIndex of 2 makes Bitmaps the default in this example.

The dialog box is not actually opened until its ShowDialog method is called, which can be combined in an If/Then statement to check which button was pressed and perform follow-on tasks:
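A sketch of that If/Then block, with the dialog, PictureBox, and Label names (odlgOpenFile, picShowPicture, lblFilePath) assumed:

 If odlgOpenFile.ShowDialog() = DialogResult.OK Then
     ' FileName holds the full path, not just the file name.
     lblFilePath.Text = odlgOpenFile.FileName
     picShowPicture.Image = Image.FromFile(odlgOpenFile.FileName)
 End If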


Although an OpenFileDialog object displays an Open button rather than an OK button, there is no DialogResult value for the Open button; you test against DialogResult.OK instead. Once it’s confirmed that the Open button has been clicked, properties of the PictureBox control are set. Notice how the Image property—which requires an object of type System.Drawing.Image—is assigned. The Image class is abstract and exposes a number of shared methods for working with images, one of which is FromFile. This method creates an Image object from a fully qualified path; and although the name of the OpenFileDialog.FileName property might lead you to think that it contains only the file name, it actually holds the full path.

Saving an Image

Now that your image file is represented by an Image object, you can use a stream to convert it to a byte array. In the btnSave_Click event handler, the first line of code creates a MemoryStream object:


A MemoryStream object is simply a stream that uses memory as its backing store instead of some other medium, so it usually performs better than streams backed by slower media. Streams are flexible: you could, for example, have used a FileStream object to open the image file directly and read it in. The implementation here, however, is simple and straightforward.

The MemoryStream is then passed as an argument to the Save method, another member of the Image class. You can optionally pass the image format—for example, by accessing the Image’s read-only RawFormat property:


The actual byte array conversion comes in the next line. GetBuffer returns an array of unsigned bytes being held by the stream.


It’s always best to manually close a stream as soon as possible instead of leaving it for the garbage collector. Stream.Close contains code that calls GC.SuppressFinalize, which means that the stream is not finalized and resources are released sooner. Also, if you are using a FileStream object, the underlying file cannot be moved or deleted until the stream is closed.
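Putting these save steps together, a sketch assuming the PictureBox is named picShowPicture:

 Dim ms As New MemoryStream()
 ' Serialize the image into the stream in its original format.
 picShowPicture.Image.Save(ms, picShowPicture.Image.RawFormat)
 ' Extract the byte array held by the stream.
 Dim bytImage As Byte() = ms.GetBuffer()
 ms.Close()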

The last data-gathering task is to extract the filename from the full path, as there is no need to store the entire path in the database:

lblFilePath.Text.Substring(lblFilePath.Text.LastIndexOf("\") + 1)

This might look a bit complex and convoluted, but all you’re doing is indicating that you want a substring of the full path that starts after the last backslash.

With the filename extracted and the image converted to a byte array, you’re now ready to use the ADO.NET practices you’ve already learned to push these to the database.

 "INSERT INTO Picture (Filename, Picture) " & _


As you can see, at this point there is nothing new except the use of the SqlDbType.Image enumeration. Set the value of the @Picture parameter to the byte array, and execute the INSERT statement as you would with any other type of data.
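A sketch of the full INSERT, assuming an open SqlConnection named cnn, the extracted file name in strFileName, and the byte array bytImage from the previous step:

 Dim cmd As New SqlCommand( _
     "INSERT INTO Picture (Filename, Picture) " & _
     "VALUES (@Filename, @Picture)", cnn)
 cmd.Parameters.Add("@Filename", SqlDbType.NVarChar, 255).Value = strFileName
 ' The Image SqlDbType accepts the raw byte array directly.
 cmd.Parameters.Add("@Picture", SqlDbType.Image).Value = bytImage
 cmd.ExecuteNonQuery()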

Reading an Image

When the application’s Manage tab is clicked, a DataSet object is automatically created and its contents bound to a ListBox control. This allows you to see what images are in the database. You can then select one to display it or delete it.

From this point forward, you’re essentially reversing the process. To display an image, you have to convert it from a byte array to an Image, and then assign it to the PictureBox.Image property:

CType(dsPics.Tables(0).Rows(lstPics.SelectedIndex)("Picture"), Byte())


The SelectedIndex property of the ListBox control is used to retrieve the contents of the associated Picture field in the DataSet object, which is then explicitly cast to a byte array. Following this, a MemoryStream object is created by passing the byte array to its constructor. The last step is to invoke the shared FromStream method to convert the stream contents to an Image object, which is then assigned to the PictureBox.Image property.
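The retrieval steps just described can be sketched as follows (control and variable names assumed):

 Dim bytImage As Byte() = _
     CType(dsPics.Tables(0).Rows(lstPics.SelectedIndex)("Picture"), Byte())
 ' Reverse the save process: byte array to stream to Image.
 Dim ms As New MemoryStream(bytImage)
 picShowPicture.Image = Image.FromStream(ms)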

Deleting an Image

Deleting the image is the simplest task of all. For this, the SqlDataAdapter.Update method is called after deleting the relevant row in the DataTable object:
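A sketch of the delete, assuming a SqlDataAdapter named da with its DeleteCommand configured, and the DataSet named dsPics:

 ' Mark the selected row as deleted, then push the change to the database.
 dsPics.Tables(0).Rows(lstPics.SelectedIndex).Delete()
 da.Update(dsPics, dsPics.Tables(0).TableName)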


Finally, clean up the user interface to reflect the changes:

lblFileName.Text = ""


This application has shown you how to save images to and load images from a database. You learned that the save process involves the following progression: image file to Image to byte array to database. To load and display an image from the database, you reverse the process: database to byte array to Image to PictureBox. In addition to being introduced to the OpenFileDialog class, you were shown the MemoryStream class, which served as the in-memory conversion medium. You’ll learn more about streams in Chapter 8.

Application #25 Use Legacy ADO

There are times when you might need to use legacy ADO from within your .NET application. For example, you could have a substantial investment in an ADO code base that you want to leverage in a .NET application. Or perhaps you want to set up a search engine on your Web site using the Windows 2000 Indexing Service, which at the present time returns search results only in an ADO RecordSet.

Fortunately, the .NET Framework makes it easy to work with ADO. This sample application will show you how to retrieve and manipulate data using ADO, as well as transfer data from a RecordSet to a DataSet object. Figure 3-17 shows a screen shot of Application #25.

click to expand
Figure 3-17: Application #25.

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

Application #15: Bind Data to a ComboBox

Application #16: Data Binding with Navigation

New Concepts

It is assumed you already have basic experience with ADO and just want to know how to use it from a .NET application. As such, using ADO within .NET and in conjunction with ADO.NET are the only new concepts.

ADO is made accessible to a .NET application through a COM Interop layer. This layer is actually a runtime-callable wrapper (RCW) that works with interoperability services of the .NET base class library. It’s created especially for the ADODB DLL, making it appear to the .NET runtime as a native .NET component. Thus, an interop wrapper makes it appear as if you’re working directly with the COM object. However, as with a proxy class for a Web service, you are actually calling members of the wrapper class, which handle the complex underlying communication details for you and forward your requests to the COM runtime.

There are two ways to create this wrapper class. The easiest way is to add a reference to the COM component in Visual Studio .NET. If the component doesn’t already have a COM interop wrapper built in—as is the case with the ADODB component—Visual Studio .NET will ask you for permission to create one. Alternatively, you can use the Type Library Importer utility (Tlbimp.exe) from the command line.


Some noteworthy instruction on this subject can be found in the .NET Framework Documentation: “When using ADO Recordset or Record objects in conjunction with .NET Framework applications, always call Close when you are finished. This ensures that the underlying connection to a data source is released in a timely manner, and also prevents possible access violations due to unmanaged ADO objects being reclaimed by garbage collection when existing references still exist. Note that the OleDbDataAdapter.Fill overload that takes a DataTable object and an ADO object implicitly calls Close on the ADO object when the Fill operation is complete.”

Code Walkthrough

The first tab of the sample application shows how to navigate through an ADO Recordset. When the form loads, an ADO connection is created and opened in the InitRecordNavigation helper routine. A RecordSet object is then created using an adOpenStatic cursor to allow moving forward and backward through the records:

 "SELECT CompanyName, ContactName, Phone " & _
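A sketch of opening the Recordset with that static cursor, assuming an open ADODB.Connection named cnn and the SQL in strSQL:

 rs = New ADODB.Recordset()
 ' adOpenStatic allows moving both forward and backward through the records.
 rs.Open(strSQL, cnn, ADODB.CursorTypeEnum.adOpenStatic)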

Although certainly not a best practice, the connection is left open for demonstration purposes so that it can be shared among the other examples. The code for displaying the RecordSet values in the TextBox controls is contained in the PopulateSimpleNavigationForm method. The syntax is similar to what you’ve seen when working with a SqlDataReader object or other ADO.NET data sources:


When a navigation button is clicked, the button’s Click event handler moves the RecordSet cursor and then updates the TextBox controls. For example, the btnNext_Click event handler contains the following code. Notice the code to ensure that there is a next record and not the end-of-file:
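One plausible guard, assuming the Recordset rs and the helper routine named in the text:

 rs.MoveNext()
 ' If MoveNext ran past the last record, back up to it.
 If rs.EOF Then
     rs.MoveLast()
 End If
 PopulateSimpleNavigationForm()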


The second tab of the sample application shows how to fill a ComboBox control by iterating through a RecordSet object. Here, the default cursor adOpenForwardOnly is used because forward-only, read-only access is all that is needed. When this cursor is used, a RecordSet object is very similar to the ADO.NET DataReader object.

Dim strSQL As String = "SELECT CategoryName FROM Categories"

This example also shows how to execute a SQL command using ADO. The following code updates the category description and then checks the success/fail integer value returned as an object when the ADO Connection object’s Execute method is called. (The SQL statement is omitted.)

MessageBox.Show("A problem was encountered " _
    & "when updating the record!", Me.Text, _

The code for the example in the third tab, which shows how to insert a record, is almost identical and is not covered further.

Finally, the example in the fourth tab demonstrates how to move data from a RecordSet object to a DataSet object. First, in ways similar to what you’ve already seen, a SQL command is executed by an ADO Connection object, returning a RecordSet object:

"SELECT CustomerID, CompanyName, ContactName, Country, " & _
"  Region, Phone, Fax " & _

Next, a DataSet object is instantiated. This will be passed as an argument along with the RecordSet object to an OleDbDataAdapter object when the latter’s Fill method is called.

da.Fill(ds, rs, "Customers")

Finally, the DataGrid control is bound to the DataSet object to display the transferred data:

grdMain.DataMember = "Customers"


This topic has shown you how to work with legacy ADO classes from within a .NET application. You learned that the ADODB component is made available to managed code through a specialized COM interop wrapper that serves as an intermediary between the .NET and COM runtimes. Perhaps the most interesting example—showing the remarkable ability of ADO and ADO.NET to coexist with one another—was the final example, in which you saw how to fill a DataSet with data from a RecordSet by using a data adapter from the OLE DB .NET Data Provider.

Application #26 Use Crystal Reports

Crystal Reports is the standard reporting tool for Visual Studio .NET. It has enjoyed a long relationship with Microsoft, having shipped with Visual Basic since 1993. Crystal Reports .NET is the latest version. It was reengineered to take advantage of the .NET Framework and is a generational leap over preceding versions. This version is also not available as a standalone product. Rather, for the first time it has been fully integrated with the Visual Studio IDE.

With Crystal Reports, you have an integrated component that allows you to create a limitless number of professional-looking, custom reports that pull from a variety of data sources. You can then host these reports inside a Web or Windows application, or even publish them as a Web service.

Crystal Reports controls interact with other .NET controls to provide a rich user-interface experience with capabilities such as report filtering and drill down. Users can even export reports to Microsoft Word, Microsoft Excel, Adobe Acrobat, HTML formats, and more.

This sample shows you how to create a Windows Forms application that hosts four different reports providing various layouts and degrees of interaction. The Basic report simply shows tabular data. The Parameter report allows the user to select a company from a .NET ComboBox control and then view a report tailored to that company. The Dynamic Format report similarly takes parameters, but it uses them to format the results differently depending on the input. Finally, the Graph Drill Down report, shown in Figure 3-18, illustrates how to use bar graphs in conjunction with the drill-down feature of Crystal Reports.

click to expand
Figure 3-18: Application #26.

Building Upon…

Application #4: Try/Catch/Finally

Application #13: Use Stored Procedures

Application #14: Use a SqlDataReader

Application #15: Bind Data to a ComboBox

Application #19: Use a Typed DataSet

New Concepts

The new concepts we will focus on here center around the two data-access models used by Crystal Reports. Concepts involving the creation of reports are also briefly covered.

Data Access Models

Crystal Reports uses two different data-access models. With the Pull Model, the report handles everything: connecting to the database, submitting the SQL commands, retrieving and formatting the data, and so on. Crystal Reports uses its own database drivers that support a variety of data sources. To many developers, this method is the easiest because no additional coding is required. Three examples in the sample application use the Pull Model.

The Push Model involves an intermediate data layer to handle retrieving data from the data source and passing it on to the report. For example, you could use ADO.NET to create a DataSet object and then bind (or push) the DataSet object to the report. This is more complex, but it gives you greater control over how data is retrieved (for example, with or without connection pooling) and what the report receives (for example, filtered further after retrieval). It also allows you to leverage existing code or skills instead of having to learn the Crystal Reports way of data access. The first example in the sample application uses the Push Model.

Creating Reports

Crystal Reports has an integrated Reports Designer to help you create the report files (*.rpt). As with other Visual Studio .NET integrated components that have supporting design tools, you can use the designer and its associated wizards (experts, as Crystal Reports refers to them) to set all the properties for connecting to a data source, retrieving the data, applying formatting, and so on. No additional code is required.

Alternatively, you can make minimal use of the designer, writing code to set properties at run time. To do this, Crystal Reports provides an extensive API that is well documented in the Visual Studio .NET integrated help.


One stumbling block for developers starting out with Crystal Reports .NET is how to get Stored Procedures to appear as an option in the Database Expert dialog box. By default, only Tables and Views will appear.

You can add Stored Procedures and change other settings for the Reports Designer as follows. First, open a report in Visual Studio .NET. Next, right-click on any white space in the report, point to Designer, and then click Default Settings. Click the Database tab, and then check Stored Procedures.

After you click OK, you still might not see Stored Procedures as an option in the Database Expert dialog box. If this happens, close and re-open Visual Studio .NET, and then reopen the Database Expert.

The Reports Designer has many features. It’s beyond the scope of this topic to address them. Figure 3-19 depicts a report open in Visual Studio .NET, with the Field Explorer and Database Expert.

click to expand
Figure 3-19: The Field Explorer and Database Expert are shown here for a Crystal report that is open in Visual Studio .NET.

In the “Code Walkthrough” section, you’ll start with reports that have already been created.

Displaying Reports

The steps required to display a report in a Windows Forms application vary depending on how much you use the Reports Designer to configure the report. At a minimum, you simply add a Crystal Reports Viewer control to a form, create a ReportDocument object and call its Load method to load the desired report, and then set the viewer’s ReportSource property (similar to .NET data binding with the DataSource property), effectively binding it to the report file.
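Those minimal steps look something like this (the report path and viewer control name are assumed):

 Dim rpt As New ReportDocument()
 rpt.Load("C:\Reports\Basic.rpt")
 ' Binding the viewer to the report is analogous to setting DataSource.
 crvViewer.ReportSource = rpt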

A more code-intensive approach would involve interacting with members of the CrystalDecisions.CrystalReports.Engine namespace to set datasources, formatting, and many other things. You’ll see some examples of this in the next section.

Code Walkthrough

This walkthrough is divided into two parts that represent the two data-access models used by Crystal Reports.

The Push Model Example

The first example you’ll look at uses the Push Model. As such, the code is a bit more complex because the data retrieval is not handled by the report. In the btnPreviewBasicReport_Click event handler, you’ll find the following variables initially declared:


The ReportDocument class is found in the CrystalDecisions.CrystalReports.Engine namespace. As the name implies, it represents the report being displayed in the CrystalReportViewer control. The other two variables should be familiar to you from Application #19: Use a Typed DataSet.

The code needed to retrieve the data source and push it to the report comes next. The typed DataSet, created from the Ten Most Expensive Products stored procedure in Northwind, is first instantiated and filled:


Next, the report is loaded by passing the path to the physical file created using the designer:


Finally, the two source properties are set: the data source for the report, and the report source for the viewer:
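The complete Push Model sequence can be sketched as follows; the typed DataSet name, stored procedure invocation, report path, and viewer name are all assumptions for illustration:

 Dim rpt As New ReportDocument()
 Dim ds As New dsTenMostExpensive()   ' typed DataSet, name assumed
 Dim da As New SqlDataAdapter("[Ten Most Expensive Products]", cnn)
 da.SelectCommand.CommandType = CommandType.StoredProcedure
 da.Fill(ds, ds.Tables(0).TableName)

 rpt.Load(strReportPath)              ' path to the .rpt file, assumed
 rpt.SetDataSource(ds)                ' push the DataSet to the report
 crvViewer.ReportSource = rpt         ' bind the viewer to the report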


That’s all there is to the more complex Push Model. The next example will illustrate the Pull Model.

The Pull Model Example

The Customer Orders report is displayed on the second TabPage, Parameter Report. This report takes a single parameter, the Customer Name from the ComboBox control, showing you how a data-bound .NET control can interact with a Crystal Reports control.

The variables declared and initialized inside the btnPreviewCustomerReport_Click event handler are a bit different from the previous example. In addition to the ReportDocument variable, you’ll find the following:


The Table object represents a database table accessed by the report. You’ll use this later to iterate through the Tables collection, setting connection-string properties encapsulated by the TableLogOnInfo class. The other two objects provide a container for parameters passed to the tables in the report. These are conceptually similar to the SqlCommand.Parameters collection and the SqlParameter object with which you’re already familiar.

After the code for loading the report, you’ll see code to set the connection information:

.DatabaseName = "Northwind"

This is largely for demonstration purposes. Remember that in the Pull Model the report will already have this information. However, you can override those settings at run time. If you know the connection information will not change, this code is not required.

The next few lines of code pass the parameter value to the report:

rptCustomersOrders.DataDefinition.ParameterFields("@CustomerName"). _

First, the Value property of the ParameterDiscreteValue object is set to the text of the selected ComboBox item. This parameter is then added to the parameters collection. Finally, the parameters collection is applied to the empty parameter field that has already been set in the report using the designer.
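A sketch of the connection override and parameter passing just described; server name and credentials are placeholders:

 Dim tbl As Table
 Dim logOnInfo As TableLogOnInfo
 For Each tbl In rptCustomersOrders.Database.Tables
     logOnInfo = tbl.LogOnInfo
     With logOnInfo.ConnectionInfo
         .ServerName = "(local)"      ' assumed
         .DatabaseName = "Northwind"
         .UserID = "user"             ' credentials assumed
         .Password = "password"
     End With
     tbl.ApplyLogOnInfo(logOnInfo)
 Next

 Dim paramValues As New ParameterValues()
 Dim paramValue As New ParameterDiscreteValue()
 paramValue.Value = cboCustomers.Text
 paramValues.Add(paramValue)
 rptCustomersOrders.DataDefinition.ParameterFields("@CustomerName"). _
     ApplyCurrentValues(paramValues)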

The code for the remaining two examples of the Pull Model is left for you to examine. The code for each contains only slight variations from the example you have just seen. Although they might look quite a bit different when displayed, most of these differences result from the design of the report, not the code behind it.


Crystal Reports .NET is a powerful, greatly overhauled version of a classic product that has, for a long time, served well the needs of many corporations. In this topic, you’ve gained a better understanding of how to display reports in a Windows Forms application. If your work is more Web-oriented, you’ll be happy to know that the concepts and code involved with viewing Crystal Reports on the Web are almost identical. You’ve also learned more about the two types of data access models supported by Crystal Reports, and how to implement both.

Application #27 Compare Data Binding in Web and Windows Forms

This topic is a bit of a departure from the rest of this chapter in that the focus is on a comparison between two types of controls that are used in both Web and Windows Forms. The first type is a single-select control. In the System.Web.UI.WebControls namespace, one representation of this type is the DropDownList control. Its close relative in the Windows Forms family of controls is the ComboBox control. The second type is a multiselect control. In the Web Forms family, one incarnation of this is the CheckBoxList control. Its Windows Forms relative is the CheckedListBox control. For those of you who develop both Windows and Web applications, awareness of these differences can spare you some cross-platform development frustration.

Application #27, shown in Figure 3-20, is a Web Forms application.

click to expand
Figure 3-20: Web Forms application.

Building Upon…

Application #4: Try/Catch/Finally

Application #12: Use a DataSet and DataView

Application #14: Use a SqlDataReader

Application #15: Bind Data to a ComboBox

New Concepts

Although some code you haven’t seen before will be discussed in the “Code Walkthrough” section, there are no new concepts associated with using single-select and multiselect controls. The remainder of this topic will focus on the code differences when working across platforms.

Code Walkthrough

The code-behind for the Web Form shows two ways to data-bind a DropDownList. In BindDropDownListUsingDataSet, you’ll find the following code after the DataSet is acquired:

.DataTextField = "Name"
.DataValueField = "ID"

If Trim(txtNewOption.Text) <> "" Then
    .DataSource = Helper.UI.AddOption(dsProducts, .DataTextField, _
        .DataValueField, txtNewOption.Text, "0")

Astute readers should be able to pick out three distinct differences between this and similar code for data binding a ComboBox control. First, notice that instead of the DisplayMember and ValueMember properties, the DropDownList control uses DataTextField and DataValueField properties. This is merely a subtle syntax difference. More important, the DropDownList control permits you to data-bind directly to a DataSet object because it automatically defaults to the first DataTable object in the collection. Such is not the case with the ComboBox control, in which you must state which DataTable object in the collection is the data source. Finally, for Web controls, you must explicitly call the DataBind method or no data binding will occur—and no exception is thrown. This is one of the chief causes of initial frustration for Windows developers trying to leverage their skills in the Web Forms world.

The preceding code also contains a call to an AddOption method. What is this used for? Typically, you’ll need to add an option to a control that is not stored in the database. For example, a common scenario is a DropDownList control that defaults to an All Options menu item, with each individual option listed below it. This is not as easy to accomplish as you might think because after a control is data-bound you normally cannot alter its collection of items. However, if you alter the control’s data source after it is acquired but before the control is bound, you can achieve your goal. The AddOption method that resides in a separate Helper Class Library project does just that:



This function inserts an option into the top row of a DataTable object. It then returns a DataSet object, from which you will most likely want to create a DataView object to take advantage of its sorting and filtering capabilities. You can then use the DataView object as the data source. Note that if you want the additional option to appear at the top of the list and the remaining data to appear in alphabetical order, the data must be presorted. Creating and sorting a DataView object after the option is added might cause the additional option not to appear at the top. Thus, the data should be sorted at the database level in the SELECT statement.
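A helper along these lines might be sketched as follows. (This is an illustration of the technique, not the sample’s actual AddOption code, which lives in its Helper Class Library project.)

' Inserts an extra option at the top of the first DataTable before the
' control is data-bound, then hands the DataSet back to the caller.
Public Shared Function AddOption(ByVal ds As DataSet, _
        ByVal textField As String, ByVal valueField As String, _
        ByVal optionText As String, ByVal optionValue As String) As DataSet
    Dim dt As DataTable = ds.Tables(0)
    Dim row As DataRow = dt.NewRow()
    row(textField) = optionText
    row(valueField) = optionValue
    dt.Rows.InsertAt(row, 0)   ' top of the list, ahead of the sorted data
    Return ds
End Function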

The code in the method for data binding the DropDownList control to a SqlDataReader object is similar, as is the code for data binding the CheckBoxList control. These are not covered further, as no new comparison information is unveiled. Other differences, however, summarized in the “Conclusion” section, revolve around showing the selected item or items for these controls. Space limitations do not permit addressing these here. You are encouraged to experiment with the code and review the ample comments therein to verify what is summarized in the next section.


Conclusion

The results of the comparison between the single-select and multiselect control types are summarized in Table 3-2 and Table 3-3.

Table 3-2: Single-Select Control Comparison

ComboBox (in DropDownList mode, Windows Forms):

- DisplayMember and ValueMember are used to set the data bindings.
- There is no DataBind method; data binding occurs automatically. However, when the data source is a DataSet, you must explicitly set the DataTable, for example: clstProducts.DataSource =
- Access information about the selected item using the SelectedIndex, SelectedValue, and Text properties.

DropDownList (Web Forms):

- DataTextField and DataValueField are used to set the data bindings.
- You must explicitly call the DataBind method. You can also bind directly to a DataSet unless it contains more than one DataTable, in which case you would need to explicitly set the DataTable.
- Access information about the selected item using the SelectedIndex, SelectedItem.Value, and SelectedItem.Text properties.

Table 3-3: Multiselect Control Comparison

The first two comparisons in Table 3-2 (the data-binding properties and the DataBind behavior) also apply to these controls.

CheckedListBox (Windows Forms):

- Selected means highlighted, which is different from checked.
- Exposes both a SelectedItems and a CheckedItems collection. The former is not relevant here because it contains 0 or 1 items.
- Objects in the CheckedItems collection are of type DataRowView.

CheckBoxList (Web Forms):

- Selected means checked.
- Exposes only an Items collection. You must iterate through all items and check the value of the Selected property. (There is no Checked property.)
- Objects in the Items collection are of type ListItem.
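For example, gathering the user’s choices on each platform might look like the following sketch (the control names clstProducts and chklProducts are illustrative):

Dim names As New ArrayList()

' Windows Forms: the CheckedListBox exposes a CheckedItems collection.
' When the control is data-bound, each checked item is a DataRowView.
Dim drv As DataRowView
For Each drv In clstProducts.CheckedItems
    names.Add(drv("Name").ToString())
Next

' Web Forms: the CheckBoxList exposes only Items; test each item's
' Selected property (there is no Checked property).
Dim li As ListItem
For Each li In chklProducts.Items
    If li.Selected Then names.Add(li.Text)
Next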

Application #28 Work with the XML DOM

One of the most common ways to work with XML is via its Document Object Model, or DOM. The .NET Framework provides DOM-style support via classes in the System.Xml namespace. This sample application shows you numerous ways to work with the XML DOM. Through a series of examples, you’ll become familiar with the XmlDocument class and the classes derived from the abstract XmlNode class. Figure 3-21 shows Application #28 in action.

Figure 3-21: Application #28.

Building Upon…

Application #3: String Manipulation

Application #4: Try/Catch/Finally

New Concepts

An XmlDocument object represents an in-memory (cached) XML document that has been parsed as a tree of nodes to enable navigation and manipulation of its contents. According to the W3C specifications, each item in an XML document—whether it is an Element, Attribute, Text, etc.—is known as a node. Nodes are represented by the abstract XmlNode class. The XmlDocument class also extends XmlNode. You can get an idea of the various types of nodes that make up an XML document by perusing the members of the XmlNodeType enumeration.

For example, consider the following simple XML document. Figure 3-22 shows how the document would be loaded into the DOM.



Figure 3-22: This figure shows how the memory is structured when the preceding XML data is read into the DOM structure (source: .NET Framework SDK Documentation).

Thus, working with XML using the DOM begins with becoming familiar with the members of the XmlDocument and XmlNode classes.

Code Walkthrough

You’ll now see how to create an XML document programmatically, load an existing XML document, iterate through XML nodes using recursion, and find nodes using XPath expressions to modify their contents.

Creating an XML Document Programmatically

The XmlDocument class has methods that allow you to programmatically create an XML document. The CreateXmlFile method in the sample application contains code that demonstrates how to do this.

First, variables that represent the major parts of an XML document are declared and initialized. (For all the examples in the sample application, the most commonly used variables—such as xDoc—have already been declared at the class level.)



You can now begin to build the document. Methods associated with each variable type are aptly named, making the code that follows self-describing:

xPI = xDoc.CreateProcessingInstruction("xml", "version='1.0'")



The only thing that requires some clarification is the use of AppendChild. As its name implies, this causes the passed node to be added to the end of the list of nodes for the document or node calling the method. You might think that CreateElement would suffice, but it only creates the element in memory and doesn’t actually add it to the DOM. Thus AppendChild is conceptually similar to ADO.NET methods such as DataTable.AcceptChanges. Until AcceptChanges is invoked, new rows and other associated changes are not reflected in the DataTable object.

The remaining elements in this simple document are added as follows:

CType(xElRoot.AppendChild(xDoc.CreateElement("Family")), XmlElement)

Notice that AppendChild returns a reference to the node that was appended. This comes in handy when adding nodes in succession, as when the Family element is appended and then used in the next line for adding the Father element.

The final step is to save the cached XML in the DOM to a physical file. To do this, the XmlDocument class exposes an overloaded Save method. The overload used here takes the full physical path where the file will either be created or overwritten:


If you examine the file after saving it, you’ll notice it’s not very readable because of the lack of white space. By default, white space is stripped out when a document is saved. However, you can set the XmlDocument.PreserveWhitespace property to True if you want to keep the white space.
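Pulled together, the whole routine might look like the following sketch (the element names and file path are illustrative; assumes Imports System.Xml):

Dim xDoc As New XmlDocument()

Dim xPI As XmlProcessingInstruction = _
    xDoc.CreateProcessingInstruction("xml", "version='1.0'")
xDoc.AppendChild(xPI)

Dim xElRoot As XmlElement = xDoc.CreateElement("People")
xDoc.AppendChild(xElRoot)

' AppendChild returns the appended node, so it can be captured and reused.
Dim xElFamily As XmlElement = _
    CType(xElRoot.AppendChild(xDoc.CreateElement("Family")), XmlElement)
xElFamily.AppendChild(xDoc.CreateElement("Father"))

' Save creates or overwrites the file at the given path.
xDoc.Save("C:\Temp\Family.xml")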

Loading XML

The XmlDocument class exposes two methods for loading XML. Load has four overloads that allow you to load XML into the DOM from a Stream, a TextReader, or an XmlReader object, or from a physical file whose path you provide. In the sample application, the LoadXmlFile routine demonstrates the latter approach.

Alternatively, you can use LoadXml to load from a string. In the sample, LoadXMLFromString creates some XML content using a StreamWriter object and then loads it into the DOM using this line:


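Both approaches might be sketched as follows. (The file path and the XML content are illustrative, modeled on the sample’s family data.)

Dim xDoc As New XmlDocument()

' Load from a physical file:
xDoc.Load("C:\Temp\Family.xml")

' Or load directly from a string with LoadXml:
Dim xml As String = "<Family>" & _
    "<Person type='father' gender='male' age='42'>Jerry Smith</Person>" & _
    "</Family>"
xDoc.LoadXml(xml)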
Iterating Nodes Using Recursion

Sometimes there is a need to iterate through all nodes in a document. A good way to do this is with a method that uses recursion. In the sample’s IterateThroughAllNodes method, you see an example of this. After loading a document into the DOM, it invokes a private helper routine named TraverseTreeAndWriteInfo, which takes three arguments: a StreamWriter object, an XmlNode object, and an integer for tracking the node level so that proper indentation can be used to format the output:

Dim strValues() As String = {s, xNode.Name, xNode.NodeType.ToString()}



The method determines whether the current node has any children. If it does, it calls itself recursively until all nodes in the document have been traversed. At each level, the node’s name, type, and level are added to the array, which is then written to the StreamWriter object.
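A recursive routine of that shape might be sketched as follows (the formatting details differ from the sample’s actual implementation):

Private Sub TraverseTreeAndWriteInfo(ByVal sw As StreamWriter, _
        ByVal xNode As XmlNode, ByVal level As Integer)
    ' Indent two spaces per node level so the output reads as a tree.
    Dim s As String = New String(" "c, level * 2)
    sw.WriteLine("{0}{1} ({2})", s, xNode.Name, xNode.NodeType.ToString())

    ' Recurse into the children until every node has been visited.
    If xNode.HasChildNodes Then
        Dim xChild As XmlNode
        For Each xChild In xNode.ChildNodes
            TraverseTreeAndWriteInfo(sw, xChild, level + 1)
        Next
    End If
End Sub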

Finding and Manipulating Nodes

There are several ways you can find one or more nodes. The sample application uses XPath in conjunction with the SelectSingleNode and SelectNodes functions. These are exposed by XmlNode and return an XmlNode and XmlNodeList, respectively. In the sample’s ModifyElementValue method, you’ll find code that shows how to use an XPath expression to find an element:

xEl.InnerText = "Jerry Smith"

The value of this element is then changed by setting the InnerText property to a different string.
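For example, the lookup and modification might look like this sketch (the XPath expression is illustrative; xDoc is assumed to hold the sample’s family document):

Dim xEl As XmlElement = _
    CType(xDoc.SelectSingleNode("//Person[@type='father']"), XmlElement)
If Not xEl Is Nothing Then
    ' Replace the text between the element's tags.
    xEl.InnerText = "Jerry Smith"
End If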

In the ModifyAttributeValue method, SelectNodes is used to retrieve an XmlNodeList object containing a collection of Person elements. This list is then iterated through, and the attribute values are changed for each element using SetAttribute:

Case "father"
xEl.SetAttribute("type", "parent")
xEl.SetAttribute("gender", "male")
Case "mother"
xEl.SetAttribute("type", "parent")
xEl.SetAttribute("gender", "female")
Case "son"
xEl.SetAttribute("type", "child")
xEl.SetAttribute("gender", "male")
Case "daughter"
xEl.SetAttribute("type", "child")
xEl.SetAttribute("gender", "female")

Another XPath example is in DeleteNodesAndAddNodesWithAttributes. After finding a specific element that has a type attribute equal to parent, the age attribute is deleted and the value of the type attribute is changed to father:

xEl.SetAttribute("type", "father")
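That sequence might be sketched as follows (the XPath expression is illustrative):

Dim xEl As XmlElement = _
    CType(xDoc.SelectSingleNode("//Person[@type='parent']"), XmlElement)
If Not xEl Is Nothing Then
    xEl.RemoveAttribute("age")           ' delete the age attribute
    xEl.SetAttribute("type", "father")   ' change the type attribute's value
End If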

Last but not least, another good way to find a node is by its tag name, using the GetElementsByTagName method exposed by both XmlDocument and XmlElement. The following code in the sample’s DisplayElementsByTag routine illustrates the use of this method. After getting a node list, the code iterates through all of its nodes and child nodes, passing information to the StreamWriter object. Notice that to view the actual contents of the element (that is, what is between the element’s tags), the contents are treated as a child node of type Text:

.WriteLine(xNode.Name & ": " & xNodeChild.Value)

For some of you, this might be easier than using XPath. However, it’s more limiting because it returns only elements.
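A fuller sketch of the iteration (assuming xDoc holds the family document and sw is an open StreamWriter):

Dim xNode As XmlNode
Dim xNodeChild As XmlNode
For Each xNode In xDoc.GetElementsByTagName("Person")
    For Each xNodeChild In xNode.ChildNodes
        ' The element's contents appear as a child node of type Text.
        If xNodeChild.NodeType = XmlNodeType.Text Then
            sw.WriteLine(xNode.Name & ": " & xNodeChild.Value)
        End If
    Next
Next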


Conclusion

In the .NET Framework, Microsoft continues its extensive XML support with the DOM-style XmlDocument and XmlNode classes. Using these and derived classes, you can easily work with XML loaded into the DOM. After a brief overview of the DOM, you were taken on a quick survey of numerous examples in the sample application that demonstrate how to use the many members associated with these classes.

101 Microsoft Visual Basic .NET Applications
ISBN: 0735618917