Disconnected Web Applications

Transparent Data Sources

The Employees Manager sample application we'll be looking at is a relatively simple record viewer that supports pagination, sorting, and, more importantly, insertions, deletions, and updates. Figure 7-1 shows the main application interface. (You can find the EmpMan.aspx application on the companion CD.)

Figure 7-1

The main interface of the record viewer application.

When the application first loads, it performs two important tasks: it attempts to locate a particular XML file on the Web server that is known to contain the data needed to resume execution and, if the file is found, it reads the file's content into the Cache object and displays it in the grid. The grid represents the working copy of the data, which is affected by any insertion, deletion, or update the user performs. If the file is not available, the data is retrieved from the original data source and then cached.

Working with the Cache Object

Unlike the Session object, the Cache object does not work on a per-user basis. Any object you store is visible to all users across all currently running sessions. Why would you choose Cache over Session? My opinion is that if you can easily figure out how to store and retrieve user-specific information from Cache, you should seriously consider using it. The Cache object implements a least recently used (LRU) mechanism that automatically frees up rarely used or unimportant data when system memory becomes scarce. In other words, the Cache object gives you more control over the amount of information stored in memory. If you plan to store large blocks of data and are using Session, consider implementing a home-made scavenging mechanism to ensure that valuable server resources are preserved and the server stays responsive. You can accomplish the same results at no cost by using Cache, and I think Cache gets the job done better.
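The extra control that Cache gives you shows up in the Cache.Insert method, which accepts an expiration policy and a scavenging priority. The helper below is a minimal sketch; the CacheHelper class name, the key parameter, and the 20-minute sliding window are illustrative choices, not part of the sample application.

```csharp
// Sketch: caching a DataSet with a sliding expiration and a low priority,
// so the cache's LRU scavenger can evict it first when memory runs short.
// Session offers no equivalent knobs.
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public class CacheHelper
{
    public static void StoreWorkingData(Cache cache, string key, DataSet ds)
    {
        cache.Insert(
            key, ds,
            null,                            // no file or key dependency
            Cache.NoAbsoluteExpiration,      // no fixed deadline
            TimeSpan.FromMinutes(20),        // evict after 20 idle minutes
            CacheItemPriority.Low,           // first candidate for scavenging
            null);                           // no removal callback
    }
}
```

Because the priority is Low, this entry is among the first to go under memory pressure, which is exactly the behavior you would otherwise have to hand-code on top of Session.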

To recognize the current session, you can always rely on the session ID, which has an ASCII-compatible representation in ASP.NET. However, an application that supports offline activity might require multiple sessions for the same user before the data is actually committed. In this case, a session ID is unfit for recognizing users because it changes from access to access. So you recognize users through their user IDs, and you use this piece of information to store user-specific data in the cache and to create made-to-measure files.

In the sample application, the login page passes the name of the connected user to the page that follows it by using a URL parameter. The user name is then used to look up an XML file containing the DataSet object previously saved for that user. If no file is found, the data is loaded from the database. However the data is retrieved, once loaded it is copied into the ASP.NET cache using a user-specific slot. Again, the user name is the key to keeping user information distinct.
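The mapping from user ID to cache slot and dump file can be sketched as below. The UserSlots class, the "WorkingData_" prefix, and the file-naming scheme are hypothetical; the sample application may use different names, but the idea is the same: one key and one file per user.

```csharp
// Sketch (hypothetical naming scheme): deriving the per-user cache key
// and the per-user XML file name from the user ID passed on the URL.
public class UserSlots
{
    public static string CacheKeyFor(string userId)
    {
        return "WorkingData_" + userId;   // one cache slot per user
    }

    public static string DumpFileFor(string userId)
    {
        return userId + ".xml";           // made-to-measure dump file
    }
}
```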

Loading Data from a Generic Source

In our application, the function that loads data checks whether the dump file exists and then either reads the file's content or fills a data adapter. You can use the File.Exists method to discover whether a certain file exists. File.Exists is certainly more lightweight than setting up a try/catch block, as shown in the next snippet of code, but a try/catch block protects you from a greater number of erroneous and inconsistent situations. If the file exists but its content is corrupted, a simple check for existence does not shield you from a run-time error.

private DataSet LoadData(String strFile, String strConn,
    String strCommand)
{
    DataSet ds;
    try
    {
        XmlTextReader xtr = new XmlTextReader(Server.MapPath(strFile));
        ds = new DataSet();
        ds.ReadXml(xtr);
        xtr.Close();
    }
    catch
    {
        SqlConnection conn = new SqlConnection(strConn);
        SqlDataAdapter da = new SqlDataAdapter(strCommand, conn);
        ds = new DataSet();
        da.Fill(ds, "Employees");
    }
    return ds;
}

In spite of the radically different storage media we're using (a relational database versus an XML file), the structure of the resulting data is the same. The DataSet object can be serialized to XML efficiently and with excellent results. Basically, it can store, and hence serialize, all the information that pertains to a given set of tables, including primary keys, constraints, and relations.
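The round trip relies on writing the DataSet with its schema, so that keys, constraints, and relations survive serialization. The following is a minimal sketch; the DataSetDump class name and file paths are illustrative.

```csharp
// Sketch: persisting a DataSet to XML together with its inline schema,
// and reading it back. WriteSchema embeds the XSD that ReadXml later
// uses to rebuild primary keys, constraints, and relations.
using System.Data;

public class DataSetDump
{
    public static void Save(DataSet ds, string path)
    {
        ds.WriteXml(path, XmlWriteMode.WriteSchema);
    }

    public static DataSet Load(string path)
    {
        DataSet ds = new DataSet();
        ds.ReadXml(path, XmlReadMode.ReadSchema);
        return ds;
    }
}
```

Without WriteSchema, the XML file would carry only the data, and constraints such as primary keys would be lost on reload.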

The companion CD contains an application named XmlCreator.aspx that allows you to execute queries to fill DataTable objects and add them to a given DataSet object. The DataSet object is then serialized to an XML disk file that can be further used as the data set of a disconnected application.

tip

An application that reads all the data from a separate XML file can work without SQL Server 2000 installed. This lets you build effective demos that can be deployed and run on any client's machine without setting up the database first. The spooky presence of the XML file can be hidden in custom classes that encapsulate the overall data access.

Creating Subtables

When you design applications that must work with the database both on line and off line, you might have to code the creation of a subtable, that is, a table that contains only a subset of the original rows. This code is not particularly hard to write, but it has the potential to unveil some rather underhanded moves by ADO.NET objects.

All ADO.NET container objects, such as DataSet and DataTable, hold references to their child objects, not their whole contents. An object that belongs to a certain container (for example, a DataRow object that belongs to a DataTable) cannot be linked to another object, even when no structural or syntactical contraindications exist. For example, you can't add a DataRow object to a table whose schema differs from the row's, and you can't even add it to a table with an identical schema if the row already belongs to another table. The only way to work around this constraint is to duplicate the objects. In .NET, this linking issue is addressed in terms of the shallow and deep copy of objects. A shallow copy of an object limits duplication to the top-level interface of the class and adds references to child objects. In .NET, classes normally allow shallow copying through a method called Clone. If a Clone method is not available for a given class, shallow copying is not publicly allowed. In their own implementation code, though, classes can always create shallow copies of themselves by using the MemberwiseClone method.

The DataTable class allows external callers to create shallow copies through the standard method Clone.

caution

A variable-to-variable assignment does not accomplish shallow or deep copying either. Instead, it simply creates another reference to the same memory location and hence to the same object.

DataTable dt1 = dt;

Once this statement executes, both dt1 and dt point to the same object.
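A tiny demonstration makes the aliasing visible: a change made through either variable is seen through the other, because there is only one DataTable. The AliasDemo class name and the sample data are illustrative.

```csharp
// Demonstration: after a variable-to-variable assignment, both variables
// reference the same DataTable, so a row added through the alias is
// visible through the original reference as well.
using System.Data;

public class AliasDemo
{
    public static bool SameObject()
    {
        DataTable dt = new DataTable("Employees");
        dt.Columns.Add("Name", typeof(string));

        DataTable dt1 = dt;        // no copy: just a second reference
        dt1.Rows.Add("Dino");      // modify through the alias

        return object.ReferenceEquals(dt, dt1) && dt.Rows.Count == 1;
    }
}
```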

Cloning the Table Schema

The Clone method of the DataTable class lets you create a new empty DataTable object with the same structure (columns, keys, constraints, mappings, and relations) as the parent:

DataTable dt1 = dt.Clone();

A full copy of the table can be created by using the Copy method, which combines the cloning and the duplication of the rows.
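The difference between the two methods can be sketched as follows; the CloneVsCopy class name and the sample data are illustrative.

```csharp
// Sketch contrasting Clone and Copy: Clone duplicates only the schema,
// while Copy duplicates the schema plus all the rows.
using System.Data;

public class CloneVsCopy
{
    public static bool Demo()
    {
        DataTable dt = new DataTable("Employees");
        dt.Columns.Add("Name", typeof(string));
        dt.Rows.Add("Dino");

        DataTable schemaOnly = dt.Clone();  // same columns, zero rows
        DataTable fullCopy = dt.Copy();     // same columns, same rows

        return schemaOnly.Rows.Count == 0 && fullCopy.Rows.Count == 1;
    }
}
```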

Importing Rows

You can add rows to a DataTable object in two ways: by using the Rows property and the DataRowCollection collection class, or by using the ImportRow method. The two strategies work quite differently. When you use the Rows collection, you can add only unreferenced DataRow objects (that is, newly created DataRow objects or DataRow objects that are detached from any table). The second strategy, which uses the ImportRow method, first creates a deep copy of the DataRow object and then adds it to the Rows collection. A deep copy contains all child objects; that is, the copied table has its own unique DataRow and DataColumn objects. The following code snippet demonstrates how to select some rows from a table and use them to populate a new table:

DataRow[] rgRows = dt.Select(strCriteria);
DataTable dtCopy = dt.Clone();
foreach(DataRow row in rgRows)
    dtCopy.ImportRow(row);

To manually and quickly duplicate a DataRow object, you use the ItemArray property of the DataRow class. ItemArray returns all the values in the row as an array. Interestingly, if you assign ItemArray an array of values, the values are automatically copied into the row's fields in the order they appeared in the original table. Armed with this functionality, you can easily set up a deep copy of a DataRow object as follows:

DataTable dt = drOrig.Table;
DataRow drCopy = dt.NewRow();
drCopy.ItemArray = drOrig.ItemArray;

You need to create a new DataRow object with the same schema as the original. The DataRow class, though, does not have a public constructor, so a row can be created only by the NewRow method called on a DataTable object that has the needed schema. The row's Table property returns the DataTable object the row belongs to, and with it the corresponding schema.
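The three steps above can be wrapped in a small helper built on the ItemArray technique. The RowCopier class name is illustrative, and the helper assumes the target table shares the source row's schema.

```csharp
// Sketch of a helper based on the ItemArray technique: create a row with
// the target table's schema, copy the field values, and attach the copy.
using System.Data;

public class RowCopier
{
    public static DataRow DeepCopyInto(DataRow source, DataTable target)
    {
        DataRow copy = target.NewRow();      // new row, target schema
        copy.ItemArray = source.ItemArray;   // field-by-field value copy
        target.Rows.Add(copy);               // attach to the target table
        return copy;
    }
}
```

Unlike ImportRow, this approach copies only the current field values; row state and errors are not carried over.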



Building Web Solutions with ASP.NET and ADO.NET
ISBN: 0735615780
Year: 2002
Authors: Dino Esposito