Application Design Issues


In this section, I'd like to talk about some important general design issues relating to multitier application development. I'll start by describing techniques for passing data using DCOM, MSMQ, and HTTP and then cover techniques for passing disconnected recordsets and caching data in the middle tier.

When it comes to passing data across the network, it should be obvious that round trips are your enemy. No matter what protocol you use, calls across the network are expensive and should be made only when necessary. It's almost always better to send lots of data in a single round trip than to use multiple round trips to accomplish the same thing.

It's also important that you appreciate the different data-passing mechanisms of DCOM, MSMQ, and HTTP. The parameters of COM method calls are often strongly typed. As you saw in Chapter 3, strongly typed parameters allow the universal marshaler to optimize the way it moves data across the network. When you pass data using MSMQ or HTTP, your data is much more loosely typed. The underlying MSMQ transport sees the body of an MSMQ message as a single, typeless payload. Likewise, the underlying HTTP transport sees the body of an HTTP request as a single, typeless payload. This makes passing data with MSMQ and HTTP fundamentally different from passing data with DCOM.

If you're accustomed to COM and strongly typed interfaces, communicating across application boundaries using MSMQ and HTTP will require a new perspective. You can't communicate the semantics of your data using method signatures; you must use other techniques. This makes such things as documentation and self-describing data structures all the more important. When you pass data with a loosely typed transport, ADO recordsets and XML schemas can be valuable because they have metadata that describes the payload that's being moved between applications. Keep in mind that a transport that works with loosely typed data doesn't require you to install and configure application-specific type libraries.

Passing Data Using COM

One of the best things about COM is that it hides the details of interprocess communication. The code you write for an in-process object automatically works for an out-of-process object. You don't have to change anything in your Visual Basic class module. The client-side code that you write doesn't care whether the object is close by or far away.

As you know, COM is much easier to use than HTTP or MSMQ when it comes to passing data across process and computer boundaries. As you saw in Chapter 10, QC tries to make MSMQ as simple as COM, but MSMQ still has certain limitations and inflexibilities that make COM method calls more versatile.

Throughout this book, I've argued that DCOM is unusable in many situations. But there are still times when you can (and should) rely on COM's proxy/stub architecture—for example, when you need to make a synchronous call between two processes on the same machine. COM is not only easier to use than MSMQ and HTTP, in quite a few cases it also yields better performance.

You might encounter a situation in which you want server computers to communicate with one another using DCOM in a high-speed LAN environment. For example, in an application that uses request-based load balancing, you can use a COM+ server application running on a dedicated server computer to share session state across a farm of Web server computers.

As you can see, sometimes it's beneficial to pass data using COM. When you design a component that will be accessed from another process, you should be sure that it exposes methods that are designed to move data as efficiently as possible. Let's look at some practical considerations for designing methods that will be executed across a proxy/stub layer.

COM makes it trivial to move data based on primitive VBA types such as Integer, Double, and String. You simply create method signatures using these types to define your parameters and return values. It's also relatively easy to pass more complex data types such as arrays, UDTs, and Variants. You can also get tricky by passing arrays of UDTs or arrays nested within other arrays. The universal marshaler is your friend. It's happy to handle the details of packaging the data and shipping it to the other side.

When you're designing the interfaces for your component, you should be cautious about passing object references. Neither Visual Basic nor the universal marshaler supports moving an object across COM's proxy/stub architecture by value. This means that you can pass a Visual Basic object only by reference, not by value.

When you declare an object parameter using ByVal, the reference is marshaled from the client process to the process of the method being called. When you declare an object parameter using ByRef, the reference is marshaled in both directions. However, the COM object (and all the precious data in it) never leaves the process in which it was created.

When you pass a Visual Basic object reference across process or computer boundaries, the object doesn't actually move. The only thing that passes to the other side is a COM-savvy pointer that lets the recipient establish a connection back to the object being referenced. When the recipient wants to access a property or a method through the object reference, it must travel back across another proxy/stub pair to where the object lives. This type of marshaling is called standard marshaling. You should see that passing object references with standard marshaling is a bad thing when it results in excessive network round trips.

COM provides pass-by-value semantics for a special category of objects. In order to pass an object by value, the object must implement a standard COM interface named IMarshal. Such an object is said to provide custom marshaling. The benefit of using this type of object is that you can pass its data as payload across process and computer boundaries in simple COM method calls.

Objects that provide custom marshaling have a couple of notable requirements. They must be written in C++, not Visual Basic, and they must be installed and run as nonconfigured components.

Passing ADO recordsets using COM

To most Visual Basic programmers, the ADO Recordset object is the most common example of a component that implements custom marshaling. You can pass ADO recordsets across process and computer boundaries in COM method calls using either parameters or method return values. When you pass an ADO recordset by value, you must use a client-side static cursor. You must also disconnect the recordset by setting its ActiveConnection property to Nothing.

Let's look at an example. The GetCustomerTable method of the CCustomerManager component uses its return value to pass a disconnected recordset back to the caller:

    Function GetCustomerTable() As Recordset
        Dim conn As Connection, rs As Recordset
        Set conn = New Connection
        conn.Open sConnect
        Set rs = New Recordset
        rs.CursorLocation = adUseClient
        rs.CursorType = adOpenStatic
        rs.LockType = adLockReadOnly
        Set rs.ActiveConnection = conn
        rs.Open "SELECT Customer, CreditLimit, AccountBalance" & _
                " FROM Customers"
        Set rs.ActiveConnection = Nothing
        conn.Close
        Set conn = Nothing
        Set GetCustomerTable = rs
    End Function

After you expose this method in the CCustomerManager component, writing the client-side code to marshal the recordset from one process to another is simple:

    Dim CustomerManager As CCustomerManager
    Set CustomerManager = New CCustomerManager
    Dim rs As Recordset
    Set rs = CustomerManager.GetCustomerTable()
    ' Recordset object is now available in client process.

Passing an ADO recordset in this manner is an effective way to move small, medium, or large sets of data across process and computer boundaries. Once a recordset has been marshaled into the recipient's process, the data is close at hand and can be accessed quickly. Moreover, an ADO recordset carries additional metadata that describes its contents (such as the names and data types of its columns). I'll revisit this topic and describe a few more high-level design issues about passing ADO recordsets around the network later in the chapter.

Passing Data Using MSMQ and QC

As you saw in Chapter 10, MSMQ and QC let you reap the benefits of asynchronous and/or disconnected communication. They offer a nice way to send requests and notifications between applications. Note that, like DCOM, MSMQ has trouble going through firewalls and can be tricky to configure for Internet clients. However, it can still be used in some situations for client-to-server communication. Moreover, both MSMQ and QC are very valuable for server-to-server communication.

The application that sends the message packs the message body. It's often desirable to transmit a set of request-specific parameters, so during the design phase you should consider how to pack data into the body of an MSMQ message. On the receiving side, you must be able to unpack these parameters before you process the request. In Chapter 10, I talked about sending and receiving messages, but my examples passed only a simple VBA string in the message body. Now let's look at some techniques for passing messages that hold more complex data structures.

The body of an MSMQ message is a Variant that's stored and transmitted as a Byte array. You can read and write the usual VBA data types (such as Boolean, Byte, Integer, Long, Single, Double, Currency, Date, and String) by using the message body. MSMQ tracks the type you use in the message header. This makes it quite easy to store a single value in a message body. But it doesn't solve the problem of packing several pieces of data at once. To pack several pieces of data into a message, you must understand how to use the Byte array behind the message body.

Using an array behind the message body can be tricky because you can use only an array of Bytes. If you assign another type of array to the message body, MSMQ automatically converts it to a Byte array. Unfortunately, once your data has been converted to a Byte array, there's no easy way to convert it back to the original array type on the receiving side. This means that a simple technique such as sending your parameters in a String array won't work.
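The difference is easy to see in code. Here's a minimal sketch (assuming an open MSMQQueue object named q, which isn't shown) that contrasts a single value, which round-trips cleanly, with a String array, which doesn't:

```vb
' Assumes an open MSMQQueue object named q.
Dim msg As MSMQMessage
Set msg = New MSMQMessage

' A single VBA value round-trips cleanly; MSMQ records its
' type in the message header.
msg.Body = CCur(149.95)
msg.Send q

' A String array, by contrast, is silently converted to a Byte
' array; the receiver gets raw bytes with no easy way to
' recover the original String array.
Dim msg2 As MSMQMessage
Set msg2 = New MSMQMessage
Dim Params(1) As String
Params(0) = "Bob"
Params(1) = "MohairSuit"
msg2.Body = Params   ' Converted to a Byte array behind the scenes.
msg2.Send q
```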

QC also has an unfortunate limitation when it comes to passing arrays. Visual Basic doesn't allow you to define array parameters using ByVal; you must use ByRef. However, using ByRef in a method signature prevents the entire interface from being queueable. This means that the methods of a queued component can't use parameters or return values to move arrays.

Despite QC's limitations with arrays, a queued component generally makes it much easier to pass data than direct MSMQ programming does. You simply define ByVal parameters in your method signatures, and the QC runtime packs all the data into an MSMQ message for you. Also remember that a client application can call the same method multiple times on a recorder object. This can produce the same results as passing an array across the network.
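For instance, assuming a hypothetical queued component COrderEntry that exposes an AddItem method with ByVal parameters, a client can record several calls in place of a single array-passing call:

```vb
' COrderEntry and AddItem are hypothetical names for this sketch.
' The QC recorder batches all recorded calls into one MSMQ message.
Dim Order As COrderEntry
Set Order = GetObject("queue:/new:WEBMARKET.COrderEntry")
Order.AddItem "MohairSuit", 4    ' ByVal parameters only
Order.AddItem "StrawHat", 2
Order.AddItem "BowTie", 1
Set Order = Nothing              ' Message is sent when the recorder is released.
```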

The technique of pairing names and values in strings is popular with MSMQ programmers. This approach allows you to pack parameterized information into the message body using a single string value. But in this scenario the sending party is responsible for constructing the string and the receiving party is responsible for extracting the name/value pairs by parsing the string on the other side.

A string that contains name/value pairs is an example of loosely typed data. Both the sender and the receiver have to agree on the format of the string ahead of time. If you decide to go down this road, the use of XML schemas and an XML parser on each side can make the loosely typed nature of the MSMQ body far more manageable.
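Here's a sketch of one possible convention: semicolon-delimited Name=Value pairs. The delimiter characters are an assumption that sender and receiver must agree on ahead of time; nothing in MSMQ enforces them.

```vb
' Sending side: pack the parameters into one string.
Dim sBody As String
sBody = "Customer=Bob;Product=MohairSuit;Quantity=4"

' Receiving side: parse the name/value pairs back out.
Dim Pairs() As String, Parts() As String, i As Integer
Pairs = Split(sBody, ";")
For i = LBound(Pairs) To UBound(Pairs)
    Parts = Split(Pairs(i), "=")
    Debug.Print Parts(0) & " -> " & Parts(1)
Next i
```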

Packing a message body using a PropertyBag object

PropertyBag objects are useful because they can automate most of the tedious work of packing your parameterized information into a message body and unpacking it later. A PropertyBag object allows you to work in terms of name/value pairs. You read named properties from and write named properties to the PropertyBag object, and the object takes care of serializing your data in an internal Byte array.

Each PropertyBag object has a Contents property, which represents its internal Byte array. You can write named values into this Byte array using the WriteProperty method. Once you write all your parameters into a PropertyBag object, you can use the Contents property to serialize the Byte array into the message body, as shown here:

    Dim msg As MSMQMessage
    Set msg = New MSMQMessage
    Dim PropBag As PropertyBag
    Set PropBag = New PropertyBag
    PropBag.WriteProperty "Customer", "Bob"
    PropBag.WriteProperty "Product", "MohairSuit"
    PropBag.WriteProperty "Quantity", 4
    msg.Body = PropBag.Contents
    msg.Send q

The PropertyBag object writes your named values into a stream of bytes using its own proprietary algorithm. Once you pack up a Byte array in the sender application, you need a second PropertyBag object on the receiving side to unpack it. You can unpack the message by loading the Byte array into a new PropertyBag object and calling the ReadProperty method:

    Set msg = q.Receive()
    Dim PropBag As PropertyBag
    Set PropBag = New PropertyBag
    PropBag.Contents = msg.Body
    Dim Customer As String, Product As String, Quantity As Long
    Customer = PropBag.ReadProperty("Customer")
    Product = PropBag.ReadProperty("Product")
    Quantity = PropBag.ReadProperty("Quantity")

As you can see, the PropertyBag object makes your life much easier because it packs and unpacks your parameters for you. You can use this technique in other places as well. For example, you can define a Byte array parameter for a standard COM method and pass name/value pairs between a client and a COM object. You can also use PropertyBag objects to store persistent name/value pairs in ASP Application variables and ASP Session variables.

Using persistable objects

Another technique for passing parameterized information using MSMQ or QC uses persistable objects. These are objects whose data can be stored and transmitted in the body of an MSMQ message. However, MSMQ can store objects in a message body only if they implement one of two standard COM interfaces: IPersistStream or IPersistStorage.

The interface definitions for IPersistStream and IPersistStorage contain parameters that are incompatible with Visual Basic. You can't implement these interfaces in a straightforward manner using the Implements keyword. Fortunately, Visual Basic 6 has added support for persistable classes. When you create a persistable class, Visual Basic automatically implements IPersistStream behind the scenes. When you work with persistable classes, you can read objects from and write objects directly to the message body. You can also create queued components that pass persistable objects as parameters. In the case of MSMQ or QC, the data associated with a persistable object is written into the body of an MSMQ message and a clone object is created on the other side when the receiving application retrieves the message.

Every public class in an ActiveX DLL or ActiveX EXE project has a Persistable property. You must set this property to Persistable at design time to make a persistent class. When you make a class persistent, the Visual Basic IDE lets you add a ReadProperties and a WriteProperties method to the class module. You can add the skeletons for these two methods using the wizard bar (which consists of two combo boxes at the top of the class module window in the Visual Basic IDE). You can also add the InitProperties method, although it isn't required when you use MSMQ or QC.

You can use the ReadProperties and WriteProperties methods to read properties from and write properties to an internal PropertyBag object. Visual Basic creates this object for you behind the scenes and uses it to implement IPersistStream. Remember that your object must implement IPersistStream in order for MSMQ to write the object's data to a message body. When MSMQ calls the methods in the IPersistStream interface, Visual Basic simply forwards these calls to your implementations of ReadProperties and WriteProperties.

Using persistable classes with MSMQ is a lot easier than it sounds. For example, you can create a new persistable class and add the properties you want to pack into the message body. Next, you provide implementations of ReadProperties and WriteProperties. Here's an example of a persistable Visual Basic class module that models a sales order request:

    ' COrderRequest: a persistable class
    Public Customer As String
    Public Product As String
    Public Quantity As Long

    Private Sub Class_ReadProperties(PropBag As PropertyBag)
        Customer = PropBag.ReadProperty("Customer", "")
        Product = PropBag.ReadProperty("Product", "")
        Quantity = PropBag.ReadProperty("Quantity", 0)
    End Sub

    Private Sub Class_WriteProperties(PropBag As PropertyBag)
        PropBag.WriteProperty "Customer", Customer
        PropBag.WriteProperty "Product", Product
        PropBag.WriteProperty "Quantity", Quantity
    End Sub

As you can see, there isn't much to creating a persistable class. Once you have a persistable class like the one shown above, you can use it to pack a message body like this:

    Dim msg As MSMQMessage
    Set msg = New MSMQMessage
    ' Create and prepare order request object.
    Dim Order As COrderRequest
    Set Order = New COrderRequest
    Order.Customer = txtCustomer.Text
    Order.Product = txtProduct.Text
    Order.Quantity = txtQuantity.Text
    ' Assign the object to the message body.
    msg.Body = Order    ' WriteProperties is called.
    msg.Send q

When you assign an object to the message body, MSMQ performs a QueryInterface on the object to see whether it supports IPersistStream or IPersistStorage. Since your object supports IPersistStream, MSMQ knows that it can call a method on this interface called Save. When MSMQ calls Save, Visual Basic forwards the call to your implementation of WriteProperties. This gives you an opportunity to write your named property values into the PropertyBag object, and they're automatically copied into the message body as an array of Bytes.

In the receiver application, you can easily rehydrate a persistent object from a message body by creating a new reference and assigning the message body to it:

    Set msg = q.Receive(ReceiveTimeOut:=0)
    Dim Order As COrderRequest
    Set Order = msg.Body
    Dim Customer As String, Product As String, Quantity As Long
    Customer = Order.Customer
    Product = Order.Product
    Quantity = Order.Quantity

When you assign a message body to a reference using the Set keyword, MSMQ creates a new instance of the object and calls the Load method of IPersistStream. Visual Basic forwards this call to your implementation of ReadProperties. Once again, you use the PropertyBag object passed as a parameter to extract your data.

QC makes passing persistable objects even easier because you aren't required to explicitly receive the message. If you pass a persistable object to a method of a queued component, the QC runtime automatically creates the cloned object before your method begins to execute.

One thing to keep in mind when you use this technique is that you must install and configure the DLL that holds the persistable class on both the sender's computer and the receiver's computer. This is required because the MSMQ runtime must recreate a local copy of the object on both computers in order for this scheme to work. Note that this configuration issue can make the use of persistable classes less desirable than some of the other techniques presented in this chapter.

Passing ADO recordsets using MSMQ

It turns out that you can also pass an ADO recordset in the body of an MSMQ message. Like a persistable class, an ADO recordset implements IPersistStream, so you can simply assign a recordset to a message body and MSMQ and ADO will work together to pack all the data associated with the recordset into the message body. You can also define a method in a queued component using a recordset parameter as long as you mark the parameter with the ByVal keyword.

As in the case of passing recordsets in a COM method call, this technique works only if you're using an ADO recordset with a client-side static cursor. This technique is powerful and also extremely easy to use. Look at the following code:

    Dim conn As Connection
    Set conn = New Connection
    conn.Open sConnect
    Dim rs As Recordset
    Set rs = New Recordset
    rs.CursorLocation = adUseClient
    rs.CursorType = adOpenStatic
    rs.LockType = adLockReadOnly
    Set rs.ActiveConnection = conn
    rs.Open "SELECT Customer, CreditLimit, AccountBalance" & _
            " FROM Customers"
    Set rs.ActiveConnection = Nothing
    ' Pack recordset into a new message.
    Dim msg As MSMQMessage
    Set msg = New MSMQMessage
    msg.Body = rs
    ' Send message to queue.
    msg.Send ResponseQueue
    ResponseQueue.Close
    rs.Close
    Set rs = Nothing
    conn.Close
    Set conn = Nothing

On the receiving side, you can harvest the recordset from the message body just as easily. Here's an example of a client application that rehydrates a recordset and binds it to a data-aware grid control:

    Set q = qi.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE)
    Set msg = q.Receive()
    Dim rs As Recordset
    Set rs = msg.Body
    ' Bind recordset to grid.
    Set grdDisplay.DataSource = rs

Think of the design possibilities that open up when you use this approach. For example, say you're designing an application for a remote sales force. When users are connected to the network, you can download the most recent customer list to local queues on their laptop computers. Later, when users are disconnected, they can continue to add new sales orders and update customer information. Behind the scenes, you can track all of these inserts and updates in local ADO recordsets. When a user reconnects to the network, these recordsets can be transparently forwarded to a listener application that adds the new orders and posts customer updates to the database.

Passing Data Using HTTP

Using Visual Basic and Microsoft's XML parser, you can easily write code to transmit an XML document in an HTTP request. You saw a few examples of this earlier in the chapter. After all, an XML document is just a string. The challenging part is deciding what to put in it. Designing the XML schemas on which an XML document is based is a critical undertaking.

Over the next few years, XML will become more popular as a means of passing data across application, organization, and vendor boundaries, and it will create more opportunities for companies to exchange information with one another electronically. Placing an electronic order with a supplier is an obvious example, but ways to apply XML can go far beyond that. XML allows you to integrate COM+ applications with heterogeneous applications built using CORBA and Enterprise JavaBeans.

XML is too important for you to ignore. If you haven't already learned about this technology, you should definitely get started and get comfortable using Microsoft's XML parser. The more you know about XML, the better. Also keep in mind that while it's highly fashionable to transmit XML documents using HTTP, you can just as easily pass them around using COM and MSMQ.

Passing ADO/XML recordsets using HTTP

You should look for opportunities to take advantage of the powerful integration of Microsoft's XML parser and ADO 2.5, which allows you to easily pass ADO recordsets from the Web server back to a client application in an HTTP response. As you'll see, you can also post an updated recordset from a client application back to the Web server in an HTTP request.

Let's look at an example. Here's an implementation of the GetCustomerTable method of the CCustomerManager component:

    Function GetCustomerTable() As Recordset
        Dim conn As Connection, rs As Recordset
        Set conn = New Connection
        conn.Open sConnect
        Set rs = New Recordset
        rs.CursorLocation = adUseClient
        rs.CursorType = adOpenStatic
        rs.LockType = adLockBatchOptimistic
        Set rs.ActiveConnection = conn
        rs.Open "SELECT Customer, CreditLimit, AccountBalance" & _
                " FROM Customers"
        Set rs.ActiveConnection = Nothing
        conn.Close
        Set conn = Nothing
        Set GetCustomerTable = rs
    End Function

This implementation of the GetCustomerTable method is similar to one shown earlier. The only difference is that this implementation assigns the recordset's LockType property a value of adLockBatchOptimistic, which allows you to update the disconnected recordset in the client application.

Now let's look at an ASP page that allows a Visual Basic client application to call the GetCustomerTable method and retrieve the recordset using HTTP:

    <%@ Language=VBScript %>
    <%
        Response.ContentType = "text/xml"
        set obj = Server.CreateObject("WEBMARKET.CCustomerManager")
        set rs = obj.GetCustomerTable()
        rs.Save Response, 1
    %>

The ASP page calls the CCustomerManager object to retrieve the recordset object. It then serializes the recordset into the body of the HTTP response by calling the Save method and passing the ASP Response object as the first parameter.

Let me provide a little background on what's going on behind the scenes. As its first parameter, the Save method of an ADO recordset can accept any object that implements the IStream interface. Things work nicely with IIS 5 because the ASP Response object implements IStream. In other words, the ASP Response object gives the ADO recordset object a place in which to write a serialized stream of data. When the ASP page calls Save, the ADO runtime serializes the recordset's data and writes it into the HTTP response.

Note that the ASP page passes a value of 1 as the second parameter to the Save method. This value tells the ADO runtime to serialize the recordset using ADO's XML format. In a Visual Basic application that references the ADO type library, it's better to pass the constant adPersistXML instead of the hard-coded value 1.

Now look at the code in the client application that retrieves the recordset from the Web server:

    Const sURL = "http://localhost/MyApplication/"
    Dim rs As Recordset
    Set rs = New Recordset
    rs.Open sURL & "GetCustomerTable.asp"
    ' Now access the recordset programmatically
    ' or bind the recordset to a data-aware grid.

It's amazing how little code is required when you're using ADO 2.5. The Open method of an ADO recordset object accepts the URL of any ASP page that can send back a recordset serialized as an XML document. You don't even have to write any code that uses the XML parser.

After the client retrieves the recordset, the sky's the limit. You can bind the recordset to a grid, and you can perform sort, filter, and find operations. If the recordset was created with a LockType setting of adLockBatchOptimistic, you can update the recordset in the client application and send it back to the Web server to post the user's changes to the database.
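Here's a sketch of those client-side cursor operations on the recordset retrieved above; none of them causes a round trip to the server:

```vb
' Sort, filter, and find are handled locally by ADO's
' client-side cursor engine.
rs.Sort = "CreditLimit DESC"
rs.Filter = "AccountBalance > 1000"
rs.Find "Customer = 'Bob'"
If Not rs.EOF Then
    Debug.Print rs!Customer & ": " & rs!AccountBalance
End If
rs.Filter = adFilterNone    ' Remove the filter when done.
```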

Let's look at an example that sends an updated recordset back to the Web server. The code is slightly more complicated than the last example because you have to use a DOMDocument object.

    Const sURL = "http://localhost/MyApplication/"
    Dim rs As Recordset
    ' Use the recordset object from a grid control.
    Set rs = grdCustomers.DataSource
    Dim RequestDoc As MSXML.DOMDocument
    Set RequestDoc = New MSXML.DOMDocument
    rs.Save RequestDoc, adPersistXML
    Dim httpRequest As MSXML.XMLHTTPRequest
    Set httpRequest = New MSXML.XMLHTTPRequest
    httpRequest.Open "POST", sURL & "PostCustomerTable.asp", False
    httpRequest.send RequestDoc
    ' Check for success/failure.
    If httpRequest.Status <> 200 Then
        ' Raise error when return status isn't 200.
        Err.Raise MyErrorCode, , "HTTP transport error: " & _
            httpRequest.Status & " - Description: " & httpRequest.statusText
    End If

The tricky part of sending the recordset back to the Web server is serializing it into an XML document. The technique used in this example relies on a call to the recordset's Save method. The client application passes a DOMDocument object as the first parameter. The DOMDocument object, like the ASP Response object, implements the IStream interface. This means that the ADO runtime can write serialized XML data to the DOMDocument object. The DOMDocument object can then be passed as a parameter to the send method of the XMLHTTPRequest object. That's all that's required to send an XML document holding the data for the updated recordset back to the Web server.

Now let's look at the code in the ASP page that handles this incoming request on the Web server:

    <%@ Language=VBScript %>
    <%
        dim rs
        set rs = Server.CreateObject("ADODB.Recordset")
        rs.Open Request
        set obj = Server.CreateObject("WEBMARKET.CCustomerManager")
        obj.PostCustomerTable rs
    %>

The ASP page creates a new recordset object and loads it with the XML data from the ASP Request object. This works because the ASP Request object in IIS 5 also implements IStream. The ASP page then creates an instance of CCustomerManager and calls the PostCustomerTable method, passing the recordset. The final link in the chain is the implementation of the PostCustomerTable method of the CCustomerManager component:

    Sub PostCustomerTable(ByVal rs As Recordset)
        Dim conn As Connection
        Set conn = New Connection
        conn.Open sConnect
        Set rs.ActiveConnection = conn
        rs.UpdateBatch
        Set rs.ActiveConnection = Nothing
        conn.Close
        Set conn = Nothing
    End Sub

The implementation of PostCustomerTable establishes a connection to the database and then associates the incoming recordset with this connection. After the recordset has been associated with an active connection, this method implementation executes the UpdateBatch method. The ADO runtime then works with your OLE-DB provider to create and submit the required INSERT, UPDATE, and DELETE statements. As you can see, updatable recordsets can save you lots of programming effort because all the required SQL statements are generated and submitted behind the scenes.

Read-Only Recordsets Versus Updatable Recordsets

The practice of passing ADO recordsets around the network has always been somewhat controversial. Some purists feel that passing a disconnected recordset to a client application violates the spirit of the three-tier model. They believe that data access code and database schema information should be hidden away in the middle tier. However, others argue that the practical benefits outweigh the theoretical disadvantages.

Passing ADO recordsets to a client application is beneficial for several reasons. Once you download a recordset to a user's desktop computer, you can leverage ADO's client-side cursor engine to perform sort, filter, and find operations to your heart's content. You can bind recordset objects to data-aware controls. I know you're never supposed to admit to other Visual Basic programmers that you use bound controls, but you'll have to agree that they're pretty handy at times.

Passing recordsets to the client yields a few key benefits in terms of scalability. First, the client doesn't have to submit a request to the Web server (which must call to the database server) to perform sort, filter, or find operations. The ADO client-side cursor engine performs these operations locally, making the client application very responsive. Second, client applications submit few requests to server computers, which means that network traffic is reduced. Third, you can offload work to desktop computers, thus saving valuable processing cycles on the database server computer.

It's important to understand the distinction between read-only recordsets and updatable recordsets. Working with read-only recordsets is usually far less complicated. The use of updatable recordsets requires attention to concurrency and optimistic locking. It also requires extra contingency code to deal with runtime errors caused by update conflicts.

When you call UpdateBatch to post an updated recordset back to the database, you'd better be prepared to act when things don't go as expected. Many programmers avoid updatable recordsets because they believe that such recordsets create more problems than they solve. Other programmers find that updatable recordsets are useful, but only in certain situations. You need to determine when they're useful and when they're not.

If you use updatable recordsets, you should understand how ADO uses optimistic locking to deal with update conflicts. When a client application modifies rows in a disconnected recordset, the changes are cached in memory. However, ADO maintains a copy of the original row values as well as the new row values. Both values are transmitted when the recordset is sent back to the Web server. When the middle-tier component finally calls the UpdateBatch method to write the changes back to the database, ADO does some extra work to detect whether any updates have been made by other users since the recordset was copied from the database.

ADO detects update conflicts by adding WHERE clauses to the UPDATE and DELETE statements it generates. These WHERE clauses compare the row values currently in the database against the original values cached in the recordset, which allows ADO to determine whether it's updating each row in its original form. ADO inspects the rows-affected count on a statement-by-statement basis to ensure that each update has been made successfully. If ADO sees that an UPDATE statement has affected no rows, it concludes that another user has modified the record. This update conflict causes ADO to raise a runtime error in the call to UpdateBatch.
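For example, if a client raises one customer's credit limit, the UPDATE statement that ADO generates during UpdateBatch looks something like this (a simplified sketch with made-up values; the exact statement varies by provider):

```sql
-- The original values travel in the WHERE clause. If another user has
-- already changed the row, no rows are affected and ADO reports a conflict.
UPDATE Customers
SET    CreditLimit = 6000          -- new value from the recordset
WHERE  Customer = 'Bob'            -- key
  AND  CreditLimit = 5000          -- original value
  AND  AccountBalance = 250        -- original value
```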

The hard part about using updatable recordsets is handling partial failures. What if a client posts an updated recordset with 20 changes but 3 of these changes can't be written due to conflicts? Should you roll back the other 17 updates? This can be a difficult question to answer because a lot depends on the application. The bottom line is that the ADO optimistic concurrency mechanism is generic, but your contingency code must be customized to fit your needs. In some cases, you'll be required to implement a fairly complex conflict resolution scheme. This could involve sending a recordset of records that couldn't be written due to conflicts back to the client.
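One reasonable way to start is to trap the error from UpdateBatch and then narrow the recordset to just the rows that failed. This is a sketch, assuming a disconnected recordset opened with adUseClient and adLockBatchOptimistic:

```vb
' Attempt the batch update and collect the rows that failed.
On Error Resume Next
rs.UpdateBatch
If Err.Number <> 0 Then
    ' Filter down to the conflicting rows only.
    rs.Filter = adFilterConflictingRecords
    ' Resync to see the other user's values, then decide whether to
    ' resubmit, discard, or send these rows back to the client.
    rs.Resync adAffectGroup, adResyncUnderlyingValues
    Debug.Print rs.RecordCount & " row(s) failed due to conflicts."
    rs.Filter = adFilterNone
End If
On Error GoTo 0
```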

When you consider whether to use updatable recordsets, you must look at the data in question and determine the probability of conflict. Sometimes updatable recordsets have little or no chance of conflict—for example, if you're passing a customer recordset that represents all accounts from a specific salesperson's territory. If no other salesperson needs to update the same set of records, you can assume that conflicts are unlikely. However, if you need to pass the same customer recordset to many different users, the chances of conflict are much higher and updatable recordsets will be much tougher to use.

Let's say that your application requires a more sophisticated locking and concurrency scheme. Perhaps you want an application design where a user can retrieve a customer record for editing and lock out other users for a specific period of time. If you assume that the average user should get five minutes of "think time" to work with the record once it's been retrieved, you need to somehow lock the record to prevent others from updating it.

The first thing to note is that the optimistic locking scheme that ADO provides for client-side cursors won't give you what you want. Disconnected recordsets never leave locks on the database. Their only contribution to avoiding conflicts is that they check to determine when update conflicts have occurred. However, in our example we need a lock on the database to prevent others from updating the record.

Using a server-side cursor and pessimistic locking isn't a good solution either. Pessimistic locks should be held only for short periods—typically, the time it takes to run a transaction. One problem with a server-side pessimistic lock is that it requires you to keep a recordset object alive with an established connection. That doesn't work well in the middle tier. The second problem has to do with fault tolerance. If the client application crashes or the user falls asleep at the wheel after obtaining a pessimistic lock, the lock might be held for a long, long time.

The solution to this problem is to write a custom locking scheme. When you want users to be able to obtain a logical lock on a record, it's usually a good idea to use a timeout interval to give users a certain window of time in which to make and submit changes.

A custom locking scheme usually requires you to add extra fields to your database tables. Each table that will support the scheme needs extra columns for information about the lock owner and when the lock was acquired. Then you must add code to assign an owner and update the time that the lock was acquired.

When a user attempts to retrieve and edit a record, your code must determine whether the record is currently locked by another user. Your code must examine the last lock time and see if an existing lock has exceeded the timeout interval. If another user owns a lock, you can raise an error back to the user. However, if no lock is currently being held, you can update the lock owner and lock time information in the record before returning the record to a user for the pre-configured editing period.
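A minimal sketch of the lock-acquisition step might look like this, assuming hypothetical LockOwner and LockTime columns, a SQL Server back end (GETDATE and DATEDIFF), and a five-minute timeout. A single atomic UPDATE both tests and takes the lock:

```vb
' Try to acquire a logical lock on one customer row.
Const LOCK_TIMEOUT_MINUTES As Long = 5

Dim sSQL As String, lAffected As Long
sSQL = "UPDATE Customers" & _
       " SET LockOwner = '" & sUser & "', LockTime = GETDATE()" & _
       " WHERE Customer = '" & sCustomer & "'" & _
       " AND (LockOwner IS NULL" & _
       "      OR DATEDIFF(minute, LockTime, GETDATE()) > " & _
              LOCK_TIMEOUT_MINUTES & ")"
conn.Execute sSQL, lAffected, adExecuteNoRecords

If lAffected = 0 Then
    ' Another user holds an unexpired lock on this record.
    Err.Raise vbObjectError + 512, , "Record is locked by another user."
End If
```

Because the test and the assignment happen in one statement, two users can't both acquire the lock even if their requests arrive at nearly the same instant.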

As you can see, the automatic locking provided by ADO and OLE DB can't accommodate sophisticated concurrency policies. You often need a custom scheme like the one I've just presented, which involves extra work in the design and implementation phases.

Caching Data in the Middle Tier

There are two primary reasons to cache data on the Web server—to improve response times (because it eliminates round trips to the database server) and to conserve processing cycles on the database server.

I'm going to jump right in and show you how to convert an ADO recordset to a string in the middle tier. This technique allows you to cache an ADO/XML recordset in an ASP Application variable or in a shared property using the SPM. ADO 2.5 has introduced the Stream object, which makes things pretty easy. Look at the following code:

    Dim conn As Connection
    Set conn = New Connection
    conn.Open sConnect

    Dim rs As Recordset
    Set rs = New Recordset
    rs.CursorLocation = adUseClient
    rs.CursorType = adOpenStatic
    rs.LockType = adLockReadOnly
    Set rs.ActiveConnection = conn
    rs.Open "SELECT Customer, CreditLimit, AccountBalance" & _
            " FROM Customers"
    Set rs.ActiveConnection = Nothing
    conn.Close
    Set conn = Nothing

    ' Write recordset data into string variable using XML format.
    Dim strm As Stream, MyXML As String
    Set strm = New Stream
    rs.Save strm, adPersistXML
    MyXML = strm.ReadText
    rs.Close
    Set rs = Nothing

    ' Now cache MyXML in an ASP Application variable.

As I mentioned, the Stream object is new to ADO 2.5. In earlier versions of ADO, you could persist recordsets to disk files but not to memory, because you had no way to write a recordset's data into a variable. The Stream object in ADO 2.5 makes this task trivial.

Once you write the XML data to an ASP Application variable or a shared property in the SPM, you can reuse it in future requests. You can use this data to load an ADO recordset object or a DOMDocument object. You can also manipulate the XML data on the Web server and/or pass it back to the client in an HTTP response.
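Reloading the cached string into a recordset is just the reverse trip through a Stream. Here's a sketch, assuming MyXML holds the string saved earlier:

```vb
' Rehydrate a recordset from the cached XML string.
Dim strm As Stream
Set strm = New Stream
strm.Open
strm.WriteText MyXML
strm.Position = 0    ' Rewind before reading.

Dim rs As Recordset
Set rs = New Recordset
rs.Open strm         ' ADO recognizes the adPersistXML format.
strm.Close
Set strm = Nothing
```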

Note that you should avoid placing ADO Recordset objects and ADO Stream objects in either ASP Application variables or shared properties in the SPM. These objects have threading characteristics that make this practice very undesirable. You're always safe to store XML text in these middle-tier variables because this data is just a string. One other option is to load read-only XML data into a FreeThreadedDOMDocument object. This object has sophisticated threading capabilities, which makes it acceptable to store in an ASP Application variable or a shared property in the SPM. This technique can conserve processing cycles because you don't need to reload XML data into a DOMDocument object on a per-request basis.

In some applications, it's also valuable to perform Extensible Stylesheet Language (XSL) transforms on the Web server—especially if you're supporting down-level browsers that can't make any sense of XML. These clients require formatted HTML tables. You can cache the XML data that serves as the input to the XSL transform, and you can also cache the HTML table that the transform generates. If many clients want the same view of a table, you don't have to keep repeating the same work. You can do the work once and leverage the results across many requests. The tradeoff is that you might require lots of memory on the Web server. However, memory is often much easier and cheaper to obtain than processing cycles.
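A server-side transform along these lines can be sketched with MSXML's transformNode method. The stylesheet file name is hypothetical; the resulting HTML string is what you'd cache:

```vb
' Transform cached XML into an HTML table for down-level browsers.
Dim xmlDoc As DOMDocument, xslDoc As DOMDocument, sHTML As String
Set xmlDoc = New DOMDocument
Set xslDoc = New DOMDocument
xmlDoc.async = False
xslDoc.async = False
xmlDoc.loadXML MyXML                  ' The cached recordset XML.
xslDoc.Load "CustomerTable.xsl"       ' Hypothetical stylesheet file.
sHTML = xmlDoc.transformNode(xslDoc)
' Cache sHTML and reuse it across many requests.
```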

When you cache data in the middle tier, you have to make some assumptions about how often the data will need to be refreshed. Some data changes so often that it can't be cached for any reasonable period of time. Other data remains static for days or months at a time. Still other types of data, possibly from a data warehouse, are historic and can be considered read-only. Once you make assumptions about the nature of your data, you must then decide how often it needs to be refreshed. Static data lends itself much more easily to caching techniques.

In Chapter 9, I described some of the issues involved with refreshing cached data at regular intervals. While some aspects of refreshing data aren't complicated, designing a caching coherency scheme for a Web farm environment can be very complicated, especially for applications that must maintain session state for users who are constantly redirected across a set of different servers in a Web farm. In many cases, the easiest approach is to simply write session state out to a database.

Summary

I'd like you to take away a few important points from this chapter. First, scalability comes in many forms. While most applications require several forms of scalability, a given application usually does not require all of them. It's up to you to define your application requirements early on so you know what's important and what's not.

In this chapter, I also described the critical roles that HTTP and the Web farm architecture play in scaling a COM+ application. In most cases, this means basing all client-to-server communication on HTTP rather than DCOM. Remember that it's also possible to use HTTP without HTML. HTML is important only when you need to provide cross-platform and down-level browser support. Visual Basic desktop applications that use HTTP will become increasingly common. They can be integrated into a scalable architecture, and they can provide a user interface that's far more sophisticated than those based on Web browsers.

While I haven't taught you much about XML in this book, I encourage you to spend the time to become proficient with it. You should also download Microsoft's XML parser and become familiar with its documentation. If you're just starting out, get a good book that teaches XML from the ground up. XML in Action by William J. Pardi (Microsoft Press, 1999) is a good entry-level book.

If you really want to master XML, I encourage you to read Essential XML: Beyond Markup by Don Box, Aaron Skonnard, and John Lam (Addison-Wesley, 2000). OK, so I'm a little biased because one of the authors wrote the foreword to this book. However, Essential XML is a groundbreaking book that explains the heart and soul of what XML is all about. It goes way beyond the syntax of XML parsers and explains why XML is critical to our industry at this moment in time.

So, this is where the second edition of this book comes to an end. That means I should leave you with a summary of what you've learned. At the end of the day, COM and COM+ are simply technologies that help you build distributed applications. COM is the glue that allows you to assemble applications using components written in a variety of languages. COM+ is a runtime environment that offers valuable services such as thread pooling, declarative transactions, and role-based authorization checks.

COM+ and Windows 2000 also constitute a platform that provides many valuable services. IIS and ASP are essential services because they provide a basis for exposing functionality through HTTP and scaling an application through a Web farm architecture. MSMQ and QC are important because they add the extra dimension of asynchronous and disconnected communication. Taken together, these pieces provide the infrastructure that allows you to build large-scale applications that meet the needs of today's businesses. Now it's your turn to go out and apply what you've learned.



Programming Distributed Applications with COM+ and Microsoft Visual Basic 6.0
ISBN: 1572319615
Year: 2000
Author: Ted Pattison