Basic Design Goals

When creating object-oriented applications, the ideal situation is that any nonbusiness objects will already exist. This includes UI controls, data-access objects, and so forth. In that case, all we need to do is focus on creating, debugging, and testing our business objects themselves, thereby ensuring that each one encapsulates the data and business logic needed to make our application work.

As rich as the .NET Framework is, however, it doesn't provide all the nonbusiness objects that we'll need in order to create most applications. All the basic tools are there, but there's a fair amount of work to be done before we can just sit down and write business logic. There's a set of higher-level functions and capabilities that we often need, but that aren't provided by .NET right out of the box.

These include the following:

  • n-Level undo capability

  • Tracking broken business rules to determine whether an object is valid

  • Tracking whether an object's data has changed (is it "dirty"?)

  • Support for strongly typed collections of child objects

  • A simple and abstract model for the UI developer

  • Full support for data binding in both Windows Forms and Web Forms

  • Saving objects to a database and getting them back again

  • Table-driven security

  • Other miscellaneous features

In all of these cases, the .NET Framework provides all the pieces of the puzzle, but we need to put them together so that they match our specialized requirements. What we don't want to do, however, is to have to put them together for every business object we create. We want to put them together once, so that all these extra features are automatically available to all our business objects.

Moreover, because our goal is to enable the implementation of object-oriented business systems, we must also preserve the core object-oriented concepts:

  • Abstraction

  • Encapsulation

  • Polymorphism

  • Inheritance

What we'll end up with is a framework consisting of a number of classes, as shown in Figure 2-2. The following diagram shows the end result, and don't worry: we'll break it down into digestible parts as we go through the chapter!

Figure 2-2: UML static-class diagram for the framework

These classes will end up being divided into a set of assemblies, or DLLs. These are illustrated in the component diagram shown in Figure 2-3.

Figure 2-3: UML component diagram for the framework

Again, don't worry too much about the details here; we'll be discussing them throughout the chapter. These diagrams are provided to give you a glimpse of what's coming, and for convenient reference as we discuss each of the classes and components in turn. Before we start to get into the details of the framework's design, let's discuss our desired set of features in more detail.

n-Level Undo Capability

Many Windows applications provide their users with an interface that includes OK and Cancel buttons (or some variation on that theme). When the user clicks an OK button, the expectation is that any work the user has done will be saved. Likewise, when the user clicks a Cancel button, he expects that any changes he's made will be reversed or undone.

In simple applications, we can often deliver this functionality by saving the data to a database when the user clicks OK, and discarding the data when the user clicks Cancel. For slightly more complex applications, we may need to be able to undo any editing on a single object when the user presses the Esc key. (This is the case for a row of data being edited in a DataGrid: if the user presses Esc, the row of data should restore its original values.)

When applications become much more complex, however, these approaches won't work. Instead of simply undoing the changes to a single row of data in real time, we may need to be able to undo the changes to a row of data at some later stage.

Consider the case in which we have an Invoice object that contains a collection of LineItem objects. The Invoice itself contains data that we can edit, plus data that's derived from the collection. The TotalAmount property of an Invoice , for instance, is calculated by summing up the individual Amount properties of its LineItem objects. Figure 2-4 illustrates this arrangement.

Figure 2-4: Relationship between the Invoice, LineItems, and LineItem classes
Note 

Typically, the methods on the collection and child object would be internal in scope. Unfortunately, UML has no way to notate this particular scope, so the diagram shows them as public.

Our user interface may allow the user to edit the LineItem objects, and then press Enter to accept the changes to the item, or Esc to undo them. However, even if the user chooses to accept changes to some LineItem objects, she can still choose to cancel the changes on the Invoice itself. Of course, the only way to reset the Invoice object to its original state is to restore the states of the LineItem objects as well, including any changes that she "accepted" for specific LineItem objects.

As if this weren't enough, many applications have more complex hierarchies of objects and subobjects (which we'll call "child objects"). Perhaps our individual LineItem objects each have a collection of Component objects beneath them. Each one represents one of the components sold to the customer that make up the specific line item as shown in Figure 2-5.

Figure 2-5: Class diagram showing a more complex set of class relationships

Now things get even more complicated. If the user edits a Component object, those changes ultimately impact upon the state of the Invoice object itself. Of course, changing a Component also changes the state of the LineItem object that owns the Component .

The user might accept changes to a Component , but cancel the changes to its parent LineItem object, thereby forcing an undo operation to reverse accepted changes to the Component . Or in an even more complex scenario, the user may accept the changes to a Component and its parent LineItem , only to cancel the Invoice . This would force an undo operation that reverses all those changes to the child objects.

Implementing an undo mechanism to support such n-Level scenarios isn't trivial. We must implement code to "trap" or "snapshot" the state of each object before it's edited, so that we can reverse the changes later on. We might even need to take more than one snapshot of an object's state at different points in the editing process, so that we can have the object revert to the appropriate point, based on when the user chooses to accept or cancel any edits.
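
To make the idea concrete, here's a minimal sketch of the snapshot-stack approach. All the class and method names here are illustrative, not the framework's actual API; real field storage would be far more sophisticated than a Hashtable of name/value pairs:

```csharp
using System.Collections;

// Illustrative only: each BeginEdit() pushes a copy of the object's
// state onto a stack, so edits can be nested n levels deep.
public class UndoableObject
{
  private Hashtable _state = new Hashtable();   // field name -> value
  private Stack _snapshots = new Stack();       // stack of saved states

  public object GetValue(string field) { return _state[field]; }
  public void SetValue(string field, object value) { _state[field] = value; }

  public void BeginEdit()
  {
    // snapshot the current state before editing starts
    _snapshots.Push(_state.Clone());
  }

  public void CancelEdit()
  {
    // restore the most recent snapshot, discarding the edits
    _state = (Hashtable)_snapshots.Pop();
  }

  public void ApplyEdit()
  {
    // keep the current state; the snapshot is no longer needed
    _snapshots.Pop();
  }
}
```

Because the snapshots form a stack, a cancel at any level rolls the object back to exactly the point where that level's edit began, which is the behavior the Invoice/LineItem scenario demands.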

Note 

This multilevel undo capability flows from the user's expectations. Consider a typical word processor, where we can undo multiple times to restore the content to ever-earlier states.

And the collection objects are every bit as complex as the business objects themselves. We must handle the simple case when a user edits an existing LineItem , but we must also handle the case where a user adds a new LineItem and then cancels changes to the parent or grandparent, resulting in the new LineItem being discarded. Equally, we must handle the case where the user deletes a LineItem and then cancels changes to the parent or grandparent, thereby causing that deleted object to be restored to the collection as though nothing had ever happened.

n-Level undo is a perfect example of complex code that we don't want to write into every business object. Instead, this functionality should be written once, so that all our business objects support the concept and behave the way we want them to. We'll incorporate this functionality directly into our business object framework, but at the same time, we must be sensitive to the different environments in which we'll use our objects. Although n-Level undo is of high importance when building sophisticated Windows user experiences, it's virtually useless in a typical web environment.

In web-based applications, the user typically doesn't have a Cancel button. He either accepts the changes, or navigates away to another task, allowing us simply to discard the changed object. In this regard, the web environment is much simpler, and if n-Level undo isn't useful to the web UI developer, she shouldn't be forced to deal with it! Our design will take into account that some user-interface types will use the concept, though others will simply ignore it.

Tracking Broken Business Rules

A lot of business logic involves the enforcement of business rules. The fact that a given piece of data is required is a business rule. The fact that one date must be later than another date is a business rule. Some business rules are the result of calculations, though others are merely toggles: when they're broken, the object is invalid. There's no easy way to abstract the entire concept of business rules, but we can easily abstract the concept of business rules that act like a toggle, that is, the rule is either broken or not broken.

Note 

There are commercial business-rule engines and other business-rule products that strive to take the business rules out of our software and keep them in some external location. Some of these may even be powerful and valuable. For most business applications, however, we end up coding the business rules directly into our software. If we're object-oriented, this means coding them into our objects.

A fair number of business rules are of the toggle variety: required fields, fields that must be a certain length (no longer than, no shorter than), fields that must be greater than or less than other fields, and so forth. The common theme is that business rules, when broken, immediately make our object invalid. Combined, we can say that an object is valid if none of these rules is broken, but invalid if any of the rules is broken.

Rather than trying to implement a custom scheme in each business object in order to keep track of which rules are broken and whether the object is or isn't valid at any given point, we can abstract this behavior. Obviously, the rules themselves must be coded into our objects, but the tracking of which rules are broken and whether the object is valid can be handled by the framework. The result will be a standardized mechanism by which the developer can check all business objects for validity. The user-interface developer should also be able to retrieve a list of currently broken rules to display to the user (or for any other purpose).
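
As a rough sketch of the kind of tracking the framework might provide (the names here are illustrative only), a toggle-style broken-rules list boils down to this:

```csharp
using System.Collections.Specialized;

// Illustrative sketch: the framework tracks which toggle-style rules
// are currently broken; the object is valid when the list is empty.
public class BrokenRules
{
  private StringCollection _broken = new StringCollection();

  // Called by the object's business logic: mark a rule as broken
  // or unbroken based on the current data values.
  public void Assert(string ruleName, bool isBroken)
  {
    if (isBroken && !_broken.Contains(ruleName))
      _broken.Add(ruleName);
    else if (!isBroken && _broken.Contains(ruleName))
      _broken.Remove(ruleName);
  }

  // The standardized validity check the UI developer relies on.
  public bool IsValid
  {
    get { return _broken.Count == 0; }
  }

  // The UI can retrieve this list to display to the user.
  public StringCollection GetBrokenRules() { return _broken; }
}
```

The rules themselves (what makes a field required, which date must be later) stay in the business object; only the bookkeeping lives in the framework.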

The list of broken rules is obviously linked to our n-Level undo capability. If the user changes an object's data so that the object becomes invalid, but then cancels the changes, the original state of the object must be restored. The reverse is true as well: An object may start out invalid (perhaps because a required field is blank), so the user must edit data until it becomes valid. If the user later cancels the object (or its parent, grandparent, and so on), then the object must become invalid once again, because it will be restored to its original invalid state.

Fortunately, this is easily handled by treating the broken rules and validity of each object as part of that object's state. When an undo operation occurs, not only is the object's core state restored, but so is the list of broken rules associated with that state. The object and its rules are restored together.

Tracking Whether the Object Has Changed

Another concept is that an object should keep track of whether its state data has been changed. This is important for the performance and efficiency of data updates. Typically, we only want to update the database if the data has changed; it's a waste of effort to update the database with values it already has! Although the UI developer could keep track of whether any values have changed, it's simpler to have the object take care of this detail.

We can implement this in a number of ways, ranging from keeping the previous values of all fields (so that we can make comparisons to see if they've changed), to saying that any change to a value (even "changing" it to its original value) will result in us treating the object as being changed.

Obviously, there's more overhead involved in keeping all the original state values for comparison. On the other hand, a simpler model will often mark an object as being changed when actually it hasn't. This often has its own cost, because we'll typically only save "dirty" objects to the database. An erroneous dirty flag will cause us to interact with the database in order to update columns with the values they already possess!

Rather than having our framework dictate one cost over the other, we'll simply provide a generic mechanism by which our business logic can tell the framework whether each object has been changed. This scheme supports both extremes of implementation, allowing us to make a decision based on the requirements of a specific application.
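
A minimal sketch of such a mechanism might look like this. The names are illustrative, and the any-assignment-marks-dirty policy shown in the property setter is just one of the two extremes discussed above:

```csharp
// Illustrative: the framework supplies MarkDirty()/MarkClean(), and
// each business object's property setters decide when to call them.
public class TrackedObject
{
  private bool _isDirty = false;
  private string _name = "";

  public bool IsDirty { get { return _isDirty; } }
  public void MarkDirty() { _isDirty = true; }
  public void MarkClean() { _isDirty = false; }  // e.g. after a save

  public string Name
  {
    get { return _name; }
    set
    {
      // Simple policy: any assignment marks the object dirty. A
      // stricter implementation could compare value to _name first,
      // trading memory/comparison overhead for fewer false positives.
      _name = value;
      MarkDirty();
    }
  }
}
```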

Strongly Typed Collections of Child Objects

The .NET Framework includes the System.Collections namespace, which contains a number of powerful, generic, collection-based objects, including ArrayList, Hashtable, Queue, and Stack (along with NameValueCollection in the related System.Collections.Specialized namespace). For the most part, these collections accept any type of object; they provide no mechanism by which we can ensure that only objects of a specific type (such as a business object) are in the collection.

Fortunately, the .NET Framework also includes base collection classes from which we can inherit in order to create our own collection objects. We can restrict these in order to hold only the specific types of object we choose.

Sadly, the basic functionality provided by the collection base classes isn't enough to integrate fully with our framework. As we mentioned previously, the business objects need to support some relatively advanced features, such as undo capabilities. Following this line of reasoning, the n-Level undo capabilities that we've talked about must extend into the collections of child objects, thereby ensuring that child object states are restored when an undo is triggered on the parent object. Even more complex is the support for adding and removing items from a collection, and then undoing the addition or the removal if an undo occurs later on.

Also, a collection of child objects needs to be able to indicate if any of the objects it contains is dirty. Although we could force business-object authors to write code that loops through the child objects to discover whether any is marked as dirty, it makes a lot more sense to put this functionality into the framework's collection object, so that the feature is simply available for use. The same is true with validity: If any child object is invalid, then the collection should be able to report that it's invalid. If all child objects are valid, then the collection should report itself as being valid.
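
Sketched in code, and assuming hypothetical LineItem children that expose their own IsDirty and IsValid status, the idea looks something like this:

```csharp
using System.Collections;

// Illustrative: a stand-in child object exposing the status flags the
// collection needs to roll up.
public class LineItem
{
  public bool IsDirty;
  public bool IsValid = true;
}

// A typed collection built on CollectionBase: the Add() method and
// indexer accept only LineItem objects, and the collection aggregates
// the dirty/valid status of its children.
public class LineItems : CollectionBase
{
  public void Add(LineItem item) { List.Add(item); }

  public LineItem this[int index]
  {
    get { return (LineItem)List[index]; }
  }

  // Dirty if any child is dirty.
  public bool IsDirty
  {
    get
    {
      foreach (LineItem item in List)
        if (item.IsDirty) return true;
      return false;
    }
  }

  // Valid only if every child is valid.
  public bool IsValid
  {
    get
    {
      foreach (LineItem item in List)
        if (!item.IsValid) return false;
      return true;
    }
  }
}
```

Note what's missing from this sketch: the undo plumbing for added and removed items. That's precisely the extra functionality the framework's collection base class will have to supply.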

As with our business objects themselves, the goal of our business framework will be to make the creation of a strongly typed collection as close to normal .NET programming as possible, while allowing our framework to provide extra capabilities that we want in all our objects. What we're actually defining here are two sets of behaviors: one for business objects (parent and/or child), and one for collections of business objects. Though business objects will be the more complex of the two, our collection objects will also include some very interesting functionality.

Simple and Abstract Model for the User-Interface Developer

At this point, we've discussed some of the business-object features that we want to support. One of the key reasons for providing these features is to make the business object support Windows- and web-style user experiences with minimal work on the part of the UI developer. In fact, this should be an overarching goal when you're designing business objects for a system. The UI developer should be able to rely on the objects in order to provide business logic, data, and related services in a consistent manner.

Beyond all the features we've already covered is the issue of creating new objects, retrieving existing data, and updating objects in some data store. We'll discuss the process of object persistence later in the chapter, but first we need to consider this topic from the UI developer's perspective. Should the UI developer be aware of any application servers? Should they be aware of any database servers? Or should they simply interact with a set of abstract objects? There are three broad models that we can choose from:

  • UI in charge

  • Object in charge

  • Class in charge

To a greater or lesser degree, all three of these options hide information about how objects are created and saved and allow us to exploit the native capabilities of .NET. Ideally, we'll settle on the option that hides the most information (keeping development as simple as possible) and best allows us to exploit the features of .NET.

Note 

Inevitably, the result will be a compromise. As with many architectural decisions, there are good arguments to be made for each option. In your environment, you may find that a different decision would work better. Keep in mind, though, that this particular decision is fairly central to the overall architecture of the framework we're building, so choosing another option will likely result in dramatic changes throughout the framework.

To make this as clear as possible, the following discussion will assume that we have a physical n-tier configuration, whereby the client or web server is interacting with a separate application server, which in turn interacts with the database. Although not all applications will run in such configurations, we'll find it much easier to discuss object creation, retrieval, and updating in this context.

UI in Charge

One common approach to creating, retrieving, and updating objects is to put the UI in charge of the process. This means that it's the UI developer's responsibility to write code that will contact the application server in order to retrieve or update objects.

In this scheme, when a new object is required, the UI will contact the application server and ask it for a new object. The application server can then instantiate a new object, populate it with default values, and return it to the UI code. The code might be something like this:

  AppServer svr = (AppServer)Activator.GetObject(
      typeof(AppServer), "http://myserver/myroot/appserver.rem");
  Customer cust = svr.CreateCustomer();

Here the object of type AppServer is anchored, so it always runs on the application server. The Customer object is unanchored, so although it's created on the server, it's returned to the UI by value.

Note 

This code uses .NET's remoting technology to contact a web server and have it instantiate an object on our behalf. If you're not familiar with remoting, there's an introduction in the next chapter.

This may seem like a lot of work just to create a new, empty object, but it's the retrieval of default values that makes it necessary. If our application has objects that don't need default values, or if we're willing to hard-code the defaults, we can avoid some of the work by having the UI simply create the object on the client workstation. However, many business applications have configurable default values for objects that must be loaded from the database, which means the application server must load them.

When retrieving an existing object, we follow largely the same procedure. The UI passes criteria to the application server, which uses the criteria to create a new object and load it with the appropriate data from the database. The populated object is then returned to the UI for use. The UI code might be something like this:

  AppServer svr = (AppServer)Activator.GetObject(
      typeof(AppServer), "http://myserver/myroot/appserver.rem");
  Customer cust = svr.GetCustomer(myCriteria);

Updating an object happens when the UI calls the application server and passes the object to the server. The server can then take the data from the object and store it in the database. Because the update process may result in changes to the object's state, the newly saved and updated object is then returned to the UI. The UI code might be something like this:

  AppServer svr = (AppServer)Activator.GetObject(
      typeof(AppServer), "http://myserver/myroot/appserver.rem");
  cust = svr.UpdateCustomer(cust);

Overall, this model is straightforward: The application server must simply expose a set of services that can be called from the UI to create, retrieve, and update objects. Each object can simply contain its business logic, without having to worry about application servers or other details.

The drawback to this scheme is that the UI code must know about and interact with the application server. If we move the application server, or decide to have some objects come from a different server, then we must change the UI code. Moreover, if we create a Windows UI in order to use our objects, and then later create a web UI that uses those same objects, we'll end up with duplicated code. Both types of UI will need to include the code in order to find and interact with the application server.

The whole thing is complicated further when we consider that the physical configuration of our application should be flexible. It should be possible to switch from using an application server to running the data-access code on the client just by changing a configuration file. If there's code scattered throughout our UI that contacts the server any time we use an object, then we have a lot of places where we might introduce a bug that prevents simple configuration file switching.

Object in Charge

Another option is to move the knowledge of the application server into our objects themselves. The UI can just interact with our objects, allowing them to load defaults, retrieve data, or update themselves. In this model, simply using the new keyword creates a new object:

 Customer cust = new Customer(); 

Within the object's constructor, we would then write the code to contact the application server and retrieve default values. It might be something like this:

  public Customer()
  {
    AppServer svr = (AppServer)Activator.GetObject(
        typeof(AppServer), "http://myserver/myroot/appserver.rem");
    object[] values = svr.GetCustomerDefaults();
    // Copy the values into our local variables
  }

Notice here that we're not taking advantage of the built-in support for passing an object by value across the network. What we'd like to do is this:

  public Customer()
  {
    AppServer svr = (AppServer)Activator.GetObject(
        typeof(AppServer), "http://myserver/myroot/appserver.rem");
    this = svr.CreateCustomer();
  }

But it won't work, because this is read-only, so we'll get a compile error.

This means we're left to retrieve the data in some other manner (array, hashtable, dataset, or some other data structure), and then load it into our object's variables. The end result is that we have to write code on both the server and in our business class in order to manually copy the data values.

Given that both the UI-in-charge and class-in-charge techniques avoid all this extra coding, let's just abort the discussion of this option and move on.

Class in Charge

The UI-in-charge approach allows us to use .NET's ability to pass objects by value, but requires the UI developer to know about and interact with the application server. The object-in-charge approach enables a very simple set of UI code, but makes our object code prohibitively complex by making it virtually impossible to pass our objects by value.

The class-in-charge option gives us a good compromise by providing reasonably simple UI code that's unaware of application servers, while also allowing us to use .NET's ability to pass objects by value, thus reducing the amount of "plumbing" code that we need to write in each object. By hiding more information from the UI, we're creating a more abstract and loosely coupled implementation, thus providing better flexibility.

In this model, we'll make use of the concept of static methods on a class. A static method can be called directly, without requiring an instance of the class to be created first. For instance, suppose that our Customer class contains the following code:

  [Serializable()]
  public class Customer
  {
    public static Customer NewCustomer()
    {
      AppServer svr = (AppServer)Activator.GetObject(
          typeof(AppServer), "http://myserver/myroot/appserver.rem");
      return svr.CreateCustomer();
    }
  }

Then the UI code could use this method without first creating a Customer object, as follows:

 Customer cust = Customer.NewCustomer(); 
Note 

A common example of this tactic within the .NET Framework itself is the Guid class, whereby a static method is used to create new GUID values, as follows:

 Guid myGuid = Guid.NewGuid(); 

We've accomplished the goal of making the UI code reasonably simple, but what about the static method and passing objects by value? Well, the NewCustomer() method contacts the application server and asks it to create a new Customer object with default values. The object is created on the server, and then returned to our NewCustomer() code, which is running on the client. Now that the object has been passed back to the client by value, we can simply return it to the UI for use.

Likewise, we can create a static method on our class in order to load an object with data from the data store as shown:

  public static Customer GetCustomer(string criteria)
  {
    AppServer svr = (AppServer)Activator.GetObject(
        typeof(AppServer), "http://myserver/myroot/appserver.rem");
    return svr.GetCustomer(criteria);
  }

Again, the code contacts the application server, providing it with the criteria necessary to load the object's data and create a fully populated object. That object is then returned by value to the GetCustomer() method running on the client, and then back to the UI code.

As before, the UI code remains simple:

 Customer cust = Customer.GetCustomer(myCriteria); 

The class-in-charge model requires that we write some static methods in each class, but keeps the UI code simple and straightforward. It also allows us to take full advantage of .NET's ability to pass objects across the network by value, thereby minimizing the plumbing code we must write. Overall, therefore, it provides the best solution, and we'll use it (and explain it further) in the chapters ahead.

Supporting Data Binding

For nearly a decade, Microsoft has included some kind of data-binding capability in its development tools. Data binding allows us as developers to create forms and populate them with data with almost no custom code; the controls on a form are "bound" to specific fields from a data source (such as a DataSet object).

For almost the same amount of time, data binding has largely been a joke. Originally, it offered performance far below what we could achieve by hand-coding the link between controls and the data source. And even after many of the performance issues were addressed, the data-binding implementations offered too little control to the developer, thereby restricting the types of user experience we could offer. In VB 6, for example, the primary issues blocking widespread use of data binding included the following:

  • We couldn't easily validate the last field the user was editing if they pressed Enter to trigger the default button's click event on a form.

  • We couldn't bind controls to anything but a Recordset; it wasn't possible to bind controls to the properties of a business object.

  • We could only bind to one property on a given control, typically the Text or Value property.

The only place where data binding has been consistently useful is in displaying large amounts of data within a grid, as long as that data didn't need updating. In that case, the weak performance and loss of control were worth it in order to save us from writing reams of boilerplate code. With the .NET Framework, however, Microsoft has dramatically improved data binding for Windows Forms. Better still, we can use data binding when creating web applications, because Web Forms support it, too. The primary benefits or drivers for using data binding in .NET development include the following:

  • Microsoft resolved the performance, control, and flexibility issues of the past.

  • We can now use data binding to link controls to properties of business objects.

  • Data binding can dramatically reduce the amount of code we write in the UI.

  • Data binding is sometimes faster than manual coding, especially when loading data into list boxes, grids, or other complex controls.

Of these, the biggest single benefit is the dramatic reduction in the amount of UI code that we need to write and maintain. Combined with the performance, control, and flexibility of .NET data binding, the reduction in code makes it a very attractive technology for UI development.

In Windows Forms, data binding is read-write, meaning that we can bind an element of a data source to an editable control so that changes to the value in the control will be updated back into the data source as well. In Web Forms, data binding is read-only, meaning that when we bind a control to the data source, the value is copied from the data source into the control, but we must update values from our controls back to the data source manually.

Web Forms data binding is read-only because of the nature of web development in general: it's unlikely that our data source will be kept in memory on the server while the browser is displaying the page. When the updated values are posted from the browser back to the server, there's no longer a data source available for binding, so there's no way for the binding infrastructure to update the values automatically.

In both Windows Forms and Web Forms, data binding is now very powerful. It offers good performance with a high degree of control for the developer, thereby overcoming the limitations we've faced in the past. Given the coding savings we gain by using data binding, it's definitely a technology that we want to support as we build our business framework.

Enabling Our Objects for Data Binding

Although we can use data binding to bind against any object, or any collection of homogeneous objects, there are some things that we can do as object designers to make data binding work better. If we implement these "extra" features, we'll enable data binding to do more work for us, and provide the user with a superior experience. The .NET DataSet object, for instance, implements these extra features in order to provide full data-binding support to both Windows and web developers.

The IEditableObject Interface

All of our editable business objects should implement the System.ComponentModel.IEditableObject interface. This interface is designed to support a simple, one-level undo capability, and is used by simple forms-based data binding and complex grid-based data binding alike.

In the forms-based model, IEditableObject allows the data-binding infrastructure to notify our object before the user edits it, so that it can take a snapshot of its values. Later, we can tell the object whether to apply or cancel those changes, based on the user's actions. In the grid-based model, each of our objects is displayed in a row within the grid. In this case, the interface allows the data-binding infrastructure to notify our object when its row is being edited, and then whether to accept or undo the changes based on the user's actions. Typically, grids perform an undo operation if the user presses the Esc key, and an accept operation if the user moves off that row in the grid by any other means.
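
A minimal sketch of an object implementing this interface might look like the following. The CustomerRow class and its single Name property are purely illustrative; a real business object would snapshot all of its fields:

```csharp
using System.ComponentModel;

// Illustrative: a single-level snapshot/restore driven by data binding.
// The binding infrastructure calls BeginEdit() before the user starts
// editing, then CancelEdit() (e.g. Esc) or EndEdit() (accept) afterward.
public class CustomerRow : IEditableObject
{
  private string _name = "";
  private string _savedName;
  private bool _editing = false;

  public string Name
  {
    get { return _name; }
    set { _name = value; }
  }

  public void BeginEdit()
  {
    if (!_editing)
    {
      _savedName = _name;   // snapshot before the edit starts
      _editing = true;
    }
  }

  public void CancelEdit()
  {
    if (_editing)
    {
      _name = _savedName;   // restore the snapshot
      _editing = false;
    }
  }

  public void EndEdit()
  {
    _editing = false;       // keep the edited values
  }
}
```

Note the guard on BeginEdit(): the binding infrastructure may call it repeatedly, and only the first call per editing session should take a snapshot.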

The IBindingList Interface

All of our business collections should implement the System.ComponentModel.IBindingList interface.

This interface is used in grid-based binding, in which it allows the control that's displaying the contents of the collection to be notified by the collection any time an item is added, removed, or edited, so that the display can be updated. Without this interface, there's no way for the data-binding infrastructure to notify the grid that the underlying data has changed, so the user won't see changes as they happen.

Property Change Events

Finally, we need to add events to our editable business objects, so that they can notify the form any time their data values change. Changes that are caused directly by the user editing a field in a bound control are supported automatically, but if the object updates a property value through code, rather than by direct user editing, we need to notify the data-binding infrastructure that a refresh of the display is required.

Note 

Interestingly, this feature has nothing to do with the IEditableObject or IBindingList interfaces. Those exist primarily to support grids and other complex controls, whereas these events exist primarily to support the binding of a control to a specific property on our object.

We implement this by raising events for each property on the object, where the event is the property name with the word "Changed" appended. For instance, a FirstName property should raise a FirstNameChanged event any time the property value is changed. If any control is bound to the FirstName property of our object, this will be automatically intercepted by the data-binding infrastructure, which will trigger a refresh of all the data-bound controls on the form.

Because these events are simply public events raised by our object, they're accessible not only to a form, but also to the UI developer, if they choose to receive and handle them.
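As an illustration, a hypothetical Person object with a FirstName property would follow this pattern (a minimal sketch, not the full framework implementation):

```csharp
using System;

// Sketch of the "<PropertyName>Changed" event pattern that the data-binding
// infrastructure looks for. The Person class and its field are illustrative.
public class Person
{
    private string _firstName = "";

    // Data binding automatically hooks an event named FirstNameChanged
    // when a control is bound to the FirstName property.
    public event EventHandler FirstNameChanged;

    public string FirstName
    {
        get { return _firstName; }
        set
        {
            if (_firstName != value)
            {
                _firstName = value;
                OnFirstNameChanged();
            }
        }
    }

    protected virtual void OnFirstNameChanged()
    {
        if (FirstNameChanged != null)
            FirstNameChanged(this, EventArgs.Empty);
    }
}
```

Note that the setter only raises the event when the value actually changes, which avoids needless display refreshes.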

Events and Serialization

The events that are raised by our business collections and business objects are all valuable. Events support the data-binding infrastructure and enable us to utilize its full potential. Unfortunately, there's a conflict between the idea of objects raising events, and the use of .NET serialization via the [Serializable()] attribute.

When we mark an object as [Serializable()], we're telling the .NET Framework that it can pass our object across the network by value. This means that the object will be automatically converted into a byte stream by the .NET runtime, a topic we'll cover in more detail in Chapter 3. It also means that any object referenced by our object will be serialized into the same byte stream, unless the variable representing it is marked with the [NonSerialized()] attribute. What may not be immediately obvious is that events create an object reference behind the scenes.

When our object declares and raises an event, that event is delivered to any object that has a handler for the event (because our object has a delegate pointing to the method that handles the event, and a delegate is a reference). We often have a form handle events from objects, as illustrated in Figure 2-6.

image from book
Figure 2-6: A Windows form referencing a business object

How does the event get delivered to the handling object? Well, it turns out that behind every event is a delegate: a strongly typed object that points back to the handling object. This means that any time we use events, we have bidirectional references between our object and the object that's handling our events, as shown in Figure 2-7.

image from book
Figure 2-7: Handling an event on an object causes a back reference to the form.

Even though this back reference isn't visible to developers, it's completely visible to the .NET serialization infrastructure. When we go to serialize our object, the serialization mechanism will trace this reference and attempt to serialize any objects (including forms) that are handling our events! Obviously, this is rarely desirable. In fact, if the handling object is a form, this will fail outright with a runtime error, because forms aren't [Serializable()] . Rather, they are anchored objects.

Note 

If any anchored object handles events that are raised by our [Serializable()] object, we'll be unable to serialize our object because the .NET runtime serialization process will error out.

What we need to do is mark the events as [NonSerialized()]. It turns out that this requires a bit of special syntax when dealing with events. Specifically, we need to use the field target within the attribute ([field: NonSerialized()]), which marks the underlying delegate field, rather than the event itself, as nonserialized.
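In code, the syntax looks like this. The Customer class and its NameChanged event are illustrative; the point is the [field: NonSerialized] target on the event declaration:

```csharp
using System;

// The "field:" target applies the attribute to the compiler-generated
// delegate field behind the event, so any objects handling the event are
// not dragged into the serialization stream.
[Serializable]
public class Customer
{
    private string _name = "";

    // Without "field:" the compiler rejects NonSerialized here, because
    // the attribute is only valid on fields.
    [field: NonSerialized]
    public event EventHandler NameChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            if (NameChanged != null)
                NameChanged(this, EventArgs.Empty);
        }
    }
}
```

With this in place, a form can handle NameChanged, and serializing the Customer will no longer attempt to serialize the form along with it.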

Object Persistence and Object-Relational Mapping

One of the biggest challenges facing a business developer building an object-oriented system is that a good object model is almost never the same as a good relational model. Because most of our data is stored in relational databases using a relational model, we're faced with the significant problem of translating that data into an object model for processing, and then changing it back to a relational model later on, when we want to persist the data from our objects back into the data store.

Relational vs. Object Modeling

Before we go any further, let's make sure we're in agreement that object models aren't the same as relational models. Relational models are primarily concerned with the efficient storage of data, so that replication is minimized. Relational modeling is governed by the rules of normalization, and almost all databases are designed to meet at least the third normal form. In this form, it's quite likely that the data for any given business concept or entity is split between multiple tables in the database in order to avoid any duplication of data.

Object models, on the other hand, are primarily concerned with modeling behavior, not data. It's not the data that defines the object, but what the object represents within our business domain. In many cases, this means that our objects will contain data from multiple tables, or that multiple objects may represent a single table.

At the simplest level, consider a Customer object. In many organizations, customer data resides in multiple data stores. Sales data may be in a SQL Server database, while shipping data resides in Oracle and invoicing data is in an AS/400. Even if we're lucky enough to have only one type of database engine to deal with, it's very likely that our data will be in various tables in different databases.

Each of these data stores might be relational, but when we design our Customer object, we really don't care. Some of the data that will construct our Customer object will come from each location, and when we update the Customer object, all three data stores may be updated. In even more complex cases, we may have interesting relationships between objects that bear little resemblance to a relational model. An invoice, for instance, will certainly have data from an Invoice table, but it's likely to also include some customer information, and possibly some product or shipping information, too.

A classic example of where object models and relational models often differ is a many-to-many relational model. In such a model (we'll take the example here of physicians and services), we have three tables, reflecting the fact that a physician provides many services, and any given service may be provided by many physicians, as illustrated in Figure 2-8.

image from book
Figure 2-8: Data diagram showing many-to-many relationship between tables

This relationship is constructed by creating a link (or bridge) table that contains keys from both the Physician and Service tables, thereby providing a bidirectional, many-to-many link between the two entities. Although it's possible to construct the same model using objects, it's more natural to implement the object model with two types of link objects, one for each type of parent entity, as shown in Figure 2-9.

image from book
Figure 2-9: Class diagram showing many-to-many relationship between classes

This object model provides a collection-based view of the entities, allowing us to retrieve a Physician object along with a collection of child objects that represent the specific services provided by the doctor. Each of these child objects provides access to an actual Service object, but the child object is an entity in its own right, not only possibly providing information about the service, but also perhaps indicating how long the physician has been providing this service or other pertinent data.

Note that this is conceptually an infinite loop. Given a physician, we could navigate to a provided service and retrieve an actual Service object from there. As one of its child objects, that Service object would contain a Provider object that links back to the original Physician object. Although this is obviously not a relational model, it's perfectly acceptable (and intuitive) from an object-oriented perspective.

Note 

We're not replacing the underlying relational database with our object model. What we're trying to do is provide an object model that more accurately represents the business entities, so that we can work with a subset of the data in an object-oriented fashion. When we're not using the objects, the actual data is stored within relational databases.

Object-Relational Mapping

If object models aren't the same as relational models (or some other data models that we might be using), we'll need some mechanism by which we can translate our data from the data storage and management tier up into our object-oriented business-logic tier.

Note 

This is a well-known issue within the object-oriented community. One of the best discussions of it can be found in David Taylor's book, Object-Oriented Technology: A Manager's Guide .

Several object-relational mapping (ORM) products exist for the COM platform, and there are ORM features in the J2EE environment. Right now, however, there's no generic ORM support within .NET. In truth, this isn't too much of a nuisance: Generic ORM support is often problematic in any case. The wide variety of mappings that we might need, and the potential for business logic driving variations in the mapping from object to object, make it virtually impossible to create a generic ORM product that can meet all our needs.

Consider the Customer object example that we discussed earlier. There the data comes from disparate data sources, some of which might not even be relational; perhaps there's a file containing fixed-length mainframe records, for which we've implemented custom software to read the data. It's also quite possible that our business logic will dictate that some of the data is updated in some cases, but not in others. No existing ORM product, for COM or for J2EE, can claim to solve all these issues. The most they can do is provide support for simple cases, where we're updating objects to and from standard, supported, relational data stores. At most, they'll provide hooks by which we can customize their behavior. Rather than trying to build a generic ORM product as part of this book, we'll aim for a much more attainable goal.

Our framework will define a standard set of four methods for creating, fetching, updating, and deleting objects. As business developers, we'll implement these four methods to work with the underlying data management tier by using ADO.NET, or the XML support in .NET, or web services, or any other technology required to accomplish the task. In fact, if you have an ORM (or some other generic data access) product, you'll be able to invoke that from these four methods just as easily as using ADO.NET directly.

The point is that our framework will simplify object persistence and ORM to the point where all we need to do is implement these four methods in order to retrieve or update data. This places no restrictions on our ability to work with data, and provides a standardized persistence and mapping mechanism for all objects.
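The shape of those four methods might look like the following sketch. To keep it runnable, an in-memory Hashtable stands in for the real data store and the methods are made internally callable; in the actual framework they'd be protected, invoked by the data portal, and their bodies would use ADO.NET or whatever other data-access technology is required.

```csharp
using System;
using System.Collections;

// Sketch of the four standard persistence methods. All names are
// illustrative, and the Hashtable is a stand-in for a real database.
[Serializable]
public class Customer
{
    // Stand-in "database": id -> name.
    private static readonly Hashtable Store = new Hashtable();

    private string _id = "";
    private string _name = "";

    public string Id { get { return _id; } }
    public string Name { get { return _name; } set { _name = value; } }

    // Create: load default values for a new object.
    protected internal void DataPortal_Create(object criteria)
    {
        _id = (string)criteria;
        _name = "<new>";
    }

    // Fetch: load the object's fields from the data store.
    protected internal void DataPortal_Fetch(object criteria)
    {
        _id = (string)criteria;
        _name = (string)Store[_id];
    }

    // Update: insert or update the object's data in the data store.
    protected internal void DataPortal_Update()
    {
        Store[_id] = _name;
    }

    // Delete: remove the object's data from the data store.
    protected internal void DataPortal_Delete(object criteria)
    {
        Store.Remove((string)criteria);
    }
}
```

Swapping the Hashtable for ADO.NET calls, XML files, or web service calls changes only these four method bodies; nothing else in the object or the framework needs to know.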

Preserving Encapsulation

As I noted at the beginning of the chapter, one of my key goals was to design this framework to provide powerful features while following the key object-oriented concepts, including encapsulation .

Encapsulation is the idea that all of the logic and data pertaining to a given business entity is held within the object that represents that entity. Of course, there are various ways in which one can interpret the idea of encapsulation; nothing is ever simple!

One approach is to say that we can encapsulate business data and logic in our object, and then encapsulate data access and ORM behavior in some other object. This provides a nice separation between the business logic and data access, and encapsulates both types of behavior, as shown in Figure 2-10.

image from book
Figure 2-10: Separation of ORM logic into a persistence object

Although there are certainly some advantages to this approach, there are drawbacks, too. The most notable of these is that there's no easy or efficient way to get the data from the persistence object into or out of the business object. For the persistence object to load data into the business object, it must be able to bypass business and validation processing in the business object, and somehow load raw data into it directly. If our persistence object tries to load data into the object using our standard properties, we'll run into a series of issues:

  • The data already in the database is presumed valid, so we're wasting a lot of processing time revalidating data unnecessarily. This can lead to a serious performance problem when loading a large group of objects.

  • There's no way to load read-only property values. We often have read-only properties for things such as the primary key of the data, and we obviously need to load that into the object, but we can't load it via the normal interface (if that interface is properly designed).

  • Sometimes, properties are interdependent due to business rules, which means that some properties must be loaded before others or errors will result. The persistence object would need to know about all these conditions so that it could load the right properties first. The result is that the persistence object would become very complex.

On the other hand, having the persistence object load raw data into the business object breaks encapsulation in a big way, because we have one object directly tampering with the internal variables of another. We could do it by using reflection (which we'll discuss in more detail in Chapter 3), or by designing the business object to expose its private variables for manipulation. But the former is slow, and the latter is just plain bad object design: It allows the UI developer (or any other code) to manipulate these variables, too, so we're asking for the abuse of our objects, which will invariably lead to code that's impossible to maintain.

A much better approach, therefore, is to view encapsulation to mean that all the logic for the business entity should be in the object: that is, both the logic to support the UI developer (validation, calculation, and so on) and the data-access logic. This way, the object encapsulates all responsibility for its data; it has sole control over the data from the moment it leaves the database until the time it returns to the database, as shown in Figure 2-11.

image from book
Figure 2-11: Business object directly managing persistence to the data store

This is a simpler way of doing things, because it keeps all of the logic for our entity within the boundaries of a single object, and all of our code within the boundaries of a single class. Any time we need to alter, enhance, or maintain the logic for an entity, we know exactly where to find it. There's no ambiguity regarding whether the logic is in the business object, the persistence object, or possibly both; there's only one object.

The new approach also has the benefit of providing optimal performance. Because the data access and ORM code is inside the object, that code can interact directly with the object's private instance variables. We don't need to break encapsulation, nor do we need to resort to trickery such as reflection (or deal with the resulting performance issues).

The drawback to this approach is that we're including code inside our business class that will handle data access; that is, we're blurring the line between the business-logic tier and the data-access tier in our n-tier logical model. Our framework will help to mitigate this by formally defining four methods into which the data-access code will be written, so there will still be a substantial barrier between our business logic and our data-access code.

On balance, then, we prefer this second view, because we achieve total encapsulation of all data and logic pertaining to a business entity with very high performance. Better still, we can accomplish this using techniques and technologies that are completely supported within the .NET Framework, so that we don't need to resort to any complex or hard-to-code workarounds (such as using reflection to load the data).

Note 

If you're interested, my goal when writing my Visual Basic 6 Business Objects book was to achieve exactly this model. Unfortunately, there was no realistic way to accomplish it with the COM platform, and so I compromised and created UI-centric and data-centric objects to implement the model. With .NET, the technology does exist to reach this goal, and the next section will explain this in more detail.

Supporting Physical n-tier Models

The question that remains, then, is how we're supposed to support n-tier physical models if the UI-oriented and data-oriented behaviors reside in one object?

UI-oriented behaviors almost always involve a lot of properties and methods: a very fine-grained interface with which the user interface can interact in order to set, retrieve, and manipulate the values of an object. Almost by definition, this type of object must run in the same process as the UI code itself, either on the Windows client machine with our Windows Forms, or on the web server with our Web Forms.

Conversely, data-oriented behaviors typically involve very few methods: create, fetch, update, delete. They must run on a machine where they can establish a physical connection to the database server. Sometimes, this is the client workstation or web server, but often it means running on a physically separate application server.

This point of apparent conflict is where the concept of distributed objects enters the picture. It's possible to pass our business object from an application server to the client machine, work with the object, and then pass the object back to the application server so that we can store its data in the database. To do this, we need some generic code running as a service on the application server with which the client can interact. This generic code does little more than accept the object from the client, and then call methods on the object to retrieve or update data as required. But the object itself does all the real work. Figure 2-12 illustrates this concept, showing how we can pass the same physical business object from application server to client, and vice versa, via a generic router object that's running on the application server.

image from book
Figure 2-12: Passing a business object to and from the application server

In Chapter 1 we discussed anchored and unanchored objects. In this model, the business object is unanchored, meaning that it can be passed around the network by value. The router object is anchored, meaning that it will always run on the machine where it's created.

In our framework, we'll refer to this router object as a data portal. It will act as a portal for all data access on all our objects. Our objects will interact with this portal in order to retrieve default values (create), fetch data (read), update or add data (update), and remove data (delete). This means that the data portal will provide a standardized mechanism by which we can perform all CRUD operations.

The end result will be that our business class will include a method that the UI can call in order to load an object based on data from the database:

 public static Customer GetCustomer(string customerId)
 {
     return (Customer)DataPortal.Fetch(new Criteria(customerId));
 }

The actual data-access code will be contained within each of our business objects. The data portal will simply provide an anchored object on a machine with access to the database server, and will invoke the appropriate CRUD methods on our business objects themselves. This means that the business object will also implement a method that will be called by the data portal to actually load the data. That method will look something like this:

 protected void DataPortal_Fetch(object criteria)
 {
     // Code to load the object's variables with data goes here
 }

The UI won't know (or need to know) how any of this works, so in order to create a Customer object, the UI will simply write code along these lines:

 Customer cust = Customer.GetCustomer("ABC"); 

Our framework, and specifically our data portal, will take care of all the rest of the work, including figuring out whether the data-access code should run on the client workstation or on an application server.

Note 

For more background information on the concept of a data portal, refer to my Adventures in VB.NET column. [1]

By using a data portal, we can keep all our logic encapsulated within the business objects and still support physical n-tier configurations. Better still, by implementing the data portal correctly, we'll be able to switch between having the data-access code running on the client machine and placing it on a separate application server just by changing a configuration file setting. The ability to change between different physical configurations with no changes to code is a powerful, valuable feature.

Table-Based Security

Application security is often a challenging issue. Our applications need to be able to authenticate the user, which means that we need to know the user's identity. Our applications also need to be able to authorize the user to perform (or not to perform) certain operations, or to view (or not to view) certain data. Such authorization is typically handled by placing users into groups, or by defining roles to which a user can belong.

Note 

Authorization is just another type of business logic. The decisions about what a user can and can't do or can and can't see within our application are business decisions. Although our framework will work with the .NET Framework classes that support authentication, it's up to our business objects to implement the rules themselves.

Sometimes, we can rely on our environment to authenticate the user. Windows itself can require a user to provide a user ID and password, and we can also use third-party products for this purpose. Authorization, however, is something that belongs to our application. Although we may rely on our environment (Windows, COM+, or another product) to manage the user and the groups or roles to which they belong, it's always up to us to determine what they can and cannot do within our application.

Note 

The association of users or roles with specific behaviors or data within our application is part of our business logic. The definition of who gets to do what is driven by business requirements, not technical requirements.

The .NET Framework directly supports Windows' integrated security. This means that we can use objects within the framework to determine the user's Windows identity and any domain or Active Directory (AD) groups to which they belong. In some organizations, this is enough: All the users of the organization's applications are in the Windows NT domain or AD, and by having them log in to a workstation or a website using integrated security, our applications can determine the user's identity and roles (groups).

In other organizations, however (possibly the majority of organizations), applications are used by at least some users who are not part of the organization's NT domain or AD. They may not even be members of the organization in question. This is very often the case with web and mobile applications, but it's surprisingly common with Windows applications as well. In these cases, we can't rely on Windows' integrated security for authentication and authorization.

To complicate matters further, we really want a security model that provides role information not only to server-side code, but also to the code in our UI. Rather than allowing the user to attempt to perform operations that will generate errors due to security at some later time, we should gray out the options, or not display them at all. To do this requires the UI developer to have access to the user's identity and roles, just as the business-object author does.
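One way to arrange this (a sketch, assuming the framework populates Thread.CurrentPrincipal with a principal that carries the user's roles) is to have both UI and business code ask the current principal the same question. The role name here is hypothetical:

```csharp
using System.Security.Principal;
using System.Threading;

// Sketch: once Thread.CurrentPrincipal carries role information, the UI
// and the business objects can apply the same authorization check.
public class SecurityDemo
{
    public static bool CanEditCustomers()
    {
        IPrincipal user = Thread.CurrentPrincipal;
        return user.Identity.IsAuthenticated && user.IsInRole("Supervisor");
    }
}
```

The UI can then write something like editButton.Enabled = SecurityDemo.CanEditCustomers(); so that disallowed options are grayed out up front, rather than failing with a security error at save time.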

Arranging this state of affairs isn't too hard as long as we're using Windows' integrated security, but it's often problematic when we rely solely on, say, COM+ role-based security, because there's no easy way to make the COM+ role information for our user available to the UI developer.

Note 

In May 2002, Juval Lowy wrote an article for MSDN magazine in which he described how to create custom .NET security objects that merge NT domain or AD groups and COM+ roles so that both are available to the application for use. [2]

For our business framework, we'll provide support for both Windows' integrated security and custom, table-based security, in which the user ID, password, and roles are managed in a simple set of SQL Server tables. This custom security is a model that we can adapt to use any security tables or services that already exist in our organization.

[1] Rockford Lhotka, "A Portal for My Data," Adventures in VB.NET , MSDN, March 5, 2004. See http://msdn.microsoft.com/library/en-us/dnadvnet/html/vbnet03262002.asp.

[2] Juval Lowy, "Unify the Role-Based Security Models for Enterprise and Application Domains with .NET," MSDN, May 2002. See http://msdn.microsoft.com/security/securecode/dotnet/default.aspx?pull=/msdnmag/issues/02/05/rolesec/default.aspx.


