Batch-Queue Server

Now that we've discussed some of the key issues that we face when doing reporting or batch processing in distributed, object-oriented environments, let's move on to provide some solutions, or at least the basis from which you can develop complete solutions for your specific requirements.

Since we don't have a batch-queue processing service available as part of .NET, we'll create one here. While this really is just a basic implementation, it will allow users to schedule tasks to be run on a batch server machine, one that should have a very high-speed connection to the database server so that it can interact with large data volumes efficiently. We'll be able to use this to generate reports or run batch processing on the server.

The batch-queue service will be multithreaded, allowing multiple batch jobs to run concurrently on different threads. This ability can be used to optimize the throughput on the batch server machine. We'll provide a setting through which the system administrator can specify the maximum number of batch jobs to run at any one time.

Mainframe and minicomputer operating systems have pretty much always included powerful batch-processing engines, thereby allowing users to submit tasks for background processing. These tasks are queued up and processed either as soon as possible, or at a scheduled time. Users can get a list of the pending tasks, and remove tasks from the list if they wish.

Tip 

In many cases, a lot of other features are also provided, such as the ability to suspend a task, reschedule it, and so forth. The infrastructure to support batch processing is typically quite extensive in these environments.

The lack of such concepts in the .NET environment is a big drawback, especially if we need to generate large reports or perform long-running batch-processing tasks, or if we need to schedule tasks to run in the middle of the night when no user is around to launch them manually.

CSLA.BatchQueue Design

In creating our batch-queue processor, we'll use a design similar to that of our DataPortal: there will be a client-side BatchQueue class that has static methods so that client code can easily interact with the batch-queue mechanism. The client-side BatchQueue class will use remoting to communicate with a server-side BatchQueue object, and this server-side object will actually interact with the queue processor, Server.BatchQueueService. The entire scheme is shown in Figure 11-1.

Figure 11-1: Static class diagram of batch-queue subsystem

Needless to say, there's a lot going on here, so let's break it down a bit. When a client wants to submit a batch job, the following process, illustrated in Figure 11-2, occurs.

Figure 11-2: Activity diagram for submitting a batch job

As we'll see, there are a couple of minor modifications that can be made to this flow, but this diagram shows the most common process.

The Role of the BatchQueueService Object

Let's take a closer look at the classes in the class diagram. First, we have Server.BatchQueueService. This is the actual batch-queue processing engine. It will run within a Windows service on the server where we want to do our batch processing. This might be the database server itself (if we can spare the processing power), or another server machine in close proximity to the database server.

The BatchQueueService object uses a Microsoft Message Queuing (MSMQ) queue to queue up all its tasks. MSMQ is useful because it provides all the basic infrastructure to create and manage queues and queued messages so we don't have to rewrite all that functionality.

Each task is represented by a Server.BatchEntry object, which contains all the information about the task, including the priority of the job and when it should start, the requesting user's identity, any parameter data, and the worker object that actually implements the batch job. BatchEntry doesn't implement the batch job code; it merely contains all the information that our batch job object will need to do its work.

This means that BatchQueueService just manages a collection of BatchEntry objects that are waiting to be processed, as shown in Figure 11-3.

Figure 11-3: Relationship between the service and batch entries

The Role of the BatchEntry Object

Each BatchEntry object contains basic information about the entry in a BatchEntryInfo object, and the worker object that will perform the actual task (see Figure 11-4). This object can be created in two different ways, but either way the worker class must implement IBatchEntry so that the batch-processing system can invoke the object.

Figure 11-4: Composition of a BatchEntry object

One possibility is for the task object to be created on the client machine and sent to the batch server through remoting. This requires the installation of the DLL that contains the task code on both the client and the batch-server machines: the object is created on the client, and then passed by value to the server.

Alternatively, the client can create a BatchJobRequest object, which contains the assembly and type name of a class in a DLL on the batch server. The BatchJobRequest object will dynamically load this class on the server, and then invoke it so that it performs the batch task. This approach is nice because the DLL containing the batch task object only needs to exist on the batch server; the client never creates the task object, and doesn't need that DLL.
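To make the second approach concrete, the dynamic-loading idea behind BatchJobRequest can be sketched in isolation. The IBatchEntry interface is repeated so the snippet stands alone, and NightlyJob is a purely hypothetical worker; the real BatchJobRequest carries the type name inside the serialized request rather than as a plain string parameter.

```csharp
using System;

// The one-method worker interface from this chapter's framework.
public interface IBatchEntry
{
    void Execute(object state);
}

// A hypothetical worker that would exist only in a server-side DLL.
public class NightlyJob : IBatchEntry
{
    public static object LastState;

    public void Execute(object state)
    {
        LastState = state; // a real job would do its batch work here
    }
}

// BatchJobRequest-style activation: given only a type name, load the
// type on the server and create the worker -- no client-side DLL needed.
public static class DynamicLoadDemo
{
    public static IBatchEntry CreateWorker(string typeName)
    {
        Type workerType = Type.GetType(typeName, true);
        return (IBatchEntry)Activator.CreateInstance(workerType);
    }
}
```

Because activation happens entirely by name, the client can queue a job for any worker class deployed on the batch server without ever referencing its assembly.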

Tip 

The DLL containing the BatchJobRequest class itself must then be installed on both client and server, but it will be part of our batch framework, so that shouldn't be a problem. The batch framework DLL will contain the code needed on the client in order to use the batch processor as well as the code needed on the server for the server to run.

Either way, a Server.BatchEntry object contains a number of things:

  • A BatchEntryInfo object

  • A batch task object (either BatchJobRequest, or the worker object itself)

  • An optional State object containing parameter data from the client (this can be any [Serializable()] object that we want to pass to the batch job)

  • The user's BusinessPrincipal object (if we're using CSLA .NET security)

This is illustrated by Figure 11-4.

All of these objects will be serialized into an MSMQ Message object and then stored in an MSMQ queue on the batch server. By storing these objects in an MSMQ queue, we avoid having to track them ourselves. Also, MSMQ automatically sorts the entries by priority, so we don't need to worry about that either. Finally, by storing the entries in MSMQ we don't need to worry about recording them in a file or a database. Even if we stop and restart our batch service, MSMQ continues to retain any entries queued for processing.

When it's time to process an entry, Server.BatchQueueService deserializes the BatchEntry object (and all the objects it contains) into memory on the server. It then executes the entry on a thread from the .NET thread pool.
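The combination of thread-pool execution and the administrator's maximum-jobs setting can be sketched in isolation. The following is not the chapter's implementation; it's a compressed illustration (using the modern SemaphoreSlim and CountdownEvent types for brevity, where the chapter's .NET 1.x code does this bookkeeping by hand) of capping how many queued jobs run at once:

```csharp
using System;
using System.Threading;

// Illustrative sketch: cap the number of batch jobs running at once,
// as the administrator's "maximum jobs" setting does in BatchQueueService.
public static class JobThrottle
{
    public static int RunAll(int jobCount, int maxConcurrent)
    {
        SemaphoreSlim slots = new SemaphoreSlim(maxConcurrent);
        CountdownEvent done = new CountdownEvent(jobCount);
        int completed = 0;

        for (int i = 0; i < jobCount; i++)
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                slots.Wait();                 // wait for a free job slot
                try
                {
                    Thread.Sleep(20);         // simulate batch work
                    Interlocked.Increment(ref completed);
                }
                finally
                {
                    slots.Release();          // free the slot for the next job
                    done.Signal();
                }
            });
        }

        done.Wait();                          // block until every job finishes
        return completed;
    }
}
```

At most maxConcurrent jobs hold a slot at any moment; the rest sit on pool threads waiting their turn, which is the same throughput-tuning effect the service setting provides.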

The Role of the BatchEntryInfo Object

The BatchEntryInfo object contains information about the user who submitted the request, as well as the date and time that the task is to be executed, and its priority within the queue. This information isn't only used by the batch processor itself; it can also be retrieved by the client. The client might want a list of all the pending and active batch jobs in the queue. In this case, we return a collection of BatchEntryInfo objects in a BatchEntries object, as shown in Figure 11-5.

Figure 11-5: BatchEntries is a collection of BatchEntryInfo objects

When this is requested, the batch server creates a BatchEntries collection object and populates it with the BatchEntryInfo objects for all active (currently running) and pending batch tasks. The collection is then returned by value (through remoting) back to the client, where it can be displayed to the user.

Security and User Identities

You may have noticed that the main UML diagram also includes the Security.Principal class from the core CSLA .NET implementation. When we implemented the DataPortal, we passed the user's BusinessPrincipal object to the server so that the server could impersonate the user, ensuring that our business objects would have the same security context on the server as on the client. We will do the same thing here in the batch-processing environment.

The caveat is that this won't work with Windows' integrated security. Unfortunately, there's no easy way to have our Windows service impersonate an actual Windows user when the batch task is invoked. Although we can impersonate the user when the task is submitted, we can't preserve the user's WindowsPrincipal object for use when the task is actually invoked.

Tip 

This is technically possible. COM+, for example, is able to do this sort of thing. However, it's nontrivial to implement, requiring extensive and in-depth knowledge of the deepest parts of Windows security and its related APIs.

One solution (if our server is running Windows Server 2003) is to have the user enter his password, so that we can pass his username and password to the server. We could then call a .NET method to impersonate the user, providing the username and password as credentials to the Windows security subsystem. Even then, this approach is problematic, because we'd need to protect the password as it's entered and transmitted, which is a big security risk.

Because of the complexity of trying to do Windows impersonation, our implementation will provide impersonation only when CSLA .NET table-based security is used. In such a case, the Server.BatchEntry object will contain not only a BatchEntryInfo object and the task object, but also contain the user's BusinessPrincipal object.

If we set our authentication to "Windows", we'll get no security. All batch-processing code will run under the account that's running the Windows service on the server.

Creating the BatchQueue Assembly

We'll implement the CSLA.BatchQueue component as a new assembly. Clearly, not all applications will make use of batch queuing, in which case they won't need this functionality. By putting it into a separate assembly, we make it optional. This is particularly important when no-touch deployment is used, since it minimizes the amount of code that must be downloaded to the client workstations.

Open the CSLA solution and add a new Class Library project to it. Name it CSLA.BatchQueue.

Next, we'll be using the BusinessPrincipal class that's located in CSLA.dll, so add a reference to the CSLA project. Also, we'll be using remoting to communicate between client and server, and MSMQ to manage our queued tasks, so add references to System.Runtime.Remoting.dll and System.Messaging.dll too. Finally, remove Class1.cs, because we'll be adding our own classes as we create them.

To implement CSLA.BatchQueue, we'll start with the simpler classes, saving BatchQueueService for last; it's the most complex class, and it ties everything together.

IBatchEntry Interface

It's up to the business developer to implement the worker code that will run in the batch process; all that our framework can reasonably do is to manage the process of launching that code on the server.

With this in mind, there are two approaches for implementation. First, the worker code may be written in a class that implements a specific interface so that our batch-processing engine knows how to invoke it. Once the class has been created within a DLL, that DLL can be installed on the batch server, and we can use a BatchJobRequest object to queue it up for processing. A second option is to deploy the same DLL on both client and server, and pass the worker object by value (using serialization) to the batch server for processing.

Figure 11-6 shows these two options.

Figure 11-6: Two ways for a client to submit a work job to the queue

In either case, the worker object containing the business code must ultimately be executed by our batch-processing framework. To make this practical, add a new class to the project named IBatchEntry, and change it to define an interface, as follows:

using System;

namespace CSLA.BatchQueue
{
  public interface IBatchEntry
  {
    void Execute(object state);
  }
}

Any business class can implement this interface by providing a method that takes a single Object parameter. Our batch-processing framework will pass this "state" object from the client to the server, allowing the client to pass arbitrary information to the batch-processing method. Typically, this will be used to pass criteria so that the batch job does the work requested by the user, but that's all up to the business developer, not our framework code.
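For example, a business developer might implement the interface like this. The interface is repeated so the snippet compiles on its own, and the worker's name and logic are hypothetical:

```csharp
using System;

// Repeated from the listing above so this sketch stands alone.
public interface IBatchEntry
{
    void Execute(object state);
}

// A hypothetical business worker: the framework calls Execute() on the
// batch server, handing it whatever "state" object the client supplied.
[Serializable()]
public class PostInvoicesJob : IBatchEntry
{
    public int BatchSize; // recorded here only so the example is observable

    public void Execute(object state)
    {
        // Treat the client-supplied state as simple criteria; a real job
        // would use it to decide which rows to process.
        BatchSize = (int)state;
    }
}
```

Note that the worker must be marked [Serializable()] if it's going to be passed by value from client to server.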

BatchEntryInfo Class

Each batch job will include a set of information about the user who submitted the request, and about the request itself. For the latter, we need to know the job's ID, the priority of the job in the queue, the date and time it's scheduled to run, and its current status. Status is one of the following:

  • Active : An entry that's currently executing

  • Pending : An entry that's waiting to execute, and will as soon as there's a free thread in the queue

  • Holding : An entry that's scheduled to run at a future time

All of this data will be kept in a BatchEntryInfo object associated with the task. By keeping the data in a separate object, we simplify the process of retrieving a list of all entries in the batch queue. As I described earlier, all we need to do is loop through all queued entries, and make a collection of their BatchEntryInfo objects.
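Note that Holding isn't a stored status; it's derived from the entry's stored status and its HoldUntil time. That rule can be expressed in isolation as follows (the helper class is illustrative only; in the framework the logic lives in the Status property):

```csharp
using System;

public enum BatchEntryStatus { Pending, Holding, Active }

// Illustrative helper: a Pending entry whose HoldUntil time is still in
// the future reports itself as Holding; otherwise the stored status wins.
public static class StatusRule
{
    public static BatchEntryStatus Effective(
        BatchEntryStatus stored, DateTime holdUntil, DateTime now)
    {
        if (holdUntil > now && stored == BatchEntryStatus.Pending)
            return BatchEntryStatus.Holding;
        return stored;
    }
}
```

An Active entry is never reported as Holding, even if its HoldUntil value happens to be in the future.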

Add a new class named BatchEntryInfo to the project; this will be a simple informational class. A BatchEntryInfo object is created on the client when the business application submits a job to the batch-processing framework. As the object is created, we'll gather information about the user's identity and the client computer. Add the following code:

using System;
using System.Messaging;

namespace CSLA.BatchQueue
{
  public enum BatchEntryStatus
  {
    Pending,
    Holding,
    Active
  }

  [Serializable()]
  public class BatchEntryInfo
  {
    Guid _id = Guid.NewGuid();
    DateTime _submitted = DateTime.Now;
    string _user = System.Environment.UserName;
    string _machine = System.Environment.MachineName;
    MessagePriority _priority = MessagePriority.Normal;
    string _msgID;
    DateTime _holdUntil = DateTime.MinValue;
    BatchEntryStatus _status = BatchEntryStatus.Pending;

    public Guid ID
    {
      get { return _id; }
    }

    public DateTime Submitted
    {
      get { return _submitted; }
    }

    public string User
    {
      get { return _user; }
    }

    public string Machine
    {
      get { return _machine; }
    }

    public MessagePriority Priority
    {
      get { return _priority; }
      set { _priority = value; }
    }

    public string MessageID
    {
      get { return _msgID; }
    }

    internal void SetMessageID(string id)
    {
      _msgID = id;
    }

    public DateTime HoldUntil
    {
      get { return _holdUntil; }
      set { _holdUntil = value; }
    }

    public BatchEntryStatus Status
    {
      get
      {
        if(_holdUntil > DateTime.Now && _status == BatchEntryStatus.Pending)
          return BatchEntryStatus.Holding;
        else
          return _status;
      }
    }

    internal void SetStatus(BatchEntryStatus status)
    {
      _status = status;
    }

    #region System.Object overrides

    public override string ToString()
    {
      return _user + "@" + _machine + ":" + _id.ToString();
    }

    public bool Equals(BatchEntryInfo info)
    {
      return _id.Equals(info.ID);
    }

    public override int GetHashCode()
    {
      return _id.GetHashCode();
    }

    #endregion
  }
}

In this quite long but fairly straightforward listing, we define an enumerated value to indicate the status of the entry, followed by the BatchEntryInfo class itself. Notice that the latter is marked as [Serializable()] so that it can be passed by value across the network, and also so that it can be serialized into an MSMQ message body on the server.

Many of these properties, such as the name of the client machine, are set automatically as the object is first created, and so they're read-only. Others, such as the ID and status of the MSMQ message, are to be set only by our framework, so they're read-only properties with internal methods to set the values. Still others, such as HoldUntil, can be set by business code, so they're read-write properties.

The bulk of the code here just stores and exposes data about the user and machine where the entry was submitted, and about the entry's priority, status, and so forth. We also override some System.Object methods so that our object behaves in a friendly manner.

BatchEntries Class

A feature that we particularly want to enable is the ability for a client application to request a list of all the entries in the queue, regardless of their current status (holding, pending, or active). Since all pertinent information about each entry is in that entry's BatchEntryInfo object, all we need to do is create a collection of those objects on the server, and then return that collection to the client.

Such a collection is easily implemented as a custom collection object. Add a BatchEntries class to the project with the following code:

using System;
using System.Collections;
using System.Messaging;
using System.Runtime.Serialization.Formatters.Binary;

namespace CSLA.BatchQueue
{
  /// <summary>
  /// Contains a list of holding, pending and active batch
  /// queue entries.
  /// </summary>
  [Serializable()]
  public class BatchEntries : CollectionBase
  {
    /// <summary>
    /// Returns a reference to an object with information about
    /// a specific batch queue entry.
    /// </summary>
    public BatchEntryInfo this[int index]
    {
      get { return (BatchEntryInfo)List[index]; }
    }

    /// <summary>
    /// Returns a reference to an object with information about
    /// a specific batch queue entry.
    /// </summary>
    /// <param name="id">The ID value of
    /// the entry to return.</param>
    public BatchEntryInfo this[Guid id]
    {
      get
      {
        foreach(BatchEntryInfo obj in List)
          if(obj.ID.Equals(id))
            return obj;
        return null;
      }
    }

    internal BatchEntries()
    {
      // prevent client creation
    }

    internal void Add(BatchEntryInfo value)
    {
      List.Add(value);
    }
  }
}

Since we'll be passing this collection from the server back to the client via remoting, we have marked it as [Serializable()]. Also note that its constructor is scoped as internal, so the object can't be created directly by client code. The only way a client can get an object of this type is by requesting it from our queue-processing framework, which ensures that we can create and populate the object on the server and then return it to the client.

Lastly, notice the Add() method. This will be used by our server-side code to populate the collection with the BatchEntryInfo objects that are currently in the queue. Client code can use this collection of BatchEntryInfo objects to display a list of entries in the queue to the user, or for whatever other purpose makes sense for the client application in question.

Client-Side BatchQueue Class

Our client code needs an easy way to interact with the batch-processing server, so we'll provide a client-side class named BatchQueue for this purpose. Each BatchQueue object will represent a connection to a specific batch-queue server process. This allows an enterprise environment to have many batch-processing servers running, and allows a client to create BatchQueue objects to interact with each of them as needed.

We'll be using remoting to communicate with the batch-queue server processes, so we'll use the remoting URL for the server process as an identifier for that server. When we create a BatchQueue object, we'll provide it with the URL of the server with which it should communicate. If no URL is provided, we'll have the BatchQueue object read the application's configuration file to find the default queue URL.
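For the default-URL case, the application's configuration file needs a DefaultBatchQueueServer entry. The key name matches the code; the server name, port, and endpoint shown here are purely hypothetical:

```xml
<configuration>
  <appSettings>
    <!-- hypothetical URL; adjust server name, port, and endpoint -->
    <add key="DefaultBatchQueueServer"
         value="tcp://batchserver:5050/BatchQueueServer" />
  </appSettings>
</configuration>
```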

Add a BatchQueue class to the project. We'll start with the code needed to construct the object:

using System;
using System.Configuration;
using System.Messaging;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

namespace CSLA.BatchQueue
{
  public class BatchQueue
  {
    string _queueURL;
    Server.BatchQueue _server;

    #region Constructors and Initialization

    public BatchQueue()
    {
      _queueURL = ConfigurationSettings.AppSettings[
        "DefaultBatchQueueServer"];
    }

    public BatchQueue(string queueServerURL)
    {
      _queueURL = queueServerURL;
    }

    Server.BatchQueue QueueServer
    {
      get
      {
        if(_server == null)
          _server = (Server.BatchQueue)Activator.GetObject(
            typeof(Server.BatchQueue), _queueURL);
        return _server;
      }
    }

    #endregion
  }
}

We provide two constructors: one where the client supplies the URL for the server, and the other where we read it from the configuration file. We also implement a QueueServer property that caches the remoting proxy object for the server, so we only have to create it once. This is similar to the code we implemented for the client-side DataPortal class.

Also similar to DataPortal is the need to pass our BusinessPrincipal object to the server, in case the security model is set to CSLA. Add the following helper methods to make this easy:

#region Security

string AUTHENTICATION
{
  get
  {
    return ConfigurationSettings.AppSettings["Authentication"];
  }
}

System.Security.Principal.IPrincipal GetPrincipal()
{
  if(AUTHENTICATION == "Windows")
  {
    // Windows integrated security
    return null;
  }
  else
  {
    // we assume using the CSLA Framework security
    return System.Threading.Thread.CurrentPrincipal;
  }
}

#endregion

Following .NET best practices, we'll override the System.Object methods ToString(), Equals(), and GetHashCode(). You can see the details of this in the code download.

Now the primary purposes of the BatchQueue object are to allow client code to submit jobs to the queue, to get a list of jobs in the queue, and to remove jobs from the queue. Of course, all the real work happens on the server, but we need to implement methods to make these activities available to the client.

Tip 

As implemented in this chapter, any user can remove any entry in the queue. You may choose to enhance this implementation to restrict which users can remove entries.

There are many possible variations on submitting a job to the queue; the code download includes eight different implementations. The following list provides a sample of the types of options available:

  • The client creates a worker object (which implements IBatchEntry) and submits it.

  • The client creates a worker object (which implements IBatchEntry) and submits it, along with a "state" object.

  • As an option to the previous item, the client may specify the priority of the entry in the queue.

  • As an option to the previous item, the client may specify a date and time when the entry should start.

These are handled by providing overloaded Submit() methods. All of them will work in more or less the same way, but we'll examine a couple of variations here. This is the simplest:

public void Submit(IBatchEntry entry)
{
  QueueServer.Submit(new Server.BatchEntry(entry));
}

In this case, the user is supplying us with a reference to an object she has created, which must be [Serializable()] and implement the IBatchEntry interface. We use this object to initialize a Server.BatchEntry object, which is then submitted to the server via remoting. The server-side Submit() method runs on the server, so this new BatchEntry object, along with the user-supplied object, is passed by value to the server.

The most complex version of Submit() is the one where the client specifies everything, as shown here:

public void Submit(IBatchEntry entry, object state,
  DateTime holdUntil, MessagePriority priority)
{
  Server.BatchEntry job = new Server.BatchEntry(entry, state);
  job.Info.HoldUntil = holdUntil;
  job.Info.Priority = priority;
  QueueServer.Submit(job);
}

Here, the client provides us with not only a reference to its worker object, but also a "state" object that we'll pass through to the worker object's Execute() method when we run it on the server. The client is also providing a priority for the entry in the queue, and a date and time when the entry should be run.

Tip 

You can refer to the code download for other overloaded implementations of the Submit() method.

The client may also want to get a list of the entries in the queue on the server. The result is a BatchEntries collection object containing BatchEntryInfo objects for each entry in the queue. This collection object is created and populated on the server, so it contains the list of batch entries in the order that they will be executed. It's then returned to the client, so all we need to do is provide a simple method to invoke the server functionality, as shown here:

public BatchEntries Entries
{
  get
  {
    return QueueServer.GetEntries(GetPrincipal());
  }
}

The end result is that the client has a collection of BatchEntryInfo objects that can be used to display the list to the user, or in any other way that's deemed appropriate.

We also provide a Remove() method, as follows:

public void Remove(BatchEntryInfo entry)
{
  QueueServer.Remove(entry);
}

The client must provide us with the BatchEntryInfo object corresponding to an entry in the queue on the server. We simply pass this off to the server so that it can remove the entry from the queue.

Though our current implementation uses ID values to identify entries, we require the entire object to be passed. This helps keep the overall system more abstract. Because the entire object is passed, we could change the way that entries are identified in the future without risk of breaking client code. The client has no idea what information we need out of the BatchEntryInfo object, so there's no dependency here.

Server-Side BatchQueue Class

Much of the code in the client-side BatchQueue class was making method calls to a server-side BatchQueue object. This is very similar in style to the client and server DataPortal objects, which were also designed to interact with each other in this way.

Add a new class file named BatchQueueServer to the project. This class will be exposed via remoting, so that the client-side BatchQueue object can interact with it. This means that it needs to inherit from MarshalByRefObject. Since it will be running on the server, we'll put it into a Server namespace as well:

using System;
using System.Security.Principal;
using CSLA.Security;

namespace CSLA.BatchQueue.Server
{
  /// <summary>
  /// This is the entry point for all queue requests from
  /// the client via remoting.
  /// </summary>
  public class BatchQueue : MarshalByRefObject
  {
  }
}

The full type name for this class is CSLA.BatchQueue.Server.BatchQueue, and it's available via remoting as an anchored object. Since we've already implemented the client-side BatchQueue code that calls this object, we already know that we need to implement the Submit(), GetEntries(), and Remove() methods, as follows:

public void Submit(BatchEntry entry)
{
  BatchQueueService.Enqueue(entry);
}

public void Remove(BatchEntryInfo entry)
{
  BatchQueueService.Dequeue(entry);
}

public BatchEntries GetEntries(IPrincipal principal)
{
  SetPrincipal(principal);
  BatchEntries entries = new BatchEntries();
  BatchQueueService.LoadEntries(entries);
  return entries;
}

The Submit() and Remove() methods merely ask the BatchQueueService class to do the actual submission and removal of the entry from the queue. The BatchQueueService class contains all the code to manage and interact with the queue, which is good, because that code is multithreaded, and therefore relatively complex, as we'll see when we implement it.

The GetEntries() method creates an instance of the BatchEntries collection object and then has the BatchQueueService class populate the collection with the current list of entries in the queue. Again, we're totally encapsulating all the interaction with the queue inside the BatchQueueService class itself.

The Server.BatchQueue class in the download also includes copies of the AUTHENTICATION and SetPrincipal() members from the DataPortal server. Notice that the GetEntries() method calls SetPrincipal() before executing, thereby ensuring that we have access to the user's identity as our code runs. In this case, we're not using that information, but we could set up roles so that only certain users can submit, remove, or view the entries in the queue.

BatchEntry Class

Each entry in the queue is contained within a BatchEntry object. Put another way, the BatchEntry object is a container for the worker object, the state object, the BatchEntryInfo object, and the BusinessPrincipal object (if we're using CSLA .NET security).

Being a container for these objects isn't hard. However, the BatchEntry object is also responsible for managing the execution of the batch task when it's invoked. This code is a bit more complex, since it includes setting up the security context before the worker object is executed, and then ensuring that the BatchQueueService knows when the job is complete so that the next batch job can be started.

Let's start by creating the BatchEntry class and adding the code to contain the four other objects, as follows:

using System;
using System.Security.Principal;
using System.Diagnostics;

namespace CSLA.BatchQueue.Server
{
  [Serializable()]
  public sealed class BatchEntry
  {
    BatchEntryInfo _info = new BatchEntryInfo();
    IPrincipal _principal;
    IBatchEntry _worker;
    object _state;

    public BatchEntryInfo Info
    {
      get { return _info; }
    }

    public IPrincipal Principal
    {
      get { return _principal; }
    }

    public IBatchEntry Entry
    {
      get { return _worker; }
      set { _worker = value; }
    }

    public object State
    {
      get { return _state; }
      set { _state = value; }
    }
  }
}

We also have a couple of constructors that are used by the client-side BatchQueue object to create and initialize the BatchEntry object, as shown here:

#region Constructors

internal BatchEntry(IBatchEntry entry)
{
  _principal = GetPrincipal();
  _worker = entry;
}

internal BatchEntry(IBatchEntry entry, object state)
{
  _principal = GetPrincipal();
  _worker = entry;
  _state = state;
}

#endregion

Note that these are scoped as internal, so the client can't create a BatchEntry object directly; it must use the client-side BatchQueue object to create the BatchEntry as it's being submitted.

The real work in this class is done when we execute the batch job. It's up to the BatchQueueService to determine that our entry should be run, based on its priority and whether it's being held to run at a specific time. BatchQueueService then delegates the work of actually launching the job to our BatchEntry code. The BatchQueueService runs our BatchEntry object's Execute() method on a background thread (actually a thread out of the thread pool).

Quite a lot happens as part of this process. Here's the complete code:

  #region Batch execution

  // this will run in a background thread in the
  // thread pool
  internal void Execute(object state)
  {
    IPrincipal oldPrincipal =
      System.Threading.Thread.CurrentPrincipal;
    try
    {
      // set this thread's principal to our user
      SetPrincipal(_principal);
      try
      {
        // now run the user's code
        _worker.Execute(_state);
        System.Text.StringBuilder sb =
          new System.Text.StringBuilder();
        sb.AppendFormat("Batch job completed\n");
        sb.AppendFormat("Batch job: {0}\n", this.ToString());
        sb.AppendFormat("Job object: {0}\n", _worker.ToString());
        System.Diagnostics.EventLog.WriteEntry(
          BatchQueueService.Name, sb.ToString(),
          EventLogEntryType.Information);
      }
      catch(Exception ex)
      {
        System.Text.StringBuilder sb =
          new System.Text.StringBuilder();
        sb.AppendFormat("Batch job failed due to execution error\n");
        sb.AppendFormat("Batch job: {0}\n", this.ToString());
        sb.AppendFormat("Job object: {0}\n", _worker.ToString());
        sb.Append(ex.ToString());
        System.Diagnostics.EventLog.WriteEntry(
          BatchQueueService.Name, sb.ToString(),
          EventLogEntryType.Warning);
      }
    }
    finally
    {
      BatchQueueService.Deactivate(this);
      // reset the thread's principal object
      System.Threading.Thread.CurrentPrincipal = oldPrincipal;
    }
  }

  #endregion

We start by calling SetPrincipal(), which works just like the SetPrincipal() method in the server-side DataPortal; that is, it checks whether we're using CSLA .NET security, and if so it sets the user's BusinessPrincipal as the current principal object for the thread.

Remember that this particular thread is from the thread pool, so we don't know what principal object it used to have; we just need to ensure that it has our principal object for the duration of the process. However, we store the original principal value first, and restore the thread's CurrentPrincipal to its original value when we're done. This way, we're only impersonating our specific user for as long as we're running this particular entry.
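This save-and-restore pattern is easy to demonstrate in isolation. The sketch below is purely illustrative (the PrincipalSwap class and the "batchuser" name are invented for the example); it mimics what Execute() does around the worker call:

```csharp
using System;
using System.Security.Principal;
using System.Threading;

public class PrincipalSwap
{
  // run a job under the submitting user's principal, then restore
  public static string RunAs(IPrincipal user)
  {
    // save the pool thread's current principal, whatever it is
    IPrincipal oldPrincipal = Thread.CurrentPrincipal;
    try
    {
      Thread.CurrentPrincipal = user;  // impersonate for this entry only
      // ... the job's Execute() would run here ...
      return Thread.CurrentPrincipal.Identity.Name;
    }
    finally
    {
      // the pool thread is always handed back with its original principal
      Thread.CurrentPrincipal = oldPrincipal;
    }
  }

  public static void Main()
  {
    IPrincipal user = new GenericPrincipal(
      new GenericIdentity("batchuser"), new string[] { "Users" });
    Console.WriteLine(RunAs(user));  // batchuser
  }
}
```

The try...finally block guarantees the restore happens even if the job throws, which matters because the thread is returned to the pool and reused.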

When we go to execute the job itself, we call the worker object's Execute() method via the IBatchEntry interface, passing it the "state" object as a parameter. Notice that nowhere in our framework do we care what is in this "state" object; it's passed from the client to the worker object unchanged, thereby allowing the client to pass parameter values or criteria to the worker based on what it needs.

Also notice that the worker object's Execute() call is wrapped in a try...catch block. If it succeeds, we write an entry to the system's application event log indicating success; otherwise we catch the exception and write a failure message to the log. Since our batch-queue processor is running as a Windows service, we can't show dialog boxes or otherwise inform the user of status, so the application event log is an appropriate place to do so.

Tip 

By writing this information to the system's application event log, we enable management of the system. There are commercial tools available that allow monitoring and reporting against entries in the event log, and these can be used to track whether jobs succeed or fail.

Alternatively, we might choose to write this information to a table in a database. We could then create utilities so that the user could easily generate reports on that data to see whether the user's batch entries succeeded or failed.

The BatchEntry class also includes the AUTHORIZATION property and GetPrincipal() and SetPrincipal() security methods. We also follow .NET best practices by overriding the standard System.Object methods ToString() , Equals() , and GetHashCode() . Refer to the code download for the implementations of all these methods.

BatchJobRequest Class

To submit a job to the batch processor, the client code must supply a worker object that implements the IBatchEntry interface. One way to do this is to have the client create such an object, in which case it will be passed by value from the client to the batch server. A simple worker class looks like this:

 public class MyWorker : IBatchEntry
 {
   void IBatchEntry.Execute(object state)
   {
     // Do batch processing work here
   }
 }

Then we could submit the worker to the queue like this:

 MyWorker worker = new MyWorker();
 BatchQueue queue = new BatchQueue();
 queue.Submit(worker);

This is an elegant approach, since it means that the client can create and initialize the worker object before sending it to the batch server. As we discussed earlier, though, the drawback to this approach is that the DLL containing the worker code must be installed on both client and server, which can complicate deployment.

An alternative is for the client to create a BatchJobRequest object, which contains the assembly and type name of a class on the batch server that is to be run. The client merely supplies the assembly name and class name as String values; the actual worker object is dynamically loaded on the server. This approach means that the client can't initialize the worker object, since it never physically exists on the client machine; all parameter or criteria data must be passed via the "state" object instead. The benefit, however, is that the DLL containing the worker code only needs to be installed on the batch server, not on the client. This can simplify deployment quite a lot.

In this case, the worker class would be the same as the one used earlier, but its DLL would only be installed on the batch server. The client code to submit the job is quite different (assuming the MyWorker class is compiled into Worker.dll), as shown here:

 BatchJobRequest worker =
   new BatchJobRequest("Worker.MyWorker", "Worker");
 BatchQueue queue = new BatchQueue();
 queue.Submit(worker);

The client creates our generic BatchJobRequest , providing it with the type and assembly information necessary to load the worker object on the server dynamically.

A BatchJobRequest object is also a worker object, and it therefore implements IBatchEntry . It's special, however, because what it does when executed on the server is to load the real worker object to do the actual work. Add a BatchJobRequest class to the project with the following code:

  using System;

  namespace CSLA.BatchQueue
  {
    [Serializable()]
    public class BatchJobRequest : IBatchEntry
    {
      string _assembly = string.Empty;
      string _type = string.Empty;

      public BatchJobRequest(string type, string assembly)
      {
        _assembly = assembly;
        _type = type;
      }

      public string Type
      {
        get { return _type; }
        set { _type = value; }
      }

      public string Assembly
      {
        get { return _assembly; }
        set { _assembly = value; }
      }

      // This method runs on the server - it is called by BatchEntry,
      // which is called by Server.BatchQueueService
      void IBatchEntry.Execute(object state)
      {
        // create an instance of the specified object
        IBatchEntry job =
          (IBatchEntry)AppDomain.CurrentDomain.
            CreateInstanceAndUnwrap(_assembly, _type);
        // execute the job
        job.Execute(state);
      }

      #region System.Object overrides

      public override string ToString()
      {
        return "BatchJobRequest: " + _type + "," + _assembly;
      }

      #endregion
    }
  }

The constructor, variables, and properties of this class interact so that the client code can set the assembly name and class name of the actual worker class. This is pretty much what we do when we configure remoting, for instance. In the configuration file, we provide the assembly and type name of the class to be exposed via remoting, and that class is dynamically loaded. We're doing the same thing here, but we're writing the code to load the class dynamically.

The Execute() method is run on the server when our entry is invoked. Of course, we have no business logic here; we simply use the assembly and type names to load the real worker object and then call its Execute() method, as follows:

 // create an instance of the specified object
 IBatchEntry job =
   (IBatchEntry)AppDomain.CurrentDomain.
     CreateInstanceAndUnwrap(_assembly, _type);
 // execute the job
 job.Execute(state);

The CreateInstanceAndUnwrap() method dynamically loads the assembly into memory, and then creates an instance of the class based on its name.

Tip 

This is analogous to the CreateObject() method that's used to create COM objects dynamically, but this approach is used to create .NET objects dynamically.

We then cast the value so that we're accessing the object via its IBatchEntry interface, and then we call its Execute() method.

Note that there's no exception handling here. We already implemented error handling in BatchEntry , so if any exceptions occur here, they'll be caught by BatchEntry , which will record the failure into the system's application event log.

BatchQueueService Class

I've saved the best, and most complex, for last. At this point, we have all the parts necessary to create a batch entry and to retrieve batch information, and so forth. What we're lacking is the actual batch-queue processing service itself.

This queue processor is tricky. It must put new entries into MSMQ, retrieve entries from MSMQ when they're to be run or removed, and ensure that entries are only run when they're scheduled, based on the HoldUntil property in the BatchEntryInfo object. On top of that, we may want to configure the processor to run more than one entry at a time, so it must have a way to launch up to a set number of entries simultaneously.

As if all that wasn't enough, we're in a multithreaded environment. The Windows service itself will run on one thread, each remoting request will automatically run on a different thread, and each of our jobs should execute on yet another thread, so that they don't block each other, remoting, or the Windows service.

Creating multithreaded code is almost always quite difficult. Anytime more than one thread can interact with a single variable or object, we need to add code to ensure that the threads don't conflict with each other. Unfortunately, as soon as we start blocking one thread while another is running, we have to worry about issues like deadlocks (where threads block each other and get stuck forever) and performance (threads can block each other unnecessarily and thus cause performance problems).

Windows Service Best Practices

Another thing to consider is that we'll ultimately be creating a Windows service, so why then are we putting all the code to implement the service into our DLL, rather than into the service project itself? It turns out that this is a best-practices implementation because it simplifies the debugging and testing of our service code.

If we create the service code directly in a Windows Service project, then the only way to execute that code is by installing and starting the service. Unfortunately, it's somewhat difficult to use the VS .NET debugger to step through a Windows service. This means that we typically end up debugging the code by writing a lot of application event log entries!

However, if we put all the actual service code into a DLL, we can easily call the code from a Windows Service project. More importantly we can also create a Console Application project that calls the DLLand of course we can use the VS .NET debugger to debug a console application. Once we've fully debugged and tested our code from the console, we can run it from the Windows Service application.
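Since the service logic lives in the DLL, a debugging harness needs only two calls. The sketch below assumes the Start() and Stop() methods we'll build shortly; the ServiceHost class name is invented for the example:

```csharp
using System;

// minimal console harness for debugging - a Windows Service project
// would make the same two calls from its OnStart() and OnStop() overrides
class ServiceHost
{
  static void Main()
  {
    CSLA.BatchQueue.Server.BatchQueueService.Start();
    Console.WriteLine("Batch queue running; press Enter to stop");
    Console.ReadLine();
    CSLA.BatchQueue.Server.BatchQueueService.Stop();
  }
}
```

Because the console process stays alive on Console.ReadLine(), the service's background threads keep running, and we can set breakpoints anywhere in the DLL.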

Creating the Class

Let's start by adding a BatchQueueService class to the project and writing some basic code, including the using directives for all the namespaces we'll be using, as follows:

  using System;
  using System.Collections;
  using System.Configuration;
  using System.Threading;
  using System.Messaging;
  using System.Runtime.Remoting;
  using System.Runtime.Remoting.Channels;
  using System.Runtime.Remoting.Channels.Tcp;
  using System.IO;
  using System.Runtime.Serialization.Formatters.Binary;

  namespace CSLA.BatchQueue.Server
  {
    public class BatchQueueService
    {
      static TcpServerChannel _channel;
      static MessageQueue _queue;
      static Thread _monitor;
      static System.Timers.Timer _timer;
      static volatile bool _running;
      static Hashtable _activeEntries =
        Hashtable.Synchronized(new Hashtable());
      static AutoResetEvent _sync = new AutoResetEvent(false);
      static ManualResetEvent _waitToEnd =
        new ManualResetEvent(false);

      static BatchQueueService()
      {
        _timer = new System.Timers.Timer();
        _timer.Elapsed +=
          new System.Timers.ElapsedEventHandler(Timer_Elapsed);
      }

      public static string Name
      {
        get { return LISTENER_NAME; }
      }
    }
  }
Tip 

We'll create the LISTENER_NAME property and Timer_Elapsed() method later on.

Notice that everything is marked as static . The code in this class will be invoked by the Windows service and our Server.BatchQueue objects, and by making all the methods static , we make it very easy to use from any other code in our component.

Moving on to specifics, we declare a TcpServerChannel variable, which will handle inbound remoting requests from clients. This is different from when we created the server-side DataPortal host, because in this case we're not using IIS to host our code. Since we're creating our own remoting host as a Windows service, we need to configure remoting to listen for requests manually. With the DataPortal, we were able to allow IIS to handle those details.

We also declare a MessageQueue variable that will provide us with access to our underlying MSMQ queue.

One rule about Windows service creation is that the main thread can never be blocked, or at least it must always be free to interact with the Windows Service manager in a timely manner. This means that our main processing must happen on a background thread, and so we declare the _monitor variable, which will be the thread on which we do all our work.

We also have a System.Timers.Timer object that raises its events on a background thread from the thread pool. We'll be using it to wake up our process so it can run entries that are being held until a specific time. Since we're creating a Windows service, we can't use any "busy wait" techniques; instead, we need to suspend our threads anytime they're not doing productive work. The timer will fire at the time when our next entry should be processed, and it will unblock the main thread in our service so that it can launch the batch entry.

Tip 

For more information on multithreading in .NET, please refer to Visual Basic .NET Threading Handbook . [2]

The _running variable is used to indicate whether we're trying to exit. When creating multithreaded applications, it can be very difficult to shut the application down properly, since we need to allow all our threads to terminate before we're completely done. We'll be using this variable as a flag to indicate that we're trying to shut down.

The _activeEntries variable points to a synchronized Hashtable object, as shown here:

 static Hashtable _activeEntries = Hashtable.Synchronized(new Hashtable()); 

_activeEntries will contain references to the BatchEntry objects that are currently executing, and it will be accessed by multiple threads, which can be awkward. We can't allow multiple threads to alter the data in the Hashtable at the same time. Fortunately, the .NET Framework takes this into account, and provides a synchronized wrapper for the Hashtable object. We can use Microsoft's code rather than trying to implement our own!

Tip 

This is true for most objects in the System.Collections namespace, so we can easily get synchronized wrappers for Queue , Stack , and other collection objects as well.
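For instance, a thread-safe Queue can be obtained the same way. This standalone snippet is purely illustrative:

```csharp
using System;
using System.Collections;

class SyncWrapperDemo
{
  static void Main()
  {
    // wrap a Queue so every method locks internally, making the
    // wrapper safe to share across threads without extra code
    Queue q = Queue.Synchronized(new Queue());
    q.Enqueue("job1");
    q.Enqueue("job2");
    Console.WriteLine(q.IsSynchronized);  // True - this is the wrapper
    Console.WriteLine(q.Dequeue());       // job1
  }
}
```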

Finally, we have the _sync and _waitToEnd variables that point to synchronization objects that we'll use to block our main thread when it's not doing productive work. For instance, if there's nothing in the queue to process, the main thread will block on _sync . When a new entry is placed into the queue, the Enqueue() method (running on a background thread through remoting) will signal the _sync object, thus unblocking our main thread so that it can process the new entry.

Tip 

Remember that this code will be running on a background thread, not on the primary thread for our Windows service. Because we can't block the primary thread, we're creating our own main thread that will run our code and can block as needed.
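The block-and-signal pattern can be sketched in isolation like this (the SignalDemo class is invented for the example; in the real service, Enqueue() plays the role of the signaling thread):

```csharp
using System;
using System.Threading;

class SignalDemo
{
  // false means the event starts out unsignaled, so WaitOne() blocks
  static AutoResetEvent _sync = new AutoResetEvent(false);

  static void Main()
  {
    Thread worker = new Thread(new ThreadStart(WaitForWork));
    worker.Start();
    // simulate a remoting thread submitting a new entry a bit later
    Thread.Sleep(100);
    _sync.Set();   // wake the blocked thread
    worker.Join();
  }

  static void WaitForWork()
  {
    Console.WriteLine("waiting for an entry...");
    _sync.WaitOne();   // block here until another thread calls Set()
    Console.WriteLine("entry received");
  }
}
```

Because it's an AutoResetEvent, each Set() releases exactly one waiting thread and the event automatically resets to unsignaled, ready for the next cycle.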

The use of all these variables will become clearer as we implement the remainder of our service code.

Starting the Service

When our service first starts up (either from a Windows service or from a console application, for debugging) it needs to do quite a bit of setup work. We need to open our MSMQ queue (and create it if necessary). Then, the main processing thread needs to be started so that it can read entries from the queue and process them. Finally, we need to configure remoting so that clients can call our Server.BatchQueue object to submit, remove, or list the entries in the queue.

All of this will be done in a Start() method, which can be called by the Windows service as it starts up, or by a test console application for debugging purposes, as follows:

  public static void Start()
  {
    _timer.AutoReset = false;

    // open and/or create queue
    if(MessageQueue.Exists(QUEUE_NAME))
      _queue = new MessageQueue(QUEUE_NAME);
    else
      _queue = MessageQueue.Create(QUEUE_NAME);
    _queue.MessageReadPropertyFilter.Extension = true;

    // start reading from queue
    _running = true;
    _monitor = new Thread(new ThreadStart(MonitorQueue));
    _monitor.Name = "MonitorQueue";
    _monitor.Start();

    // start remoting for Server.BatchQueue
    if(_channel == null)
    {
      // set application name (virtual root name)
      RemotingConfiguration.ApplicationName = LISTENER_NAME;

      // set up channel
      Hashtable properties = new Hashtable();
      properties["name"] = "TcpBinary";
      properties["port"] = PORT.ToString();
      BinaryServerFormatterSinkProvider svFormatter =
        new BinaryServerFormatterSinkProvider();
      //TODO: uncomment the following lines for .NET 1.1
      //svFormatter.TypeFilterLevel =
      //  System.Runtime.Serialization.Formatters.TypeFilterLevel.Full;
      _channel = new TcpServerChannel(properties, svFormatter);
      ChannelServices.RegisterChannel(_channel);

      // register our class
      RemotingConfiguration.RegisterWellKnownServiceType(
        typeof(Server.BatchQueue), "BatchQueue.rem",
        WellKnownObjectMode.SingleCall);
    }
    else
      _channel.StartListening(null);
  }

First, we open (and possibly create) the MSMQ queue where we'll store all pending and scheduled entries, as shown here:

 // open and/or create queue
 if(MessageQueue.Exists(QUEUE_NAME))
   _queue = new MessageQueue(QUEUE_NAME);
 else
   _queue = MessageQueue.Create(QUEUE_NAME);

The name of the queue is loaded from the application configuration file via the QUEUE_NAME property, as shown here:

  static string QUEUE_NAME
  {
    get
    {
      return @".\private$\" +
        ConfigurationSettings.AppSettings["QueueName"];
    }
  }

Notice that we're using a private queue here; only our service code will ever interact with the queue, so there's no need for it to be public. Using a private queue is simpler, because we don't need to have Active Directory installed in our environment; we just need to have the messaging subsystem installed on our server.

 _queue.MessageReadPropertyFilter.Extension = true; 

We'll be using the Extension property on our queued messages to store the execution time of our tasks. The Extension property is a Byte array that we can use to store application-specific information. By default, this value isn't retrieved when a message is read from the queue, so we specifically indicate that we want this value to be retrieved. Now that the queue is open, we can start the background thread that runs our main code to process messages in the queue, as follows:

 // start reading from queue
 _running = true;
 _monitor = new Thread(new ThreadStart(MonitorQueue));
 _monitor.Name = "MonitorQueue";
 _monitor.Start();

We set the _running variable to true , indicating that we want to continue running. Setting it to false indicates that we're trying to shut down and that our code should exit.

We then create a new thread object that will execute our MonitorQueue() method (which we'll create shortly), give the thread a meaningful name for debugging purposes, and start it. At this point, the main thread starts reading entries from the MSMQ queue and executing them.

The final step during startup is to configure remoting so that we can accept client requests. We start by configuring our TcpServerChannel object (if necessary), as follows:

 // start remoting for Server.BatchQueue
 if(_channel == null)
 {
   // set application name (virtual root name)
   RemotingConfiguration.ApplicationName = LISTENER_NAME;

   // set up channel
   Hashtable properties = new Hashtable();
   properties["name"] = "TcpBinary";
   properties["port"] = PORT.ToString();
   BinaryServerFormatterSinkProvider svFormatter =
     new BinaryServerFormatterSinkProvider();
   //TODO: uncomment the following lines for .NET 1.1
   //svFormatter.TypeFilterLevel =
   //  System.Runtime.Serialization.Formatters.TypeFilterLevel.Full;
   _channel = new TcpServerChannel(properties, svFormatter);
   ChannelServices.RegisterChannel(_channel);

Under .NET 1.0, this wasn't quite so complex, but under 1.1 we need to set the TypeFilterLevel property on the formatter object, which requires a bit of extra code. If we don't set this property, remoting will prevent any system objects (such as collections) from being transferred across the network by valueand we count on that functionality. This change was made by Microsoft to tighten the default security of the .NET Framework.

Notice that the name (virtual root) of our remoting object and the port on which we listen come from the application configuration file, via the LISTENER_NAME and PORT properties, as shown here:

  static string LISTENER_NAME
  {
    get
    {
      return ConfigurationSettings.AppSettings["ListenerName"];
    }
  }

  static int PORT
  {
    get
    {
      return Convert.ToInt32(
        ConfigurationSettings.AppSettings["ListenerPort"]);
    }
  }

By reading these values from the configuration file, we provide for flexible deployment and configuration of the service. When this service is installed on a server, an administrator can ensure that there are no port or virtual root collisions on the server by adjusting these values.
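A matching appSettings section in the service's configuration file might look like the following sketch. The key names come from the properties shown in this chapter; the values are purely illustrative:

```xml
<configuration>
  <appSettings>
    <!-- illustrative values; adjust per server -->
    <add key="QueueName" value="CSLABatchQueue" />
    <add key="ListenerName" value="BatchQueueServer" />
    <add key="ListenerPort" value="5050" />
    <add key="MaxActiveEntries" value="2" />
  </appSettings>
</configuration>
```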

Once we have a channel configured, we can register our Server.BatchQueue class so that it's available, as follows:

   // register our class
   RemotingConfiguration.RegisterWellKnownServiceType(
     typeof(Server.BatchQueue), "BatchQueue.rem",
     WellKnownObjectMode.SingleCall);
 }
 else
   _channel.StartListening(null);

At this point, remoting is configured, and our service will accept inbound requests from clients. Each inbound request will cause the creation of a Server.BatchQueue object running on a background thread from the thread pool.

Stopping the Service

Because we're running in a multithreaded environment in which our main processing is on one background thread, remoting requests are on other threads, and our batch jobs are executing on yet other threads, stopping the service is somewhat complex. Before we can close, we need to ensure that all these threads have completed, and while we're waiting for them to complete, we need to ensure that new threads aren't started up!

Fortunately, we can use various thread-synchronization objects and techniques to simplify the coding of all this. The trouble is that any simple mistake in this code can cause our service to shut down improperly, or to remain running indefinitely! The Stop() method is called by the Windows service or a console application, as follows:

  public static void Stop()
  {
    // stop remoting for Server.BatchQueue
    _channel.StopListening(null);

    // signal to stop working
    _running = false;
    _sync.Set();
    _monitor.Join();

    // close the queue
    _queue.Close();

    if(_activeEntries.Count > 0)
    {
      // wait for work to end
      _waitToEnd.WaitOne();
    }
  }

First, we stop listening for client requests via remoting, thus preventing clients from submitting new entries, as shown here:

 // stop remoting for Server.BatchQueue
 _channel.StopListening(null);

Then we tell our main processing thread to stop by setting _running to false. Of course, that thread might be blocked by the _sync object, so we signal the _sync object to unblock the thread. It may still take some time for that thread to stop, and we don't want to go any further until it does, so we call the Join() method on that thread, which suspends our current thread until the _monitor thread terminates, as follows:

 // signal to stop working
 _running = false;
 _sync.Set();
 _monitor.Join();

 // close the queue
 _queue.Close();

Once the queue-reading thread is terminated, we can close the MSMQ queue, since nothing will be interacting with it.

At this point, we're no longer accepting new client requests, and our queue-reading thread is terminated, so we won't start any new batch jobs. However, there might still be active batch jobs running, and we're not done until they're complete, too:

 if(_activeEntries.Count > 0)
 {
   // wait for work to end
   _waitToEnd.WaitOne();
 }

If there are active jobs running, then we wait on the _waitToEnd synchronization object. When the last job completes, it will signal this object, unblocking our thread so that we can terminate.
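The Execute() method in BatchEntry calls BatchQueueService.Deactivate(this) when a job finishes. That method's implementation isn't shown in this excerpt, but given the fields we've declared, a plausible sketch might look like the following fragment (the body here is an assumption for illustration, not the actual implementation):

```csharp
// ASSUMED sketch of Deactivate() - not shown in this excerpt
internal static void Deactivate(BatchEntry entry)
{
  // remove the completed job from the active list
  _activeEntries.Remove(entry.Info.ID);

  if(_running)
  {
    // wake the monitor thread so it can launch pending entries
    _sync.Set();
  }
  else if(_activeEntries.Count == 0)
  {
    // shutting down, and this was the last job - release Stop()
    _waitToEnd.Set();
  }
}
```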

Processing Queued Entries

When our service starts up, it creates a background thread to run a MonitorQueue() method. This method is responsible for reading messages from the MSMQ queue and launching batch entries when appropriate. While this process is fairly complex, we'll break it down into smaller parts, starting with the core loop, as shown here:

  // this will be running on a background thread
  static void MonitorQueue()
  {
    while(_running)
    {
      ScanQueue();
      _sync.WaitOne();
    }
  }

This loop will run until the service shuts down by having _running set to false . Notice that when we're not scanning the queue for entries to process, we're blocked on the _sync object due to the call to its WaitOne() method. The WaitOne() method blocks our thread until some other thread signals the _sync object by calling its Set() method.

The _sync object is signaled in the following cases:

  • When a new entry is submitted by a client

  • When the timer object fires to indicate that a scheduled entry should be run

  • When an active batch-entry completes

  • When we're shutting down and the Stop() method wants us to terminate

In the first three cases, when we wake up we'll immediately call the ScanQueue() method, which scans the queue to see if there are any entries that should be executed. Now, in normal MSMQ code, we'd simply call the MessageQueue object's Receive() method to read the next entry in the queue. Unfortunately, this won't work in our case.

The trouble here is that MSMQ has no concept of scheduled messages. It understands priorities, and always processes the highest priority messages first, but it has no way of leaving messages in the queue if they're not ready for processing. Because of this, we can't simply call the Receive() method to read the next message from the queue. Instead, we need to scan the queue manually to see if any entries should actually be processed at this time.

While we're doing this, we also need to see if there are any entries that are scheduled for future processing. If there are, then we need to find out when the next one is to be executed so that we can set our timer object to fire at that time, as follows:

  // this is called by MonitorQueue
  static void ScanQueue()
  {
    Message msg;
    DateTime holdUntil;
    DateTime nextWake = DateTime.MaxValue;

    MessageEnumerator en = _queue.GetMessageEnumerator();
    while(en.MoveNext())
    {
      msg = en.Current;
      holdUntil = DateTime.Parse(
        System.Text.Encoding.ASCII.GetString(msg.Extension));
      if(holdUntil <= DateTime.Now)
      {
        if(_activeEntries.Count < MAX_ENTRIES)
        {
          ProcessEntry(_queue.ReceiveById(msg.Id));
        }
        else
        {
          // the queue is busy, go to sleep
          return;
        }
      }
      else
      {
        if(holdUntil < nextWake)
        {
          // find the minimum holdUntil value
          nextWake = holdUntil;
        }
      }
    }
    if(nextWake < DateTime.MaxValue && nextWake > DateTime.Now)
    {
      // we have at least one entry holding, so set the
      // timer to wake us when it should be run
      _timer.Interval =
        nextWake.Subtract(DateTime.Now).TotalMilliseconds;
      _timer.Start();
    }
  }

Manually scanning through the entries in the queue is actually pretty easy. We simply create a MessageEnumerator object, which is like a collection of entries in the queue. This object can be used to loop through all the entries by calling its MoveNext() method, as shown here:

 MessageEnumerator en = _queue.GetMessageEnumerator();
 while(en.MoveNext())

We're using the Extension property of each MSMQ Message object to store the date and time when the entry should be processed. For each message, we take this value and convert it to a DateTime value so that we can easily see if its date or time is in the future, as follows:

 msg = en.Current;
 holdUntil = DateTime.Parse(
   System.Text.Encoding.ASCII.GetString(msg.Extension));
 if(holdUntil <= DateTime.Now)
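The code that writes the Extension value when an entry is submitted isn't shown in this excerpt, but it presumably mirrors this read side: the DateTime is formatted as text and ASCII-encoded into the byte array. A standalone round-trip sketch (purely illustrative):

```csharp
using System;
using System.Text;

class ExtensionDemo
{
  static void Main()
  {
    // on the send side, the HoldUntil time would be stored
    // roughly like this before the message is sent:
    DateTime holdUntil = new DateTime(2004, 1, 1, 2, 30, 0);
    byte[] extension = Encoding.ASCII.GetBytes(holdUntil.ToString());
    // msg.Extension = extension;  // set on the MSMQ Message object

    // ScanQueue() reverses the process when reading each message
    DateTime parsed =
      DateTime.Parse(Encoding.ASCII.GetString(extension));
    Console.WriteLine(parsed == holdUntil);  // True - seconds round-trip cleanly
  }
}
```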

If the message is scheduled to run now (or in the past), then we'll execute it immediately (assuming the queue isn't busy), as follows:

 if(_activeEntries.Count < MAX_ENTRIES)
 {
   ProcessEntry(_queue.ReceiveById(msg.Id));
 }
 else
 {
   // the queue is busy, go to sleep
   return;
 }

The MAX_ENTRIES property returns the maximum number of worker threads that we can run, based on the following configuration file setting:

  static int MAX_ENTRIES
  {
    get
    {
      return Convert.ToInt32(
        ConfigurationSettings.AppSettings["MaxActiveEntries"]);
    }
  }
Tip 

Note that the configuration file settings are read once, when the Windows service starts up. They're then cached in memory by the .NET runtime. To change these settings, we need to stop and restart the service.

If the number of active jobs is less than the maximum, then we call ProcessEntry() to execute the entry. Otherwise, we can exit the entire method because the queue is busy. As soon as one of the active entries completes, it will signal the _sync object, thus causing our thread to scan the queue again and run any pending entries.

If we do want to process the entry, we call ProcessEntry(), passing it the Message object from the queue. Notice that here we're calling a "receive" method, ReceiveById(). This not only gives us the Message object, but also removes the message from the queue.

On the other hand, if this particular entry is scheduled for future execution, we see if its scheduled date is the soonest we've seen, and if so we record its date and time, as follows:

 else
 {
   if(holdUntil < nextWake)
   {
     // find the minimum holdUntil value
     nextWake = holdUntil;
   }
 }

After we've looped through all the entries in the queue, we check to see if there are entries scheduled in the future. If there are, then we set our timer object to fire when the next entry is to be run, as shown here:

 if(nextWake < DateTime.MaxValue && nextWake > DateTime.Now)
 {
   // we have at least one entry holding, so set the
   // timer to wake us when it should be run
   _timer.Interval =
     nextWake.Subtract(DateTime.Now).TotalMilliseconds;
   _timer.Start();
 }
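The interval arithmetic here is just "milliseconds from now until the next scheduled entry." A minimal, standalone sketch of that calculation, using a hypothetical entry due two seconds from now and no queue at all:

```csharp
using System;
using System.Timers;

class TimerSketch
{
    static void Main()
    {
        // Assume one holding entry, scheduled 2 seconds from now.
        DateTime nextWake = DateTime.Now.AddSeconds(2);

        Timer timer = new Timer();
        timer.AutoReset = false; // fire once; the scan method restarts it as needed

        // Milliseconds until the next entry is due - this is the value
        // assigned to _timer.Interval in ScanQueue().
        double interval = nextWake.Subtract(DateTime.Now).TotalMilliseconds;
        timer.Interval = interval;

        // A moment has passed since nextWake was computed, so the
        // interval is positive but no more than 2000ms.
        Console.WriteLine(interval > 0 && interval <= 2000);
    }
}
```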

When the timer fires, we simply signal the _sync object, releasing our main thread to scan the queue so that it can process the entry, as follows:

  // this runs on a thread-pool thread
  static void Timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
  {
    _timer.Stop();
    _sync.Set();
  }

Notice that the timer's Stop() method is called, so it won't fire again unless it's reset by the ScanQueue() method. If we have no entries scheduled for future processing, then there's no sense having the timer fire, so we just turn it off.

In the case that ScanQueue() did find an entry to execute, it calls the ProcessEntry() method. This actually does the work of launching the entry on a background thread from the thread pool, as shown here:

  static void ProcessEntry(Message msg)
  {
    // get entry from queue
    BatchEntry entry;
    BinaryFormatter formatter = new BinaryFormatter();
    entry = (BatchEntry)formatter.Deserialize(msg.BodyStream);

    // make active
    entry.Info.SetStatus(BatchEntryStatus.Active);
    _activeEntries.Add(entry.Info.ID, entry.Info);

    // start processing entry on background thread
    ThreadPool.QueueUserWorkItem(new WaitCallback(entry.Execute));
  }

The first thing we do is to deserialize the BatchEntry object out of the MSMQ Message object, as follows:

 // get entry from queue
 BatchEntry entry;
 BinaryFormatter formatter = new BinaryFormatter();
 entry = (BatchEntry)formatter.Deserialize(msg.BodyStream);
Tip 

The Body property of a Message object uses XML formatters, like those for web services. In our case, we're doing deep serialization using the BinaryFormatter , so we get a complete copy of the object, rather than a copy of just a few of its properties. This requires that we use the BodyStream property instead, so that we can read and write our own binary data stream into the Message object.

Once we have the BatchEntry object, we also have access to the four objects it contains: the worker, "state," BatchEntryInfo , and BusinessPrincipal objects.

The next thing we need to do is mark this entry as being active. Remember that the Message object is no longer in the MSMQ queue at this stage, so it's up to us to keep track of it as it runs. That's why we created the _activeEntries Hashtable object, as shown here:

 // make active
 entry.Info.SetStatus(BatchEntryStatus.Active);
 _activeEntries.Add(entry.Info.ID, entry.Info);

Since this is a synchronized Hashtable , we don't have to worry about threading issues as we add the entry to the collection. The Hashtable object takes care of any synchronization issues for us.

Tip 

Because our active entries are only stored in an in-memory Hashtable , if the Windows service or server machine crashes during the processing of a batch job, that batch job will be lost. Solving this is a nontrivial problem. Though we could easily keep a persistent list of the active jobs, there's no way to know if it's safe to rerun them when the service comes back online. We have no way of knowing if they were halfway through a task, or if restarting that task would destroy data.

The final step is to execute the BatchEntry object, which in turn will execute the worker object. We want this to run on a background thread so that it doesn't tie up any of our existing threads.

Although we could manually create a thread each time we want to run a background task, this is somewhat expensive. There's quite a bit of overhead involved in creating threads, so it's best to reuse existing threads when possible. In fact, it would be ideal to have a pool of existing threads to which we could assign tasks when needed. Luckily, the .NET Framework includes a ThreadPool class that manages exactly such a thread pool on our behalf. We can simply add our BatchEntry object's Execute() method to the list of tasks that will be processed by a thread in the pool, as follows:

 // start processing entry on background thread
 ThreadPool.QueueUserWorkItem(new WaitCallback(entry.Execute));

Note that the ThreadPool class maintains its own queue of tasks to perform. This pool is shared across our entire Windows process (it's used by remoting, timer objects, and various other .NET runtime code), and it's available for our use as well. As soon as a thread is available from the pool, it will be used to execute our code.

Tip 

The number of threads in the thread pool is managed by the .NET runtime. The maximum number of threads in the pool defaults to 25 per processor on the machine.

At this point, the batch task will run on the background thread in the thread pool, and our main thread can go back to scanning the queue for other entries that are ready to run.
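The pattern of queuing a WaitCallback and letting a pool thread pick it up can be seen in a few lines. This sketch stands alone (no MSMQ or CSLA types); the ManualResetEvent is there only so the console host waits for the pooled work item to finish, much as the service keeps running while entries execute:

```csharp
using System;
using System.Threading;

class PoolSketch
{
    static ManualResetEvent done = new ManualResetEvent(false);

    // Same (object state) signature that a WaitCallback requires -
    // this is why BatchEntry.Execute() can be queued directly.
    static void Execute(object state)
    {
        Console.WriteLine("running on pool thread: {0}",
            Thread.CurrentThread.IsThreadPoolThread);
        done.Set();
    }

    static void Main()
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(Execute));
        done.WaitOne(); // block until the pooled work item completes
    }
}
```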

Marking an Entry as Complete

Earlier, we wrote code to implement the BatchEntry object. In its Execute() method, it invokes the actual worker object via its IBatchEntry interface, as shown here:

 // now run the user's code
 _worker.Execute(_state);

This code is wrapped in a try...finally block, thereby guaranteeing that when the worker is done processing, we'll call a method on the BatchQueueService to indicate that the entry is complete, as follows:

 finally
 {
   BatchQueueService.Deactivate(this);
 }
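The guarantee we're relying on here (that the finally block runs even if the worker throws) is easy to demonstrate in isolation. In this sketch, a hypothetical worker fails partway through, and the Deactivate() call is replaced by a console message:

```csharp
using System;

class FinallySketch
{
    // Hypothetical worker that fails partway through its processing.
    static void Execute()
    {
        try
        {
            throw new Exception("worker failed");
        }
        finally
        {
            // Runs whether or not the worker threw, mirroring how
            // BatchQueueService.Deactivate() is always called.
            Console.WriteLine("Deactivate called");
        }
    }

    static void Main()
    {
        try { Execute(); }
        catch (Exception ex) { Console.WriteLine(ex.Message); }
    }
}
```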

This Deactivate() method is responsible for removing the entry from the list of active tasks. It also signals the _sync object to release the main thread so that the latter scans the queue to see if another entry should be executed, as follows:

  // called by BatchEntry when it is done processing so
  // we know that it is complete and we can start another
  // job if needed
  internal static void Deactivate(BatchEntry entry)
  {
    _activeEntries.Remove(entry.Info.ID);
    _sync.Set();
    if(!_running && _activeEntries.Count == 0)
    {
      // indicate that there are no active workers
      _waitToEnd.Set();
    }
  }

It also checks to see if we're trying to close down the service. If we're trying to do that ( _running is false ), and this was the last active entry, we signal the _waitToEnd object to indicate that it's safe to exit the entire service, as shown here:

 if(!_running && _activeEntries.Count == 0)
 {
   // indicate that there are no active workers
   _waitToEnd.Set();
 }

Remember that in the Stop() method, we block on _waitToEnd if there are any active tasks. This code therefore ensures that when the last task completes, we notify the main service thread that everything is complete so that the service can terminate. Of course, the service will only terminate if the server machine is shutting down, or if the user has requested that the service should be stopped by using the management console.

Enqueue/Dequeue/LoadEntries

The Server.BatchQueue object also accepts requests from clients to submit, remove, and list batch entries. In all three cases, the Server.BatchQueue object calls into our BatchQueueService class to enqueue, dequeue, or list the entries in the queue. The Enqueue() method adds a new entry to the queue:

  internal static void Enqueue(BatchEntry entry)
  {
    Message msg = new Message();
    BinaryFormatter f = new BinaryFormatter();

    msg.Label = entry.ToString();
    msg.Priority = entry.Info.Priority;
    msg.Extension =
      System.Text.Encoding.ASCII.GetBytes(entry.Info.HoldUntil.ToString());
    entry.Info.SetMessageID(msg.Id);
    f.Serialize(msg.BodyStream, entry);
    _queue.Send(msg);
    _sync.Set();
  }

Here we set some members of the Message object, including its Label , Priority , and Extension properties. In our case, we want to store the date/time when the entry should run, so we convert that value to a byte array and put it into the property, as follows:

 msg.Label = entry.ToString();
 msg.Priority = entry.Info.Priority;
 msg.Extension =
   System.Text.Encoding.ASCII.GetBytes(entry.Info.HoldUntil.ToString());

Then we put the Message object's Id property value into our BatchEntryInfo object for later reference, as shown here:

 entry.Info.SetMessageID(msg.Id); 

Finally, we serialize the BatchEntry object (and thus the objects it contains) into a binary format, which is loaded into the MSMQ Message object's BodyStream property, as follows:

 f.Serialize(msg.BodyStream, entry); 

Once the Message object is all configured, we submit it to MSMQ, as shown here:

 _queue.Send(msg); 

MSMQ will automatically sort the entries in the queue based on their priority, and our ScanQueue() method will ensure that the entry won't run until its HoldUntil time. Of these two, the HoldUntil value has precedence. In other words, an entry will never run before its HoldUntil time. After its HoldUntil time is passed, an entry will be changed from Holding to Pending status in the queue, and will run in priority order along with any other Pending entries in the queue at that time.
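In other words, the scan loop gates on HoldUntil first, and MSMQ's priority ordering only matters among the entries that are already due. A standalone sketch of that gating rule, using two hypothetical entries rather than real messages (MSMQ is assumed to have already sorted them by priority):

```csharp
using System;

class ScheduleSketch
{
    static void Main()
    {
        DateTime now = DateTime.Now;

        // Hypothetical entries, already in priority order.
        // The higher-priority entry is held for an hour; the
        // normal-priority entry was due five minutes ago.
        string[] labels = { "high-priority, holding", "normal, due now" };
        DateTime[] holdUntil = { now.AddHours(1), now.AddMinutes(-5) };

        for (int i = 0; i < labels.Length; i++)
        {
            // HoldUntil always wins: a held entry never runs early,
            // no matter how high its priority.
            if (holdUntil[i] <= now)
                Console.WriteLine("run: " + labels[i]);
            else
                Console.WriteLine("hold: " + labels[i]);
        }
    }
}
```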

It's very likely that the main queue reader thread is blocked on the _sync object, so we need to wake it up so that it can process our new entry in the queue. We do this by calling the Set() method on the _sync object, as follows:

 _sync.Set(); 

At this point, the queue will be scanned and our entry will be launched, assuming, of course, that it isn't scheduled for future processing, and that the queue isn't already busy.

The client can also remove an entry from the queue. We implement this in the Dequeue() method, which finds the queue entry based on its Label value, and removes it from the MSMQ queue, as shown here:

  internal static void Dequeue(BatchEntryInfo entry)
  {
    string msgID = null;
    string label = entry.ToString();
    MessageEnumerator en = _queue.GetMessageEnumerator();
    _queue.MessageReadPropertyFilter.Label = true;
    while(en.MoveNext())
    {
      if(en.Current.Label == label)
      {
        // we found a match
        msgID = en.Current.Id;
        break;
      }
    }
    if(msgID != null)
      _queue.ReceiveById(msgID);
  }

Remember that the Message object's label is set to the ToString() value of our BatchEntryInfo object. That value contains the unique GUID we assigned to the BatchEntry when it was created, so it's a good value to use when searching for a specific entry in the queue.

We create a MessageEnumerator object and use it to scan through all the entries in the queue. If we find one with a matching Label value, we record the Message object's Id property and exit the loop. Finally, if we got a Message Id value, we remove the message from the queue based on its Id , as shown here:

 if(msgID != null)
   _queue.ReceiveById(msgID);

The last bit of functionality that we need to provide is the ability to get a list of the entries in the queue, including the entries that are actively being processed. We'll be populating a BatchEntries collection object with this data, but the data itself will come from both our Hashtable and the MSMQ queue, as follows:

  internal static void LoadEntries(BatchEntries list)
  {
    // load our list of BatchEntry objects
    BinaryFormatter formatter = new BinaryFormatter();
    Server.BatchEntry entry;

    // get all active entries
    lock(_activeEntries.SyncRoot)
    {
      foreach(DictionaryEntry de in _activeEntries)
      {
        list.Add((BatchEntryInfo)de.Value);
      }
    }

    // get all queued entries
    Message [] msgs = _queue.GetAllMessages();
    foreach(Message msg in msgs)
    {
      entry =
        (Server.BatchEntry)formatter.Deserialize(msg.BodyStream);
      entry.Info.SetMessageID(msg.Id);
      list.Add(entry.Info);
    }
  }

First, we scan through the active entries in the Hashtable , adding each entry's BatchEntryInfo object to the BatchEntries collection object, as follows:

 // get all active entries
 lock(_activeEntries.SyncRoot)
 {
   foreach(DictionaryEntry de in _activeEntries)
   {
     list.Add((BatchEntryInfo)de.Value);
   }
 }

Note that we've enclosed this in a lock block based on the Hashtable object's SyncRoot property. The lock statement is used to synchronize multiple threads, to ensure that only one thread can run the code inside the block structure at any one time. If a thread is running this code, any other thread wishing to run the code is blocked at the lock statement until the first thread exits the block.

Note 

This is a classic implementation of a critical section, which is a common synchronization technique used in multithreaded applications.

This technique is required anytime we want to enumerate through all the entries in a synchronized Hashtable object. The Hashtable object's synchronization wrapper automatically handles synchronization for adding and removing entries from the collection, but it doesn't protect the enumeration process, so we need this lock block to make sure that no other threads collide with our code.
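Here's a self-contained illustration of that rule, using a synchronized Hashtable in place of _activeEntries. The Add() calls need no explicit locking (the wrapper handles that), but the enumeration is wrapped in a lock on SyncRoot:

```csharp
using System;
using System.Collections;

class SyncDemo
{
    static void Main()
    {
        // Hypothetical stand-in for _activeEntries: a synchronized Hashtable.
        Hashtable active = Hashtable.Synchronized(new Hashtable());

        // Add() is thread-safe via the synchronization wrapper.
        active.Add(Guid.NewGuid(), "job 1");
        active.Add(Guid.NewGuid(), "job 2");

        // Enumeration is NOT protected by the wrapper, so we lock
        // on SyncRoot for the duration of the loop.
        lock (active.SyncRoot)
        {
            foreach (DictionaryEntry de in active)
                Console.WriteLine(de.Value);
        }
        Console.WriteLine(active.Count);
    }
}
```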

Once we have a list of the active entries, we scan the MSMQ queue to get a list of all the pending and future scheduled entries, as follows:

 // get all queued entries
 Message [] msgs = _queue.GetAllMessages();
 foreach(Message msg in msgs)
 {
   entry =
     (Server.BatchEntry)formatter.Deserialize(msg.BodyStream);
   entry.Info.SetMessageID(msg.Id);
   list.Add(entry.Info);
 }

Since we're creating a collection of BatchEntryInfo objects, we must deserialize the BatchEntry objects contained in each of the Message objects so that we can get at the BatchEntryInfo object. That object is then added to the BatchEntries collection object.

The end result is that the BatchEntries collection object has a list of all the active, pending, and holding batch entries in our queue. That collection object can then be returned to the client as a result of its remoting call. At this point, our CSLA.BatchQueue assembly is complete. We can move on to debugging it via a console application, and then to running it within a Windows service application.

Creating a Console Application for Debugging

Since all the actual work for our service is encapsulated in the CSLA.BatchQueue assembly, creating a host application (either a Windows service or a console application) is very easy. All we need to do is reference the appropriate CSLA assemblies, and call the Start() and Stop() methods on the CSLA.BatchQueue.Server.BatchQueueService class as appropriate.

Add a new console application named BatchQueueTest to the CSLA solution. Then add a reference to the CSLA projects that our code uses (directly or indirectly) as shown in Figure 11-7.

image from book
Figure 11-7: Adding references to the BatchQueueTest project

Though we'll only be using CSLA.BatchQueue , it relies on classes from CSLA , and CSLA relies on classes from CSLA.BindableBase .

Now we can write the code to host our service. Update Module1 as follows:

  using System;
  using CSLA.BatchQueue.Server;

  class Module1
  {
    static void Main()
    {
      Console.WriteLine("Server on thread {0}",
        AppDomain.GetCurrentThreadId());
      Console.WriteLine("Starting...");
      BatchQueueService.Start();
      Console.WriteLine("Started");
      Console.WriteLine("Press ENTER to end");
      Console.ReadLine();
      Console.WriteLine("Stopping...");
      BatchQueueService.Stop();
      Console.WriteLine("Stopped");
    }
  }

In this test, all we need to do is start up the service by calling BatchQueueService.Start() , and then wait until the user presses the Enter key. At that point, we can call the Stop() method to shut down the service.

Our code will run here just like it would in a Windows service, but we have the ability to set our console application as the startup project in VS .NET so that we can run it within the debugger. We also need to add a new item to the project: an application configuration file. App.config should contain the following:

 <?xml version="1.0" encoding="utf-8" ?>
 <configuration>
   <appSettings>
     <add key="Authentication" value="Windows" />
     <add key="QueueName" value="BatchQueue" />
     <add key="ListenerName" value="CSLABatch" />
     <add key="ListenerPort" value="5050" />
     <add key="MaxActiveEntries" value="2" />
   </appSettings>
 </configuration>

This defines the various configuration values we're using in our code. The specific values may need to be adjusted to work within your specific environment. For instance, port 5050 may already be in use on your server, or you may want to increase or decrease the number of entries that the queue will execute on different threads at any given time.

Note 

Remember that any DLLs containing worker object code must also reside in the same directory as BatchQueueTest.exe .

At this point, we should be able to run the batch-queue service at the console. If we put business DLLs containing our worker objects into the same directory as the console application, we can submit jobs to the queue and have them processed.

Creating the Windows Service Project

We can also create a Windows service host for the batch-queue processing code. Happily, doing so in VS .NET is pretty straightforward, as it's a standard project type. Add a new Windows Service project to the CSLA solution, and name it BatchQueue .

Add references to the CSLA.BindableBase , CSLA , and CSLA.BatchQueue projects, just like we did for the console application. Also, add an App.config file to the project with the same contents as the preceding console application.

Rename Service1 to BatchQueue (both the cs file and all references to Service1 in the file's code; you should find five of them). In that file, add a using directive for the CSLA.BatchQueue.Server namespace, as shown here:

 using System.ServiceProcess;
 using CSLA.BatchQueue.Server;

Then all we need to do is call our Start() and Stop() methods in the service's OnStart() and OnStop() handlers, as follows:

 protected override void OnStart(string[] args)
 {
   BatchQueueService.Start();
 }

 protected override void OnStop()
 {
   BatchQueueService.Stop();
 }

And that's all there is to it! Our BatchQueueService class contains all the code to make the service work, including ensuring that all the actual work occurs on background threads, so the main service thread remains available for interaction with the operating system and the Windows Service manager.

The only other thing we need to do is add an installer class to the project. This is required for all Windows service assemblies, because it contains the information necessary to install and configure the service on the server. This installer object is automatically invoked by the .NET runtime as our service is installed or uninstalled on a system.

Add a new installer class called BatchQueueInstaller to the project as shown in Figure 11-8.

image from book
Figure 11-8: Adding an Installer class to the project

It needs to use the System.ServiceProcess namespace, as shown here:

  using System.ServiceProcess;  

Within the Installer class itself, we need to declare and initialize a couple of variables, as follows:

  ServiceInstaller _serviceInstaller = new ServiceInstaller();
  ServiceProcessInstaller _processInstaller =
    new ServiceProcessInstaller();

  void InitInstaller()
  {
    _processInstaller.Account = ServiceAccount.LocalSystem;
    _serviceInstaller.StartType = ServiceStartMode.Automatic;
    _serviceInstaller.ServiceName = "CSLABatchQueue";
    Installers.Add(_serviceInstaller);
    Installers.Add(_processInstaller);
  }

The ServiceInstaller object is used to specify information about the Windows service, including its startup mode and the name of the service. The ServiceProcessInstaller object is used to specify the login credentials for the Windows service process. In this case, we're indicating that the service should run under the local System account on the server.

Both of these objects are then added to the Installers collection so that they're available to the installation process.

We need to initialize these variables in three different cases: install, uninstall, and rollback of an install. Each of these cases causes an event to be raised, so all we need to do is handle these events and call the initialization method from each, as shown here:

  private void BatchQueueInstaller_BeforeInstall(object sender,
    System.Configuration.Install.InstallEventArgs e)
  {
    InitInstaller();
  }

  private void BatchQueueInstaller_BeforeRollback(object sender,
    System.Configuration.Install.InstallEventArgs e)
  {
    InitInstaller();
  }

  private void BatchQueueInstaller_BeforeUninstall(object sender,
    System.Configuration.Install.InstallEventArgs e)
  {
    InitInstaller();
  }

This code is run by the .NET runtime as our service is being installed or uninstalled, or during the rollback of an installation. InitInstaller() configures .NET Framework objects to provide the information required by the .NET runtime to install or uninstall the service properly. We simply make sure to call this method before any attempt is made to install, rollback the install, or uninstall the service.

At this point, our Windows service is complete, and can therefore be installed on a server by using the installutil.exe command-line utility, as shown here:

  > installutil batchqueue.exe  

This .NET utility will install the service on the machine by invoking our installer object. To uninstall the service, use the /u switch, as follows:

  > installutil batchqueue.exe /u  
Tip 

Note that uninstall won't complete while the Service Management console is open. If the console is open, it must be closed and reopened to allow the uninstall to complete.

Once the service is installed, we can use the Windows Service Management console to start, stop, and otherwise interact with the service as shown in Figure 11-9.

image from book
Figure 11-9: Use the Services control panel to start or stop the service.

When the service is started, it will open the MSMQ queue and listen for client requests via remoting. At this point, with the service running, we can create batch worker DLLs and then submit them from client applications.

Creating and Running Batch Jobs

Now that we have a batch-processing utility, we should walk through the process of actually using it. This comes in two parts: creating code to run in the batch processor, and creating client code to submit the task.

Creating a Batch Job

To create a batch job, all we need to do is create an assembly that references CSLA.BatchQueue and includes a class that implements the IBatchEntry interface. What we do beyond that is entirely up to us: we could be generating a report, processing data from a database, or doing virtually any other task that we might want to run on the server.

For instance, in our ProjectTracker solution, we can add a new Class Library project named PTBatch . Since this code will interact with our CLSA-based ProjectTracker.Library.dll assembly, we need to reference the following assemblies:

  • CSLA.dll

  • CSLA.BindableBase.dll

  • CSLA.Server.DataPortal.dll

  • CSLA.Server.ServicedDataPortal.dll

  • CSLA.BatchQueue.dll

The first four of these assemblies should be referenced using the Browse button in the Add Reference dialog box, and they should come from the ProjectTracker.Library project's bin directory. The CSLA.BatchQueue.dll assembly should be referenced from the bin directory of the CSLA.BatchQueue project. Of course, we also need to reference the ProjectTracker.Library project to get access to our business classes.

In this new project, we can add a class named ProjectJob with the following code:

  using System;
  using CSLA.BatchQueue;
  using ProjectTracker.Library;

  namespace PTBatch
  {
    public class ProjectJob : IBatchEntry
    {
      void IBatchEntry.Execute(object state)
      {
        ProjectList projects = ProjectList.GetProjectList();
        Project project;
        foreach(ProjectList.ProjectInfo info in projects)
        {
          project = Project.GetProject(info.ID);
          project.Name = project.Name + " (batch)";
          project.Save();
        }
      }
    }
  }

The class implements the IBatchEntry interface so that it can be invoked by our batch-processing service. Within the Execute() method, we place the code that's to be run on the server. In this case, we're simply getting a list of all the projects in the system, then looping through them to update their Name properties.

Tip 

In fact, this code illustrates a rather poor approach to updating all the Name values of our Project objects. This sort of thing is probably better done through a stored procedure within the database server itself. If we must do this sort of thing on the application server, we'd be better off using direct ADO.NET code to update the records. I've used this implementation merely as an example of how to construct a batch job that runs on the server to do some business processing, not as an ideal example for large-scale batch processing.

Build this assembly, and then copy the PTBatch.dll file from the bin directory to the directory containing our BatchQueue.exe Windows service file, so that it's available to the batch-processing service.

Note 

We also need to copy ProjectTracker.Library.dll into the batch-service directory, as it's required by PTBatch.dll .

The caveat here is that our batch code will be using the application configuration file for the batch-processing service, which means that the configuration file for the service must include any configuration entries that will be used by our batch worker code. In our case, this means that the standard CSLA .NET configuration entries for security, and either database connection strings or DataPortal URLs, must be included. We'll have a look at such a file in the next section.

Also be aware that the user account under which the Windows service is running must have access to the databases, and to any other system resources used by our code. The System account of the server is rarely ideal for this purpose, so we should create an account specifically for the batch service to run under. This way, we can grant that account appropriate database access, and so forth.

Submitting a Batch Job

We can now create code in our client to submit the batch job. We can do this from Windows, the Web, or web-service interface code, or from other batch-job objects.

In any case, the client project needs to have a reference to CSLA.BatchQueue.dll , and its configuration file needs to have an entry specifying the URL of the default batch server. For instance, add the following line to the App.config file in the PTWin project:

  <add key="DefaultBatchQueueServer"
       value="tcp://localhost:5050/cslabatch/batchqueue.rem" />
Tip 

You'll have to change the port and virtual root name to match those from the BatchQueue host application's configuration file.

With that done, we can create a BatchQueue object and call its Submit() method to submit a job for processing. In most cases, we'll use the BatchJobRequest object to do this, so we don't need to install the worker DLL on the client workstation.

For instance, in our Windows Forms client, we could add a menu option to submit our project batch job like this:

  private void mnuProjectUpdate_Click(object sender,
    System.EventArgs e)
  {
    CSLA.BatchQueue.BatchQueue batch =
      new CSLA.BatchQueue.BatchQueue();
    batch.Submit(
      new CSLA.BatchQueue.BatchJobRequest("PTBatch.ProjectJob", "PTBatch"));
  }

All we have to do is to create an instance of the BatchQueue object, and then call its Submit() method, and pass a worker object as a parameter. In this case, we're passing a BatchJobRequest object that has been initialized with the type and assembly name of our PTBatch.ProjectJob class.

To see this work, take the following steps:

  • Copy PTBatch.dll and ProjectTracker.Library.dll to the directory containing the batch-service application.

  • Add the CSLA .NET and application-specific configuration-file entries to the service-configuration file.

  • Set up the Windows service to run under a user account that has access to our database server.

By copying PTBatch.dll and ProjectTracker.Library.dll to the batch-service directory, we make that code available for use by the batch engine. This is important, since the BatchJobRequest object we created in PTWin contains the type and assembly information to create an instance of our worker class from the PTBatch.dll assembly.

Since our worker code will run in the batch-server process, it will use the configuration file settings of the batch-server application. Because of this, we need to make sure that the configuration file includes the database-connection strings required by our code to access the PTracker database. The end result is that the configuration file should look like this:

 <?xml version="1.0" encoding="utf-8" ?>
 <configuration>
   <appSettings>
     <add key="Authentication" value="CSLA" />
     <add key="QueueName" value="BatchQueue" />
     <add key="ListenerName" value="CSLABatch" />
     <add key="ListenerPort" value="5050" />
     <add key="MaxActiveEntries" value="2" />
     <add key="DB:PTracker"
          value="data source=server;initial catalog=PTracker;integrated security=SSPI" />
     <add key="DB:Security"
          value="data source=server;initial catalog=Security;integrated security=SSPI" />
     <add key="DefaultBatchQueueServer"
          value="tcp://localhost:5050/cslabatch/batchqueue.rem" />
   </appSettings>
 </configuration>

We include not only the settings to configure the batch server, but also the database connection strings for our databases.

Also notice that we've included the DefaultBatchQueueServer setting in this file, too. This is useful because it allows our worker code to submit new batch entries to the queue, if we so desire. (An example where this could be important is a daily job that needs to resubmit itself to run the next day.) Of course, another alternative would be to extend the batch scheduler to understand the concept of recurring jobs.

Before we can run the PTWin application and choose the menu option to submit a new entry, the queue service must be running. We can either start the Windows service, or we can run the console-application version of the service. Either way, once it's running, the service listens on the port we specified (5050 in the example) for inbound remoting requests.

The PTWin client application can then call the Submit() method of the client-side BatchQueue class, which contacts the server via remoting to submit the job. The end result should be that the batch job runs, all of our Project objects' Name values are updated, and an entry is written into the application event log that indicates success, as shown in Figure 11-10.

image from book
Figure 11-10: Completed batch job log entry

Using this technique, we can easily create and submit almost any type of batch job to the server, including the creation of reports, processing of large data sets, and so forth.

[2] Tobin Titus, et al., Visual Basic .NET Threading Handbook (Berkeley, CA: Apress, 2002).



Expert C# Business Objects