Chapter 7: Runtime Services


This chapter covers the runtime services architecture within Windows Workflow Foundation. The Workflow API defines the base infrastructure for several runtime service types, including workflow persistence and tracking, and several out-of-the-box services implement that infrastructure. This chapter describes these services and provides examples of each.

The second half of the chapter covers custom runtime service development: alternatives to the out-of-the-box services, how to extend the base framework, and examples that illustrate how to develop custom services.

Out-of-the-Box Services

The following sections discuss each runtime service type provided out of the box with Windows Workflow Foundation. They describe the concepts behind each service type, why it was developed, and how to use it.

Table 7-1 lists each out-of-the-box service type and corresponding implementations.

Table 7-1: Out-of-the-Box Services

  • Scheduling - Responsible for creating and scheduling new workflow instances for execution. Out-of-the-box implementations: DefaultWorkflowSchedulerService, ManualWorkflowSchedulerService.

  • Work Batch - Enables behavior to maintain a stable and consistent execution environment. Out-of-the-box implementations: DefaultWorkflowCommitWorkBatchService, SharedConnectionWorkflowCommitWorkBatchService.

  • Persistence - Provides an infrastructure for maintaining workflow instance state. Out-of-the-box implementation: SqlWorkflowPersistenceService.

  • Tracking - Enables rich logging and tracking capabilities to monitor workflow execution. Out-of-the-box implementation: SqlTrackingService.

  • Workflow Loader - Provides functionality to create workflow instances when the CreateWorkflow method is called. Out-of-the-box implementation: DefaultWorkflowLoaderService.

  • Data Exchange - Manages custom communication services. Out-of-the-box implementation: ExternalDataExchangeService.

Scheduling Services

Scheduling services are those that inherit from WorkflowSchedulerService. These services dictate the threading model that workflow instances use when started. The workflow runtime has to have one - and only one - scheduling service specified before starting. It is also important to have options related to threading because workflows can be hosted in any application type. Windows Forms applications, for example, might require a different threading model than an ASP.NET web application does.

Think about a Windows Forms application. Conventional wisdom tells developers to keep their Windows applications as fluid and responsive as possible. This commonly means running intense or long-running processes on background threads.

Conversely, consider an ASP.NET application. Spawning threads to perform work in the background doesn’t add anything to the overall user experience. Any work that needs to be done should be completed by the time a response is returned to the client - not to mention the fact that IIS does not like it when you start to use a lot of its threads.

Considerations regarding ASP.NET and Windows Workflow Foundation are covered in Chapter 13.

Because requirements differ depending on the situation, Windows Workflow Foundation offers two distinct scheduling services out of the box. As its name implies, DefaultWorkflowSchedulerService is used by default when no scheduling service is specified explicitly. ManualWorkflowSchedulerService is also provided for use when required. The following sections describe each of these scheduling services.

DefaultWorkflowSchedulerService

As mentioned, DefaultWorkflowSchedulerService is automatically used by the workflow runtime when no other scheduling service is specified because the runtime has to have a scheduling service in order to start workflow instances.

This scheduling service uses threads from the thread pool to spawn workflow instances. Therefore, the host functions on its own thread, and workflow instances use their own thread to execute. One scenario where this is essential is in Windows Forms applications where the UI thread needs to remain available for user interaction and should not be bogged down with long-running or intensive processes.

Commonly in examples and in the host that is generated when you create a new console workflow project, you see code that forces the host to wait until after a workflow is completed. This is because the runtime is using DefaultWorkflowSchedulerService, and the console application completes and exits as soon as a workflow instance is started on another thread. Obviously, this is not optimal because any workflows started never get a chance to finish. The following code is an example of this pattern:

  using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
  {
      AutoResetEvent waitHandle = new AutoResetEvent(false);
      workflowRuntime.WorkflowCompleted += delegate(object sender,
          WorkflowCompletedEventArgs e)
      {
          // notify the host it's OK to continue
          waitHandle.Set();
      };

      WorkflowInstance instance =
          workflowRuntime.CreateWorkflow(typeof(Temp.Workflow1));
      instance.Start();

      // wait until the workflow instance is done
      waitHandle.WaitOne();
  }

This code waits to exit until the workflow instance completes by using the AutoResetEvent class. When the WaitOne method of this class is called, it blocks further execution on that thread until the Set method is subsequently called. That is why the WorkflowCompleted event is handled where the Set method is called. Therefore, when the workflow instance has finished executing, the final line is able to progress, and the application is able to finish.

ManualWorkflowSchedulerService

Unlike DefaultWorkflowSchedulerService, ManualWorkflowSchedulerService creates and executes workflow instances on a thread borrowed from the workflow runtime host. Therefore, you should consider using this alternative service when workflow instances should not take up too many threads or when the runtime host needs to wait for a response from the instance before doing anything else. An example of the former scenario is ASP.NET. Because ASP.NET does not like all of its threads being used up by other processes, using ManualWorkflowSchedulerService here makes perfect sense.

Unlike with DefaultWorkflowSchedulerService, you need to explicitly add ManualWorkflowSchedulerService to the workflow runtime. When you do this, ManualWorkflowSchedulerService is used in place of the default service because the runtime can have one, and only one, scheduling service. The following code illustrates how to add and use ManualWorkflowSchedulerService:

  WorkflowRuntime workflowRuntime = new WorkflowRuntime();
  ManualWorkflowSchedulerService scheduler = new ManualWorkflowSchedulerService();
  workflowRuntime.AddService(scheduler);
  workflowRuntime.StartRuntime();

  WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(MyWorkflow));
  instance.Start();
  scheduler.RunWorkflow(instance.InstanceId);

First, a new instance of the manual scheduler class is created and added to the runtime’s services using the AddService method as usual. In addition, a WorkflowInstance is created and started as it normally would be. The difference comes in when the RunWorkflow method of the scheduler object is called. This call is required to actually start your workflow’s execution. Remember, this is a blocking call until your workflow is finished executing or idles.

Because you will most likely use ManualWorkflowSchedulerService in an ASP.NET environment, there is more on this topic in Chapter 13.

Work Batch Services

Workflow work batch services, also known as commit batch services, enable you to commit work batches at specified persistence points. Work batching, which was introduced in Chapter 5, is the process of building up a queue of work and then performing that work in the context of one transaction to make sure everything ends up in a consistent and stable state. Work batch services are responsible for making sure a transaction exists before the batching occurs and then starting the batch process.

All batch services inherit from the System.Workflow.Runtime.Hosting.WorkflowCommitWorkBatchService class. This abstract class has a method called CommitWorkBatch, which performs the work described previously. Even though WorkflowCommitWorkBatchService is abstract, CommitWorkBatch is a concrete method - meaning that it can be called from any child classes by using the base keyword. You use this method when the base implementation contains only the logic you need in your new class. In addition, this method is passed a CommitWorkBatchCallback delegate that, when called, in turn calls the method that commits the work.
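
To illustrate, the following is a minimal sketch of a child class that simply defers to the base implementation. The class name and the Console call are hypothetical and only mark where custom logic could go.

  // hypothetical custom batch service; the real work is done by the base class
  public class LoggingCommitWorkBatchService : WorkflowCommitWorkBatchService
  {
      protected internal override void CommitWorkBatch(
          CommitWorkBatchCallback commitWorkBatchCallback)
      {
          // custom pre-commit logic could go here
          Console.WriteLine("About to commit a work batch...");

          // defer to the base implementation, which creates the transaction
          // and invokes the callback that commits the work
          base.CommitWorkBatch(commitWorkBatchCallback);
      }
  }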

A work batching example is included later in this chapter, in the “Developing Persistence Services” section. First, however, here’s a quick review of how this functionality is structured. If your solution requires that discrete chunks of code be queued up over time but executed at the same time, you need to create a custom class that implements the IPendingWork interface. The following example shows a skeleton class for implementing this interface:

  public class MyWork : IPendingWork
  {
      public void Commit(Transaction transaction, ICollection items)
      {
          foreach (object o in items)
          {
              // do some work on the object here
          }
      }

      public void Complete(bool succeeded, ICollection items)
      {
          // do some work when done here if you want
      }

      public bool MustCommit(ICollection items)
      {
          return true;
      }
  }

The method that does all the work here is Commit. This method is called when the batch is being committed. Again, the commitment process is kicked off by the workflow runtime when a persistence point is reached. This method is passed a collection of objects that represent each chunk of work you added. You need to loop through this collection and do whatever work is required on each object. Note that in the code, the foreach loop is simply extracting a System.Object reference from the collection. However, the objects in this collection can be of any type.

The next step is to actually get objects into this collection so they can be iterated over. To do this, you use the static WorkBatch property of the WorkflowEnvironment class. You can add objects to this collection using its Add method from anywhere in your code, whether it’s within the host or inside the workflow. This method takes an IPendingWork reference and an object reference. This object reference is added to the collection, which is then passed to the Commit method of your IPendingWork class. This is a pretty simple, but very elegant, architecture for transactional behavior in your workflow solutions.
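
For example, given the MyWork class shown previously, a single line queues a unit of work from the host or from within the workflow. The string passed here is just a placeholder for whatever work item object your solution needs.

  // queue a work item; it is handed back to MyWork.Commit in the items
  // collection when the batch is committed at the next persistence point
  WorkflowEnvironment.WorkBatch.Add(new MyWork(), "work item data");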

DefaultWorkflowCommitWorkBatchService

Because a workflow batch service is always required, the DefaultWorkflowCommitWorkBatchService class is included out of the box. This class implements the basic functionality that can execute a work batch in the context of a transaction. Generally, this service provides all the batching functionality you need, and you don't even have to think about it because it is added to the runtime automatically. However, there may be situations, specifically dealing with database transactions, when this service cannot handle all your needs. That is where SharedConnectionWorkflowCommitWorkBatchService comes in.

SharedConnectionWorkflowCommitWorkBatchService

With what is probably the longest class name in the Windows Workflow Foundation API, SharedConnectionWorkflowCommitWorkBatchService enables you to use SQL Server transactions on a remote SQL Server database, shared between the various services that connect to that database. .NET transactions were introduced in Chapter 6, and a full treatment of distributed transactions is beyond the scope of this book. However, this runtime service enables you to bypass the Distributed Transaction Coordinator (DTC), which would otherwise be necessary if the workflow host were running on a box other than the SQL Server.

Using this service is as simple as adding an instance of the class to the runtime as you do with any other service. The service class’s constructor takes a string representing the connection string to the database. As a rule of thumb, if you are using both the SQL persistence and tracking services (discussed later), you should take advantage of the shared connection batch service to ensure that you are using SQL Server transactions and not adding extra overhead to your transactions by using the DTC.
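
The following sketch shows what the registration might look like. The connection string is only an example and should point at the database shared by your persistence and tracking services.

  WorkflowRuntime runtime = new WorkflowRuntime();

  // the shared connection batch service replaces the default batch service
  SharedConnectionWorkflowCommitWorkBatchService batchService =
      new SharedConnectionWorkflowCommitWorkBatchService(
          "Initial Catalog=WorkflowStore;Data Source=localhost;" +
          "Integrated Security=SSPI;");
  runtime.AddService(batchService);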

Persistence Services

Persistence services exist to support the tenet that workflows must be long running and stateful. When a workflow instance is waiting for work to be performed or after important work has been completed, saving the workflow’s state can help maintain a stable system.

The first way persistence services help is that they allow running workflow instances that are consuming valuable system resources to be taken out of memory and placed in a durable storage medium. The storage could be a database, an XML file, or any other method of a developer’s choosing. The process of taking a workflow’s data and execution state and persisting it is also known as dehydration. Conversely, when a workflow instance is ready to do work, the process of rehydration occurs to bring the workflow back to life.

Persistence services also help maintain the stability of a workflow instance by saving its state after important work has been performed. Think of a scenario that has a complex set of steps that make up a single transaction. After all the steps in the transaction have completed, why risk losing the work that has just been done? Saving state to a durable store can help you maintain important information about a process in progress.

Persistence services in Windows Workflow Foundation inherit from the System.Workflow.Runtime.Hosting.WorkflowPersistenceService class. This abstract class defines methods for writing and subsequently loading a workflow instance’s state to and from a persistence store. These actions are supported by the SaveWorkflowInstanceState and LoadWorkflowInstanceState methods, respectively.

Two other important methods that persistence services are forced to implement are SaveCompletedContextActivity and LoadCompletedContextActivity. These two methods deal with saving and loading activity scopes that are used for transactional compensation. When a transactional scope completes, its state is stored for later, when it may be needed for a rollback.

After workflow instances are persisted, they can be restored inside a host that is different from the original host. Although this is extremely useful for scenarios where multiple people or entities are involved in a process, it could also cause problems. If a workflow instance is saved to a persistence store and then loaded by two different hosts at the same time, the results could be pretty disastrous. Therefore, the WorkflowPersistenceService class supports the concept of instance locking. This safeguard basically flags a persisted workflow as unavailable, or in use, by another host and prevents multiple hosts from loading an instance simultaneously. As described in the upcoming “Locking Workflow Instances” section, workflow persistence services should implement custom locking if this behavior is required.

SqlWorkflowPersistenceService

This is the only persistence service provided out of the box, and it uses a Microsoft SQL Server database as its backing store. Like any persistence service, this class inherits from WorkflowPersistenceService. The database schema this service uses is defined by Microsoft and is discussed in more detail in the next section.

This service provides you with options regarding when to persist a workflow instance to the database, how long to maintain ownership of an instance, and how often to check the database for expired instances. Like some of the other runtime services (such as DefaultWorkflowCommitWorkBatchService, SharedConnectionWorkflowCommitWorkBatchService, and SqlTrackingService, which are discussed later in this chapter), the SQL workflow persistence service enables you to retry operations if they do not succeed initially. You enable this behavior by setting each service's EnableRetries property. However, you cannot configure the number of allowable retries - this value is hard-coded in the API.
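
As a quick illustration, the following sketch turns retries on for the persistence service; the connectionString variable is assumed to be defined elsewhere.

  SqlWorkflowPersistenceService persistenceService =
      new SqlWorkflowPersistenceService(connectionString);

  // ask the service to retry failed persistence operations;
  // the retry count itself cannot be changed
  persistenceService.EnableRetries = true;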

Preparing SqlWorkflowPersistenceService

SqlWorkflowPersistenceService relies on the backing of a SQL Server database. Fortunately, the Workflow SDK provides scripts for creating the database schema and stored procedures.

This book assumes that you have ready access to a graphical database management tool, such as SQL Server Management Studio, which ships with SQL Server 2005. If you do not have access to SQL Server 2005, Microsoft offers a free and very functional database engine called SQL Server Express (available for download at http://msdn.microsoft.com/sql/express/). Although SQL Server Express has some limitations, it is more than adequate for learning and testing purposes.

The great thing about SQL Server 2005 Express Edition, as opposed to the previous free engine (MSDE), is that it has a free GUI tool called SQL Server Management Studio Express. As of this text's writing, this tool was available as a separate download from the Express database engine. It's a great alternative to SQL Management Studio for maintaining your SQL Server databases in a development environment.

The first step in preparing the workflow persistence data store is creating the database itself, which you can accomplish with Management Studio’s user interface or by running the following T-SQL DDL:

  CREATE DATABASE WorkflowPersistence 

In this case, a database called WorkflowPersistence is created. (The database name is up to you, of course.) It is not even a requirement that a new database be created for the persistence store. You could create the supporting tables and stored procedures in an existing enterprise or application database. However, you should logically separate the persistence structure from other database entities to maintain a good division of business data and workflow support objects.

The next step in preparing the SQL persistence service is creating the database tables. A script called SqlPersistenceService_Schema.sql (located in C:\<WINDOWS DIR>\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\<LANGUAGE DIR>) is used for this purpose. Open this file in SQL Management Studio, and select the appropriate database in which to run it - in this case, WorkflowPersistence. Click the Execute button on the toolbar or press F5 to execute the script. If everything goes well, two new tables should appear in the database: InstanceState and CompletedScope.

The InstanceState table is used to store workflow instances that have been dehydrated and that are in a waiting state. It includes the following columns:

  • uidInstanceID - A unique identifier that corresponds to the InstanceId property of a workflow instance.

  • state - The serialized and compressed version of the workflow instance’s state as it stood before it was persisted.

  • status - Indicates the current status of the workflow instance. Possible values are 0 for executing, 1 for completed, 2 for suspended, 3 for terminated, and 4 for invalid.

  • unlocked - A status flag indicating the instance’s locked state.

  • blocked - A bit column indicating whether a workflow instance is waiting for some external stimulus or timeout. This essentially tells you whether a workflow instance is idle.

  • info - An ntext column that contains extra information if the workflow instance was suspended or terminated. For example, SuspendActivity has an Error property that allows supporting information to be stored if the workflow was suspended for some reason.

  • modified - A datetime column indicating when the workflow instance was last modified.

  • ownerID - Represents the runtime host that has possession of this instance.

  • ownedUntil - A datetime column that says how long the current owner will maintain ownership of the instance.

  • nextTimer - A datetime column indicating the date and time at which the workflow instance’s next timer will expire. (The “SqlWorkflowPersistenceService and Delays” section later in this chapter discusses timers in more detail.)

The CompletedScope table stores state information of activities that are participants in transactional behavior. When a transaction scope completes, its state is written to this table for later compensation if necessary.

To complete the initial SQL persistence setup, you need to add the stored procedures to the database, using a script called SqlPersistenceService_Logic.sql (again located in C:\<WINDOWS DIR>\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\<LANGUAGE DIR>). Open the script in SQL Management Studio, and run it. Table 7-2 describes the stored procedures that appear in the database after you execute the script.

Table 7-2: SQL Persistence Stored Procedures

  • DeleteCompletedScope - Deletes a record from the CompletedScope table.

  • InsertCompletedScope - Inserts a record into the CompletedScope table.

  • InsertInstanceState - Inserts a record into the InstanceState table.

  • RetrieveAllInstanceDescriptions - Gets a few key columns from the InstanceState table for all records.

  • RetrieveANonblockingInstanceStateId - Gets the instance ID of one nonblocking record from InstanceState.

  • RetrieveCompletedScope - Retrieves the state of a specific scope.

  • RetrieveExpiredTimerIds - Gets all the InstanceState records that have expired timers.

  • RetrieveInstanceState - Gets a specific InstanceState record.

  • RetrieveNonblockingInstanceStateIds - Gets the instance IDs of all nonblocking records from InstanceState.

  • UnlockInstanceState - Sets an InstanceState record's owner to NULL, which unlocks it.

Using SqlWorkflowPersistenceService

Now that you have prepared the SqlWorkflowPersistenceService infrastructure, you are ready to use it in workflow applications.

The first step in using this runtime service is creating an instance of the SqlWorkflowPersistenceService class. This class has a few overloaded constructors that enable you to set various options when creating an instance. The first overload takes a string as its only parameter, which represents the connection string to the persistence database.

The second overload takes four parameters, the first being the connection string. The next parameter is a Boolean value that indicates whether to unload workflow instances when they become idle. By default, this option is false, and it is important to set it to the desired value during object construction because it is not exposed as a public property to set later. The third parameter is a TimeSpan that indicates how long the runtime will maintain ownership of a persisted workflow instance - the default value is one year. The fourth and final parameter is another TimeSpan instance that indicates how often to check the database for expired timers - the default value is two minutes.
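
A sketch of this four-parameter overload follows; the specific values are arbitrary examples, and connectionString is assumed to be defined elsewhere.

  SqlWorkflowPersistenceService persistenceService =
      new SqlWorkflowPersistenceService(
          connectionString,
          true,                       // unload workflow instances when they go idle
          TimeSpan.FromDays(365),     // how long ownership of an instance is kept
          TimeSpan.FromMinutes(2));   // how often to check for expired timers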

The final constructor overload takes an instance of a NameValueCollection object, which you can preload with applicable parameter values. The valid parameters are similar to the parameters in the previous overload - they are ConnectionString, OwnershipTimeoutSeconds, UnloadOnIdle, and LoadIntervalSeconds. This overload gives you the option of selectively setting these four values. The only required parameter is, of course, ConnectionString. If any of the four values is not explicitly specified, the default value is used.

After you have your SqlWorkflowPersistenceService instance, you need to add it to the workflow runtime's services using the AddService method. The following is an example of what this might look like. Notice that the code is using the NameValueCollection overload of the service's constructor to selectively set its properties.

  WorkflowRuntime runtime = new WorkflowRuntime();

  NameValueCollection parms = new NameValueCollection();
  parms.Add("ConnectionString", "Initial Catalog=WorkflowPersistence;" +
      "Data Source=localhost;Integrated Security=SSPI;");
  parms.Add("UnloadOnIdle", "true");

  SqlWorkflowPersistenceService persistenceService =
      new SqlWorkflowPersistenceService(parms);
  runtime.AddService(persistenceService);

After this code is called, everything is ready to go. Workflow instances running within this host are persisted at valid persistence points (covered next).

Persistence Points

The SqlWorkflowPersistenceService takes care of actually writing a workflow instance’s state to the persistence database, but it is the workflow runtime that dictates when this occurs. The occasions when the runtime tells the persistence service to save a workflow’s state are called persistence points. If a persistence service exists in the runtime, there are two occasions that always cause persistence to occur: just before a workflow instance completes and just before a workflow is terminated.

A few other events cause a workflow instance to be persisted. You can cause this to happen programmatically by calling the Unload or TryUnload method of the WorkflowInstance class. Unload is a blocking call that synchronously waits until the workflow can be persisted. TryUnload simply tries to unload the workflow instance, and if it cannot, it returns without doing so. Its Boolean return value indicates success or failure to persist.

These methods have a few things in common. First, if either method is called and a persistence service does not currently exist inside the runtime, an InvalidOperationException is thrown. Both methods also persist a workflow instance only when its status is idle or suspended. Finally, if a workflow is successfully persisted, the WorkflowUnloaded event of WorkflowRuntime is raised.
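
The following sketch shows both calls against a variable named instance of type WorkflowInstance; it assumes a persistence service has already been added to the runtime.

  // attempt to persist and unload the instance without blocking
  if (!instance.TryUnload())
  {
      // the instance was not idle or suspended yet;
      // Unload blocks until the instance can be persisted
      instance.Unload();
  }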

Another persistence point is when an activity that has been decorated with the PersistOnClose attribute has successfully completed. You can use this attribute to force workflows to persist themselves after crucial pieces of code have run. The following is a short example of using this attribute:

  [PersistOnClose]
  public class MyActivity : Activity
  {
     ...
  }

The final way you can cause persistence with a persistence point is to set the UnloadOnIdle property of SqlWorkflowPersistenceService during that object's construction. The code at the beginning of the "Using SqlWorkflowPersistenceService" section shows the creation of a SqlWorkflowPersistenceService instance that sets this value. If you set this value to true, the workflow instance is persisted when it has no immediate work to do. Obviously, this is a good time to persist and unload a workflow instance from system memory.

Aborting Workflow Instances

The WorkflowInstance.Abort method is specifically for workflows executed in a runtime that have a persistence service. If this method is called from the host, the workflow is aborted in a synchronous manner, meaning that the method does not return until the runtime has successfully aborted the instance.

What’s interesting about this method and persistence is that all work performed since the last persistence point is thrown away. So calling Abort is kind of like an undo for workflows. After the workflow has been aborted, the runtime can retrieve the instance in its previous state by calling the GetWorkflow method of the WorkflowRuntime class.
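
The following sketch shows the idea; workflowRuntime and instance are assumed to exist already, and how you then drive the reloaded instance depends on your scenario.

  // throw away all work performed since the last persistence point
  instance.Abort();

  // reload the instance in the state it had at that persistence point
  WorkflowInstance reloaded = workflowRuntime.GetWorkflow(instance.InstanceId);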

SqlWorkflowPersistenceService and Delays

What about a scenario in which a workflow instance has been persisted to the persistence store but a Delay activity’s execution is pending? SqlWorkflowPersistenceService handles this by keeping track of when its next persisted workflow will fire such an event.

Whenever a workflow instance’s state is saved, the persistence service asks when its next timer will expire. The time at which this expiration will occur is stored in an internal object that is intelligent enough to determine, out of all the workflow instances it is managing, which workflow instance will expire next. At that time, the workflow instance is loaded from the persistence store, and the Delay activity is executed. All this happens automatically with no extra code needed from you.

Of course, if the host is not running when a timer expires, it cannot load the workflow and execute it. This is a common scenario if the host is a Windows Forms or web application, as opposed to a Windows service that is running continuously. In any event, SqlWorkflowPersistenceService covers this scenario as well.

The service uses its private loadingInterval field, which is of type TimeSpan, to determine how often to check the persistence store for expired timers. If expired timers are found, the workflow instances are loaded and executed as though the timers had just expired. This check is also performed when the persistence service is started.

Locking Workflow Instances

Workflow locking allows only one host to load a workflow instance at a time. This behavior is supported by SqlWorkflowPersistenceService out of the box. The service automatically sets the unlocked, ownerID, and ownedUntil columns in the persistence database.

When a workflow host pulls an instance's state from the data store because there is work to do, SqlWorkflowPersistenceService sets the aforementioned columns in the database, which in effect stakes a claim on the instance. After the work has completed and the instance is again unloaded from the runtime, ownerID and ownedUntil are set to null, and unlocked is set to 1. However, if a workflow host other than the locking owner tries to load the instance while these values are set, it gets an exception of type WorkflowOwnershipException.
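
A host can guard against this case, as in the following hedged sketch; instanceId is assumed to identify an instance that may currently be owned by another host.

  try
  {
      // loading the instance from the persistence store fails if it is locked
      WorkflowInstance instance = workflowRuntime.GetWorkflow(instanceId);
  }
  catch (WorkflowOwnershipException)
  {
      // another host currently owns the instance; try again later
  }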

Tracking Services

Tracking services enable you to identify the data related to workflow execution, capture and store that data in a durable medium, and then query that data for analysis at a later time. What kind of information can be tracked? Events and data related to workflows and activities.

The following sections cover some of the benefits of using workflow tracking, key concepts, and the out-of-the-box tracking functionality Windows Workflow Foundation provides. The base Workflow framework includes a SQL tracking service and a tracking service that monitors for terminated workflows and writes data to the Windows Event Log.

Monitoring Workflows

Tracking enables you to monitor currently running and completed workflows. This is a very powerful asset for workflow consumers. Imagine being able to know where all your workflows are in their progress at any given time. Further, imagine being able to say something like “Give me all ‘create order’ workflows that were created in the past 12 hours and are related to customer XYZ” or “Show me all workflows that are currently stuck in the ‘waiting for confirmation’ state.”

Key Performance Indicators and Regulatory Compliance

The functionality provided by Windows Workflow Foundation to track important business data produced by executed workflow instances enables you to create and track Key Performance Indicators (KPIs). KPIs are metrics on which an organization’s operations can be evaluated.

Generally, KPIs have two components: an actual value and a goal on which the value can be judged. For example, a company could have a KPI that represents its sales performance when compared with salesplan data. Another example might be the percentage of help-desk tickets that are resolved within 24 hours, with a target of 75 percent.

Because you can extract business data from workflows using the tracking infrastructure, creating KPIs based on business processes becomes easier. In addition, workflow KPIs can be provided to end users in real time so that critical, time-sensitive decisions can be made.

In addition to producing KPIs, tracking workflows can assist public companies that are mandated to comply with legislation such as Sarbanes-Oxley. Having access to workflow- and process-oriented software in such environments is extremely useful because the software itself documents its purpose by being declarative. This is an important part of regulatory compliance. Furthermore, tracking workflow execution can provide valuable information for auditors or for other compliance-related purposes.

Tracking Architecture

The following sections cover the architectural entities in Windows Workflow Foundation that support tracking. The major pillars of the tracking infrastructure are the following:

  • The tracking runtime

  • Tracking profiles

  • Tracking channels

  • Tracking services

The Tracking Runtime

The tracking runtime is actually built into the workflow runtime and is not an extensible part of the architecture. It is the liaison between workflow instances, which provide interesting events and data for tracking, and the tracking services that consume them.

The runtime is responsible for delivering information from workflows to the appropriate services so that the correct information is captured. It does this by asking each tracking service (discussed later) what it is interested in. By doing this, the runtime can accurately and efficiently provide the correct data to the correct parties.

Tracking Profiles

Tracking profiles define what is interesting related to workflow execution. Items that can be tracked include workflow events, activity events and data, and user events and data. A tracking profile is completely customizable so that you can say that you are interested only in workflow events or in a specific subset of events.

There are a few classes relating to tracking profiles in the Windows Workflow Foundation API with which you should be familiar. The System.Workflow.Runtime.Tracking.TrackingProfile class is the container for all the rules that define the events and data that will be tracked during execution. The TrackingProfile class has a property called ActivityTrackPoints, which is a collection that holds ActivityTrackPoint classes.

In its MatchingLocations property, the ActivityTrackPoint class holds ActivityTrackingLocation classes, which define the types of activities that should be tracked.

You specify activities of interest in the ActivityTrackingLocation class constructor, which takes a Type instance. You can specify activities as low or high in the inheritance chain, as desired. For example, you might be interested in only the events that fire on custom-developed activities. Conversely, you could pass a Type instance representing the base Activity class so that every activity in a workflow is tracked. However, that happens only if the MatchDerivedTypes property is set to true. This property indicates whether classes that inherit from the Type specified in the constructor are tracked as well.

The ActivityTrackingLocation class has a property called ExecutionStatusEvents, which is a collection that holds values from the ActivityExecutionStatus enumeration. The values in this enumeration represent events that could occur on a given activity. Only the events added to the ExecutionStatusEvents collection are tracked. The following values are valid:

  • Initialized

  • Executing

  • Canceling

  • Closed

  • Compensating

  • Faulting

After you configure the ActivityTrackingLocation instance, you need to add it to the ActivityTrackPoint instance using its MatchingLocations property. At this point, you can add the ActivityTrackPoint instance to the TrackingProfile in its ActivityTrackPoints property.

Similar to the ActivityTrackPoint and ActivityTrackingLocation classes, the WorkflowTrackPoint and WorkflowTrackingLocation classes support workflow-level tracking. The way these workflow tracking classes interact with one another and with the TrackingProfile class is virtually identical to their activity counterparts.

The TrackingWorkflowEvent enumeration provides the events that can be tracked for workflows. These values are added to a WorkflowTrackingLocation class in its Events property. The following event values are valid:

  • Created

  • Completed

  • Idle

  • Suspended

  • Resumed

  • Persisted

  • Unloaded

  • Loaded

  • Exception

  • Terminated

  • Aborted

  • Changed

  • Started

Because of the way the tracking runtime handles profile caching, a versioning scheme is used so that newer profiles can be differentiated from their older counterparts. The Version property of TrackingProfile lets the runtime know that there have been changes to the profile and that it should be reloaded and used by the tracking service. This property is of type System.Version.

You have a couple of options for defining tracking profiles in your applications. The first method is to define profiles programmatically by using the TrackingProfile class. To do this, you use .NET code and the classes introduced previously, such as WorkflowTrackPoint and ActivityTrackPoint. The following is an example of creating a simplistic tracking profile using C#. This profile is interested in all activity events and all workflow events.

  public static TrackingProfile GetTrackingProfile()
  {
      // create a new profile and give it a version
      TrackingProfile profile = new TrackingProfile();
      profile.Version = new Version(1, 0, 0, 0);

      ActivityTrackPoint activityTrackPoint = new ActivityTrackPoint();
      ActivityTrackingLocation activityTrackingLocation =
          new ActivityTrackingLocation(typeof(Activity));

      // setting this value to true will match all classes that are in
      // the Activity class' inheritance tree
      activityTrackingLocation.MatchDerivedTypes = true;

      // add all of the activity execution status values as something we are interested in
      foreach (ActivityExecutionStatus aes in
          Enum.GetValues(typeof(ActivityExecutionStatus)))
      {
          activityTrackingLocation.ExecutionStatusEvents.Add(aes);
      }

      activityTrackPoint.MatchingLocations.Add(activityTrackingLocation);
      profile.ActivityTrackPoints.Add(activityTrackPoint);

      WorkflowTrackPoint workflowTrackPoint = new WorkflowTrackPoint();
      WorkflowTrackingLocation workflowTrackingLocation =
          new WorkflowTrackingLocation();

      // add all of the tracking workflow events as something we are interested in
      foreach (TrackingWorkflowEvent twe in
          Enum.GetValues(typeof(TrackingWorkflowEvent)))
      {
          workflowTrackingLocation.Events.Add(twe);
      }

      workflowTrackPoint.MatchingLocation = workflowTrackingLocation;
      profile.WorkflowTrackPoints.Add(workflowTrackPoint);

      return profile;
  }

This code creates a new TrackingProfile instance and immediately assigns a new version. The version number is hard-coded here, but this should obviously be more dynamic in a real-world application. Next, an ActivityTrackPoint instance is created that matches all activities. It does this because the MatchDerivedTypes property is set to true. In addition, every possible activity execution status is tracked due to the loop that iterates through the ActivityExecutionStatus enumeration. Finally, a workflow tracking point is created and added to the profile using a method that is similar to adding the activity tracking point. Again, each possible workflow event type is tracked in this example.

The following XML represents the same tracking profile as the one defined in the preceding code. Notice that the events the profile is interested in are the same events that were added using the enumeration loops in the code version. Defining tracking profiles in an XML format allows a more dynamic configuration scheme than does defining profiles in code. The XML can be swapped out very easily without your having to recompile or redistribute assemblies.

  <TrackingProfile
     xmlns="http://schemas.microsoft.com/winfx/2006/workflow/trackingprofile"
     version="1.0.0.0">
      <TrackPoints>
          <WorkflowTrackPoint>
              <MatchingLocation>
                  <WorkflowTrackingLocation>
                      <TrackingWorkflowEvents>
                          <TrackingWorkflowEvent>Created</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Completed</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Idle</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Suspended</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Resumed</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Persisted</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Unloaded</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Loaded</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Exception</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Terminated</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Aborted</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Changed</TrackingWorkflowEvent>
                          <TrackingWorkflowEvent>Started</TrackingWorkflowEvent>
                      </TrackingWorkflowEvents>
                  </WorkflowTrackingLocation>
              </MatchingLocation>
          </WorkflowTrackPoint>
          <ActivityTrackPoint>
              <MatchingLocations>
                  <ActivityTrackingLocation>
                      <Activity>
                          <Type>
                              System.Workflow.ComponentModel.Activity,
                              System.Workflow.ComponentModel, Version=3.0.0.0,
                              Culture=neutral, PublicKeyToken=31bf3856ad364e35
                          </Type>
                          <MatchDerivedTypes>true</MatchDerivedTypes>
                      </Activity>
                      <ExecutionStatusEvents>
                          <ExecutionStatus>Initialized</ExecutionStatus>
                          <ExecutionStatus>Executing</ExecutionStatus>
                          <ExecutionStatus>Canceling</ExecutionStatus>
                          <ExecutionStatus>Closed</ExecutionStatus>
                          <ExecutionStatus>Compensating</ExecutionStatus>
                          <ExecutionStatus>Faulting</ExecutionStatus>
                      </ExecutionStatusEvents>
                  </ActivityTrackingLocation>
              </MatchingLocations>
          </ActivityTrackPoint>
      </TrackPoints>
  </TrackingProfile>

In addition, you can serialize the tracking profiles created in code to XML by using the TrackingProfileSerializer class. The following is an example of how to do this by using a StringWriter instance:

  // create an instance of the profile serializer
  TrackingProfileSerializer serializer = new TrackingProfileSerializer();

  // create the string writer
  StringWriter sw = new StringWriter(new StringBuilder(),
     CultureInfo.InvariantCulture);

  serializer.Serialize(sw, profile);

The code here simply creates a new instance of TrackingProfileSerializer and StringWriter, which work together to serialize the profile object to a string. After you serialize the profile to a string, you can save it to a file or use it in some other way. For example, you could call sw.ToString() and store the resulting value wherever you please.
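
Going the other direction is just as simple. Assuming profileXml is a string holding XML such as the markup shown earlier, a sketch of rehydrating a profile object looks like this:

  TrackingProfileSerializer serializer = new TrackingProfileSerializer();
  TrackingProfile profile;

  // read the profile back out of its XML form
  using (StringReader reader = new StringReader(profileXml))
  {
      profile = serializer.Deserialize(reader);
  }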

Tracking Channels

Tracking channels inherit from the System.Workflow.Runtime.Tracking.TrackingChannel class. These entities do the actual persistence of tracking data when needed. Therefore, the tracking channels know how to communicate with a specified tracking store. For example, the SqlTrackingService (discussed in more detail in the next section) uses a class called SqlTrackingChannel that is responsible for storing tracking data in a SQL Server database. (Don’t go looking for that class in the documentation - it is a private nested class inside SqlTrackingService.) There can be one channel per tracking service per workflow instance, and tracking channels are single threaded.

Tracking Services

Tracking services inherit from the System.Workflow.Runtime.Tracking.TrackingService class and provide the profiles and channels to the runtime. Tracking service instances are added to the workflow runtime in the host, just as with any other runtime service type.

Tracking services can optionally implement the IProfileNotification interface, which represents the ability to check tracking profiles for changes or removal. This interface exposes two public events: ProfileRemoved and ProfileUpdated. The tracking service that implements this interface should then provide behavior that checks the tracking profiles used for the service and raises one of these two events when appropriate. For example, SqlTrackingService implements this interface and uses a value passed to its constructor to determine how often to check for profile changes. Every time that interval passes, the SQL tracking service checks its profile table for changes and raises the relevant event.

Bringing It Together

All these entities come together to allow a cohesive tracking infrastructure. Figure 7-1 shows the interactions between tracking entities. The runtime sits between workflow instances and the tracking services. The runtime interacts with the services to get tracking profiles that are relevant for any given workflow instance. It says something like, “I am a workflow of type X; provide me with all profiles that I am interested in based on who I am.” The runtime can contain any number of tracking services. So you could have one tracking service writing a specific type of information to one data store and another doing something completely different.

Figure 7-1

Workflow instances provide the tracking runtime with events and data, which is then filtered and sent to the appropriate tracking channels. The channels are responsible for writing that information to a medium. This medium can be anything, including a database, a file, XML, the console window, the Event Log, and so on.

After data is written to a durable store, you can review it with software that uses the workflow tracking query entities. You can perform searches for specific workflow types, instances, or workflows that meet specific criteria based on data values in activities. However, you need to implement this querying behavior specifically for each tracking store. There is no base infrastructure to support querying.

Tracking User Data

In addition to tracking activity and workflow events predefined in the workflow API, you can designate particular points in code at which data should be tracked - or, more accurately, could be tracked. This means you can indicate that particular pieces of data at particular points of execution might be interesting enough to track. However, the relevant tracking profile must specify that the same data is interesting for it to be tracked. These points in code and the data are referred to as user track points.

The following is an example of modifying a tracking profile using C# to watch for a piece of data called FirstName with a value of Todd. This condition is specified by adding an ActivityTrackingCondition instance to the Conditions property of the UserTrackingLocation class. In addition, the ArgumentType property specifies the type of the data that will be tracked. Finally, the UserTrackingLocation instance is added to the UserTrackPoint class in its MatchingLocations collection.

  UserTrackPoint userTrackPoint = new UserTrackPoint();

  WorkflowDataTrackingExtract extract =
      new WorkflowDataTrackingExtract("MyWorkflow.myTestString");
  extract.Annotations.Add("My annotation...");
  userTrackPoint.Extracts.Add(extract);

  UserTrackingLocation userTrackingLocation = new UserTrackingLocation();
  userTrackingLocation.ActivityType = typeof(MyActivity);
  userTrackingLocation.Conditions.Add(
      new ActivityTrackingCondition("FirstName", "Todd"));
  userTrackingLocation.ArgumentType = typeof(string);

  userTrackPoint.MatchingLocations.Add(userTrackingLocation);
  profile.UserTrackPoints.Add(userTrackPoint);

Notice that an instance of the WorkflowDataTrackingExtract class is added to the UserTrackPoint.Extracts collection. This specifies the workflow members that are to be extracted and then tracked when a user track point is matched. In the example, the code specifies that a member called myTestString on the MyWorkflow class should be tracked.

In addition to extracting workflow members and storing their values, you can save annotations that go along with these values by using the WorkflowDataTrackingExtract.Annotations property. Annotations are strings that describe the extracted data and can be used to provide greater context when viewing tracked data.

Now that you know what user track points are and how they are tracked, you need to define the user track points. You do this by calling the Activity.TrackData or ActivityExecutionContext.TrackData method. Both of these methods have two overloads: one that takes an object representing user data and another that takes a user data object as well as a string representing the data's identifier key.

The following is a simple example of forcing the tracking infrastructure to track a piece of data that the previous profile code would match. This code could be found in a workflow’s code-beside class. Because a workflow is an activity, it can access this method by using the base keyword.

  base.TrackData("FirstName", "Todd"); 

SqlTrackingService

The out-of-the-box service for tracking is SqlTrackingService, which uses SQL Server as its storage medium. Like SqlWorkflowPersistenceService, this service uses a defined database schema to perform its functions. The SQL tracking service uses tables that store information related to events and data specified in tracking profiles. In addition, these tracking profiles are stored in the database and are associated with a particular workflow type.

SqlTrackingService implements the IProfileNotification interface, which defines the functionality that informs the tracking runtime when a profile is removed or updated. IProfileNotification exposes two events: ProfileRemoved and ProfileUpdated. The tracking runtime subscribes to these events and takes the appropriate actions when either is raised.

The ProfileRemoved event passes a ProfileRemovedEventArgs object that contains a reference to the type of workflow that had its profile removed. The ProfileUpdated event passes a ProfileUpdatedEventArgs object, which contains a reference to the workflow type in question as well as a reference to the tracking profile that it should now be using.

The SqlTrackingService uses an internal class, SqlTrackingChannel, as its channel for writing to the backing SQL database. Because this class implements IPendingWork, workflow instances and the tracking database can be kept in sync much better than if transactional behavior were not used. However, the transactional behavior is not required. The SqlTrackingService has a property called IsTransactional, which is in turn passed to the SqlTrackingChannel class. Then whenever some data needs to be written to the database, the transactional flag is inspected, and if it is set to true, a work item is added to the batch for later execution. If the flag is set to false, the database write happens immediately.

Preparing SqlTrackingService

Before using SqlTrackingService in your workflow software, you have to set up and configure the database. As with SqlWorkflowPersistenceService, the first step is manually creating the tracking database by using the CREATE DATABASE T-SQL command, like this:

  CREATE DATABASE WorkflowTracking 

Next, you must add the tables and stored procedures to the newly created database by running the Tracking_Schema.sql and Tracking_Logic.sql scripts, which are located in C:\<WINDOWS DIR>\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\<LANGUAGE DIR>. These scripts create the database objects. Table 7-3 lists the tables that are created to support SQL tracking.

Table 7-3: SQL Tracking Tables

  • Activity - Records in this table represent activities that make up workflow definitions. So the table contains things like myCodeActivity rather than CodeActivity.

  • ActivityExecutionStatus - Holds possible values for an activity's execution status.

  • ActivityExecutionStatusEvent - Stores data related to the execution of activities in workflow instances.

  • ActivityInstance - Represents a particular activity instance in a workflow instance.

  • AddedActivity - Stores information about activities added to a running workflow instance using dynamic update. (Dynamic update is covered in Chapter 11.)

  • DefaultTrackingProfile - Holds the default tracking profile for all workflow instances that have no profile specified.

  • EventAnnotation - Holds event annotations that were specified in code.

  • RemovedActivity - Stores information about activities removed from a running workflow instance using dynamic update. (Dynamic update is covered in Chapter 11.)

  • TrackingDataItem - Holds data values that are extracted from workflow activity instances.

  • TrackingDataItemAnnotation - Holds annotations that were specified in code.

  • TrackingPartitionInterval - Holds a single row, which specifies the interval at which the workflow tracking data will be partitioned. The default is m, for monthly.

  • TrackingPartitionSetName - Stores data surrounding the partitioned tables that are created and managed by the tracking runtime.

  • TrackingProfile - Stores tracking profiles related to workflow records stored in the Workflow table in their XML form.

  • TrackingProfileInstance - Data in this table represents an instance of a tracking profile in XML form.

  • TrackingWorkflowEvent - Holds possible values for workflow events, including Completed, Idle, and Aborted.

  • Type - Stores metadata about types that have been tracked, including types representing activities and workflows.

  • UserEvent - Represents user events captured during runtime about a workflow instance.

  • Workflow - The first time a particular version of a workflow is executed and tracked, its definition is stored here.

  • WorkflowInstance - When events and data related to workflow instances are tracked, they tie back to a record in this table. Records are identified by the same workflow instance IDs as in the code.

  • WorkflowInstanceEvent - Holds data related to workflow instance events.

After creating the database objects, using the SQL tracking service in your workflows is extremely simple. All you need to do is create an instance of the service class and add it to the runtime. The following is an example of what this might look like:

  WorkflowRuntime workflowRuntime = new WorkflowRuntime();

  SqlTrackingService trackingService = new SqlTrackingService(
      "Initial Catalog=WorkflowTracking;" +
      "Data Source=localhost;Integrated Security=SSPI;");
  trackingService.IsTransactional = false;

  workflowRuntime.AddService(trackingService);

A new instance of the SqlTrackingService class is created by using a constructor that takes a string representing the tracking database’s connection string. In addition, the IsTransactional property of the service object is set to false, indicating that tracking data should be written to the database as events occur. Finally, the service instance is added to the workflowRuntime object using its AddService method.

Profiles and the SqlTrackingService

As noted in Table 7-3, tracking profiles used with SqlTrackingService are stored in the TrackingProfile table. The SqlTrackingService infrastructure provides a default profile, which is stored in the DefaultTrackingProfile table.

To create a new tracking profile and link it to a particular workflow type, you need to call the UpdateTrackingProfile stored procedure that was created during the initial setup of the tracking database. The following code is an example of how to do this:

  private void CreateNewSqlTrackingProfile(TrackingProfile profile, Version version)
  {
      // create the necessary objects to serialize the profile to a string
      TrackingProfileSerializer tpf = new TrackingProfileSerializer();
      StringBuilder sb = new StringBuilder();
      StringWriter stringWriter = new StringWriter(sb);
      tpf.Serialize(stringWriter, profile);

      Type workflowType = typeof(MyWorkflow);

      // create the database objects
      SqlConnection con = new SqlConnection(CONNECTION_STRING);
      SqlCommand com = new SqlCommand("UpdateTrackingProfile", con);
      com.CommandType = CommandType.StoredProcedure;

      // add required parameters to the SQL command
      com.Parameters.AddWithValue("@TypeFullName", workflowType.ToString());
      com.Parameters.AddWithValue("@AssemblyFullName",
          workflowType.Assembly.FullName);
      com.Parameters.AddWithValue("@Version", version.ToString());
      com.Parameters.AddWithValue("@TrackingProfileXml", stringWriter.ToString());

      try
      {
          // create the new profile in the database
          con.Open();
          com.ExecuteNonQuery();
      }
      finally
      {
          con.Close();
      }
  }

The CreateNewSqlTrackingProfile method is passed not only a new TrackingProfile instance but also a Version object, which allows tracking profiles in the TrackingProfile table to be versioned per workflow type. The code calls the UpdateTrackingProfile stored procedure, which is created by the tracking SQL script included with Windows Workflow Foundation. This stored procedure takes parameters that identify the profile you are updating as well as the new profile definition in the @TrackingProfileXml parameter.

Querying SqlTrackingService

One of the really useful features of the SQL tracking infrastructure is its ability to query for workflows that have been tracked. There are several attributes you can use to specify which workflow instances you want to see, including the type of workflow, its status, and even specific values of activity properties.

Classes that participate in the querying process include SqlTrackingQuery, SqlTrackingQueryOptions, and TrackingDataItemValue.

SqlTrackingQueryOptions provides properties that enable you to filter the query results. You use the StatusMinDateTime and StatusMaxDateTime properties to specify the time window in which workflow instances were started. The WorkflowType property takes a reference to a Type instance representing a specific workflow. The WorkflowStatus property takes a value from the WorkflowStatus enumeration, whose values include Running, Completed, Suspended, Terminated, and Created. Both the WorkflowType and WorkflowStatus properties are nullable; if either property is not set or is set to null, it is not used to filter the returned workflow instances.

The following code uses a SQL tracking query to search the workflow tracking database for helpdesk workflows (represented here by the MyWorkflow type) that are considered old - that is, workflows that started more than two days ago and are still running:

private static void FindOldTickets(string user)
{
    SqlTrackingQuery query = new SqlTrackingQuery(CONNECTION_STRING);
    SqlTrackingQueryOptions queryOptions = new SqlTrackingQueryOptions();

    // filter to the desired workflow type
    queryOptions.WorkflowType = typeof(MyWorkflow);

    // only get running workflows
    queryOptions.WorkflowStatus = WorkflowStatus.Running;

    // look for workflows which were started more than two days ago
    queryOptions.StatusMaxDateTime = DateTime.Now.AddDays(-2);

    // if a user was provided, use it as a filter
    if (user != null)
    {
        TrackingDataItemValue dataItemValue = new TrackingDataItemValue();
        dataItemValue.QualifiedName = "CreateTicket";
        dataItemValue.FieldName = "AssignedEmployee";
        dataItemValue.DataValue = user;

        // add the criteria to the query options
        queryOptions.TrackingDataItems.Add(dataItemValue);
    }

    // perform the query
    IList<SqlTrackingWorkflowInstance> matches = query.GetWorkflows(queryOptions);

    Console.WriteLine("Found " + matches.Count + " matching workflows.");

    foreach (SqlTrackingWorkflowInstance instance in matches)
    {
        Console.WriteLine("   Workflow Instance: " +
            instance.WorkflowInstanceId.ToString());
    }
}

There are two objects doing most of the work in this example. First, the SqlTrackingQueryOptions instance uses its StatusMaxDateTime property to specify that workflow instances older than two days should be retrieved. Next, the TrackingDataItemValue instance is configured to look for activities called CreateTicket that have their AssignedEmployee property set to the user string passed to this method. Finally, the SqlTrackingQuery.GetWorkflows method is called and returns a list of workflows that match the specified criteria.

Data Maintenance

Because workflow tracking can cause quite a bit of data to be written to your SQL Server database, especially if you specify verbose tracking options, the SQL tracking service offers data maintenance capabilities. When you set the PartitionOnCompletion property of the tracking service instance to true, data for completed workflow instances is moved into partitioned tables according to a specified time period.

The partition interval is dictated by a value in the TrackingPartitionInterval table in the tracking database. It has only one column, called Interval, and the value you set in this column specifies the partitioning behavior. For example, if you set the Interval column to m, the tracking service will create tables that are separated by month. Other valid values are d (for daily partitions) and y (for yearly partitions). You can set this value manually with a SQL script or through code, using the SetPartitionInterval stored procedure in the tracking database. This stored procedure takes one parameter, @Interval, which expects one of the interval codes (m, d, or y).
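
If you would rather set the interval from code than run a SQL script, a call to SetPartitionInterval might look like the following. This is a minimal sketch: the SetMonthlyPartitioning method name and connection string are illustrative assumptions, and the same ADO.NET pattern applies if you later want to invoke the on-demand partitioning procedure discussed below.

private static void SetMonthlyPartitioning()
{
    // the connection string is an assumption; point it at your tracking database
    string connectionString =
        "Initial Catalog=WorkflowTracking;" +
        "Data Source=localhost;Integrated Security=SSPI;";

    using (SqlConnection con = new SqlConnection(connectionString))
    using (SqlCommand com = new SqlCommand("SetPartitionInterval", con))
    {
        com.CommandType = CommandType.StoredProcedure;

        // 'm' produces monthly partitions; 'd' (daily) and 'y' (yearly) also apply
        com.Parameters.AddWithValue("@Interval", "m");

        con.Open();
        com.ExecuteNonQuery();
    }
}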

Tables are automatically created, along with new partition sets, when necessary. This happens when a new record is added to the TrackingPartitionSetName table. These records point to the separate partitioned tables and contain information such as when the partition was created and the last date that the partition should contain.

The previous paragraphs discuss the scenario in which partitioning occurs in real time as the application hosting the tracking service executes. This may not be optimal if you want more control over when the partitioning occurs. The SQL tracking database offers the PartitionWorkflowInstance stored procedure, which performs the same partitioning as previously mentioned, on demand. For example, you could set up a SQL Server job that runs the partitioning stored procedure nightly.

The Workflow Monitor

The Workflow Monitor sample application that ships with the Windows Workflow Foundation SDK connects to a SQL Server tracking database and enables you to query for and view executed workflow instances and, even better, workflows that are currently executing. Figure 7-2 shows this application in action.

image from book
Figure 7-2

The first thing that probably catches your eye is the workflow designer on the right side of the screen. It is obviously a graphical representation of a workflow definition, but it also represents the execution path of a workflow instance. The check marks indicate that the attached activities executed successfully and give you a visual record of which activities ran and which did not. For instance, the figure shows a workflow instance of type MyWorkflow that has an IfElse activity. In this example, it was the left path, isLargeValue, that was executed.

The top-right window in the Workflow Monitor shows workflow instances that have executed or are currently executing. The two workflows represent all of the instances that have been tracked in the database. However, take a look at the Workflow Status toolbar below the main menu. This gives you options for searching tracked workflow instances. Search criteria can include workflow status, a date range, and specific data values of activity properties. You can also use a workflow instance ID to search for a specific workflow instance. Any workflow instances matching the search criteria appear in the Workflows area on the left.

The bottom-left area of the application, the Activities window, shows the activities that were executed in the currently selected workflow instance. The activities that did not execute given the instance’s path are not listed. In this example, isSmallValue and codeActivity2 were not executed and, therefore, are not shown in the Activities window.

This application is a simple but extremely compelling example of the power of Windows Workflow Foundation. Developed by using the platform’s out-of-the-box capabilities, this application provides a great deal of insight into workflow execution. Imagine the possibilities when this idea is tailored to a specific organization and its business needs.

The TerminationTrackingService Application

TerminationTrackingService is another sample application that comes with the SDK. It automatically logs workflow instance terminations to the Windows Event Log, giving operations staff a simple way to begin troubleshooting workflow issues quickly.

By default, TerminationTrackingService attempts to write to the event source WinWF unless another event source is provided using the constructor overload that takes a NameValueCollection instance. You can specify an alternative event source by adding a value with the key EventSource.
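
For example, if you wanted terminations logged under a source other than WinWF, constructing the service might look like the following sketch. The MyCompanyWorkflows source name is an assumption; only the EventSource key is meaningful to the service.

// a minimal sketch, assuming the NameValueCollection constructor described above
NameValueCollection parameters = new NameValueCollection();
parameters.Add("EventSource", "MyCompanyWorkflows");   // hypothetical event source

TerminationTrackingService terminationService =
    new TerminationTrackingService(parameters);
workflowRuntime.AddService(terminationService);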

The following code shows an example of how to use TerminationTrackingService. By now, a lot of this code should look familiar. However, notice the EnsureEventLog method and its call from the Main method. This method uses the System.Diagnostics.EventLog class to make sure that a desired event log and event source exist. After that, an instance of TerminationTrackingService is created and added to the workflow runtime.

public static void Main(string[] args)
{
    EnsureEventLog("WinWF", "Workflow Log");

    WorkflowRuntime workflowRuntime = new WorkflowRuntime();

    TerminationTrackingService terminationService =
        new TerminationTrackingService();
    workflowRuntime.AddService(terminationService);

    WorkflowInstance instance =
        workflowRuntime.CreateWorkflow(typeof(MyWorkflow));
    instance.Start();
}

private static void EnsureEventLog(string eventSource, string eventLog)
{
    if (!EventLog.SourceExists(eventSource))
    {
        EventLog.CreateEventSource(eventSource, eventLog);
    }
}

If MyWorkflow is terminated either through the Terminate activity or through an unhandled exception, data describing this occurrence is written to the Workflow Log event log with the event source WinWF.

Figure 7-3 shows an example of an event written because of an exception that was manually thrown from inside a workflow.

image from book
Figure 7-3

This tracking service works by supplying the runtime with a tracking profile that is specifically interested in the event TrackingWorkflowEvent.Terminated. In addition, TerminationTrackingService gets a tracking channel that knows how to write the formatted data to the event log. This service is a simple but effective example of how workflow tracking can provide valuable information about workflow execution.
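
To make the idea concrete, a tracking profile that listens only for terminations could be built roughly as shown below. This is a sketch of the general technique rather than the sample's actual code; the version number is an arbitrary assumption.

// build a profile whose only track point is the Terminated workflow event
TrackingProfile profile = new TrackingProfile();
profile.Version = new Version("1.0.0");   // arbitrary profile version

WorkflowTrackingLocation location = new WorkflowTrackingLocation();
location.Events.Add(TrackingWorkflowEvent.Terminated);

WorkflowTrackPoint trackPoint = new WorkflowTrackPoint();
trackPoint.MatchingLocation = location;

profile.WorkflowTrackPoints.Add(trackPoint);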

The Workflow Loader Service

The abstract WorkflowLoaderService class serves as the base class for services that load and create new instances of workflows. In most cases, the out-of-the-box behavior will suffice, but this extensibility point enables you to take essentially any input and create a workflow from it. All that matters is that the custom loader can take the input and produce an activity tree.

Like the transaction and scheduler services, workflow loader services have a default type. DefaultWorkflowLoaderService is added automatically to the runtime if no other loader service is registered with the AddService method of WorkflowRuntime.
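
As a rough illustration of this extensibility point, a custom loader might look something like the sketch below. It simply mirrors the default behavior of creating the workflow's root activity, either from a Type or from a XAML definition; SimpleWorkflowLoaderService is a hypothetical name, and the override signatures assume the abstract members exposed by WorkflowLoaderService. Custom runtime services are covered in depth later in this chapter.

// a minimal sketch of a custom loader service (hypothetical class name)
public class SimpleWorkflowLoaderService : WorkflowLoaderService
{
    protected override Activity CreateInstance(Type workflowType)
    {
        // create the root activity of the workflow via reflection
        return (Activity)Activator.CreateInstance(workflowType);
    }

    protected override Activity CreateInstance(
        XmlReader workflowDefinitionReader, XmlReader rulesReader)
    {
        // deserialize a XAML-defined workflow into an activity tree
        // (rule definitions are ignored in this sketch)
        WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
        return (Activity)serializer.Deserialize(workflowDefinitionReader);
    }
}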

The Data Exchange Service

There isn’t a lot more to cover related to the ExternalDataExchangeService that wasn’t already covered in Chapter 5. Basically, this runtime service acts as a container for added local communication service instances. ExternalDataExchangeService itself is added to the workflow runtime through its AddService method, as with any other runtime service.
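
For reference, wiring up the data exchange service typically looks like the following sketch; TicketService is a hypothetical local communication service whose interface is marked with the ExternalDataExchange attribute.

// a minimal sketch; TicketService is a hypothetical local service whose
// interface carries the [ExternalDataExchange] attribute
WorkflowRuntime workflowRuntime = new WorkflowRuntime();

ExternalDataExchangeService dataExchangeService = new ExternalDataExchangeService();
workflowRuntime.AddService(dataExchangeService);

// local service instances are added to the data exchange service,
// not directly to the workflow runtime
dataExchangeService.AddService(new TicketService());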

See Chapter 5 for an extensive discussion of the data exchange service and local communication services.


