9.2. Queued Calls

WCF provides support for queued calls using the NetMsmqBinding. Instead of transporting the message over TCP, HTTP, or IPC, WCF transports the message over MSMQ. WCF packages the WCF SOAP message into an MSMQ message and posts it to a designated queue. Note that there is no direct mapping of WCF messages to MSMQ messages, just as there is no direct mapping of WCF messages to TCP packets. A single MSMQ message can contain multiple WCF messages, or just a single one, according to the contract session mode, as discussed at length later on. Instead of sending the WCF message to a live service, the client posts the message to an MSMQ queue. All the client sees and interacts with is the queue, not a service endpoint. As a result, the calls are inherently asynchronous and disconnected: the calls will execute later on, when the service processes the messages (which makes them asynchronous), and the service or client may interact with local queues (which enables disconnected calls).

9.2.1. Queued Calls Architecture

As with every WCF service, the client interacts with a proxy, as shown in Figure 9-2.

Figure 9-2. Queued calls architecture


However, since the proxy is configured to use the MSMQ binding, it does not send the WCF message to any particular service. Instead, it converts the call (or calls) to an MSMQ message (or messages) and posts it to the queue specified in the endpoint's address. On the service side, when a service host with a queued endpoint is launched, the host installs a queue listener, conceptually similar to the listener associated with a port when using TCP or HTTP. The queue listener detects that there is a message in the queue, de-queues the message, and then creates the host side's chain of interceptors, ending with a dispatcher. The dispatcher calls the service instance as usual. If multiple messages are posted to the queue, the listener can create new instances as fast as the messages come off the queue, resulting in asynchronous, disconnected, and concurrent calls.

If the host is offline, messages will simply be pending in the queue. The next time the host is connected, the messages will be played to the service. Obviously, if both the client and the host are running and connected, the host will process the calls immediately.

9.2.2. Queued Contracts

A potentially disconnected call made against a queue cannot possibly return any values because no service logic is invoked at the time the message is dispatched to the queue. Not only that, but the call may be dispatched to the service and processed after the client application has shut down, when there is no client available to process the returned values. In much the same way, the call cannot return to the client any service-side exceptions, and there may not be a client around to catch and handle the exception anyway. In fact, WCF disallows using fault contracts on queued operations. Since the client cannot be blocked by invoking the operation, or rather, the client is only blocked for the briefest moment it takes to queue up the message, the queued calls are inherently asynchronous from the client's perspective. All of these are the classic characteristics of one-way calls. Consequently, any contract exposed by an endpoint that uses the NetMsmqBinding can only have one-way operations, and WCF verifies this at the service (and proxy) load time:

//Only one-way calls on queued contracts
[ServiceContract]
interface IMyContract
{
   [OperationContract(IsOneWay = true)]
   void MyMethod( );
}

Because the interaction with MSMQ is encapsulated in the binding, there is nothing in the service or client invocation code pertaining to the fact that the call is queued. The service and client code look like any other WCF client and service code, as shown in Example 9-1.

Example 9-1. Implementing and consuming a queued service

//////////////////////// Service Side ///////////////////////////
[ServiceContract]
interface IMyContract
{
   [OperationContract(IsOneWay = true)]
   void MyMethod( );
}
class MyService : IMyContract
{
   public void MyMethod( )
   {...}
}
//////////////////////// Client Side ///////////////////////////
MyContractClient proxy = new MyContractClient( );
proxy.MyMethod( );
proxy.Close( );

9.2.3. Configuration and Setup

When you define an endpoint for a queued service, the endpoint address must contain the queue's name and designation; that is, the type of the queue. MSMQ defines two types of queues: public and private. Public queues require an MSMQ domain controller installation and can be accessed across machine boundaries. Applications in production often require public queues due to their secure and disconnected nature. Private queues are local to the machine on which they reside and do not require a domain controller. Such a deployment of MSMQ is called a workgroup installation. During development, developers usually resort to a workgroup installation, with private queues that they set up and administer themselves. You designate the queue type (private or public) as part of the queued endpoint address:

<endpoint
   address  = "net.msmq://localhost/private/MyServiceQueue"
   binding  = "netMsmqBinding"
   ...
/>

In the case of a public queue, you can omit the public designator and have WCF infer the queue type. With private queues, you must include the designator. Also note that there is no $ sign in the queue's type.
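For comparison, here is a sketch of what an endpoint address for a public queue might look like; the queue name is an assumption, and note that the private designator is simply absent:

```xml
<endpoint
   address  = "net.msmq://localhost/MyPublicServiceQueue"
   binding  = "netMsmqBinding"
   ...
/>
```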

9.2.3.1. Workgroup installation and security

When you're using private queues in a workgroup installation, you must disable MSMQ security on the client and service sides. Chapter 10 discusses how to secure WCF calls, including queued calls. Briefly, the default MSMQ security configuration expects users to present certificates for authentication, and MSMQ certificate-based security requires an MSMQ domain controller. Alternatively, selecting Windows security for transport security over MSMQ requires Active Directory integration, which is not possible with MSMQ workgroup installation. For now, Example 9-2 shows how to disable MSMQ security.

Example 9-2. Disabling MSMQ security

<system.serviceModel>
   ...
   <endpoint name = ...
      address  = "net.msmq://localhost/private/MyServiceQueue"
      binding  = "netMsmqBinding"
      bindingConfiguration = "NoMSMQSecurity"
      contract = "IMyContract"
   />
   ...
   <bindings>
      <netMsmqBinding>
         <binding name = "NoMSMQSecurity">
            <security mode = "None">
            </security>
         </binding>
      </netMsmqBinding>
   </bindings>
</system.serviceModel>

9.2.3.2. Creating the queue

On both the service and the client side, the queue must exist before client calls are queued up against it. There are several options for creating the queue. The administrator (or the developer, during development) can use the MSMQ control panel applet to create the queue, but that is a manual step that should be automated. Alternatively, the host process can use the System.Messaging API to verify that the queue exists before opening the host. The MessageQueue class offers the Exists( ) method for verifying that a queue exists, and the Create( ) methods for creating a queue:

public class MessageQueue : ...
{
   public static MessageQueue Create(string path); //Nontransactional
   public static MessageQueue Create(string path,bool transactional);
   public static bool Exists(string path);
   public void Purge( );
   //More members
}

If the queue is not present, the host process can first create it and then proceed to open the host. Example 9-3 demonstrates this sequence.

Example 9-3. Verifying a queue on the host

ServiceHost host = new ServiceHost(typeof(MyService));

if(MessageQueue.Exists(@".\private$\MyServiceQueue") == false)
{
   MessageQueue.Create(@".\private$\MyServiceQueue",true);
}
host.Open( );

In the example, the host verifies against the MSMQ installation on its own machine that the queue is present before opening the host. If the queue is not present, the hosting code creates it. Note the use of the true value for creating a transactional queue, as discussed later on. Note also the use of the $ sign in the queue designation. The obvious problem with Example 9-3 is that it hardcodes the queue name. It is preferable to read the queue name from the application config file by storing it in an application setting. However, there are problems even with that approach. First, you have to constantly synchronize the queue name in the application settings and in the endpoint's address. Second, you have to repeat this code for every queued service. Fortunately, it is possible to encapsulate and automate the code in Example 9-3 in my ServiceHost<T>, as shown in Example 9-4.
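The interim approach just described (reading the queue name from an application setting) might look like the following sketch. The appSettings key name is an assumption, and this version still suffers from the synchronization and repetition problems noted above:

```csharp
//Assumed config entry:
//<add key = "MyServiceQueue" value = ".\private$\MyServiceQueue"/>
string queueName = ConfigurationManager.AppSettings["MyServiceQueue"];

if(MessageQueue.Exists(queueName) == false)
{
   MessageQueue.Create(queueName,true); //true creates a transactional queue
}
ServiceHost host = new ServiceHost(typeof(MyService));
host.Open( );
```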

Example 9-4. Creating the queues in ServiceHost<T>

public class ServiceHost<T> : ServiceHost
{
   protected override void OnOpening( )
   {
      foreach(ServiceEndpoint endpoint in Description.Endpoints)
      {
         QueuedServiceHelper.VerifyQueue(endpoint);
      }
      base.OnOpening( );
   }
   //More members
}
public static class QueuedServiceHelper
{
   public static void VerifyQueue(ServiceEndpoint endpoint)
   {
      if(endpoint.Binding is NetMsmqBinding)
      {
         string queue = GetQueueFromUri(endpoint.Address.Uri);
         if(MessageQueue.Exists(queue) == false)
         {
            MessageQueue.Create(queue,true);
         }
      }
   }
   //Parses the queue name out of the address
   static string GetQueueFromUri(Uri uri)
   {...}
}

In Example 9-4, ServiceHost<T> overrides the OnOpening( ) method of its base class. This method is called after you call the Open( ) method, but before the host actually opens. ServiceHost<T> iterates over the collection of configured endpoints. For each endpoint, if the binding used is NetMsmqBinding (that is, queued calls are expected), ServiceHost<T> calls the static helper class QueuedServiceHelper, passing in the endpoint and asking it to verify the queue. The VerifyQueue( ) method of QueuedServiceHelper parses the queue's name out of the endpoint's address and uses code similar to that in Example 9-3 to create the queue if needed.

Using ServiceHost<T>, Example 9-3 is reduced to:

ServiceHost<MyService> host = new ServiceHost<MyService>( );
host.Open( );

The client too must verify that the queue exists before dispatching calls to it. Example 9-5 shows the required steps on the client side.

Example 9-5. Verifying the queue by the client

if(MessageQueue.Exists(@".\private$\MyServiceQueue") == false)
{
   MessageQueue.Create(@".\private$\MyServiceQueue",true);
}
MyContractClient proxy = new MyContractClient( );
proxy.MyMethod( );
proxy.Close( );

Yet again, you should not hardcode the queue name; instead, read it from the application config file by storing it in an application setting. And yet again, you will face the challenge of keeping the queue name in the application settings synchronized with the endpoint's address. You can use QueuedServiceHelper directly on the endpoint behind the proxy, but that forces you to create the proxy (or a ServiceEndpoint instance) just to verify the queue. Instead, you can extend my QueuedServiceHelper to streamline and support client-side queue verification, as shown in Example 9-6.

Example 9-6. Extending QueuedServiceHelper to verify the queue on the client side

public static class QueuedServiceHelper
{
   public static void VerifyQueue<T>(string endpointName) where T : class
   {
      ChannelFactory<T> factory = new ChannelFactory<T>(endpointName);
      VerifyQueue(factory.Endpoint);
   }
   public static void VerifyQueue<T>( ) where T : class
   {
      VerifyQueue<T>("");
   }
   //Same as Example 9-4
   public static void VerifyQueue(ServiceEndpoint endpoint)
   {...}
   //More members
}

Example 9-6 adds two methods to QueuedServiceHelper. The version of the VerifyQueue<T>( ) method that takes an endpoint name uses the channel factory to read that endpoint from the config file, and then calls the VerifyQueue( ) method of Example 9-4. The version of VerifyQueue<T>( ) that takes no arguments uses the default endpoint of the specified contract type from the config file. Using QueuedServiceHelper, Example 9-5 is reduced to:

QueuedServiceHelper.VerifyQueue<IMyContract>( );

MyContractClient proxy = new MyContractClient( );
proxy.MyMethod( );
proxy.Close( );

Note that the client needs to verify the queue for each queued contract.
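For instance, a client that consumes two queued contracts needs two verification calls, one per contract. This is a sketch only; IMyOtherContract and its proxy class MyOtherContractClient are assumed to be defined and configured elsewhere:

```csharp
QueuedServiceHelper.VerifyQueue<IMyContract>( );
QueuedServiceHelper.VerifyQueue<IMyOtherContract>( );

MyContractClient proxy1 = new MyContractClient( );
proxy1.MyMethod( );
proxy1.Close( );

MyOtherContractClient proxy2 = new MyOtherContractClient( );
proxy2.MyOtherMethod( );
proxy2.Close( );
```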

9.2.3.3. Queue purging

When the host is launched, it may already have messages in queues, received by MSMQ while the host was offline, and the host will then start processing these messages. Dealing with this very scenario is one of the core features of queued services, enabling you to have disconnected services. While this is exactly the sort of behavior you would like when deploying a queued service, it is typically a hindrance in debugging. Imagine a debug session of a queued service. The client issues a few calls, the service begins processing the first call, and while stepping through the code you notice a defect. You stop debugging, change the service code, and relaunch the host, only to have it process the remaining messages in the queue from the previous debug session, even if those messages break the new service code. Usually, messages from one debug session should not seed the next one. The solution is to programmatically purge the queues when the host shuts down, in debug mode only. You can streamline this with my ServiceHost<T>, as shown in Example 9-7.

Example 9-7. Purging the queues on host shutdown during debugging

public static class QueuedServiceHelper
{
   public static void PurgeQueue(ServiceEndpoint endpoint)
   {
      if(endpoint.Binding is NetMsmqBinding)
      {
         string queueName = GetQueueFromUri(endpoint.Address.Uri);
         if(MessageQueue.Exists(queueName) == true)
         {
            MessageQueue queue = new MessageQueue(queueName);
            queue.Purge( );
         }
      }
   }
   //More members
}
public class ServiceHost<T> : ServiceHost
{
   protected override void OnClosing( )
   {
      PurgeQueues( );
      //More cleanup if necessary
      base.OnClosing( );
   }
   [Conditional("DEBUG")]
   void PurgeQueues( )
   {
      foreach(ServiceEndpoint endpoint in Description.Endpoints)
      {
         QueuedServiceHelper.PurgeQueue(endpoint);
      }
   }
   //More members
}

In the example, the QueuedServiceHelper class offers the static method PurgeQueue( ), which accepts a service endpoint. If the binding used by that endpoint is NetMsmqBinding, PurgeQueue( ) extracts the queue's name out of the endpoint's address, creates a new MessageQueue object, and purges it. ServiceHost<T> overrides the OnClosing( ) method, which is called when the host shuts down gracefully, and calls the private PurgeQueues( ) method. PurgeQueues( ) is marked with the Conditional attribute, using DEBUG as the condition. This means that while the body of PurgeQueues( ) always compiles, its call sites are conditioned on the DEBUG symbol: in debug builds only, OnClosing( ) will actually call PurgeQueues( ). PurgeQueues( ) iterates over all the endpoints of the host, calling QueuedServiceHelper.PurgeQueue( ) for each one.

The Conditional attribute is the preferred way in .NET for using conditional compilation and avoiding the pitfalls of explicit conditional compilation with #if.
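The effect of the Conditional attribute is easy to demonstrate outside WCF. In this minimal self-contained sketch, the DEBUG symbol is defined explicitly at the top of the file (normally the build configuration supplies it), so the call site of the conditional method is compiled in:

```csharp
#define DEBUG //defined here for illustration; normally supplied by the debug build

using System;
using System.Diagnostics;

static class Program
{
   public static int callCount;

   [Conditional("DEBUG")]
   static void DebugOnly( )
   {
      callCount++;
   }

   static void Main( )
   {
      //The compiler emits this call only because DEBUG is defined;
      //remove the #define, and the call site (not the method body) disappears
      DebugOnly( );
      Console.WriteLine(callCount);
   }
}
```

Unlike wrapping each call site in #if DEBUG ... #endif, the attribute keeps the call sites clean while the compiler removes them automatically when the symbol is undefined.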


9.2.3.4. Queues, services, and endpoints

WCF requires you to always dedicate a queue per endpoint for each service; that is, a service with two contracts needs two queues for the two corresponding endpoints:

<service name  = "MyService">
   <endpoint
      address  = "net.msmq://localhost/private/MyServiceQueue1"
      binding  = "netMsmqBinding"
      contract = "IMyContract"
   />
   <endpoint
      address  = "net.msmq://localhost/private/MyServiceQueue2"
      binding  = "netMsmqBinding"
      contract = "IMyOtherContract"
   />
</service>

The reason is that the client actually interacts with a queue, not a service endpoint; in fact, there may not even be a service at all, only a queue. Two distinct endpoints cannot share a queue, because they will get each other's messages. Since the WCF messages inside the MSMQ messages will not match, WCF will silently discard the messages it deems invalid, and you will lose those calls. In much the same way, two polymorphic endpoints on two services cannot share a queue, because each will get the other's messages.

9.2.3.5. Exposing metadata

WCF cannot exchange metadata over MSMQ. Consequently, it is customary even for a service that will always have only queued calls to also expose a MEX endpoint or to enable metadata exchange over HTTP-GET, because the service's clients still need a way to retrieve the service description and bind against it.
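One common way to do this is to add an HTTP base address to the service and enable HTTP-GET metadata through a service behavior alongside the queued endpoint. The following config sketch shows the idea; the base address, port, and behavior name are assumptions:

```xml
<service name = "MyService" behaviorConfiguration = "MEXGET">
   <host>
      <baseAddresses>
         <add baseAddress = "http://localhost:8000/"/>
      </baseAddresses>
   </host>
   <endpoint
      address  = "net.msmq://localhost/private/MyServiceQueue"
      binding  = "netMsmqBinding"
      contract = "IMyContract"
   />
</service>
...
<behaviors>
   <serviceBehaviors>
      <behavior name = "MEXGET">
         <serviceMetadata httpGetEnabled = "true"/>
      </behavior>
   </serviceBehaviors>
</behaviors>
```

With this in place, clients can retrieve the service description over HTTP-GET even though all the actual calls travel over MSMQ.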




Programming WCF Services
ISBN: 0596526997
Year: 2007
Pages: 148
Authors: Juval Lowy
