WebLogic JMS Application Design


In the previous section, we looked at the different throttling mechanisms WebLogic JMS provides to help offset temporary spikes in message production. In general, you need to design your messaging applications so that the consumers can keep up with the producers over long periods of time. If an application consistently produces messages faster than its consumers can process them, it will eventually fall so far behind that it runs into physical resource constraints, such as running out of memory or disk space. In this section, we will discuss some design considerations for messaging applications.

Choosing a Destination Type

When designing a JMS application, a commonly asked question is whether to use queues or topics. Trying to think in terms of destination type often leads to confusion. Instead, you should think about what type of messaging your application requires. In general, point-to-point style messaging should use queues while publish-and-subscribe style messaging should use topics. Of course, there is no hard and fast rule, and it is possible to use either type of destination with most applications if you are willing to do enough work. When using queues, some things to remember are

  • Each message will be processed by one consumer.

  • Messages will remain on the queue until they are consumed or expire.

  • Persistent messages are always persisted.

  • Using message selectors becomes expensive as the number of messages in the queue gets large.

When using topics, some things to remember are

  • Each message can be processed by every consumer.

  • Unless using durable subscriptions, messages will be processed only if at least one consumer is listening at the time the message is sent.

  • Persistent messages are persisted only when durable subscriptions exist.

  • Using message selectors with topics becomes expensive as the number of consumers gets large.

    Best Practice  

    Choose your destination type based on the type of messaging. Point-to-point messaging implies queues, and publish-and-subscribe messaging implies topics.

Locating Destinations

In WebLogic JMS, a destination physically resides on a single server. JMS provides two ways for an application to obtain a reference to a destination. You can look up the destination in JNDI and cast it to the appropriate destination type, Queue or Topic, or use the QueueSession.createQueue(destinationName) or TopicSession.createTopic(destinationName) methods to locate an existing queue or topic. These create methods require an application to pass destination names using a vendor-specific syntax. For WebLogic JMS, the syntax is jms-server-name/jms-destination-name; for example, to obtain a reference to a destination named queue1 that resides in the JMS server named JMSServer1, you would use a destination name of JMSServer1/queue1. When referring to a distributed destination, the JMS server name and forward slash should be omitted because a distributed destination spans JMS servers. We will talk more about distributed destinations later.
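For example, here is a minimal sketch of the createQueue() and createTopic() approach; it assumes the session objects already exist, and the distributed topic name is illustrative:

import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.QueueSession;
import javax.jms.Topic;
import javax.jms.TopicSession;

public class DestinationLocator {
    public static Queue locateQueue(QueueSession queueSession) throws JMSException {
        // WebLogic syntax: jms-server-name/jms-destination-name
        return queueSession.createQueue("JMSServer1/queue1");
    }

    public static Topic locateDistributedTopic(TopicSession topicSession) throws JMSException {
        // For a distributed destination, omit the JMS server name and the slash
        return topicSession.createTopic("myDistributedTopic");
    }
}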

Given that the createQueue() and createTopic() methods require specifying the WebLogic JMS server name, we feel that these methods are of limited value because we generally want to hide the location of the destination from the application. Any advantages that might be gained by using these methods rather than JNDI are likely to be offset by requiring the application to understand what JMS server the destination lives in. As a result, we recommend using JNDI to obtain references to JMS destinations. JNDI lookups, though, are relatively expensive so applications should attempt to look up JMS destinations once and cache them for reuse throughout the life of the application.
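As a rough sketch of the recommended approach, the following helper looks up a destination once via JNDI and caches it for reuse; the JNDI name, provider URL, and class name are illustrative:

import java.util.Hashtable;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class DestinationCache {
    private static Queue orderQueue;   // cached after the first lookup

    public static synchronized Queue getOrderQueue() throws NamingException {
        if (orderQueue == null) {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://localhost:7001");
            Context ctx = new InitialContext(env);
            try {
                // Look up once; subsequent callers get the cached reference
                orderQueue = (Queue) ctx.lookup("jms/OrderQueue");
            } finally {
                ctx.close();
            }
        }
        return orderQueue;
    }
}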

Best Practice  

Use JNDI to locate destinations. Caching and sharing JMS destinations throughout the application will help to minimize the impact of the JNDI lookup overhead.

Choosing the Appropriate Message Type

Choosing a message type is the second design choice you face when designing JMS applications. TextMessage is one of the more commonly used message types simply because of the type of data typically exchanged. As the popularity of XML increases, TextMessage popularity increases because the JMS specification does not explicitly define a message type for exchanging XML documents. Unfortunately, serializing a string is more CPU intensive than serializing other Java primitive types. Using strings as the message payload often implies that the receiver must parse the message in order to extract the data encoded in the string. WebLogic JMS also provides an XMLMessage type. The primary advantage of the XMLMessage type is the built-in support for running XPATH-style message selectors on the body of the message.

Let's take a minute to talk more about XML messages. Exchanging XML messages via JMS makes it easy to think about solving many age-old application integration problems. While JMS uses Java and Java already provides a platform-independent way of exchanging data, not all messaging applications are written in Java. Fortunately, many popular legacy messaging systems, such as IBM's WebSphere MQ, offer a JMS API in addition to their other language bindings. BEA is also providing a C API to WebLogic JMS as alpha code on its developer's Web site at http://dev2dev.bea.com/resourcelibrary/utilitiestools/environment.jsp . We hope that this will become a supported part of the product in the future. This solves the message exchange part of the problem.

XML solves the data exchange part of the problem by providing a portable, language-neutral format with which to represent structured data. As a result, it is not surprising to see many JMS applications using XML messages as their payload. Of course, the portability and flexibility of XML do not come without a cost. Not only are XML messages generally sent using TextMessage objects, which makes their serialization more costly, but they also generally require parsing the data in order to convert it into language object representations that the application can manipulate more easily. All of this requires the receivers to do more work just to get the message into a form where it can be processed.

This is not to say that you should avoid XML messages completely. XML is the format of choice for messages that cross application and/or organizational boundaries. When talking about applications here, we define an application as a program or set of programs that are designed, built, tested, and more importantly, deployed together as a single unit. What we want to caution you against is using XML message formats everywhere, even within the boundaries of a single application, just for the sake of using XML. Use XML where it makes sense, and use other binary representations where XML is not required.

Best Practice  

Use XML messages for inter-application messaging and binary messages for intra-application messaging.

When choosing the message type for an application, there are several things you should consider. A well-defined message should have just enough information for the receiver to process it. You should consider how often the application will need to send the message and whether the message needs to be persistent. Message size can have a considerable impact on network bandwidth, memory requirements, and disk I/O. Keep messages as small as possible without removing information needed to process the message efficiently.

Once you decide on the information to pass, use the simplest message type that can do the job. If possible, use a message format that is directly usable by the receiver, such as a MapMessage or ObjectMessage. If you need to pass a string message, converting it to a byte array and sending it using a BytesMessage may perform better than sending it as a string in a TextMessage, especially for large strings.
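As a rough illustration, the following sketch sends a large string as UTF-8 bytes in a BytesMessage and reads it back on the receiving side; the session, sender, and payload arguments are assumed to be supplied by the caller:

import java.io.UnsupportedEncodingException;
import javax.jms.BytesMessage;
import javax.jms.JMSException;
import javax.jms.QueueSender;
import javax.jms.QueueSession;

public class BytesMessageSender {
    public static void sendAsBytes(QueueSession session, QueueSender sender,
                                   String payload)
            throws JMSException, UnsupportedEncodingException {
        BytesMessage message = session.createBytesMessage();
        message.writeBytes(payload.getBytes("UTF-8"));   // send the string as raw bytes
        sender.send(message);
    }

    // Receiver side: read the bytes back and rebuild the string
    public static String readAsString(BytesMessage message)
            throws JMSException, UnsupportedEncodingException {
        byte[] buffer = new byte[(int) message.getBodyLength()];
        message.readBytes(buffer);
        return new String(buffer, "UTF-8");
    }
}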

If you have only a few primitive types to send in a message, try using a MapMessage instead of an ObjectMessage for better performance. As the number of fields gets larger, however, the mapping code itself can add complexity. You might find that the performance benefit is outweighed by the additional maintenance burden. MapMessage also provides some extensibility in the sense that you can add new name-value pairs without breaking existing consumers. When you are using an ObjectMessage, passing objects that implement Externalizable rather than Serializable can improve the marshalling performance, at the added cost of requiring you to provide implementations of the readExternal() and writeExternal() methods.
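For instance, a trade with a few primitive fields might be sent in a MapMessage like this; the field names are illustrative, and we assume a session and sender already exist:

import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.QueueSender;
import javax.jms.QueueSession;

public class TradeSender {
    public static void sendTrade(QueueSession session, QueueSender sender,
                                 String symbol, int quantity, boolean buy)
            throws JMSException {
        MapMessage message = session.createMapMessage();
        message.setString("symbol", symbol);
        message.setInt("quantity", quantity);
        message.setBoolean("buy", buy);
        // New name-value pairs can be added later without breaking existing consumers
        sender.send(message);
    }
}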

We should caution you about the implications of using an ObjectMessage to pass data across application boundaries. ObjectMessage uses Java serialization, which relies on the sender and the receiver having the same exact versions of the class available. This can lead to tightly coupled producers and consumers, even though you are using asynchronous messaging for communication. It might be possible to escape some of this coupling by using Externalizable objects, but this just means that your externalization code has to deal with object versioning.
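To make the Externalizable trade-off concrete, here is a sketch of a payload class that writes its fields by hand rather than relying on default serialization; the class name and fields are illustrative:

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class TradeOrder implements Externalizable {
    private String symbol;
    private int quantity;

    public TradeOrder() {
        // Externalizable requires a public no-argument constructor
    }

    public TradeOrder(String symbol, int quantity) {
        this.symbol = symbol;
        this.quantity = quantity;
    }

    public void writeExternal(ObjectOutput out) throws IOException {
        // Explicitly write each field; this is the code you must now maintain
        out.writeUTF(symbol);
        out.writeInt(quantity);
    }

    public void readExternal(ObjectInput in)
            throws IOException, ClassNotFoundException {
        // Fields must be read back in exactly the same order they were written
        symbol = in.readUTF();
        quantity = in.readInt();
    }
}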

Best Practice  

Use the simplest and smallest message type that is directly usable by the receiving application. Favor MapMessage when sending collections of primitive data types, and avoid ObjectMessage to reduce coupling between disparate systems.

Compressing Large Messages

XML messages tend to be larger than their binary counterparts. As you can imagine, larger messages will require more network bandwidth, more memory, and more importantly, more storage space and disk I/O to persist. If the messages are infrequent, this may not be an issue, but as the message frequency increases, the overhead will compound and start affecting the overall health of your messaging system. One way to reduce this impact is to compress large messages that carry strings as their payload. Compressed XML messages may actually provide a more compact representation of the data than any binary format.

The java.util.zip package provides everything you need to support message compression and decompression. The performance impact of message compression, however, is not clear-cut and needs to be evaluated on a case-by-case basis. When considering the use of compression, there are a number of things to weigh. First, are your messages big enough to warrant compression? Small messages do not generally compress very well, so using compression can, in some cases, actually increase the size of your message.
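A minimal sketch of this idea follows, compressing large XML payloads with java.util.zip before sending them as a BytesMessage; the size threshold, property name, and the assumption that a session and sender already exist are ours, and whether compression pays off must be measured for your own message mix:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;
import javax.jms.BytesMessage;
import javax.jms.JMSException;
import javax.jms.QueueSender;
import javax.jms.QueueSession;

public class CompressingSender {
    private static final int COMPRESSION_THRESHOLD = 10 * 1024;  // 10 KB, arbitrary

    public static void send(QueueSession session, QueueSender sender, String xml)
            throws JMSException, IOException {
        byte[] raw = xml.getBytes("UTF-8");
        BytesMessage message = session.createBytesMessage();
        if (raw.length > COMPRESSION_THRESHOLD) {
            // Compress only large payloads; small messages may actually grow
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            GZIPOutputStream gzip = new GZIPOutputStream(bos);
            gzip.write(raw);
            gzip.close();
            message.writeBytes(bos.toByteArray());
            message.setBooleanProperty("compressed", true);   // hint for the consumer
        } else {
            message.writeBytes(raw);
            message.setBooleanProperty("compressed", false);
        }
        sender.send(message);
    }
}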

Second, will the extra overhead of compressing and decompressing every message prevent your applications from meeting your performance and scalability requirements? Compressing messages can be thought of as a crude form of throttling because the compression step will slow down your message producers. Of course, this isn't necessarily a good thing because your consumers will also have to decompress the message before processing it.

Finally, will compression significantly reduce your application's network and memory resource requirements? If the producers and consumers are not running inside the same server as the destination, compressing the messages can reduce the network transfer time for messages. It can also reduce the memory requirements for your WebLogic JMS server. If the messages are persistent, it can also reduce the amount of disk I/O for saving and retrieving the messages. When the persistent store type is JDBC, it can reduce the network traffic between the WebLogic Server and the database. If the producers, consumers, and JMS server are all running inside the same WebLogic Server instance, many of these benefits may be outweighed by the additional CPU and memory overhead of compression and decompression.

Best Practice  

The decision on whether to use compression is something that needs to be carefully considered. If the producers, consumers, and destinations are collocated inside a WebLogic Server instance, it is generally better not to use compression.

Selecting a Message Acknowledgment Strategy

WebLogic JMS retains each message until the consumer acknowledges that it has received the message. Only at this point can WebLogic JMS remove the message from the server. Committing a transaction is one way for an application to acknowledge a message has been received. If transactions are not being used, an application uses message acknowledgments to acknowledge that a message, or set of messages, has been received. Message acknowledgments and transactions are mutually exclusive. If you specify both transactions and acknowledgments, WebLogic JMS will use transactions and ignore the acknowledgment mode.

Your application's message acknowledgment strategy can have a significant impact on performance and scalability. WebLogic JMS defaults to using AUTO_ACKNOWLEDGE mode. This means that WebLogic JMS will automatically acknowledge each message after the receiver processes it successfully. Using AUTO_ACKNOWLEDGE mode can reduce the chance of duplicate messages; however, it comes at a cost because the receiver's run time must send an acknowledgment message to the JMS server after each message to tell the server to remove the message.

If your application can tolerate duplicate messages, JMS defines the DUPS_OK_ACKNOWLEDGE mode to allow the receiver's run time to acknowledge the messages lazily. WebLogic Server 8.1 does not currently do anything special for DUPS_OK_ACKNOWLEDGE mode; this mode behaves exactly like AUTO_ACKNOWLEDGE mode. Another technique that gives you a little more control is using CLIENT_ACKNOWLEDGE mode to explicitly acknowledge groups of messages rather than each message individually. While message duplication is still possible, it typically occurs only because of a failure where your receiver had already processed some messages but had not yet acknowledged them. You could imagine building a strategy that tries to detect duplicate messages when starting up and/or recovering from a failure condition.
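A minimal sketch of the CLIENT_ACKNOWLEDGE batching idea follows; the batch size, receive timeout, and variable names are ours. Calling acknowledge() on a message acknowledges all messages received so far on that session:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueReceiver;
import javax.jms.QueueSession;
import javax.jms.Session;

public class BatchAcknowledger {
    public static void drain(QueueConnection connection, Queue queue)
            throws JMSException {
        QueueSession session =
            connection.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
        QueueReceiver receiver = session.createReceiver(queue);
        connection.start();

        int received = 0;
        Message lastMessage = null;
        Message message;
        while ((message = receiver.receive(1000)) != null) {   // 1-second timeout
            // ... process the message ...
            lastMessage = message;
            if (++received % 10 == 0) {
                lastMessage.acknowledge();   // acknowledges this message and all before it
            }
        }
        if (received % 10 != 0 && lastMessage != null) {
            lastMessage.acknowledge();       // acknowledge the final partial batch
        }
    }
}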

For stand-alone, nondurable subscribers, WebLogic JMS caches local copies of the message in the client JVM to optimize network overhead. Because this caching strategy also includes message acknowledgment optimizations, these subscribers really will not benefit from using the aforementioned CLIENT_ACKNOWLEDGE strategy; AUTO_ACKNOWLEDGE should perform equally well.

In addition to the standard JMS message acknowledgment modes, WebLogic JMS provides two additional acknowledgment modes through the weblogic.jms.WLSession interface:

NO_ACKNOWLEDGE.     This mode tells WebLogic JMS not to worry about message acknowledgments and simply provide a best-effort delivery of messages. In this mode, WebLogic JMS will immediately delete messages after they have been delivered, which can lead to both lost and duplicate messages. Applications that want to maximize performance and scalability and can tolerate both lost and duplicate messages should use this acknowledgment mode.

MULTICAST_NO_ACKNOWLEDGE.     This mode tells WebLogic JMS to use IP multicast to deliver messages to consumers. As the name implies, this mode has similar semantics to the NO_ACKNOWLEDGE mode.

We will talk more about using multicast sessions later in this section.
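Creating a session with one of these WebLogic-specific modes is a matter of passing the WLSession constant as the acknowledgment mode. A sketch, assuming the connection already exists:

import javax.jms.JMSException;
import javax.jms.QueueConnection;
import javax.jms.QueueSession;
import weblogic.jms.WLSession;

public class NoAckSessionFactory {
    public static QueueSession createNoAckSession(QueueConnection connection)
            throws JMSException {
        // Best-effort delivery: messages are deleted as soon as they are delivered,
        // so both lost and duplicate messages are possible
        return connection.createQueueSession(false, WLSession.NO_ACKNOWLEDGE);
    }
}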

Best Practice  

Applications that explicitly acknowledge sets of messages will generally be faster and more scalable than those that acknowledge each message individually. The trade-off is a somewhat higher chance of receiving duplicate messages after a failure.

Designing Message Selectors

As we discussed earlier in the JMS Key Concepts section, message selectors allow consumers to further specify the set of messages they want to receive. Consumers specify a logical statement using an SQL WHERE clause-like syntax that the JMS provider evaluates against each message s headers and/or properties to determine whether the consumer should receive the message. WebLogic JMS adds another type of selector for use with the WebLogic JMS XMLMessage type. With this message type, you can specify XPATH expressions that evaluate against the XML body of the message.
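For example, a consumer interested only in buy orders might register a selector on an application-defined property like this; the property name and queue are illustrative:

import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.QueueReceiver;
import javax.jms.QueueSession;

public class SelectiveConsumer {
    public static QueueReceiver createBuyReceiver(QueueSession session, Queue queue)
            throws JMSException {
        // Only messages whose OrderType property equals 'buy' will be delivered
        return session.createReceiver(queue, "OrderType = 'buy'");
    }
}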

In WebLogic JMS, all selector evaluation and filtering takes place on the JMS server, with the exception of multicast subscribers, which we will discuss later. For topics, WebLogic JMS evaluates each subscriber s message selector against every message published to the topic to determine whether to deliver the message to the subscriber.

For queues, the evaluation process is more complex. A message is always delivered to a queue, and WebLogic JMS will evaluate the message against each receiver s message selector until it finds a match. If no consumer s message selector matches the message, the message will remain in the queue. When a new consumer associates itself with the queue, WebLogic JMS will have to evaluate its message selector against each message in the queue. Because of this, message selectors generally perform better with topics than with queues. As with any such general statement, your mileage may vary depending on the exact circumstances in your application.

It is often better to split a destination into multiple destinations and eliminate the need for message selectors. For example, imagine an application that sends messages to the trade queue. If the application's consumers use message selectors to select only buy or sell orders, we can split the trade queue into buy and sell queues and eliminate the need for a message selector. When a producer sends a buy message, it sends the message directly to the buy queue, and only buy consumers need to listen to that queue. Partitioning the application in this way has other advantages besides performance. With this architecture, you can monitor each message type individually in each queue, and if performance does become an issue down the road, you can even separate the queues onto separate servers.

Best Practice  

Always evaluate the advantages and disadvantages of partitioning your application before deciding to use message selectors. Favor splitting destinations over the use of message selectors when there is a clear separation of message types.

Of course, there will be situations when using a message selector is unavoidable. In these situations, there are several things to keep in mind when designing your message selector strategy. First, what fields does your selector need to reference? Message header fields such as JMSMessageID, JMSTimestamp, JMSRedelivered, JMSCorrelationID, JMSType, JMSPriority, JMSExpiration, and JMSDeliveryTime, as well as application-defined message properties, are the fastest to access. Examining an XMLMessage message body adds a significant amount of overhead and, therefore, will be much slower.

Suppose that, in the previous example, the producer sends a message in XML format like the one shown here:

 <order type="buy">
     <symbol>beas</symbol>
     <quantity>5000</quantity>
 </order>

The consumers can use an XPATH message selector like this:

 JMS_BEA_SELECT('xpath', '/order/@type') = 'buy' 
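Attaching such a selector to a consumer works the same way as any other selector. A sketch, assuming the session and trade queue already exist:

import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.QueueReceiver;
import javax.jms.QueueSession;

public class XpathSelectorExample {
    public static QueueReceiver createBuyOrderReceiver(QueueSession session,
                                                       Queue tradeQueue)
            throws JMSException {
        // The XPATH expression is evaluated against the XMLMessage body on the server
        String selector = "JMS_BEA_SELECT('xpath', '/order/@type') = 'buy'";
        return session.createReceiver(tradeQueue, selector);
    }
}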

