11.4 The Forward Cache Integration Pattern


The Forward Cache integration pattern allows data to be proactively pushed from the backend data sources to a data cache service that is located closer to the presentation tier. Caching data closer to the presentation services improves performance and can allow continuous access when backend sources are offline. An ESB can be used to intelligently and reliably route data from the backend sources to the cache service. Although this pattern is discussed in the context of an enterprise portal, it can be applied generally to any situation that requires the collection and aggregation of data across a diverse set of distributed applications and other data sources.

Two forms of populating the cache from the backend applications will be discussed in this section:

  • Using reliable publish-and-subscribe messaging

  • Using ESB process definitions to "skim" the data from the process and forward it to the cache by inserting fan-out services into key points within the process itineraries

Both variants of the Forward Cache pattern rely on the concept of data forwarding, which is the act of making a copy of a message and propagating it to a data caching service.

11.4.1 Data Forwarding Using Publish-and-Subscribe

The basic model of the forward cache is quite simple. Using persistent messages on publish-and-subscribe channels with durable subscriptions, an ESB can reliably forward change notifications to the cache, as illustrated in Figure 11-17. The cache service can be implemented using the XML storage service (introduced in Chapter 7), which caches and aggregates XML data.

Figure 11-17. Applications plugged into an ESB send data on pub/sub topics to be consumed by a portal cache


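As a concrete illustration of the durable-subscription approach, the following sketch shows how a cache service might register for change notifications using the plain JMS API. This is not any particular ESB's API; the connection factory, the client ID, the topic name portal.cache.updates, and the subscription name are hypothetical, and a real ESB would typically hide these details behind its endpoint configuration.

    import javax.jms.*;

    // A minimal sketch of a cache service registering a durable subscription
    // for change notifications. Names used here are hypothetical.
    public class CacheUpdateListener implements MessageListener {

        public void start(ConnectionFactory factory) throws JMSException {
            Connection connection = factory.createConnection();
            connection.setClientID("portal-cache");   // durable subscriptions require a client ID
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("portal.cache.updates");

            // A durable subscriber receives notifications published while the cache was
            // offline, provided the publishers use the PERSISTENT delivery mode.
            TopicSubscriber subscriber = session.createDurableSubscriber(topic, "portal-cache-sub");
            subscriber.setMessageListener(this);
            connection.start();
        }

        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    String xml = ((TextMessage) message).getText();
                    // Hand the XML change notification off to the cache/XML storage service here.
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }
    }
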
Because the ESB is coordinating the data flow between the applications and the cache service instead of just being used as a message bus, a variety of backend technologies can participate in publishing their data using the connection interface that best suits their needs.

The key to this pattern is creating and managing the change notifications in remote applications and regularly routing them to the cache. Trying to predict the most commonly queried data can be difficult. If the cache service has the ability to expire unused information, the cache can simply subscribe to a broad range of topics and allow the unused data to expire.
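For example, a cache service might apply a simple time-to-live policy to entries that have not been read recently. The class below is a minimal, hypothetical sketch of that expiry idea only; it is not the XML storage service itself, and the names (ExpiringCache, ttlMillis) are invented for illustration.

    import java.util.Iterator;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of time-based expiry: cached documents that have not been
    // read within a configurable time-to-live are evicted, so broad subscriptions
    // do not cause the cache to grow without bound.
    public class ExpiringCache {

        private static class Entry {
            final String xml;
            volatile long lastAccess = System.currentTimeMillis();
            Entry(String xml) { this.xml = xml; }
        }

        private final Map<String, Entry> entries = new ConcurrentHashMap<String, Entry>();
        private final long ttlMillis;

        public ExpiringCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

        public void put(String key, String xml) { entries.put(key, new Entry(xml)); }

        public String get(String key) {
            Entry e = entries.get(key);
            if (e == null) return null;
            e.lastAccess = System.currentTimeMillis();   // reading an entry keeps it alive
            return e.xml;
        }

        // Called periodically (for example, from a timer) to drop unused entries.
        public void expireUnused() {
            long cutoff = System.currentTimeMillis() - ttlMillis;
            for (Iterator<Entry> it = entries.values().iterator(); it.hasNext();) {
                if (it.next().lastAccess < cutoff) {
                    it.remove();
                }
            }
        }
    }
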

Because an ESB is used to integrate the backend applications, each application interface can be designed to simply post data to the endpoint that the ESB exposes. The fact that a publish-and-subscribe channel is being used to deliver the data is a configuration detail that is managed through ESB administrative tools rather than being coded into each application. The type of channel and the details of how the data gets routed can be adjusted over time based on analysis of usage and data access patterns. The result is that many requests for information can be satisfied directly from the cache, without needing to make a request to the backend data sources (Figure 11-18).

Figure 11-18. The portal server communicates with the portal cache through the ESB using a synchronous request/reply messaging pattern

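Figure 11-18 depicts the portal server querying the cache synchronously over the bus. A minimal sketch of that request/reply exchange at the JMS level might look like the following; the queue name portal.cache.query, the five-second timeout, and the XML query payload are assumptions for illustration rather than part of any particular product.

    import javax.jms.*;

    // Hypothetical sketch of the portal server querying the portal cache over the
    // bus using a synchronous request/reply exchange.
    public class CacheQueryClient {

        public String query(ConnectionFactory factory, String xmlQuery) throws JMSException {
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue requestQueue = session.createQueue("portal.cache.query");
                TemporaryQueue replyQueue = session.createTemporaryQueue();

                TextMessage request = session.createTextMessage(xmlQuery);
                request.setJMSReplyTo(replyQueue);          // tell the cache where to send the result
                session.createProducer(requestQueue).send(request);

                connection.start();
                // Block until the cache replies, or give up after five seconds.
                Message reply = session.createConsumer(replyQueue).receive(5000);
                return (reply instanceof TextMessage) ? ((TextMessage) reply).getText() : null;
            } finally {
                connection.close();
            }
        }
    }
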

The data being forwarded from backend data sources can be organized into topic trees, using publish-and-subscribe channels to carry the data. This allows additional backend data sources to be easily joined into the data-sharing network by selectively publishing to a particular branch within the topic hierarchy. Because the data channels are designed this way, the portal cache can also subscribe to specific categories of data using wildcard subscriptions (Figure 11-19). If the portal cache will be used to aggregate data, it would likely use a broader subscription mask, such as Asset_Class.*, to receive everything that is published under that topic namespace (using the Asset_Class topic hierarchy example from Chapter 5), or it may narrow the selection by specifying the Asset_Class.#.Domestic.# and Asset_Class.#.International.# topic nodes individually. The individual systems on the backend may be much more selective, perhaps subscribing only to the Asset_Class.#.Domestic.# topic node. This selectivity allows you to reduce the overhead of the cache by limiting its scope.

Figure 11-19. Wildcard subscriptions on hierarchical topic trees allow a cache to selectively listen for data that is being passed between backend applications on pub/sub channels


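To make the subscription masks concrete, the following sketch registers the two kinds of subscribers just described. Note that the JMS specification itself does not define wildcard syntax; the * and # forms used here follow the convention of the Asset_Class examples and are interpreted by the message broker, so the exact syntax is provider-specific. The listener objects are assumed to be supplied by the caller.

    import javax.jms.*;

    // Sketch of the broad cache subscription versus a more selective backend
    // subscription on the Asset_Class topic tree. Wildcard syntax is broker-specific.
    public class TopicTreeSubscriptions {

        public void subscribe(Session session, MessageListener cacheListener,
                              MessageListener domesticListener) throws JMSException {
            // The portal cache aggregates broadly: everything under Asset_Class.
            Topic everything = session.createTopic("Asset_Class.*");
            session.createConsumer(everything).setMessageListener(cacheListener);

            // A backend system is more selective: only domestic asset data.
            Topic domesticOnly = session.createTopic("Asset_Class.#.Domestic.#");
            session.createConsumer(domesticOnly).setMessageListener(domesticListener);
        }
    }
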
The use of publish-and-subscribe channels in this type of pattern is another reason why a MOM is core to an ESB. However, this couldn't be accomplished with a MOM alone. Note that because an ESB is being used, one of the publishers in Figure 11-19 is actually an external web service.

11.4.2 Data Forwarding Using Itinerary-Based Routing

An alternative to the publish-and-subscribe method of populating the cache is to use ESB process definitions to route messages to the cache service. The process definitions can use topics, queues, or any other type of transport that can have an endpoint defined for it. As illustrated in Figure 11-20, a process itinerary can use a fan-out service to route data to another application and a portal cache in parallel. The fan-out service makes a copy of the message and sends it to an alternate endpoint.

Figure 11-20. A process itinerary uses a fan-out service to route data to another application and a portal cache in parallel


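The fan-out service itself is provided by the ESB, and the way it is configured into an itinerary is vendor-specific. Purely as a conceptual sketch, the following code expresses the same idea in plain JMS: the incoming message continues along its normal path while a copy is forwarded to a cache endpoint. The destination names process.next.step and portal.cache.updates are hypothetical, and in a real itinerary the next step would be determined by the process definition rather than hardcoded.

    import javax.jms.*;

    // Conceptual sketch of a fan-out step: the original message proceeds to the
    // next step in the process, and a copy is routed to the portal cache.
    public class FanOutStep {

        public void process(Session session, TextMessage incoming) throws JMSException {
            Destination nextStep   = session.createQueue("process.next.step");
            Destination cacheEntry = session.createQueue("portal.cache.updates");

            // Copy the payload (and any properties you need) into a fresh message.
            TextMessage copy = session.createTextMessage(incoming.getText());

            session.createProducer(nextStep).send(incoming);    // normal itinerary path
            session.createProducer(cacheEntry).send(copy);      // forwarded to the cache
        }
    }
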
A combination of process itineraries and publish-and-subscribe could also be used: the exit endpoint of the fan-out service could be configured to use a pub/sub channel, making publish-and-subscribe the means by which the data is fanned out.

11.4.3 Other Considerations of the Forward Cache Pattern

The forward cache, as previously presented, is a relatively simple and straightforward approach if the only purpose of integrating the backend systems is to feed the portal with data. But it is highly likely that the backend systems could benefit in other ways by integrating and sharing data with each other. How does one take advantage of the effort put into the portal project and leverage that work into a more generic integration solution?

11.4.3.1 Integration first, portal second

Perhaps a better way to ask that question is "How can an integration project also be capable of populating caching services for access by enterprise portals?" One of the key benefits of the Forward Cache pattern is that it can be a side capability that is added onto an existing ESB integration strategy for backend data sources. The enterprise portal initiative may be the project that drives the backend integration, but it is likely part of a larger integration project. In that respect, you should be thinking about designing the integration strategy based on how the backend data sources and applications might best interact and share data with each other.

If the backend applications are first integrated with each other using an ESB, then populating the portal cache with data can be accomplished by inserting listeners into the existing integration pathways that forward copies of message data directly to the cache update endpoint.

If the data sharing is being done by posting messages to publish-and-subscribe channels, the portal cache can easily join in the data sharing network by subscribing to those channels. If the integration and data flow are being done using more formal business process definitions, the portal cache can be populated by inserting fan-out services into key business processes that intercept a message and route a copy of it to the cache.


