9.3 Case Study: Migrating Toward Real-Time Integration

   

The accidental architecture has a technological brittleness that makes it difficult to migrate away from, and that brittleness has organizational consequences as well. Making sweeping changes to the underlying integration infrastructure can be a daunting task.

Organizational and departmental coordination can be just as difficult as any technological solution. In an integration environment consisting of tightly coupled, point-to-point interfaces or time-dependent batch-transfer operations, getting all of the individual owners of the applications to agree upon and coordinate the changes required to move to a new integration infrastructure can be difficult and time-consuming. And even if you can get the application owners into the same room and agree on all the details, coordinating the development schedules and the deployment of upgrades to each individual application can be next to impossible.

An ESB allows you to migrate away from the accidental architecture incrementally, and in a fashion that fits the pace of each application's development group. In Chapter 4, we saw an example of how XML, CBR, and data transformation can provide a platform for independent migration. Here we will further explore how asynchronous communication and custom ESB service containers can provide a means for migrating away from the accidental architecture. We will do this by examining a case study of a company with a rather extreme accidental architecture.

VCR Corporation is a manufacturing company that relies exclusively on ETL for its end-to-end supply chain and logistics processing, and, predictably, suffers from all of the reliability, downtime, and overall latency problems just described. As illustrated in Figure 9-7, the Product Master application and the Master Product Index database are synchronized each night with data from a number of its key applications, including Inventory, Shipping, Distribution Logistics, Supply Chain, CRM/SFA, Accounting/GL, and Order Management. Each night the data is gathered into the Product Master application and redistributed to all the other applications using an ETL process similar to the one described earlier.

Figure 9-7. Integration using ETL and nightly FTP batch transfers
figs/esb_0907.gif


Once the data is gathered into the Master Product Index, a database export is performed to produce a number of flat files. Through a series of scripts and programs, the data is transformed and filtered for each target application. The flat files are then delivered to each application using an FTP file transfer. Figure 9-8 shows the pattern sketch for this part of the process. Often, the data is transferred to the destination and then transformed and filtered there.

Figure 9-8. ETL using FTP file transfer
figs/esb_0908.gif


Migrating away from this scenario toward an integration using an ESB can provide the following benefits:

  • Data sharing in real time.

  • Broadcast of data to multiple targets using a publish/subscribe messaging model.

  • Data skimming of messages to support new Business Activity Monitoring (BAM) applications.

  • SOA that allows each application to be viewed through an abstract event-driven service endpoint and to be referenced by choreography and process modeling tools.

  • Selective filtering of data using message channels and message filters. (This allows the routing of business data to be controlled by the ESB, instead of being hard-coded into each application or hand-coded into batch scripts.)

  • Reliable delivery of data using asynchronous store-and-forward features inherent in the ESB (minimizing the need for lengthy reconciliation).

  • Secured access using ACLs.

  • Centralized management facilities: administrative configuration control over data channels and ACL security using a management tool instead of coding into applications and ETL scripts.

During the normal course of work, each application will be able to keep the Product Master and other interested applications up to date in real time by posting its data to the bus as updates occur, instead of waiting for a nightly batch operation. Each application will also be able to selectively receive data as it is posted to the bus by other applications.
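
To make this concrete, here is a minimal sketch of an application posting an update to the bus as it occurs, assuming plain JMS (javax.jms) as the underlying messaging API. The JNDI names, topic name, payload, and the APP_SOURCE property are illustrative assumptions, not part of VCR Corp.'s actual design.

    import javax.jms.*;
    import javax.naming.InitialContext;

    /** Sketch: an application posts a product update to the bus as soon as it occurs. */
    public class ProductUpdatePublisher {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            // Assumed JNDI names; a real ESB deployment supplies its own.
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Topic updates = (Topic) jndi.lookup("product.updates");

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(updates);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT); // store-and-forward reliability

                // A self-describing XML payload; other applications subscribe to it selectively.
                TextMessage message = session.createTextMessage(
                        "<ProductUpdate sku='VCR-1042'><QuantityOnHand>312</QuantityOnHand></ProductUpdate>");
                message.setStringProperty("APP_SOURCE", "Inventory"); // used later for selective filtering

                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }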

9.3.1 Inserting the ESB

The process by which VCR Corp. can migrate away from its existing means of file-based integration and toward a real-time exchange of business data begins by putting an ESB in the place of the FTP file transfer (Figure 9-9). Services will read and write data at file-drop points in order to mimic FTP behavior.

Figure 9-9. Replace FTP transfer with ESB message-based data transfer, preserving flat file interface
figs/esb_0909.gif


Figure 9-10 shows the pattern sketch of this process. The ESB acts as an intermediary that performs the nightly data transfer instead of using the FTP link. The nightly export from the Master Product Index database will still need to occur during this intermediate migration phase. The data can be fed into the ESB by having an ESB file-drop service read from the flat file that was produced by the export. The flat file data is then transferred across the bus using reliable messaging to each of the other applications, and is deposited as flat files in their respective FTP destination directories.

Figure 9-10. ESB file-drop service replaces FTP file transfer
figs/esb_0910.gif
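
As a rough sketch of the sending side of such a file-drop service, the code below reads the exported flat file and ships it across the bus as a single persistent message. The file path, queue name, and FILENAME property are assumptions; a real ESB would provide this as a configurable service rather than hand-written code.

    import java.nio.file.*;
    import javax.jms.*;
    import javax.naming.InitialContext;

    /** Sketch: sending side of a file-drop service picks up the nightly export and puts it on the bus. */
    public class FileDropSender {
        public static void main(String[] args) throws Exception {
            Path exportFile = Paths.get("/data/export/master_product_index.dat"); // assumed drop point
            byte[] contents = Files.readAllBytes(exportFile);

            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Queue channel = (Queue) jndi.lookup("product.master.export"); // assumed channel name

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageProducer producer = session.createProducer(channel);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survives broker restarts

                BytesMessage message = session.createBytesMessage();
                message.writeBytes(contents);
                message.setStringProperty("FILENAME", exportFile.getFileName().toString());

                producer.send(message);
                session.commit(); // the file is handed off only if the transaction commits
            } finally {
                connection.close();
            }
        }
    }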


At the receiving endpoints, another file service receives the message as an asynchronous event and writes the file onto disk in the FTP directory. The application is set up to poll a directory periodically to look for newly deposited files.
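
The receiving side can be sketched in the same spirit: an asynchronous MessageListener that writes each incoming message into the pickup directory the application already polls. Again, the directory and queue names are placeholders.

    import java.nio.file.*;
    import javax.jms.*;
    import javax.naming.InitialContext;

    /** Sketch: receiving side of the file-drop service writes each message into the app's pickup directory. */
    public class FileDropReceiver implements MessageListener {
        private final Path pickupDir = Paths.get("/apps/inventory/ftp-in"); // directory the app already polls

        public void onMessage(Message message) {
            try {
                BytesMessage bytes = (BytesMessage) message;
                byte[] contents = new byte[(int) bytes.getBodyLength()];
                bytes.readBytes(contents);
                String name = bytes.getStringProperty("FILENAME");
                // Write to a temporary name first, then rename, so the polling
                // application never sees a partially written file.
                Path temp = pickupDir.resolve(name + ".part");
                Files.write(temp, contents);
                Files.move(temp, pickupDir.resolve(name), StandardCopyOption.ATOMIC_MOVE);
            } catch (Exception e) {
                throw new RuntimeException(e); // lets the provider redeliver the message
            }
        }

        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Queue channel = (Queue) jndi.lookup("product.master.export");

            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createConsumer(channel).setMessageListener(new FileDropReceiver());
            connection.start();            // begin asynchronous delivery
            Thread.currentThread().join(); // keep the service running
        }
    }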

This provides the underpinnings for migrating toward a real-time exchange of live business data between all applications. So far, no changes have been required for any of the applications. The applications themselves are unaware that anything has changed, as they continue to read, write, and process the flat files the same way they always have.

There are also immediate benefits to making this change. It has removed one of the four main problems of the ETL solution: the unreliable FTP link. Now that the data is being transferred over message channels through the ESB, it is the responsibility of the ESB to get the data to where it needs to go, ensuring reliability and transactional integrity using the store-and-forward techniques discussed in Chapter 5.

9.3.2 Transforming and Routing the Data

In order to put the ESB to use, data channels, data transformations, and process flows need to be established.

9.3.2.1 Structured message channels

Once the ESB is inserted, you can start setting up structured data channels for each application. Not all of the applications need to see all of the data. In the pre-ESB stage, there may have been some data filtering during the export process via some SQL queries, or some filtering on the exported datafile via handwritten scripts. Now that the applications are plugged into the bus, data can be selectively dispatched through distinct routing paths to just the applications that need to see it. Initially, this can be done simply by routing the data over reliable publish-and-subscribe and point-to-point message channels (Figure 9-11).

Figure 9-11. Dispatching of data using reliable messaging channels over pub/sub or point-to-point queues
figs/esb_0911.gif


Through the underlying MOM, the data can be broadcast to the other interested applications using a publish-and-subscribe messaging model. Using unique subscription channels and filtering via message selectors, data can be segregated and distributed so that each application receives only the data that it wants to see.
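
A minimal sketch of that kind of selective subscription, using a JMS message selector over a message property, appears below. The topic name and the APP_SOURCE property match the hypothetical examples above and are not taken from VCR Corp.'s actual design.

    import javax.jms.*;
    import javax.naming.InitialContext;

    /** Sketch: the Shipping application subscribes only to the product data it cares about. */
    public class SelectiveSubscriber {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Topic updates = (Topic) jndi.lookup("product.updates");

            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // The selector is evaluated by the messaging layer, so messages that
            // don't match never reach this application at all.
            String selector = "APP_SOURCE IN ('Inventory', 'OrderManagement')";
            MessageConsumer consumer = session.createConsumer(updates, selector);
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    try {
                        System.out.println("Shipping received: " + ((TextMessage) message).getText());
                    } catch (JMSException e) {
                        throw new RuntimeException(e);
                    }
                }
            });

            connection.start();
            Thread.currentThread().join(); // keep the subscriber running
        }
    }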

One of the many benefits of having a MOM as a core part of the ESB is that you can take advantage of the routing capabilities that are built into the messaging layer. A major advantage of using an ESB rather than just a message bus to do this is that the definitions of the publish-and-subscribe and point-to-point channels and the ACL security can all be configured in the ESB service using administrative tools, instead of being coded into the applications themselves.

9.3.2.2 Assigning process definitions

The next step in the migration toward real-time integration is to create specific ESB process definitions. This allows more selective control of business process flow across applications and services. As part of this step, the batch-scripted transformation and filtering is converted to use specialized ESB transformation services. Figure 9-12 shows the pattern sketch of this stage in the migration.

Figure 9-12. ESB process definition with transformation and routing as services on the bus
figs/esb_0912.gif
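
Process definitions themselves are configured with the ESB's own tooling, but the transformation step can be sketched with standard JAXP: a service that applies an XSLT stylesheet to each inbound message and forwards the result to the next channel in the process. The stylesheet and channel names here are invented for illustration.

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.jms.*;
    import javax.naming.InitialContext;
    import javax.xml.transform.Templates;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    /** Sketch: a transformation service applies XSLT to the message body and passes it along. */
    public class TransformationService implements MessageListener {
        private final Session session;
        private final MessageProducer nextStep;
        private final Templates stylesheet;

        TransformationService(Session session, MessageProducer nextStep) throws Exception {
            this.session = session;
            this.nextStep = nextStep;
            this.stylesheet = TransformerFactory.newInstance()
                    .newTemplates(new StreamSource("product-to-inventory.xsl")); // assumed stylesheet
        }

        public void onMessage(Message message) {
            try {
                String xml = ((TextMessage) message).getText();
                StringWriter out = new StringWriter();
                stylesheet.newTransformer()
                          .transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
                nextStep.send(session.createTextMessage(out.toString())); // hand off to the next step
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer out = session.createProducer((Queue) jndi.lookup("transform.out"));
            session.createConsumer((Queue) jndi.lookup("transform.in"))
                   .setMessageListener(new TransformationService(session, out));
            connection.start();
            Thread.currentThread().join();
        }
    }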


9.3.3 Considerations

As you migrate toward a real-time integration using an ESB, there are important things to consider regarding how you structure and transfer data.

9.3.3.1 Streamlined data flow

As data exchange moves toward a near real-time occurrence, the way in which data is routed may change. In the batch processing model, data was overcommunicated. The Master Product Index needed to broadcast as much information as it possibly could in one shot because of the 24-hour interval between data exchanges. This forced a model in which data was disseminated more than it needed to be. With the introduction of the ESB, multiple applications can work on the same message data in an ordered process flow. Each application can process the message data as soon as it arrives, and pass it along to the next step in the process in a timely fashion. Therefore, it is not necessary to broadcast all information to all applications. A single message can carry the entire context of a particular business transaction, and it may travel from one place to the next along an ordered process flow.

As the FTP links are eventually removed and replaced with a more direct integration into the ESB container, the message itineraries and underlying message channels can be modified to meet the requirements of the new streamlined processing model.

9.3.3.2 Migrating to XML

Although not shown in these examples, part of the migration path can be the insertion of transformation services that convert the data from its current fixed format to structured XML data that is in a canonical form. This allows the benefits of common, reusable service definitions as described in Chapter 4 and Chapter 6.

9.3.3.3 Message-based atomicity

In an ESB-enabled application, data is posted directly from the application to the bus in the form of a message. The application can construct a fairly rich XML document that encompasses all of the known state information about the current business transaction. XML documents can be structured and hierarchical, and can therefore represent relationships and subcomponents that describe a business transaction, such as a purchase order and all its line items, its billing information, and so on.
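
For illustration only, such a message might be assembled and posted along these lines. The document structure, queue name, and field values are hypothetical, not a schema from this case study.

    import javax.jms.*;
    import javax.naming.InitialContext;

    /** Sketch: one message carries the entire context of a purchase-order transaction. */
    public class PurchaseOrderSender {
        public static void main(String[] args) throws Exception {
            // A self-contained, self-describing unit of work: the order, its line
            // items, and its billing information all travel together.
            String purchaseOrder =
                    "<PurchaseOrder id='PO-8841'>" +
                    "  <BillTo account='ACME-112'><Terms>Net30</Terms></BillTo>" +
                    "  <LineItem sku='VCR-1042' quantity='25' unitPrice='49.95'/>" +
                    "  <LineItem sku='VCR-2205' quantity='10' unitPrice='89.00'/>" +
                    "  <Status>SUBMITTED</Status>" + // transient state can ride along with the message
                    "</PurchaseOrder>";

            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Queue orders = (Queue) jndi.lookup("order.management.in"); // assumed channel

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(orders);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage(purchaseOrder));
            } finally {
                connection.close();
            }
        }
    }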

An asynchronous message that represents a business transaction should be an atomic, self-contained, and self-describing unit of work. The data in the message should include any transient context information that is relevant to the current unit of work underway. This transient state can be carried with the message, or held by ESB-enabled state machines, for as long as the transient state is relevant, which is the lifetime of the message or of the business transaction. Once the business transaction is complete, you no longer need the transient state (except for audit trail purposes, but that's another story). In Chapter 6 and Chapter 7, we explored cases in which this transient state can be tracked as part of managing asynchronous conversations and choreography, and where XML messages themselves can be stored in their native form for tracking, auditing, and logging purposes.

9.3.3.4 Think asynch

As a general rule, moving toward message-based atomicity also requires asynchronous processing. The philosophy and merits of asynchronous processing are discussed in detail in Chapter 5. In short, message-style communications require an integration architect to "think asynch" and treat applications accordingly. The ramification of this is that when an application makes a request of another application, it should almost never expect to get an immediate response, if a response is even warranted. Asynchronous messages are intended to get to their destinations "eventually." In practice, the definition of "eventually" could be measured in nanoseconds, but the applications should be prepared for a more significant amount of time between the sending of a message from one application to the receipt of the message by another.
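
One way to "think asynch" in code is to correlate the eventual reply with the original request instead of blocking for it. A rough sketch, again with assumed queue names:

    import java.util.UUID;
    import javax.jms.*;
    import javax.naming.InitialContext;

    /** Sketch: send a request and handle the reply whenever it arrives, without blocking. */
    public class AsyncRequester {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Queue requests = (Queue) jndi.lookup("inventory.check.request"); // assumed queues
            Queue replies  = (Queue) jndi.lookup("inventory.check.reply");

            Connection connection = factory.createConnection();
            // Separate sessions: a JMS session must not be shared across threads.
            Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Listen for replies first; they may arrive seconds, minutes, or hours later.
            consumerSession.createConsumer(replies).setMessageListener(new MessageListener() {
                public void onMessage(Message reply) {
                    try {
                        System.out.println("Reply for request " + reply.getJMSCorrelationID()
                                + ": " + ((TextMessage) reply).getText());
                    } catch (JMSException e) {
                        throw new RuntimeException(e);
                    }
                }
            });
            connection.start();

            // Fire the request and move on; no thread sits waiting for the answer.
            TextMessage request = producerSession.createTextMessage("<StockCheck sku='VCR-1042'/>");
            request.setJMSCorrelationID(UUID.randomUUID().toString());
            request.setJMSReplyTo(replies);
            producerSession.createProducer(requests).send(request);

            Thread.currentThread().join(); // keep running so the eventual reply can be handled
        }
    }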

Because we are migrating away from a purely post-processing flat-file scenario, asynchrony is expected. The applications processing the data are already expecting to receive it sometime after a particular event has happened, sometimes up to a week later.

Achieving message-based atomicity requires you to think about how the data that is currently in the monolithic flat file can be broken up into individual records that represent atomic business transactions. Once this is achieved, the applications are ready to exchange data directly with each other in a real-time fashion. As the message exchange is more fully integrated into the applications, the messages can get smarter by carrying more of the transient context that is known only to the application.
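
As a sketch of that decomposition, the service below reads the monolithic export and emits one message per record, so that each message is an atomic, self-contained business event. The pipe-delimited record layout and channel name are invented for illustration.

    import java.io.BufferedReader;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import javax.jms.*;
    import javax.naming.InitialContext;

    /** Sketch: break the monolithic flat file into one self-contained message per record. */
    public class RecordSplitter {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Queue channel = (Queue) jndi.lookup("product.updates.atomic"); // assumed channel

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(channel);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);

                // Assumed pipe-delimited layout: sku|description|quantity|warehouse
                Path export = Paths.get("/data/export/master_product_index.dat");
                try (BufferedReader reader = Files.newBufferedReader(export, StandardCharsets.UTF_8)) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] f = line.split("\\|");
                        String record = "<ProductRecord sku='" + f[0] + "'>"
                                + "<Description>" + f[1] + "</Description>"
                                + "<QuantityOnHand>" + f[2] + "</QuantityOnHand>"
                                + "<Warehouse>" + f[3] + "</Warehouse>"
                                + "</ProductRecord>";
                        producer.send(session.createTextMessage(record)); // one atomic unit per message
                    }
                }
            } finally {
                connection.close();
            }
        }
    }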

9.3.3.5 Removing the file interface

At any point along the way, you can begin replacing the file-drop service interface with a more direct interface into the applications using standard adapters that are ESB-enabled. As illustrated in Figure 9-13, the Product Master interface has been moved away from the database import/export process and directly into the application using a JCA-compliant application adapter. Some of the other applications have been converted to use either adapters or a web service interface. ESB services for content-based routing, data transformation, auditing, and caching are inserted into the business processes.

Figure 9-13. Full migration to an ESB using application adapters, process definitions, web service interfaces, and file-drop service interfaces
figs/esb_0913.gif
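
As a hedged illustration of the web service flavor of endpoint, the sketch below exposes one Inventory operation with standard JAX-WS annotations. The class, operation, and URL are invented; a JCA adapter, by contrast, would be configured through the adapter's own tooling rather than written this way.

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    /** Sketch: expose an application operation as a web service endpoint the ESB can invoke. */
    @WebService
    public class InventoryService {

        @WebMethod
        public int quantityOnHand(String sku) {
            // A real implementation would query the Inventory application's system of record.
            return 312;
        }

        public static void main(String[] args) {
            // Publish with the JDK's built-in lightweight HTTP server; the URL is illustrative.
            Endpoint.publish("http://localhost:8080/inventory", new InventoryService());
            System.out.println("Inventory web service listening...");
        }
    }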


A few of the applications are still using the file-drop service interface; this is perfectly fine. The nice thing about this approach is that the migration can stop at any time and resume at a later date. At each stage in the migration, the integration environment continues to work and the overall process improves.


