This section presents the Runtime patterns and Product mappings we used to demonstrate the Exposed Direct Connection application pattern using:
Web Services Gateway
Figure 14-1 shows the Runtime pattern and Product mapping we used to demonstrate the Exposed Direct Connection application pattern, using Web services technology across enterprise boundaries. The Partner A Secure Zone uses the same nodes and products as the source application for the intra-enterprise Direct Connection pattern in Figure 8-3 on page 153 (and Figure 9-1 on page 185). The path connector from Partner A Secure Zone includes firewalls, DMZ, and the Internet. Partner B's infrastructure is unspecified.
Figure 14-1: Exposed Direct Connection--Message Connection- Web services Product mapping
Figure 14-2 shows another Runtime pattern and Product mapping used in this scenario. It is based on IBM WebSphere Application Server V5.0.2 and the Direct Connection application pattern. It also includes the Web Services Gateway packaged with IBM WebSphere Application Server Network Deployment V5.0.2.
Figure 14-2: Exposed Direct Connection--Message Connection- Web Services Gateway Product mapping
The Partner A Secure Zone uses the same nodes and products as the source application for the intra-enterprise Direct Connection pattern in Figure 10-3 on page 220. As in Figure 14-1, the path connector from Partner A Secure Zone includes firewalls, DMZ, and the Internet. Partner B's infrastructure is unspecified.
A number of factors affect the design of a Web service. In this section we discuss some of the factors we considered when using Web services in our inter-enterprise sample application.
This section extends, for inter-enterprise use, the intra-enterprise Web services design considerations from the following sections:
8.4.1, "Design considerations" on page 153 for RPC style Web services
9.3.1, "Design considerations" on page 186 for document style Web services
We recommend reading the intra-enterprise Web services design considerations first.
One risk for a source application communicating over the Internet with a target application in another enterprise is that the source application can be blocked by problems in the Internet communication channel. You can minimize this risk by decoupling the two applications as much as possible. One possibility is to use asynchronous messaging style communications, where the source and target applications do not need to be aware of each other or to be active at the same time.
In our business scenario we describe two use cases, Update Inventory and Get Delivery Date. The Update Inventory use case does not require a response from the target application, so it can be implemented with a loosely coupled one-way request, as described in Chapter 8, "Using RPC style Web services" on page 147.
The Get Delivery Date use case is more complex because an in-parameter and a return value are needed. On first view this use case is a typical example of the Call variation of the Exposed Direct Connection application pattern, which has a more natural fit with a synchronous communication channel. If we assume that asynchronous communication between the source and target application is required in order to reduce coupling between partner applications, we can decompose this interaction into a one-way request and a separate one-way response.
In this scenario the target application could provide a RequestDeliveryDate service and the source application could provide a ResponseDeliveryDate service, which is used to communicate the response back to the source application.
In asynchronous communications using messaging systems, the source application sends the message to message-oriented middleware and control returns immediately to the source application. The message-oriented middleware is then responsible for delivering the message to the target application. The source application is therefore decoupled from the target application, and from the communication channel, during the interaction.
If the source application needs a result from the target application at a later stage of the business process, then the source application must be known to the target application. This information must be exchanged either during the request or as an initial setup step before the first interaction. The first solution adds message overhead and the second requires a more complex infrastructure. Furthermore, the response has to be correlated with the corresponding request.
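The decomposition of Get Delivery Date into two one-way services, with the response correlated to its request, can be sketched as follows. This is only an illustration: the interface names `RequestDeliveryDate` and `ResponseDeliveryDate` come from the scenario, but the method signatures, the correlation ID scheme, and the direct Java calls (which stand in for one-way SOAP invocations) are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class DeliveryDateExchange {

    /** One-way service offered by the target application (Partner B). */
    interface RequestDeliveryDate {
        void requestDeliveryDate(String correlationId, String partNumber,
                                 ResponseDeliveryDate replyTo);
    }

    /** One-way service offered by the source application (Partner A). */
    interface ResponseDeliveryDate {
        void responseDeliveryDate(String correlationId, String deliveryDate);
    }

    /** Source-side callback endpoint: stores responses keyed by correlation ID. */
    static class SourceCallback implements ResponseDeliveryDate {
        final Map<String, String> responses = new HashMap<>();
        public void responseDeliveryDate(String correlationId, String deliveryDate) {
            responses.put(correlationId, deliveryDate);  // correlate response with request
        }
    }

    /** Target-side provider: answers with a separate one-way invocation. */
    static class TargetProvider implements RequestDeliveryDate {
        public void requestDeliveryDate(String correlationId, String partNumber,
                                        ResponseDeliveryDate replyTo) {
            replyTo.responseDeliveryDate(correlationId, "2004-06-01"); // hypothetical date
        }
    }

    public static String demo() {
        SourceCallback source = new SourceCallback();
        new TargetProvider().requestDeliveryDate("REQ-1", "PART-42", source);
        return source.responses.get("REQ-1");
    }

    public static void main(String[] args) {
        System.out.println("Delivery date for REQ-1: " + demo());
    }
}
```

Because the correlation ID travels with both one-way messages, neither side needs to hold a connection open while the other works.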
In this section we describe approaches for implementing asynchronous communication with Web services between the source and target application. Common requirements of the approaches discussed include:
Both partner applications can determine the end point URL for each other.
The request interaction doesn't require an immediate response, except for a request acknowledgement.
The first two approaches have to be implemented during the application development phase of the source and target applications. The last approach uses an asynchronous transport protocol that is transparent to the application developer.
In the approaches presented we are focusing on the topic of decoupling the request channel from the source to the target application. The channel from the target application to the source can be de-synchronized with the same techniques if you reverse the roles in the figures.
A difficulty arises when a significant part of the transport channel between the source and target applications requires synchronized communications. These links will not have the same relaxed flexibility offered by asynchronous communications. A way to solve the problem is to introduce an asynchronous component into the channel, which is configured to present a synchronous interface to the synchronous connection. This "de-synchronized" link effectively acts as a "proxy" or "buffer" between the asynchronous and synchronous communications.
In this approach we assume that the transport protocol used between the enterprises is HTTP. Therefore, the communication is in principle synchronous, because HTTP is a synchronous protocol. We de-synchronize the communication at the application level within the Partner A enterprise, which is initiating the communication.
With our Update Inventory use case, the Web service does not require an output message, so the operation is defined as having only input parameters and the Web service invocation is one-way and non-blocking. The critical path is marked with a dotted ellipse in Figure 14-3 on page 305.
Figure 14-3: Using one-way Web service invocations
Advantages of one-way invocation include:
The source application is decoupled from the behavior of the channel between the enterprises.
The source application is decoupled from the behavior of the target application in the partner enterprise.
The source application is decoupled from any intermediary (such as a gateway) which is used as an exit point from the enterprise.
Some disadvantages of one-way invocation are:
Services provided by target applications must implement only the one-way transmission style.
The delivery of the request message is not reliable.
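A one-way, fire-and-forget invocation over HTTP can be sketched as below. This is a minimal illustration, not product code: a local JDK `HttpServer` stands in for Partner B's endpoint (in the scenario this would be the partner's exposed service URL), and the hand-rolled SOAP envelope and element names are assumptions. The client posts the envelope on a background thread and returns to the caller at once, reading only the HTTP status as an acknowledgement.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class OneWayInvoker {

    static final String ENVELOPE =
        "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
      + "<soap:Body><updateInventory><partNumber>PART-42</partNumber>"
      + "<quantity>10</quantity></updateInventory></soap:Body></soap:Envelope>";

    /** Fire-and-forget: post the envelope on a background thread, do not wait for a body. */
    static void invokeOneWay(String endpoint, String envelope) {
        new Thread(() -> {
            try {
                HttpURLConnection c = (HttpURLConnection) new URL(endpoint).openConnection();
                c.setDoOutput(true);
                c.setRequestMethod("POST");
                c.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
                try (OutputStream os = c.getOutputStream()) {
                    os.write(envelope.getBytes("UTF-8"));
                }
                c.getResponseCode(); // HTTP-level acknowledgement only; no response body expected
                c.disconnect();
            } catch (Exception e) {
                // A real client would log here; as noted above, delivery is not reliable.
            }
        }).start();
    }

    public static boolean demo() throws Exception {
        CountDownLatch received = new CountDownLatch(1);
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0); // stand-in for Partner B
        server.createContext("/inventory", exchange -> {
            exchange.getRequestBody().readAllBytes();  // consume the envelope
            exchange.sendResponseHeaders(202, -1);     // acknowledge, no body
            exchange.close();
            received.countDown();
        });
        server.start();
        invokeOneWay("http://localhost:" + server.getAddress().getPort() + "/inventory", ENVELOPE);
        boolean delivered = received.await(5, TimeUnit.SECONDS);
        server.stop(0);
        return delivered;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("delivered=" + demo());
    }
}
```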
Similar to the first approach, we assume that a synchronous transport protocol such as HTTP is used. In this approach, we try to give the source application an interface that is more service oriented. The service that the target application is offering to the source application should be asynchronous without any additional effort for the source application developer. Therefore, the critical path for de-synchronizing the source and target application is within the target enterprise, as shown in Figure 14-4 on page 306.
Figure 14-4: Using the Distributed Event-Based Architecture for de-synchronization
One approach is to open a separate thread in the target application to execute the request. Meanwhile, the request channel can be closed by sending back an empty response. However, such an approach may impact application server performance and scalability and is usually not recommended.
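The separate-thread approach can be sketched as follows. The handler and method names are illustrative, not from any product API; a bounded pool is used here rather than one raw thread per request, but the scalability concern raised above still applies. The point the sketch shows is that the request channel closes immediately while the work finishes later.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OffloadHandler {

    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    private final CountDownLatch done = new CountDownLatch(1);

    /** Called by the SOAP engine; returns as soon as the work is queued. */
    public String handleRequest(final String partNumber) {
        workers.submit(() -> processUpdateInventory(partNumber)); // long-running work
        return "";  // empty response lets the HTTP request channel close at once
    }

    /** Stands in for the long-running inventory update. */
    void processUpdateInventory(String partNumber) {
        try {
            Thread.sleep(200);  // simulated back-end work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        done.countDown();
    }

    public static boolean demo() throws Exception {
        OffloadHandler handler = new OffloadHandler();
        long start = System.nanoTime();
        handler.handleRequest("PART-42");
        boolean returnedQuickly = (System.nanoTime() - start) < 100_000_000L; // well under the 200 ms job
        boolean eventuallyDone = handler.done.await(2, TimeUnit.SECONDS);
        handler.workers.shutdown();
        return returnedQuickly && eventuallyDone;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("de-synchronized: " + demo());
    }
}
```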
The Distributed Event-Based Architecture (DEBA) framework, which is available from IBM alphaWorks®, provides another approach. The Distributed Event-Based Architecture for Web services is basically an implementation of the Observer pattern. From a high-level view, DEBA is simply implementing the components of this pattern within the Web services context.
DEBA helps to introduce a multi-port communication in a Web services world simply by using a framework. You can find out more about the full power of this framework at:
For a brief introduction, a good starting point is:
Each service requester is an observer that is interested in the state or result of the service provider. The service provider is the subject of the observation. In standard Web services scenarios today, the requester connects to the provider and receives a response immediately. With this tactic, the observer has to poll the subject frequently to ask whether there is any new information it is interested in. For our scenario, the Observer pattern is a good solution. The observer (requester) attaches itself to the subject (provider) via a Web service invocation, and sends its own URL for the update response. For the update response, the subject initiates a separate Web service invocation to the observer.
Note that the requester and provider roles are temporary in the Observer pattern; they only indicate which component initiates the communication. This application of the Observer pattern shows how Web services can provide highly decentralized and distributed solutions, even when using SOAP to establish a direct connection between just two partners.
If you reduce the number of observers to one, you can implement the Message variation of the Exposed Direct Connection application pattern. Keep in mind that this doesn't necessarily mean that the interaction from requester to provider is asynchronous at the implementation level. This depends on the communication protocol you are using between requester and provider. If you are using HTTP, then the communication is synchronous, even when you don't expect any return value.
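The Observer pattern described above can be sketched as follows. This is a plain Observer sketch, not DEBA itself: direct Java calls stand in for the Web service invocations DEBA would make, and in DEBA the observer would register its endpoint URL rather than an object reference. All names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class DeliveryDateSubject {

    /** Observer role: the requester's callback contract (in DEBA, a call to the observer's URL). */
    interface Observer {
        void update(String deliveryDate);
    }

    private final List<Observer> observers = new ArrayList<>();

    /** Requesters attach by registering their callback. */
    public void attach(Observer o) {
        observers.add(o);
    }

    /** Provider pushes the new state to every attached requester; no polling needed. */
    public void publish(String deliveryDate) {
        for (Observer o : observers) {
            o.update(deliveryDate);  // in DEBA, a separate Web service invocation per observer
        }
    }

    public static String demo() {
        DeliveryDateSubject provider = new DeliveryDateSubject();
        List<String> received = new ArrayList<>();
        provider.attach(received::add);  // a single observer: the Message variation
        provider.publish("2004-06-01");  // hypothetical state change
        return received.toString();
    }

    public static void main(String[] args) {
        System.out.println("observer received: " + demo());
    }
}
```

With more than one observer attached, the same `publish` call fans the update out to every registered requester, which is what makes the pattern suit decentralized, distributed solutions.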
Advantages of using de-synchronization include:
The asynchronous mechanism is transparent to the source application.
A receipt acknowledgment comes from the Partner B enterprise, which is hosting the target application.
Some disadvantages of using de-synchronization include:
The source application is not decoupled from the behavior of the transport channel, such as the Internet.
Application development effort is required in the target application in the Partner B enterprise.
The architectural overview diagram for this approach is similar to the diagram for the one-way invocation in Figure 14-3 on page 305. In fact it is the same figure, but the transport protocol used is different.
The main disadvantage of the previous approaches is that the asynchronous mechanism is not transparent to the source application or the target application at the implementation level. These approaches are not flexible for future migration to new transport technologies. For example, if the applications switch to an asynchronous transport protocol (such as moving from HTTP to JMS), the asynchronous mechanism ends up implemented twice.
Therefore, we want to briefly discuss locating the asynchronous mechanism in an asynchronous transport protocol, like Java Message Service (JMS).
In Figure 14-3 on page 305, we de-synchronized the communication from the source application to the target application. The channel back from the target application doesn't need to be the same channel or even the same transport protocol. It is also conceivable that the response is communicated to a different application and not to the initial source application.
In both cases, we simply use JMS to delegate the task of delivering the message to the target application to the message-oriented middleware. Using this approach gives us the following advantages:
The asynchronous mechanism is transparent to the application level.
It is possible to switch to other transport protocols without affecting the application.
Using message-oriented middleware has the additional value of a reliable message exchange.
The price for this approach is a more complex infrastructure.
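The transport-level decoupling described above can be sketched as below. So that the sketch runs without a JMS provider, a `BlockingQueue` stands in for the JMS queue; with real JMS, the source would use a `MessageProducer`, the target a `MessageListener`, and the middleware would additionally provide reliable delivery. The message content and names are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueuedTransport {

    public static String demo() throws Exception {
        BlockingQueue<String> requestQueue = new LinkedBlockingQueue<>(); // stand-in for a JMS queue
        CompletableFuture<String> processed = new CompletableFuture<>();

        // Target side: consumes whenever it becomes active; the source need not wait for it.
        Thread consumer = new Thread(() -> {
            try {
                processed.complete("processed " + requestQueue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Source side: hands the message to the "middleware" and regains control at once.
        requestQueue.put("<updateInventory partNumber=\"PART-42\"/>");

        consumer.start(); // the target may start later than the source sends
        return processed.get(2, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

Note that the source puts its message on the queue before the consumer is even started, which is the essence of the applications not needing to be active at the same time.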
One option is IBM WebSphere MQ with the MQ SupportPac™ MS81: WebSphere MQ Internet pass-thru. Provided both partners are using WebSphere MQ, this SupportPac can be used to implement messaging solutions between remote applications across the Internet. For details, see:
The WS-ReliableMessaging draft standard is being developed to address the issue of interoperability between different reliable transport infrastructures. For further information on WS-ReliableMessaging, refer to:
Chapter 10, "Using the Web Services Gateway" on page 215 describes use of the Web Services Gateway in intra-enterprise scenarios. The gateway operates by adding a layer of abstraction that separates deployment from invocation. While this is important in an intranet environment, it is even more important in an Extended Enterprise environment because of the diversity of applications, environments, and users who will be interacting with the exposed service.
In inter-enterprise scenarios, the Web Services Gateway can be used to:
Secure your Web services using the gateway access control mechanisms
Act as a reverse proxy, providing indirect access to your internal Web services
Provide a common access point for partners needing access to your infrastructure
Provide protocol transformation, so that an HTTP/SOAP client can access a JMS/SOAP service, for example
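The layer of abstraction the gateway adds between deployment and invocation can be sketched as a routing table: external requesters see one stable service name, and the gateway resolves it to whatever internal binding currently implements it. This is a conceptual sketch only; the service names, URLs, and methods are assumptions, not the gateway's actual API.

```java
import java.util.HashMap;
import java.util.Map;

public class GatewayRouter {

    private final Map<String, String> routes = new HashMap<>();

    /** Deploy time: map an externally visible service name to an internal binding. */
    public void deploy(String externalService, String internalBinding) {
        routes.put(externalService, internalBinding);
    }

    /** Invocation time: resolve the binding; the external requester never sees it change. */
    public String resolve(String externalService) {
        String binding = routes.get(externalService);
        if (binding == null) {
            throw new IllegalArgumentException("not exposed: " + externalService);
        }
        return binding;
    }

    public static void main(String[] args) {
        GatewayRouter gateway = new GatewayRouter();
        gateway.deploy("UpdateInventory", "jms://inventoryQueue");      // internal JMS/SOAP service
        gateway.deploy("GetDeliveryDate", "http://app1:9080/delivery"); // internal HTTP/SOAP service
        System.out.println(gateway.resolve("UpdateInventory"));

        // The internal binding can change without affecting external requesters:
        gateway.deploy("UpdateInventory", "http://app2:9080/inventory");
        System.out.println(gateway.resolve("UpdateInventory"));
    }
}
```

The same indirection works in reverse for accessing external partner services from inside the enterprise, which is the case discussed next.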
As shown in the Web Services Gateway product mapping in Figure 14-2 on page 302, you may also want to access external Web services from inside your enterprise. In this case, the gateway:
Provides a single point for controlling access to external services.
Hides changes to the external Web services from your internal client applications.
The Product mapping in Figure 14-2 on page 302 places the gateway on the secure internal network behind the DMZ. The advantage of this configuration is that the gateway can provide access services using protocols that are not as firewall-friendly as HTTP. For example, the gateway can enable Web service clients to access EJB, JMS, and JavaBean applications.
The gateway could also be located in the DMZ. All the firewalls in our example are configured for network address translation, so SOAP over HTTP can be used to pass through the firewalls. This may not work if the gateway is using RMI to access an EJB on the secure network, for example.
The performance overhead of introducing an additional node, such as the gateway, should also be considered. There may be performance impacts due to:
Extending the path length.
Converting messages to and from an internal data representation.