Architecting Web Services
Authors: Oellermann W.L.
Published year: 2001
What the presentation layer is to sharing content, the business layer is to sharing processes. This is a fundamental distinction that you must recognize to appreciate what can be accomplished through Web services. When you look at a distributed architecture as a sharing of application processes, the line between shared processes and shared information can blur. That is partly because processes can facilitate the sharing of information. The business layer can share information through the presentation layer because it can control the content dynamically. For this reason, shared processes can appear as shared information to an outside observer. This is similar to the difference between a dynamic Web page and a static Web page to an unknowing viewer: can they tell whether logic is involved in producing the page?
Information shared over the Web actually consists of nothing more than references to content provided by other sites. This is all the presentation layer can support, because no logic is available. Without logic, this information can provide only a very limited set of functionality restricted to the existing content. It would likely be independent of who you are, where you are located, what day it is, or any other variable. Any functionality it could provide would be predetermined and incapable of responding dynamically.
Through the sharing of processes, we can share information that has more meaning because logic that can deliver customized data is involved. Data can be specific to the caller, the environment, or any other criteria. This sharing is accomplished through a request-response mechanism, a concept familiar to most developers. This mechanism allows for variable information to be sent to, and received from, the recipient. This recipient would be a process. This process may be part of an existing application, or it may have been built specifically for the Web service. This distinction typically has no bearing on the use of the process in either situation.
Whereas applications are exposed through a user interface, processes are exposed through a programmatic interface. This interface is designed specifically for other systems, rather than people, to interact with. Processes are essentially defined to the outside world through their interface.
The term interface can frequently be used to reference both user interfaces and programmatic interfaces. It is necessary to distinguish its usage unless it is obviously referring to a process (programmatic interface) or an application (user interface).
The programmatic interface defines how external entities (applications, objects, and so on) can interact with the process. This communication is implemented through payloads sent to the process via a request and returned from the process via a response. When you define a process's interface, it can fall into one of three categories. These categories define what the process is trying to accomplish and can be most easily described through their interface structure.
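To make the request/response payload idea concrete, here is a minimal sketch in Python of a process exposed only through its programmatic interface. The service name, payload elements, and price data are all hypothetical; the point is that the caller interacts with the process solely through the XML payloads it sends and receives, never through a user interface.

```python
import xml.etree.ElementTree as ET

def get_price(request_xml: str) -> str:
    """A hypothetical process exposed through a programmatic interface.

    The caller sends an XML request payload; the process parses it,
    applies its own logic, and returns an XML response payload.
    """
    request = ET.fromstring(request_xml)
    sku = request.findtext("sku")
    # Hypothetical business logic: look up a price for the requested SKU.
    prices = {"A100": "19.95", "B200": "4.50"}
    response = ET.Element("priceResponse")
    ET.SubElement(response, "sku").text = sku
    ET.SubElement(response, "price").text = prices.get(sku, "0.00")
    return ET.tostring(response, encoding="unicode")

# A caller interacts with the process only through the payloads:
reply = get_price("<priceRequest><sku>A100</sku></priceRequest>")
print(reply)
```

The interface here is effectively the pair of payload formats: any external system that can produce the request document and consume the response document can use the process, regardless of how the process is implemented internally.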
The first process category is a light request and heavy response (see Figure 1-4). This process is designed to provide a fairly generic response to the caller because the request is very basic, containing no information for the process to consider in providing its response. This can be appropriate when you are providing the same information regardless of who the caller is.
Figure 1-4: Light request/heavy response process
A sharing of processes is usually associated with the submission of data or the retrieval of information based on some data you provide. This information could be who you are, what company you are with, or just some base data that needs to be analyzed. Although this is a process, it can appear as shared information because the response is based on the specific data you send. This process would be executed through a heavy request and a heavy response (see Figure 1-5).
Figure 1-5: Heavy request/heavy response process
The final scenario is a heavy request with a light response (see Figure 1-6). In this case you are only sharing information "upstream." An example would be a process in which you are submitting information to the service owner and getting a simple acknowledgement back that the information was received. Perhaps the service will spend some time with it and get back to you later via email or even another shared process.
Figure 1-6: Heavy request/light response process
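The three categories can be sketched side by side as payload shapes. The following Python functions are hypothetical illustrations, not services from the text; what matters is how much information each request carries and how much each response returns.

```python
import xml.etree.ElementTree as ET

# Light request, heavy response: the request carries no data for the
# process to consider, so everyone gets the same information back.
def current_rates(_request_xml: str = "<ratesRequest/>") -> str:
    response = ET.Element("ratesResponse")
    for currency, rate in [("EUR", "1.08"), ("GBP", "1.27")]:
        entry = ET.SubElement(response, "rate", currency=currency)
        entry.text = rate
    return ET.tostring(response, encoding="unicode")

# Heavy request, heavy response: the response is customized to the
# data the caller submits (here, a hypothetical package weight).
def shipping_quote(request_xml: str) -> str:
    request = ET.fromstring(request_xml)
    weight = float(request.findtext("weightKg"))
    response = ET.Element("quoteResponse")
    ET.SubElement(response, "cost").text = f"{4.00 + 1.50 * weight:.2f}"
    return ET.tostring(response, encoding="unicode")

# Heavy request, light response: information flows "upstream" and the
# caller receives only a simple acknowledgement of receipt.
def submit_order(request_xml: str) -> str:
    ET.fromstring(request_xml)  # validate the submitted payload
    return "<ack>received</ack>"
```

In the last case, any substantive reply (a confirmation, a result of later analysis) would arrive through a separate channel, such as email or another shared process.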
The difference between shared information and shared processes should be fairly clear now. It might be easier to think of the distinction as a methodology instead of a technology. Deploying a shared process means exposing functionality that is much more powerful than simply exposing information. While you can think of shared processes as shared information, and can use one to accomplish the other, you limit the effectiveness of your Web services if you don't understand the distinction.
The idea of sharing processes is nothing new to the world of computers, at least not in a contiguous environment. COM (Component Object Model) and CORBA (Common Object Request Broker Architecture) have been around for a while and allow us to reuse objects in a single system (see Figure 1-7). These methods are proprietary in nature, as they are designed to take advantage of specific features and services in the operating system. Although these technologies are declared standards, they unfortunately do not enjoy unanimous industry support, and they are also ill prepared to handle the demands of integrating disparate systems.
Figure 1-7: Shared objects in a single system
The idea of reusable objects across distributed environments (that is, across distinct systems) is a more recent accomplishment brought about by the next generation of connectivity, such as DCOM (Distributed COM), IIOP (Internet Inter-ORB Protocol), and RMI (Remote Method Invocation). They allow us to escape beyond the confines of the local system (see Figure 1-8), but they require that the external system we communicate with use the specific protocol and object architecture our system has implemented. That limits the reach and extensibility of these methods tremendously. However, this advancement in object reuse is an important building block in the development of Web services.
Figure 1-8: Shared objects across systems in a closed environment
We need a way to share these objects over the Internet (see Figure 1-9), but these implementations only work well in a closed environment. That means that as long as you are in a controlled, restricted environment (network), they work great. If you expose processes using these same methods on the Internet, you run into some roadblocks.
Figure 1-9: Shared objects between distinct environments
This brings us to a concept that has been a hot topic lately: enterprise application integration (EAI). Sharing applications between distinct systems can involve many challenges, and crossing organizational boundaries amplifies them. Bridges and gateways are often developed to fill the gaps, but these solutions are often cumbersome and configured to work only on a case-by-case basis. They usually involve middleware technologies that are inherently ill prepared to handle this challenge.
First, these distributed technologies have platform and version dependencies. While some might not require a specific platform, you are generally restricted to a set of platforms and/or client software to comply with the service. You also have far less flexibility in updating or enhancing the components of the application independently because of interface or version dependencies.
Second, all of these architectures carry certain security issues that prevent them from working well on the Internet. Although Transmission Control Protocol/Internet Protocol (TCP/IP), the standard communication mechanism for the Internet, can be used as a transport for these architectures, information technology (IT) departments are not likely to open up the necessary ports on their firewall to allow them to cross organizational boundaries. Also, these methods require a tightly administered environment to work well, and this requires extensive cooperation between partners.
Finally, another limitation of these methods is performance. These are all Remote Procedure Call (RPC) mechanisms that depend on some level of session state. In fact, DCOM uses several packet transmissions to maintain a connection. This is very stateful behavior, and the Internet is a stateless infrastructure. One key limitation of the Internet is bandwidth, and streamlining the required network communication is one way of limiting its impact.
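The stateful/stateless contrast can be sketched as follows. This is an illustrative Python comparison under assumed names and data, not an example from the text: the stateful style requires the server to remember the caller between calls, while in the stateless style every request carries all the context the process needs.

```python
# Stateful (RPC-style) interaction: the server object accumulates
# per-caller state across calls, so a session must be maintained.
class StatefulCart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self, prices):
        return sum(prices[i] for i in self.items)

# Stateless (Web-style) interaction: the single request carries the
# full context, so the process needs no memory of previous calls.
def stateless_total(request, prices):
    return sum(prices[i] for i in request["items"])

prices = {"A100": 19.95, "B200": 4.50}

cart = StatefulCart()        # three interactions, shared state
cart.add("A100")
cart.add("B200")

one_shot = stateless_total(  # one self-contained interaction
    {"items": ["A100", "B200"]}, prices
)
assert cart.total(prices) == one_shot
```

The stateless form trades a slightly heavier request for freedom from connection maintenance, which suits an infrastructure where bandwidth, not session capacity, is the scarce resource.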
You can see why, if we want to share applications across the Internet, these existing middleware solutions do not work. They may play a role in the back end of an application, but they clearly cannot play a role in the communication path between the service and its consumers. To meet these challenges, a new implementation, and in reality, a new paradigm, for building Web applications has emerged. This paradigm is called Web services.