A large part of achieving interoperability through Web services today depends on how consistently the WSDL and SOAP specifications have been interpreted from one vendor to another. For example, if I use vendor A's Web service stack to expose a Web service that returns a stock data type, vendor B's Web service stack needs to know exactly how to map that data type; otherwise, there may very well be problems.
To help mitigate these types of problems, I always make the following recommendations when designing solutions that need to interoperate between the .NET and Java platforms via Web services.
About six months ago, I worked on a project for a large financial customer. The concept that they were looking to develop involved moving a large amount of Customer Relationship Management (CRM) data from one system to another using Web services as the transport. The data types were quite complex, and given our previous experience with sending this type of data over a Web service, the development team decided that our process must mitigate the risks that could arise.
One of the first things that we did was sit down with the business analysts. (The analysts were responsible for driving the flow and requirements of the application, without wanting to know about the technical elements.) By "sit down," I do not mean to imply that this was a leisurely chat: we spent about a third of the project designing a data model from the ground up, building complex data types, fields, properties, methods, sequences, and database mappings.
Once we had an accurate representation of the data by using the Unified Modeling Language (UML), we created the actual classes for the objects and created simple test harnesses to send them back and forth by using Web services. (In fact, these test harnesses were similar to the samples that you've been working with in this chapter.)
The key message here is that before any of the UI or underlying infrastructure for the application was written, we already knew what would (and what would not) work with using Web services as the underlying transport in our distributed approach. In the end, we did have to make some minor changes based on the choice to use Web services. These mostly involved choosing XSD-compliant data types. But we gained a tremendous advantage by making these changes early on in the project, as opposed to having to factor them in later on. This also gave us a great opportunity to run some performance tests on the data (because we had some skeptics in the group who'd labeled Web services as too slow).
To summarize, if you plan to send complex types back and forth over Web services, design them before any code is written. Build the complex data types first, build a test harness to prove that they work, and then work on designing a great application around them.
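A round-trip test harness of the sort described here can be sketched in a few lines of Java. The Customer type and its fields below are hypothetical stand-ins for the real CRM types, and the harness uses the standard java.beans XML serialization purely for illustration; the point is simply to prove that an instance survives the trip to XML and back before any application code depends on it:

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class RoundTripHarness {

    // Hypothetical CRM data type; the real project types were far more complex.
    public static class Customer {
        private String name;
        private int accountId;
        public Customer() {}
        public String getName() { return name; }
        public void setName(String n) { name = n; }
        public int getAccountId() { return accountId; }
        public void setAccountId(int id) { accountId = id; }
    }

    // Serialize the object to XML and deserialize it again,
    // simulating the trip across a Web service boundary.
    public static Customer roundTrip(Customer in) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (XMLEncoder enc = new XMLEncoder(buf)) {
            enc.writeObject(in);
        }
        try (XMLDecoder dec = new XMLDecoder(
                new ByteArrayInputStream(buf.toByteArray()))) {
            return (Customer) dec.readObject();
        }
    }

    public static void main(String[] args) {
        Customer c = new Customer();
        c.setName("Coho Winery");
        c.setAccountId(42);
        Customer back = roundTrip(c);
        System.out.println(back.getName() + " " + back.getAccountId());
    }
}
```

In the real project, the same idea was applied with the actual Web service stacks on each platform rather than a local serializer, which is what exposed the problem types early.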
In general, the simpler the data type, the better its chances of being used successfully with Web services. As your data types become more complex, one simple rule to adhere to is to stick with XSD.
In Chapter 3, you saw how to define data types from XSD documents (and vice versa) and how to use a tool such as Visual Studio .NET to create XSD documents. Keeping data types that are exposed via Web services in line with an XSD model will help interoperability by ensuring compatibility between different versions of Web services toolkits.
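For example, a complex type that stays within the XSD model might be declared as follows. (The type and element names here are invented for illustration, as is the namespace URI; the important point is that every field maps to a built-in XSD type.)

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.org/crm"
           xmlns:tns="http://example.org/crm">
  <xs:complexType name="Customer">
    <xs:sequence>
      <xs:element name="Name" type="xs:string"/>
      <xs:element name="AccountId" type="xs:int"/>
      <xs:element name="LastContact" type="xs:dateTime"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
```

Types generated from a schema like this map cleanly on both platforms, whereas platform-specific constructs (such as a .NET DataSet or a Java collection class) typically do not.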
In many cases, I also recommend that data types are removed from the underlying proxy file and stored in a location that's central to the solution. If the data types have been automatically generated from the XSD document, this is an easy approach to adopt and is one that you saw in the previous section's extended sample. Doing this not only can help separate data types used throughout the application from the proxies, but also is useful when both the .NET and Java environments need to agree on a single XML namespace for all types.
To achieve interoperability with Web services, I recommend keeping up to date with the latest publications and guidance from the Web Services Interoperability Organization (WS-I). This advice is best used when considering solutions from multiple vendors that might claim to be Web services compatible. Use the Basic Profile as a guide to selecting these vendors and keeping up to date with new profile releases from the WS-I.
This is something that we haven't covered specifically in this chapter so far, but it's worth mentioning. SOAP structure and encoding standards have been a subject of debate in the Web services community for some time; they relate to the way that SOAP messages are structured in Web service requests.
Let me explain. There are two ways to structure a SOAP request: by using Document style or RPC style. Initially, when SOAP was first released, it supported only RPC style. The SOAP 1.1 specification was the first iteration to introduce support for both styles.
RPC style dictates that the SOAP body must contain a method name and a set of parameters. This style is tightly coupled to the service and driven by the interface itself. Here's an example:
<SOAP-ENV:Envelope ...>
  <SOAP-ENV:Body>
    <m:BuyStocks xmlns:m="someURL">
      <Ticker>COHO</Ticker>
    </m:BuyStocks>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
By Document style, we mean that the request to call the method and its associated parameters are passed in the format of a document. This provides for a much more loosely coupled approach to passing calls and parameters to a Web service. The structure can look much more like this:
<SOAP-ENV:Envelope ...>
  <SOAP-ENV:Body>
    <BuyStocks Ticker="COHO"/>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Another element of the style and format (which can constitute a larger problem) is the encoding method used. Two types of encoding exist: Literal and SOAP. Literal is the encoding method that conforms to a particular XML Schema. SOAP encoding uses XML Schema data types to encode the data, but the document itself doesn't conform to any schema. The result is that with SOAP encoding, you can't validate against a schema, which can affect transformations (using Extensible Stylesheet Language Transformations, or XSLT) and serialization to and from objects.
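The practical consequence is easy to demonstrate: a Literal payload can be validated directly against its schema. The following sketch uses the standard JAXP validation API to check the Document-style BuyStocks element shown earlier against a small inline schema (the schema itself is invented for illustration):

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class LiteralValidation {

    // A minimal schema for the BuyStocks element: one required Ticker attribute.
    static final String SCHEMA =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        "  <xs:element name='BuyStocks'>" +
        "    <xs:complexType>" +
        "      <xs:attribute name='Ticker' type='xs:string' use='required'/>" +
        "    </xs:complexType>" +
        "  </xs:element>" +
        "</xs:schema>";

    // Returns true if the XML fragment conforms to the schema.
    public static boolean isValid(String xml) {
        try {
            SchemaFactory sf =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new StreamSource(new StringReader(SCHEMA)));
            Validator v = schema.newValidator();
            v.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("<BuyStocks Ticker='COHO'/>"));
        System.out.println(isValid("<BuyStocks/>"));
    }
}
```

A SOAP-encoded body, by contrast, carries its own type annotations and multi-reference structure, so there is no schema to hand to a validator in this way.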
The .NET Framework and most of the Java Web service implementations found today support both RPC and Document style structures. One challenge that you might face is that in .NET, the default is Document style, whereas many of the Java toolkits standardize on RPC. The sample code throughout this book uses a mix based on which style generates the best results for interoperability.
When defining a Web service strategy that incorporates both .NET and J2EE, you should always try to standardize on (or at least move toward) a Document/Literal style for all services. In addition, the WS-I Basic Profile 1.0 standard promotes the use of Document/Literal style (and actively disallows RPC/Encoding).
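In WSDL terms, Document/Literal means the binding declares style="document" and each message body declares use="literal". A minimal binding fragment (with hypothetical names, following the someURL placeholder from the earlier examples) looks like this:

```xml
<binding name="StockServiceBinding" type="tns:StockServicePortType">
  <soap:binding style="document"
                transport="http://schemas.xmlsoap.org/soap/http"/>
  <operation name="BuyStocks">
    <soap:operation soapAction="someURL/BuyStocks"/>
    <input>
      <soap:body use="literal"/>
    </input>
    <output>
      <soap:body use="literal"/>
    </output>
  </operation>
</binding>
```

Checking the style and use attributes in a vendor's generated WSDL is a quick way to confirm which combination you're actually getting.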
If you haven't already, you could soon be in a position where you want to develop a solution or application that needs to interoperate between the .NET and Java platform. At the time of this writing (as mentioned in Chapter 5), no formal specification for deploying and building Web services on the Java platform exists. As a result, you might encounter vendors who will provide you with this functionality before it becomes part of the specification.
Regardless of the vendor that you choose for your Java Web services stack, I recommend always making sure that you use the latest version of the distribution. Because specifications evolve so rapidly, tools in the XML-based Web services realm also move at a lightning-fast pace. Keeping up with the latest version (even a point release) will ensure that your solution not only works with the latest specifications but also benefits from more reliable and consistent interoperability.
In the final Web services sample in this chapter, you saw how an agent-service pattern was used to provide a layer of abstraction between the business logic (which in our case was the calling ASP.NET page) and the proxy file used to call the Web service itself.
Performing this abstraction gave you the option of adding more functionality to the Web service call without having to modify the caller logic. For example, in our example scenario, the agent was purely responsible for working out the URL from the UDDI registry. If you didn't have this agent layer, you'd have to do this work as part of the calling ASP.NET page, which in turn could have polluted your calling code. In addition, when you consider adding other functionality in the future (for example, authentication or clustering between a number of services), it's the additional agent layer that promotes a cleaner solution.
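The shape of the agent-service pattern can be sketched in a few lines. The names here are hypothetical, and the UDDI lookup is stubbed out with a fixed URL; the point is that the caller talks only to the agent, which hides the endpoint resolution and the proxy behind a single method:

```java
public class AgentDemo {

    // The generated proxy's interface; in practice this comes from WSDL.
    public interface StockServiceProxy {
        String buyStocks(String ticker, String endpoint);
    }

    // The agent sits between the caller and the proxy.
    public static class StockServiceAgent {
        private final StockServiceProxy proxy;

        public StockServiceAgent(StockServiceProxy proxy) {
            this.proxy = proxy;
        }

        // Stand-in for the UDDI lookup; the real agent queried the registry here.
        public String resolveEndpoint() {
            return "http://example.org/stocks";
        }

        // Callers invoke this; endpoint resolution (and, later, concerns such
        // as authentication or failover) stays out of the calling code.
        public String buyStocks(String ticker) {
            return proxy.buyStocks(ticker, resolveEndpoint());
        }
    }

    public static void main(String[] args) {
        // A fake proxy stands in for the generated Web service proxy.
        StockServiceAgent agent =
            new StockServiceAgent((t, e) -> "bought " + t + " via " + e);
        System.out.println(agent.buyStocks("COHO"));
    }
}
```

Swapping the stubbed resolveEndpoint for a real UDDI query, or wrapping buyStocks with retry logic, requires no change to any calling page.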
My final recommendation is to use UDDI for the discovery of Web services. In a situation where multiple Web services are distributed throughout an organization, the presence of a UDDI registry can help provide a central repository to services written for both .NET and J2EE.
When selecting a UDDI registry for internal use, also consider how that registry will be located by clients. Having an unstable location for the UDDI registry can prove as troublesome for clients as having no registry at all. To help with this, look at how the UDDI registry will integrate with any directory solution that's currently implemented. For example, UDDI Services for Windows Server 2003 has the ability to publish its location via Active Directory. Through this publication, clients have a direct method of querying for registered UDDI servers.