2.2 The Current State of Enterprise Integration

   

This section will explore the details of how enterprise applications are integrated, or not integrated, today. This includes a discussion of a prevailing problem across many organizations: the accidental architecture.

2.2.1 The Enterprise Is Not Well Connected

Over the past two decades, numerous distributed computing models have arrived on the scene, including DCE, CORBA, DCOM, MOM, EAI brokers, J2EE, web services, and .NET. However, indications are that only a small percentage of enterprise applications are connected, regardless of the technology being used. According to a research report from Gartner Inc.,[2] that number is less than 10%.

[2] Statistics from Gartner Inc., "Integration Brokers, Application Servers and APSs," 10/2002.

Another statistic is even more surprising: of the applications that are connected, only 15% use formal integration middleware. The rest rely on ETL and batch file-transfer techniques, which are largely based on hand-coded scripting and other custom solutions. More information on ETL and batch file transfer, including their associated problems, can be found in Chapter 9.

2.2.2 The Accidental Architecture

The Gartner 15% statistic provides a sobering data point that illustrates the true state of integration today. How are the other 85% of applications connected? A very common situation in enterprises today is what I refer to as "the accidental architecture." The accidental architecture is something that nobody sets out to create; instead, it's the result of years of accumulating one-off, point-to-point integration solutions. In an accidental architecture, corporate applications are perpetually locked into an inflexible integration infrastructure. They continue to be treated as "silos" of information because the integration infrastructure can't adapt to new business requirements (Figure 2-1).

Most integration attempts start out with a deliberate design, but over time, other pieces are bolted on and "integrated," and the handcrafted integration code drifts away from the original intent. Through incremental patches and bolt-ons, integrated systems can lose their design integrity, especially if the system is maintained by a large number of people to whom the original design intent may not have been well communicated. It's a fact of life that individual point-to-point integrations will drift away from consistency, as engineers make "just this one little change" that's expedient at the time. Eventually it becomes difficult even to identify the points where changes should be made, or to understand what the side effects of a change would be. In a deployed system this can lead to disastrous results that will negatively affect your business.

Adhering to standards for integration creates a baseline of intended functionality for you to comply with. If the infrastructure is proprietary, rather than based on standards, it can become problematic to retain the intended design and guiding principles over time. While it's easier to take a proprietary platform and bend the rules, this usually results in more "diversity" that later gets bolted on. However, you should keep in mind that simply adhering to standards will not necessarily prevent you from building an accidental architecture.

Figure 2-1. The accidental architecture perpetuates the treatment of corporate applications as "silos" of information


The technology behind an accidental architecture can vary. The solid, dashed, and dotted lines in Figure 2-1 represent different techniques used to connect the applications. These techniques can include FTP file transfer, direct socket connections, proprietary MOM, and sometimes CORBA or another type of Remote Procedure Call (RPC) mechanism. Some of the targeted point-to-point solutions may even have XML envelopes defined already, either SOAP-based or otherwise, for carrying the data between the applications being integrated.
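To make the flavor of these hand-coded links concrete, here is a minimal sketch of a point-to-point integration of the kind described above: one application pushes an ad hoc XML envelope to another over a direct socket connection. The host name, port, and message format are hypothetical; real examples of this style are usually buried in much larger and older codebases.

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Hypothetical hand-coded point-to-point link: the sender hardcodes the
    // receiver's address and an ad hoc XML wire format. Any change to either
    // breaks the integration.
    public class OrderPushClient {
        public static void main(String[] args) throws Exception {
            String envelope =
                "<envelope>" +
                  "<header source=\"order-entry\" target=\"fulfillment\"/>" +
                  "<body><order id=\"12345\" sku=\"ABC-1\" qty=\"10\"/></body>" +
                "</envelope>";

            try (Socket socket = new Socket("fulfillment.internal", 9090);
                 OutputStream out = socket.getOutputStream()) {
                // Fire and forget: no delivery guarantee, no acknowledgment,
                // and no retry if the receiving application happens to be down.
                out.write(envelope.getBytes(StandardCharsets.UTF_8));
                out.flush();
            }
        }
    }

Multiply this pattern by dozens of application pairs, each with its own transport and its own message layout, and the tangle of lines in Figure 2-1 emerges.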

The integration broker at the center of the diagram represents an island of integration that connects some applications at a departmental level. However, this does not imply that it is being used to connect everything together. The integration broker is usually relegated to being just another piece of infrastructure in the mix, the result of a well-funded project that achieved moderate success, but then didn't continue to integrate everything as promised.

The accidental architecture represents an investment in infrastructure that is rigid and does not provide a durable, cohesive approach to integration. It cannot address your organizational needs as well as it should. Making changes to the accidental architecture becomes increasingly challenging over time as the number of point-to-point solutions grows, which usually also means that the interdependencies between applications are extremely tight. Changing an application's representation of its data means that you also need to change all the other applications that share that data. This restricts your ability to quickly adapt your business processes to changes and new business opportunities. These tightly coupled, hard-wired interfaces are not the only problem with an accidental architecture. Further complications arise because the control flow, or orchestration of communication between the business applications, is hardcoded into the applications themselves. This increases the coupling and fragility between systems, makes changing business processes even more difficult, and can contribute to vendor lock-in for applications.
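As an illustration of what hardcoded orchestration looks like in practice, the following hypothetical sketch embeds the sequence of a business process directly in application code. The service interfaces and the order of calls are invented for the example; the point is that changing the process means changing, rebuilding, and redeploying this code.

    // Hypothetical services that the order application calls directly.
    interface CreditService    { boolean check(String customerId); }
    interface InventoryService { void reserve(String sku, int quantity); }
    interface BillingService   { void invoice(String customerId, String sku, int quantity); }

    public class OrderProcessor {
        private final CreditService credit;
        private final InventoryService inventory;
        private final BillingService billing;

        public OrderProcessor(CreditService credit, InventoryService inventory,
                              BillingService billing) {
            this.credit = credit;
            this.inventory = inventory;
            this.billing = billing;
        }

        // The business process -- check credit, then reserve stock, then bill --
        // is baked into the application rather than expressed in an external,
        // changeable process definition. Reordering or adding a step requires
        // a code change in every application that participates in the process.
        public void process(String customerId, String sku, int quantity) {
            if (!credit.check(customerId)) {
                throw new IllegalStateException("credit check failed for " + customerId);
            }
            inventory.reserve(sku, quantity);
            billing.invoice(customerId, sku, quantity);
        }
    }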

2.2.2.1 Departmental and organizational issues

Technological deficiencies in the accidental architecture can have a ripple effect of manpower coordination issues across the organization. Whether the problem is tightly coupled interfaces or hardcoded orchestration, going back and retrofitting changes into existing applications can be a daunting task. It often requires scheduling lots of meetings with the different development groups that own the applications just to agree upon what to do and when to do it. If the applications, and their respective development groups, are physically spread out across geographic locations and time zones, the coordination of application changes becomes even more difficult.

Sometimes the application is considered a "legacy" system, in which case you are unwilling or unable to make changes simply because it has been put into maintenance mode. There is a common saying that the definition of a "legacy application" is something that you installed yesterday. Even if you have full access to and control over the source code of an in-house application, it can become increasingly difficult over time to make changes as developers move on to other projects or leave the company. As we will see in Chapter 4, an ESB significantly lessens the impact of changing data schemas and formats over time.

2.2.2.2 Moving forward with an accidental architecture

Even if you have established a good corporate practice for making and tracking changes to application data and interfaces, there are other drawbacks to continuing with the accidental architecture. Using different connectivity technologies means that the security model is probably ad hoc, so there is no sure way to establish and enforce corporate-wide security policies. There are no consistent APIs to rely upon for plugging in new applications, and there's no common ground upon which to establish and build best practices in integration. Recent conversations with an IT leader identified the following problems with accidental architectures:


Unreliability.

The communications between the applications are probably not capable of taking advantage of asynchronous reliable messaging. If one of the communication links between two applications within a larger business process fails, the entire process may have to be transactionally backed out and restarted. We will learn more about the advantages of loosely coupled, asynchronous reliable communications in Chapter 5; a brief sketch follows this list.


Performance and scalability analysis.

Whether you are doing preemptive capacity planning or trying to analyze an existing performance problem, the accidental architecture makes the job much more difficult due to the many subsystems and their different operational characteristics. The typical reaction is an ad hoc, "throw resources at it until it works right" solution, resulting in excessive spending on disks, CPUs, RAM, etc.


Troubleshooting in general.

There is no single way to provide adequate diagnostics and reporting. The accidental architecture requires having lots of highly skilled troubleshooters around to debug all the production problems, which tends to increase the overall Total Cost of Ownership (TCO) dramatically. The greater the variation in implementation of the subparts, the broader the expertise needed to figure it all out when it fails. Also, establishing a consistent baseline to describe the proper intended behavior is a challenge.


Redundancy and resiliency.

There is no way to ensure that all components of this morass meet with your definition of acceptable redundancy, resiliency, and fault tolerance. This means that it is difficult to define achievable Service-Level Agreements (SLAs) for new functions that depend on connected backend systems.


Billing holes.

If your system carries data that can represent billable services (as in the telecom business), there's a good possibility that events of billable interest are being lost in the accidental architecture. So, you may be losing revenue and not even know it.


Monitoring and management.

There is no consistent way to monitor and manage an accidental architecture. Suppose that your system of integrated applications has to run 24/7, and that your staff is paying attention to the operational monitoring tools and making course corrections. These tools won't all work in the same way, and training (and retraining) the staff on the myriad disparate microsolutions can become very expensive. Simply installing an enterprise-wide operational management tool doesn't automatically provide introspection into the integration infrastructure, and the accidental architecture does not usually provide all the control points that may be needed.
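To contrast with the unreliability issue noted in the list above, here is a minimal sketch of asynchronous reliable messaging using the JMS API, assuming Apache ActiveMQ as the provider. The broker URL, queue name, and message content are hypothetical. The sender hands the message to the broker and moves on; persistent delivery means the message survives a broker restart and is delivered whenever the receiver becomes available, rather than forcing the whole business process to be backed out.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ReliableSender {
        public static void main(String[] args) throws Exception {
            // Assumes an ActiveMQ broker on the default port; the URL and
            // queue name are placeholders for this sketch.
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("orders.inbound");
                MessageProducer producer = session.createProducer(queue);

                // PERSISTENT delivery: the broker stores the message until the
                // consumer acknowledges it, so a failed link between two
                // applications does not lose data or abort the larger process.
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage("<order id=\"12345\"/>"));
            } finally {
                connection.close();
            }
        }
    }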

In summary, the accidental architecture represents a rigid, high-cost infrastructure that does not address your organization's changing needs, and suffers from the following disadvantages:

  • Tightly coupled, brittle, and inflexible to changes

  • Expensive to maintain due to multiple point-to-point solutions

  • Changing one application can affect many others

  • Routing logic is hardcoded into the applications

  • No common security model; security is ad hoc

  • No common API (usually)

  • No common communications protocol

  • No common ground on which to establish and build best practices

  • Difficult to support asynchronous processing

  • Unreliable

  • No health monitoring and deployment management of applications and integration components

As you know, an accidental architecture is created over years, and will not be replaced or fixed by any one action. As the demand for integration projects increases, solutions need to become more flexible, less complex, and cheaper to operate, not the other way around. The accidental architecture gives your more agile competitors an advantage, and renders you unable to realize new business opportunities in a reasonable timeframe.

You need a cohesive architecture, changes to practices, and standards to address a problem of this magnitude. The ESB provides the architecture and the infrastructure, and lets you adopt it on a project-by-project basis. Adopting an ESB is not an all-or-nothing, rip-and-replace solution. Rather, you can adopt it incrementally while continuing to leverage your existing assets, including the accidental architecture and integration brokers, in a "leave and layer" approach (see the later section "Leave and Layer: Connecting into the Existing EAI Broker").

2.2.3 ETL, Batch Transfers, and FTP

Extract, Transform, and Load (ETL) techniques such as FTP file transfers and nightly batch jobs are still the most popular means of integration today.

This often involves nightly dump-and-load operations on data that sits in various applications. The problem is that there is great potential for data to get out of sync between systems. Recovering from a failed dump-and-load can sometimes take more than a day of reconciliation. And in a global economy with businesses that run 24/7, there is no "night" anymore, so when can you run your batch job?
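As a sketch of what this style typically looks like, the following hypothetical nightly job dumps yesterday's orders to a flat file and pushes it to a downstream system over FTP (using Apache Commons Net and a JDBC driver; the connection details, table, and file names are invented). A job like this is usually fired from cron, and if it fails partway through, the two systems stay out of sync until someone reconciles them by hand.

    import java.io.FileInputStream;
    import java.io.FileWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    public class NightlyOrderExport {
        public static void main(String[] args) throws Exception {
            // Step 1: dump yesterday's orders to a flat file (hypothetical schema).
            try (Connection db = DriverManager.getConnection(
                         "jdbc:postgresql://ordersdb/orders", "batch", "secret");
                 Statement stmt = db.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT id, sku, qty FROM orders WHERE order_date = CURRENT_DATE - 1");
                 FileWriter out = new FileWriter("orders.csv")) {
                while (rs.next()) {
                    out.write(rs.getLong("id") + "," + rs.getString("sku")
                            + "," + rs.getInt("qty") + "\n");
                }
            }

            // Step 2: push the file to the downstream system over FTP.
            // If this step fails after step 1 succeeded, the systems are out
            // of sync until the next run -- or until someone notices.
            FTPClient ftp = new FTPClient();
            ftp.connect("warehouse.internal");
            try {
                ftp.login("batch", "secret");
                ftp.setFileType(FTP.ASCII_FILE_TYPE);
                try (FileInputStream in = new FileInputStream("orders.csv")) {
                    ftp.storeFile("/incoming/orders.csv", in);
                }
                ftp.logout();
            } finally {
                ftp.disconnect();
            }
        }
    }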

Other issues are associated with nightly batch processing as well. Due to the latency of nightly batch jobs, the best-case scenario is a 24-hour turnaround time when analyzing critical business data. This delay can severely hinder your ability to react to business events in a timely manner.

Sometimes, the end-to-end processing across multiple batch-oriented systems can take up to a whole week to complete. The overall latency involved in the processing of data from the source to the target can prevent you from collecting meaningful data that can provide insight into your current business situation. In the case of a supply chain, for example, this can translate to never knowing the true state of your inventory.

Chapter 9 will present a detailed case study covering both the technology and the business implications of batch transfers via FTP, and will explore how an ESB can help you escape the predicaments imposed by this architecture.

2.2.4 Integration Brokers

Hub-and-spoke integration brokers, or EAI hubs, offer alternatives to the accidental architecture. Integration brokers have been in existence since the middle to late '90s, and are built upon MOM backbones or application server platforms.

Some of the companies in the integration-broker market include:

  • SeeBeyond

  • IBM

  • webMethods

  • TIBCO

  • Ascential (Mercator)

  • BEA (more recently)

  • Vitria


Integration brokers can help with the accidental architecture by providing centralized routing between applications, using a hub-and-spoke architecture. Furthermore, they allow the separation of business processes from the underlying integration code through the use of Business Process Management (BPM) software. This is all good news so far.

However, there are drawbacks to the integration broker approach. A hub-and-spoke topology doesn't allow regional control over local integration domains. BPM tools that are built on top of a hub-and-spoke topology can't define choreography or business processes that span departments or business units. The integration broker may also be limited by the underlying MOM in its ability to cross physical LAN segment boundaries and firewalls.

Many companies have adopted hub-and-spoke integration broker solutions for their integration strategies. These technologies have a high cost and questionable success. Expensive integration broker projects of the late 1990s have had nominal success and left organizations with silos of proprietary integration domains. A study produced by Forrester Research in December 2001[3] shows the following statistics:

[3] Statistics from Forrester Research, "Reducing Integration Costs," 12/2001.

  • Integration projects average 20+ months to complete

  • Fewer than 35% of projects finish on time and on budget

  • 35% of software maintenance budget is spent maintaining point-to-point application links

  • In 2003, Global 3500 companies were expected to spend an average of $6.4 million on integration projects

This study was undertaken during a time when EAI was at the peak of its inflated expectations, and there is little indication that the statistics have improved significantly since then. Note that the $6.4 million per year was a prediction of the average that companies would spend on integration for the following year. I get regular validation of this figure when speaking to IT leaders about these kinds of problems.

By today's budgetary standards, EAI broker projects are expensive. The integration software costs are prohibitive, usually ranging from $250,000 to $1 million per project for the software licensing alone. This typically carries with it a heavy consulting services component, which is often five to twelve times the cost of the software licenses.

The initial high start-up cost of an integration broker is aggravated by the fact that the skills learned in one project are not easily transferable to the next. Due to the proprietary nature of traditional EAI broker technology, there is often a steep learning curve, sometimes of six months or so, associated with each project. The usual approach to try and offset this is to hire specialized consultants who are already trained on the proprietary technology. Of course, highly specialized = highly paid. This is a large contributor to the heavy consulting costs (the other large contributor being the complexity of the technology installation, configuration, deployment, and management). And once the project is done, the consultants are gone.

The implementation time for integration projects is commonly in the 6-18 month timeframe. This means that, by the criteria set previously for short-term, project-based funding, the implementation time eclipses the strategic window that the project was intended to take advantage of.

The proprietary nature of integration brokers combined with the high cost of consulting usually results in either vendor lock-in or an expensive restart for each subsequent project. This means that even with successful projects, growth and scale are daunting. And in the event that you become unhappy with a vendor or an implementation, you're faced with the dilemma of either sticking with what you have, or facing a complete restart that includes either hiring more consultants or investing in a new learning curve. Because of all this, an IT organization is usually left with an island of integration that can't easily expand into new projects. In summary, the integration broker has proven to be just another piece of technology within the accidental architecture, rather than the solution for it.

As we learn more about the details of integration brokers, we will see that technical barriers contribute to the problems listed here. Also, a number of non-technology-based factors contribute to the growing need to adopt an ESB.


