5.5 Alternatives to Mainframe Interoperability

Architects may want to consider alternatives to building on top of the mainframe architecture for interoperability. This interest is usually driven by lower total cost of ownership (such as operating and service support costs for legacy COBOL applications), a transition strategy away from obsolete COBOL technology toward Open Standards and J2EE technology, and, more importantly, easier integration and interoperability using Java technology.

By migrating legacy application code to Java, architects and developers can then determine whether they would like to use document-based (asynchronous) or RPC-based (synchronous) Java Web Services. This allows more flexibility in customizing the business functionality to accommodate local requirements for synchronous or asynchronous transaction processing.

5.5.1 Technology Options

Transcode

The term transcode refers to translating and converting from one program language structure to another using intelligent rule engines, without rewriting from scratch.

Architecture

Several COBOL-to-Java transcoder products are currently available. Figure 5-22 shows an example product architecture from Relativity. Relativity's RescueWare consists of intelligent parsers that can parse COBOL programs and CICS/IMS screens into Java objects (including classes, methods, or even JavaBeans). This provides a convenient way to turn legacy COBOL programs into reusable objects. The Java objects can then be exposed as Web Services.

Figure 5-22. Relativity's RescueWare Architecture


Typically, COBOL-to-Java transcoding tools should provide the following functionality:

  • The automated migration tool set should provide tools to analyze the dependency and component hierarchy of the COBOL programs, and support automated (unattended or nonmanual) code conversion, preferably with some "conversion patterns." It should also allow platform environment parameters (for example, JCL parameters or dataset names on the mainframe) to be changed "intelligently" to the new target environment.

  • Some tools may have better integration with software version control tools (such as ClearCase, CVS). MIS reporting should be available for code changes, version changes, and audit logging.

  • There should be intelligent screen display code migration from 3270-like screens to Swing. Many usability anomalies will need to be resolved or supported.

After the COBOL code is transcoded into Java classes or EJBs, developers can create a server-side SOAP component to invoke these Java classes or EJBs. This approach provides a more flexible solution architecture for integrating with other systems and for extending system functionality.
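As a minimal sketch of this idea, the following plain Java service facade wraps a class as a transcoder might produce it. The class and method names (`AccountInquiry`, `lookupBalanceCents`, `AccountService`) are illustrative assumptions, not output of any specific tool; in a real deployment this facade would be registered with a SOAP stack such as Apache Axis or the Java Web Services Developer Pack.

```java
// Stand-in for a Java class produced by a transcoder from a COBOL program.
// The names and logic here are hypothetical, for illustration only.
class AccountInquiry {
    // Mirrors a COBOL paragraph that computed an account balance in cents.
    public long lookupBalanceCents(String accountId) {
        // Transcoded business logic would live here; this stub returns a fixed value.
        return "12345678".equals(accountId) ? 150000L : 0L;
    }
}

public class AccountService {
    private final AccountInquiry inquiry = new AccountInquiry();

    // The method a SOAP stack would expose as a Web Service operation.
    public String getBalance(String accountId) {
        long cents = inquiry.lookupBalanceCents(accountId);
        return String.format("%d.%02d", cents / 100, cents % 100);
    }

    public static void main(String[] args) {
        AccountService svc = new AccountService();
        System.out.println(svc.getBalance("12345678")); // 1500.00
    }
}
```

The point of the facade is that the transcoded class keeps its COBOL-shaped interface, while the service layer presents a clean, coarse-grained operation suitable for exposure as a Web Service.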

Solution Stack

The developer's platform relies on a PC, typically a Pentium 3, with 256MB RAM and at least 20GB storage running Windows 2000 or XP. The deployment hardware depends on the processing capacity requirements of the applications. Typically, they may range from Sun Fire mid-frame series (model 3800 to 6800) to the high-end Sun Fire 15K series.

Relativity's RescueWare, a developer tool, provides a comprehensive developer workbench, COBOL program analyzer, COBOL transcoding utilities, and data migration utilities.

Benefits

Automated and intelligent transcoding from COBOL to Java expedites the migration effort. The COBOL program analyzer can help developers identify dead code and factor legacy business logic into reusable EJB components.

Recompile

The term recompile refers to cross-compiling the source program language (such as COBOL) to a target language (such as Java byte code) using an intelligent language cross-compiler, without changing the application program logic.

Architecture

Figure 5-23 depicts the architecture of cross-compiling COBOL programs to Java byte codes using the LegacyJ product. Legacy COBOL programs can be refactored and cross-compiled to Java byte code using intelligent COBOL business rules. Structured procedures can then be transcoded into JavaBeans or EJBs. Data access routines to a legacy database such as VSAM or IMS can also be translated into Java naming conventions via the CICS Transaction Gateway Java classes or the like. However, the constraint is that both the Java Virtual Machine and the original CICS need to reside on the same physical machine.

Figure 5-23. LegacyJ Architecture


Once the COBOL programs are cross-compiled as Java classes, beans, or EJBs on the mainframe, developers can expose them as Web Services using tools such as Java Web Services Developer Pack or Apache Axis. This approach is conceptually elegant because it does not require vendor-specific middleware components (such as CICS Web Support) in the mainframe.

Solution Stack

The developer's platform relies on a Windows NT/2000 Pentium-based PC, typically with 256MB RAM and at least 40GB of storage. The deployment hardware is assumed to be the legacy platform, such as IBM mainframe OS/390 R1.x or OS/400 R4.x. On the OS/390 platform, the IBM Java Virtual Machine (or Java run time) version 1.3 or higher needs to be installed and configured in the OS/390 Unix System Services partition. Similarly, IBM JVM 1.3 or higher needs to be installed and configured on the OS/400 platform.

LegacyJ's PerCOBOL

A developer tool built on top of the open source Eclipse platform that provides COBOL-to-Java byte code cross-compilation. Several COBOL variants are supported, and a syntax check is performed when the COBOL code is compiled. Developers need to ensure the original COBOL source code is tested and deployed on the legacy system first, then copied to the workbench for cross-compilation. Upon successful compilation, the code needs to be copied to the target platform for execution.

Java Classes on Legacy System

IBM requires installing and loading the relevant VSAM or IMS Java classes in order to access VSAM/IMS datasets. These classes ship with the OS or can be downloaded from IBM's Web site.

Benefits

Legacy COBOL programs can be cross-compiled to run in a Unix System Services partition of the legacy system and can be called like ordinary Java classes. The cross-compilation capability enables external systems to access legacy system functionality via Java calls. These Java calls can also be wrapped as SOAP Web Services (XML-RPC) without changing the system infrastructure. This provides fast system interoperability, while leaving more room to re-engineer or migrate the legacy systems to an open platform in the long run.

Rehost

The term rehost refers to migrating the original program code from one platform to another without rewriting the program business logic. This may require some minor modifications to the language syntax owing to platform variance.

Architecture

Rehosting legacy COBOL applications on a mainframe usually results in porting the original COBOL source code to a Unix platform. This requires the use of a flexible COBOL environment that can accommodate variants of ANSI COBOL that run on the legacy mainframe, such as COBOL II and HOGAN COBOL. Apart from legacy COBOL programs, the rehosting environment also supports porting JCL (Job Control Language) or REXX, which are batch or scripting languages for both online and batch transaction processing.

The logical architecture in Figure 5-24 shows a multi-tier architecture that corresponds to the different components of a typical mainframe environment.

Figure 5-24. Sun's Mainframe Transaction Processing Architecture


Solution Stack

The hardware depends on the processing capacity requirements of the applications. Typically, they may range from the Sun Fire 3800 to 6800 series to the high-end Sun Fire 15K series.

Sun's Mainframe Transaction Processing Software (MTP) (previously InJoin's TRANS)

This provides a CICS-like environment for processing COBOL applications. MTP supports MicroFocus COBOL applications; some COBOL variants may need to be modified to run under MicroFocus COBOL in the MTP environment. A VSAM-compatible database is available for COBOL-VSAM implementations.

Sun's Mainframe Batch Manager (MBM) (previously InJoin's BATCH)

This provides a batch-oriented environment similar to MVS/JCL. This will supplement COBOL applications with any batch job control language in the operating environment.

It is possible to use Sun ONE Integration Manager EAI edition to expose COBOL programs as Web Services. This is similar to building a SOAP proxy on the mainframe, as depicted in the previous section (see Figure 5-16). However, it may not be cost-effective if the primary goal is to expose legacy COBOL programs as Web Services, because the total cost and effort of migrating COBOL programs from a mainframe to a Unix system may be higher than using other mainframe integration technologies.

Benefits

Legacy COBOL applications can be ported to a Unix environment with minor modifications to the MicroFocus COBOL syntax. This provides a low-risk, low-impact, minimal-change alternative to rehost COBOL applications on Unix, with potential integration with open platform using Java or another open technology. This solution approach does not need to re-engineer dead code from legacy COBOL applications.

Refront

The term refront here refers to rewriting the legacy program code in the Java language. This usually results in redeveloping or redesigning the front-end and perhaps refactoring the business logic into reusable components.

Architecture

Refronting legacy COBOL applications denotes rewriting and rebuilding the business logic. This requires re-engineering the business logic as well as the application architecture, say, using Java and Web Services. J2EE provides a flexible framework and application architecture (see Figure 5-25) that can scale in an n-tier architecture. Developers can design JSPs, servlets, or EJBs to invoke Web Services. This is a neater, longer-term architectural solution that is not constrained by any legacy system components.

Figure 5-25. Refactoring Legacy Systems Using J2EE Architecture


Solution Stack

The developer's environment runs on Solaris OE version 8 or higher (for example, on an Ultra-10 workstation) or a Windows 2000 or XP Pentium-based PC, typically with 256MB RAM and at least 20GB of storage. The deployment hardware depends on the processing capacity requirements of the applications. Typically, they may range from the Sun Fire midframe series (such as Sun Fire 3800 or Sun Fire 6800) to the high-end Sun Fire 15K series.

Sun ONE Studio, a developer tool, provides a powerful developer workbench to develop, test, and deploy Java programs. There are extensive libraries (such as NetBeans) and J2EE patterns (from http://developer.java.sun.com/developer/technicalArticles/J2EE/patterns/ and http://www.sun.com/solutions/blueprints/tools/ ) available for software reuse.

Sun ONE Application Server. A J2EE-compliant application server that provides Web and EJB containers to develop and execute Java servlets and EJBs. It also supports session, state, and connection pooling for transaction processing.

JAX (Java API for XML Pack). A publicly available bundle of XML-related Java APIs for developing XML-based transformations and Web Services. It includes the JAXP, JAXB, JAXM, JAXR, and JAX-RPC modules.

Java Web Services Developer Pack. An all-in-one Web Services developer kit available to the public; it includes JAX, Tomcat, ANT, SOAP, and an application deployment tool.

Benefits

J2EE is an industry-accepted Open Standard that enables better system interoperability and reusability. The legacy system and any dead code (legacy program code that is inefficient or poorly designed) can be re-engineered. It also provides a good opportunity to refactor inefficient code into reusable components and to tune the performance of bottleneck modules. This results in better application Quality of Service, such as better performance, throughput, and reusability of program modules, after re-engineering the inefficient code.

5.5.2 Design and Implementation Considerations

When to Use

Table 5-2 outlines some pros, cons, and when-to-use guidelines for legacy code migration. They may not be applicable to all scenarios, as each real-life customer scenario is complex. The guidelines assume many batch and off-line programs that do not require interactive response, and no mixture of asynchronous and synchronous modes of messaging or communication.

Migration Framework

In addition to a sound architecture strategy and appropriate tools, a structured migration framework is critical to migrating legacy applications to Java and Web Services. The following migration framework, drawn from a commercial bank scenario, moves legacy COBOL applications to Java. It provides an example of how to use Web Services to integrate with mainframe systems and transition to a J2EE platform in the long term. The migration framework should be reusable for most industries.

Multiphase Customer Relationship Management

At present, the bulk of customer information is stored in the current Customer Master (also known as Customer Information File or CIF) on the mainframe. Different delivery channels or touch points (such as the call center/Internet banking, securities products, credit card application, or loan processing application) have also captured some customer information and preferences, but there is no single customer information repository to aggregate them in real-time for customer analysis and business intelligence. We propose to adopt a multiphase approach to migrating the existing CIF and various customer information sources to a new customer database to support Customer Relationship Management (CRM).

Table 5-2. Some Considerations for When to Use Legacy Code Migration Tools

Transcode

  When to Use: Existing legacy applications have low complexity. This applies to both off-line and batch processing.

  Pros: The legacy code conversion can be automated, so there is a low change impact for COBOL code written in a generic, well-documented programming style.

  Cons: Manual changes are needed for high-complexity programs with dead code.

Recompile

  When to Use: Suitable for stable legacy system functionality where there is no anticipated change and no strategy for future enhancement or re-engineering.

  Pros: There is minimal impact to the existing architecture. There is no need to migrate the back-end database resources.

  Cons: The application requires upgrading the legacy operating system to z/OS and installing a Java Virtual Machine in an LPAR for run time. Thus, architects and developers cannot decouple the business functionality from the legacy platform.

Rehost

  When to Use: This applies to many batch and off-line programs.

  Pros: It has a lower impact of changes.

  Cons: It is not ideal for online legacy systems, as it may incur considerable changes to the hardware and software environment.

Refront

  When to Use: This allows re-engineering of business logic incrementally.

  Pros: Developers can take the chance to clean up dead code.

  Cons: There is a high risk of re-engineering business logic.

In Stage 1 (see Figure 5-26), a new CRM business data model needs to be defined and customized. Customer information will be extracted from existing CIF and delivery channels (such as ATM channel and teller platform). The data extraction will be one-way data synchronization using the existing middleware or messaging infrastructure. Nonintrusive adapters or Web services need to be implemented.

Figure 5-26. Stage 1: Creating Customer Information XML Schema

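As a sketch of the Stage 1 extraction, the following Java snippet builds a customer record as XML using the standard DOM and JAXP transformer APIs. The element names (`Customer`, `CustomerId`, `Name`) are assumptions for illustration; the real CRM business data model would define the actual schema.

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class CustomerXml {
    // Builds a customer record under a hypothetical schema; element names are illustrative.
    public static String toXml(String id, String name) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("Customer");
        doc.appendChild(root);

        Element idEl = doc.createElement("CustomerId");
        idEl.setTextContent(id);
        root.appendChild(idEl);

        Element nameEl = doc.createElement("Name");
        nameEl.setTextContent(name);
        root.appendChild(nameEl);

        // Serialize the DOM tree to a string for the synchronization channel.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(toXml("C001", "Ada Lovelace"));
    }
}
```

A nonintrusive adapter would emit records like this onto the existing middleware or messaging infrastructure, leaving the source CIF untouched.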

In Stage 2 (see Figure 5-27), we recommend building a two-way simultaneous data synchronization between the new customer database and various data sources.

Figure 5-27. Stage 2: Synchronizing All Interfaces


In Stage 3 (see Figure 5-28), the legacy CIF and old CIF interfaces can be decommissioned, and dynamic customer analysis, segmentation, and cross-selling/up-selling can be supported using OLAP (Online Analytical Processing) and a data warehouse/business intelligence infrastructure. A single customer view can be consolidated easily.

Figure 5-28. Stage 3: Consolidating into a Single Customer View Using Web Services


Data Migration

Similar to multiphase CRM, legacy data files or databases (VSAM or DB2) can be migrated from the mainframe to an Open Platform in Stages 2 and 3, in conjunction with the initiatives to off-load mainframe loading. There are existing utilities that can rehost VSAM files to a Unix platform (for example, Sun's MTP). Alternatively, data can be extracted to flat files and reimported into an RDBMS.
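The extract-to-flat-file path typically means parsing fixed-width records laid out by a COBOL copybook before loading them into the RDBMS. The following sketch assumes a hypothetical layout (account number PIC X(8), name PIC X(20), balance PIC 9(7)V99 with an implied decimal point); a real migration would generate this parsing from the actual copybook.

```java
public class CopybookParser {
    // Hypothetical copybook layout, for illustration only:
    //   ACCT-NO   PIC X(8)      columns 1-8
    //   CUST-NAME PIC X(20)     columns 9-28
    //   BALANCE   PIC 9(7)V99   columns 29-37 (implied decimal point, no '.')
    public static String[] parse(String record) {
        String acct = record.substring(0, 8).trim();
        String name = record.substring(8, 28).trim();
        String raw  = record.substring(28, 37); // 9 digits, no decimal point
        // Reinstate the implied decimal point before loading into an RDBMS.
        String balance = Long.parseLong(raw.substring(0, 7)) + "." + raw.substring(7);
        return new String[] { acct, name, balance };
    }

    public static void main(String[] args) {
        String rec = "00012345JOHN SMITH          000150075";
        String[] f = parse(rec);
        System.out.println(f[0] + " | " + f[1] + " | " + f[2]);
    }
}
```

In practice the parsed fields would feed a batch of JDBC `PreparedStatement` inserts, with data cleansing and transformation applied between the parse and the load.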

Data migration will depend on a business data model, data extraction, data cleansing, data transformation, and the subsequent administration (for example, backup, archive, and restore). The current middleware or messaging infrastructure will provide a core infrastructure for the data migration processes.

Legacy Application Migration

Legacy COBOL applications can be off-loaded from the mainframe by any one, or a combination, of the following migration approaches: rehosting on Unix with COBOL, recompiling COBOL to Java byte codes, transcoding COBOL to Java, or rewriting in J2EE.

Approaches to COBOL-to-Java Migration

One approach is a big bang migration with complete code-level conversion. All code will run on Java with a new database platform running on Unix. This is a complete detachment and cut-off from the mainframe, which is the ideal case.

Another approach is a parallel run, a transition strategy in which the new Java code and database run in parallel with the legacy system. This raises the question of how data synchronization operates. For example, if the Java code retrieves historic data from the legacy system via JDBC, how would it handle mainframe integration online (or off-line)?

Partial migration, where legacy code coexists with Java, is the most complicated approach: the majority of the code is converted to Java, while some of it may still need to access historical data or legacy resources (such as QSAM files on the mainframe or DB2 via JDBC). The migration tool should be able to support direct access to legacy resources via JDBC or some other mainframe integration means (for example, translating database access code to JDBC). This is the worst-case scenario.

Code Conversion Methodology

A systematic methodology for handling code conversion is very useful for the delivery. This includes packaging all JCL/COBOL programs in the source directory, scanning the original programs to analyze the program hierarchy or dependency, and scanning the programs for the appropriate programming models or conversion patterns. There may be a need to refactor the business logic into reusable components such as EJBs (Alur, Crupi, and Malks, 2001, Core J2EE Patterns and http://www.refactoring.com introduce the concept of refactoring).
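The dependency-scanning step can be approximated with a simple pass over the COBOL source that collects static `CALL` targets. The sketch below is a deliberately minimal illustration, not a substitute for a commercial analyzer, which would also resolve dynamic calls, copybooks, and JCL job steps.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CallScanner {
    // Matches static COBOL calls of the form CALL 'PROGNAME' or CALL "PROGNAME".
    private static final Pattern CALL =
            Pattern.compile("\\bCALL\\s+['\"]([A-Z0-9-]+)['\"]", Pattern.CASE_INSENSITIVE);

    // Returns the sorted set of programs called by the given source lines.
    public static Set<String> callees(List<String> sourceLines) {
        Set<String> found = new TreeSet<>();
        for (String line : sourceLines) {
            Matcher m = CALL.matcher(line);
            while (m.find()) {
                found.add(m.group(1).toUpperCase());
            }
        }
        return found;
    }

    public static void main(String[] args) {
        List<String> cobol = Arrays.asList(
            "           CALL 'ACCTUPD' USING WS-ACCT.",
            "           CALL \"RPTGEN\" USING WS-RPT.",
            "           MOVE ZERO TO WS-TOTAL.");
        System.out.println(callees(cobol)); // [ACCTUPD, RPTGEN]
    }
}
```

Running such a scanner over every program in the source directory yields a call graph, which in turn drives the conversion order and highlights candidates for refactoring into shared components.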

Developers may then start converting the code to Java, manually fixing dead code, or re-engineering some code. This is followed by retrofitting the new EJBs or components into a design pattern or Java library for future use, and testing the code with a GUI or test data feeds.

Integration With Development Life Cycle

The entire migration needs to integrate with the development platform management. The tool should be able to migrate converted code to a virtual or temporary development platform for code testing (for example, automated software migration to a "partitioned" or logical development platform where developers can modify their code with their IDE front-end).

It should also integrate with back-end resources management. The migration tool should be able to handle changes in the system environment parameters embedded in the original COBOL code, MVS/JCL, embedded SQL code, or EXEC CICS code easily, without manually changing these parameters in many places. It also needs potential simulation of legacy systems or transient storage (for example, DB2, COMMAREA, and the temporary DD units used in the SORT utilities need to be simulated).
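One common way to avoid hand-editing environment parameters in many places is to lift them into a properties file and substitute them at deployment time. The sketch below shows this idea with a `${NAME}` placeholder convention; the placeholder syntax and property names are assumptions for illustration, not a feature of any particular migration tool.

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EnvParamMapper {
    // Replaces ${NAME} placeholders (e.g., dataset names lifted out of JCL or
    // COBOL SELECT clauses) with values configured for the target environment.
    public static String resolve(String template, Properties env) {
        Matcher m = Pattern.compile("\\$\\{([A-Z0-9_.]+)\\}").matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Unknown keys are left as-is so they are easy to spot in testing.
            String value = env.getProperty(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Properties env = new Properties();
        env.setProperty("CUST.DATASET", "/data/dev/customer.dat");
        System.out.println(resolve("OPEN INPUT ${CUST.DATASET}", env));
        // OPEN INPUT /data/dev/customer.dat
    }
}
```

Switching between the development, testing, and production platforms then only requires swapping the properties file, not touching the converted code.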

For testing platform management, the finished Java code should be "packaged" (for example, recompiling a set of J2EE .ear files) and tested in a "partitioned" or logical testing platform for unit testing or integration testing. The tool should allow test feeds to be input for testing. This will entail another software migration process (via scripting, if necessary) to a testing platform, which may be on the same physical machine with a logical partition or a separate machine.

For production platform management, this is similar to the testing environment platform management, except that this is a production platform. There should be a fire-fighting production mirror platform where high severity production bugs can be fixed right away, before migrating to production.

Conclusion

Banking customers are risk-averse about migrating their legacy systems to an open platform, as banking services are highly critical to customer service and carry significant financial risk. As a result, it is important to adopt a low-risk approach that mitigates technology and implementation risks.

The critical success factors for migrating core banking systems are expertise, process, and technology skills. Therefore, engaging people with the right experience to customize an appropriate migration methodology is worthwhile and reduces risk. Aside from that, a Proof of Concept or pilot also helps.

Architecture Implications and Design Considerations

Different migration design options will impose certain architectures. For example, a rehosting architecture requires porting the entire application to a new open platform, which requires a different technology skill set than mainframe skills. This is on top of the migration hardware and software costs. Architects and developers need to be conscious of each design decision they make.

On the other hand, transcoding technology may render different Java code designs. For instance, some transcoding tools may render the same COBOL procedure as a lengthy if-then statement chain, while others render it as a structured case construct. This may impact the maintainability and performance tuning of the Java source code. Some intelligent transcoding tools can refactor dead code into EJBs. This obviously makes the design more flexible and extensible if developers want to add more service components to the EJBs.
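The maintainability difference can be seen in a small example. Both methods below are plausible renderings of the same hypothetical COBOL EVALUATE over a transaction type code; they are behaviorally identical, but the switch form is typically easier to read, extend, and tune.

```java
public class EvaluateRendering {
    // Rendering 1: a chain of if-then statements, as some transcoders emit.
    public static String viaIfChain(char txnType) {
        if (txnType == 'D') return "DEBIT";
        else if (txnType == 'C') return "CREDIT";
        else if (txnType == 'T') return "TRANSFER";
        else return "UNKNOWN";
    }

    // Rendering 2: a structured case construct with the same behavior.
    public static String viaSwitch(char txnType) {
        switch (txnType) {
            case 'D': return "DEBIT";
            case 'C': return "CREDIT";
            case 'T': return "TRANSFER";
            default:  return "UNKNOWN";
        }
    }

    public static void main(String[] args) {
        for (char c : new char[] {'D', 'C', 'T', 'X'}) {
            System.out.println(c + " -> " + viaSwitch(c));
        }
    }
}
```

When evaluating transcoding tools, it is worth inspecting which rendering style they produce for representative procedures, since that style will dominate the maintenance cost of the converted code base.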

Risks and Their Mitigation

In order to migrate COBOL applications to Java, several technical risks may impose constraints on the implementation. Mitigating them may require a mixture of software solution sets and migration processes. This section introduces a few major migration models that mitigate specific technology risks.

Most legacy systems are traditionally built from years of experience and testing, and it is unrealistic to wait years for a new system to be rebuilt. In order to meet changing market needs, it is crucial to build systems using automated migration tools (for example, recompiling COBOL to Java byte codes) with shorter development life cycles and faster time to market.

COBOL programs have a relatively long development life cycle (for example, long program constructs and a longer testing cycle) and many variants (such as ANSI COBOL and COBOL II). By nature, they were not designed to handle memory management, message processing, and system interoperability. Thus, rewriting the applications in Java can address these language constraints.

Many COBOL-based applications have dead code or proprietary extensions, and some have no design documentation, which makes re-engineering difficult. They may require some form of re-engineering; transcoding tools may provide a neater way to analyze and rebuild business logic based on the COBOL source code.

Many COBOL programs rely on system-specific interface methods (such as EXEC CICS on an IBM mainframe) to interoperate with external systems. This may impose a major constraint on application migration, as it is platform-dependent and often requires some re-engineering. However, this may open up the opportunity to re-engineer the legacy system interface as an asynchronous message or simply expose the system interface as reusable Web Services.



J2EE Platform Web Services
ISBN: 0131014021
Year: 2002
Pages: 127
Authors: Ray Lai
