Architecting the Migration Solution


The first task in the AIM methodology is to determine which of the key migration techniques should be applied to each of the source components. This determination requires a component model specific to the purpose of migration planning, that is, a component/technique map. The second output from such a component model is the definition of the scope of the migration. If a software component or function is not a member of the agreed-on component model, it is considered to be out of scope.

The key migration techniques used at the organization in our case study included a rehosting approach, supplemented by replacement and reverse engineering. The key rehosting technique involved reusing GEAC's and Sybase's platform independence, supplemented with source code porting for some of the Sybase objects and the HP/UX shell scripts. In this case, the component model included the database servers and their hosted business logic, print management, journaling services, and job management. The key technique driving the migration project was rehosting, using a new installation of a Solaris instance of the COTS package and the runtime software infrastructure. The reason for this is that the independent software vendors (ISVs) support their applications on multiple operating systems and support a common application programming interface (API) for their products. These practices ensure that code or infrastructure changes are minimized.

The boundary of the migration problem is defined by exclusively analyzing components that are currently located on the platform to be retired. Any components located on other systems will communicate with the migrated componentry through the ISV's API. Some testing or research is required to validate that the ISVs have a common API across platforms, but this is a smaller task than attempting to migrate the calling components. The migration process can be focused on the customer's data and proprietary code base.
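The following sketch illustrates one way such a validation might be automated: running a few deterministic queries through the vendor's client utility (isql) against both servers and comparing the responses. The server names, login, and password variable shown here are hypothetical placeholders, not part of the case study.

#!/bin/sh
# Hypothetical smoke test: run the same deterministic queries through the
# Sybase client (isql) against the HP/UX source and the Solaris target and
# compare the responses. Server names, login, and password variable are
# placeholders for this sketch.
SRC=SRC_HP
TGT=TGT_SUN

for SRV in $SRC $TGT
do
    isql -U probe_user -P "$SYBPASS" -S $SRV <<EOF > /tmp/api_probe.$SRV
select upper("transact-sql probe"), convert(varchar(12), 12345.678)
go
EOF
done

diff /tmp/api_probe.$SRC /tmp/api_probe.$TGT \
    && echo "client API responses match" \
    || echo "WARNING: responses differ between source and target"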

Defining the Scope and Approach

The architectural study's two outcomes are a definition of scope and a technique/component map. The definition of scope means that there are two sets of objects to which no migration technique will be applied: those determined to be out of scope and those to which a retirement/replacement technique is to be applied. The development of the technique/component map is iterative.

This section describes how to use an architectural approach to define the scope of the project. This approach involves the discovery of business constraints, the application of migration techniques and strategies to the identified components, and the identification of any external batch or real-time feeds. A version of the component/technique map is presented in TABLE 11-1 on page 216.

The following figure illustrates the scope of the project. Components within the shaded box are in scope for migration, and an examination of the arrows crossing the shaded box shows the communication protocols used by the migrated components to receive their input and output. Tabular Data Stream (TDS) is Sybase's client-server protocol, which encapsulates their implementation of SQL, Transact-SQL. These protocols are all stable UNIX-guaranteed or ISV-guaranteed protocols, reinforcing the decision to use rehosting as the strategy.

Figure 11-2. Applications Component Model


Creating a Transition Plan

The creation and design of the transition plan is a separate task within the migration project. It requires the application of project planning skills and might involve prototyping and the application of technical design skills.

Discover Business Constraints

The migration team consulted with the business and IT departments to discover any business and operational constraints that existed. Fortunately, this was primarily a front-office system for a call center and it was required only during an extended working day. The overnight batch process typically took most of the night during the week, but data load prototyping showed that an overnight run and the copy process could be undertaken on a weekend without impacting business hours.
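As an illustration of how such a data load prototype might be timed, the following sketch bulk-copies a single representative table out of the source and into the target, timing each phase to help size the weekend transition window. The server, database, table, and login names are placeholders, not the values used in the case study.

#!/bin/sh
# Hypothetical timing prototype: bulk-copy one representative (large) table
# out of the source and into the target, timing each phase. All names and
# paths are placeholders.
SRC=SRC_HP
TGT=TGT_SUN
DB=finance
TABLE=gl_detail

echo "export from source started at `date`"
time bcp $DB..$TABLE out /var/tmp/$TABLE.bcp -c -U probe_user -P "$SYBPASS" -S $SRC

echo "import into target started at `date`"
time bcp $DB..$TABLE in /var/tmp/$TABLE.bcp -c -U probe_user -P "$SYBPASS" -S $TGT
echo "import finished at `date`"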

Design a Plan

A key feature of the transition plan in this case study was to build in a regression path and a final user acceptance test.

The basic plan was to close the source system at the close of business on Friday, copy the outstanding data from source to target, and then run the overnight batch on both systems. This meant that both systems should be at "start of day Monday" state. This would permit the two systems to be compared and either system to operate as the production host on Monday morning. The plan met the goal of testing for success while preserving a regression path in case final testing exposed catastrophic failure conditions. It also provided a test that the business logic in the overnight batch run was identical on both systems. The inputs to both overnight runs would be designed to be identical, and if the outputs were not the same, the test would fail.

Additional features in the plan included check-pointing the process and inserting test points so that tasks between test points could be repeated should the intermediate tests fail or alternative remedial action be undertaken.
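A minimal sketch of how such check-pointing might be scripted is shown below: a task is marked complete only when its test point passes, so a failed step can be repeated or remediated without redoing earlier steps. The individual task and test scripts it invokes are hypothetical placeholders.

#!/bin/sh
# Hypothetical checkpoint wrapper for transition-weekend tasks. The task and
# test scripts named at the bottom are placeholders for this sketch.
CKPT_DIR=/var/tmp/transition_ckpt
mkdir -p $CKPT_DIR

run_step()
{
    STEP=$1
    TASK=$2
    TEST=$3
    if [ -f $CKPT_DIR/$STEP.done ]
    then
        echo "$STEP already complete - skipping"
        return 0
    fi
    echo "starting $STEP"
    $TASK || { echo "$STEP task failed"; exit 1; }
    $TEST || { echo "$STEP test point failed - rerun after remedial action"; exit 1; }
    touch $CKPT_DIR/$STEP.done
}

run_step copy_tables  ./copy_tables.sh         ./check_copy_complete.sh
run_step batch_run    ./run_overnight_batch.sh ./compare_batch_outputs.sh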

Prototyping the timings and developing metadata discovery tools led to changes in the strategy/component map. This was reinforced when the downtime window was finally established. The change was to leverage the installation processes of GEAC and Sybase. This meant that metadata and configuration data were prepopulated and other, less-volatile, objects were also precopied. These included the table definitions, views, triggers, and procedures. The precopying of the procedures meant that the project team had to amend the development change control process to ensure that any changes to procedures already copied were applied to both the source and the target system.

This change of approach leveraged the principle of precopying to minimize the work to be undertaken on the weekend when the transition was to occur. In this case, we reached a point where only data table contents (and hence their indexes) were to be transferred on the transition weekend.
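That amendment to change control might be supported by a small comparison check such as the following sketch, which verifies that a precopied stored procedure is still identical on both servers. The server, database, and login names are placeholders.

#!/bin/sh
# Hypothetical change-control check: confirm that a precopied stored
# procedure is still identical on the source and the target after any
# development change. Names are placeholders for this sketch.
SRC=SRC_HP
TGT=TGT_SUN
DB=finance
PROC=${1:?"procedure name required"}

for SRV in $SRC $TGT
do
    isql -U probe_user -P "$SYBPASS" -S $SRV <<EOF > /tmp/proc_text.$SRV
use $DB
go
sp_helptext $PROC
go
EOF
done

if diff /tmp/proc_text.$SRC /tmp/proc_text.$TGT > /dev/null
then
    echo "$PROC is identical on source and target"
else
    echo "$PROC differs - reapply the change to the target before transition"
fi

Running such a check for each procedure changed during the freeze period would keep the precopied objects in step with development.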

Note

Indexes are keyed to the page map, which must reflect the UNIX volume implementation and hence the RDBMS's intermediate structures. This generally mandates the rebuilding of indexes on the target system. Depending on the implementation details of the RDBMS, this can be done before or after the data copy transaction.
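For a Sybase target, the rebuild might amount to little more than dropping and re-creating each index after the copy, along the lines of the sketch below. The database, table, index, and column names are placeholders.

#!/bin/sh
# Minimal sketch of a post-copy index rebuild on the target, so that the
# index pages reflect the new volume layout. All names are placeholders.
TGT=TGT_SUN
DB=finance

isql -U dba_user -P "$SYBPASS" -S $TGT <<EOF
use $DB
go
drop index gl_detail.gl_detail_idx1
go
create index gl_detail_idx1 on gl_detail (account_no, posting_date)
go
EOF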


Design Test Plans

The key benefit from leveraging both rehosting and reverse engineering is that runtime testing of the new environment is minimized. The underlying assumption that the ISV implementations have a common and stable API requires testing, and the testing process needs to be sufficiently broad to ensure that the assumption is considered safe. This means that testing the input interface for semantic meaning is not required. The basic purposes of testing in this case study were to prove the following:

  • The copy process was comprehensive.

  • The target system represented the business accurately (or at least as accurately as the source system).

  • The required service improvement goals had been met.

A further way of reducing the testing required is to utilize prototyping as a technique. In this particular case, the less critical systems (in revenue-earning terms) were migrated before the production systems, and any bugs in the transition process were discovered and rectified. The copy completion checks were developed and improved during the prototyping process.

Note

The copy completion checks were based on checking that all rows of a subset of the tables were copied. Additionally, we ran a checksum script against the contents of certain columns.
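A minimal sketch of such a check for a single table follows: it compares the row count and the sum of one critical column on the source and the target. The server, database, table, and column names are placeholders.

#!/bin/sh
# Hypothetical copy-completion check for one table: compare the row count and
# the sum of a critical financial column on source and target. Names are
# placeholders for this sketch.
SRC=SRC_HP
TGT=TGT_SUN
DB=finance
TABLE=gl_detail
COL=posted_amount

for SRV in $SRC $TGT
do
    isql -U probe_user -P "$SYBPASS" -S $SRV <<EOF > /tmp/copy_check.$SRV
use $DB
go
select count(*), sum($COL) from $TABLE
go
EOF
done

diff /tmp/copy_check.$SRC /tmp/copy_check.$TGT \
    && echo "$TABLE verified" \
    || echo "$TABLE FAILED copy verification"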


Testing tools consisted of five sets of tests:

  • Copy integrity: precopy

  • Copy integrity: transition phase

  • Semantic integrity

  • Business acceptance

  • Performance acceptance

The copy integrity suites both involved writing programs that read the source and target systems to compare them. These browsed the database catalog tables, and since the query language used was SQL, only one language was needed. In addition, critical columns were check-summed; these were the critical item-level financial columns. One error was discovered at this phase, caused by a bug in the application. This error was corrected in the production code line, and a fix was applied to the erroneous data. This illustrates the principle of fixing a problem at its root cause rather than writing a data transformation to mask the problem during migration.
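A catalog-level comparison might look like the following sketch, which lists each user table and its column definitions from the system catalog on both servers and diffs the results. The server, database, and login names are placeholders.

#!/bin/sh
# Hypothetical catalog comparison: list the user tables and their column
# definitions from the system catalog on both servers and diff the output.
# Names are placeholders for this sketch.
SRC=SRC_HP
TGT=TGT_SUN
DB=finance

for SRV in $SRC $TGT
do
    isql -U probe_user -P "$SYBPASS" -S $SRV <<EOF > /tmp/catalog.$SRV
use $DB
go
select o.name, c.name, c.type, c.length
from sysobjects o, syscolumns c
where o.id = c.id and o.type = 'U'
order by o.name, c.colid
go
EOF
done

diff /tmp/catalog.$SRC /tmp/catalog.$TGT || echo "catalog differences found"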

Semantic integrity tests were limited in this case study. The key area they were applied to was the shell-script-based print management solution. The key question to be answered in this test suite is, does the code behave the same on both systems?
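One way to answer that question for the print-management scripts is to feed the same sample request to the script on both hosts and compare the output, as in the sketch below. The host names, script path, and sample file are hypothetical placeholders.

#!/bin/sh
# Hypothetical semantic test for the ported print-management scripts: run the
# same script with the same sample input on the HP/UX source host and the
# Solaris target host, and compare the output. Names are placeholders.
SRC_HOST=hpux01
TGT_HOST=sun01
SCRIPT=/opt/app/bin/print_route.sh
SAMPLE=/var/tmp/sample_print_request

for HOST in $SRC_HOST $TGT_HOST
do
    rcp $SAMPLE $HOST:$SAMPLE
    rsh $HOST "$SCRIPT $SAMPLE" > /tmp/print_test.$HOST 2>&1
done

diff /tmp/print_test.$SRC_HOST /tmp/print_test.$TGT_HOST \
    && echo "print management behaves identically on both hosts" \
    || echo "behavioural difference found - review the ported script"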

We supplemented semantic tests by co-opting members of the business unit's training department and their training scripts to test the unchanged client layer against a Sun-hosted migrated environment. To conduct these tests, we undertook the first full-scale migration test on the training instance of the application.

The performance tests had been specified during the contract prenegotiations, with pretests undertaken in Sun's Global Benchmarking center. These tests were repeated on site on an appropriately sized instance of the database before the migrated solution was placed into production.

Specify the Business Acceptance Tests

The business acceptance tests were specified to meet the following goals:

  • To prove the target system accurately represents the business

  • To prove the target system meets the performance-based system improvement goals

The axiom of the project was no change in business logic, so the basic acceptance test was running the test chart of accounts, which was a report option within the package. The view taken was that if the target system represented the business accurately, then it was suitable. This view was based on one of the fundamental theories of software development: the primacy of design lies in the data model, and if the data model implementation is accurate, then the process implementations can change to support changes in process. This view had the advantage that the run times of the test were low and the verification time was also low. Using this test mandated constraints on the transition strategy.

It was decided jointly by the customer and the team that the best way to ensure that the system accurately represented the business was to run a test chart of accounts. To simplify the comparison between the two reports, we ran both of them at the same logical time. This leveraged the decision to keep both the source and the target system available, because both were at their start-of-day state after the transition.
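Comparing the two reports can be largely mechanical, as in the following sketch. The report file locations, and the run-specific header lines stripped before the comparison, are assumptions for illustration only.

#!/bin/sh
# Hypothetical comparison of the test chart of accounts produced on the
# source and target at the same logical (post-batch) time. Report paths and
# the header lines stripped below are placeholders.
SRC_REPORT=/var/tmp/coa_source.rpt
TGT_REPORT=/var/tmp/coa_target.rpt

# Remove run-specific header lines (timestamps, host names) so that only the
# business content of the two reports is compared.
sed -e '/^Run date:/d' -e '/^Host:/d' $SRC_REPORT > /tmp/coa_src.clean
sed -e '/^Run date:/d' -e '/^Host:/d' $TGT_REPORT > /tmp/coa_tgt.clean

diff /tmp/coa_src.clean /tmp/coa_tgt.clean \
    && echo "chart of accounts matches - acceptance check passed" \
    || echo "chart of accounts differs - investigate before go-live"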

Given that the business acceptance test involved running a program that relied on denormalized tables, a risk was identified that the acceptance test might not be sufficiently comprehensive to test the data migration, so additional instrumentation of the migration process was developed. As stated above, a target container was created that already held all the non-business/transactional data instances (rows), so that the migration was constrained to business/volatile data only. In addition, test programs were written to ensure that the source and target container properties were identical, and that the contents of the business data tables were copied accurately, in terms of both the number of entries and the sums of critical columns.
