Testing the Migrated Environment
Testing is perhaps the least appreciated component of a successful migration project. It is frequently overlooked, leading to costly deployment delays, incompatible data streams, and incorrect results. Of all the activities associated with a migration project, testing is the "black sheep" of the family. Because it is thought of as boring, repetitive, and adding little value, this critical activity is often given short shrift in favor of higher-profile activities such as code transformation, database conversion, or third-party product integration. Nothing could be further from the truth. Unless the new application and new environment can be verified as fulfilling the requirement specifications, they cannot be deployed.
It is critical that the value of testing or quality assurance (QA) be well understood by the organization. Typically, QA is the last part of a migration activity or product-release cycle. After all the hard work required to migrate an application or to develop a product, the last part of the project is to test and verify that the system does what it is supposed to do. At this time, all eyes are on the QA staff. While the rigorous tests QA teams perform frequently result in the perception that they are holding up the transition to the new solution or the release of a new product, thorough and complete testing is critical to the successful implementation of a migration solution.
In the following sections, we identify the various types of testing that can be performed. We also describe how the testing methodology can be extended so that testing is not an activity taken on at the end of the project, but one that is integrated into the project from the outset. The development of a test environment and the associated test suites is key to an effective testing strategy.
Building the Test Environment
In addition to the development and production environments, a testing environment is required to support the activity of verifying that the migrated application functions as planned. Testing is rarely permitted in a production environment, and the development environment is usually reserved for the software developers producing the new application; it might not have all the tools or the capacity to support the tests required before an application is put into production.
As with product development, migration activities must conclude with testing to ensure that the application meets its requirements. Initially, the production environment can and should be used to test the application, because it will also help verify that the hardware platform is functioning correctly. However, this arrangement will no longer work when what was serving as the test environment becomes the production environment. It is important that you carefully plan for the acquisition of additional hardware and software to test the application once the new solution is put into production.
The test environment requires all the same supporting software that is required in the production environment. One approach used to increase availability calls for the purchase of an identical system that might or might not be in a clustered environment. This additional system can be used as a test platform to shake out problems introduced with the addition of new features or functionality during product development, and it can be switched to a production role should the production system have problems or require regularly scheduled maintenance. Always ensure that the test and production platforms are running identical versions of the OS and support software, and ensure that they have the same patches applied.
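Keeping the test and production platforms at identical patch and package levels can be automated. The following sketch compares two package inventories and reports any package whose presence or version differs; the package names and versions are illustrative placeholders, and in practice the lists would come from the platform's package manager.

```python
# Sketch: verify that test and production hosts report identical
# package inventories. The inventory data here is illustrative;
# real lists would be captured from each host's package manager.

def inventory_diff(prod, test):
    """Return packages whose presence or version differs between hosts."""
    prod_map = dict(prod)
    test_map = dict(test)
    diff = {}
    for name in set(prod_map) | set(test_map):
        if prod_map.get(name) != test_map.get(name):
            # record (production version, test version); None = missing
            diff[name] = (prod_map.get(name), test_map.get(name))
    return diff

prod_pkgs = [("openssl", "1.1.1k"), ("libc", "2.31"), ("dbclient", "4.2")]
test_pkgs = [("openssl", "1.1.1k"), ("libc", "2.28"), ("dbclient", "4.2")]

print(inventory_diff(prod_pkgs, test_pkgs))
# {'libc': ('2.31', '2.28')}
```

An empty result confirms the two platforms match; any entry flags a discrepancy to resolve before the system is trusted for testing.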
Of course, cost consideration might influence the way you choose to build a test environment. It might be possible to execute regression, unit, or correctness testing on a smaller machine with limited capacity.
Creating the Test Plan
The types of testing that should be performed vary depending on the environment that is being migrated. There are two possible scenarios:
As with all migration requirements that call for application-specific knowledge from members of the organization's IT staff, the impact of this interaction will have to be taken into account in terms of the disruption it might cause to ongoing operational or development efforts. The organization must understand that the creation of a test plan requires significant input from the IT staff, and it must be willing to provide the local resources as they are required.
As needed, use external consultants to assist in the generation of the test plan and the testing effort itself. A senior tester who is skilled in testing methodology can guide the development of a comprehensive test plan. Once the test plan and test cases have been defined, the execution of the tests can be performed by other external resources. Again, if the testing procedure is very complicated or domain specific, it might be more cost effective to have local resources execute the test cases.
If you use external consultants to create test plans and test cases, the cost and duration of the effort increase because detailed knowledge of the domain will have to be acquired. When developing a test plan, use the following resources:
Performing Unit Testing
Unit tests are typically small tests that verify the functionality of a class or function used in the creation of a larger application. These tests should be written before coding of the implementation begins. Unit tests can be run in batch mode and frequently have automated results analysis, meaning that the tester is told exactly which case failed, thereby enabling rapid verification of changes.
When unit tests exist, use them. However, if they do not exist, there is no requirement that they be created. As previously mentioned, unit tests should be created before an application is coded. Tests created after the fact would simply reflect the logic implemented in the code, which might or might not be what the author intended. Local environment semantics or "features" might produce the correct result or answer, but for the wrong reasons.
Of course, if the migration activity requires the creation of new technology, using unit tests for these implementations would be entirely appropriate.
Performing Regression Testing
Regression testing ensures that an application's functionality has not changed (other than as intended) after modifications or updates have been made to the environment. This is usually thought of as end-to-end testing. Regression tests require significant time and effort, and extra care must be given to ensure that all the application's functionality is verified as being correct.
Typically, when software is tested, regression tests are optimized to examine only those components that have been changed. In the case of a migration, there can be no such optimization because significant changes will have been introduced to the entire environment.
Many IT organizations like to run the old and new systems in parallel as a form of regression testing. The generation of identical results produces a feeling of comfort and confidence that the migration was successfully implemented. While there are great benefits and significant cost to running systems in parallel, care must be taken in choosing the length of the parallel tests. For example, running the systems in parallel for several weeks might not test month-end or year-end processing procedures. All aspects of the business process should be tested in a regression test.
A regression test must not only ensure that the correct results are achieved but also that the operational documentation (run books) is still accurate.
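A parallel-run regression check can be sketched as follows: feed the same batch of input to the old and new systems and diff their outputs record by record. The two `*_system` functions here are stand-ins; a real test would invoke the actual applications and compare their captured result files.

```python
# Sketch of a batch regression check across a parallel run. The two
# system functions are stand-ins for invoking the old and migrated
# applications on the same input batch.

def old_system(records):
    return [round(r * 1.05, 2) for r in records]   # stand-in calculation

def new_system(records):
    return [round(r * 1.05, 2) for r in records]   # stand-in calculation

def regression_diff(inputs):
    """Return (index, old result, new result) for every mismatch."""
    mismatches = []
    for i, (a, b) in enumerate(zip(old_system(inputs), new_system(inputs))):
        if a != b:
            mismatches.append((i, a, b))
    return mismatches

batch = [100.0, 250.5, 19.99]
print(regression_diff(batch))  # [] means the systems agree
```

As the text cautions, agreement on one batch is not sufficient: the input sets must cover all aspects of the business process, including month-end and year-end procedures.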
Performing Integration Testing
Integration testing must be performed on hardware and software. For hardware testing, connecting computers with their peripherals can be a complicated task, involving multiple vendors. The cabling alone can present a significant problem. When the hardware has been installed, you must ensure that it is functioning optimally. Storage, in particular, can be problematic. We recommend that tests be conducted on the individual components (compute platform, storage platform, and network facilities) to verify their operation and limitations. For software testing, the integration of different software components can produce erroneous results. Wherever possible, interaction with third-party products and packages should be verified during the migration effort.
Once you have verified that the migrated system is producing the correct results, test to verify that it is performing optimally in the new environment and verify that the new environment can support the SLAs required by the enterprise. This will involve stress-testing the application by applying differing workloads and measuring how the application responds in terms of throughput, latency, memory utilization, and processor loads. Care must be taken to ensure that these loads are representative of the real-world operational conditions. When possible, the migrated environment should be tested in parallel with the old implementation to ensure the veracity of the new implementation. However, one of the reasons for the migration might have been the anticipation of increased loads. In this scenario, test cases that provide loads greater than existing real-world conditions will have to be developed.
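A minimal load-test harness along these lines is sketched below. The `transaction` function is a placeholder for one unit of application work; a real stress test would drive the migrated application with representative workloads and vary `n_requests` to model increased load.

```python
import statistics
import time

# Sketch of a stress-test harness: apply a fixed number of requests
# and record throughput and latency. transaction() is a placeholder
# for one unit of real application work.

def transaction():
    return sum(range(1000))  # stand-in workload

def run_load(n_requests):
    """Drive n_requests transactions and summarize the measurements."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        transaction()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "throughput_per_sec": n_requests / elapsed,
        "median_latency_sec": statistics.median(latencies),
        "max_latency_sec": max(latencies),
    }

metrics = run_load(1000)
print(sorted(metrics))
```

The same harness run at successively higher request counts can probe whether the new environment sustains the loads anticipated by the SLAs.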
Custom applications will have been ported with little or no knowledge of the application logic. Differences in hardware architecture and supporting software design can result in performance bottlenecks. The application might function correctly, but not efficiently. This can happen for a number of reasons:
Whenever possible, record and compare measurements of performance on the old system in terms of system metrics, as well as application metrics, with similar metrics in the new system.
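Once metrics have been recorded on both systems, comparing them is straightforward. The sketch below reports each shared metric as a percentage change; the metric names and values are illustrative placeholders.

```python
# Sketch: compare recorded metrics from the old and new systems as
# percentage change. Metric names and numbers are illustrative.

def metric_deltas(old, new):
    """Percent change, new vs. old, for metrics present in both sets."""
    return {
        name: round(100.0 * (new[name] - old[name]) / old[name], 1)
        for name in old if name in new
    }

old_metrics = {"tx_per_sec": 120.0, "cpu_pct": 65.0}
new_metrics = {"tx_per_sec": 180.0, "cpu_pct": 52.0}

print(metric_deltas(old_metrics, new_metrics))
# {'tx_per_sec': 50.0, 'cpu_pct': -20.0}
```

A report of this form makes it easy to spot metrics that regressed on the new platform even when the application's results are correct.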
A performance tester requires a different skill set than that of the migration engineer. Performance testers usually have a detailed knowledge of the hardware platform and OS, as well as of tools that can be used to trace, profile, or measure application performance metrics.
In addition to the benefits of running systems in parallel described earlier in this chapter, this activity allows for the easy generation and comparison of performance metrics.