Based on the size and scope of the project, the program manager can determine the types of testing that are applicable and how heavily to test the different areas of the application. The output of this stage is the MTP, which contains the test goals, the success criteria, the scope of the project, and other items discussed in this section.
The MTP is a living document throughout the test life cycle. The content for this document is generated from the functional specification document and the high-level release schedule. As such, the MTP should be subject to the same change control procedures as other documents involved in the migration project.
Appendix 12.3, Master Test Plan Template, is a sample MTP template that includes descriptions of all the areas that must be documented in an MTP.
The MTP defines a set of goals for the test process. These goals are essential for keeping the migration project on track. The types of tests to run on the application depend on the goals of the migration project.
How test goals are defined depends largely on the application under consideration. The application should demonstrate success against three parameters:
Features and functionality. The migrated application on the Microsoft Windows platform should have the same features and functionality that it has on the UNIX platform. This is the basic rule of thumb for any migration project unless there is a change in requirements. For example, if the migrated application must support 200 clients, compared with 20 clients for the UNIX application, that requirement represents a change in scope, and the test plan and scope change accordingly.
Interoperability. The test cases for this parameter demonstrate the coexistence of the Windows and UNIX environments. The application may need to exchange files or access data. Ensuring interoperability reduces the complexity of file and data conversion across platforms. For more information about interoperability, see Chapter 5, Planning the Migration.
Performance. Compared with its performance on the UNIX-based system (for example, Sun Ultra SPARC 5 or HP-UX), the application should run at least as fast, and ideally faster, on the Windows platform; slower overall performance is not acceptable. What constitutes acceptable performance, however, depends on the goals of the migration project. If a goal is to have a higher-performance application on Windows, the testing phase must thoroughly cover performance testing. You can measure performance by using existing benchmarks whenever possible; when no benchmarks are available, you can base the measurement on the subjective analysis of testers and developers (a simple benchmark comparison is sketched below).
Similar parameters can be added to this definition of the test criteria, depending on the type and nature of the application being migrated.
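As a concrete illustration of benchmark-based performance measurement, the following sketch times a workload on the Windows test bed and compares it against a baseline recorded on the UNIX system. It is only a sketch: sample_workload, the baseline figure, and the pass threshold are hypothetical placeholders for your own benchmark harness and the performance goal stated in the MTP.

```python
import time

def sample_workload():
    """Stand-in for the operation under test (for example, a batch job or query)."""
    sum(i * i for i in range(1_000_000))

def run_benchmark(workload, iterations=5):
    """Time the workload several times; return the best (lowest) wall-clock result."""
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return min(timings)

windows_secs = run_benchmark(sample_workload)

# Hypothetical baseline recorded earlier on the UNIX reference system.
unix_baseline_secs = 12.4

speedup = unix_baseline_secs / windows_secs
print(f"Speed-up vs. UNIX baseline: {speedup:.2f}x")

# Pass/fail against the MTP performance goal (here: at least match UNIX).
print("PASS" if speedup >= 1.0 else "FAIL: performance criterion not met")
```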
Based on the preceding three parameters, you can define success criteria as shown in Table 12.1.
| Success Criteria | Pass Definition | Fail Definition |
|---|---|---|
| Features and functionality | The migrated application has all the features and functionality that it has in the UNIX environment. | The migrated application fails to provide similar features and functionality. |
| Interoperability | The migrated application can access existing data or project files with minimal or no conversion, and can save those files back to the UNIX environment with minimal or no conversion. | The migrated application fails to gain access to existing data or project files, or the conversion process is highly complicated. |
| Performance | The migrated application meets the performance numbers defined in the scope of the project for the Windows platform. | The migrated application does not meet the performance goal. |
Defining test goals and success criteria gives the test plan a framework and encourages the personnel involved to agree on the time frame for delivering the project. You can derive the test schedule from the number of test cases in each of the preceding test parameters, as the sketch below illustrates.
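For illustration, the following sketch derives a rough execution estimate from per-parameter test-case counts. The counts, the hours-per-case figure, and the team size are hypothetical placeholders; substitute the numbers from your own test plan.

```python
# Hypothetical test-case counts for each success-criteria parameter.
test_cases = {
    "features_and_functionality": 120,
    "interoperability": 35,
    "performance": 20,
}

HOURS_PER_CASE = 1.5       # assumed average effort to run and log one case
TESTER_HOURS_PER_DAY = 6   # assumed productive hours per tester per day
TESTERS = 3                # assumed team size

total_hours = sum(test_cases.values()) * HOURS_PER_CASE
calendar_days = total_hours / (TESTER_HOURS_PER_DAY * TESTERS)

for parameter, count in test_cases.items():
    print(f"{parameter}: {count} cases, ~{count * HOURS_PER_CASE:.0f} hours")
print(f"Estimated execution time: ~{calendar_days:.1f} working days")
```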
Assumptions related to the platform, infrastructure, and business goals of the migration are also covered in this stage of the test life cycle. For example, if the migration is from Solaris to Windows Advanced Server, the MTP must reference only these two platforms.
The scope of the test phase depends on resources, time, and business objectives. Accordingly, there are different methods that you can use to envision the scope of testing for the migration project and to create a plan for lab strategy and a definition of lab requirements. The MTP should document the scope of the test, based on answers to the following questions:
Do you want to verify that the migrated application installs and starts properly?
Do you want to verify the functionality and performance of the migrated application?
If there are issues or bugs, what is the priority? Do you want to fix bugs during the test phase or just note them and handle them in the next phase?
Is it necessary to test every component and module of the migrated application, or do you have high-priority modules and low-priority modules?
Does the migrated application have all the features that the application has on the UNIX platform?
Has any new feature been added to the migrated application? If so, what is its scope?
Does the application need an interoperability environment?
Does the application need any automated testing?
Does the native UNIX application have existing test scripts, test suites, test cases, and a test plan? These items are critical because they can save a lot of time in testing the migrated application.
Answers to these questions help determine the scope of the project and keep the project on track as you move forward with testing.
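One lightweight way to record the answers is a structured scope record that can be pasted into the MTP and kept under the same change control. The sketch below is illustrative only; the field names and defaults are assumptions, not part of any standard template.

```python
from dataclasses import dataclass, field

@dataclass
class TestScope:
    """Answers to the scoping questions, recorded for the MTP (illustrative fields)."""
    verify_install_and_start: bool = True
    verify_functionality: bool = True
    verify_performance: bool = True
    fix_bugs_in_test_phase: bool = False      # False: log bugs and defer fixes
    high_priority_modules: list[str] = field(default_factory=list)
    interoperability_required: bool = False
    automated_testing_required: bool = False
    reusable_unix_test_assets: bool = False   # existing scripts, suites, plans

scope = TestScope(
    high_priority_modules=["data-access", "report-engine"],  # hypothetical module names
    interoperability_required=True,
    reusable_unix_test_assets=True,
)
print(scope)
```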
The amount of time available for testing the migrated application is critical to the success of the project. From the scope of the project and the test criteria, you can define the tasks and the time needed to complete the test phase. If the UNIX application already has a test plan and test cases, you can reuse them to cover functionality and feature testing and save considerable time. You should also consider the time needed to create the test environment and the test bed for the migrated application. In some cases, it is necessary to create both UNIX and Windows environments for comparison purposes.
The test schedule details the sequence of tasks that the test team will follow to accomplish the testing. It is usually part of the MTP; in addition, large projects may associate the test schedule with the DTP. The test schedule also clearly defines tasks and allocates them to relevant personnel. Project team members identify the different test stages applicable for the migration and select appropriate team members for the tests. Each level of actual testing (unit testing, integration testing, functional testing, and system testing) will require a separate work plan, scope, and entry and exit criteria, which the test team will determine. For more information about the levels of testing, see Stage 3: Design the Test Plan and Test Cases later in this chapter.
The test schedule is reviewed during weekly test meetings and the milestones are tracked. Any deviation in the schedule requires immediate intervention, during which the test lead identifies alternatives to resolve the concerns.
A sample template of the test schedule is included as Appendix 12.1, Test Schedule Template, to help a test team plan and track the key tasks and milestones.
The typical roles that may be required in a migration test team, along with the responsibilities of each role, are described in Table 12.2.
| Role | Responsibilities |
|---|---|
| Test lead | Define test goals and generate MTP. |
| Test engineer | Generate DTC. |
The MTP must also document dependencies. It is essential that all dependencies are identified and listed to ensure proper and timely testing of the application and the architecture. Some of the factors that may affect the testing effort include:
Functional specifications.
Architecture diagrams.
Design documents.
Build dates.
Material and personnel resources.
Changes in functional scope.
A risk is the probability of an event occurring that could jeopardize the system. To evaluate risks, the test team can prepare a matrix that identifies the risks and records the following exposure factors for each one (a minimal scoring sketch follows this list):
Probability of loss. This is the probability that the risk will occur. Three levels are usually adequate: for example, Not Likely (less than 50 percent), Possible (50 percent), and Very Likely (greater than 50 percent).
Size of loss. This is the impact on the project timeline if the event associated with a risk occurs. Again, three levels are usually adequate: Negligible, Jeopardizes Finish Date, and Significant Effect on Finish Date.
Contingency plan. This is a plan for handling the circumstances of the risk. The contingency plan could entail building an extra number of days into the schedule to meet these circumstances, adding staff and other resources, or changing the delivery scope.
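A minimal sketch of such a matrix follows, assuming simple numeric weights for the two three-level scales described above. The weights, the example risks, and the exposure formula (probability weight multiplied by impact weight) are illustrative choices, not part of the MTP template.

```python
# Assumed numeric weights for the three-level scales described above.
PROBABILITY = {"Not Likely": 1, "Possible": 2, "Very Likely": 3}
IMPACT = {
    "Negligible": 1,
    "Jeopardizes Finish Date": 2,
    "Significant Effect on Finish Date": 3,
}

risks = [
    # (risk, probability of loss, size of loss, contingency plan)
    ("Development falls behind schedule", "Possible",
     "Jeopardizes Finish Date", "Begin initial testing in parallel; add resources"),
    ("Testers are unfamiliar with the application", "Very Likely",
     "Negligible", "Factor in extra training days"),
]

# Exposure = probability weight x impact weight; list highest exposures first.
matrix = sorted(
    ((PROBABILITY[p] * IMPACT[s], risk, plan) for risk, p, s, plan in risks),
    reverse=True,
)
for exposure, risk, plan in matrix:
    print(f"exposure={exposure}  {risk} -> {plan}")
```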
Table 12.3 lists some of the common risks encountered in testing projects and possible contingency plans.
| Risk | Contingency Plan |
|---|---|
| Development falls behind schedule. | Determine your ability to begin initial testing in parallel with the last stages of development, and add test and development resources. |
| Testers are unfamiliar with the application. | Factor in additional days to train the testers on the application. |
| Applications are based on emerging technologies, which may result in unexpected delays. | Ensure that the schedule remains flexible. |
| Scope of new requirements is evolving and may increase, which may result in an unexpected increase in overall project scope. | Ensure that the schedule remains flexible. |
For an example of the risks identified when a migrated application is tested, see Appendix 12.2, Risks Identified for Testing.