11.3. Testing Project Deliverables
Without testing project deliverables, it would be impossible to know if your project was on track or not. If the results of project work are not regularly reviewed and tested, where appropriate, you may end up at the completion of your project with deliverables that do not address the required project scope or fail to deliver required functionality (quality or performance). Therefore, part of tracking project progress is testing results of project work.
In some projects such as software development, testing may be a discrete function outside the project plan. For instance, at each delivery point, the project may have a milestone indicating a deliverable is sent to testing. The project may also have a second milestone indicating when results from testing will be reviewed so that action can be taken. The action is usually binary: either proceed or revise. If revision is required, the project's scope and specifications (functional and technical requirements) should be reviewed to determine why there is a difference between what was specified and what came through testing.
In some projects, the testing component is included in the project plan. This is common in more physical-based projects such as upgrading infrastructure or replacing servers, applications, or network components. In these cases, testing may occur in a lab environment to ensure the required configuration meets the project's specifications and works as intended. Once tested and approved in the lab environment, the solution may be implemented in the live setting and additional tests may be conducted to ensure everything is still working as expected. In these cases, testing tasks are built into the project plan and are conducted as project tasks.
Whether testing is part of the project plan or is part of an external project plan, testing is critical to managing and controlling IT project progress. Though it's done from time to time, it would be pretty risky to put everything together and implement the solution without testing results at periodic intervals during project work. Often, pressure to get a product to market or into the hands of users forces IT project managers to compress testing, which is unfortunate. Sound testing is part of quality control. Remember from our earlier discussion on quality, the cost to fix a problem once the project's deliverables are in the users'/customers' hands is 100 times the cost of fixing that problem in the design phase. If a problem is found in the unit testing phase, the cost is only 10 times the cost of finding it in the design phase. If the problem is found in the integration testing phase, the cost is 30 times the cost of finding it in the design phase. As you can see, 10x or 30x is still far less expensive than 100x. If you are the IT project manager and you're being pushed to shorten or even skip the testing phases of your project, you may want to pull out Chapter 7 and explain to your project sponsor or corporate executives the costs (both direct and indirect) of doing so. You might not win the battle, but you might find a solution that addresses both the management team's and your concerns. At the very least, your executive team and project sponsor can make an informed decision about testing once you've laid out the costs and risks for them.
You and your IT project team should have developed a testing plan either as part of your project processes in the organization or planning phases of your project, so we won't repeat that information here. There are numerous types of test plans and testing that can occur in an IT project. In this section, we'll look at the various types of tests and discuss how the results impact this phase of the project lifecycle.
The result of any kind of testing should be a clear understanding of how well your project's deliverables are performing against requirements. Your test and quality plans should clearly define the next steps for various types of results. For instance, results that vary from specifications just a bit may be sent back through the project work cycle to be fixed, or they may be deferred (as may be the case in software development projects). Results that vary significantly from specifications may be considered "show stoppers" and may cause project work to halt until the cause for the variance is determined and corrected. Understanding the results of testing and having plans for managing them will help keep your project on track. If you are unable to assess the results of your various testing plans as they pertain to overall project progress, your project is at risk. In that case, you should step back and reassess your testing plans and assess what each test is designed to show and what the results will tell you about the project.
11.3.1. Unit Testing
Unit testing is done throughout the project work phase. Testing can be done on hardware, software, processes, and procedures throughout the project work cycle. Results of unit testing are your first indication of project progress. If deliverables at this point fail unit testing, this should raise a flag for you. If unit testing progresses smoothly, your project deliverables are off to a good start. A unit test is a good way to test a small portion of the deliverable early in the process. It is typically performed by the person doing the work. If a technician is supposed to configure 18 new servers, he or she should test each server to ensure the configuration is correct and works as expected. The work should not be considered complete until a unit test has been performed. Unit testing can be specifically added to entry/exit criteria or completion criteria. It is reasonable to expect that unit testing has been completed before the task owner designates the work as complete.
For example, suppose that you are having a house built and at the moment the plumber is installing the water lines. You would have (and should have) a real expectation that the plumber will have tested his work to make sure that there are no leaks before saying the job (task) is complete. The same attitude belongs in your IT project.
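In software, the plumber's leak check has a direct analogue: an automated unit test written by the person doing the work. The sketch below is a minimal illustration only; `configure_server` is a hypothetical configuration routine invented for this example, standing in for whatever unit of work your project actually produces.

```python
import unittest

# Hypothetical configuration routine for illustration; a real project
# would test its own server-provisioning or application code instead.
def configure_server(hostname, ip_address):
    """Build a server configuration record, validating its fields first."""
    if not hostname:
        raise ValueError("hostname is required")
    octets = ip_address.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError("invalid IPv4 address: %s" % ip_address)
    return {"hostname": hostname, "ip": ip_address, "status": "configured"}

class TestConfigureServer(unittest.TestCase):
    """Unit tests the task owner runs before marking the work complete."""

    def test_valid_configuration(self):
        config = configure_server("web01", "192.168.1.10")
        self.assertEqual(config["status"], "configured")

    def test_invalid_address_is_rejected(self):
        with self.assertRaises(ValueError):
            configure_server("web01", "192.168.1.999")
```

Run with `python -m unittest` as part of the task's completion criteria; a passing suite is the evidence that the "leak check" was actually performed.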
11.3.2. Integration Testing
Integration testing is the next logical step after unit testing. Integration takes your unit deliverables and begins putting them together and testing how well these components interact. It could be a situation where you have tested several disk drives and are now testing them in a RAID system or as a cluster. It could be where you take several segments of code related to the user interface and test them together to see how they work. These tests are usually referred to as functional or black box tests, meaning that the tester is not going through component by component or line by line looking at the internal functionality, but is looking at inputs and outputs only. In a hardware project (or physical project), the integration testing may overlap with implementation and deployment phases.
11.3.3. Usability Testing
Usability testing is done in a controlled environment to ensure that the project results can be used in the real world. These tests may involve bringing select users into a lab or test environment to use the project's results in order to see how the user will interact with the product. The results of this type of testing can help an IT project team refine the project's deliverables and can also be used as input to performance test plans. Usability testing is important to ensure the product is usable, but it should not be utilized in lieu of sound functional and technical specifications. Fixing usability issues once project work is underway is 10 to 30 times more expensive than creating excellent specifications early in the project cycle. That said, if there was an error or omission made in the project definition phase, it is easier to fix it after reviewing usability test results than once the product is in the user's hands.
11.3.4. Acceptance Testing
Acceptance testing is usually the final phase of most quality assurance processes. Acceptance testing may be performed in-house by testers or project team members to verify that the required features and functionality work as specified. In other cases, acceptance testing is done by a group of users to verify that the features and functionality work as they expect them to. Acceptance testing should be well defined going into the project because users have a habit of forgetting what they specified at some point in the past. IT projects can run from several weeks to several years and users may forget, change their minds, or leave altogether. Therefore, it is critical that acceptance procedures be clearly delineated and agreed to by the user/customer before the project commences. In some cases, you will need to modify the project acceptance testing plan or criteria (based on required changes to the project over time). In all cases, though, the acceptance criteria and test plans should be clearly delineated and agreed to by the user/customer to avoid end-of-project disagreements about what is and is not acceptable.
The result of this type of testing is the assurance (either by internal staff or by external customers) that the project results are suitable for use in the real world and that they perform as expected. Sometimes acceptance testing is combined with beta testing. Acceptance testing is often tied to payment when the project is for a client.
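One way to keep acceptance criteria from drifting over a long project is to encode each agreed criterion as an automated check. The sketch below is illustrative only: the criteria IDs (AC-1, AC-2), the 2,000 ms response limit, and the required field names are all hypothetical stand-ins for whatever you and the customer actually agreed to before work began.

```python
# Hypothetical acceptance criteria, agreed with the customer up front.

def check_response_time(measured_ms, limit_ms=2000):
    """Criterion AC-1: queries return within the agreed time limit."""
    return measured_ms <= limit_ms

def check_required_fields(record, required=("id", "name", "status")):
    """Criterion AC-2: every output record carries the agreed fields."""
    return all(field in record for field in required)

def run_acceptance_suite(measured_ms, sample_record):
    """Evaluate all criteria; the deliverable passes only if every one does."""
    results = {
        "AC-1 response time": check_response_time(measured_ms),
        "AC-2 required fields": check_required_fields(sample_record),
    }
    return all(results.values()), results
```

Because the criteria live in the test suite, a user who "forgets" what was specified can be shown the agreed checks rather than arguing from memory at the end of the project.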
11.3.5. Beta Testing
Beta testing is a limited release of the project's deliverables to a selected group of users. Often the beta test group includes knowledgeable or expert testers who will use the project's product as users would. In the case of hardware projects, beta testing may come after a limited rollout of the project's results before implementing across the enterprise. Beta testing may result in project rework, but it may also yield notes for future releases. Most beta testing plans provide some definition or description of different levels of problems or bugs found during the beta testing process, as described earlier. Bugs or defects classified as show stoppers can impact your final delivery date and should be investigated thoroughly. Those bugs or defects classified with lower priorities will generally be put back in the work pipeline for repair during the normal project cycle. Some bugs or defects may be deferred to later project phases or later releases.
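The triage logic described above can be sketched as a simple classification. The severity scale here (1 = show stopper, 2–3 = fix in the current cycle, anything lower-priority deferred) is a hypothetical scheme for illustration; real beta plans define their own levels and decision rules.

```python
from collections import namedtuple

# Hypothetical defect record; severity 1 = show stopper in this sketch.
Defect = namedtuple("Defect", ["defect_id", "severity"])

def triage(defects):
    """Sort beta-test defects into buckets that drive different actions."""
    buckets = {"show_stopper": [], "fix_in_cycle": [], "defer": []}
    for d in defects:
        if d.severity == 1:
            buckets["show_stopper"].append(d.defect_id)  # investigate; may slip the date
        elif d.severity in (2, 3):
            buckets["fix_in_cycle"].append(d.defect_id)  # normal work pipeline
        else:
            buckets["defer"].append(d.defect_id)         # later phase or release
    return buckets
```

The value of the buckets is that each one maps to a pre-agreed action, so nobody debates what a "severity 1" means while the release date is at risk.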
11.3.6. Regression Testing
Once errors and defects are found, rework should address the problems. At this point, the revised project work is again tested to ensure that the changes made to fix the defect did not introduce new problems. If you recall from our discussion of project risk, making a change to the project in order to address a risk often introduces secondary risk. The same is true in the project work itself. Any revision to address a problem or defect has the possibility of introducing new problems or defects into the project. Regression testing is used to verify that the identified problems have been fixed and that no new problems have been introduced.
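In practice, regression testing means rerunning the existing tests alongside a new test for the reported defect. The sketch below uses a hypothetical discount routine: the new assertion covers the fixed defect, while the pre-existing assertions guard against the fix quietly breaking behavior that already worked.

```python
# Hypothetical discount routine for illustration: suppose the original
# version rounded incorrectly for some inputs and has just been fixed.
def apply_discount(price_cents, percent):
    """Return the discounted price in cents, rounding half up."""
    discounted = price_cents * (100 - percent)
    return (discounted + 50) // 100  # integer arithmetic avoids float drift

def regression_suite():
    """Old tests plus the new test for the reported defect, run together."""
    assert apply_discount(1000, 10) == 900   # pre-existing test
    assert apply_discount(0, 50) == 0        # pre-existing test
    assert apply_discount(999, 15) == 849    # new test covering the fixed defect
    return True
```

If the fix had introduced a secondary defect, one of the older assertions would fail, which is precisely the signal regression testing exists to provide.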
11.3.7. Performance Testing
Performance testing can take many different forms. For instance, you can perform stress or load testing to see how a component performs in actual use. This is often done on hardware components to ensure that a disk drive or server can handle the load that will be placed on it once in production. This can also be performed on software components such as stress or load testing a database application to ensure that it will perform fast enough when 100 users are requesting data from the database. The four main types of performance tests are stress, load, stability, and reliability.
11.3.7.1. Stress and Load Testing
Stress testing attempts to replicate the stress placed on the system (hardware or software) once it's in a production environment. Load testing is a form of stress testing that places various loads on the system (read requests on the disk drive, login requests on the server, calculation requests on an application, for instance) to ensure that it can perform under pressure. Many systems look great on paper and look fine in development, but come to a screeching halt when placed under a realistic load. If a database application takes 10 minutes to return a simple query, users are not going to be happy and stress and load testing can begin to help you identify problems before you put the project's deliverables in the hands of users.
Stress and load testing can sometimes be accomplished as unit tests during the work phase of the project. Other times, stress and load tests require a more integrated approach and are performed as project work nears completion.
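A basic load test fires many concurrent requests at the component under test and checks that they all complete, without errors, inside a time budget. In this sketch, `handle_request` is a hypothetical stand-in for the real service, and the worker and budget numbers are illustrative defaults, not recommendations.

```python
import threading
import time

# Hypothetical request handler standing in for the real system under test.
def handle_request(payload):
    """Simulate a small amount of per-request work."""
    return sum(ord(c) for c in payload) % 251

def load_test(workers=20, requests_per_worker=50, max_seconds=5.0):
    """Fire concurrent requests; pass only if all finish cleanly in budget."""
    errors = []

    def worker():
        try:
            for i in range(requests_per_worker):
                handle_request("request-%d" % i)
        except Exception as exc:  # any failure under load is a finding
            errors.append(exc)

    start = time.perf_counter()
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return {"errors": len(errors), "elapsed": elapsed,
            "passed": not errors and elapsed <= max_seconds}
```

A system that "looks great on paper" will show up here first: raise `workers` until the elapsed time or error count crosses the budget and you have a measured, not estimated, capacity limit.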
11.3.7.2. Stability and Reliability Testing
Stability and reliability tests are generally run on fully integrated systems to ensure that they don't crash or halt intermittently over time. These types of tests are stress and load tests performed over a longer period of time, typically days or weeks, and sometimes months. For instance, problems with database applications may only show up after a period of time. Memory leaks in an application or problems with particular locations in physical memory may only show up after an extended period of time. Problems with heat can impact hardware components over a period of time and those problems might not show up if a stress or load test is run for just a few hours. This type of testing can also include a "test to fail" process where the system is intentionally placed under load until it fails to determine the actual stress or load limits. It also helps to determine how to recover from the damage caused when the failure occurs and steps that can be taken to make the crash a bit more graceful.
Stability and reliability testing are sometimes performed in parallel or in conjunction with beta testing. These types of tests are performed on the integrated results of the project and are therefore performed during the latter stages of the project work cycle.
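Memory leaks of the kind described above can be caught by running an operation repeatedly and measuring net memory growth. The sketch below deliberately builds a leaky operation (a cache that is never evicted) to show the pattern; in a real stability test you would run the project's own code for hours or days, not 200 iterations.

```python
import tracemalloc

# Hypothetical cache with a deliberate leak for illustration: entries are
# appended on every call and never evicted.
_cache = []

def leaky_operation(payload):
    _cache.append(payload * 100)
    return len(_cache)

def memory_growth(operation, iterations=200):
    """Run the operation repeatedly; report net memory growth in bytes."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        operation("x")
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before
```

A stable component should show roughly flat memory over time; steady growth like this one's is exactly the intermittent-failure seed that only long-running tests expose.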
11.3.8. Benchmark Testing
Benchmark testing, or benchmarking, is the process of testing hardware or software against specific performance standards. This testing can be performed during unit and integration testing. If you are installing new network cable, you might unit test the cable to ensure it can carry the volume of traffic its specifications indicate (to verify the specification and the cable are correct) and you might also test it in the test or production environment to again verify it is performing according to specifications. In the realm of software development, you might unit test to make sure that a procedure call actually works as specified and you might again test in an integrated test environment to ensure the application runs quickly and delivers the results as expected. A benchmark can be used to define the minimum standard of performance and can be included in entry/exit criteria, completion criteria, and quality metrics.
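In code, a benchmark is just a timed run compared against the agreed standard. This sketch times repeated calls to a hypothetical `lookup` routine; the repetition count and one-second budget are placeholders for whatever minimum standard your entry/exit criteria actually specify.

```python
import time

# Hypothetical routine under benchmark; a real test would call the
# project's own code and compare against the agreed performance standard.
def lookup(table, key):
    return table.get(key)

def benchmark(func, args, repetitions=10000, budget_seconds=1.0):
    """Time repeated calls; report whether they meet the benchmark."""
    start = time.perf_counter()
    for _ in range(repetitions):
        func(*args)
    elapsed = time.perf_counter() - start
    return {"elapsed": elapsed, "passed": elapsed <= budget_seconds}
```

Because the budget is explicit, the benchmark result is a yes/no fact that can sit directly in completion criteria, rather than a subjective "it feels fast enough."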
11.3.9. Security Testing
In today's environment, security testing should be part of just about every IT project. Security testing should cover two key areas: that the project deliverables meet security requirements and that if attacked, the system(s) can handle it gracefully. If a security problem does crop up, you want your hardware and software to deal with it intelligently. For instance, some database errors can give hackers additional information needed to continue penetrating the system. By forcing errors, hackers learn more about the design of the database and learn better how to infiltrate the system. These are the kinds of problems that can be discovered through security testing so that if a hacker does force an error, the error does not provide additional information for hackers. That's an example of a system handling an attack gracefully rather than laying out the welcome mat for intruders.
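The database-error example above can be made concrete: a security test forces an error and then checks that the caller-facing response leaks no internal detail. Everything here is a hypothetical sketch; the table and column names exist only to simulate the kind of schema information an unhandled error might expose.

```python
import logging

logger = logging.getLogger("app")

def run_query_hardened(execute, sql):
    """Run a query; on failure, keep internals server-side."""
    try:
        return execute(sql)
    except Exception as exc:
        logger.error("query failed: %s", exc)  # full detail stays in the server log
        return {"error": "request could not be completed"}  # generic to the caller

def security_error_test():
    """Force an error and verify the response leaks no schema details."""
    def failing_execute(sql):
        # Simulated internal error containing schema information.
        raise RuntimeError("table 'users' has no column 'passwd'")

    response = run_query_hardened(failing_execute, "SELECT passwd FROM users")
    leaked = any(word in str(response) for word in ("users", "passwd", "column"))
    return not leaked
```

This is the "graceful" behavior in testable form: the forced error is logged for your team but tells the attacker nothing about the database's design.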
Many third-party companies specialize in testing security for companies and this might be something worth considering, especially if security is paramount to your particular IT project. Often internal "group think" sets in, which means everyone approaches the project from the same perspective. When this happens, it can be difficult (or impossible) to identify potential security problems. Sourcing this to an external firm can help overcome a sometimes myopic approach to security testing.