In recent years, I have had better luck convincing managers of the value of risk analysis than convincing testers. Conducting formal risk analysis on the project not only gives managers a more defensible position because they can demonstrate that they used a best-practice approach, but more importantly, it costs very little and provides a powerful tool that supplements traditional project management tools. It also gets everyone in the effort on the same page with respect to scope and priorities.
A major part of the payback for performing MITs risk analysis is the assurance that testing is focused on the most important items. You will pick the best tests, perform fewer tests, and get a higher return on them than if you didn't perform risk analysis. And you will save the company more money.
You can have applications, systems, components, databases, even Web services (and apples and oranges and peas and spaceships) on your inventory; it doesn't matter, as long as each one is testable. Once any item has been added to the inventory, you can analyze it to whatever level is necessary to identify its most important elements. For example, if the project requirements are on the inventory, they are testable. Under each requirement, there is some kind of program unit, system, module, or component; these are testable as well. (There also may be manual processes, like faxing or data entry.) Under each program unit are paths and data; these will have tests associated with them. What is the rank of these tests? How much will you test them? Which ones will you test first? MITs risk analysis answers these questions.
In a plan-driven effort, these tests can be planned in advance, through analysis of the requirements and the actual functionality. Theoretically at least, you can rank these items based on the requirements and then use path analysis and data analysis to design the tests for them. And you can do so without having seen the software.
One of the great benefits of this MITs system is that it is published. Publishing the risk matrix is a type of assumption publishing. Ranking documents the assumptions of the testers. Everyone has the chance to review and comment. If faulty assumptions are not pointed out, it is not the fault of the tester. And other groups can also use the information.
Sometimes, however, it is hard to live with that publicity. In the real-world shipping example from Chapter 7, we testers were criticized by the developers for applying risk-ranking criteria to their projects, but their vehemence and righteous indignation energized them into providing not only the ranking but the test order as well, leaving the testers with an excellent tool and guide map through a huge project.
One of the classic test waste scenarios happens when project personnel change. The new people don't know the application, the tests, or what does what, so they tend to ignore the existing test collateral and invent their own. Since they don't know anything, the new collateral usually lacks the maturity and depth of the existing material. The company has just paid good money to take a step backward and lose time as well.
The first question that someone new usually asks is "Which tests should I run to find out X?" The variable X can be replaced by virtually anything. If your test collateral can't provide an answer as simple as this, then you have failed to pass on your legacy and the new tester will probably start from scratch.
With MITs, all you need to do to answer the question is filter and re-sort your inventory by the test category that includes X and then by the MITs rank. They don't have to know anything about the application; all they have to do is look at the ranked test inventory in order to know where to start. Adding new categories to the test inventory to answer questions like this is one of my most important jobs. It shows the depth and value of the tests in the inventory, serves as the basis for funded new test development, and gets my tests reused.
This system works so well that other groups can also benefit from the tests. Operations is almost always the first beneficiary of the test inventory after the testers. They can use it to quickly identify just the tests that they need to create system-targeted diagnostics suites. Figure 9.1 shows the spreadsheet inventory from the real-world shipping example. It shows the "Day in the Life of a Car" test cases sorted by test order and priority. Notice that the view also shows the environments touched by each scenario.
Figure 9.1: The spreadsheet inventory showing the "Day in the Life of a Car" test scripts, sorted by test order and priority.
By itself, the inventory is a very nice parts list, useful for keeping track of items. When you add a priority to those items, the inventory becomes a powerful tool for answering all kinds of important questions. If you take one additional step and add a sequencing field like "Test Order," you have created a project management tool that is uniquely applicable to the test effort.
In the real-world shipping project, development and operations both benefited from the inventory. One of the biggest reasons was that careful attention was paid to identifying the environmental requirements of each PDR and major function. So the inventory could be used as a checklist by operations when preparing test systems for various stages of testing. For example, systems managers were planning a day-long meeting to determine the order in which they needed to bring their test systems online. We testers arrived at the meeting with several different printouts, like the one shown in Figure 9.2.
Figure 9.2: The shipping company's inventory showing the test order of the most important tests for the HRIS system.
Figure 9.2 shows the Web-based inventory and the environment catalog for the real-world case study of the shipping company. This listing shows the environmental catalog portion of the inventory, filtered by the HRIS column; notice the small funnel icon below the HRIS column heading, indicating a filter that shows all items that touch the HRIS system. The list was then sorted by priority and finally by test order; notice the small arrow beneath the "Test Order" column heading, indicating the sort. The questions answered by the view in Figure 9.2 are "What PDRs impact the HRIS system, and in what order will I have to prepare for them?" and "What other systems will need to be ready?"
The process of filtering and sorting took less than a minute, and the resulting list shows not only the most important inventory items for this system but also the order in which they will be tested. The managers finished their meeting in less than an hour, and they requested several additional views of the inventory. The inventory became one of the most frequently consulted (and quoted) documents in the integration effort.
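The environment-catalog view the managers used amounts to a two-key sort over a filtered list. The sketch below assumes made-up field names (`hris` flag, `priority`, `test_order`, `other_systems`); the shipping company's actual columns are not shown in the text, so everything here is illustrative.

```python
# Hypothetical slice of an environment catalog. Each PDR records which
# systems it touches (reduced here to a single "hris" flag), plus a
# priority and a test order. All values are invented for illustration.
catalog = [
    {"pdr": "PDR-104", "hris": True,  "priority": 1, "test_order": 3,
     "other_systems": ["Payroll"]},
    {"pdr": "PDR-088", "hris": False, "priority": 1, "test_order": 1,
     "other_systems": ["Billing"]},
    {"pdr": "PDR-120", "hris": True,  "priority": 2, "test_order": 2,
     "other_systems": ["Payroll", "Badging"]},
    {"pdr": "PDR-007", "hris": True,  "priority": 1, "test_order": 1,
     "other_systems": []},
]

# Keep only PDRs that touch HRIS, then sort by priority, then test order,
# mirroring the spreadsheet's funnel-and-sort operation.
hris_view = sorted(
    (row for row in catalog if row["hris"]),
    key=lambda row: (row["priority"], row["test_order"]),
)

for row in hris_view:
    print(row["pdr"], row["priority"], row["test_order"], row["other_systems"])
```

The `other_systems` field answers the second question directly: the union of those lists, walked in sorted order, is the sequence in which the dependent systems must come online.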
The real-world shipping project was plan-driven. But the prioritized inventory is a valuable management tool in a RAD/Agile effort as well. You just don't construct it in quite the same way. In a RAD/Agile effort, since you will probably be handed some software to test with little or no idea what it is or what it will do, you will probably build your inventory as you go.
As soon as you have access to a functioning application, you can identify major functionality, prioritize it, and record it on your inventory. Once you have explored a function (inventory item), you can add the detail layers to your inventory. Sooner or later, features coalesce and become stable. If you have been building your inventory as you go, you will have an up-to-date report of everything that you have tested at all times. So when management wants to know if the product is shippable, you have good answers in hand. See Case Study: The 401(k) Web Project later in this chapter for an example.
A mature test inventory in a RAD/Agile effort also fuels the user guide and instructions to customer service. It is the definitive source for how things really work. In many RAD/Agile development efforts today, my title is actually technical author, not tester.
If you are involved in a RAD/Agile effort, you won't have a lot of time to plan, measure, or estimate, but you do want to be prepared, because you will be expected to be ready to test. If you are involved in a plan-driven project, it will be necessary and expected that you measure, plan, and estimate the size of the test effort in advance so that it can be budgeted and fitted into the time available. This brings us to the fundamental difference between the two development strategies and how they relate to risk.