The RUP Testing Philosophy

In a traditional, waterfall approach, up to 80 percent of the test project time can be spent planning the test effort and defining test cases, without actually conducting any testing at all. Then, toward the end of the lifecycle, the remaining 20 percent is typically spent running and debugging tests. Often an additional 20 percent (yes, we are now over budget!) is then required to fix anything in the product that did not pass the tests.

The test philosophy in the RUP takes a different approach and can be summarized as a small set of principles:

  • Iterative development. Testing does not start with just test plans. The tests themselves are developed early and conducted early, and a useful subset of them is accumulated in regression suites, iteration after iteration. This enables early feedback of important information to the rest of the development team, permits the tests themselves to mature as the problem and solution spaces are better understood, and enables required testability mechanisms to be included in the evolving software design. To account for the changing tactical objectives of the tester throughout the iterative lifecycle, we will introduce the concept of mission.

  • Low up-front documentation. Detailed test planning is defined iteration by iteration, based on a governing master test plan, to meet the needs of the team and match the objectives of the iteration. For example, during the Elaboration phase, we focus on architecture, and so the test effort should focus on testing the key architectural elements as they evolve in each iteration. The performance of key end-to-end scenarios should be assessed, typically under load, even though the user interface may be rudimentary. But beyond this semiformal artifact that defines the test plan for each iteration, there is not a lot of up-front specification paperwork developed.

  • Holistic approach. The approach to identifying appropriate tests is not strictly and solely based on deriving tests from requirements. After all, the requirements rarely specify what the system should not do; they do not enumerate all the possible crashes and sources of errors. These have to come from elsewhere. Tests in the RUP are derived from the requirements and from other sources. We will see this later, embodied in the concept of the test-idea list.

  • Automation. Testing starts early in the RUP lifecycle, is repeated again and again, and could be very time-consuming, so many aspects must be supported by tools: tools to design tests, to run tests, and to analyze results.
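As a minimal illustration of the kind of automated regression suite described above, here is a sketch using Python's standard unittest module. The function under test (`parse_quantity`) is a hypothetical stand-in; in a real project the suite would exercise the application's own code and grow iteration after iteration:

```python
import unittest

# Hypothetical system under test; in practice this would be imported
# from the application code base rather than defined here.
def parse_quantity(text):
    """Parse a non-negative integer quantity from user input."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

class RegressionSuite(unittest.TestCase):
    """Tests accumulated across iterations and re-run on every build."""

    def test_valid_quantity_is_parsed(self):
        self.assertEqual(parse_quantity(" 42 "), 42)

    def test_negative_quantity_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("-1")

if __name__ == "__main__":
    # exit=False so the run reports results without terminating the process.
    unittest.main(argv=["regression"], exit=False, verbosity=2)
```

Because the suite is executable, it can be wired into the build so that every iteration's accumulated tests run automatically, giving the team the early, repeated feedback the RUP philosophy calls for.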

Mission

The concept of an evaluation mission, as used in the RUP approach, has been derived from the work of James Bach in identifying different missions commonly adopted by software test teams. Bach advocates using a simple heuristic model for test planning. This model recognizes that different missions govern the activities and deliverables of the testing effort, and that selecting an appropriate mission is a key aspect of test planning. [4]

[4] See Kaner 2002.

The evaluation mission identifies a simple statement the test team can remember in order to stay focused on their overall goal and appropriate deliverables for a given iteration. This is especially important in situations where the team is faced with a number of possibly conflicting missions. A test team without an evaluation mission often describes their goal with statements such as "We test everything" or "We just do testing." They're concerned with simply performing the test activities and overlook how those activities should be adjusted to suit the current project context or iteration context to achieve an appropriate goal.

Mission statements shouldn't be too complex or incorporate too many conflicting goals. The best mission statements are simple, short, succinct, and achievable. Here are some ideas for mission statements you might adopt for a given iteration:

  • Find as many defects as possible.

  • Find important problems fast.

  • Assess perceived quality risks.

  • Advise about perceived project risks.

  • Advise about perceived quality.

  • Certify to a given standard.

  • Assess conformance to a specification (requirements, design, or product claims).

Test Cycles

A test cycle is a period of independent test activity that includes, among other things, the execution and evaluation of tests. Each iteration can contain multiple test cycles; the majority of iterations contain at least one. Each test cycle starts with the assessment of a software build's stability before it's accepted by the test team for more thorough and detailed testing.

The RUP recommends that each build be regarded as potentially requiring a cycle of testing (that is, a test cycle), but there is no strong coupling between build and test cycle. While each build may be "smoke-tested" to verify the build process, a test cycle may span more than one build. Typically, Inception iterations of new projects don't produce builds, but iterations in all other phases do. Although each build is a potential candidate for a cycle of testing, there are various reasons why you might decide not to test every software build. Sometimes it will be appropriate to distribute the test effort for a single software build across multiple test cycles.
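The build-acceptance step that opens a test cycle can be sketched as a simple gate: the build enters the cycle only if it passes a set of stability ("smoke") checks. The check functions and build fields below are hypothetical stand-ins for real probes, such as whether the application installs and starts, or whether a key end-to-end scenario completes:

```python
# Hypothetical smoke checks; real ones would drive the installed build.
def application_starts(build):
    """Did the build install and launch at all?"""
    return build.get("installed", False)

def key_scenario_completes(build):
    """Does one representative end-to-end scenario run through?"""
    return build.get("login_ok", False) and build.get("checkout_ok", False)

SMOKE_CHECKS = [application_starts, key_scenario_completes]

def accept_for_test_cycle(build):
    """Return True only if the build is stable enough to enter a test cycle."""
    return all(check(build) for check in SMOKE_CHECKS)

stable = {"installed": True, "login_ok": True, "checkout_ok": True}
broken = {"installed": True, "login_ok": False, "checkout_ok": True}
print(accept_for_test_cycle(stable))  # True: start detailed testing
print(accept_for_test_cycle(broken))  # False: return the build to development
```

The point of the gate is economic: rejecting an unstable build early costs a few minutes of smoke testing rather than a full test cycle spent diagnosing a build that was never fit for detailed testing.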



The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP