The Test Discipline in the RUP Product

We now turn to the RUP and examine how it describes testing, which is mostly confined to the Test Discipline. We will review the roles involved, the artifacts produced, and the activities and their workflow.

Various Roles Related to Test in the RUP

The RUP provides four roles that focus on test-related activities:

  • Test Manager

  • Test Analyst

  • Test Designer

  • Tester

These roles represent a natural and complete grouping of skills and responsibilities. As a tester in a small organization, you may be called to play all four roles. In larger organizations, some specialization may take place to allocate the roles to different individuals.

It's important to recognize that the roles in the RUP represent a related group of responsibilities (represented by RUP activities) that are partitioned around a set of skills required to fulfill those responsibilities. This partitioning allows many different choices about how a project team might assign the roles to individuals. These roles are based around four sets of fundamental skills: management, analysis, development, and test.

Key Test Artifacts

  • Test evaluation summary. Since the purpose of testing is to provide an objective assessment of a build, the most important artifact is the test evaluation summary, more important than the test plan. One such evaluation summary is ideally created for each test cycle, or for each build to be tested, but at least one per iteration. This artifact presents an objective assessment of the build, from the perspective of the agreed mission. It refers indirectly to defects and other anomalies (often stored in a change request database), emphasizing the critical ones. It also assesses the results of activities targeted at addressing risks that were driving the iteration, and it presents an assessment of the test effort itself in terms of the extent of testing (often described as "coverage") against plan and the relative effectiveness of the test effort. This is your most important artifact in communicating with the rest of the team.

  • Test Plan. Matching the two-level Project Plan and Iteration Plan described in Chapter 12, in larger projects we find a Master Test Plan governing the whole project and then a Test Plan specifying the mission and the specific test objectives for each iteration.

    The Master Test Plan may be constrained by a Quality Assurance Plan (if there is one) found in the project's Software Development Plan. And similarly, the individual Test Plans for each iteration are constrained by the Master Test Plan.

  • Test-idea list. A test-idea list is an informal enumeration of things that might warrant a test (and, again, not necessarily related to the requirements). The list is often created by brainstorming with other team members, such as a tester collaborating with a developer or analyst. It is then used to direct the test effort, sorted by a number of factors: the time and effort a test would take, its importance to the customer, the likelihood of finding problems, and so on. An organization may keep one or more catalogs of abstract test ideas to enable reuse from iteration to iteration, project to project. The RUP product contains some examples of test-idea catalogs and guidance on creating and maintaining them.

  • Test suite. A test suite is a group of related tests which, when executed together, give a good assessment of a general area of concern. A test suite is the realization of one or more test ideas or test cases and consists of test scripts and test data.

  • Test scripts. These are the procedural aspects of the test: the step-by-step instructions to execute the test. Test scripts may be executed manually or, to varying degrees, automatically.

    Test scripts and test suites are where automation kicks in, allowing testers to manage and rerun large numbers of tests, and to automate the production of a large part of the test evaluation summary.

  • Test cases. Test cases tie testing activity to the software under test so that you can look at progress over time. Each answer to the question, "What are you going to test?" is a test motivator, and each answer to "And how are you going to test it?" is a test case.

    Test cases are more abstract than the scripts. In addition, they define preconditions, processing conditions, and postconditions: They represent a high-level specification for one or more tests. They are likely derived from test motivators such as the requirements, and in the RUP, many are derived from use cases. [5] Use cases are one of the primary sources of test cases, but not the sole source, and they must be complemented by other test ideas.

    [5] See Heumann 2001.

    In many projects there is a clear danger of spending an inordinate amount of time formalizing test-case specifications, using up valuable time that could otherwise be spent on real, concrete testing. However, test-case specifications add value and are necessary in certain types of development, specifically for mission- or safety-critical systems, for particularly complex tests, and for tests that require careful consideration of multiple resources (both in terms of hardware and in terms of people), such as system performance testing. Note that not every test script needs to be related to a test case.

  • Defect. Unfortunately (depending on your point of view), testing uncovers defects. A defect is a kind of change request, and may lead to a fix in some subsequent build or iteration. Defects are a useful source of metrics that the project manager will be able to use to understand not only the quality of the product over time, but also the quality of the process, and the quality and efficiency of the test process itself. You should be careful, though, to clearly differentiate a defect in the application from a defect in the test itself.

  • Workload model. To support performance testing, a workload model may be developed, describing typical and exceptional load conditions that the system must support.
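
Several of these artifacts lend themselves to lightweight tooling. For example, the sorting of a test-idea list by factors such as effort, customer importance, and likelihood of finding problems (described above) could be sketched as follows. This is a hypothetical illustration in Python; the field names and the weighting formula are assumptions for the example, not part of the RUP:

```python
# Hypothetical sketch: prioritizing a test-idea list.
# Field names and the scoring formula are illustrative only.

def priority(idea):
    # Higher customer importance and likelihood of finding problems
    # raise priority; higher cost (time and effort) lowers it.
    return idea["importance"] * idea["likelihood"] / idea["effort"]

ideas = [
    {"name": "empty input",      "importance": 3, "likelihood": 4, "effort": 1},
    {"name": "concurrent saves", "importance": 5, "likelihood": 3, "effort": 5},
    {"name": "max field length", "importance": 2, "likelihood": 2, "effort": 1},
]

# Sort so that the most promising ideas are tried first.
ideas.sort(key=priority, reverse=True)
print([i["name"] for i in ideas])
# → ['empty input', 'max field length', 'concurrent saves']
```

In practice the catalog would be reviewed and re-sorted each iteration as effort estimates and risk assessments change.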

You will find in the RUP some other artifacts related to testing. A test interface specification contains additional requirements to be placed on the software to enable it to be tested efficiently; this concern is often called testability. The test automation architecture shows the design of the test harness and the various testing tools and mechanisms that will be used to automate the testing of the software product being developed. This test automation architecture may need to handle various architectural mechanisms such as concurrency (running two or more automated tests concurrently), distribution (distributing automated tests to run on remote nodes), maintainability (ease of maintenance of the automated tests as they evolve), and so on.
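
To make the concurrency mechanism concrete, a minimal sketch of a harness running automated test scripts in parallel and rolling the outcomes up into a summary might look like the following. This is an illustration under stated assumptions: the script functions and their names are stand-ins invented for the example, not part of any RUP tool:

```python
# Illustrative sketch: concurrent execution of automated test scripts,
# with results rolled up for a test evaluation summary.
# The scripts below are hypothetical stand-ins that merely return pass/fail.
from concurrent.futures import ThreadPoolExecutor

def check_login():
    return True          # simulated pass

def check_logout():
    return True          # simulated pass

def check_timeout():
    return False         # simulated failure; would be logged as a defect

scripts = {"login": check_login, "logout": check_logout, "timeout": check_timeout}

# Run the scripts concurrently on a small worker pool.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = {name: pool.submit(fn) for name, fn in scripts.items()}

# Summarize outcomes, as input to the test evaluation summary.
passed = sorted(name for name, fut in results.items() if fut.result())
failed = sorted(name for name, fut in results.items() if not fut.result())
print(f"passed={passed} failed={failed}")
# → passed=['login', 'logout'] failed=['timeout']
```

A real automation architecture would add the distribution and maintainability mechanisms the text mentions, for example by dispatching scripts to remote nodes rather than local threads.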



The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP