A Generalized Traceability Model

   

As we discussed in Chapter 15, different projects drive different types of requirements artifacts and different ways of organizing them. These decisions, in turn, drive differing needs for the number and nature of the artifacts to be traced. However, building on what you've learned, you can see that the static structure, or model, of your traceability strategy is common to most projects in that it follows a hierarchy from higher-level needs and features through more detailed requirements, and then on into implementation and testing. Therefore, no matter what type of project your team is executing, your traceability model will likely look similar to Figure 27-3.

Figure 27-3. Generalized traceability hierarchy


The model shows that we are tracing requirements both within a domain, as in the system definition (or requirements) domain, and from there into the implementation and test domains. Although there are many additional "things" you could trace (requirements to glossary items and so on), experience has shown that these basic types of traces usually cover most needs. Of course, your conditions may differ, and it may be necessary to add to or subtract from the core traceability list.

In the following sections, we show some examples of each of these major trace techniques and point out why you would want to do such tracing.

Tracing Requirements in the System Definition Domain

Let's look first at tracing requirements within the system, or product, definition domain. We'll call this requirement-to-requirement traceability because we'll be relating one type of requirement (for example, a feature) to another (for example, a use case).

Tracing User Needs to Features

By now, it's clear that the whole point of developing a system is to satisfy user and other stakeholder needs. Otherwise, you will likely have unhappy customers who tend to be irritable and are not inclined to pay their bills, or worse, you may have no customers at all.

As we described in Team Skill 2, the time spent understanding user needs is some of the most valuable time you can spend on the project. Defining the features of a system that meets those needs is the next step in the process, and it can be helpful to continually relate how the user needs are addressed by the features of your proposed solution. We can do so via a simple table, or traceability matrix, similar to the one shown in Table 27-1.

Table 27-1. Traceability Matrix: User Needs versus Features

                Feature 1   Feature 2   . . .   Feature n
Need 1              X
Need 2                          X                   X
Need . . .                      X         X
Need m                                              X
In Table 27-1, we've listed all the user needs we identified down the left column. In the row across the top, we've listed all the application features we defined to satisfy the stated needs. Where did we get those features? We developed those features in the context of the Vision document described in Chapter 16. Team Skill 2 addresses the techniques your team can use to derive the user needs and features of the system.

Once the rows (needs) and columns (features defined to address those needs) are defined, we simply put an X in the appropriate cell(s) to record the fact that a specific feature has been defined for the sole purpose of supporting one or more user needs. Note that this is typically a one-to-many mapping: the needs are fewer in number, and specified at a higher level of abstraction, than the features defined to implement them.

After you've recorded all known need-feature relationships, examining the traceability matrix for potential indications of error can be an instructive activity; a minimal scripted version of these checks is sketched after the list below.

  1. If inspection of a row fails to detect any Xs, a possibility exists that no feature is yet defined to respond to a user need. This may be acceptable if, for example, the need is not fulfilled by software ("The case shall be of nonbreakable plastic"). Nevertheless, these potential red flags should be checked carefully. Modern requirements management tools have a facility to automate this type of inspection.

  2. If inspection of a column fails to detect any Xs, a possibility exists that a feature has been included for which there is no defined product need. This may indicate a gratuitous feature, a misunderstanding of the role of the feature, or a dead feature (one that is still in the system but whose reason to exist has disappeared or at least is no longer clear). Again, modern requirements management tools should facilitate this type of review. In any case, you are not dealing with a great deal of data at this level.
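Even without a requirements tool, both inspections are easy to script. The following is a minimal Python sketch, with hypothetical need and feature identifiers, that represents the Table 27-1 relationships as a mapping from each need to the features that address it:

```python
# A minimal sketch of the two inspections on Table 27-1; the need and feature
# identifiers are hypothetical.
traces = {
    "Need 1": {"Feature 1"},
    "Need 2": {"Feature 2", "Feature n"},
    "Need 3": set(),                      # a need with no supporting feature yet
    "Need m": {"Feature n"},
}
all_features = {"Feature 1", "Feature 2", "Feature 3", "Feature n"}

# Check 1: a row with no X -- a need not yet covered by any feature.
uncovered_needs = [need for need, features in traces.items() if not features]

# Check 2: a column with no X -- a feature with no originating need.
traced_features = set().union(*traces.values())
unmotivated_features = sorted(all_features - traced_features)

print("Needs with no supporting feature:", uncovered_needs)
print("Features with no originating need:", unmotivated_features)

# Impact assessment: which features to revisit if a given need changes.
print("Features affected by a change to Need 2:", sorted(traces["Need 2"]))
```

The same dictionary answers the impact question discussed next: the features traced to a need are exactly the ones to reconsider when that need changes.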

In addition, because of the dependency inherent in the traceability relationship, we can see what specific features would need to be reconsidered if a user need should change during the implementation period. Hopefully, this impact assessment process will be supported by the automated change detection capabilities of your requirements tool.

Once you've mapped the need-feature relationships and have determined that the needs and features are correctly accounted for and understood, it's time to consider the next level of the hierarchy: the relationships between the features and the use cases.

Tracing Features to Use Cases

It is equally important to ensure that the features can be related to the use cases proposed for the system. After all, the use cases illuminate the proposed implementation of the system from a user's perspective, and our job is to ensure that we have a fully responsive design.


As before, you don't need much in the way of special tooling to perform this essential step. Again, we can consider a simple matrix similar to the one shown in Table 27-2.

In Table 27-2, we've listed all the features down the left column. In the row across the top, we've listed the use cases we derived to satisfy the stated features. Team Skill 2 addresses the techniques your team can use to derive the features and use cases of the system. As in the previous section, mapping the features and use cases into a matrix as shown in Table 27-2 should be a straightforward process.

Table 27-2. Traceability Matrix: Features versus Use Cases

                Use Case 1   Use Case 2   . . .   Use Case k
Feature 1           X                                  X
Feature 2                        X                     X
Feature . . .                               X
Feature m                        X                     X

Once the rows (features) and columns (use cases) are defined, we indicate a traceability relationship with an X in the cell(s) that represents a use case that supports one or more features. Note that this is likely to be a set of many-to-many relationships because, although both features and use cases describe system behaviors, they do so by different means and at different levels of detail. A single feature may be supported or implemented by multiple use cases. In addition, it is not unusual for a single use case to implement more than one feature.
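Because the relationship is many-to-many, a flat list of individual trace links, indexed in both directions, is often a more convenient representation than a single nested mapping. Here is a minimal Python sketch; the names are hypothetical apart from the two HOLIS use cases that appear later in this chapter:

```python
# A minimal sketch of many-to-many feature/use-case trace links; the feature
# and use-case names are hypothetical.
from collections import defaultdict

links = [
    ("Feature 1", "Control Light"),
    ("Feature 1", "Run Vacation Profile"),   # one feature, several use cases
    ("Feature 2", "Control Light"),          # one use case, several features
]

by_feature, by_use_case = defaultdict(set), defaultdict(set)
for feature, use_case in links:
    by_feature[feature].add(use_case)
    by_use_case[use_case].add(feature)

all_features = {"Feature 1", "Feature 2", "Feature 3"}
all_use_cases = {"Control Light", "Run Vacation Profile", "Set Clock"}

# Row inspection: features not yet supported by any use case.
print("Features with no use case:", sorted(all_features - set(by_feature)))
# Column inspection: use cases no feature calls for.
print("Use cases with no motivating feature:", sorted(all_use_cases - set(by_use_case)))
```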

After you've established all known feature-use-case relationships, you should once again examine the traceability matrix for potential indications of error.

  1. If inspection of a row fails to detect any Xs, a possibility exists that no use case is yet defined to respond to a feature. As before, these potential red flags should be checked carefully.

  2. If inspection of a column fails to detect any Xs, a possibility exists that a use case has been included for which there is no known feature that requires it. This may indicate a gratuitous use case, a misunderstanding of the role of the use case, a use case that exists solely to support other use cases, or a dead or obsolete use case.

In any case, reviewing and analyzing the data will improve your understanding of the implementation, help you find the obvious errors, and increase the level of certainty in the design and implementation.

Once you've mapped the feature-use-case relationships and have determined that the features and use cases are correctly accounted for, you need to apply similar thinking to the nonfunctional requirements and their specification.

Tracing Features to Supplementary Requirements

While the use cases carry the majority of the functional behavior, keep in mind that the supplementary requirements also hold valuable system behavioral requirements. As we discussed in Chapter 22, these often include the nonfunctional requirements of the system, such as usability, reliability, supportability, and so on. Regardless of the number of supplementary requirements, their criticality can be as great as or greater than that of the use cases themselves (for example, "Results must be within an accuracy of ±1 percent"). In addition, certain functional requirements (for example, those that are algorithmic or scientific in nature, such as a language-parsing program) will likely not be expressed in use-case form.

Often we look to the Vision document for these special features, or additional high-level requirements, and trace them from there into the supplementary requirements that capture this important information. This can be captured in a matrix similar to Table 27-3. In other cases, these requirements may originate within the supplementary specification itself, and they would not have a further trace from their origin.

Table 27-3. Traceability Matrix: Features versus Supplementary Requirements

                                      Supp. Req. 1   Supp. Req. 2   . . .   Supp. Req. p
Feature or System Requirement 1            X                                     X
Feature or System Requirement 2                            X                     X
Feature or System Requirement . . .                        X            X
Feature or System Requirement j                                                  X

Tracing Requirements to Implementation

Having described the type of requirements tracing typical in the system definition domain (requirement-to-requirement traceability), we are now prepared to move from the requirements domain into the implementation and test domains. While the principles are the same (we use the dependency traceability relationship to navigate this chasm), the information content on the other side is remarkably different. Let's look first at crossing the chasm from requirements to implementation.

In Chapter 25, we discussed at some length the problem of transitioning from requirements to code, which we called the problem of orthogonality. The context for the discussion was developing an understanding of how the team makes this difficult transition and what role the requirements play in helping to do so. We also noted that tracing from requirements to implementation, and specifically from requirements to code, is extremely difficult, if not entirely impractical, and in general we do not recommend it. For this reason, we suggested that mapping from use case to use-case realization, and from requirement to collaboration, in the design model was perhaps the only pragmatic approach.

Tracing Use Cases to Use-Case Realizations

As we described in Chapter 26, in making this transition we relate one form of artifact, the use case, to another artifact, the use-case realization, in the design model. In so doing, we use these two specific elements to bridge the gap between requirements and design (Figure 27-4).

Figure 27-4. Tracing from use case to use-case realization


In this special case, the traceability problem is simplified immensely because there is a one-to-one name correspondence between a use case and its realization. Therein we meet both traceability requirements: the relationship between the entities is expressed directly by their shared name, and the reason for the existence of the subordinate, or traced, entity, the use-case realization, is implicit in its very nature. That is, the use-case realization exists for only one purpose: to implement the use case of the same name. Therefore, there is no matrix to analyze, since the design practice we employed yielded inherent traceability by default!
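If you want to confirm that the naming convention actually holds in your design model, the check reduces to comparing two name sets. A minimal sketch, assuming you can export or list the names from your tools (the name sets shown are hypothetical):

```python
# A minimal sketch; assumes the use case and use-case realization names can be
# exported from your requirements and design tools. The name sets are hypothetical.
use_cases = {"Control Light", "Run Vacation Profile", "Set Clock"}
realizations = {"Control Light", "Run Vacation Profile"}

# Because the trace is carried by the shared name, a mismatch in either
# direction is the only thing worth flagging.
print("Use cases without a realization:", sorted(use_cases - realizations))
print("Realizations without a use case:", sorted(realizations - use_cases))
```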

Tracing from the Use-Case Realization into Implementation

However, for those who require a higher degree of assurance, or when traceability to code is mandated, it may not be adequate to stop at the design construct of the use-case realization. In this case, the traceability relationship must be followed from the use-case realization to its component parts: the classes (code) that implement the collaboration (Figure 27-5).

Figure 27-5. Tracing from use-case realization to implementation


How you accomplish this mechanically depends on the types of tools you employ in your requirements, analysis, and design efforts. Without adequate tooling, the problem quickly becomes intractable, since you will likely be dealing with dozens of use cases and hundreds of classes. Therefore, your choice of tooling will have a major impact on the practicality and efficacy of this level of traceability.

Tracing Supplementary Requirements into Implementation

Of course, not everything is a use case, and for those who must drive to uncompromised degrees of quality and safety, it may be necessary to trace from supplementary requirements into implementation as well. How does one do this? We use a technique similar to the one we described in Chapter 25. In this case, we trace individual requirements or groups of requirements (such as the HOLIS software clock requirements) to a collaboration in the implementation. After all, the use-case realization wasn't really so special; it was just a type of collaboration all along. In this case, we'll have to name the collaboration and keep track of the links by some special means because they don't come prenamed and conveniently collected in a use-case package. However, the principle is the same, as Figure 27-6 illustrates.

Figure 27-6. Tracing from supplementary requirements into implementation


From there, you can trace to the specific code contained in the classes that realize the collaboration. Again, the mechanics of this are determined by the types of tooling you choose to employ.
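If your tooling doesn't keep these links for you, one lightweight way to name the collaboration and record its traces is a simple record per collaboration, listing the requirements it satisfies and the classes that realize it. A minimal sketch; the requirement IDs and class names are hypothetical illustrations:

```python
# A minimal sketch of hand-named collaborations traced back to requirements and
# forward to classes; the requirement IDs and class names are hypothetical.
collaborations = {
    "Software Clock": {
        "requirements": ["SR31 Maintain time of day", "SR32 Recover time after power loss"],
        "classes": ["Clock", "TimeSource", "ClockCalibrator"],
    },
    # ... one record per named collaboration
}

for name, record in collaborations.items():
    print(f"Collaboration '{name}'")
    print("  satisfies requirements:", record["requirements"])
    print("  realized by classes:   ", record["classes"])
```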

Tracing from Requirements to Testing

Tracing from Use Case to Test Case

Finally, we approach the last system boundary we must bridge to implement a complete traceability strategy: the bridge from the requirements domain to the testing domain. As Heumann [2001] describes, and as we described in Chapter 26, one specific approach to comprehensive testing is to assure that every use case is "tested by" one or more test cases. This was already reflected in our generalized traceability model, as we now highlight in Figure 27-7.

Figure 27-7. Tracing use cases to test cases


However, this simple diagram understates the complexity of the case somewhat, for it was not a trivial one-for-one transition that we described.


As you'll recall from Chapter 26, we first had to identify all the scenarios described in the use case itself. This is a one-to-many relationship since an elaborated use case will typically have a variety of possible scenarios that can be tested. From a traceability viewpoint, each use case traces to each scenario of the use case as shown in Figure 27-8.

Figure 27-8. Tracing use cases to test case scenarios


This can be represented by listing the scenarios of a specific use case in the rows of a matrix (Table 27-4).
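If you keep each scenario as an ordered list of the flows it traverses, the rows of such a matrix can also be generated and checked mechanically. A minimal Python sketch, using the Control Light and Run Vacation Profile scenarios of Table 27-4 below:

```python
# A minimal sketch of use-case scenarios kept as ordered flow lists, mirroring
# the rows of Table 27-4.
scenarios = {
    ("Control Light", 1): ["Basic flow"],
    ("Control Light", 2): ["Basic flow", "Alternate flow 1"],
    ("Control Light", 3): ["Basic flow", "Alternate flow 1", "Alternate flow 2"],
    ("Control Light", 4): ["Basic flow", "Alternate flow 3"],
    ("Control Light", 5): ["Basic flow", "Alternate flow 3", "Alternate flow 1"],
    ("Control Light", 6): ["Basic flow", "Alternate flow 3", "Alternate flow 1", "Alternate flow 2"],
    ("Control Light", 7): ["Basic flow", "Alternate flow 4"],
    ("Control Light", 8): ["Basic flow", "Alternate flow 3", "Alternate flow 4"],
    ("Run Vacation Profile", 1): ["Basic flow"],
}

# Sanity checks: every scenario starts from the basic flow, and every alternate
# flow of Control Light is exercised by at least one scenario.
assert all(flows[0] == "Basic flow" for flows in scenarios.values())
exercised = {flow for (uc, _), flows in scenarios.items()
             if uc == "Control Light" for flow in flows if flow != "Basic flow"}
print("Alternate flows exercised:", sorted(exercised))
```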

Table 27-4. Traceability Matrix for Use Cases to Scenarios

Use Case               Scenario Number   Originating Flow   Alternate Flow     Next Alternate     Next Alternate
Control Light          1                 Basic flow
                       2                 Basic flow         Alternate flow 1
                       3                 Basic flow         Alternate flow 1   Alternate flow 2
                       4                 Basic flow         Alternate flow 3
                       5                 Basic flow         Alternate flow 3   Alternate flow 1
                       6                 Basic flow         Alternate flow 3   Alternate flow 1   Alternate flow 2
                       7                 Basic flow         Alternate flow 4
                       8                 Basic flow         Alternate flow 3   Alternate flow 4
Run Vacation Profile   1                 Basic flow
However, we are still not done because each scenario can drive one or more specific test cases, as illustrated in Figure 27-9.

Figure 27-9. Tracing scenarios to test cases


However, in matrix form, this simply adds one more column to the matrix, as illustrated in Table 27-5.

In this way, a traceability matrix of one-to-many (use case to scenario) and an additional one-to-many (scenario to test case) relationship can fully describe the relationships among these elements. In a manner similar to the other matrices, automated tooling can help you build this matrix as well as perform certain inspections and quality tests. In addition, some tools provide immediate impact assessment by indicating which traced elements (for example, test cases) might be affected when a traced element (for example, a scenario) is changed.
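The same structure extends to this final link: map each scenario to its test case identifiers, and the impact query becomes a simple lookup. A minimal sketch, using the test case IDs of Table 27-5 (a requirements tool would, of course, track such changes for you):

```python
# A minimal sketch of scenario-to-test-case links, following Table 27-5; only
# the scenarios with more than one test case are spelled out here.
test_cases = {
    ("Control Light", 4): ["4.1", "4.2", "4.3"],
    ("Control Light", 7): ["7.1", "7.2"],
    ("Run Vacation Profile", 1): ["1.1"],
}

def impacted_tests(use_case, scenario_number):
    """Impact assessment: test cases to revisit when a scenario changes."""
    return test_cases.get((use_case, scenario_number), [])

print(impacted_tests("Control Light", 4))   # -> ['4.1', '4.2', '4.3']
```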

Table 27-5. Traceability Matrix for Use Cases to Test Cases

Use Case               Scenario Number   . . .   Test Case ID
Control Light          1                         1.1
                       2                         2.1
                       3                         3.1
                       4                         4.1
                       4                         4.2
                       4                         4.3
                       5                         5.1
                       6                         6.1
                       7                         7.1
                       7                         7.2
                       8                         8.1
Run Vacation Profile   1                         1.1

Tracing from Supplementary Requirements to Test Cases

For those requirements that are not expressed in use-case form, the process is similar to the requirements-to-implementation process described above. More specifically, requirements are either traced individually to scenarios and test cases or grouped into "requirements packages" that operate in the same logical fashion as a use case. The matrices we describe above are unchanged, except that the column on the far left contains the specific requirement, or requirements package, that is being traced.

   

