As we discussed in Chapter 15, different projects drive different types of requirements artifacts and different ways of organizing them. These decisions, in turn, drive differing needs for the number and nature of the artifacts to be traced. However, building on what you've learned, you can see that the static structure, or model, for your traceability strategy is common for most projects in that it follows a hierarchy from higher-level needs and features through more detailed requirements, and then on into implementation and testing. Therefore, no matter what type of project your team is executing, your traceability model will likely appear similar to Figure 27-3.
Figure 27-3. Generalized traceability hierarchy
The model shows that we are tracing requirements both within a domain, as in the system definition (or requirements) domain, and from there into the implementation and test domains. Although there are many additional "things" you could trace (requirements to glossary items and so on), experience has shown that these basic types of traces usually cover most needs. Of course, your conditions may differ, and it may be necessary to add or subtract from the core traceability list.
In the following sections, we show some examples of each of these major trace techniques and point out why you would want to do such tracing.
Tracing Requirements in the System Definition Domain
Let's look first at tracing requirements within the system, or product, definition domain. We'll call this requirement-to-requirement traceability because we'll be relating one type of requirement (for example, a feature) to another (for example, a use case).
Tracing User Needs to Features
As we described in Team Skill 2, the time spent understanding user needs is some of the most valuable time you can spend on the project. Defining the features of a system that meets those needs is the next step in the process, and it can be helpful to continually relate how the user needs are addressed by the features of your proposed solution. We can do so via a simple table, or traceability matrix, similar to the one shown in Table 27-1.
Table 27-1. Traceability Matrix: User Needs versus Features
In Table 27-1, we've listed all the user needs we identified down the left column. In the row across the top, we've listed all the application features we defined to satisfy the stated needs. Where did we get those features? We developed those features in the context of the Vision document described in Chapter 16. Team Skill 2 addresses the techniques your team can use to derive the user needs and features of the system.
Once the rows (needs) and columns (features defined to address those needs) are defined, we simply put an X in the appropriate cell(s) to record the fact that a specific feature has been defined for the sole purpose of supporting one or more user needs. Note that this is typically a one-to-many mapping, since needs are identified in far smaller numbers, and at higher levels of abstraction, than the features defined to implement them.
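To make the mechanics concrete, here is a minimal sketch of such a matrix in Python. The need and feature identifiers (NEED-1, FEAT-1, and so on) are hypothetical, and the representation (a need mapped to the set of features traced to it) is just one simple way to encode the X marks of Table 27-1.

```python
# A need-to-feature traceability matrix, encoded as a mapping from each
# need to the set of features defined to support it (hypothetical IDs).
traces = {
    "NEED-1": {"FEAT-1", "FEAT-2"},  # one need traced to two features
    "NEED-2": {"FEAT-3"},
    "NEED-3": set(),                 # a need with no supporting feature
}

features = ["FEAT-1", "FEAT-2", "FEAT-3"]  # the matrix columns

def matrix_row(need):
    """Render one row of the matrix: an X where a trace exists."""
    return ["X" if f in traces[need] else " " for f in features]

for need in traces:
    print(need, matrix_row(need))
```

The one-to-many character of the mapping shows up directly: a row may contain several X marks, but each X records a single need-to-feature dependency.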
After you've recorded all known need-to-feature relationships, examining the traceability matrix for potential indications of error can be an instructive activity.
In addition, because of the dependency inherent in the traceability relationship, we can see which specific features would need to be reconsidered if a user need should change during the implementation period. Ideally, this impact assessment process will be supported by the automated change-detection capabilities of your requirements tool.
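Both activities, error inspection and impact assessment, can be sketched as simple queries over the trace table. This is an illustrative sketch with hypothetical identifiers, not the behavior of any particular requirements tool.

```python
# Two checks over a need-to-feature trace table (hypothetical IDs):
# (1) error inspection: needs with no feature, features with no need;
# (2) impact assessment: which features to reconsider when a need changes.
traces = {
    "NEED-1": {"FEAT-1", "FEAT-2"},
    "NEED-2": {"FEAT-2"},
    "NEED-3": set(),
}
all_features = {"FEAT-1", "FEAT-2", "FEAT-3"}

# Needs no feature supports: possible gaps in the proposed solution.
unaddressed_needs = {n for n, feats in traces.items() if not feats}

# Features no need motivates: possible gold-plating.
traced_features = set().union(*traces.values())
orphan_features = all_features - traced_features

def impact_of_change(need):
    """Features that depend on the changed need and must be re-examined."""
    return sorted(traces.get(need, set()))
```

An automated tool would maintain these links for you, but the underlying queries are no more complicated than this.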
Once you've mapped the need-to-feature relationships and have determined that the needs and features are correctly accounted for and understood, it's time to consider the next level of the hierarchy: relationships between the features and the use cases.
Tracing Features to Use Cases
It is equally important to ensure that the features can be related to the use cases proposed for the system. After all, the use cases illuminate the proposed implementation of the system from a user's perspective, and our job is to ensure that we have a fully responsive design.
In Table 27-2, we've listed all the features down the left column. In the row across the top, we've listed the use cases we derived to satisfy the stated features. Team Skill 2 addresses the techniques your team can use to derive the features and use cases of the system. As in the previous section, mapping the features and use cases into a matrix as shown in Table 27-2 should be a straightforward process.
Table 27-2. Traceability Matrix: Features versus Use Cases
Once the rows (features) and columns (use cases) are defined, we indicate a traceability relationship with an X in the cell(s) that represents a use case that supports one or more features. Note that this is likely to be a set of many-to-many relationships because, although both features and use cases describe system behaviors, they do so in different ways and at different levels of detail. A single feature may be supported or implemented by multiple use cases. In addition, it is not unusual for a single use case to implement more than one feature.
After you've established all known feature-to-use-case relationships, you should once again examine the traceability matrix for potential indications of error.
In any case, reviewing and analyzing the data will improve your understanding of the implementation, help you find the obvious errors, and increase the level of certainty in the design and implementation.
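Because the feature-to-use-case mapping is many-to-many, it can be useful to examine it from both directions. The sketch below, using hypothetical identifiers, inverts the trace table so you can see at a glance which use cases carry more than one feature.

```python
# A many-to-many feature-to-use-case trace table (hypothetical IDs),
# inverted to show which features each use case implements.
from collections import defaultdict

feature_to_uc = {
    "FEAT-1": {"UC-1"},
    "FEAT-2": {"UC-1", "UC-2"},  # one feature supported by two use cases
    "FEAT-3": {"UC-2"},
}

uc_to_feature = defaultdict(set)
for feat, ucs in feature_to_uc.items():
    for uc in ucs:
        uc_to_feature[uc].add(feat)  # e.g., UC-1 implements FEAT-1 and FEAT-2
```

Reviewing both views helps surface the obvious errors: a feature with no use case, or a use case that traces to no feature at all.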
Once you've mapped the feature-to-use-case relationships and have determined that the features and use cases are correctly accounted for, you need to apply similar thinking to the nonfunctional requirements and their specification.
Tracing Features to Supplementary Requirements
Often we look to the Vision document for these special features, or additional high-level requirements, and trace them from there into the supplementary requirements that capture this important information. This can be captured in a matrix similar to Table 27-3. In other cases, these requirements may originate within the supplementary specification itself, and they would not have a further trace from their origin.
Table 27-3. Traceability Matrix: Features versus Supplementary Requirements
Tracing Requirements to Implementation
Having described the type of requirements tracing typical in the system definition domain (requirement-to-requirement traceability), we are now prepared to move from the requirements domain into the implementation and test domains. While the principles are the same (we use the dependency traceability relationship to navigate this chasm), the information content on the other side is remarkably different. Let's look first at crossing the chasm from requirements to implementation.
In Chapter 25, we discussed at some length the problem of transitioning from requirements to code, which we called the problem of orthogonality. The context for the discussion was developing an understanding of how the team makes this difficult transition and what role the requirements play in helping to do so. We also noted that tracing from requirements to implementation, and specifically from requirements to code, is extremely difficult, if not entirely impractical, and in general we do not recommend it. For this reason, we suggested that mapping from use case to use-case realization, and from requirement to collaboration in the design model, was perhaps the only pragmatic approach.
Tracing Use Cases to Use-Case Realizations
As we described in Chapter 26, in making this transition, we move to relating one form of artifact, the use-case form, to another artifact, the use-case realization, in the design model. In so doing, we used these two specific elements to bridge the gap between requirements and design (Figure 27-4).
Figure 27-4. Tracing from use case to use-case realization
In this special case, the traceability problem is simplified immensely because there is a one-to-one name-space correspondence between a use case and its realization. Therein we meet both traceability requirements: the relationship between the entities is expressed directly by their shared name, and the reason for the existence of the subordinate or traced entity, the use-case realization, is implicit in its very nature. That is, the use-case realization exists for only one purpose: to implement the use case of the same name. Therefore, there is no matrix to analyze, since the design practice we employed yielded inherent traceability by default!
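Because the trace rests entirely on shared names, verifying it reduces to a pure name-space comparison. The sketch below uses hypothetical use-case names; a modeling tool would perform the same check across its model elements.

```python
# Name-space check between use cases and use-case realizations
# (hypothetical names). A matching name IS the trace.
use_cases = {"Control Light", "Set Vacation Schedule"}
realizations = {"Control Light", "Run Diagnostics"}

# Use cases with no realization: unimplemented requirements.
unrealized = use_cases - realizations

# Realizations with no use case: design elements with no stated purpose.
unmotivated = realizations - use_cases
```

Two set differences replace an entire traceability matrix, which is exactly the economy the one-to-one naming convention buys you.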
Tracing from the Use-Case Realization into Implementation
However, for those who require a higher degree of assurance, or when traceability to code is mandated, it may not be adequate to stop at the design construct of the use-case realization. In this case, the traceability relationship must be followed from the use-case realization to its component parts, which are the classes (code) that implement the collaboration (Figure 27-5).
Figure 27-5. Tracing from use-case realization to implementation
How you will accomplish this mechanically depends on the types of tools you employ in your requirements, analysis, and design efforts. Without adequate tooling, the problem quickly becomes intractable, since you will likely be dealing with dozens of use cases and hundreds of classes. Therefore, your choice of tooling will have a major impact on the practicality and efficacy of this level of traceability.
Tracing Supplementary Requirements into Implementation
Of course, not everything is a use case, and for those who must drive to uncompromised degrees of quality and safety, it may also be necessary to trace from supplementary requirements into implementation. How does one do this? We use a technique similar to the one described in Chapter 25: we trace individual requirements, or groups of requirements (such as the HOLIS software clock requirements), to a collaboration in the implementation. After all, the use-case realization wasn't really so special; it was just a type of collaboration all along. In this case, we'll have to name the collaboration and keep track of the links by some special means because they don't come prenamed and conveniently collected in a use-case package. However, the principle is the same, as Figure 27-6 illustrates.
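A small sketch of the bookkeeping involved: requirement groups are traced to a collaboration we name ourselves. The requirement IDs are hypothetical, and "SoftwareClock" is simply an illustrative collaboration name echoing the HOLIS clock example; it is not a name from the book's actual design model.

```python
# Tracing grouped supplementary requirements to a named collaboration
# (hypothetical requirement IDs; "SoftwareClock" is an illustrative name).
req_group_to_collab = {
    ("SR-31", "SR-32", "SR-33"): "SoftwareClock",
}

def collaboration_for(req):
    """Find the design collaboration that realizes a given requirement."""
    for group, collab in req_group_to_collab.items():
        if req in group:
            return collab
    return None  # an untraced supplementary requirement: a gap to investigate
```

Unlike use-case realizations, these links must be maintained explicitly, which is why tooling support matters most for exactly this kind of trace.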
Figure 27-6. Tracing from supplementary requirements into implementation
From there, you can trace to the specific code contained in the classes that realize the collaboration. Again, the mechanics of this are determined by the types of tooling you choose to employ.
Tracing from Requirements to Testing
Tracing from Use Case to Test Case
Finally, we approach the last system boundary we must bridge to implement a complete traceability strategy: the bridge from the requirements domain to the testing domain. As Heumann describes, and as we described in Chapter 26, one specific approach to comprehensive testing is to assure that every use case is "tested by" one or more test cases. This was already reflected in our generalized traceability model, as we now highlight in Figure 27-7.
Figure 27-7. Tracing use cases to test cases
However, this simple diagram somewhat understates the complexity of the case, for the transition we described was not a trivial one-for-one mapping: each use case gives rise to a number of scenarios.
Figure 27-8. Tracing use cases to test case scenarios
This can be represented by listing the scenarios of a specific use case in the rows of a matrix (Table 27-4).
Table 27-4. Traceability Matrix for Use Cases to Scenarios
However, we are still not done because each scenario can drive one or more specific test cases, as illustrated in Figure 27-9.
Figure 27-9. Tracing scenarios to test cases
In matrix form, this simply adds one more column to the matrix, as illustrated in Table 27-5.
In this way, a traceability matrix of one-to-many (use case to scenario) and an additional one-to-many (scenario to test case) relationship can fully describe the relationships among these elements. In a manner similar to the other matrices, automated tooling can help you build this matrix as well as perform certain inspections and quality tests. In addition, some tools provide immediate impact assessment by indicating which traced elements (for example, test cases) might be affected when a traced element (for example, a scenario) is changed.
Table 27-5. Traceability Matrix for Use Cases to Test Cases
Tracing from Supplementary Requirements to Test Cases
For those requirements that are not expressed in use-case form, the process is similar to the requirements-to-implementation process described above. More specifically, requirements are either traced individually to scenarios and test cases or grouped into "requirements packages" that operate in the same logical fashion as a use case. The matrices we describe above are unchanged, except that the column on the far left contains the specific requirement, or requirements package, that is being traced into implementation.