Mapping Requirements Directly to Design and Code


Fortunately, for some percentage of our requirements, we can design the software so that it is relatively easy to follow our requirements into design and then into code. This also means that we can test a significant portion of our code, using a requirement-to-module test, since there will be a reasonable degree of correlation between the statement of a requirement and the code that implements it. For example, it's probably fairly straightforward to find, inspect, and validate the code that fulfills the requirement "Support up to an eight-digit floating-point input parameter," or "Indicate compilation progress to the user," as we can see in Figure 25-1. Depending on the type of system we are building, this approach may work for a substantial portion of our code, so the requirements-to-design-to-implementation process is not so difficult in these cases.

Figure 25-1. From requirements to design to implementation: a direct mapping

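When the mapping is this direct, the traceability can live right in the code. What follows is a minimal sketch, in Java, of such a requirement-to-module mapping; the class name and the requirement tag SR-14 are hypothetical, invented for illustration.

    // A minimal sketch of a direct requirement-to-module mapping.
    // The class name and the requirement tag SR-14 are hypothetical.
    public final class InputParameterValidator {

        // Satisfies SR-14: "Support up to an eight-digit floating-point
        // input parameter." The entire requirement is inspectable here.
        private static final int MAX_DIGITS = 8;

        public static boolean isValid(String input) {
            if (input == null) {
                return false;
            }
            try {
                Double.parseDouble(input);   // must parse as a floating-point value
            } catch (NumberFormatException e) {
                return false;
            }
            // Count digits only, ignoring sign, decimal point, and exponent marker.
            long digits = input.chars().filter(Character::isDigit).count();
            return digits > 0 && digits <= MAX_DIGITS;
        }
    }

A requirement-to-module test for SR-14 need only exercise this one class, which is exactly the kind of correlation the direct mapping buys us.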

The Orthogonality Problem

However, when it comes to such requirements as "The system shall handle up to 100,000 trades an hour" or a use-case step like "The user can edit each of the highlighted fields in accordance with user privileges that have been established by the system administrator," things get a little trickier. In these cases, there is little correlation between the requirement and the design and implementation; they are orthogonal, or nearly so. In other words, the form of our requirements and the form of our design and implementation are different. There is no one-to-one mapping to make implementation and validation easier. There are many reasons why this is true.

  • Requirements speak of real-world items, such as engines and paychecks, but code speaks of stacks, queues, and computation algorithms. The two are different languages.

  • Certain requirements, such as performance requirements, have little to do with the logical structure of the code but lots to do with the process structure: how various pieces of code interact, how fast a particular piece of code runs, how often we get interrupted while in module A, and so on. When a requirement cannot be mapped physically onto the logical structure, there is no single place in the implementation to "point" the requirement at. (A measurement-style probe of such a requirement is sketched after this list.)

  • Other functional requirements require that a number of system elements interact to achieve the functionality. Looking at a part is not the same as looking at the whole, and the implementation of the requirement may be distributed throughout the code.

  • Perhaps most importantly, good system design is driven not by optimizing the ease with which we can prove that a requirement is met but by more important factors. For example, the designer may be optimizing the use of scarce resources, reusing an architectural pattern that has been proven in other applications but is not the exact paradigm of the current application, reusing code, or applying purchased components that bring their own overhead and functional behaviors.
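To make the first two points concrete: a performance requirement such as "handle up to 100,000 trades an hour" can be verified only by measuring the assembled, running system, not by inspecting any one module. The sketch below illustrates the idea; TradingSystem, Trade, and the probe itself are hypothetical stand-ins, not part of any real design discussed here.

    import java.time.Duration;
    import java.time.Instant;

    // Hypothetical stand-ins for the real system under test.
    interface Trade { }
    interface TradingSystem { void submit(Trade trade); }

    // A sketch of requirement verification by measurement: no single
    // module "contains" the throughput requirement, so we probe the
    // whole running system instead.
    public class ThroughputProbe {

        public static boolean handles100kTradesPerHour(TradingSystem system, Trade sample) {
            final int trades = 10_000;                    // size of the sample run
            Instant start = Instant.now();
            for (int i = 0; i < trades; i++) {
                system.submit(sample);                    // exercises the whole stack
            }
            long millis = Duration.between(start, Instant.now()).toMillis();
            // Extrapolate the sampled rate to a full hour.
            double tradesPerHour = trades * 3_600_000.0 / Math.max(millis, 1);
            return tradesPerHour >= 100_000;
        }
    }

Notice that the probe tells us nothing about where in the code the requirement lives; it can only tell us whether the system as a whole meets it.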


In any case, the design of the solution does not follow the form of the requirements, and there is no easy way to definitively follow, or trace, from requirements to design and code. Those of us who have been building high-assurance systems, or who have been forced by political or contractual considerations to demonstrate on paper a direct correlation between requirements and code, managed to get by. But, we admit, the formulation consisted of one part real-and-deadly-serious requirements-traceability mechanisms and one part pixie dust.

Object Orientation

In many ways, this problem of orthogonality (the lack of a direct relationship between requirements, which reflect the problem space, and the code we implement) was substantially improved with the advent of object-oriented (OO) technology. In applying OO concepts, we tended to build code entities that were a better match to the problem domain, and we discovered that an improved degree of robustness resulted. This was due not only to the OO principles of abstraction, information hiding, inheritance, and so on but also to the fact that the real-world entities simply changed less often than the transactions and the data around which we formerly designed our systems. Therefore, our code changed less often, too. (For example, people still get paychecks today, just as they did 40 years ago, but in many cases the form of delivery, electronic versus paper, has changed dramatically.)

With OO technology, we did start to find engine objects and paycheck objects in the code, and we used this to good advantage to decrease the degree of orthogonality in requirements verification. We could look at the requirements for "paycheck stub" and see whether the implied operations and attributes were supported in the design model.
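As a simple illustration, consider the kind of problem-domain class that OO encourages. The attributes and operations below are invented for the example, not drawn from any particular payroll specification, but a reviewer could check a "paycheck stub" requirement against printStub() directly.

    import java.math.BigDecimal;
    import java.time.LocalDate;

    // Invented supporting types, for illustration only.
    record Employee(String name) { }
    record PaycheckStub(String payeeName, BigDecimal netPay, LocalDate payDate) { }

    // A problem-domain class of the kind OO encourages: the nouns of the
    // requirements ("paycheck," "employee") appear directly in the code.
    public class Paycheck {
        private final Employee payee;
        private final BigDecimal grossPay;
        private final BigDecimal deductions;
        private final LocalDate payDate;

        public Paycheck(Employee payee, BigDecimal grossPay,
                        BigDecimal deductions, LocalDate payDate) {
            this.payee = payee;
            this.grossPay = grossPay;
            this.deductions = deductions;
            this.payDate = payDate;
        }

        // A reviewer can check a "paycheck stub" requirement against
        // this operation and these attributes directly.
        public PaycheckStub printStub() {
            return new PaycheckStub(payee.name(), grossPay.subtract(deductions), payDate);
        }
    }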

However, we must be careful because a purposeful attempt to provide a one-to-one mapping from requirements to code can lead to a very non-OO architecture, one that is functionally organized. The basic principles of OO technology drive the designer to describe a small number of mechanisms that satisfy the key requirements of the system, resulting in a set of classes that collaborate and yield behavior that's bigger than the sum of its parts. This "bigger behavior" is intended to provide a more robust, more extensible design that can deliver the current and, ideally, future requirements in the aggregate, but it is not a one-to-one mapping from requirements. Therefore, even with OO technology, some degree of orthogonality with requirements will always, and should always, remain.

The Use Case as a Requirement

As we mentioned earlier, the "itemized" nature of the requirements can further compound the problem of orthogonality. Each requirement by itself may not present a huge problem, but the itemization makes it difficult to look at system behavior in the aggregate to see whether the system does all the right things in the right sequence. How could we examine the system to determine whether requirement 3 ("Display progress bar") immediately followed requirement 7 ("During compilation, the algorithm is . . .")?


The use case, which provides a sequence of actions between the system and the user instead of an itemized individual requirement, mitigates this problem significantly. Now the requirements themselves, in the form of the use cases, do a better job of describing the behavior of the system in sequential fashion, complete with alternatives and exceptions. As we said before, use cases simply tell a better story about how the system does what it is intended to do. In addition, as we will see, they also give us a head start on the design process.
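Because a use case is itself a sequence, it maps naturally onto a scenario-style check. Below is a hedged sketch, with an invented Compiler interface and invented event names, showing how an ordering question like the one posed above becomes directly testable.

    import java.util.ArrayList;
    import java.util.List;

    // A scenario-style check of a use-case flow. The Compiler interface
    // and the event names are invented for illustration.
    public class CompileScenarioCheck {

        interface Compiler { void compile(String source, List<String> eventLog); }

        public static boolean followsBasicFlow(Compiler compiler) {
            List<String> events = new ArrayList<>();
            compiler.compile("example.src", events);
            int progress = events.indexOf("progressBarShown");
            int started  = events.indexOf("compilationStarted");
            int finished = events.indexOf("compilationFinished");
            // Every step must occur, and in the order the use case prescribes.
            return progress >= 0 && progress < started && started < finished;
        }
    }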

Managing the Transition

Although, with OO methods and use cases, we haven't solved the problem of orthogonality, we do have a number of existing assets and a few new techniques that can help us deal with the problem. If we can use these assets to increase the parallels between requirements and code, it seems likely that we can use our understanding of the requirements to more logically drive the design of the system. In so doing, we should find it easier to translate between these dissimilar worlds, to improve the design of the system, and to improve the overall quality of the system that results. Before we do so, however, we need to make a small digression into the world of modeling and software architecture.

Modeling Software Systems

Nontrivial software systems today are extraordinarily complex undertakings. It is common to find systems and applications that are composed of millions of lines of code. These systems or applications may, in turn, be embedded in other systems that have extraordinary complexity in their own right, not to mention the complex interactions that may occur between the systems. We take it as a given that no one person, or even group of persons, can possibly understand the details of each of these systems and their planned interactions.

In the face of this complexity, and to keep our wits about us, a useful technique is to abstract the system into a simplified model, removing the minutiae of the system in order to view a more comprehensible version. The purpose of modeling is to simplify the details down to an understandable "essence" without oversimplifying to the point that the model no longer adequately represents the real system. In this way, we can think about the system without being buried in the details.

Selection of the model is an important issue. We want the model to help us understand the system in the proper way, but we don't want the model to mislead us through errors or faulty abstractions. You've undoubtedly seen the drawings and machines that helped the early philosophers, astronomers, and mathematicians understand the workings of the solar system. Many of these models, based on a geocentric view with Earth at the center of the universe, led to many blind alleys and incorrect theories. Only when sun-centered, or heliocentric, models were proposed did a better understanding of our solar system emerge.

Remember, the model is not the reality.

Models provide a powerful way to reason about a complex problem and to derive useful insights. However, we must be aware that the model is not the reality. We must continually check and assure ourselves that the model has not led us astray.


For example, the heliocentric (sun-centered) models of the universe opened up many new possibilities and ideas regarding the universe at large (very large). Early scientists were able to reason from the model and to propose refined mathematical theories relating motion, gravity, and so on. However, it's important to note that the model was not the reality. In some cases, the mechanical views of the universe, as exemplified by the model, did not exactly match the observed realities. Indeed, one of the early confirmations of Einstein's theory of relativity came from previously unexplained anomalies in the planet Mercury's orbit.

Many different aspects of a system can be modeled. If you are interested in application concurrency, you may model that. If you are interested in the system's logical structure, you may model that. In addition, these models need to interact in some way, and that aspect, too, can be modeled. Each of these models contributes to our understanding of the system in the aggregate, and taken together they allow us to consider the system architecture as a whole.

The Architecture of Software Systems

According to Shaw and Garlan [1996], software architecture involves the

description of elements from which systems are built, interactions amongst those elements, patterns that guide their composition, and constraints on those patterns.

According to Kruchten [1999], we use architecture to help us:

  • Understand what the system does

  • Understand how the system works

  • Think and work on pieces of the system

  • Extend the system

  • Reuse part(s) of the system to build another one

Architecture becomes the tool by which decisions are made about what will be built and how it will be built. In many projects, we know at the start how we are going to put the pieces together because we, or others, have developed such systems before. These easy starting decisions are reflected in the dominant architecture notion, which is just a fancy way to say that "everyone knows how to build a payroll system."

Dominant architecture helps us kick-start the decision process and minimizes risk through the reuse of pieces of a successful solution. If you're going to build a payroll system, it would be silly to start from scratch and invent the entire concept of FICA, check writing, medical deductions, and so on. Start by looking at models of existing systems, and use them to prompt your thinking.

Different stakeholders need different perspectives of the system.

Different groups of stakeholders need to consider your architectural models and will want to view the proposed architecture from different perspectives. The house-building metaphor holds here: you'd want views of the house suitable for the framers, the roofers, the electricians, the plumbers, and so on. It's all the same house, but our view of it differs, depending on the need.

The "4+1" View of Architecture

There is usually a small set of common needs for viewing the system architecture. The views that best illustrate these needs are discussed by Kruchten [1995] as the "4+1" view shown in Figure 25-2. The figure identifies a number of stakeholders (programmers, managers, users) and positions them near the types of views they would normally need to consider.

  1. The logical view addresses the functionality of the system. This abstraction of the design model represents the logical structure of the system in terms of subsystems and classes, which in turn are the entities that deliver the functionality to the user.

  2. The implementation view describes the bits and pieces that are relevant to the implementation of the system: source code, libraries, object classes, and so on. This view represents the static view of these pieces, not how they interact.

  3. Process views, generally more useful for describing the operation of the system, are extremely important for systems that have parallel tasks, interfaces with other systems, and other interactions that occur during execution. Since many modern systems exhibit high degrees of parallelism and multithreading, this view allows the reviewer to spot potential problems, such as race conditions or deadlocks. (A toy example of such a race appears after Figure 25-2.) You should also use the process view to examine throughput and other performance issues that the user specified in the nonfunctional requirements.

  4. Because the project modules rarely exist in a vacuum, the deployment view allocates the implementation elements to the supporting infrastructure, such as operating systems and computing platforms. This view is not especially concerned with what the interactions are but rather with the fact that there are interactions and constraints where the two systems meet.

Figure 25-2. The 4+1 architectural view

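As a toy illustration of the kind of problem the process view exists to expose, consider two tasks updating a shared trade counter. All names below are invented; the unsynchronized counter races, while the AtomicLong version does not.

    import java.util.concurrent.atomic.AtomicLong;

    // Two tasks update a shared counter. The plain long races; the
    // AtomicLong does not. All names are invented for illustration.
    public class ProcessViewExample {
        static long unsafeCount = 0;                          // unsynchronized: races
        static final AtomicLong safeCount = new AtomicLong();

        public static void main(String[] args) throws InterruptedException {
            Runnable worker = () -> {
                for (int i = 0; i < 100_000; i++) {
                    unsafeCount++;                            // read-modify-write, not atomic
                    safeCount.incrementAndGet();              // atomic
                }
            };
            Thread a = new Thread(worker);
            Thread b = new Thread(worker);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println("unsafe: " + unsafeCount);     // often less than 200000
            System.out.println("safe:   " + safeCount.get()); // always 200000
        }
    }

No inspection of the logical view would reveal this defect; it appears only when we consider how the pieces execute together, which is exactly the process view's job.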

The Role of the Use-Case Model in Architecture


Finally, we return to our problem of orthogonality. Within the architecture, the use-case view, as the holder of requirements, plays a special role in the architectural model. This view presents key use cases of the use-case model, drives the design, and ties all the various views of the architecture together. We favor this view because it allows all stakeholders to examine the proposed system implementation plans against a backdrop of the actual use cases and requirements of the system. Therefore, the use-case view, which represents the functionality of the system, is the "tie that binds," that is, the one view that binds the other views together.

For example, the HOLIS use case Initiate Emergency Sequence would impact the design of the system in each of the four views as follows.

  1. The logical view would describe the various classes and subsystems that implemented the behaviors called for by the emergency sequence functionality.

  2. The implementation view would describe the various code artifacts for HOLIS, including source and executable files.

  3. The process view would demonstrate how the multitasking capability of HOLIS was always available to initiate an emergency sequence, even when the system was being programmed or was busy doing other tasks. (A minimal sketch of such an always-available monitor task follows this list.)

  4. The deployment view would show that the functionality of HOLIS was distributed across the three HOLIS nodes, or subsystems: Control Switch, Central Control Unit, and Homeowner's PC.
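As a hedged sketch of the process-view idea in item 3, consider what "always available to initiate an emergency sequence" might look like in code: a dedicated, high-priority monitor task that blocks on an event queue while other work proceeds. The class and method names are invented; the actual HOLIS design is not shown in the text.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // An always-available monitor task: it blocks on an event queue, so
    // it consumes no CPU while idle yet responds immediately when any
    // other task raises an emergency. All names are invented.
    public class EmergencyMonitor implements Runnable {
        private final BlockingQueue<String> events = new LinkedBlockingQueue<>();

        public void raiseEmergency(String source) {
            events.offer(source);                     // callable from any other task
        }

        @Override
        public void run() {
            try {
                while (true) {
                    String source = events.take();    // blocks until an event arrives
                    initiateEmergencySequence(source);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // allow orderly shutdown
            }
        }

        private void initiateEmergencySequence(String source) {
            System.out.println("Emergency sequence initiated from " + source);
        }

        public static void main(String[] args) throws InterruptedException {
            EmergencyMonitor monitor = new EmergencyMonitor();
            Thread t = new Thread(monitor, "emergency-monitor");
            t.setPriority(Thread.MAX_PRIORITY);       // stays responsive during other work
            t.setDaemon(true);
            t.start();
            monitor.raiseEmergency("Control Switch");
            Thread.sleep(100);                        // let the monitor thread print
        }
    }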
