1.9 Working with Model-Based Projects


Given that your team has selected and implemented a reasonable model organizational structure, the question remains as to how to do the daily work of analysis, design, translation, and testing. These questions should be answered by your company's (or project's) software development plan (SDP). The SDP documents how the team is to work together effectively, what standards of work must be met, and other process details. In Section 1.5.2, we introduced the ROPES process. This can be used, as it has been in many real-time and embedded projects, or your own favorite process may be used instead. However, for the teams to work together, it is crucial that all the team members understand their roles and the expectations for their deliverable products [16].

One of the key activities in the daily workings of a project is configuration management. In a model-based project, the primary artifact being configured with the CM system is the model. If you are using a full-generative tool, such as Rhapsody, then you might be able to get away with CMing the model and not the generated code, because you can always produce the code from the model automatically. Nevertheless, many projects, especially those with high safety requirements, CM both the model and the generated code.

Ideally, the CM tool will interface directly with the modeling tool, and most UML modeling tools do exactly that. Most also allow a great deal of configurability as to the level and number of configuration items (CIs). The default behavior should be that a package and all its contents form a single CI, so there will be approximately the same number of CIs from the model as there are packages. However, for large-scale systems, packages may contain other packages, so the default should probably be the number of bottom-level packages. Packages, in the UML, may contain any model element, and will, of course, reflect your model organization. It may be desirable to have finer-level control over CIs, in which case you may want to go as far down as the individual class or function, but in large-scale systems, that level of control can be very tedious to manipulate. For the purpose of this discussion, we will assume you use the package as the primary CI for manipulation.

You will most likely want to use a locking mechanism on your CIs; that is, when one developer checks out a CI for updating, no one else can check it out except for reference. Because classes must collaborate across package boundaries, it is important that a CI's elements can be referenced by its clients. However, this does create a problem. As we will see when we discuss classes in the next chapter, associations are normally bi-directional. When you add such an association, you must have write access to both classes in order to add the association. If one of these classes is checked out read-only, then you don't have write privileges to it and cannot add the bi-directional association. One possible solution is to add unidirectional associations to classes for which you have only read access. As long as you're only sending messages to objects of such a class, you don't need write access to create the association. Another solution is to get write access for that small change.

Still another option is to allow multiple developers to work on the same CIs and merge the changes when they are finished. This is often done with code using text-based diff and merge tools. Some tools (such as Rhapsody) can perform these functions on models and identify when changes are in conflict, allowing the developers to decide what to do should the changes made to the CIs be mutually incompatible.

A generative tool is one that can take the structural and behavioral semantics specified in the model diagrams and use them to generate executable code. As mentioned, it is common to CM the code in addition to the models, but in many cases, this may not be necessary.

Normal work in the presence of a CM infrastructure proceeds as the engineer checks out the model CIs on which he or she wishes to work, as well as getting read-only locks on the CIs containing elements they wish to reference. They make the design changes, additions, or updates and check the CIs back in. In some cases, a two-tiered CM system is used: local CM for the individual worker at the desktop and project CM for the entire team. This approach allows the team member to work with CIs without breaking anyone else's work. Once the CIs are in a stable configuration, they may be checked back in to the project CM. The ROPES process recommends that before any CI can be used in a team build, the CI be unit-tested and reviewed. In my experience, this eliminates many work stoppages due to simple and easily correctable (and all too frequent) errors.

Another important aspect is requirements management (RM). This is particularly important in high-reliability (hi-rel) systems development, in which the cost of system failure is high. The concept of requirements management is a simple one, very similar to that of configuration management. Requirements are identified in an RM system, and then their design, implementation, and testing are tracked to the individual requirement. Forward RM allows the developer to track from the specific requirement to where the requirement is met in the design and code, and to its test status. This allows for queries such as "Where is this requirement met?", "How many requirements have been implemented?", and "How many requirements have been successfully tested?" Backward traceability allows the developer to look at some portion of the design or some set of tests and identify which requirements they meet. RM is best accomplished with tools designed for that specific purpose, and there are several available that interface with modeling tools.

The executability of a model is, in my opinion, very important. Whenever you complete some portion of a model, you must be able to answer the question "Is this right?" Experience has shown that asking and answering this question throughout the development lifecycle has an enormous impact on the quality of the system at the end. Indeed, the spiral development lifecycle is an attempt to ensure that the system is constructed using relatively small increments (called prototypes) that are each tested to be correct before moving on and adding more functionality. This is done not only at the level of the microcycle (four to six weeks is a typical timeframe) but also at the level of the nanocycle (every few minutes to hours). So if you are designing a collaboration of 50 classes to realize a use case, rather than create all 50 classes, generate them, and hope that they're right, you might create three classes and execute and test that portion of the collaboration. Once you're convinced that much is right, you might add one or two more and get that to work. Then add another class and some more behavior to two of the existing classes. And so on. This is a highly effective way in which to create complex systems. This approach is made even more productive when the modeling tool used is executable; that is, it can execute and debug portions of the model or the entire model. There exist UML tools that do this, Rhapsody being a prime example.

Executable tools come in two flavors: simulation tools and generative tools. Simulators pretend they are the real system and allow the system to execute in a simulated environment. Simulators have a good deal of merit, particularly for proving logical correctness, but they suffer from a few flaws as well. First, because you're not testing the real system or the real code, you must test twice: once on the simulated version and once on the final code. Second, the simulation cannot easily run on the actual target environment, nor in anything close to real time, so it is once removed from the true execution environment.

The other approach to executability is to use a generative tool. By generative tool, I mean that the tool can take the semantics of your model, by far most commonly entered using structural and behavioral diagrams, and generate code in the desired target source code language, such as C++, C, Java, or Ada. Since the code generated is the same as that which will be ultimately deployed, it is usually only necessary to test it once, saving valuable time and effort. Also, because true source code is generated, it can be run on the desktop debugging environment or, with nothing more than a recompile, also on the target hardware environment. For this reason, generative tools are considered by most to be "stronger" in terms of their executability.

Figure 1-18. Model Execution


In either case, the execution and debugging of models should be done at the model level rather than the level of the source code. If the developer is using class diagrams to show structure, and statecharts and sequence diagrams to specify behavior, then those are, in fact, the very views that should be used to examine the executing system.

Of course, standard debugging concepts should be supported at this model level (single step, step-over, step-into, setting breakpoints, and so on), but they should use design-level concepts, such as setting a breakpoint when a state is entered or an operation is invoked. Most of the system debugging should be done at this level of abstraction, although it may be necessary sometimes to drill down to the source-code level (and use a source-code-level debugger) or even to the assembly or logic analyzer level. Nevertheless, most of the debugging of a model should be done at the level of abstraction at which the model was created.

Debugging may be thought of as testing by roaming around. It is usually highly informal and unstructured. Debugging at the model level allows us to much more easily ensure that the system is behaving as expected than if we were limited to debugging the code resulting from the models. Beyond debugging, there is testing. By testing, I mean a structured and repeatable execution of the system or some portion thereof, with well-defined test conditions and a set of expected results with clear and unambiguous pass/fail criteria. Testing, too, should be done primarily at the model level.

In the ROPES process, there are three identified levels of testing:

  • Unit testing

  • Integration testing

  • Validation testing

Unit-level testing is done primarily white box and at the class or component level. Such testing ensures that the detailed design of the primitive building blocks of the system is correct and that preconditional invariants (such as "pointers are valid" and "enumerated values are in range") are checked. The consistent application of good unit-level testing is, in my experience, where the biggest improvements in overall system quality may be made.

Integration testing is a test of the architecture. Specifically, it tests that the large-scale pieces of the system (typically components or subsystems) fit together properly and collaborate as expected. Failure to adhere to interface requirements, especially those that are more subtle than simple operation parameter list types, is a leading cause of large-system failure. Interfaces are more than simple collections of operations that may be called from other architectural components. They have many assumptions about value ranges, the order in which operations may be invoked, and so on, that may not be caught by simple visual inspection. By putting together these architectural pieces and demonstrating that they do all the right things and catch violations of the preconditional invariants, we can alleviate many of the failures we see.

The last level of testing is validation testing. This is done primarily black box. Validation testing means that a system (or prototype of a system) properly executes its requirements in the real or simulated target environment. In an iterative development lifecycle, the primary artifacts produced at the end of each spiral constitute a version of the system that realizes some coherent set of requirements. Each prototype in the ROPES spiral is tested against that set of requirements, normally represented as a small set of use cases. In subsequent spirals, old requirements are validated using a set of regression tests to ensure that the new functionality hasn't broken the old.

As stated, the primary artifact of each spiral is the prototype: a tested, working version of the system, which may be incomplete. Another artifact is a defect report, identifying the defects that weren't fixed in the previous spiral. The next spiral typically adds new requirements as well as fixing previously identified minor defects (major defects must be repaired before the spiral may end). Often, as new functionality is added and known defects are removed, the model must be internally reorganized in minor ways. This is called refactoring. Refactoring is a normal outcome of the iterative lifecycle and is not to be feared. In the spiral approach, early design decisions are made with incomplete knowledge, after all, and even though an attempt is made to ensure that future additional functionality won't radically affect the architecture, sometimes it will. Usually these are small changes that must be made. But working with a spiral model means that you expect some refactoring to be necessary. It is only a concern if you find that major architectural changes are necessary in a number of prototypes. Should this occur, it would be useful to step back and reconsider the architectural selection with greater scrutiny.

The last aspect of model-based development I would like to consider is that of reviews or inspections, as they are sometimes called. Inspections serve two primary purposes: to improve the quality of the portion of the model being inspected and to disseminate knowledge of some portion of the model to various team members.

The ROPES process has particular notions of what constitutes good models. One of the common mistakes made by neophyte modelers is putting too much information in one place. While in books problems are simplified to make concrete points (and this book is no exception), in the real world problems are complex. It simply isn't possible to put every aspect of a system in a single diagram. What is needed is a good rule for how to break up a model into diagrams so that they aid in model creation and understanding. The ROPES rule for diagrams revolves around the notion of a "mission." Each diagram should have a single mission and include only those elements necessary to perform that mission, but it should include all of those elements. So rather than create a class diagram that has every class in the system (requiring "E"-size plotter paper and a 2-point font), a class appears on a diagram only when it is relevant to that diagram's mission. Likewise, given that a class is relevant to the diagram's mission, only the relevant aspects of the class are shown; operations, attributes, associations, and so on that are not relevant to the particular diagram are not shown on it, although they may very well appear on a different one. This means that it is common for a class to appear on more than one diagram.

Common diagrammatic missions include

  • A single collaboration (set of classes working together for a common purpose, such as realizing a use case)

  • A class taxonomy (i.e., generalization)

  • An architectural view

    • Subsystem and/or component architecture

    • Distribution of elements across multiple address spaces

    • Safety and/or reliability management

    • Concurrency and/or resource management (e.g., task diagram)

    • Deployment of elements on processors

    • Organization of processors and buses

  • The organization of the model (package diagram)

  • A scenario of a collaboration

  • A coherent set of requirements

  • Behavior of a structural element (e.g., statechart or activity diagram)

And these missions may occur at multiple levels of abstraction.



Real Time UML: Advances in the UML for Real-Time Systems (3rd Edition)
ISBN: 0321160762
Year: 2003
Pages: 127
