The Waterfall Development Model

In the 1960s and 1970s, software development projects were characterized by massive cost overruns and schedule delays; the focus was on planning and control (Basili and Musa, 1991). The emergence of the waterfall process to help tackle the growing complexity of development projects was a logical event (Boehm, 1976). As Figure 1.2 in Chapter 1 shows, the waterfall process model encourages the development team to specify what the software is supposed to do (gather and define system requirements) before developing the system. It then breaks the complex mission of development into several logical steps (design, code, test, and so forth) with intermediate deliverables that lead to the final product. To ensure proper execution with good-quality deliverables, each step has validation, entry, and exit criteria. This Entry-Task-Validation-Exit (ETVX) paradigm is a key characteristic of the waterfall process and the IBM programming process architecture (Radice et al., 1985).
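To make the ETVX idea concrete, the sketch below (in Python, with invented names; it is an illustration, not part of the IBM programming process architecture) represents one waterfall step as a record that carries explicit entry criteria, a task, validation activities, and exit criteria.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ETVXStep:
        """One process step expressed in Entry-Task-Validation-Exit (ETVX) terms."""
        name: str
        entry_criteria: List[str]   # conditions that must hold before the step starts
        task: str                   # the work performed in the step
        validation: List[str]       # checks applied to the step's deliverables
        exit_criteria: List[str]    # conditions that must hold before the next step begins

    # Hypothetical example: a high-level design step
    hld_step = ETVXStep(
        name="High-level design",
        entry_criteria=["approved system requirements", "system architecture in place"],
        task="define the component's external and internal design",
        validation=["I0 inspection of the design document"],
        exit_criteria=["I0 exit with major defects resolved", "design document baselined"],
    )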

The divide-and-conquer approach of the waterfall process has several advantages. It enables more accurate tracking of project progress and early identification of possible slippages. It forces the organization that develops the software system to be more structured and manageable. This structural approach is very important for large organizations with large, complex development projects. It demands that the process generate a series of documents that can later be used to test and maintain the system (Davis et al., 1988). The bottom line of this approach is to make large software projects more manageable and to deliver them on time without cost overruns. Experience over the past several decades shows that the waterfall process is very valuable. Many major developers, especially those that were established early and are involved with systems development, have adopted this process. This group includes commercial corporations, government contractors, and governmental entities. Although a variety of names have been given to each stage in the model, the basic methodologies remain more or less the same. Thus, the system-requirements stage is sometimes called system analysis, customer-requirements gathering and analysis, or user needs analysis; the design stage may be broken down into high-level design and detail-level design; the implementation stage may be called code and debug; and the testing stage may include component-level test, product-level test, and system-level test.

Figure 2.1 shows an implementation of the waterfall process model for a large project. Note that the requirements stage is followed by a stage for architectural design. When the system architecture and design are in place, design and development work for each function begins. This consists of high-level design (HLD), low-level design (LLD), code development, and unit testing (UT). Despite the waterfall concept, parallelism exists because various functions can proceed simultaneously. As shown in the figure, the code development and unit test stages are also implemented iteratively. Since UT is an integral part of the implementation stage, it makes little sense to separate it into another formal stage. Before the completion of the HLD, LLD, and code, formal reviews and inspections occur as part of the validation and exit criteria. These inspections are called I0, I1, and I2 inspections, respectively. When the code is completed and unit tested, the subsequent stages are integration, component test, system test, and early customer programs. The final stage is release of the software system to customers.

Figure 2.1. An Example of the Waterfall Process Model


The following sections describe the objectives of the various stages from high-level design to early customer programs.

High-Level Design

High-level design is the process of defining the externals and internals from the perspective of a component. Its objectives are as follows (an illustrative sketch appears after the list):

  • Develop the external functions and interfaces, including:

    • external user interfaces
    • application programming interfaces
    • system programming interfaces: intercomponent interfaces and data structures.
  • Design the internal component structure, including intracomponent interfaces and data structures.
  • Ensure all functional requirements are satisfied.
  • Ensure the component fits into the system/product structure.
  • Ensure the component design is complete.
  • Ensure the external functions can be accomplished, that is, the "doability" of requirements.
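As a rough illustration of the first objective, the following sketch specifies an application programming interface for a hypothetical spool-manager component as it might be documented during high-level design; the component and operation names are invented for this example.

    from abc import ABC, abstractmethod

    class SpoolManagerAPI(ABC):
        """Hypothetical external interface of a component, fixed at high-level design."""

        @abstractmethod
        def submit_job(self, job_name: str, data: bytes) -> int:
            """Queue a job; returns a job identifier."""

        @abstractmethod
        def cancel_job(self, job_id: int) -> bool:
            """Cancel a queued job; returns True if the job was found and removed."""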

Low-Level Design

Low-level design is the process of transforming the HLD into more detailed designs from the perspective of a part (modules, macros, includes, and so forth). Its objectives are as follows (a continuation of the earlier sketch appears after the list):

  • Finalize the design of components and parts (modules, macros, includes) within a system or product.
  • Complete the component test plans.
  • Give feedback about HLD and verify changes in HLD.
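Continuing the hypothetical example, low-level design would refine that external interface into parts; the module split and internal data structure below are invented to show the level of detail involved.

    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class SpoolEntry:
        """Intracomponent data structure fixed at low-level design time (hypothetical)."""
        job_id: int
        job_name: str
        data: bytes
        state: str  # "queued", "printing", or "done"

    class SpoolQueueModule:
        """One part (module) of the component: owns the internal queue."""

        def __init__(self) -> None:
            self._entries: Dict[int, SpoolEntry] = {}
            self._next_id = 1

        def add(self, job_name: str, data: bytes) -> int:
            job_id = self._next_id
            self._next_id += 1
            self._entries[job_id] = SpoolEntry(job_id, job_name, data, "queued")
            return job_id

        def remove(self, job_id: int) -> bool:
            return self._entries.pop(job_id, None) is not None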

Code Stage

The coding portion of the process results in the transformation of a function's LLD to completely coded parts. The objectives of this stage are as follows:

  • Code parts (modules, macros, includes, messages, etc.).
  • Code component test cases.
  • Verify changes in HLD and LLD.

Unit Test

The unit test is the first test of an executable module. Its objectives are as follows:

  • Verify the code against the component's

    • high-level design and
    • low-level design.
  • Execute all new and changed code to ensure

    • all branches are executed in all directions,
    • logic is correct, and
    • data paths are verified.
  • Exercise all error messages, return codes, and response options.
  • Give feedback about code, LLD, and HLD.

The unit test level verifies limits, internal interfaces, and logic and data paths within a module, macro, or executable include. Unit testing is performed on nonintegrated code and may require scaffold code to construct the proper environment.
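The short example below is a sketch of these objectives under stated assumptions: a small invented routine is exercised so that its branches execute in both directions and its return values are checked, and a stub serves as the scaffold code for a dependency that is not yet integrated.

    import unittest

    def classify_severity(error_code, lookup):
        """Unit under test (hypothetical): maps an error code to a severity."""
        if error_code == 0:
            return "none"
        if lookup(error_code) >= 8:   # branch exercised in both directions below
            return "severe"
        return "minor"

    class StubLookup:
        """Scaffold code: stands in for a component that is not yet integrated."""
        def __init__(self, table):
            self.table = table
        def __call__(self, code):
            return self.table.get(code, 0)

    class ClassifySeverityTest(unittest.TestCase):
        def test_no_error(self):
            self.assertEqual(classify_severity(0, StubLookup({})), "none")

        def test_severe_branch(self):
            self.assertEqual(classify_severity(5, StubLookup({5: 9})), "severe")

        def test_minor_branch(self):
            self.assertEqual(classify_severity(7, StubLookup({7: 2})), "minor")

    if __name__ == "__main__":
        unittest.main()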

Component Test

Component tests evaluate the combined software parts that make up a component after they have been integrated into the system library. The objectives of this test are as follows (a brief sketch appears after the list):

  • Test external user interfaces against the component's design documentation (user requirements).
  • Test intercomponent interfaces against the component's design documentation.
  • Test application program interfaces against the component's design documentation.
  • Test function against the component's design documentation.
  • Test intracomponent interfaces (module level) against the component's design documentation.
  • Test error recovery and messages against the component's design documentation.
  • Verify that component drivers are functionally complete and at the acceptable quality level.
  • Test the shared paths (multitasking) and shared resources (files, locks, queues, etc.) against the component's design documentation.
  • Test ported and unchanged functions against the component's design documentation.
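To give these objectives a concrete flavor, the sketch below tests the integrated (hypothetical) spool-manager component against what its design documentation is assumed to specify: the application programming interface returns a positive job identifier, and an empty job name produces error message SPL001E. Both expectations are invented for illustration.

    import unittest

    class SpoolManager:
        """Hypothetical integrated component under test (driver build)."""
        def submit_job(self, job_name, data):
            if not job_name:
                raise ValueError("SPL001E: job name must not be empty")
            return 1  # simplified; a real driver would queue the job

    class SpoolManagerComponentTest(unittest.TestCase):
        def test_api_against_design_documentation(self):
            # Assumed design documentation: submit_job returns a positive job id.
            self.assertGreater(SpoolManager().submit_job("PAYROLL", b"data"), 0)

        def test_error_message_against_design_documentation(self):
            # Assumed design documentation: message SPL001E for an empty job name.
            with self.assertRaises(ValueError) as ctx:
                SpoolManager().submit_job("", b"")
            self.assertIn("SPL001E", str(ctx.exception))

    if __name__ == "__main__":
        unittest.main()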

System-Level Test

The system-level test phase comprises the following tests:

  • System test
  • System regression test
  • System performance measurement test
  • Usability tests

The system test follows the component tests and precedes the system regression test. The system performance test usually begins shortly after system testing starts and proceeds throughout the system-level test phase. Usability tests occur throughout the development process (e.g., prototyping during the design stages and formal usability testing during the system test period).

  • System test objectives

    • Ensure software products function correctly when executed concurrently and in stressful system environments.
    • Verify overall system stability when development activity has been completed for all products.
  • System regression test objectives

    • Verify that the final programming package is ready to be shipped to external customers.
    • Make sure the original functions work correctly after new functions have been added to the system.
  • System performance measurement test objectives (see the sketch after this list)

    • Validate the performance of the system.
    • Verify performance specifications.
    • Provide performance information to marketing.
    • Establish base performance measurements for future releases.
  • Usability test objective

    • Verify that the system contains the usability characteristics required for the intended user tasks and user environment.
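As a minimal sketch of establishing a base performance measurement for future releases (the workload and transaction routine are invented), one might time a fixed batch of representative transactions and record the resulting throughput.

    import time

    def run_transaction():
        """Stand-in for one unit of the customer's typical workload (hypothetical)."""
        sum(range(10_000))

    def measure_throughput(transactions=1_000):
        """Return transactions per second for a fixed batch, recorded as a baseline."""
        start = time.perf_counter()
        for _ in range(transactions):
            run_transaction()
        elapsed = time.perf_counter() - start
        return transactions / elapsed

    if __name__ == "__main__":
        print(f"Baseline throughput: {measure_throughput():.1f} transactions per second")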

Early Customer Programs

The early customer programs (ECP) include testing of the following support structures to verify their readiness:

  • Service structures
  • Development fix support
  • Electronic customer support
  • Market support
  • Ordering, manufacturing, and distribution

In addition to these objectives, a side benefit of having production systems installed in a customer's environment for the ECP is the opportunity to gather customer feedback so that developers can evaluate features and improve them for future releases. Areas of such data collection and user opinion include:

  • Product feedback: functions offered, ease of use, and quality of online documentation
  • Installability of hardware and software
  • Reliability
  • Performance (measure throughput under the customer's typical load)
  • System connectivity
  • Customer acceptance

As the preceding lists illustrate, the waterfall process model is a disciplined approach to software development. It is most appropriate for systems development characterized by a high degree of complexity and interdependency. Although expressed as a cascading waterfall, the model permits parallelism and some amount of iteration among process phases in actual implementation. During this process, the focus should be on the intermediate deliverables (e.g., design documents, interface rules, test plans, and test cases) rather than on the sequence of activities for each development phase. In other words, the process should be entity-based rather than step-by-step based. Otherwise it could become too rigid to be efficient and effective.
