The Waterfall Lifecycle Model


The waterfall model is the best known of all the software lifecycles, and its basic steps have been ingrained into the heads of countless IT students. The waterfall model was one of the first attempts to bring order to the chaotic world of software development, and it sought to bring predictability to software projects through the application of methods taken from the established engineering professions.

Despite its popularity, the model has some significant weaknesses that make it poorly suited to the rapid development of business software. This section reviews the waterfall approach to software development and discusses its strengths and weaknesses.

The Classic Waterfall Model

The waterfall model offers a linear approach to software development. Practitioners of the model diligently and methodically step through its distinct phases of analysis, design, coding, and testing.

Figure 3-1 shows the steps of the classic waterfall approach.

Figure 3-1. The waterfall lifecycle model.


The approach is document-driven, with the initial phases focusing on the creation of highly detailed requirements and design documents before any coding work commences. The phases of the waterfall model do not overlap. As the name implies, they cascade one into another.

Note

Contrary to many people's understanding of the waterfall lifecycle, the model does provide for feedback between the phases, making it possible to backtrack and undertake rework from a previous phase. However, backtracking is difficult, and essentially the waterfall model sees the project progressing in a linear fashion, with each phase building on the work of the previous phase.


Strengths and Weaknesses

The model does offer some considerable benefits over an ad hoc approach to development. It enforces a disciplined method on the project team, ensuring that requirements are duly considered at the start of the project and that extensive planning is carried out before resources are committed to development. To a degree, these are all good software engineering practices because they improve the team's understanding of the customer's needs.

Unfortunately, the waterfall lifecycle model also suffers from some inherent weaknesses:

  • No system is delivered until near the end of the schedule. This is high risk, since the system may have diverged from the customer's initial expectations.

  • Mistakes in the design, or missed requirements, are extremely costly to rectify in the later stages of the process, since the entire project must be backtracked to an earlier phase.

  • Testing is left until the last phase of the project. Defects detected at this late stage of the development are the most expensive to fix with the waterfall model.

  • Leaving testing to the end of the project also means the quality of the application being developed cannot be gauged until testing has been completed. This leaves it very late to address any quality concerns in either the design or implementation.

  • The model is document-intensive and devotes considerable resources to the production of specifications for each phase.

Many readers will have their own examples of projects conducted according to the disciplines of the waterfall model. One such project I worked on early in my career made the inefficiencies of the model abundantly clear, especially when the pressure to deliver intensified.

A Case Study

The project team was developing a shrink-wrapped hydrographic surveying system in C++ (this was in the days before Java). The team was small, but every member was well skilled in the use of the technology and knowledgeable in the practices of object-oriented development.

The team was under considerable pressure to deliver the product. The competition had stolen a lead with the latest release of its software, and the company was in catch-up mode. The project stakeholders were in a state of constant high stress: with every passing day that we didn't deliver, the competition captured more of the market. On the project, stress levels were high and tempers were short.

We were a dedicated group of developers who were well aware of the concerns of the stakeholders. We wanted to deliver quickly, but we also took pride in our work and wanted to produce a quality product.

Achieving both aims, we decided, called for a formal, disciplined approach. The biggest consumer of time on the project was rework; if we could reduce the amount of rework, we could reduce our time to market.

Working on the premise that documents are cheap, while skillfully crafted object-oriented code is expensive, we put the following process in place based on our knowledge of the software engineering best practices of the day.

  1. Discuss the requirements with one of the product specialists at the team's disposal.

  2. Have the developer sit down and carefully document how the new functionality should operate.

  3. Present the functional specification to the product specialists for approval.

    The specialists sign off the document if they are happy with the content; if not, step 1 is revisited and the document revised.

  4. Once the specification has been accepted by the product specialists, the developer designs, implements, and delivers the new functionality.

  5. Finally, the product specialists take on the role of testers and verify the functionality of the delivered product feature.

The approach seemed both sound and diligent. Having the product specialists accept the requirements specification meant the development team could focus on implementing the exact functionality requested. Unfortunately, things didn't progress as smoothly as anticipated.

One of my main tasks was the design and implementation of the system's real-time navigational displays. The purpose of one particular display was to provide the helmsman with a visual cue as to whether the vessel was maintaining the correct course. After creating a functional specification for the navigational display, work began on the implementation.

Weeks later, the display was complete and a new version of the software was released to the product specialists for formal testing. At this point the problems started.

The product specialists were not happy. Yes, they agreed the helmsman's display met the original requirements, and no, they could not find any defects. However, it was not what they wanted.

Seeing the display in action, they realized that requirements had been missed and that the display, as built, would prove unusable for navigation out on the water. They also didn't like the flat, two-dimensional look of the display and suggested something more three-dimensional. Why hadn't they said so at the time?

This feedback was very frustrating. I had worked extra hours to get the job done on time and to make sure the display's behavior was exactly as the product specialists had requested. From my perspective, I had achieved my goal yet had failed to deliver functionality that met their needs.

To avoid repeating the problem, I took a different approach. First, I set aside the requirements document, instead taking a day to restructure the code so it roughly incorporated some of the changes. This next version was far from production quality but demonstrated some of the main new ideas. I went back to the product group with the new version, explaining that the software was not stable and was only a rough prototype. They liked the revamped display but suggested some further changes.

Over the course of the next week, I went through the cyclic process of revising and demonstrating the display. Quite soon, the display evolved to the point where the requirements were agreed and effort then went into bringing the software up to a production level.

Once the final version was ready, the product group documented the display's functionality as part of the user guide, leaving me free to get on with the next system feature.

From the experience, I learned a few things:

  • In this case, writing software was quicker than writing a requirements specification.

  • People like to see things working. Few of us can appreciate the nuances of the final system from reading a description in a document.

  • The approach of involving the end users throughout the process, and getting their input, made for a better product. The final version of the display looked much better than my first effort.

  • If the display had been demonstrated when it was only half complete, the problems would have been picked up much earlier, saving a lot of extra work.

Despite all of this, a heroic team effort won through and the project was a success. The architecture we produced for the system proved a stable platform and served as the basis for other profitable products. In the end, we came through, but the questions arose: Is there a better way, and could we have got to market any sooner?

The answer to both questions is yes. We could see that the key to successful future projects lay in an approach that allowed us to factor in feedback from the product specialists at every step of the development process. For that, we needed to ditch our waterfall variant in favor of something more adaptive: an iterative approach to development.



    Rapid J2EE™ Development: An Adaptive Foundation for Enterprise Applications
    ISBN: 0131472208
    Year: 2005
    Pages: 159
    Author: Alan Monnox
