The Sequential Process

It has become fashionable to blame many problems and failures in software development on the sequential, or waterfall, process depicted in Figure 4-1. This is rather surprising because at first this method seems like a reasonable approach to system development.

Figure 4-1. The sequential process

A Reasonable Approach

Many engineering problems are solved using a sequential process, which typically goes through the following five steps:

  1. Completely understand the problem to be solved, its requirements, and its constraints. Capture them in writing and get all interested parties to agree that this is what they need to achieve.

  2. Design a solution that satisfies all requirements and constraints. Examine this design carefully and make sure that all interested parties agree that it is the right solution.

  3. Implement the solution using your best engineering techniques.

  4. Verify that the implementation satisfies the stated requirements.

  5. Deliver. Problem solved!

That is how skyscrapers and bridges are built. It's a rational way to proceed, but only because the problem domain is relatively well known; engineers can draw on hundreds of years of experimentation in design and construction.

By contrast, software engineers have had only a few decades to explore their field. Software developers worked very hard, particularly in the seventies and eighties, to accumulate experimental results in software design and construction. In 1980 I would have sworn that the sequential process was the one and only reasonable approach.

If the sequential process is ideal, however, why aren't the projects that use it more successful? There are many reasons:

  • We made the wrong assumptions.

  • The context of software development is somewhat different from that of other engineering disciplines.

  • We have failed to incorporate some human factors.

  • We have tried to stretch an approach that works in certain well-defined circumstances beyond what it can bear.

  • We are still only in the exploratory phase of software engineering. We do not have the experience of hundreds of years of trial and error that makes building a bridge appear to be a mechanical process. This is the primary reason.

Let us review two fundamentally wrong assumptions that often hinder the success of software projects.

Wrong Assumption 1: Requirements Will Be Frozen

Notice that in the description of the sequential process we assume in step 1 that we can capture the entire problem at the beginning. We assume we can nail down all the requirements in writing in an unambiguous fashion and begin the project with a stable foundation. Despite all our efforts, though, this almost always proves to be impossible. Requirements will change. We must accept this fact. Unless we are solving a trivial problem, new or different requirements will appear. Requirements change for many reasons. Let's look at a few:

  • The users change.

    The users' needs cannot be frozen in time. This is especially true when the development time is measured not in weeks or months but in years. Users see other systems and other products, and they want some of the features they see. Their own work environment evolves, and they become better educated.

  • The problem changes.

    After the system is implemented or while it is being implemented, the system itself affects the perspective of users. Trying out features or seeing them demonstrated is quite different from reading about them. As soon as the end users see how their intentions have been translated into a system, the requirements change. In fact, the one point when users know exactly what they want is not two years before the system is ready but rather a few weeks or months after delivery of the system when they are beyond the initial learning phase. This is known as the IKIWISI effect: "I'll Know It When I See It." [1]

    [1] Origin uncertain, sometimes attributed to U.S. Supreme Court Justice Potter Stewart; Barry Boehm used this acronym at a workshop on software architecture at the University of Southern California in 1997.

    Users don't really know what they want, but they know what they do not want when they see it. Therefore, efforts to detail, capture, and freeze the requirements may ultimately lead to the delivery of a perfect system with respect to the requirements but the wrong system with respect to the real problem at the time of delivery.

  • The underlying technology changes.

    New software or hardware techniques and products emerge, and you will want to exploit them. On a multiyear project, the hardware platform bid at the beginning of the project may no longer be manufactured at delivery time.

  • The market changes.

    The competition might introduce better products to the market. What is the point of developing the perfect product relative to the original spec if you end up with the wrong product relative to what the marketplace expects when you are finally finished?

  • We cannot capture requirements with sufficient detail and precision.

    Formal methods have held the promise of a solution, but at the beginning of the third millennium, they have not gained significant acceptance in the industry except in small, specialized domains. They are hard to apply and very user-unfriendly. Try teaching temporal logic or colored Petri nets to an audience of bank tellers and branch managers so that they can read and approve the specification of their new system.

Wrong Assumption 2: We Can Get the Design Right on Paper before Proceeding

The second step of the sequential process assumes that we can confirm that our design is the right solution to the problem. By "right" we imply all the obvious qualities: correctness, efficiency, feasibility, and so on. With complete requirements tracing, formal derivation methods, automated proof, generator techniques, and design simulation, some of these qualities can be achieved. Few of these techniques are readily available to practitioners, however, and many of them require that you begin with a formal definition of the problem. You can accumulate pages and pages of design documentation and hundreds of blueprints and spend weeks in reviews, only to discover late in the process that the design has major flaws that cause serious breakdowns.

Software engineering has not reached the level of other engineering disciplines (and perhaps it never will) because the underlying "theories" are weak and poorly understood, and the heuristics are crude. Software engineering may be misnamed. At various times it more closely resembles a branch of psychology, sociology, philosophy, or art than engineering. Relatively straightforward laws of physics underlie the design of a bridge, but there is no strict equivalent in software design. Software is "soft" in this respect.

Bringing Risks into the Picture

The sequential, or waterfall, process does work. It has worked fine for me on small projects lasting from a few weeks to a few months, on projects in which we could clearly anticipate what would happen, and on projects in which all hard aspects were well understood. For projects having little or no novelty, you can develop a plan and execute it with little or no surprise. If the current project is somewhat like the one you completed last year (and the one the year before), and if you use the same people, the same tools, and the same design, the sequential approach will work well.

The sequential process breaks down when you tackle projects that have a significant level of novelty, unknowns, and risks. You cannot anticipate the difficulties you may encounter, let alone how you will address them. The only thing you can do is to build some slack into the schedule and cross your fingers.

The absence of fundamental "laws of software" to match the fundamental laws of physics that support other engineering disciplines, and the pace at which software evolves, make it a risky domain. Techniques for reinforcing concrete have not changed dramatically since my grandfather used them in an engineering bureau in the early 1920s. Software tools, techniques, and products, on the other hand, have a lifetime of a few years at best. So every time we try to build a system that is a bit more complicated, somewhat larger, or a little more challenging, we are in dangerous and risky territory, and we must take this into account.

That's why we bring risk analysis into the picture.

Stretching the Time Scale

If you stretch what works for a three-month project to fit a three-year project, you expose the project not only to the changing contexts we have discussed but also to other subtle effects related to the people involved. Software developers who know that they will see tangible results within the next two to three months can remain well focused on the real outcome. Very quickly, they will get feedback on the quality of their work. If small mistakes are discovered along the way, the developers won't have to go very far back in time to correct them.

But imagine the mindset of developers in the middle of the design phase of a three-year project. The target is to finish the design within four months. In a sequential process, the developers may not even be around to see the final product up and running. Progress is measured in pages or diagrams and not in operational features. There is nothing tangible, nothing to get the adrenaline flowing.

There is little feedback on the quality of the current activity because defects will be found later, during integration or test, perhaps 18 months from now. The developers have few opportunities to improve the way they work. Moreover, strange things discovered in the requirements text mean that developers must revisit discussions and decisions made months ago. Is it any wonder that they have a hard time staying motivated? The original protagonists are no longer on the project, and the contract with the customer is as hard and inflexible as a rock.

The developers have only one shot at each kind of activity, with little opportunity to learn from their mistakes. You have one shot at design, and it had better be good. You say you've never designed a system like this? Too bad! You have one shot at coding, and it had better be good. You say this is a new programming language? Well, you can work longer hours to learn its new features. There's only one shot at testing, and it had better be a no-fault run. You say this is a new system and no one really knows how it's supposed to work? Well, you'll figure it out. If the project introduces new techniques or new tools or new people, the sequential process gives you no latitude for learning and improvement.

Pushing Paperwork on the Shelves

In the sequential process, the goal of each step except the last one is to produce and complete an intermediate artifact (often a paper document) that is reviewed, approved, frozen, and then used as the starting point for the next step. In practice, sequential processes place an excessive emphasis on the production and freezing of documents. Some limited amount of feedback to the preceding step is tolerated, but feedback on the results of earlier steps is seen as disruptive. This is related to the reluctance to change requirements and to the loss of focus on the final product that is often seen during long projects.

Volume-Based versus Time-Based Scheduling

Often, timeliness is the most important factor in the success of a software project. In many industries, delivery of a product on time and with a short turnaround for new features is far more important than delivery of a complete, full-featured, perfect system. To achieve timeliness, you must be able to adjust the contents dynamically by dropping or postponing some features to deliver incremental value on time. With a linear approach, you do not gain much on the overall schedule if you decide in the middle of the implementation to drop feature X. You have already expended the time and effort to specify, design, and code the feature. That's why this model isn't suitable when a company wants to work with schedules that are time-based (for example, in three months we can do the first three items on your list, and three months later we'll have the next two, and so on) and not volume-based (it will take us nine months to do everything that you want).

For these reasons and a few others that we will cover later, software organizations have tried another approach.


