The Root Causes of Project Success and Failure


The first step in resolving any problem is to understand the root causes. Fortunately, the Standish Group survey went beyond the assessment phase and asked survey respondents to identify the most significant factors that contributed to projects rated "success," "challenged" (late and did not meet expectations), and "impaired" (canceled), respectively.

Here we discover that the emphasis in this book on requirements management is not frivolous or arbitrary; it's a response to accumulating evidence that many of the most common, most serious problems associated with software development are related to requirements. The 1994 Standish Group study noted the three most commonly cited factors that caused projects to be "challenged":

  1. Lack of user input: 13 percent of all projects

  2. Incomplete requirements and specifications: 12 percent of all projects

  3. Changing requirements and specifications: 12 percent of all projects

Thereafter, the data diverges rapidly. Of course, your project could fail because of an unrealistic schedule or time frame (4 percent of the projects cited this), inadequate staffing and resources (6 percent), inadequate technology skills (7 percent), or various other reasons. Nevertheless, to the extent that the Standish figures are representative of the overall industry, it appears that at least a third of development projects run into trouble for reasons that are directly related to requirements gathering, requirements documentation, and requirements management.

Although the majority of projects do seem to experience schedule/budget overruns, if not outright cancellation, the Standish Group found that 9 percent of the projects in large companies were delivered on time and on budget; 16 percent of the projects in small companies enjoyed a similar success. That leads to an obvious question: What were the primary "success factors" for those projects? According to the Standish study, the three most important factors were

  1. User involvement: 16 percent of all successful projects

  2. Executive management support: 14 percent of all successful projects

  3. Clear statement of requirements: 12 percent of all successful projects

Other surveys have even more striking results. For example, the European Software Process Improvement Training Initiative (ESPITI) [1995] conducted a survey to identify the relative importance of various types of software problems in industry. The results of this large-scale survey, based on 3,800 responses, are indicated in Figure 1-1.

Figure 1-1. Largest software development problems by category. (Data derived from ESPITI [1995].)


The two largest problems, appearing in about half the responses, were

  1. Requirements specifications

  2. Managing customer requirements

Again, corroborating the Standish survey, coding issues were a "nonproblem," relatively speaking.

It seems clear that requirements deserve their place as a leading root cause of software problems, and our continuing personal experiences support that conclusion. Let's take a look at the economic factors associated with this particular root cause.

The Frequency of Requirements Errors

Both the Standish and the ESPITI studies provide qualitative data indicating that respondents consider requirements problems a greater source of risk to application development than other issues. But do requirements problems affect the delivered code?

Table 1-1 summarizes a 1994 study by Capers Jones that provides data regarding the likely number of "potential" defects in a development project and the typical "efficiency" with which a development organization removes those defects through various combinations of testing, inspections, and other strategies.

Table 1-1. Defect Summary

Defect Origins     Defect Potentials    Removal Efficiency    Delivered Defects
Requirements       1.00                 77%                   0.23
Design             1.25                 85%                   0.19
Coding             1.75                 95%                   0.09
Documentation      0.60                 80%                   0.12
Bad fixes          0.40                 70%                   0.12
Total              5.00                 85%                   0.75

Source: Data derived from Jones [1994].

The Defect Potentials column normalizes the defects such that each category contributes to the total potential of 5.00, an arbitrary normalization that does not imply anything about the absolute number of defects. The Delivered Defects column, referring to what the user sees, is normalized in the same way.

Requirements errors are the largest category of delivered defects, contributing approximately one-third of the total. Thus, this study provides yet another confirmation that requirements errors are the most common category of systems development errors.
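The arithmetic behind Table 1-1 is simply delivered defects = defect potential × (1 − removal efficiency). The sketch below applies that formula using potential and efficiency values as commonly cited from Jones [1994]; treat the specific numbers as illustrative assumptions rather than the study's authoritative figures.

```python
# Delivered defects = defect potential * (1 - removal efficiency).
# Potentials are normalized so that all categories sum to 5.00, as in Table 1-1.
# The specific values below are assumptions based on commonly cited Jones data.
defects = {
    # category: (defect potential, removal efficiency)
    "Requirements":  (1.00, 0.77),
    "Design":        (1.25, 0.85),
    "Coding":        (1.75, 0.95),
    "Documentation": (0.60, 0.80),
    "Bad fixes":     (0.40, 0.70),
}

delivered = {cat: pot * (1 - eff) for cat, (pot, eff) in defects.items()}
total = sum(delivered.values())

for cat, d in sorted(delivered.items(), key=lambda kv: -kv[1]):
    print(f"{cat:13s} {d:.2f}")
print(f"{'Total':13s} {total:.2f}")
```

Under these values, requirements lead the delivered-defect column (about 0.23 of roughly 0.75 total), which is the one-third share cited above: high removal efficiency during coding means coding errors rarely survive, while requirements errors slip through.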

The High Cost of Requirements Errors

If requirements errors could be fixed quickly, easily, and economically, we still might not have a huge problem. This last statistic delivers the final blow: just the opposite tends to be true. Studies performed at companies including GTE, TRW, IBM, and HP have measured and assigned costs to errors occurring at various phases of the project lifecycle. Davis [1993] summarized a number of these studies, as Figure 1-2 illustrates. Although these studies were run independently, they all reached roughly the same conclusion: if a unit cost of one is assigned to the effort required to detect and repair an error during the coding stage, then the cost to detect and repair an error during the requirements stage is five to ten times less, and the cost to detect and repair an error during the maintenance stage is twenty times more.

Figure 1-2. Relative cost to repair a defect at different lifecycle phases. (Data derived from Davis [1993].)


Altogether, the figure illustrates that as much as a 200:1 cost savings results from finding errors in the requirements stage versus finding errors in the maintenance stage of the software lifecycle.

While 200:1 may be the extreme case, it's easy to see that a multiplicative factor is at work: many of these errors are not detected until well after they have been made.
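The 200:1 figure follows from the endpoints of those multipliers: a requirements-stage fix at the cheap end of "five to ten times less" than a coding-stage fix, against a maintenance-stage fix at twenty times more. A minimal sketch of that arithmetic (the phase multipliers are assumptions drawn from the discussion above, not Davis's exact published values):

```python
# Relative cost to detect and repair a defect, normalized to coding = 1.0.
# Multipliers are assumptions based on the Davis [1993] summary quoted above.
repair_cost = {
    "requirements": 1.0 / 10,  # "five to ten times less" -- optimistic end
    "coding": 1.0,
    "maintenance": 20.0,       # "twenty times more"
}

ratio = repair_cost["maintenance"] / repair_cost["requirements"]
print(f"maintenance vs. requirements: {round(ratio)}:1")
```

At the conservative end of the range (five times less than coding), the ratio is still 100:1, which is why the text says "as much as" 200:1.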

If you've read this section carefully, you may have noticed that we muddled two issues together in Figure 1-2: the relative costs of various categories of errors and the cost of fixing them at different stages in the software lifecycle. For example, the item "requirements time" literally means all errors that were detected and fixed during the period officially designated as "requirements definition." But since it's unlikely that substantial technical design or programming activities will have been carried out at this early stage (ignoring, for the moment, the early design or prototyping activities that might be taking place), the mistakes that we detect and fix at this stage are requirements errors.

But the errors discovered during the design of a development project could fall into one of two categories: (1) errors that occurred when the development staff created a technical design from a correct set of requirements or (2) errors that should have been detected as requirements errors somewhat earlier in the process but that somehow "leaked" into the design phase of the project. It's the latter category of errors that turns out to be particularly expensive, for two reasons.

  1. By the time the requirements-oriented error is discovered, the development group will have invested time and effort in building a design from those erroneous requirements. As a result, the design will probably have to be thrown away or reworked.

  2. The true nature of the error may be disguised; everyone assumes that they're looking for design errors during the testing or inspection activities that take place during this phase, and considerable time and effort may be wasted until someone says, "Wait a minute! This isn't a design mistake after all; we've got the wrong requirements."

Confirming the details of the requirements error means tracking down the user who provided the requirements details in the first place. However, that person may not be readily available, may have forgotten the requirements instruction to the development team or the rationale for identifying the original requirements, or may have just had a change of mind. Similarly, the development team member who was involved in that stage of the project (often a person with the title of "business analyst" or "systems analyst") may have moved on to a different project or may suffer a similar form of short-term amnesia. All of this involves a certain amount of "spinning of wheels" and lost time.

These problems associated with "leakage" of defects from one lifecycle phase to the next are fairly obvious when you think about them, but most organizations haven't investigated them very carefully. One organization that has done so is Hughes Aircraft. A study by Snyder and Shumate [1992] follows the leakage phenomenon for a large collection of projects Hughes has conducted over the past 15 years. The study indicates that 74 percent of the requirements-oriented defects were discovered during the requirements analysis phase of the project, that is, the formal phase during which customers and systems analysts discuss, brainstorm, negotiate, and document the project requirements. That's the ideal time and place to discover such errors, and it's likely to be the most inexpensive time and place. However, the study also shows that 4 percent of the requirements defects "leak" into the preliminary, or high-level, design of the project and that 7 percent leak further into detailed design. The leakage continues throughout the lifecycle, and a total of 4 percent of the requirements errors aren't found until maintenance, when the system has been released to the customers and is presumably in full-scale operation.
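To see why even a small leakage tail matters, we can combine the Hughes leakage percentages with per-phase repair-cost multipliers. The multipliers below, and the grouping of the phases the text does not break out into a single "later phases" bucket, are illustrative assumptions:

```python
# Expected relative cost of a requirements defect = sum over phases of
# (fraction of defects found in that phase) * (repair cost in that phase).
leakage = {  # from the Snyder and Shumate [1992] figures quoted above
    "requirements analysis": 0.74,
    "preliminary design":    0.04,
    "detailed design":       0.07,
    "later phases":          0.11,  # remainder, not broken out in the text
    "maintenance":           0.04,
}
cost = {  # relative repair-cost multipliers -- illustrative assumptions
    "requirements analysis": 0.1,
    "preliminary design":    0.5,
    "detailed design":       1.0,
    "later phases":          5.0,
    "maintenance":          20.0,
}

expected = sum(leakage[p] * cost[p] for p in leakage)
print(f"expected relative cost per requirements defect: {expected:.2f}")
print(f"vs. {cost['requirements analysis']:.2f} if every defect "
      f"were caught during requirements analysis")
```

Under these assumptions, the 4 percent of defects that leak all the way to maintenance accounts for more than half of the expected repair cost, which is exactly the multiplicative effect described above.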

Thus, depending on when and where a defect is discovered in a software application development project, we're likely to experience a cost multiplier of 50 to 100 times. The reason is that in order to repair the defect, we are likely to incur costs in some or all of the following areas:

  • Respecification.

  • Redesign.

  • Recoding.

  • Retesting.

  • Change orders (telling users and operators to replace a defective version of the system with the corrected version).

  • Corrective action (undoing whatever damage may have been done by erroneous operation of the improperly specified system, which could involve sending refund checks to angry customers, rerunning computer jobs, and so on).

  • Scrap (including code, design, and test cases that were carried out with the best of intentions but then had to be thrown away when it became clear they were based on incorrect requirements).

  • Recall of defective versions of shrink-wrapped software and associated manuals from users. (Since software is now embedded in products ranging from digital wristwatches to microwave ovens to automobiles, the recall could include both tangible products and the software embedded within them.)

  • Warranty costs.

  • Product liability (if the customer sues for damages caused by the defective software).

  • Service costs for a company representative to visit a customer's field location to reinstall the new software.

  • Documentation.


Managing Software Requirements: A Use Case Approach
ISBN: 032112247X
Year: 2003
Pages: 257