Important Caveats


At this point, it is important to step back for a moment and consider the limitations of this model. I have made many implicit assumptions along the way, and now I must make them explicit.

  • Let's begin with what I mean by "success." Remember, I said I would define success as an outcome above the median of the lognormal distribution, which would mean that about half of our projects would be successful.

    But the Standish report says that four out of five projects fail. Does this mean that projects are so constrained, and therefore so difficult, that failure is the natural result? Perhaps. Many software development projects are doomed the instant the ink dries on the project plan. But I think there is more going on than that.

    I have always had a problem with the Standish report, because I think it overstates the case, and in so doing it trivializes the real problem. If we were to take all original project plans and then apply our four base metrics to assess the projects at their conclusion, Standish would probably be right. And the lognormal distribution seems to support this scenario. But do we really have four failures for every success?

    Here is what I think really happens: Along the way, as a project progresses, management realizes that the original goals were too aggressive, or the developers were too optimistic, or that they really didn't understand the problem. But now the project has incurred costs, so that scrapping it would seem wasteful and impractical. So instead the project is redefined, and the goals are reset. This may involve scaling back the feature set, deferring some things to a subsequent release. Sometimes, especially if the team discovers problems near the end of the development lifecycle, it will sacrifice quality and ship the product with too many defects. And even then, the team is likely to exceed its schedule and budget. But does this mean that the project is a failure? Not necessarily.

    I maintain that lots of these projects fall into the "moderately successful" bucket, and some into the "only somewhat unsuccessful" bucket. So, as in everything else in the world, we revise expectations (usually downward) as we go, so that when we are done we can declare victory. This is important both politically and psychologically. It avoids what the psychologists call cognitive dissonance. No one likes to fail, and you can always salvage something. So we tend to gently revise history and "spin" actual results. In reality, the Standish metrics apply only if you use the original project plan as a measuring stick. But no one actually does. In this context, having about half the projects judged "successful" is the result.

  • The parameters scope, quality, speed, and frugality are not all independent of each other. For example, as a project slips and takes more time, it also incurs increased cost because of the additional resources consumed, so both speed and frugality tend to suffer in parallel. You could try to offset one with the other, for example by spending more money to hire more people and go faster. But, as Brooks so clearly pointed out almost 30 years ago[10], adding people to a software project usually has the effect of slowing it down! If you wanted to make this kind of trade-off, counter-intuitively you would do better to spend less money per unit time by having fewer people and going more slowly. You might not even lose that much time, because, as Brooks pointed out, smaller teams tend to be more efficient. (A toy numerical sketch of this effect appears after this list of caveats.)

    [10] See Brooks, Frederick P. The Mythical Man-Month: Essays on Software Engineering, Second Edition (Boston: Addison-Wesley, 1995).

  • The different parameters do not have perfectly equal impact. Time, or the inverse of speed, appears to play a more critical role than the other three, although this is always open to debate. Typically, managers resist reducing scope and quality, and they are always in a big hurry. From their point of view, the only parameter they can play with is resources. So often they opt to throw money at the problem. This usually fails, because they don't take the time to apply the money intelligently and instead spend it in ineffective ways. This is exactly Brooks' point. In the end, he said, more projects fail for lack of time than for all other reasons combined. He was right then, and I believe he is still right.

  • In general, you cannot trade off the four parameters, one against another, at least not in large doses. That is, you cannot make up for a major lacuna in any of them by massively increasing one or more of the others. Projects seem to observe a law of natural balance; if you try to construct a base in which any one side is way out of proportion in relation to the others, you will fail. That is why I opted to assume our base was a square, with all sides (parameters) conceptually on a somewhat equal footing. I acknowledge that you can adjust the sides up and down in the interest of achieving equivalent area, but caution against the notion that you can do this indiscriminately. Again, you can't increase one parameter arbitrarily to solve problems in one or more of the other parameters. Max Wideman likes to think of the base as a "rubber sheet." You can pull on one corner and adjust the lengths, but eventually the sheet will tear. Geometrically, of course, one side cannot be longer than the sum of the other three sides, because then the quadrilateral would not "close."

  • To some extent, I have ignored the most important factor in any software development project: the talent of the people involved. Over and over again, I have seen that it is not the sheer number of people on a team that matters but rather their skills, experience, aptitude, and character. Managing team dynamics and matching skills to specific project tasks are topics beyond the scope of this chapter. However, the pyramid's volume to some degree corresponds to the team's capabilities.

  • We should all be careful not to specify product quality based solely on the absence of defects. Quality needs to be defined more generally as "fitness for use." A defect-free product that doesn't persuasively address an important problem is by and large irrelevant and cannot be classified as "high quality."

  • What about iterative development? Unfortunately, this treatment looks at the project as a "one shot," which goes against everything we believe in with respect to iterative development. But perhaps the unusually high failure rate documented by Standish is caused by a lack of iterative development. That is, by starting with an unrealistic plan and rigidly adhering to it throughout the project, despite new data indicating we should do otherwise, we bring about our own failures.

    However, if we are smart enough to use an iterative approach, then we can suggest a workable model. We start out with a pyramid of a certain volume and altitude during inception, based on our best knowledge of the team and the unknowns at that point. As we move into the next phase, our pyramid can change both its volume and its shape. The volume might shift as we augment or diminish the team's capability, or as we learn things that help us mitigate risks. This is a natural consequence of iterative development. In addition, the shape of the pyramid may change as we adjust one or more sides of the base by reducing scope, adding resources, taking more time, or relaxing the quality standard a bit, or by making changes in the opposite direction. This should happen at each of the phase boundaries; our goal should be to increase the altitude each and every time. As the project moves through the four phases of iterative development, we should see our pyramid not only increase in volume but also grow progressively taller as we reduce risks, by whatever means necessary. If this does not happen through an increase in volume, we must accomplish it by decreasing the base area. (The short sketch following this list works through the volume, base-area, and altitude arithmetic.)

  • The issue of whether projects follow the lognormal probability distribution is debatable. I agree with Pascal that it makes more sense than a standard normal distribution. Here's why.

    The normal distribution occurs when you add together many small effects that influence the final result. Often we say colloquially: "Some things will go better than we planned, and some things worse, and in the end it averages out." Implicit here is the notion of symmetry, that is, the idea that it is equally likely that an effect will be greater than or less than its average. These two assumptions, addition and symmetry, make the normal distribution symmetrical around its central value and give it tails out to infinity in both directions, a reflection of the low probability of having all the effects go in one direction or the other. So long as we add "symmetrical" things to produce a result, the normal distribution is a very solid concept; in fact, something called the Central Limit Theorem virtually guarantees it, even if the constituent effects themselves are not normally distributed.

    The normal distribution, or "bell curve," has become part of our collective mindset. It dominates every probability and statistics course taught today; it is intuitive and has lots of easy-to-calculate characteristics. But in life, results don't always come about from the cumulative addition of their constituent causes. So we need to be careful about applying the standard normal distribution everywhere without examining the underlying causes. Despite its amazingly wide applicability, it is a mistake to just assume "standard normal" applies universally.

    The lognormal distribution occurs when you multiply together many small effects that influence the final result. Note that there is a crucial difference between adding lots of small things and multiplying lots of small things. Obtaining a large result by multiplication requires that many of the factors be "large," while only one factor needs to be close to zero for the result to be close to zero. Hence there is a lack of symmetry, and the distribution is skewed toward the low end. It strikes me that many composite events in life may be better simulated by a lognormal distribution: to get a large positive result, lots of things have to go right; to get a small result (a negative outcome, so to speak), one needs to have only one thing go very wrong. And even one "zero" will make the result zero, independent of all the other factors combined. In the case of project management, my experience tells me that nature will tend to distribute outcomes lognormally rather than normally. (A brief simulation following this list makes the additive-versus-multiplicative contrast concrete.)

  • Finally, the conservation law expressed as a constant-volume pyramid is just a model. It provides a convenient visualization of the phenomenon, but it is a guess, and the simplest geometric model I could come up with. To determine whether it reflects reality, we'd need to examine empirical data.
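
Brooks' point about team size is worth one concrete, if crude, illustration. The little sketch below is my own toy model, not a formula from The Mythical Man-Month: it simply assumes that every pair of people on a team opens a communication channel and that each channel eats a fixed number of hours per week. The figures (40 productive hours per person, 4 hours per channel) are invented for illustration only.

    # A toy model of communication overhead (an assumption for illustration,
    # not Brooks' own arithmetic): every pair of people adds a channel, and
    # each channel costs the team a fixed number of hours per week.

    def effective_hours(team_size, hours_per_person=40.0, hours_per_channel=4.0):
        """Productive hours per week left after pairwise coordination overhead."""
        channels = team_size * (team_size - 1) / 2      # n(n-1)/2 pairs of people
        return max(team_size * hours_per_person - channels * hours_per_channel, 0.0)

    if __name__ == "__main__":
        for n in (3, 6, 10, 15, 20):
            print(f"{n:2d} people -> {effective_hours(n):6.1f} productive hours/week")

With these made-up numbers, real output peaks at about ten people and then declines; the twenty-person team gets less done in a week than the three-person team. The exact peak is an artifact of the parameters, but the shape of the curve is the point: beyond some size, adding people buys less and less, and eventually costs time.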
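
To make the pyramid bookkeeping in the balance and iteration caveats concrete, here is a minimal sketch. It takes the volume to stand for the team's capability, as noted above, and treats a taller pyramid as the goal of each phase. It also accepts the base area as a given number rather than deriving it from the four side lengths (which by themselves do not determine a quadrilateral's area), and every number in it is invented.

    # A minimal sketch of the pyramid model: V = (1/3) * A * h, so for a fixed
    # volume (capability), shrinking the base area raises the altitude.

    def base_closes(sides):
        """A quadrilateral closes only if no side exceeds the sum of the other three."""
        return all(s < sum(sides) - s for s in sides)

    def altitude(volume, base_area):
        """Invert V = (1/3) * base_area * altitude."""
        return 3.0 * volume / base_area

    if __name__ == "__main__":
        balanced = [3.0, 3.0, 3.0, 3.0]        # scope, quality, speed, frugality
        lopsided = [10.0, 1.0, 1.0, 1.0]       # one side way out of proportion
        print(base_closes(balanced), base_closes(lopsided))   # True False
        capability = 12.0                      # fixed volume for this team
        print(altitude(capability, base_area=9.0))   # 4.0
        print(altitude(capability, base_area=6.0))   # 6.0: smaller base, taller pyramid

The closure test is the "rubber sheet" limit in numerical form; the altitude calculation is the phase-boundary adjustment described above: if the volume cannot grow, the only way to get taller is to shrink the base.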
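
Finally, the additive-versus-multiplicative argument is easy to check for yourself. The short simulation below draws the same small, symmetric effects and combines them both ways; the range of the effects (0.8 to 1.2) and the counts are arbitrary choices, and only the resulting shapes matter.

    # Sum versus product of the same small, symmetric effects (illustrative only).
    import random
    import statistics

    random.seed(1)
    n_projects, n_effects = 100_000, 30

    sums, products = [], []
    for _ in range(n_projects):
        effects = [random.uniform(0.8, 1.2) for _ in range(n_effects)]
        sums.append(sum(effects))                # additive world
        product = 1.0
        for e in effects:
            product *= e                         # multiplicative world
        products.append(product)

    print("additive:       mean %.2f  median %.2f"
          % (statistics.mean(sums), statistics.median(sums)))
    print("multiplicative: mean %.2f  median %.2f"
          % (statistics.mean(products), statistics.median(products)))

In the additive case the mean and the median coincide, the familiar symmetric bell. In the multiplicative case the mean sits noticeably above the median: the typical outcome lags the average, and a long tail of rare large wins pulls the average up. And, as noted above, a single near-zero factor would sink the product no matter what the other factors do. That is the lognormal signature.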

Although it is long, this list of caveats does not negate the value of the model; I think its predictions are valid and consistent with my previous experience. Indeed, many midcourse corrections that teams make during a development project to improve their probability of success turn out to be mere Band-Aids and don't come close to addressing the real issues. As a profession, we have demonstrated over and over again that to improve your chances of success substantially, you need to do more than relax a single constraint by 10 percent, and this model underscores that point. Therein lies its greatest value; I believe it represents a fundamental truth.



