28.3. Estimating a Construction Schedule


Managing a software project is one of the formidable challenges of the twenty-first century, and estimating the size of a project and the effort required to complete it is one of the most challenging aspects of software-project management. The average large software project is one year late and 100 percent over budget (Standish Group 1994, Jones 1997, Johnson 1999). At the individual level, surveys of estimated vs. actual schedules have found that developers' estimates tend to have an optimism factor of 20 to 30 percent (van Genuchten 1991). This has as much to do with poor size and effort estimates as with poor development efforts. This section outlines the issues involved in estimating software projects and indicates where to look for more information.


Estimation Approaches

You can estimate the size of a project and the effort required to complete it in any of several ways:

Further Reading

For further reading on schedule-estimation techniques, see Chapter 8 of Rapid Development (McConnell 1996) and Software Cost Estimation with Cocomo II (Boehm et al. 2000).


  • Use estimating software.

  • Use an algorithmic approach, such as Cocomo II, Barry Boehm's estimation model (Boehm et al. 2000).

  • Have outside estimation experts estimate the project.

  • Have a walk-through meeting for estimates.

  • Estimate pieces of the project, and then add the pieces together.

  • Have people estimate their own tasks, and then add the task estimates together.

  • Refer to experience on previous projects.

  • Keep previous estimates and see how accurate they were. Use them to adjust new estimates.

Pointers to more information on these approaches are given in "Additional Resources on Software Estimation" at the end of this section. Here's a good approach to estimating a project:

Establish objectives Why do you need an estimate? What are you estimating? Are you estimating only construction activities, or all of development? Are you estimating only the effort for your project, or your project plus vacations, holidays, training, and other nonproject activities? How accurate does the estimate need to be to meet your objectives? What degree of certainty needs to be associated with the estimate? Would an optimistic or a pessimistic estimate produce substantially different results?

Further Reading

This approach is adapted from Software Engineering Economics (Boehm 1981).


Allow time for the estimate, and plan it Rushed estimates are inaccurate estimates. If you're estimating a large project, treat estimation as a miniproject and take the time to miniplan the estimate so that you can do it well.

Spell out software requirements Just as an architect can't estimate how much a "pretty big" house will cost, you can't reliably estimate a "pretty big" software project. It's unreasonable for anyone to expect you to be able to estimate the amount of work required to build something when "something" has not yet been defined. Define requirements or plan a preliminary exploration phase before making an estimate.

Cross-Reference

For more information on software requirements, see Section 3.4, "Requirements Prerequisite."


Estimate at a low level of detail Depending on the objectives you identified, base the estimate on a detailed examination of project activities. In general, the more detailed your examination is, the more accurate your estimate will be. The Law of Large Numbers says that a 10 percent error on one big piece will be 10 percent high or 10 percent low. On 50 small pieces, some of the 10 percent errors in the pieces will be high and some will be low, and the errors will tend to cancel each other out.
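
The cancellation effect is easy to demonstrate with a small simulation. The sketch below is illustrative only: it assumes each piece's estimate is off by a uniformly distributed error of up to plus or minus 10 percent, and it compares estimating one big piece against estimating 50 small ones and summing them.

```python
import random

random.seed(42)
TRIALS = 10_000
TRUE_TOTAL = 500.0  # staff-days for a hypothetical project

def estimate(pieces):
    """Estimate the project as `pieces` equal parts, each with a
    random error uniformly distributed between -10% and +10%."""
    per_piece = TRUE_TOTAL / pieces
    return sum(per_piece * (1 + random.uniform(-0.10, 0.10))
               for _ in range(pieces))

def avg_error(pieces):
    """Average absolute estimation error, as a percentage of the total."""
    errs = [abs(estimate(pieces) - TRUE_TOTAL) / TRUE_TOTAL
            for _ in range(TRIALS)]
    return 100 * sum(errs) / TRIALS

# The 50-piece estimate's errors largely cancel; the one-piece
# estimate's error has nothing to cancel against.
print(f"1 piece:   {avg_error(1):.1f}% average error")
print(f"50 pieces: {avg_error(50):.1f}% average error")
```

Running this shows the summed 50-piece estimate landing far closer to the true total, on average, than the single-piece estimate, which is the Law of Large Numbers at work.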

Use several different estimation techniques, and compare the results The list of estimation approaches at the beginning of the section identified several techniques. They won't all produce the same results, so try several of them. Study the different results from the different approaches. Children learn early that if they ask each parent individually for a third bowl of ice cream, they have a better chance of getting at least one "yes" than if they ask only one parent. Sometimes the parents wise up and give the same answer; sometimes they don't. See what different answers you can get from different estimation techniques.

Cross-Reference

It's hard to find an area of software development in which iteration is not valuable. Estimation is one case in which iteration is useful. For a summary of iterative techniques, see Section 34.8, "Iterate, Repeatedly, Again and Again."


No approach is best in all circumstances, and the differences among them can be illuminating. For example, for the first edition of this book, my original eyeball estimate for the length of the book was 250 to 300 pages. When I finally did an in-depth estimate, the estimate came out to 873 pages. "That can't be right," I thought. So I estimated it using a completely different technique. The second estimate came out to 828 pages. Considering that these estimates were within about five percent of each other, I concluded that the book was going to be much closer to 850 pages than to 250 pages, and I was able to adjust my writing plans accordingly.

Reestimate periodically Factors on a software project change after the initial estimate, so plan to update your estimates periodically. As Figure 28-2 illustrates, the accuracy of your estimates should improve as you move toward completing the project. From time to time, compare your actual results to your estimated results, and use that evaluation to refine estimates for the remainder of the project.

Figure 28-2. Estimates created early in a project are inherently inaccurate. As the project progresses, estimates can become more accurate. Reestimate periodically throughout a project, and use what you learn during each activity to improve your estimate for the next activity
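
As an illustration of folding actual results into revised estimates, the sketch below applies one simple proportional correction: scale the remaining estimate by the ratio of actual to estimated effort on the work completed so far. This is a hypothetical convention shown for illustration, not a technique this chapter prescribes, and it assumes that past estimation bias predicts future bias.

```python
def revised_estimate(estimated_done, actual_done, estimated_remaining):
    """Scale the remaining estimate by the bias observed so far
    (actual effort divided by estimated effort on completed work)."""
    if estimated_done == 0:
        return estimated_remaining
    bias = actual_done / estimated_done
    return estimated_remaining * bias

# Example: the first 40 staff-days of planned work actually took
# 52 staff-days, and 100 staff-days of planned work remain.
print(revised_estimate(40, 52, 100))  # -> roughly 130 staff-days
```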


cc2e.com/2864

Estimating the Amount of Construction

The extent to which construction will be a major influence on a project's schedule depends in part on the proportion of the project that will be devoted to construction, understood as detailed design, coding and debugging, and unit testing. Take another look at Figure 27-3 on page 654. As the figure shows, the proportion varies by project size. Until your company has project-history data of its own, the proportion of time devoted to each activity shown in the figure is a good place to start estimates for your projects.

Cross-Reference

For details on the amount of coding for projects of various sizes, see "Activity Proportions and Size" in Section 27.5.


The best answer to the question of how much construction a project will call for is that the proportion will vary from project to project and organization to organization. Keep records of your organization's experience on projects, and use them to estimate the time future projects will take.

Influences on Schedule

The largest influence on a software project's schedule is the size of the program to be produced. But many other factors also influence a software-development schedule. Studies of commercial programs have quantified some of the factors, and they're shown in Table 28-1.

Table 28-1. Factors That Influence Software-Project Effort

| Factor | Potential Helpful Influence | Potential Harmful Influence |
| --- | --- | --- |
| Co-located vs. multisite development | -14% | 22% |
| Database size | -10% | 28% |
| Documentation match to project needs | -19% | 23% |
| Flexibility allowed in interpreting requirements | -9% | 10% |
| How actively risks are addressed | -12% | 14% |
| Language and tools experience | -16% | 20% |
| Personnel continuity (turnover) | -19% | 29% |
| Platform volatility | -13% | 30% |
| Process maturity | -13% | 15% |
| Product complexity | -27% | 74% |
| Programmer capability | -24% | 34% |
| Reliability required | -18% | 26% |
| Requirements analyst capability | -29% | 42% |
| Reuse requirements | -5% | 24% |
| State-of-the-art application | -11% | 12% |
| Storage constraint (how much of available storage will be consumed) | 0% | 46% |
| Team cohesion | -10% | 11% |
| Team's experience in the applications area | -19% | 22% |
| Team's experience on the technology platform | -15% | 19% |
| Time constraint (of the application itself) | 0% | 63% |
| Use of software tools | -22% | 17% |

Source: Software Cost Estimation with Cocomo II (Boehm et al. 2000).
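
In Cocomo II, factors like those in Table 28-1 act as multiplicative effort adjustments: a -24% helpful influence corresponds to a multiplier of 0.76, a +74% harmful influence to 1.74, and the nominal effort is scaled by the product of the applicable multipliers. The sketch below illustrates that arithmetic only; it is not the calibrated Cocomo II model, and the example factor values are simply read off the table.

```python
def adjusted_effort(nominal_staff_months, multipliers):
    """Scale a nominal effort estimate by the product of
    Cocomo-style effort multipliers (1.0 = no influence)."""
    product = 1.0
    for m in multipliers.values():
        product *= m
    return nominal_staff_months * product

# A hypothetical project: strong programmers (-24% -> 0.76), high
# product complexity (+74% -> 1.74), platform volatility (+30% -> 1.30).
factors = {
    "programmer capability": 0.76,
    "product complexity": 1.74,
    "platform volatility": 1.30,
}

print(round(adjusted_effort(100, factors), 1))  # 0.76 * 1.74 * 1.30 * 100
```

Note how the influences compound: even with strong programmers, the two harmful factors push a 100 staff-month nominal estimate to roughly 172 staff-months.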


Cross-Reference

The effect of a program's size on productivity and quality isn't always intuitively apparent. See Chapter 27, "How Program Size Affects Construction," for an explanation of how size affects construction.


Here are some of the less easily quantified factors that can influence a software-development schedule. These factors are drawn from Barry Boehm's Software Cost Estimation with Cocomo II (2000) and Capers Jones's Estimating Software Costs (1998).

  • Requirements developer experience and capability

  • Programmer experience and capability

  • Team motivation

  • Management quality

  • Amount of code reused

  • Personnel turnover

  • Requirements volatility

  • Quality of relationship with customer

  • User participation in requirements

  • Customer experience with the type of application

  • Extent to which programmers participate in requirements development

  • Classified security environment for computer, programs, and data

  • Amount of documentation

  • Project objectives (schedule vs. quality vs. usability vs. the many other possible objectives)

Each of these factors can be significant, so consider them along with the factors shown in Table 28-1 (which includes some of these factors).

Estimation vs. Control

Estimation is an important part of the planning needed to complete a software project on time. Once you have a delivery date and a product specification, the main problem is how to control the expenditure of human and technical resources for an on-time delivery of the product. In that sense, the accuracy of the initial estimate is much less important than your subsequent success at controlling resources to meet the schedule.

The important question is, do you want prediction, or do you want control?

Tom Gilb

What to Do If You're Behind

The average project overruns its planned schedule by about 100 percent, as mentioned earlier in this chapter. When you're behind, increasing the amount of time usually isn't an option. If it is, do it. Otherwise, you can try one or more of these solutions:

Hope that you'll catch up Hopeful optimism is a common response to a project's falling behind schedule. The rationalization typically goes like this: "Requirements took a little longer than we expected, but now they're solid, so we're bound to save time later. We'll make up the shortfall during coding and testing." This is hardly ever the case. One survey of over 300 software projects concluded that delays and overruns generally increase toward the end of a project (van Genuchten 1991). Projects don't make up lost time later; they fall further behind.

Expand the team According to Fred Brooks's law, adding people to a late software project makes it later (Brooks 1995). It's like adding gas to a fire. Brooks's explanation is convincing: new people need time to familiarize themselves with a project before they can become productive. Their training takes up the time of the people who have already been trained. And merely increasing the number of people increases the complexity and amount of project communication. Brooks points out that the fact that one woman can have a baby in nine months does not imply that nine women can have a baby in one month.

Undoubtedly the warning in Brooks's law should be heeded more often than it is. It's tempting to throw people at a project and hope that they'll bring it in on time. Managers need to understand that developing software isn't like riveting sheet metal: more workers working doesn't necessarily mean more work will get done.

The simple statement that adding programmers to a late project makes it later, however, masks the fact that under some circumstances it's possible to add people to a late project and speed it up. As Brooks points out in the analysis of his law, adding people to software projects in which the tasks can't be divided and performed independently doesn't help. But if a project's tasks are partitionable, you can divide them further and assign them to different people, even to people who are added late in the project. Other researchers have formally identified circumstances under which you can add people to a late project without making it later (Abdel-Hamid 1989, McConnell 1999).

Reduce the scope of the project The powerful technique of reducing the scope of the project is often overlooked. If you eliminate a feature, you eliminate the design, coding, debugging, testing, and documentation of that feature. You eliminate that feature's interface to other features.

Further Reading

For an argument in favor of building only the most-needed features, see Chapter 14, "Feature-Set Control," in Rapid Development (McConnell 1996).


When you plan the product initially, partition the product's capabilities into "must haves," "nice to haves," and "optionals." If you fall behind, prioritize the "optionals" and "nice to haves" and drop the ones that are the least important.

Short of dropping a feature altogether, you can provide a cheaper version of the same functionality. You might provide a version that's on time but that hasn't been tuned for performance. You might provide a version in which the least important functionality is implemented crudely. You might decide to back off on a speed requirement because it's much easier to provide a slow version. You might back off on a space requirement because it's easier to provide a memory-intensive version.

Reestimate development time for the least important features. What functionality can you provide in two hours, two days, or two weeks? What do you gain by building the two-week version rather than the two-day version, or the two-day version rather than the two-hour version?

Additional Resources on Software Estimation

cc2e.com/2871

Here are some additional references about software estimation:

Boehm, Barry, et al. Software Cost Estimation with Cocomo II. Boston, MA: Addison-Wesley, 2000. This book describes the ins and outs of the Cocomo II estimating model, which is undoubtedly the most popular model in use today.

Boehm, Barry W. Software Engineering Economics. Englewood Cliffs, NJ: Prentice Hall, 1981. This older book contains an exhaustive treatment of software-project estimation considered more generally than in Boehm's newer book.

Humphrey, Watts S. A Discipline for Software Engineering. Reading, MA: Addison-Wesley, 1995. Chapter 5 of this book describes Humphrey's Probe method, which is a technique for estimating work at the individual developer level.

Conte, S. D., H. E. Dunsmore, and V. Y. Shen. Software Engineering Metrics and Models. Menlo Park, CA: Benjamin/Cummings, 1986. Chapter 6 contains a good survey of estimation techniques, including a history of estimation, statistical models, theoretically based models, and composite models. The book also demonstrates the use of each estimation technique on a database of projects and compares the estimates to the projects' actual lengths.

Gilb, Tom. Principles of Software Engineering Management. Wokingham, England: Addison-Wesley, 1988. The title of Chapter 16, "Ten Principles for Estimating Software Attributes," is somewhat tongue-in-cheek. Gilb argues against project estimation and in favor of project control. Pointing out that people don't really want to predict accurately but do want to control final results, Gilb lays out 10 principles you can use to steer a project to meet a calendar deadline, a cost goal, or another project objective.

28.4. Measurement

Software projects can be measured in numerous ways. Here are two solid reasons to measure your process:

For any project attribute, it's possible to measure that attribute in a way that's superior to not measuring it at all The measurement might not be perfectly precise, it might be difficult to make, and it might need to be refined over time, but measurement will give you a handle on your software-development process that you don't have without it (Gilb 2004).

If data is to be used in a scientific experiment, it must be quantified. Can you imagine a scientist recommending a ban on a new food product because a group of white rats "just seemed to get sicker" than another group? That's absurd. You'd demand a quantified reason, like "Rats that ate the new food product were sick 3.7 more days per month than rats that didn't." To evaluate software-development methods, you must measure them. Statements like "This new method seems more productive" aren't good enough.

Be aware of measurement side effects Measurement has a motivational effect. People pay attention to whatever is measured, assuming that it's used to evaluate them. Choose what you measure carefully. People tend to focus on work that's measured and to ignore work that isn't.

What gets measured, gets done.

Tom Peters

To argue against measurement is to argue that it's better not to know what's really happening on your project When you measure an aspect of a project, you know something about it that you didn't know before. You can see whether the aspect gets bigger or smaller or stays the same. The measurement gives you a window into at least that aspect of your project. The window might be small and cloudy until you refine your measurements, but it will be better than no window at all. To argue against all measurements because some are inconclusive is to argue against windows because some happen to be cloudy.

You can measure virtually any aspect of the software-development process. Table 28-2 lists some measurements that other practitioners have found to be useful.

Table 28-2. Useful Software-Development Measurements

Size

  • Total lines of code written

  • Total comment lines

  • Total number of classes or routines

  • Total data declarations

  • Total blank lines

Defect Tracking

  • Severity of each defect

  • Location of each defect (class or routine)

  • Origin of each defect (requirements, design, construction, test)

  • Way in which each defect is corrected

  • Person responsible for each defect

  • Number of lines affected by each defect correction

  • Work hours spent correcting each defect

  • Average time required to find a defect

  • Average time required to fix a defect

  • Number of attempts made to correct each defect

  • Number of new errors resulting from defect correction

Productivity

  • Work-hours spent on the project

  • Work-hours spent on each class or routine

  • Number of times each class or routine changed

  • Dollars spent on project

  • Dollars spent per line of code

  • Dollars spent per defect

Overall Quality

  • Total number of defects

  • Number of defects in each class or routine

  • Average defects per thousand lines of code

  • Mean time between failures

  • Compiler-detected errors

Maintainability

  • Number of public routines on each class

  • Number of parameters passed to each routine

  • Number of private routines and/or variables on each class

  • Number of local variables used by each routine

  • Number of routines called by each class or routine

  • Number of decision points in each routine

  • Control-flow complexity in each routine

  • Lines of code in each class or routine

  • Lines of comments in each class or routine

  • Number of data declarations in each class or routine

  • Number of blank lines in each class or routine

  • Number of gotos in each class or routine

  • Number of input or output statements in each class or routine

You can collect most of these measurements with software tools that are currently available. Discussions throughout the book indicate the reasons that each measurement is useful. At this time, most of the measurements aren't useful for making fine distinctions among programs, classes, and routines (Shepperd and Ince 1989). They're useful mainly for identifying routines that are "outliers"; abnormal measurements in a routine are a warning sign that you should reexamine that routine, checking for unusually low quality.

Don't start by collecting data on all possible measurements; you'll bury yourself in data so complex that you won't be able to figure out what any of it means. Start with a simple set of measurements, such as the number of defects, the number of work-months, the total dollars, and the total lines of code. Standardize the measurements across your projects, and then refine them and add to them as your understanding of what you want to measure improves (Pietrasanta 1990).
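
For the size measurements in Table 28-2, even a few lines of scripting are enough to get started. The sketch below counts total, blank, comment, and code lines for source files; the "#" comment convention and the .py file filter are assumptions to adapt for your own language.

```python
import os

def size_measurements(path):
    """Collect simple size measurements for one source file:
    total lines, blank lines, comment lines, and code lines.
    Assumes '#' line comments, as in Python."""
    total = blank = comment = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            total += 1
            stripped = line.strip()
            if not stripped:
                blank += 1
            elif stripped.startswith("#"):
                comment += 1
    return {"total": total, "blank": blank, "comment": comment,
            "code": total - blank - comment}

def project_totals(root):
    """Sum the size measurements over every .py file under `root`."""
    totals = {"total": 0, "blank": 0, "comment": 0, "code": 0}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                counts = size_measurements(os.path.join(dirpath, name))
                for key in totals:
                    totals[key] += counts[key]
    return totals
```

Run over successive projects, even a crude counter like this gives you the standardized baseline the paragraph above recommends; the refinements (per-class counts, defect density) can come later.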

Make sure you're collecting data for a reason. Set goals, determine the questions you need to ask to meet the goals, and then measure to answer the questions (Basili and Weiss 1984). Be sure that you ask for only as much information as is feasible to obtain, and keep in mind that data collection will always take a back seat to deadlines (Basili et al. 2002).

Additional Resources on Software Measurement

cc2e.com/2878

Here are additional resources:

Oman, Paul and Shari Lawrence Pfleeger, eds. Applying Software Metrics. Los Alamitos, CA: IEEE Computer Society Press, 1996. This volume collects more than 25 key papers on software measurement under one cover.

Jones, Capers. Applied Software Measurement: Assuring Productivity and Quality, 2d ed. New York, NY: McGraw-Hill, 1997. Jones is a leader in software measurement, and his book is an accumulation of knowledge in this area. It provides the definitive theory and practice of current measurement techniques and describes problems with traditional measurements. It lays out a full program for collecting "function-point metrics." Jones has collected and analyzed a huge amount of quality and productivity data, and this book distills the results in one place, including a fascinating chapter on averages for U.S. software development.

Grady, Robert B. Practical Software Metrics for Project Management and Process Improvement. Englewood Cliffs, NJ: Prentice Hall PTR, 1992. Grady describes lessons learned from establishing a software-measurement program at Hewlett-Packard and tells you how to establish a software-measurement program in your organization.

Conte, S. D., H. E. Dunsmore, and V. Y. Shen. Software Engineering Metrics and Models. Menlo Park, CA: Benjamin/Cummings, 1986. This book catalogs current knowledge of software measurement circa 1986, including commonly used measurements, experimental techniques, and criteria for evaluating experimental results.

Basili, Victor R., et al. 2002. "Lessons learned from 25 years of process improvement: The Rise and Fall of the NASA Software Engineering Laboratory," Proceedings of the 24th International Conference on Software Engineering. Orlando, FL, 2002. This paper catalogs lessons learned by one of the world's most sophisticated software-development organizations. The lessons focus on measurement topics.

cc2e.com/2892

NASA Software Engineering Laboratory. Software Measurement Guidebook, June 1995, NASA-GB-001-94. This guidebook of about 100 pages is probably the best source of practical information on how to set up and run a measurement program. It can be downloaded from NASA's website.

cc2e.com/2899

Gilb, Tom. Competitive Engineering. Boston, MA: Addison-Wesley, 2004. This book presents a measurement-focused approach to defining requirements, evaluating designs, measuring quality, and, in general, managing projects. It can be downloaded from Gilb's website.



Code Complete: A Practical Handbook of Software Construction, Second Edition
ISBN: 0735619670