List of Figures


Chapter 1: What Is an "Estimate"?

Figure 1-1: Single-point estimates assume 100% probability of the actual outcome equaling the planned outcome. This isn't realistic.
Figure 1-2: A common assumption is that software project outcomes follow a bell curve. This assumption is incorrect because there are limits to how efficiently a project team can complete any given amount of work.
Figure 1-3: An accurate depiction of possible software project outcomes. There is a limit to how well a project can go but no limit to how many problems can occur.
Figure 1-4: The probability of a software project delivering on or before a particular date (or less than or equal to a specific cost or level of effort).
Figure 1-5: All single-point estimates are associated with a probability, explicitly or implicitly.
Figure 1-6: Improvement in estimation of a set of U.S. Air Force projects. The predictability of the projects improved dramatically as the organizations moved toward higher CMM levels.
Figure 1-7: Improvement in estimation at the Boeing Company. As with the U.S. Air Force projects, the predictability of the projects improved dramatically at higher CMM levels.
Figure 1-8: Schlumberger improved its estimation accuracy from an average overrun of 35 weeks to an average underrun of 1 week.
Figure 1-9: Projects change significantly from inception to delivery. Changes are usually significant enough that the project delivered is not the same as the project that was estimated. Nonetheless, if the outcome is similar to the estimate, we say the project met its estimate.

Chapter 2: How Good an Estimator Are You?

Figure 2-1: Results from administering the "How Good an Estimator Are You?" quiz. Most quiz-takers get 1–3 answers correct.

Chapter 3: Value of Accurate Estimates

Figure 3-1: The penalties for underestimation are more severe than the penalties for overestimation, so, if you can't estimate with complete accuracy, try to err on the side of overestimation rather than underestimation.
Figure 3-2: Project outcomes reported in The Standish Group's Chaos report have fluctuated year to year. About three quarters of all software projects are delivered late or fail outright.
Figure 3-3: Estimation results from one organization. Industry data suggests that this company's pattern of estimates running about 100% low is typical. Data used by permission.
Figure 3-4: When given the option of a shorter average schedule with higher variability or a longer average schedule with lower variability, most businesses will choose the second option.

Chapter 4: Where Does Estimation Error Come From?

Figure 4-1: The Cone of Uncertainty based on common project milestones.
Figure 4-2: The Cone of Uncertainty based on calendar time. The Cone narrows much more quickly than would appear from the previous depiction in Figure 4-1.
Figure 4-3: If a project is not well controlled or well estimated, you can end up with a Cloud of Uncertainty that contains even more estimation error than that represented by the Cone.
Figure 4-4: The Cone of Uncertainty doesn't narrow itself. You narrow the Cone by making decisions that remove sources of variability from the project. Some of these decisions are about what the project will deliver; some are about what the project will not deliver. If these decisions change later, the Cone will widen.
Figure 4-5: A Cone of Uncertainty that allows for requirements increases over the course of the project.
Figure 4-6: Example of variations in estimates when numerous adjustment factors are present. The more adjustment factors an estimation method provides, the more opportunity there is for subjectivity to creep into the estimate.
Figure 4-7: Example of low variation in estimates resulting from a small number of adjustment factors. (The scales of the two graphs are different, but they are directly comparable when you account for the difference in the average values on the two graphs.)
Figure 4-8: Average error from off-the-cuff estimates vs. reviewed estimates.

Chapter 5: Estimate Influences

Figure 5-1: Growth in effort for a typical business-systems project. The specific numbers are meaningful only for the average business-systems project. The general dynamic applies to software projects of all kinds.
Figure 5-2: The number of communication paths on a project increases proportionally to the square of the number of people on the team.
Figure 5-3: Diseconomy of scale for a typical business-systems project ranging from 10,000 to 100,000 lines of code.
Figure 5-4: Diseconomy of scale for projects with greater size differences and the worst-case diseconomy of scale.
Figure 5-5: Differences between ratio-based estimates and estimates based on diseconomy of scale will be minimal for projects within a similar size range.
Figure 5-6: Effect of personnel factors on project effort. Depending on the strength or weakness in each factor, the project results can vary by the amount indicated—that is, a project with the worst requirements analysts would require 42% more effort than nominal, whereas a project with the best analysts would require 29% less effort than nominal.
Figure 5-7: Cocomo II factors arranged in order of significance. The relative lengths of the bars represent the sensitivity of the estimate to the different factors.
Figure 5-8: Cocomo II factors arranged by potential to increase total effort (gray bars) and potential to decrease total effort (blue bars).
Figure 5-9: Cocomo II factors with diseconomy of scale factors highlighted in blue. Project size is 100,000 LOC.
Figure 5-10: Cocomo II factors with diseconomy of scale factors highlighted in blue. Project size is 5,000,000 LOC.

Chapter 8: Calibration and Historical Data

Figure 8-1: An example of estimated outcomes for an estimate calibrated using industry-average data. Total variation in the effort estimates is about a factor of 10 (from about 25 staff months to about 250 staff months).
Figure 8-2: An estimate calibrated using historical productivity data. The effort estimates vary by only about a factor of 4 (from about 30 staff months to about 120 staff months).

Chapter 10: Decomposition and Recomposition

Figure 10-1: Software projects tend to progress from large-grain focus at the beginning to fine-grain focus at the end. This progression supports increasing the use of estimation by decomposition as a project progresses.

Chapter 13: Expert Judgment in Groups

Figure 13-1: A simple review of individually created estimates significantly improves the accuracy of the estimates.
Figure 13-2: A Wideband Delphi estimating form.
Figure 13-3: A Wideband Delphi estimating form after three rounds of estimates.
Figure 13-4: Estimation accuracy of simple averaging compared to Wideband Delphi estimation. Wideband Delphi reduces estimation error in about two-thirds of cases.
Figure 13-5: Wideband Delphi when applied to terrible initial estimates. In this data set, Wideband Delphi improved results in 8 out of 10 cases.
Figure 13-6: In about one-third of cases, Wideband Delphi helps groups whose initial estimates don't include the correct answer to move outside their initial range and closer to the correct answer.

Chapter 14: Software Estimation Tools

Figure 14-1: A tool-generated simulation of 1,000 project outcomes. Output from Construx Estimate.
Figure 14-2: Example of probable project outcomes based on output from estimation software.
Figure 14-3: In this simulation, only 8 of the 1,000 outcomes fall within the desired combination of cost and schedule.
Figure 14-4: Calculated effect of shortening or lengthening a schedule.

Chapter 15: Use of Multiple Approaches

Figure 15-1: An example of multiple estimates for a software project.

Chapter 16: Flow of Software Estimates on a Well-Estimated Project

Figure 16-1: Estimation on a poorly estimated project. Neither the inputs nor the process is well defined, and the inputs, process, and outputs are all open to debate.
Figure 16-2: Estimation on a well-estimated project. The inputs and process are well defined. The process and outputs are not subject to debate; however, the inputs are subject to iteration until acceptable outputs are obtained.
Figure 16-3: Flow of a single estimate on a well-estimated project. Effort, schedule, cost, and features that can be delivered are all computed from the size estimate.
Figure 16-4: Summary of applicability of different estimation techniques by kind of project and project phase.
Figure 16-5: A well-estimated project. The single-point estimates miss the mark, but the ranges all include the eventual outcome.
Figure 16-6: A poorly estimated project. The project is uniformly underestimated, and the estimation ranges are too narrow to encompass the eventual outcome.

Chapter 17: Standardized Estimation Procedures

Figure 17-1: A typical stage-gate product development life cycle.

Chapter 19: Special Issues in Estimating Effort

Figure 19-1: Industry-average effort for real-time projects.
Figure 19-2: Industry-average effort for embedded systems projects.
Figure 19-3: Industry-average effort for telecommunications projects.
Figure 19-4: Industry-average effort for systems software and driver projects.
Figure 19-5: Industry-average effort for scientific systems and engineering research projects.
Figure 19-6: Industry-average effort for shrink-wrap and packaged software projects.
Figure 19-7: Industry-average effort for public internet systems projects.
Figure 19-8: Industry-average effort for internal intranet projects.
Figure 19-9: Industry-average effort for business systems projects.
Figure 19-10: Ranges of estimates derived by using the methods discussed in this chapter. The relative dot sizes and line thicknesses represent the weight I would give each of the estimation techniques in this case.

Chapter 20: Special Issues in Estimating Schedule

Figure 20-1: The Cone of Uncertainty, including schedule adjustment numbers on the right axis. The schedule variability is much lower than the scope variability because schedule is a cube-root function of scope.
Figure 20-2: The effects of compressing or extending a nominal schedule and the Impossible Zone. All researchers have found that there is a maximum degree to which a schedule can be compressed.
Figure 20-3: Relationship between team size, schedule, and effort for business-systems projects of about 57,000 lines of code. For team sizes greater than 5 to 7 people, effort and schedule both increase.
Figure 20-4: Ranges of schedule estimates produced by the methods discussed in this chapter. The relative dot sizes and line thicknesses represent the weights I would give each of these estimates. Looking at all the estimates, including those that aren't well founded, hides the real convergence among these estimates.
Figure 20-5: Ranges of schedule estimates produced by the most accurate methods. Once the estimates produced by overly generic methods are eliminated, the convergence of the estimates becomes clear.

Chapter 22: Estimate Presentation Styles

Figure 22-1: Example of documenting estimate assumptions.
Figure 22-2: Example of presenting percentage-confident estimates in a form that's more visually appealing than a table.
Figure 22-3: Example of presenting case-based estimates in a visual form.
