10.1 Calculating an Accurate Overall Expected Case

Scene: The weekly team meeting

YOU: We need to create an estimate for a new project. I want to emphasize how important accurate estimation is to this group, and so I'm betting a pizza lunch that I can create a more accurate estimate for this project than you can. If you win, I'll buy the pizza. If I win, you'll buy. Any takers?

TEAM: You're on!

YOU: OK, let's get started.

You look up information about a similar past project and find that it took 18 staff weeks. You estimate that this project is about 20 percent larger than the past project, so you create a total estimate of 22 staff weeks (18 x 1.2 = 21.6, rounded up).

Meanwhile, your team has created a more detailed, feature-by-feature estimate. They come back with the estimate shown in Table 10-1.

Table 10-1: Example of Estimation by Decomposition

Feature       Estimated Staff Weeks to Complete
Feature 1     1.5
Feature 2     4
Feature 3     3
Feature 4     1
Feature 5     4
Feature 6     6
Feature 7     2
Feature 8     1
Feature 9     3
Feature 10    1.5
TOTAL         27

YOU: 27 weeks? Wow, I think your estimate is high, but I guess we'll find out.

A few weeks later

YOU: Now that the project is done, we know that it took a total of 29 staff weeks. It looks like your estimate of 27 staff weeks was optimistic by 2 weeks, which is an error of 7%. My estimate of 22 staff weeks was off by 7 staff weeks, about 24%. It looks like you win, so I'm buying the pizza.

By the way, I want to see which of you good estimators cost me the pizza. Let's take a look at which detailed estimates were the most accurate.

You take a few minutes to compute the magnitude of relative error of each individual estimate and write the results on the whiteboard. Table 10-2 shows the results.

Table 10-2: Example Results of Estimation by Decomposition

Feature       Estimated Staff Weeks   Actual Effort    Raw Error              Magnitude of
              to Complete             (Staff Weeks)    (Estimated - Actual)   Relative Error
Feature 1     1.5                     3.0              -1.5                   50%
Feature 2     4                       2.5               1.5                   60%
Feature 3     3                       1.5               1.5                   100%
Feature 4     1                       2.5              -1.5                   60%
Feature 5     4                       4.5              -0.5                   11%
Feature 6     6                       4.5               1.5                   33%
Feature 7     2                       3.0              -1.0                   33%
Feature 8     1                       1.5              -0.5                   33%
Feature 9     3                       2.5               0.5                   20%
Feature 10    1.5                     3.5              -2.0                   57%
TOTAL         27                      29               -2 (-7%)               -
AVERAGE       -                       -                -                      46%
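
For readers who want to check the arithmetic, the figures in Table 10-2 can be reproduced with a short Python sketch. The estimate and actual values below are copied straight from the table; everything else is only an illustration of the formula MRE = |estimated - actual| / actual.

    # Reproduce the arithmetic behind Table 10-2.
    estimates = [1.5, 4, 3, 1, 4, 6, 2, 1, 3, 1.5]   # staff weeks, from Table 10-2
    actuals   = [3.0, 2.5, 1.5, 2.5, 4.5, 4.5, 3.0, 1.5, 2.5, 3.5]

    mres = []
    for i, (est, act) in enumerate(zip(estimates, actuals), start=1):
        raw_error = est - act                # negative means the feature was underestimated
        mre = abs(raw_error) / act           # magnitude of relative error
        mres.append(mre)
        print(f"Feature {i}: raw error {raw_error:+.1f}, MRE {mre:.0%}")

    total_est, total_act = sum(estimates), sum(actuals)
    overall_error = (total_est - total_act) / total_act
    print(f"TOTAL: estimated {total_est:g}, actual {total_act:g}, "
          f"overall error {overall_error:+.0%}")        # about -7%
    print(f"Average MRE: {sum(mres) / len(mres):.0%}")   # about 46%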

TEAM: Wow, that's interesting. Most of our individual estimates weren't any more accurate than yours. Our estimates were nearly all off by 30% to 50% or more. Our average error was 46%, which is way higher than your error. But our overall error was still only 7%, and yours was 24%.

But the joke is on you. Even though our estimates were worse than yours, you're still buying the pizza!

Somehow the team's estimate was more accurate than your estimate even though their individual feature estimates were worse. How is that possible?

The Law of Large Numbers

The team's estimate benefited from a statistical property called the Law of Large Numbers. The gist of this law is that if you create one big estimate, the estimate's error tendency will be completely on the high side or completely on the low side. But if you create several smaller estimates, some of the estimation errors will be on the high side, and some will be on the low side. The errors will tend to cancel each other out to some degree. Your team underestimated in some cases, but it also overestimated in some cases, so the error in the aggregate estimate is only 7%. In your estimate, all 24% of the error was on the same side.
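
A small simulation makes the cancellation effect concrete. The sketch below is purely illustrative: the 30-staff-week project size, the assumption that every estimate can be off by up to 50 percent in either direction, and the uniform error distribution are all made up for the demonstration rather than taken from the example above.

    import random

    random.seed(42)                 # reproducible illustration
    TRIALS = 10_000
    TOTAL_ACTUAL = 30.0             # true project size in staff weeks (assumed)

    def noisy_estimate(actual):
        """One estimate of 'actual', off by a uniformly random amount of up to +/-50%."""
        return actual * random.uniform(0.5, 1.5)

    single_errors, decomposed_errors = [], []
    for _ in range(TRIALS):
        # One big estimate of the whole project: the entire error is on one side.
        single = noisy_estimate(TOTAL_ACTUAL)
        # Ten pieces estimated independently: high and low errors partially cancel.
        decomposed = sum(noisy_estimate(TOTAL_ACTUAL / 10) for _ in range(10))
        single_errors.append(abs(single - TOTAL_ACTUAL) / TOTAL_ACTUAL)
        decomposed_errors.append(abs(decomposed - TOTAL_ACTUAL) / TOTAL_ACTUAL)

    print(f"Average error, one big estimate:     {sum(single_errors) / TRIALS:.0%}")
    print(f"Average error, sum of ten estimates: {sum(decomposed_errors) / TRIALS:.0%}")

With these assumptions, the single estimate is off by roughly 25 percent on average, while the sum of ten independently estimated pieces is off by only about 7 percent, which mirrors the team's result.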

This approach should work in theory, and research says that it also works in practice. Lederer and Prasad found that summing task durations was negatively correlated with cost and schedule overruns (Lederer and Prasad 1992).

Tip #47 

Decompose large estimates into small pieces so that you can take advantage of the Law of Large Numbers: the errors on the high side and the errors on the low side cancel each other out to some degree.

How Small Should the Estimated Pieces Be?

Seen from the perspective shown in Figure 10-1, software development is a process of making larger numbers of steadily smaller decisions. At the beginning of the project, you make such decisions as "What major areas should this software contain?" A simple decision to include or exclude an area can significantly swing total project effort and schedule in one direction or another. As you approach top-level requirements, you make a larger number of decisions about which features should be in or out, but each of those decisions on average exerts a smaller impact on the overall project outcome. As you approach detailed requirements, you typically make hundreds of decisions, some with larger implications and some with smaller implications, but on average the impact of these decisions is far smaller than the impact of the decisions made earlier in the project.

Figure 10-1: Software projects tend to progress from large-grain focus at the beginning to fine-grain focus at the end. This progression supports increasing the use of estimation by decomposition as a project progresses.

By the time you focus on software construction, the granularity of the decisions you make is tiny: "How should I design this class interface? How should I name this variable? How should I structure this loop?" And so on. These decisions are still important, but the effect of any single decision tends to be localized compared with the big decisions that were made at the initial, software-concept level.

The implication of software development being a process of steady refinement is that the further into the project you are, the finer-grained your decomposed estimates can be. Early in the project, you might base a bottom-up estimate on feature areas. Later, you might base the estimate on marketing requirements. Still later, you might use detailed requirements or engineering requirements. In the project's endgame, you might use developer and tester task-based estimates.

The limits on the number of items to estimate are more practical than theoretical. Very early in a project, it can be a struggle to get enough detailed information to create a decomposed estimate. Later in the project, you might have too much detail. You need 5 to 10 individual items before you get much benefit from the Law of Large Numbers, but even 5 items are better than 1.
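
Extending the same illustrative simulation to different levels of decomposition shows why roughly 5 to 10 items is where the Law of Large Numbers starts to pay off (the up-to-50-percent per-item error model is again an assumption made only for the demonstration):

    import random

    random.seed(7)

    def average_overall_error(num_pieces, trials=10_000, total_actual=30.0):
        """Average error of the summed estimate when a project of total_actual
        staff weeks is split into num_pieces pieces, each estimated with up to
        +/-50% random error."""
        piece = total_actual / num_pieces
        error_sum = 0.0
        for _ in range(trials):
            estimate = sum(piece * random.uniform(0.5, 1.5) for _ in range(num_pieces))
            error_sum += abs(estimate - total_actual) / total_actual
        return error_sum / trials

    for n in (1, 2, 5, 10, 20, 50):
        print(f"{n:>2} piece(s): average overall error {average_overall_error(n):.0%}")

Under these assumptions, the average overall error drops from about 25 percent for a single estimate to roughly 10 percent at 5 pieces and 7 percent at 10 pieces, with further but diminishing improvement beyond that.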



