Architecture Weaknesses in Schedule Logic
One of the problems facing project managers when constructing project logic is avoiding inherently weak architectures in the schedule network. From the point of view of risk management, avoidance is a key strategy, and it is well applied when developing project logic. In this section we will address two architectural weaknesses that lend themselves to quantitative analysis: the merge point of parallel paths and the "long" task.
Merge Points in Network Logic
By merge point we simply mean a point in the logic where a milestone has two or more predecessors, each with the same finish date. The network building blocks shown earlier in this chapter include simple parallel paths joining at a milestone; such a construction is exactly what we are talking about. Of course, this logic situation occurs frequently for all project managers and cannot be avoided entirely; the idea is to avoid as many merge points as possible.
Here is the problem in a nutshell. Let us assume that each of the merging paths is independent. By independent we mean that the performance along one path does not depend on the performance along the other path. We must be careful here: if the two paths share resources that are in conflict, whether project staff, special tools, facilities, or other, then the paths are not truly independent. But assuming independence, there is a probability that path A will finish on the milestone date, p(A on time) = p(A), and there is a probability that path B will finish on the milestone date, p(B on time) = p(B). Now, the probability that the milestone into which the two paths merge will be achieved on time is the probability that both paths will finish on time. We know from the multiplication rule of probability that if A and B are independent, the probability of the milestone finishing on time is the product of the two probabilities:
p(Milestone on time) = p(A) * p(B)
Now it should be intuitively obvious what the problem is. Both p(A) and p(B) are numbers less than 1, so their product, p(A) * p(B), is smaller than either; the probability of meeting the milestone on time is therefore less than the smallest probability among the joining paths. If more than two paths join, the problem is that much greater. Figure 7-10 shows graphically the phenomenon we have been discussing.
Figure 7-10: Merge Point Math.
Suppose that p(A) = 0.8 and p(B) = 0.7, both fairly high confidence figures. The joint probability is simply their product: 0.8 * 0.7 = 0.56. Dropping from a confidence of at least 70% on any single path to only 56% at the milestone is a significant move and must be addressed by the project manager. To mitigate the risk, the project manager can develop alternate schedule logic that does not require a merging of paths, or the milestone can be isolated with a buffer task.
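The merge point math just worked can be sketched in a few lines; a minimal illustration, assuming the path on-time probabilities are known (the function name and the third-path value are hypothetical):

```python
# Joint on-time probability of a milestone fed by independent merging paths.
from math import prod

def milestone_confidence(path_probs):
    """Probability that all merging paths finish on time, assuming independence."""
    return prod(path_probs)

# The two-path example from the text, plus a hypothetical third path
# to show the effect worsening as more paths join.
two_paths = milestone_confidence([0.8, 0.7])         # = 0.8 * 0.7 = 0.56
three_paths = milestone_confidence([0.8, 0.7, 0.9])  # smaller still
```

Each additional merging path multiplies in another factor below 1, so the milestone confidence can only fall as paths are added.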
The merge point phenomenon gives rise to the term "shift right." What does "shift right" refer to? Looking at Figure 7-10, we see that to raise the confidence of the milestone to that of the least confident merging path, in this case 70%, we are going to have to allow for more time. The conclusion is straightforward: more time raises the confidence in the schedule and provides a higher probability of completion. But, of course, allowing more time extends the schedule to the right, and that extension is the source of the term "shift right." The rule for project managers examining project logic is:
At every merge point of predecessor tasks, think "shift right."
Merging Dependent Paths
So far the discussion has been about the merging of independent paths. Setting the condition of independence certainly simplifies matters greatly. We know from our study of multiple random variables that if the random variables are not independent, then there is a covariance between them and a degree of correlation. We also know that if one path's outcome is conditioned on the other path's outcome, then we must invoke the rules of conditional probability to handle the dependence:
p(A on time AND B on time) = p(A on time given B is on time) * p(B is on time)
We see right away on the right side of the equation that multiplying the "probability of A on time given B on time" by the "probability of B on time" makes the joint probability smaller than either factor. From what we have already discussed, a smaller joint probability at the milestone means a shift right of the schedule. The exact amount is more difficult to estimate because the "probability of A on time given B on time" must itself be estimated, but the heuristic is untouched: the milestone join of two paths, whether independent or not, will shift right. Only the degree of shift depends on the conditions between the paths.
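The dependent case can be sketched just as simply; in this illustration the conditional probability is purely an assumed figure, standing in for an estimate the project team would have to make:

```python
# Dependent paths: p(A and B on time) = p(A on time | B on time) * p(B on time).
def joint_on_time(p_a_given_b, p_b):
    return p_a_given_b * p_b

p_b = 0.7
independent_case = 0.8 * p_b              # 0.56 when A and B are independent
dependent_case = joint_on_time(0.6, p_b)  # hypothetical: resource conflict drags A down
```

Either way the joint probability falls below p(B), so the milestone shifts right; dependence changes only the degree of the shift.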
Thinking of the milestone as an "outcome milestone," we can also approach this examination from the point of view of risk as represented by the variance of the outcome distribution. We may not know this distribution outright, although by the discussion that follows we might assume that it is somewhat Normal. In any event, if the two paths are not independent, what can we say about the variance of the outcome distribution at this point? Will it be larger or smaller?
We reason as follows: The outcome performance of the milestone is a combined effect of all the paths joining. If we look at the expected value of the two paths joining, and the paths are independent, then we know that:
E(A * B) = E(A) * E(B), paths independent
But if the paths are not independent, then the covariance between them comes into play:
E(A * B) = E(A) * E(B) + Cov(A, B)
where paths A and B are not independent.
The equation for dependent paths tells us that the expected value may be larger (longer) or smaller (shorter) depending on the covariance. The covariance will be positive, meaning the expected value of the outcome at the milestone is longer or shifted right, if both paths tend to move in the same direction together. This is often the case in projects because of the causes within the project that tend to create dependence between paths. The chief cause is shared resources, whether the shared resource is key staff, special tools or environments, or other unique and scarce resources. So the equation we are discussing is valuable heuristically even if we often do not have the information to evaluate it numerically. The heuristic most interesting to project managers is:
Parallel paths that become correlated by sharing resources stretch the schedule!
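The covariance identity above can be checked numerically. In this small simulation sketch, shared-resource dependence is modeled as a common random delay added to both path durations; all the figures are hypothetical:

```python
# Numerical check of E(A * B) = E(A) * E(B) + Cov(A, B) for correlated paths.
import random

random.seed(1)
n = 100_000
shared = [random.gauss(0, 2) for _ in range(n)]    # common shared-resource delay, days
a = [20 + s + random.gauss(0, 1) for s in shared]  # path A durations
b = [30 + s + random.gauss(0, 1) for s in shared]  # path B durations

def mean(xs):
    return sum(xs) / len(xs)

ea, eb = mean(a), mean(b)
eab = mean([x * y for x, y in zip(a, b)])
cov = mean([(x - ea) * (y - eb) for x, y in zip(a, b)])

# eab - ea * eb matches cov (roughly 4 here, the shared delay's variance),
# showing how positive covariance inflates the joint outcome.
```

Because both paths carry the same shared delay, the covariance comes out positive, which is exactly the shared-resource situation described in the text.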
The observation that sharing resources will stretch the schedule is not news to most project managers. Either through experience or exposure to the various rules of thumb of project management, such a phenomenon is generally known. What we have done is given the rule of thumb a statistical foundation and given the project manager the opportunity, by means of the formula, to figure out the actual numerical impact. However, perhaps the most important way to make use of this discussion is by interpreting results from a Monte Carlo simulation. When running the Monte Carlo simulation, the effects of merge points and shift right will be very apparent. This discussion provides the framework to understand and interpret the results provided.
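A minimal Monte Carlo sketch of the merge-point effect follows; the two paths and their triangular duration estimates are hypothetical, and the milestone finish is simply the later of the two independent paths:

```python
# Monte Carlo illustration of merge bias ("shift right") at a two-path merge.
import random

random.seed(7)
trials = 100_000
a = [random.triangular(18, 26, 20) for _ in range(trials)]  # low, high, mode (days)
b = [random.triangular(17, 27, 20) for _ in range(trials)]
milestone = [max(x, y) for x, y in zip(a, b)]               # merge waits for the later path

mean_a = sum(a) / trials
mean_b = sum(b) / trials
mean_milestone = sum(milestone) / trials  # exceeds both path means: the shift right
```

This is the pattern to look for in simulation output: the milestone's expected finish sits to the right of every individual path's expected finish, even though no single estimate changed.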
Resource Leveling Quantitative Effects
Resource leveling refers to the planning methodology in which scarce resources, typically staff with special skills (but also special equipment, facilities, and environments), are allocated among tasks that would otherwise be in conflict for the resource. The situation arises naturally when multiple planners each require a resource and plan for its use, only to find in the summation of the schedule network that certain resources are oversubscribed: there is simply more demand than supply.
The first and most obvious solution is to increase supply. Sometimes this works: certain staff resources can be augmented by contractors or temporary workers, and certain facilities or environments can likewise be outsourced to suppliers. However, the problem for quantitative analysis is the case where supply cannot be increased, demand cannot be reduced, and it is physically impossible to oversubscribe the resource. Some have likened this to wondering whether "nine women could have a baby in one month." It is also not unlike the situation described by Frederick P. Brooks in his classic book, The Mythical Man-Month, wherein he states affirmatively that the thought that a simple interchange of resources, like time and staff, is possible on complex projects is a myth! Interestingly enough, Brooks also states what he calls "Brooks's Law":
Adding resources (increasing supply) to a late project only makes it later!
For purposes of this book, we will limit our discussion of resource leveling to assigning resources in the most advantageous manner so as to affect the project as little as possible. Several rules of thumb have been developed in this regard. Perhaps the most prominent is: "assign resources to the critical path to ensure it is not resource starved, and then assign the remaining resources to the near-critical paths in descending order of risk." Certainly this is a sensible approach; starving the critical path would seem to build in a schedule slip right away. Others argue, however, that in the face of scarce resources the identification of the true critical path is obscured by the resource conflict.
Recall our discussion earlier about correlating or creating a dependency among otherwise independent paths. Resource leveling is exactly that. Most scheduling software has a resource leveling algorithm built in, and you can also buy add-in software to popular scheduling software that has more elaborate and more efficient algorithms. However, in the final analysis, creating a resource dependency among tasks almost certainly sets up a positive covariance between the paths. Recall that by positive covariance we mean that both path performances move in the same direction. If a scarce resource is delayed or retained on one path beyond its planned time, then surely it will have a similar impact on any other task it is assigned to, creating a situation of positive covariance.
Figure 7-11 and Figure 7-12 show a simple example. In Figure 7-11, we see a simple plan consisting of four tasks and two resources. Following the rule of thumb, we staff the critical path first and find that the schedule has lengthened as predicted by the positive covariance. In Figure 7-12, we see an alternate resource allocation plan, one in which we apparently do not start with the critical path, and indeed the overall network schedule is optimally shorter than first planned as shown in Figure 7-11. However, there is no combination of the two resources and the four tasks that is as short as if there were complete independence between tasks. Statistically speaking we would say that there is no case where the covariance can be zero in the given combination of tasks and resources.
Figure 7-11: Resource Leveling Plan.
Figure 7-12: Resource Leveling Optimized.
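The comparison of the two figures can be sketched with a toy model; the task durations and assignments below are hypothetical, each resource simply works its assigned tasks back to back, and no assignment recovers the fully parallel ideal:

```python
# Toy resource-leveling comparison: four tasks, two resources.
tasks = {"T1": 5, "T2": 3, "T3": 4, "T4": 2}  # durations in days

def schedule_length(assignment):
    """Finish time when each resource runs its assigned tasks back to back."""
    return max(sum(tasks[t] for t in ts) for ts in assignment.values())

parallel_ideal = max(tasks.values())               # 5 days if all tasks were independent
plan_a = {"R1": ["T1", "T2"], "R2": ["T3", "T4"]}  # resource loads 8 and 6 -> 8 days
plan_b = {"R1": ["T1", "T4"], "R2": ["T2", "T3"]}  # resource loads 7 and 7 -> 7 days
```

Plan B balances the loads and beats Plan A, yet even the best assignment is two days longer than the independent-task ideal: the resource dependency stretches the schedule, as the positive covariance predicts.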
The "Long" Task
Another problem common to schedule network architecture is the so-called long task. What is "long" in this context? There is no exact answer, though there are many rules of thumb. Many companies and some customers build into their project methodology specific figures for task length that are not to be exceeded when planning a project unless the project manager grants a waiver. Common heuristics for "short" tasks are 80 to 120 hours of FTE time or perhaps 10 to 14 days of calendar time. You can see from these figures that a long task is one measured in several weeks or many staff hours.
What is the problem with the long task from the quantitative methods point of view? Intuitively, the longer the task, the more likely something untoward will go wrong. As a matter of confidence in the long task, allowing for more possibilities to go wrong has to reduce the confidence in the task. Can we show this intuitive idea simply with statistics? Yes; let us take a look at the project shown in Figure 7-13.
Figure 7-13: Long Task Baseline.
In Figure 7-13 we see a project consisting of only one task, and it is a long task. We have assigned a degree of risk to this long task by saying that the duration estimate is represented by a Triangular distribution with the parameters shown in the figure. We apply the equations we learned for expected value and variance and compute the expected value, the variance, and the standard deviation as well. It is handy to have the standard deviation because it carries the same dimension as the expected value: if the expected value is measured in days, so is the standard deviation. The variance, unfortunately, is measured in days-squared, which has no physical meaning, so the variance serves as a figure of merit wherein smaller is better.
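The Triangular distribution's expected value and variance follow standard closed-form formulas; here is a short sketch with hypothetical parameters (optimistic, most likely, pessimistic, in days), since the figure's actual values are not reproduced here:

```python
# Expected value, variance, and standard deviation of a Triangular distribution.
from math import sqrt

def triangular_stats(a, m, b):
    """a = optimistic, m = most likely, b = pessimistic duration."""
    ev = (a + m + b) / 3
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18
    return ev, var, sqrt(var)

ev, var, sd = triangular_stats(40, 50, 75)
# ev = 55.0 days; var is in days-squared; sd is back in days
```

Note how the standard deviation comes back in days, the same units as the expected value, which is why it is the more interpretable risk figure.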
The question at hand is whether we can improve schedule confidence by breaking up the long task into a waterfall of tandem but shorter tasks. Our intuition, guided by the Law of Large Numbers, tells us we are on the right track. If the project manager and the WBS work package managers can redesign the work so that the work package is subdivided into smaller tandem tasks, then even though each task depends on its immediate predecessor, the additional planning knowledge required to break up the long task should bring higher confidence that we have estimated it correctly.
For simplicity, let's apply the Triangular distribution to each of the shorter tasks and also apply the same overall assumptions of pessimism and optimism. You can work examples to show that there is no loss of generality in the final conclusion. We now recompute the expected value of the overall schedule and the variance and standard deviation of the output milestone. We see a result predicted by the Law of Large Numbers, as illustrated in Figure 7-14. The expected value of the outcome (summation) milestone is unchanged: our additional planning knowledge does not change the expected value. However, note the improvement in the variance and the standard deviation. Not coincidentally, the variance has improved by a factor of 1/N, where N is the number of tasks into which we subdivided the longer task, and the standard deviation has improved by 1/√N. We need only look back to our discussion of the sample variance to see exactly where these results come from.
Figure 7-14: Shorter Tasks Project.
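The 1/N improvement can be checked directly. This sketch assumes the long task's triangular parameters scale proportionally across N statistically independent tandem subtasks; the parameter values and N are hypothetical:

```python
# Splitting a long task into N tandem subtasks: same expected value,
# variance reduced by 1/N, standard deviation reduced by 1/sqrt(N).
from math import sqrt

def tri_ev_var(a, m, b):
    """Expected value and variance of Triangular(a, m, b)."""
    return (a + m + b) / 3, (a * a + m * m + b * b - a * m - a * b - m * b) / 18

N = 5
a, m, b = 40, 50, 75                         # the long task's estimates, days
ev_long, var_long = tri_ev_var(a, m, b)
ev_sub, var_sub = tri_ev_var(a / N, m / N, b / N)

ev_total = N * ev_sub                        # equals ev_long: no shift in expectation
var_total = N * var_sub                      # equals var_long / N: less schedule risk
sd_ratio = sqrt(var_total) / sqrt(var_long)  # 1 / sqrt(N)
```

The expected finish is untouched while the spread shrinks, which is precisely the confidence gain the text attributes to breaking up the long task.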
Brooks, Frederick P., The Mythical Man-Month, Addison-Wesley Longman, Inc., Reading, MA, 1995, p. 16.
Brooks, Frederick P., The Mythical Man-Month, Addison-Wesley Longman, Inc., Reading, MA, 1995, p. 25.