Poor Software Project Management


It sometimes seems that managers estimate poorly on purpose. Programmers, mindful of their standing, try hard to meet impossible schedules, and in this way managers can extract more work from them. It is frequently said that management is getting things done through other people. Under-estimation is a way of wringing out every drop of creativity.

You can imagine the pointy-haired boss from Scott Adams' Dilbert cartoons descending the jetway from some trade show, whipping out his cellular phone, and calling one of his engineers. He informs the engineer that the company will build some impressive piece of software by the next trade show, thus setting an artificial due date for an arbitrary, dimly understood piece of software.

It is almost certain that a few months later, someone will realize that the date cannot be met. Requirements could be scrubbed, but that means less functionality. Alternatively, some seemingly big items, such as testing, could be reduced in size, but that means less quality. In the absence of any good data, the boss will not budge from the earlier estimate.

Better, Faster, Cheaper

A lot has been said in the industry recently about the better, faster, cheaper phenomenon, which is certainly relevant here. It once was that people would present these concepts by saying, "Better, faster, cheaper. Pick any two." Choosing to concentrate on two of these concepts made accomplishing the third difficult or impossible.

For example, producing something of high quality and within a deadline is rarely inexpensive. The Apollo Lunar Landing program comes to mind. The lives of human beings were involved, so everything had to work, and work well. The date had been chosen seemingly arbitrarily by the then-President, so no pushback was tolerated. It turned out to be quite expensive; Apollo was that rare government-sponsored program with an effectively indefinite budget. Therefore, it is an example of better and faster, but certainly not cheaper [Brooks79].

Software built quickly by small teams, regardless of the size of the product, rarely survives the rigors of constant use. Therefore, it is not better, although it is faster and certainly cheaper.

The final combination is better and cheaper, but not faster. This is actually the combination that uses resources most effectively. "Not faster" means that there is just enough time to do the work; in other words, adequate time is scheduled for the project.

Unfortunately, in these days of Internet Time, the mantra of better, faster, cheaper means all three at once. Quality cannot be compromised, because customers abandon the source of poor quality; faster is the essence of the current era, as thousands of fast-food franchises can attest. That leaves only cheaper as the factor that could easily give way, but few want to spend more money for fear of not making sufficient profit. Hence, the hope is that the one remaining lever, speed, can be pushed hard enough to meet all the goals.

Perhaps in an attempt to deliver on these three aspects, some companies foundered. This might be one common reason for the rampant failure of dot-com companies.

Task  

Identify a product that is better and cheaper, but not faster. How does it reach its quality goals?

Overtime

Historically, the first reaction to a schedule shortfall was to add personnel. Fred Brooks pointed out the folly of this course in [Brooks75], so the solution is probably wrong from any viewpoint other than cost. Nowadays, however, it is the added cost, rather than Brooks's lesson, that keeps some managers from resorting to it. Many of them still seem to consider making software as linear an activity as ditch digging (see Chapter 1).

Therefore, the problem becomes one of keeping costs down while increasing productivity. You can almost see the boss's hair twitch as overtime is expected until the delivery date. By using the same engineers for a little extra time, you save training costs, communication overhead, and extra benefits, since overtime often is not paid to salaried employees. Therefore, there is no perceived penalty to overtime.

This overtime may range from an hour a day to a half-day a week, sustained for over a year. Either way, it is chronic. One of the authors (Tomayko) observed a friend performing half-day, long-term overtime in the nuclear power industry. Fortunately, reactor code written at 11:30 a.m. on a Saturday did not find its way into an American nuclear reactor!

Steve Maguire, once a Microsoft manager, observed those on chronic overtime, as in the following story adapted from [Maguire94]: an engineer shows up for work at around 10:00 a.m., immediately processes electronic mail, and then works on his or her project for about half an hour. Suddenly, the engineer realizes that the personal bills have not been done that month and does bills until lunchtime. Feeling a little sluggish, the engineer runs with a friend at noon, showers, and eats a quick lunch. Going back to work about 2 p.m., the engineer first checks mail again, and then works for another hour or so. Getting sluggish once more, the engineer goes down the hall for some foosball or table tennis. By this time, it is 5 p.m., and the engineer realizes that not much time has been spent on the project. The engineer eats supper with friends, then goes back to the office and, with steely resolve, works on the project until midnight, then goes home and straight to bed. Nearly eight hours later, the engineer is up and at it again.

The engineer spends more than 14 hours on site, but only about half that time on the project. For the practitioner of chronic overtime, the things normally done in the outside world, like bills, exercise, and evening meals, become part of a normal day. It is no wonder Maguire starts his tenure on a new project by going down the office corridor at 7:00 p.m. to throw out the engineers and send them home [Maguire94].

It has been observed that an hour or two here and there, usually volunteered by the engineer, does increase productivity for a while [Beck00]. As might be expected, an extra half-day per week in an organization lacking flexible hours also causes an initial rise in productivity. However, chronic overtime in a flextime environment quickly causes engineers to revert to their original productivity, even though they are spending many more hours at the job. Their brethren in fixed-time companies achieve the same result, just a little later.

What is wrong with this? Let us say that the engineer described previously is married. If the spouse also works, most likely at fixed hours, then the two may sleep in the same bed, but otherwise will not see each other until the weekend. If children are involved, it is highly likely that they will not be seen either. Those doing the extra half-day of work are absent from some family activities. This is clearly difficult for the engineers and their families.

Expecting this type of behavior as routine can cause a certain amount of resentment, erasing any positive effects of increased productivity. One of the authors of this book (Tomayko) once worked a six-week spurt of overtime. His project delivered on time, largely through heroic effort on the part of the team. The software was never used. The fact that this is still bothersome after 20 years is a symptom of how deep such resentment can run.

Task  

List the pros and cons of daily overtime.

Avoiding Overtime

The best way to avoid overtime is to make better schedules and estimates of the time it is going to take to build the software. Time spent on planning is well spent if it is effective. The remainder of this section discusses several very effective ways of making estimates and tracking them.

Historical Data

Simple historical data is the basis of the first of these methods. We have all noticed that today's weather is much like yesterday's (in the same location). Building software is a lot like weather: software in the same domain with the same functionality takes about the same amount of time to build. We observed a project manager moving a box containing prior estimates to a new cubicle. What had worked for two previous projects in the same domain now worked for a third. Eventually, the project manager transferred to a different domain. Some of the software was new, but some had much in common with previous projects where parts of the domain matched.

Task  

Identify two similar projects, and point out the parts of one that can serve as surrogates for parts of the other.

Clark's Method

Many of the methods discussed here ultimately come down to estimating individual units of software. For individual units, it is possible to be more accurate by using this equation:

LOC = (L + 4M + S) / 6

where M is how big (in lines of code) you expect the software to be, L is the biggest you can ever imagine it being, and S is the smallest you can ever imagine it being. For example: 102 (rounded) = (125 + 4 * 100 + 85) / 6.

This relation is very close to the standard deviation. It was developed by an engineer named Clark for the Polaris submarine-launched ballistic missile project, one of the few government projects delivered ahead of time and under budget [Sapolsky72].

This equation and COCOMO I [Boehm81] were both tried by several hundred experienced practitioners. That use demonstrated the superiority of this relation, especially in the absence of experience with the product. In addition, the relation can be used to estimate the size of individual components; those estimates are then combined to make a more accurate overall estimate. The equation handles small units of code and scales up well.
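
To make the arithmetic concrete, here is a minimal sketch in Python of applying Clark's equation to each component of a hypothetical product and summing the results; the component names and the (S, M, L) guesses are invented for illustration.

    # A minimal sketch of Clark's equation applied per component and then
    # summed for an overall estimate. Component names and the (S, M, L)
    # guesses are invented for illustration.

    def clark_estimate(smallest, likely, largest):
        """Expected size in lines of code: (L + 4M + S) / 6."""
        return (largest + 4 * likely + smallest) / 6

    components = {
        "parser":    (85, 100, 125),   # (S, M, L) guesses in lines of code
        "reporting": (200, 300, 550),
        "database":  (150, 180, 260),
    }

    total = 0
    for name, (s, m, l) in components.items():
        estimate = clark_estimate(s, m, l)
        total += estimate
        print(f"{name:10s} ~ {estimate:4.0f} LOC")

    print(f"{'total':10s} ~ {total:4.0f} LOC")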

Task  

Identify a software product in a domain that you know reasonably well. Divide the product into smaller parts. Use Clark's method to estimate the size of the parts. Add them together and compare the total size with the actual size. Comment on the accuracy of the estimate.

COCOMO II

Planning is difficult, and obviously error prone, in most methods. Many methods, even some of the most detailed [Humphrey95], start with the programmer making an estimate based on the gut feel of experience, which is refined once the day's work is examined. The accuracy of the estimate is based on actual performance. The shortness of a typical modern software engineering development cycle (two or three weeks) keeps the effects of mis-estimation small. Therefore, any refinement of the estimates must support cyclical development. It would also be nice if the estimate could be made at different levels of detail. COCOMO II fills this bill [Boehm95].

COCOMO I is a component of many heavyweight processes. Even its original explication, [Boehm81], was huge. COCOMO I calculated project times and effort from a volume metric, Delivered Source Lines of Code (SLOC). Thus arose an entire industry of size estimation as an input to COCOMO. It also caused endless debates over the definition of a line of code. Do you count semicolons (logical statements) only? Include or exclude comments? At one extreme, professors delighted in circulating an exercise to their classes that produced a count of anywhere from 1 to 10 lines, depending on what you consider to be the components of a line of code. At the other extreme is the Software Engineering Institute's guide to defining a line of code, an exercise of more than 100 pages [Park92]. Fortunately, the practice of having a coding standard simplifies defining a line of code.

COCOMO I was calibrated using only a few dozen projects in one company, TRW, and was highly inaccurate. Chris Kemerer did a study [Kemerer87] showing that COCOMO estimates were about 600 percent low, whether you were using Basic COCOMO (just the equations) or Intermediate COCOMO (raw estimates adjusted by a large number of qualifying factors). An interest group formed around COCOMO and held annual meetings to swap data and recount struggles with the model. Moreover, the technique is fairly useless for products under 10,000 lines of code.

After leaving TRW and a stint in government service, Barry Boehm, who originated COCOMO, accepted an academic appointment. By this time, the early 1990s, he had noticed that the software world had changed and that the waterfall model, which underlay COCOMO I, had been almost completely replaced by iterative models, including his own Spiral Model [Boehm88]. Based on this observation, Boehm enlisted a larger number of companies to provide data on more than twice as many projects as COCOMO I had used. These data were used to calibrate a cyclic replacement for his original model, called COCOMO II [adapted from Boehm95].

Boehm realized that the ease of use of late-generation languages and other tools would make three layers of practice necessary. At one extreme is end-user programming, made possible by scripts used for programming spreadsheets, query systems, and planning systems. At the other end is infrastructure, such as operating systems, database managers, and networking systems. In the middle are application generators and system integrators. The estimation needs of each of these groups are different, so there are different layers to COCOMO II.

Therefore, there are three stages to making a COCOMO II estimate: Application Composition, Early Design, and Post Architecture. The Application Composition model also supports prototyping at any point in the life cycle.

End-user programmers are unlikely to need anything as detailed as lines of code, both because they simply do not need that fine a grain and because their development cycles are so short that they do not have time to develop a detailed estimate. The same holds for prototyping. Therefore, the Application Composition model uses Object Points, not to be confused with the objects of object-oriented development. They may turn out to correspond roughly to such objects, but they are not necessarily the same.

There is a seven-step process for coming up with this estimate. First, count the estimated numbers of screens, reports, and third-generation language components. Second, determine the complexity of each using Table 13.1.

Table 13.1: Determined Complexity According to the Estimated Numbers of Screens and Reports

For Screens (rows are the number of views; columns are the number and source of data tables):

    Number of views | Total <4 (<2 servers, <3 clients) | Total <8 (2-3 servers, 3-5 clients) | Total 8+ (>3 servers, >5 clients)
    <3              | Simple                            | Simple                              | Medium
    3-7             | Simple                            | Medium                              | Difficult
    >8              | Medium                            | Difficult                           | Difficult

For Reports (rows are the number of sections; columns are the number and source of data tables):

    Number of sections | Total <4 (<2 servers, <3 clients) | Total <8 (2-3 servers, 3-5 clients) | Total 8+ (>3 servers, >5 clients)
    0 or 1             | Simple                            | Simple                              | Medium
    2 or 3             | Simple                            | Medium                              | Difficult
    4+                 | Medium                            | Difficult                           | Difficult
The third step is to attach weights to each number in the cells (Table 13.2).

Table 13.2: Complexity Weights for Screens, Reports, and Third-Generation Language Components

    Object Type                          | Simple | Medium | Difficult
    Screen                               |   1    |   2    |     3
    Report                               |   2    |   5    |     8
    Third-generation language component  |   -    |   -    |    10

The fourth step obtains the Object Point (OP) count by adding all the weighted object instances. Fifth, estimate the percentage of reuse and use it to get the New Object Points: NOP = OP * (100 - % reuse) / 100. The sixth step is determining productivity from Table 13.3. This corresponds to velocity in eXtreme Programming (XP).
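
For example, with purely illustrative figures, 30 weighted object points and an estimated 20 percent reuse give NOP = 30 * (100 - 20) / 100 = 24.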

Table 13.3: Productivity According to Developers' Experience and Ability and Development Environment Capability and Maturity

    Developers' experience and ability               | Very low | Low | Nominal | High | Very high
    Development environment capability and maturity  | Very low | Low | Nominal | High | Very high
    PRODUCTIVITY (NOP per person-month)              |    4     |  7  |   13    |  25  |    50

Finally, because PRODUCTIVITY = NOP / person-months, compute the effort as Person-Months = NOP / PRODUCTIVITY.
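
As a rough illustration of the whole Application Composition calculation, the following Python sketch walks through the seven steps with an invented mix of objects, complexity ratings, reuse percentage, and productivity level; none of the figures comes from a real project.

    # An illustrative walk-through of the Application Composition steps.
    # All counts, complexity ratings, the reuse figure, and the productivity
    # level are hypothetical.

    WEIGHTS = {                       # complexity weights from Table 13.2
        "screen": {"simple": 1, "medium": 2, "difficult": 3},
        "report": {"simple": 2, "medium": 5, "difficult": 8},
        "3gl":    {"difficult": 10},  # 3GL components always weigh 10
    }

    # Steps 1-3: the counted objects, each already rated using Table 13.1
    objects = [
        ("screen", "simple"), ("screen", "simple"), ("screen", "medium"),
        ("report", "medium"), ("report", "difficult"),
        ("3gl", "difficult"),
    ]

    # Step 4: Object Point (OP) count = sum of the weighted instances
    op = sum(WEIGHTS[kind][rating] for kind, rating in objects)

    # Step 5: adjust for reuse to get New Object Points (NOP)
    reuse_percent = 20
    nop = op * (100 - reuse_percent) / 100

    # Steps 6-7: pick a productivity level (Table 13.3) and compute effort
    productivity = 13                 # NOP per person-month, "nominal"
    person_months = nop / productivity

    print(f"OP = {op}, NOP = {nop:.1f}, effort = {person_months:.2f} person-months")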

The Application Composition model provides enough structure for the estimations needed in the XP Planning Game. The Early Design model uses function points, and the Post Architecture model uses COCOMO I. The frequency of XP cycles makes these additional components of COCOMO II too cumbersome to use. However, those interested in function points are directed to [Albrecht and Gaffney83]; those who want to use COCOMO I should see [Boehm81].

Basically, in a noniterative production schedule, Application Composition estimates are done early in the software development life cycle, function points after higher-level analysis, and COCOMO I after detailed design. Therefore, there are at least three iterations to the estimate.

Task  

Take a software product and try to identify the number of person-months to build it using COCOMO II.

Earned Value

So far, we have reviewed some estimation methods. Before moving on, we discuss what some consider a very accurate tracking mechanism. Tracking software development is important: estimates that are clearly wrong have to be redone, and consistency with the budget must be tracked at all times.

Currently, this tracking is done with status meetings. Some projects use weekly status meetings and some wait for several weeks to elapse. Obviously, frequent iterations tend to favor weekly meetings.

In Earned Value tracking, an engineer does not get credit for something until it is completely finished. No more "90 percent done." And 100 percent of a component may be only a small percentage of the product.

Let us consider this in more detail. Think of a product with 20 roughly equal parts. If we consider percentages again, nine components that are 100 percent complete amount to 45 percent of the product. However, we might actually be farther along than 45 percent, as we may have done some work on other components. When all components are 100 percent done, the product is 100 percent done. If you are going to use this tracking method, it is important to explain it to the clients, especially ones who are used to hearing developers say that they are 90 percent done for half the length of the project. Otherwise, they will not understand why you have been working for months and are only a small percentage finished.

Let us take an example from our 20-item product. If there is an overall architecture, and it is a deliverable, that architecture probably is 90 percent done, or we would not know what to do next. It is not completely done, however, so we get no credit for it, making some obvious work seem untracked. Clearly, we should use abstraction more extensively, so that the components stand alone. Otherwise, in a waterfall life cycle with some feedback, we would be 0 percent done until the feedback ceases. That would seem very strange to most clients!

Iterative methods are favored when using earned value to track progress, because the components developed are small and are therefore finished quickly. This forces the client to divide the problem. The role of tracker in agile processes is easier when earned value is used.
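
A minimal sketch of this all-or-nothing bookkeeping, with made-up component names and completion states, might look like the following in Python.

    # A minimal sketch of all-or-nothing earned-value bookkeeping.
    # Component names and completion states are made up.

    components = {
        "login":        True,    # True = 100 percent finished
        "reports":      True,
        "billing":      False,   # partially done work earns no credit
        "admin screen": False,
    }

    done = sum(1 for finished in components.values() if finished)
    earned = 100 * done / len(components)
    print(f"{done} of {len(components)} components done: {earned:.0f}% earned value")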

Task  

Pretend you are walking down the hall to a project status meeting with the client. You have been using earned value to track progress in the project. Explain how it works before you reach the meeting room.

The Planning Game

The Planning Game of XP has an entire book devoted to it [Fowler01]. Here we will follow the inputs and outputs to the Planning Game in some detail. Hopefully, in this way, it can become obvious how to use it for better estimation.

Initially, everyone (the clients, the developers, and management) agrees on the length of the iteration, usually two to six weeks. This roughly sets the number of tasks that can be done.

The client prepares story cards. These can be both functional and nonfunctional. For example, "The user selects what function to execute," or "The functions are presented in a clear manner." Requirements can later be derived from these stories. For example, "selects" indicates that there is more than one function, and "clear" is a nonfunctional quality attribute, presently undefined in detail.

The client prepares enough of these story cards to create slightly too much work for the iteration. Handing these cards to the developers is the first move in the Planning Game. The second move is considering velocity (see Chapter 2, "Software Engineering Methods"): the developers estimate the time needed to do each story. Since each developer has a slightly different velocity, the person estimating the work takes ownership of the task. Developers can use either Clark's Method or COCOMO II to estimate the size of, or time required for, the task implied by the story.

These results are then returned to the client. The client considers the market and the length of the tasks and returns the set of stories to the developers in a prioritized manner.

The developers spread out the prioritized tasks. Some are done immediately, and some are delayed indefinitely. For example, the "clear" story may wait for the look and feel of the product to be decided in a later iteration, when all functions are done, or the look and feel may be settled immediately.

The developers split the stories up by ownership and iteration time. Some tasks might change hands at this point. Note that tasks may not correspond one-to-one with stories. For example, achieving a story's function may take several abstractions, and therefore several tasks. The tasks are noted on the story cards. The same programmer may end up doing all of a story's tasks, or some may be split off at this point.

We want each developer to finish at about the same time. This time is the length of the iteration. The customer is asked whether this iteration is too long to return business value. If it is, then some tasks are delayed. The idea is that the length of the iteration and the time to complete most of the stories should match and be short.

The developers then work on the code, implementing functions and quality attributes defined by the stories. When they and the iteration are finished, the product is delivered to the customer for release. The income from this release can partially fund later releases.
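
One way the Planning Game bookkeeping could be mechanized is sketched below in Python, assuming invented stories and estimates and a hypothetical velocity of ten ideal days per iteration; stories are taken in priority order and whatever does not fit is deferred.

    # A rough sketch of the Planning Game bookkeeping: the client prioritizes
    # stories, the developers estimate them, and stories are accepted in
    # priority order until the iteration budget (the team's velocity) is spent.
    # Story names, estimates, and the velocity figure are invented.

    stories = [                  # (priority, story, estimated ideal days)
        (1, "select a function to execute", 3),
        (2, "withdraw cash",                5),
        (3, "print a receipt",              2),
        (4, "clear look and feel",          4),  # may be deferred
    ]

    velocity = 10                # ideal days the team can finish per iteration

    planned, deferred, remaining = [], [], velocity
    for priority, story, estimate in sorted(stories):
        if estimate <= remaining:
            planned.append(story)
            remaining -= estimate
        else:
            deferred.append(story)

    print("this iteration:", planned)
    print("deferred:      ", deferred)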

Task  

Conduct the Planning Game for two iterations of developing software for an automatic teller machine (ATM).



