Agile Planning Building Blocks

In this section, we look at various planning techniques that work particularly well on an Agile ICONIX project.

Agile planning is not normally seen as a stand-alone discipline, so attempts to define it (like agility itself) can result in a somewhat nebulous description. Indeed, the very name “agile planning” might strike some people as oxymoronic. How can you create a plan for something that by its nature keeps changing?

We see agile planning as something that operates at two levels:

  • Adaptive planning: Planning ahead, tracking changes, and adapting the plan as the project progresses.

  • Planning for agility: Preparing for the unknown, so that when changes take place, their impact is kept to a minimum. Agile planning is a way of anticipating and controlling change.

Planning for agility is an essential (and fundamental) aspect of agile development, because it addresses the problem of inexact, inaccurate, and evolving customer requirements. ICONIX Process helps a lot in this matter, as it focuses on identifying gaps in the use cases/requirements and helping to disambiguate them (see [3.]).

For now, let’s look at the first item, adaptive planning, which is closer to what people generally mean by the term “agile planning.” The techniques described in this section are ways of organizing and maintaining an agile (i.e., flexible) plan. These techniques work well together, but they can also be applied individually as needed. The following are essentially the “core practices” of agile planning:

  • Prioritizing requirements

  • Estimating

  • Release planning

  • Tracking project velocity

  • Timeboxing

In addition, the following practices should also prove useful:

  • Tracing requirements back to goals

  • Minimizing the use of Gantt charts (at least for technical planning)

Let’s look at each of these practices in more detail. All of them can be used to good effect on an ICONIX project.

Prioritizing Requirements

The following conversation, in various forms, has probably been repeated thousands of times in just as many projects and organizations:

  • Manager: I need these three items done by next week.

  • Programmer: I have time to do only one of them. Which one has the highest priority?

  • Manager: They all do!

(Depending on the type of manager, this might be followed up with, “Achieve wonderful things—make me proud!”)

Of course, the problem here is that all three items really do have a high priority. If the available resources just aren’t sufficient to cover the work required in the allotted time, then something’s got to give. The worst-case scenario is that the programmer “goes for it,” attacks all three items, and ends up completing none of them in time. The best-case scenario is that one of the items gets done in time, and the other two, just like the programmer said, don’t get done due to lack of time. However, the item that was done ended up being chosen by the programmer (on a technical, personal, or arbitrary basis) rather than by the manager or customer (using more business-oriented criteria).

An extension of this problem is when, at the start of any iteration, the customer is faced with a list of outstanding features, only a few of which can be shoehorned into this iteration. If we consider the manager’s inability to decide which of the three features was the highest priority, the same problem is multiplied many times when it comes to prioritization of multiple features over an extended period of time. Then multiply this by the number of actual project stakeholders, bearing in mind that few projects are driven by just a single individual.

Clearly, we need an effective method of eliciting some realistic decisions from the project stakeholders, so that all involved (customers and programmers alike) are happy with what has been planned for the next iteration.
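Once the stakeholders have committed to a single, honest priority for each feature, filling an iteration becomes almost mechanical. As a minimal sketch (not from the book; the feature names, priorities, and estimates are invented for illustration):

```python
# Illustrative sketch: selecting features for an iteration once
# stakeholders have assigned each one a single priority.

def plan_iteration(features, capacity_days):
    """Pick the highest-priority features that fit into the iteration.

    `features` is a list of (name, priority, estimate_days) tuples,
    where a lower priority number means more important.
    """
    chosen, used = [], 0.0
    for name, priority, estimate in sorted(features, key=lambda f: f[1]):
        if used + estimate <= capacity_days:
            chosen.append(name)
            used += estimate
    return chosen

features = [
    ("Show hotel info on web page", 1, 1.5),
    ("Generate hotel map", 2, 1.0),
    ("Create ShapeFile for map server", 3, 1.5),
]

print(plan_iteration(features, capacity_days=2.5))
# → ['Show hotel info on web page', 'Generate hotel map']
```

The important design point is that the priority ordering comes from the customer and stakeholders, not from the programmers; the programmers supply only the estimates and the capacity.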

Estimating

Being able to estimate how long something is going to take is, of course, vitally important in software development. Unfortunately, it’s also an activity toward which many teams take the least scientific approach. Estimates can range from educated guesses to random numbers. A programmer’s “estimation algorithm” frequently involves watching the project manager’s face and giving a series of steadily descending numbers until finally the manager stops frowning. Often, a project manager will choose numbers that he likes without even consulting the programmers, the people who will be doing the work.

Note 

There is also a difference (which often isn’t taken into account) between effort and duration. Most estimates are made on effort, and duration is almost always longer than effort.

With a little more method and a little less madness, it’s possible to produce some quite accurate estimates. As with many things, the more you practice estimating, the better at it you get. There will always be some intuition involved, but it is also possible to be quite methodical in estimating work.

The main way to make work easier to estimate is to break it down into smaller chunks.[4.] Each chunk of work needs to be discrete (i.e., have as few dependencies as possible). This in turn reduces the need to wait for other parts of the project to be finished.

Programmers beginning a new piece of software are generally embarking on a trip into the unknown. As such, it’s understandable that they don’t like to be pinned down to a particular deadline. If they don’t want to supply a completion date, it’s because they genuinely don’t know. During the planning and analysis stages, getting the programmers involved helps, because they can do exploratory prototyping work to get a feel for how long the “real thing” will take.

Three-point estimating is particularly useful for extracting estimates from reluctant programmers. Sometimes programmers do have a good idea of how long a piece of work will take but are reluctant to commit, just in case it takes longer. Asking them to specify a range (e.g., 3–6 days) is one possibility, but it’s also a bit too fuzzy for planning purposes. Three-point estimating, on the other hand, seems to extract uncannily useful (i.e., accurate) estimates from programmers! The method involves coming up with three effort estimates for each software feature: the best possible case, the worst possible case, and the most likely case. A spreadsheet can be used to combine the three-point estimates in a number of ways.

Once the team of programmers has supplied three-point estimates, it’s possible to calculate the best estimates for a number of features, using the following formula:

E = (b + 4m + w) / 6

where b represents the best possible case estimate, w the worst possible case estimate, and m the most likely case estimate.

For example, say a programmer, Bob, gives this estimate for retrieving and then displaying hotel details on a web page:

  • Worst possible case = 3 days

  • Most likely case = 1 day

  • Best possible case = 0.5 days

This could be shown on a spreadsheet, along with some other estimates, with the worst possible case/most likely case/best possible case estimates shown in brackets:

Show Hotel Information on Web Page    [Bob]      (3/1/0.5)
Create ShapeFile for Map Server       [Bob]      (2.5/1/1)
Generate Hotel Map                    [Sally]    (2/0.5/0.5)
Totals                                           (7.5/2.5/2)

So to calculate the best estimate for Bob’s first task (“Show Hotel Information on Web Page”), we would use this:

E = (0.5 + (4 × 1) + 3) / 6 = 1.25 days

The best estimate for all three items totaled would be

E = (2 + (4 × 2.5) + 7.5) / 6 = 3.25 days
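The spreadsheet arithmetic is easy to automate. Here is a small sketch, assuming the standard weighted-average form of the three-point formula, E = (b + 4m + w) / 6, with the most likely case counted four times; the task data matches the worked example above:

```python
# Weighted three-point estimate: the most likely case counts four times.
def three_point(b, m, w):
    """b = best case, m = most likely case, w = worst case (in days)."""
    return (b + 4 * m + w) / 6

# (best, most likely, worst) in days, from the example spreadsheet.
tasks = {
    "Show Hotel Information on Web Page": (0.5, 1, 3),
    "Create ShapeFile for Map Server":    (1, 1, 2.5),
    "Generate Hotel Map":                 (0.5, 0.5, 2),
}

for name, (b, m, w) in tasks.items():
    print(f"{name}: {three_point(b, m, w)} days")

# Combining the column totals (b=2, m=2.5, w=7.5) gives the same result
# as summing the per-task estimates, because the formula is linear:
print(three_point(2, 2.5, 7.5))   # 3.25 days
```

Note that because the formula is a simple weighted average, applying it to the column totals and summing the per-task estimates give identical answers.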

Note 

Three-point estimating is described further in the book Better Software Faster by Andy Carmichael and Dan Haywood.[5.]

The techniques described in this section are particularly useful for agile planning, because they allow estimates to be produced at a relatively early stage (these estimates are later refined as the project progresses), when the overall design hasn’t yet stabilized.

Release Planning

As the name suggests, release planning is all about breaking the project into smaller releases and planning for each release. To achieve this, the project is divided into small, fixed-length iterations of about a month each. We also plan incrementally—that is, we revisit the plan frequently (at a minimum once per iteration), adjusting it as we go.

The advantage of the fixed-length iteration approach is that it introduces a regular heartbeat to the project, a rhythm of design/code/test/release (or design/test/code/release, depending on your methodology). Very quickly, the whole team settles into this rhythm. The iteration size is key: we keep iterations small so that we get the optimum balance of feedback, urgency, planning, focused work, and delivery time for each project deliverable.

It’s preferable to keep the iteration sizes fixed, but if the project release plan is at odds with this approach, it isn’t worth having a cow over. The fixed-length iteration approach is either highly recommended or mandated by just about every agile process in existence, but it can also be surprisingly difficult to apply to real-world business constraints, where some functionality must be delivered by a certain date.

To put it another way, small fixed-length iterations and frequent short releases are often at odds with each other. If you aim to produce a working piece of software at the end of each month, but you’re also producing point releases of the product every couple of months or so, then the effort of releasing the software halfway through a month will be constantly disruptive to the effort of incrementally producing new working software at the end of each month.

Generally, we’ve found that using fixed 1-month iterations (timeboxing, to use DSDM parlance) works extremely well, but we also recommend that you tailor the approach appropriately for your project. If your project is regularly releasing new functionality to the customer (e.g., every 2–3 months), then you may find it sufficient to use this as the project heartbeat. In other words, align the iterations with the releases.[6.] An iteration size of 2–3 months is really the maximum, though. Anything longer and you should seriously think about introducing a shorter iteration size within the release schedule.

Tracking Project Velocity

Project velocity is, simply, the speed at which the programmers are completing their programming tasks. It’s used in XP to adjust estimates of “ideal development days”[7.] required to complete a feature to the actual effort that the team eventually took to complete the feature. This is done repeatedly and ruthlessly throughout the project, so that eventually (we hope) the team gets better and better at estimating its work.

An “ideal day” is a day when everything goes right and there are no interruptions (annoying meetings, lengthy phone calls from depressed-sounding spouses, nasty and unexpected network outages, illnesses, etc.). It also represents the time spent purely working on the feature in question, and it doesn’t include other activities such as helping your pair programmer on other tasks.

To effectively track velocity, you need to break down the work into chunks of work of roughly equal size (in terms of the effort taken to complete them). Of course, this won’t always be possible, because some atomic items of work are different sizes than other atomic pieces of work; one item might take a day to complete, the other 1.5 days. Generally, this shouldn’t be too much of a problem, but if it is, try using an arbitrary unit of measurement (e.g., Gummi Bears) to measure a feature’s complexity, or preferably use a smaller base unit (e.g., ideal hours instead of ideal days).

Work estimates wherever possible should be based on prior experience of writing a feature of a similar size and nature, so if writing a data entry screen last month took a day, a similar screen this month should also take about a day. Using historical data to make new estimates is key to this approach—it means we’re not “guesstimating”; rather, we’re measuring based on previous experiences.

As with estimates, the historical data needs to be recorded in terms of ideal days. By tracking both the estimated time taken and actual time taken on a weekly basis, we can start to track the project velocity. If fewer of our equal-sized features were delivered this week than last week, then the velocity has dropped.

As we get better at estimating work in terms of ideal work days, if we start to find that the actual time spent on a feature is greater than the estimated time spent, then it could be for one of two reasons:

  • The programmers are still underestimating the work.

  • There have been too many external interruptions recently (meetings, etc.) distracting the programmers from their work.

If the programmers are underestimating the work (a common industry affliction), perhaps the work units are too large and need to be further broken down, or (equally likely) the estimates weren’t based on historical data.
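The bookkeeping behind velocity tracking is trivial. A minimal sketch (the weekly numbers are invented for illustration), measuring velocity as the ratio of estimated ideal days to actual days taken:

```python
# Illustrative velocity tracking: each completed task is recorded as
# (estimated_ideal_days, actual_days).

def velocity(completed):
    """Return estimated/actual for a week's completed tasks.

    1.0 means estimates were spot on; below 1.0 means the work took
    longer than estimated (interruptions, or underestimation).
    """
    estimated = sum(e for e, _ in completed)
    actual = sum(a for _, a in completed)
    return estimated / actual

week1 = [(1, 1), (1, 2), (0.5, 0.5)]    # 2.5 ideal days took 3.5 actual
week2 = [(1, 1.25), (1, 1), (1, 1.75)]  # 3 ideal days took 4 actual

print(round(velocity(week1), 2))
print(round(velocity(week2), 2))
```

Tracked week over week, a falling ratio is the early-warning signal described above: either the estimates aren’t grounded in historical data, or external interruptions are eating into the programmers’ time.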

OR, WE COULD ACTUALLY ESTIMATE BASED ON OUR MODELS INSTEAD OF USING GUMMI BEARS

A somewhat more scientific alternative to measuring work units in Gummi Bears is to represent logical software functions as controllers on our robustness diagrams, and to assign a complexity level and a responsible programmer to each controller. This functionality is built into tools such as EA (see the following image). It’s then a simple matter to export this information to a spreadsheet. This doesn’t mean we’re giving up the adaptive aspects of planning, but it does allow us to do a much better job on the predictive end.

[Image: Enterprise Architect screenshots showing complexity levels and responsible programmers assigned to controllers]

Timeboxing

Timeboxing, like much else in software agility, has sprouted more than one definition. We describe here the two main contexts in which the word “timeboxing” is used. (The first is the more formal definition, whereas the second is the informal way that’s commonly used in fast-paced, buzzword-peppered management meetings.)

Planning in Fixed Iterations

The timebox has a set period, usually a few weeks at most. Planning takes place at the start of each timebox. The project stakeholders decide what features to include in the next timebox, and the programmers estimate the features. If there are too many features to fit, one or more features are taken out. (Similarly, it may turn out that more features can be added.)

Timeboxing has its roots in DSDM, but a similar mechanism appears in almost all modern agile processes. Scrum uses fixed monthly iterations known as sprints. The list of features that may be included in the next iteration is known as the sprint backlog. The project stakeholders decide at the start of each monthly sprint what features from the backlog to work on during the upcoming month.

Similarly, fixed iterations in XP involve the on-site customer deciding at the start of each iteration which features (or user stories) to include. A big difference is that the feedback loop in XP is much shorter—the fixed iteration can be as short as a single week.

Whatever the iteration size (we generally recommend a month), timeboxing of this sort is an essential part of pretty much any agile project. The approach helps to reduce risk, because the feedback loop between developers and users is kept short. That is, the customer and end users get to say “This is wrong” or ask “Can this be changed?” sooner rather than later.

Placing a Limit on Indeterminate Work

Outside of DSDM, the word “timebox” is increasingly used to indicate that a certain amount of time (less than a single iteration) has been set aside for a certain activity, and the programmers involved will fit as much “stuff” into the timebox as they can. While this sounds ad-hoc, it can occasionally be useful for activities that by their nature don’t have specific criteria for when they’re finished. Specifically allocating person-days for the activity helps because it establishes a definite endpoint for the activity.

For example, if a module has been written without very many unit tests, the team members might plan to spend 2 or 3 days simply writing tests and fixing the bugs that they will inevitably uncover as they go. The expression that appears on the project plan might be “Timeboxed unit tests for XYZ module – 2 days.”

Although occasionally useful, timeboxing in this sense should be used with care because of its ad-hoc nature. Actual features with specific endpoint criteria should instead be planned for individually.

Timeboxing when used in this context is typically also accompanied by a variety of Buzzword Bingo phrases such as “touching bases,” “leveraging the extranet,” and “empowering the employees.”

Tracing Requirements Back to Goals

Every project should have a set of goals to drive it. These goals are defined at a fairly high level—a higher level than requirements, for example. They’re kind of like a project mission statement in bullet-point form. In fact, a typical project probably wouldn’t have more than five or ten goals. The purpose of the goals is partly to keep the project focused, but their primary purpose (from an agile planning perspective) is to help us gauge the impact of any potential change in requirements.

The idea is to keep each requirement traced back to one or more project goals. This is easier than it sounds, mainly because there won’t be many goals to trace back to. So, for example, one of the goals might be

[GOAL 3] To put our entire range of services online using a secure Internet connection.

And a couple of the requirements might read

ACC-10. The user must be able to log in at any time and check his/her account balance. [GOAL 3]

ACC-20. The user must be able to transfer money to or from his/her account. [GOAL 3]

This helps our agile plan in a number of ways. Midway through the project, it’s likely that at least some of the requirements will start to change. If either requirement in the preceding example were to change, we could check it against its associated goal to make sure that the change doesn’t break the original goal.

Similarly, if the project takes a new strategic direction, then it’s likely that one or more of the project goals themselves are going to need to change. For example, GOAL 3 might need to be amended so that this first version of the project will only allow read-only access to the company’s range of services—that is, the users can log in and browse, but they can’t yet change anything. By checking all of the requirements that are traced back to GOAL 3, it’s relatively easy to determine which requirements need to be changed or removed.

Tip 

This approach is also useful for eliminating redundant requirements. In other words, we might find that certain requirements can be removed altogether without breaking the high-level goals in any way.

Traceability is sometimes seen as nonagile, because it involves additional documentation and extra work to keep the documentation and traceability matrices up to date. We hope we’ve demonstrated here, though, that a small amount of traceability (without going too wild) can actually help agile projects by indicating which requirements are linked to a common set of goals.
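Because there are so few goals, the traceability “matrix” can be as lightweight as a dictionary. A minimal sketch (ACC-10 and ACC-20 come from the example above; ACC-30 is an invented, untraced requirement):

```python
# Lightweight requirements-to-goals traceability.
requirements = {
    "ACC-10": {"text": "Log in and check account balance", "goals": {"GOAL 3"}},
    "ACC-20": {"text": "Transfer money to/from account",   "goals": {"GOAL 3"}},
    "ACC-30": {"text": "Print a welcome banner",           "goals": set()},
}

def impacted_by(goal):
    """Requirements that must be reviewed if `goal` changes."""
    return sorted(r for r, info in requirements.items() if goal in info["goals"])

def redundant():
    """Requirements traced to no goal: candidates for removal."""
    return sorted(r for r, info in requirements.items() if not info["goals"])

print(impacted_by("GOAL 3"))  # ['ACC-10', 'ACC-20']
print(redundant())            # ['ACC-30']
```

If GOAL 3 is amended to read-only access, `impacted_by("GOAL 3")` lists exactly the requirements to revisit, and `redundant()` flags requirements that can be dropped without breaking any goal.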

Minimizing the Use of Gantt Charts

Unfortunately, the Gantt chart will always be with us, taunting developers with its quantitative and predictive demands. It’s increasingly obvious that Gantt and PERT charts just aren’t realistic ways of planning software projects, at least not at the technical level. More appropriate methods of planning tend to be less about predicting X, Y, and Z activities down to the nearest hour, and more about tracking progress and adapting expectations over time. These more appropriate methods reflect the fact that software development is a creative, “fuzzy” activity as much as a precise engineering discipline.

WHAT ABOUT DEPENDENCIES?

Gantt charts do, of course, help with tracking precedence between individual engineering tasks—that is, task A must be completed before task B can begin. While we recognize that these planning dependencies can’t be completely eliminated, agile practices (and plain old-fashioned good design principles) do help to reduce them, for example:

  • Following the design principles of high cohesion and low coupling helps to reduce or eliminate dependencies between classes, allowing them to be tested and developed in any order.

  • Mock objects can also help to overcome the dependency problem, especially when combining ICONIX modeling with a test-driven approach (see Chapter 12), because it means we can develop classes that use a “mock interface” before the interface has been implemented, as long as we have a set of unit tests to assert that the results are as expected.


Technical development work planning should involve tracking progress on a series of fine-grained engineering tasks and adjusting expectations based on the rate at which these tasks are being completed. These expectations can then be projected onto higher-level (less detailed, less technical) plans which, as you’d expect, become steadily more accurate the further into the project we get.

However, this type of planning isn’t going to keep upper management happy. In fact, presenting management with a list of engineering tasks and the level of progress on those tasks is probably only going to annoy them. The problem is, “planning” of this sort is of no use to the big boss, because it doesn’t tell him what he needs to know.

Business-level managers love Gantt charts, because Gantt charts represent a model of the world that makes sense at their level. Therefore, in whatever way we approach planning at the technical/project level, at some point our technically precise plan will need to be translated into a nice obtuse Gantt chart to keep upper management happy. Essentially, project plans of this sort are required to keep management apprised of the project’s overall progress. If the managers can’t see, and plan for, progress at a high level, then they start to become uneasy. So you need to hide the trees so that management can see the forest.

When producing project plans, a common error is to put technical information on them. It’s important that with a project plan, just as with any other type of document, you know your audience. Once you acknowledge that the plan is really just for upper management, then you can begin to tailor it according to the information that managers really want to see. It’s likely that all they actually want to know about is the following:

  • Is the project on track?

  • Is the next release (e.g., in 2 months’ time) going to be late?

  • What will be in the next release?

  • Has anything slipped to the next release?

Thus, we can eliminate all the engineering tasks from the project plan (at least from this particular form of plan—obviously, you need to track the engineering tasks somewhere!). So the project plan simply needs to consist of a list of requirements, or high-level deliverables (use case names would be an appropriate level), or even just the project goals (depending how fine-grained they are).

Tip 

Try to avoid the lure of putting “percentage complete” values on any project plan, because these are—let’s face it—valueless and often misleading. You’re not fooling anyone with that “90% complete” talk![8.] It’s probably better just to include the estimated completion date, if there is one, or to include a range (see the three-point estimating discussion earlier in this chapter in the section “Estimating”).

[2.]Dean Leffingwell and Don Widrig, Managing Software Requirements: A Use Case Approach, Second Edition (New York: Addison-Wesley, 2003).

[3.]Karl E. Wiegers, Software Requirements, Second Edition (Redmond, WA: Microsoft Press, 2003).

[4.]Of course, having a well-defined use case model and detailing the use cases with robustness diagrams is also an enormous help here.

[5.]Andy Carmichael and Dan Haywood, Better Software Faster (Upper Saddle River, NJ: Prentice Hall, 2002). See in particular Chapter 5, “The Controlling Step: Feature-Centric Management.”

[6.]Leading Doug to speculate that since this is so simple, why do anything else?

[7.]Kent Beck and Martin Fowler, Planning Extreme Programming (New York: Addison-Wesley, 2000).

[8.]As we all know, the first 90% is the easy part; it’s that second 90% that’s a killer.



Agile Development with ICONIX Process: People, Process, and Pragmatism
ISBN: 1590594649
Year: 2005
Pages: 97
