Questions and Answers


Is there any proof that iterative development is worthwhile, or better in some respects than the waterfall?


Yes. For the whole story, see the Evidence chapter (p. 63).


For example, will an agile or iterative method make my team more productive?


Yes, there is some evidence, both for productivity and defect reduction (p. 76). However, beware silver-bullet "faster, cheaper" claims made speculatively by various agile method promoters. One must also be mindful of the Hawthorne Effect when new methods are introduced: individual productivity and other behaviors temporarily improve when people adopt new methods and know they (or their project results) are being studied.

Sustainable productivity improvements arise only over a long period of effort and change, with long-term initiatives and management support. Gerald Weinberg once cautioned against claiming that anything would make more than a 10% improvement; wise advice.

Perhaps more important than asking about more productivity is asking about less failure. Software project failure rates are high, averaging 23% in the USA in 2000. Rather than first searching for the next 2% productivity improvement, a more rational first priority may be to prevent some of the estimated $50 to $70 billion lost each year in the USA on cancelled software projects.

Interestingly, investigation by the Standish Group into project failure indicated that this level of failure has not been visible at the CEO and VP levels; not all executives know how bad it is in their software groups. To quote from their 1998 report:

For years [software] project failure was simply not discussed. And it certainly was not discussed with the CEO.

In this context, perhaps the most relevant value of IID is that it is correlated with lower failure rates. And, through early feedback, it leads to products more closely aligned with what customers really want.


How do you plan an iterative project?


See tips starting on p. 248.


My customer expects a week-by-week schedule and detailed PERT chart. What should I do?


See "Rolling Wave Adaptive versus Predictive Planning" on p. 253.

One approach is to present a short seminar on IID and client-driven adaptive planning, and propose starting the next project based on this approach. If they are not satisfied after some time period, you will agree to revert to their desire for a predictive plan.

If that isn't possible or doesn't work, another approach is to create the predictive plan they desire, but then run the project with client-driven adaptive planning. At the end of the first iteration, show the customers a demo and update them on progress and new insights. They too will have new insights and priorities based on this feedback. Then invite them to tell you their current priorities and refinement of ideas. Covertly, the project may then slip into adaptive planning as the customer directly sees the benefit of their ability to guide it iteration by iteration.


How to handle fixed-price contracts when applying an IID method?


Also see the follow-on related question and answer: Can IID be applied with contracts (usually fixed-price) in which we are forced to do major up-front requirements analysis? (p. 303)

There are at least two ways to answer this: the ideal, and the usual.

The ideal, which has been sold with success by various consulting companies, is to organize the project into two contract phases. Contract phase one corresponds to inception and elaboration (or at least a good part of elaboration) in the UP. See "Fixed-Price Contracts" on p. 18.

Note how this approach reduces the risk for the customer. In phase one they paid for tangible results that moved the project forward, but they didn't commit everything to the project or to one service provider. Also, it is desirable to hire very talented people for phase one, to create a solid foundation; it may then be possible to use less expensive, average resources for phase two. Finally, for phase two, the customer will more reliably know the true costs and duration, and will be working with a provider who has a greater chance of remaining both solvent and sane, as the provider accepted the challenge of the project with sufficient initial information to make an informed (rather than desperate) choice.

In summary, this approach balances the risks for the customer and service provider.

As an aside, some consulting companies have run a refinement of this phase one in which one developer from the customer side joined the consulting team. They not only provided insight for the project, but also insight into the culture and leaders of the organization. During an end-of-phase demonstration to the customer, the customer-developer herself led the demonstration created with the consulting team, creating an in-selling effect to help the consulting company win the phase two contract.

The more commonplace answer to the contract question is that service providers are forced to create fixed-price bids without the luxury of the above phase one, and are taking a larger risk. In this case, they bid however they prefer. (Note, as an aside, that the Wideband Delphi estimation technique can improve their estimates; see p. 260.) Yet, running the project iteratively still gives them an advantage: since iterative development is about tackling the high risks and hard elements in early iterations, they will receive early feedback about how much trouble they are really in, or not! They will discover more quickly if their estimate of the cost or difficulty was low, and be able to take early mitigating actions, such as hiring experienced specialists, looking for preexisting components, early expectation management, and so forth.

Furthermore, by running the project iteratively, they will be showing early visible results of value to the customer. There is an increased chance, through thus winning the confidence of the customer, that they will be able to renegotiate some of the difficult fixed-price contract terms at the suitable strategic moment.


Can IID be applied on projects or contracts (usually fixed-price) in which we are forced to do major up-front requirements analysis?


Yes. Even with "the complete requirements," developing via many short iterations has advantages.

To reiterate points from a prior answer, the team receives early feedback about how much trouble they may be in, by building, integrating, and testing early and often (there's nothing like programming to discover what you don't know). This approach drives down the risks, shakes out the requirement bugs, and provides opportunities for reacting sooner rather than later to major problems.

There will be early tangible results for the customer (user, marketing manager, …), leading to confidence building and quality feedback. If the product needs to be rolled out earlier than planned, there's something available.

And research suggests that developing in short, timeboxed steps is associated with higher productivity and lower defect rates.


What are typical risks and mistakes when adopting an iterative process?


Near the top of the list of mistakes or risks is that the customer or executive management does not understand and accept the change in values and practices, or appreciate how deep and far-reaching the changes need to be. I see this manifest in situations like "Congratulations! We've adopted <iterative process X>. When will the requirements be finished, so we can decide how to design the system?" Or, "It's budget season. Please take a few weeks to identify all the projects for next year, and how much they will cost and how long they will take." Inconsistent culture and expectations derived from waterfall or mass-manufacturing values clash with an iterative and agile approach. As another example, the customer does not actively participate, iteration by iteration. And so on.

The solution includes having an executive and customer champion who understands and can communicate this to their peers, education seminars and learning aids (e.g., this book) for these groups, and "post-partum" project sessions in which the stakeholders share their experiences with other customers and management.

Another common mistake is changing or adding to the goals of an iteration, once underway. In a sea of constant change and chaos, some stability and control is necessary. That comes from leaving the team alone, once they've committed to an iteration. Save the change requests for a future step.

Another problem is using so-called iterative or agile consultants or consulting organizations who don't really comprehend evolutionary, adaptive development. They superimpose waterfall values on top of an iterative process, or try to recast their old waterfall process as an iterative one. Then we get corruptions such as misinterpreting the UP inception phase as requirements, elaboration as design, and so forth. Or, we get promotion of excessive specifications and other documents, instead of early programming. Or, we get predictive planning in which a plan is created listing how many iterations there will be, their durations, and what will occur in each. Or, we get misunderstandings such as "let's iterate over the requirements until they are stable, then we can nail down the design in a series of iterations."

Another risk is attempting to transition to an iterative and agile process without coaching from someone who's been there and done that. I sometimes see well-intentioned local managers championing the adoption of an iterative method, who think they or their staff can lead the adoption without help. And sometimes they can, but a timeboxed, iterative, agile approach is rather different for many teams. Colleagues and I have seen a number of homemade adoption attempts where the adopters didn't appreciate the synergy between the different practices, and they were modifying the newly adopted methods unwisely. For example, claiming to adopt XP by simply eliminating written requirements and writing unit tests occasionally.

Using a coach who isn't steeped in the old local ways and who has the confidence of knowing that the iterative process works is money well spent. And, because we are talking about value changes, it is often more effective that the change agent be from the outside. It seems to be the way of the world that we're never a prophet in our own land, especially regarding software methods.

Another common mistake is overselling or mis-selling the advantages of an iterative, agile method to customers or management. The popular press and a number of agile method books still exhibit the silver-bullet syndrome. The master consultant Gerry Weinberg advised to never promise more than a 10% improvement, not only because greater yields are seldom sustained (and let us not forget the Hawthorne Effect), but because promising more can suggest that the current management and management practices are really inept, which is seldom true.

On the subject of mis-selling, note that iterative methods are not fundamentally for improving productivity or delivery speed or reducing defects, although there is research showing correlations. Rather, they are, less ambitiously, for reducing the risk of failure and increasing the probability of creating something of value that the stakeholders wanted. Given that recent data shows that 23% of projects fail, this is no small feat. Issues like improved productivity are secondary when one out of four projects simply "goes pear shaped" (as they say so evocatively in the UK), after consuming on average $1.2 million USD. Note, as a related point, that this same failure research indicated that the CIO or CEO has often been "shielded" from these failure rates, and is unaware of the true extent of failure in her development organization.

Big-bang process adoption is another common mistake: Educating many managers and developers in the new method over a short period, and/or applying it on many projects during early adoption. Just like a software project itself, adopt the process iteratively and incrementally, in small steps. Start with pilot projects, and learn from the experience.

Another risk is to try to recast the new process in terms of your current culture's vocabulary and ideas. For example, attempting to adopt the UP but rename the phases and workproducts to old, local names. Or, to make efforts to explain how the new process fits into the ideas and phases of the old one, in a misguided desire to help the new process be successfully adopted. Just surrender to the new; make a clean break from the past.

Some organizations have a small group responsible for process, methods, best practices, and so on. A risky adoption approach is for this group to speculatively decide how to apply the new IID process: armchair methodology. They try to "enhance" or "refine" it speculatively. Coupled with this problem is the related complication of top-down process advice, which doesn't usually work. For example, if an organization is adopting the UP, it has the concept of tailoring the process to fit the project and organization. A risk is to let this group speculatively create a UP tailoring, an XP tailoring, or whatever. The result will often have little to do with what's really useful in the actual project. Rather, determine how to adopt the new process through experimentation and by the advice of the coach and practitioners on several pilot projects; more bottom-up than top-down.

Some organizations have a separation of software designers and programmers. Maintaining this separation is another mistake when adopting iterative development, although there is still need and value in expert designers such as a chief architect. The programmer must be an active designer, as the design is not fully pre-cast in these methods, but evolves in response to growing insight, test results, refactoring, and so forth.


How to adopt an iterative, agile process within an existing waterfall process culture?


Suggestions include:

  • Have an executive and customer champion who understands and can communicate the ideas to their peers.

  • Define a goal or reason to adopt the method, and a quantitative measure of its success. For example, the number of failed projects per year, or the results on satisfaction surveys for developers, managers or customers. Measure and communicate the results. Don't expect quick or dramatic improvement; process change takes time and skill over a series of projects, and a new method is not a silver bullet that will revolutionize things. The famous (and efficient) Toyota manufacturing system took over 10 years to be fully adopted.

  • Present education seminars and learning aids (e.g., this book) to executive and customer groups.

  • Adopt the method with pilot projects and an incremental approach. Start with one project and a method coach. Drive the adoption from the learning that emerges from these early projects.

  • Don't oversell. Don't claim it will improve productivity and so forth, but propose a pilot as an experiment whose results will guide further steps; in other words, an empirical approach.

  • A failure on the early projects will, not surprisingly, kill the adoption drive. So, mitigate that risk by using a good method coach. Choose a project big enough to be meaningful but not so big it is dangerous; for example, five or ten people on a six-month project is a good size. Don't introduce too much novelty on the projects, such as many new technologies or unproven third-party components. You don't want the pilots to fail for reasons unrelated to the new method.

  • Let the participants in the early pilot projects become the new method leaders (or "process engineers") in subsequent projects.

  • Hold "post-partum" project sessions after these early projects in which the stakeholders share their experiences with other customers and management. This in-selling is more powerful than executive or consultant recommendations.

  • Results speak louder than theory. Assuming the pilot projects do achieve earlier valuable and visible results with lower risk than the prior waterfall process, record this achievement, and communicate it.

  • If the waterfall organization is resistant to the idea of short iterations, propose instead that their next 12-month project be run as two 6-month projects or three 4-month projects, in order to "lower the risks and show early results." Capture a record of positive experiences with this change, communicate it, and on the next project, suggest a shorter step: "We improved with two 6-month steps. We think we can do even better with three 4-month steps on the next project."

On the question of what type of pilot project to choose, XP and Scrum have a different answer than the UP. The latter suggests a not-too-risky project, but the former methods (especially Scrum) recommend first adopting it on the most difficult project the organization faces. These method leaders feel confident that their methods, applied correctly, will yield success, and that the crisis of a difficult project provides the right fertile ground to truly abandon the old waterfall habits and seize the new ideas wholeheartedly.


How to control costs with adaptive planning?


Before answering the question directly, there's often an implication in the mind of the questioner: that with predictive planning, costs (and schedule) are successfully controlled. But research shows this is not true; indeed, predictive planning has a poor track record for software projects. The root problem is the flawed assumption that software development is predictable manufacturing rather than new product development.


Evo and evolutionary delivery provide one model for the answer. At some point an overall budget or estimate is generated for the project, although it is acknowledged as unreliable. Thereafter, take a small iteration step that represents between 2% and 5% of the budget (or desired duration). Plus, choose a step with a high value-to-cost ratio. Have a quantified goal (or goals) for the iteration, and measure the impact of the step. In the worst case, we have "wasted" a small percentage of the budget on an unsuccessful step. In the best case, we have made a good return on the small investment.
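
The step-selection heuristic above can be sketched as a small calculation. This is an illustrative sketch only; the feature names, values, and costs are hypothetical, and real Evo steps are chosen with far more context than a ratio.

```python
# Sketch of Evo-style step selection: pick the candidate step with the best
# value-to-cost ratio whose cost falls within 2%-5% of the overall budget.
# All numbers below are hypothetical.

def choose_next_step(candidates, total_budget):
    """candidates: list of (name, estimated_value, cost) tuples."""
    low, high = 0.02 * total_budget, 0.05 * total_budget
    affordable = [c for c in candidates if low <= c[2] <= high]
    # Highest value-to-cost ratio first; None if no step fits the 2%-5% band.
    return max(affordable, key=lambda c: c[1] / c[2], default=None)

budget = 1_000_000
steps = [
    ("online payments", 300_000, 40_000),   # ratio 7.5, cost is 4% of budget
    ("report engine",   100_000, 30_000),   # ratio ~3.3, cost is 3%
    ("full rewrite",    900_000, 400_000),  # far too large a single step (40%)
]
best = choose_next_step(steps, budget)
print(best[0])  # -> online payments
```

In the worst case, an unsuccessful step "wastes" only that small slice of the budget; in the best case, the measured result of the step confirms a good return on a small investment.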

The adaptive plan emphasizes using the most recent information, maximizing value for a small cost commitment.


How do we measure quality in an iterative process?


The short answer is, the same as in any process, but earlier and more frequently. Yet, there is a distinctive component I want to emphasize. A useful best practice is to continuously verify quality, and this implies not only quality of the product, but of the process.

A number of agile methods promote some kind of iteration assessment, which I usually prefer to call the beer party (being Canadian). That is, at the end of each iteration (or the start of the next), get together for a half hour as a group, and ask some questions: What worked well? What didn't? What are a couple of concrete actions we could take in the next iteration to improve? Maybe Jill spent too many hours explaining the defect tracking system to newly joining consultants, and she should take a few hours to write up a Web page summarizing the introduction for the next incoming batch of people. Maybe the evolving set of use cases is useless, and their creation should be stopped, or improved.

The Scrum meeting also provides a way to measure the quality of an iterative process. We see, day by day, how things are going, what's working, and what isn't.


How to coordinate subteams or subcontractors on a large IID project?


One part of the answer involves establishing early personal liaisons between the subteams, and a leader in each with an understanding of the project vision and architecture. See "Multiteam or Multisite Early Development" on p. 248 for the details.

The value of forming these personal relationships with the other liaison team members is most significant during later coordination and communication. And, having forged a common, deep understanding of the vision and architecture lowers the risk that the subteams will misunderstand or fail to create what is needed.

Another part of the answer is to establish cross-team milestones for integration and testing (system, load, etc.) of all components. These are the macro-iterations of the project. For example, the project may have a macro-iteration of six weeks; at its completion, all components across all subteams are integrated. Within this macro-iteration the subteams may choose to decompose the time and their own work into shorter micro-iterations, such as three two-week iterations.
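
The decomposition just described has one structural constraint worth making explicit: each subteam's micro-iterations must exactly fill the macro-iteration, so that every team arrives at the shared integration milestone together. A minimal sketch, with hypothetical team names and week counts:

```python
# Sketch of the macro/micro-iteration structure described above.
# Team names and week counts are illustrative.

MACRO_ITERATION_WEEKS = 6  # all subteams integrate at each macro boundary

subteam_micro_iterations = {
    "ui team":     [2, 2, 2],  # three two-week micro-iterations
    "server team": [3, 3],     # two three-week micro-iterations
}

# Each subteam may choose its own micro-iteration lengths, but they must
# sum to the macro-iteration so the integration milestone lines up.
for team, weeks in subteam_micro_iterations.items():
    assert sum(weeks) == MACRO_ITERATION_WEEKS, f"{team} misses the milestone"
```

The macro boundary is fixed project-wide; only the internal decomposition is each subteam's choice.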

These macro-iteration milestones provide the heartbeat and mechanism to force regular coordination between the subteams and subcontractors.

Note that very short macro-iterations for large projects with many subteams may become awkward or unproductive. The overhead of pulling everything together and testing it takes nontrivial time and resources; a two-week macro-iteration may be too short for completion of sufficient new work, given the overhead of integration and test.

Another proven variation on this macro-iteration approach is to relax the requirement that all subteams must integrate, and instead only require that at least two of the subteams must integrate. This may allow a shorter macro-iteration, such as two weeks, as the effort of integration and test is lowered. See "Difficult Multiteam or Multisite Iteration Planning" on p. 249.

All of these variations can (and usually should) be combined with the practice of continuous integration; see p. 275.


How to estimate overall effort or duration for an IID project?


One way to answer is, the same as before. Don't assume that scheduling the tasks iteratively will meaningfully change productivity or duration. You may still use parametric estimation models (such as COCOMO II), micro-estimation methods based on work breakdown structures or use cases, and so forth.
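
As a reminder of what a parametric model involves, the core of COCOMO II is a simple nonlinear effort equation. The sketch below uses the published nominal coefficients (A = 2.94, B = 0.91 for the Post-Architecture model), but the size, scale-factor sum, and effort multipliers are illustrative inputs, not calibrated values.

```python
# Sketch of the COCOMO II effort equation: PM = A * Size^E * product(EM),
# where E = B + 0.01 * (sum of scale factors). Inputs below are hypothetical.

def cocomo2_effort(ksloc, scale_factor_sum, effort_multipliers):
    A, B = 2.94, 0.91                  # published nominal coefficients
    E = B + 0.01 * scale_factor_sum    # exponent models (dis)economy of scale
    pm = A * ksloc ** E                # nominal effort in person-months
    for em in effort_multipliers:      # cost-driver adjustments
        pm *= em
    return pm

# A hypothetical 50 KSLOC project with moderate scale factors and two
# cost drivers (one penalty, one credit).
effort = cocomo2_effort(50, scale_factor_sum=18.0, effort_multipliers=[1.1, 0.9])
print(round(effort, 1))  # estimated person-months
```

Note the exponent: because E exceeds 1 when scale factors are unfavorable, doubling size more than doubles effort, which is one reason macro-estimates on large projects are so sensitive to their inputs.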

That said, some of the IID methods, such as XP, include specific advice on how to estimate. These are introduced in their respective chapters, although study of dedicated method books (listed in the recommended readings sections) is needed for full details.

However, I do want to recommend an excellent iterative and team-based estimation technique that is complementary to other estimation techniques: Wideband Delphi; see p. 260.
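
The mechanics of Wideband Delphi are themselves iterative: estimators submit anonymous estimates, the group discusses the assumptions behind the spread, and rounds repeat until the estimates converge. A minimal sketch of that convergence loop; the estimate values and the 15% tolerance are hypothetical choices, not part of the technique's definition.

```python
# Sketch of Wideband Delphi rounds: repeat anonymous estimation and group
# discussion until the spread of estimates is acceptably narrow.
# Estimates (in person-days) and tolerance are illustrative.

def spread(estimates):
    return max(estimates) - min(estimates)

def converged(estimates, tolerance_ratio=0.15):
    mean = sum(estimates) / len(estimates)
    return spread(estimates) <= tolerance_ratio * mean

rounds = [
    [30, 80, 45, 60],  # round 1: wide spread exposes differing assumptions
    [45, 60, 50, 55],  # round 2: discussion narrows the range
    [50, 55, 52, 54],  # round 3: within tolerance; take the mean
]
for round_number, estimates in enumerate(rounds, start=1):
    if converged(estimates):
        final = sum(estimates) / len(estimates)
        break
```

The value is less in the arithmetic than in the discussion each round forces: the spread itself reveals hidden assumptions.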



How to estimate the duration of an IID project without having a plan of what will happen week by week?


A total effort (or duration) estimate precedes and is not dependent on detailed task scheduling. Rather, it is primarily a function of the requirements, team size, novelty, and so forth. Of course, scheduling issues can affect an estimated completion date, such as if the project spans the summer months of vacation.

I get this question frequently, and the questioner usually really means to ask, "How to estimate the dates of intermediate milestones with an IID method?" Major milestone dates can be estimated based on common effort estimation methods, once the goals for the milestones are decided. As always, the reliability of the estimate is commensurate with the quality of the information and the project's current point on the cone of uncertainty.


If we have use cases, how to schedule them with respect to iterations?


Although it is desirable to fully complete a use case within an iteration (it's a straightforward approach), it isn't always best, because some use cases are so complex (with many scenarios) that it would take an excessively long iteration (such as three months) to complete. Short iterations are almost always preferable, for a number of reasons. For details, see "Iteration Goals: Use Cases and Scenarios" on p. 269.


How do we track use case requirements across iterations?


To expand the question, how do we track that some scenarios of a use case have been done, and others haven't?

The answer is dependent on the requirements tracking tool, and the way you write use cases. Let's assume you are using the popular (Cockburn) format for use cases. In this case, for each use case there is a "main success" scenario and various "extensions" (or "alternatives") with labels such as 3a, 3b, etc.

If you're using a tool like Rational's RequisitePro (which adds macros to Microsoft Word), you can use Word to textually highlight a scenario, and then mark the selected text (e.g., the scenario 3a) as its own requirement in the RequisitePro database, with lifecycle state information such as "approved," "underway," "completed."

If you are using a feature or issue-oriented tracking tool, such as Issuezilla or Bugzilla (which is often used for new requirements, not simply defect tracking), you can record the scenario names as labeled issues, with associated lifecycle state. For example, scenario 3a of the Process Sale use case can have the label "process sale-3a" and the state of "complete" in Bugzilla.
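
Whatever the tool, the underlying record is just scenario-level identifiers paired with a lifecycle state, as in the "process sale-3a" labeling above. A minimal tool-independent sketch; the use case names and the progress helper are hypothetical:

```python
# Sketch of scenario-level requirements tracking, independent of tool:
# each use-case scenario gets its own identifier and lifecycle state.
# Use case names and states below are illustrative.

scenarios = {
    "process sale-main":   "completed",
    "process sale-3a":     "completed",
    "process sale-3b":     "underway",
    "handle returns-main": "approved",
}

def use_case_progress(scenarios, use_case):
    """Fraction of a use case's tracked scenarios marked completed."""
    states = [s for key, s in scenarios.items() if key.startswith(use_case)]
    return sum(s == "completed" for s in states) / len(states)

print(use_case_progress(scenarios, "process sale"))  # 2 of 3 scenarios completed
```

This answers the original question directly: a use case is "partially done" when some of its scenario-level records are completed and others are not.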


How to persuade our customers (or management) to adopt IID?


Don't propose a definite adoption within the organization. Rather, suggest an experiment, motivated from the data and trends: that IID is associated with lower failure rates and earlier results, that it is now used by many organizations, its use is increasing, and so forth. Some of the data in the Evidence chapter (p. 63) may be helpful.

Then, organize a half-day seminar for the executive team, customers, and other relevant stakeholders. Use a seminar speaker who can present the key ideas and make a persuasive case for the experiment, but avoid overselling the benefits. It is useful to emphasize that IID methods support early visible results and ongoing steering by the customers; of course, customers are interested in this point. It must also be stressed that customers will need to take an active and ongoing role in clarifying the requirements, evaluating results, and providing feedback.

Next, get commitment for the experimental pilot project and find an executive champion. Run the project, capture data on the experience and results, and communicate these. Make a decision to continue, or not.


We want to apply XP, but don't have an onsite customer. What do we do?


See p. 152.


We think we are applying XP, but use fairly detailed written specifications for the iteration rather than an onsite customer. Is that OK?


See p. 156.


What's going to happen with our existing test and QA department if we adopt an IID method?


In a classic waterfall environment, the QA team expects to receive a final system for testing near the end of the project, but may not otherwise be significantly involved in the development. Although a final QA step never hurts, it is not sufficient or efficient because research shows that it is cheaper to remove defects early rather than late.

With IID methods, there are at least two approaches to working with the QA group. The first is to allocate a QA person (or persons) to the iterative project from the earliest iterations, either full-time or part-time. They are involved, iteration by iteration, in the early creation and execution of tests and other evaluations. They become members of the development team. If the project is running in a common project room, then ideally that's where they work, although there are times this is not possible due to the complexity of the test environment.

Microsoft takes this to the extreme by dedicating more-or-less one tester for each developer, and they collaborate throughout the project. As an aside, Microsoft developers do not usually practice test-first development (see p. 292); it would be very interesting to know if the same number of independent testers would be needed if they did test-first development.

A second approach is to deliver the internal release from each iteration to the QA group for evaluation. While the development team is moving forward with iteration N, the QA team is evaluating the results of iteration N-1. See "Overlapping or "Pipelining" Activities Across Iterations" on p. 251. Their feedback can be handled in the current iteration, or if it is too laborious and there are no slack resources, allocated to the next.


Can a project fail with an IID method?


Certainly, although I like to call IID approaches "fail early" methods, and the waterfall a "fail late" method. A waterfall project can be like the story of the guy who fell off the cliff:

As he was hurtling down, someone yelled, "How are you doing?" The guy replied, "So far, so good!"

In the waterfall, the risks pile up near the end; the project can have the mirage of running smoothly for many months while the less risky and easier work is done. Then, pow!

On an iterative project we discover how much trouble we are in sooner rather than later. We have a better chance to cancel the project before too much is invested, or experiment with solutions.


What new skills are needed for managers and developers?


For managers, perhaps the biggest shift (at least with methods such as XP and Scrum) is to step back and avoid assigning tasks or directing work: not being the taskmaster. Recall that in these methods, self-directed teams and volunteering for work are important. The manager's role is to reinforce the project vision and company goals, manage risks, communicate the iteration goals, remove blocks, provide resources, and track progress.

They are also responsible for the new skill of iterative and adaptive planning, which is easier but more frequent than detailed predictive planning.

Developers will participate in more project management activities, such as task identification and estimation, each iteration. Especially with XP and Scrum, their biggest shift is perhaps the attitude of "owning" the project and its problems. Recall that in the daily Scrum meeting, it is the team's collective responsibility to spot and fix problems with the project or team members, not the manager's responsibility.

On the technical side, developers require skills in how to set up and do continuous integration, and more frequent and thorough testing than they may have previously been used to.


How to deal with change management in an IID method?


This is specific to the method, although most have in common the following constraint: once an iteration is underway, no changes are introduced to it. This gives the team a short stable period, and some control over the chaos.

Most also have in common the practice of not treating change requests informally via talk or email; therein lies a path to project ruin! Rather, changes are captured in a change request (whose form ranges from a simple story card in XP, to an entry in the Bugzilla database), and considered decisions for the requests are made by the key stakeholders during the iteration planning meetings.


Is IID useful for commercial products?


Certainly, and in fact IID methods found early and widespread adoption in the product sector. In many software product companies of Silicon Valley, for example, you will find IID has been commonplace for years.


We have to tell the customer what they will get and what it costs before starting to build it. Therefore we can't work iteratively, true?


False. There are several ways to answer this. For one, if you are in such a market (still common, for example, with fixed-bid contracts with governments), then do what you must with up-front analysis and estimation. If the customer requires a detailed predictive plan, focus in that plan on identifying what they want in the first and second iterations, and accept that the remaining iterations will be less rigid. Next, start to develop in short iterations, and bring the customer into the evaluation and feedback process. By showing the customer the results of the early iterations quickly, you win confidence. At this point you may say to the customer, "Even though we planned <X> for iteration 3, you now have a chance to re-choose according to your latest insight and priorities." The customer is then more likely to view this flexibility and control not as a defect in your skill as a predictive planner, but as a more valuable way of working. The same practice and psychology applies to the evolution of requirements.

In short, we make a waterfall attempt as desired by the customer, and then run an evolutionary IID project as trust is established, to really benefit the client.

Another perspective is that even if we don't have the degree of requirements evolution we wish, by at least organizing the "frozen" requirements work iteratively, we gain several advantages. As the Evidence chapter shows (p. 63), productivity, defect, and success rates may be improved. And we may still have flexibility over the ordering of the development iterations, to meet our desire to drive down risks early.

Customers don't usually care about fine-grained weekly scheduling. They may want to define a milestone that <X> is completed in two months and <Y> two months later. But, they don't have to see that you organize a two-month phase into four iterations of your choosing.


We can't make a solid architecture if we do not know all the requirements up front, true?


False. What we do need to understand early are the architecturally significant requirements, which are a subset of the total. Furthermore, architecturally influential factors are mostly nonfunctional quality requirements, such as reliability and security, rather than the myriad detailed functional requirements; it is less difficult to learn the former than the latter during early analysis. In addition, if use cases make sense for the project, we can focus in the early phase on understanding the subset of architecturally significant use-case scenarios (which may be 10% of the total set), rather than all use cases.

For example, in the UP, the idea is to analyze something like 10% of the requirements during the inception phase: those that are most architecturally significant. Then, quickly start programming in the elaboration phase, while the majority of the remaining functional requirements are uncovered and evolved, perhaps in a series of requirements workshops run in parallel with programming the core architecture.


Rework (or refactoring) each iteration sounds expensive. Isn't it cheaper to design it correctly up front?


In practice, IID projects infrequently require massive rework; it is more a theoretical than practical concern. This is due to a combination of taking small steps and emphasizing early testing and feedback, so that a solid path is discovered and maintained sooner rather than later.

In addition, modern powerful refactoring tools (in several Java IDEs, for example) make large-scale changes easier and faster.

In any event, complete and near-perfect up-front speculative design or architecture is seldom observed, even when it has been diligently attempted. Decades of failed attempts in waterfall projects demonstrate the difficulty and expense of this approach. The reasons are varied: the constant use of new (and unproven) technologies, high complexity, the many degrees of freedom software solutions offer, the unreliability of the requirements on which speculative design decisions are based, and more.

Also, bear in mind that only XP promotes almost no up-front architectural thinking; the other IID methods do not. Scrum, UP, Evo, and others all support some degree of up-front architectural analysis and design, with a balanced interplay of early programming and testing to prove or disprove the ideas.


What use are iterations for short projects of, say, three months duration?


Organizing the development and priorities into two- or three-week timeboxes still helps with achieving early visible progress and keeping the complexity manageable. Often on such projects, most requirements are semi-reliably known near the start, which helps. If your organization has adopted the UP and its concept of four phases, the distinction between the first three (inception, elaboration, and construction) is not particularly useful on such short projects; rather, a simple series of "development" iterations prioritized by value and risk, followed by a transition phase, is sufficient.


How can we get our management to realize they don't need a final, detailed plan on "day one"?


Through demonstration, analogy, facts, and logic.

Demonstration: Create and run an IID pilot project applying the principles of adaptive planning. Have external management and clients drive the choice of work each iteration, and at each post-iteration demo ask, "Is the project proceeding as you want?"

Analogy: When we appreciate that building software is new product development or discovery, we can draw analogies from planning practices in other industries. In other discovery-dominant domains, it is normal to avoid detailed predictive planning at the start of a project. For example, examine how new potential oil fields are planned, or a new type of car, a new bridge, or a new consumer gadget. In each case, there is a significant exploratory phase before reliable plans are expected.

Facts and logic: Perhaps the most relevant fact is an average of 20-40% requirements change on medium to large software projects (see "Change Research" on p. 72). Not surprisingly, then, the historical track record of early detailed predictive planning is poor. The problem isn't bad planners; the problem is high degrees of novelty, uncertainty, and change. An early and highly speculative fixed plan in that context is not logical; the wrong (mass-manufacturing) model is being applied to a discovery-dominant domain: software development.


Our test environment is very complex and run by another organization. How can we iterate and test?


Before offering some suggestions, note that similar organizations do iterate and test this way. Microsoft is probably the largest example of a company that applies IID in a complex test environment with separate testing groups.

One part of the solution is continuous integration, or the milder daily build and smoke test practice. The separate test team adds unit or acceptance tests to the build environment incrementally, as soon as possible.
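The daily build and smoke test practice can be sketched as a nightly routine of this shape (the build step and test names are stand-ins for real build and test commands, not any particular toolchain):

```python
def daily_build_and_smoke_test(build, smoke_tests):
    """Nightly routine: build the whole system, then run a small suite
    of broad 'smoke' tests; any failure is reported the next morning.
    `build` and each test are callables returning True on success,
    placeholders for real compile and test commands."""
    if not build():
        return ["BUILD BROKEN"]
    return [name for name, test in smoke_tests if not test()]

# Illustrative run with fake build/test callables.
failures = daily_build_and_smoke_test(
    build=lambda: True,
    smoke_tests=[("server starts", lambda: True),
                 ("login works", lambda: False)])
```

In a real environment the callables would shell out to the build system and test harness; the separate test team's contribution is to keep adding entries to `smoke_tests` as iterations complete.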

Another element is pipelining. In this case, when the development team starts iteration N, the test team starts evaluating the just-finished iteration N-1. See "Overlapping or "Pipelining" Activities Across Iterations" on p. 251.
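The pipelining idea amounts to a simple offset schedule; this sketch (the row structure is invented for illustration) shows the two teams working one iteration apart:

```python
def pipeline_schedule(num_iterations):
    """While developers build iteration N, the separate test team
    evaluates the just-finished iteration N-1; an extra row at the
    end lets the testers finish the final iteration."""
    rows = []
    for n in range(1, num_iterations + 1):
        rows.append({"dev_team": n, "test_team": n - 1 if n > 1 else None})
    rows.append({"dev_team": None, "test_team": num_iterations})
    return rows

schedule = pipeline_schedule(3)
```

Reading the rows top to bottom gives the calendar: the test team is always exactly one iteration behind development.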


What do we do when time, budget, and scope are all frozen but we still want to apply an iterative or agile method?


The constraints are irrational, but it happens. Spend more time looking for existing (perhaps open source) components or frameworks; contract with specialists who have done something similar and can use the reusable components; hire consultants with an existing template system they can adapt to your goals; use the technologies most familiar to the team; don't abandon communication (e.g., a daily Scrum) and plenty of testing in a misguided rush to save time; and, as usual, rank the requirements and implement them across the iterations in rank order. When the deadline arrives, you will at least have the most valuable elements, if not all of them.
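The "rank and implement in rank order" tactic can be illustrated with a toy planner (requirement names, values, and efforts below are invented):

```python
def plan_by_rank(requirements, capacity_per_iteration):
    """Rank requirements by business value and fill fixed-capacity
    iterations in rank order; whenever the frozen deadline arrives,
    the most valuable work is what got done. Illustrative only."""
    ranked = sorted(requirements, key=lambda r: r["value"], reverse=True)
    iterations, current, used = [], [], 0
    for req in ranked:
        if used + req["effort"] > capacity_per_iteration:
            iterations.append(current)        # close the full iteration
            current, used = [], 0
        current.append(req["name"])
        used += req["effort"]
    if current:
        iterations.append(current)
    return iterations

reqs = [{"name": "reports", "value": 3, "effort": 5},
        {"name": "billing", "value": 9, "effort": 8},
        {"name": "search", "value": 6, "effort": 5}]
plan = plan_by_rank(reqs, capacity_per_iteration=10)
```

Here "billing" lands in the first iteration on value alone; if the project is cut short after one iteration, the highest-value item has still been delivered.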


Doesn't iterative development mean that we don't know when we're finished?


It is possible to know when we're finished and what "finished" will mean. There are several ways to tackle this dilemma. One approach is to hold an initial requirements workshop (part of the Release Planning Game in XP) in which all or most requirements for the release are identified at a high level, such as just the names of use cases or features (XP story cards), with brief descriptions. This can be the basis for a rough scope, effort, and end-date estimate. Of course, as the project progresses, these high-level requirements will evolve into detailed descriptions and may expand, but this does not imply an endless moving target. Rather, it is a temporarily moving target whose perturbations shrink over time (see the "cone of uncertainty" on p. 18). In early iterations the fluctuation in the total requirements set is larger, and then it settles; on average, perhaps 20% into the project, a more complete and stable picture emerges.
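The narrowing cone of uncertainty around such a rough estimate can be caricatured numerically; the 4x starting band below is illustrative only, not a calibrated model:

```python
def rough_estimate(feature_estimates, progress):
    """Total effort estimate with an uncertainty band that narrows as
    the project progresses: a crude rendering of the 'cone of
    uncertainty', wide early and converging as feedback accumulates.
    The 4x-to-1x factor is an assumption for illustration."""
    total = sum(feature_estimates)
    factor = 4.0 - 3.0 * min(progress, 1.0)  # 4x band at start, 1x at end
    return total / factor, total * factor

# At day one (progress 0.0) the band is very wide...
low, high = rough_estimate([10, 20, 15], progress=0.0)
```

The management lesson is the shape, not the numbers: early single-point estimates hide a wide band that only feedback can shrink.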

One can argue that this period of uncertainty is undesirable and that an up-front waterfall requirements approach is thus preferred, but research shows that the requirements change significantly in any event; evolutionary methods admit and embrace this, waterfall-oriented methods deny or resist it.

Another variation, used in the UP (especially on larger projects), is not to expect a definition of "complete" or an end date until several iterations into the project, at the end of the elaboration phase. This is analogous to exploratory drilling at an oil field: management doesn't expect reliable answers until after some phase of investigation. In this approach, there is a series of requirements workshops across the early development iterations. By the end of the last workshop (for example, after three workshops across three iterations), the goal is to have discovered all the requirements at a high level (such as the names of use cases) and defined around 80% of the most significant ones in detail. At this point, there is a relatively reliable definition of what "complete" means.

Other variants are build-to-cost (in the 1970s known as design-to-cost) and timeboxing the overall project. In the first, "complete" is defined as whatever is finished by the time a fixed budget is consumed; in the second, as whatever is finished by a fixed project end date. Both strategies may be coupled with evolutionary delivery.


Should I plan the work for all the future iterations to ensure the scope and resources (e.g., people) fit the desired end date?


Laying out a detailed, predictive schedule does not really satisfy this concern, and in fact by doing so and following it, the team is less likely to meet the goal. The underlying problem is superimposing a predictable manufacturing model of planning onto new product development projects. Such a plan can give the illusion of satisfying the concern, but since it is highly speculative, assumes low rates of uncertainty and change, and is not feedback driven, it is less skillful than an adaptive planning method.


How do I get feedback when there is little or no user interface?


Primarily from tests and measurements. This question usually comes up for embedded applications, middleware, or servers, where issues such as memory footprint, memory leaks, load, throughput, responsiveness, and so on are important questions. In a well-run IID project, a growing application is evaluated each iteration with respect to these qualities, in the most realistic test environment possible.
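A per-iteration evaluation of a UI-less component might look like this minimal harness (the operation, load, and threshold are placeholders for a real server call, a representative request stream, and a stated quality requirement):

```python
import time

def evaluate_iteration(operation, requests, max_seconds):
    """Drive the growing, UI-less component under a representative
    load each iteration and check elapsed time against the quality
    requirement. Illustrative harness only."""
    start = time.perf_counter()
    for payload in requests:
        operation(payload)
    elapsed = time.perf_counter() - start
    return elapsed <= max_seconds, elapsed

# Toy stand-in for a middleware operation under a 10,000-request load.
ok, elapsed = evaluate_iteration(lambda p: p * 2, range(10_000),
                                 max_seconds=5.0)
```

Run at each iteration's end, such measurements are the "user feedback" of embedded and server projects: a trend of throughput, footprint, or response time across iterations.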


Should iteration activities overlap? For example, doing requirements analysis for the next iteration while testing the previous one?


In general, no. An exception is discussed on p. 251.


How long should iterations be?


See p. 267.


How to handle the design of a database with an iterative process?


Contrary to whatever fears your database experts may hold, it is both possible and effective to apply evolutionary database design and development, especially with the structured application of database refactorings: changes to a database schema that improve its design while retaining both its behavioral and informational semantics.
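As a small illustration of one such refactoring, here is the portable create-copy-drop-rename way to rename a column in SQLite while preserving its data (the table and column names are invented for the example):

```python
import sqlite3

def rename_column(conn, table, old, new, other_columns):
    """One small database refactoring: rename a column while preserving
    the table's data (its informational semantics). Done the portable
    way: create the new schema, copy the rows, drop the old table,
    rename. `other_columns` lists the remaining column definitions."""
    cols_def = ", ".join([f"{new} TEXT"] + other_columns)
    select_cols = ", ".join([old] + [c.split()[0] for c in other_columns])
    with conn:  # run the whole migration in one transaction
        conn.execute(f"CREATE TABLE tmp ({cols_def})")
        conn.execute(f"INSERT INTO tmp SELECT {select_cols} FROM {table}")
        conn.execute(f"DROP TABLE {table}")
        conn.execute(f"ALTER TABLE tmp RENAME TO {table}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cust_nm TEXT, city TEXT)")
conn.execute("INSERT INTO customer VALUES ('Acme', 'Oslo')")
rename_column(conn, "customer", old="cust_nm", new="name",
              other_columns=["city TEXT"])
row = conn.execute("SELECT name, city FROM customer").fetchone()
```

A catalog of such small, behavior-preserving schema changes, each applied and tested within an iteration, is what makes evolutionary database work practical.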

The details are beyond the scope of this introduction; see the literature on agile database development for discussion.


Should the customer always be in charge of what gets built each iteration?


Only XP recommends that the customers choose the goals of the next iteration independently of other advisors. Most other IID methods imply or suggest a collaboration between the customers and the chief architect. In early iterations especially, the architect is likely to have recommendations on the priority of requests, prompted by their architectural influence or level of technical risk.


How to plan an iteration?


See many of the tips starting on p. 248.


Do I give the results of every iteration to my customer?


No, except for evaluation and feedback, unless your method is Evo (which promotes evolutionary delivery each iteration). This is a common confusion about iterative methods. In fact, there may be 10 or 20 iterations before an application is ready for production or commercial release. The "release" of each iteration (except the last) is an internal release for testing and baselining the growing system. Some milestone intermediate releases may be made public for alpha testing. That said, one of the advantages of iterative methods is that an internal release can become, without extraordinary effort, a production release of lesser goals, if circumstances require.


How to do documentation for maintenance, when we want to be agile?


First, define what to document by need, rather than speculation. Is there anyone who has maintained a prior version of the product? What did they previously find useful, or miss?

A few tips:

  • Put the documentation on a project Web site, such as a Wiki.

  • Within many systems there are a few key tricky or subtle elements, or themes. Find those, highlight them, and write a short "technical memo" [Larman01] Wiki page for each.

  • It is usually useful to document different architectural views. See [Kruchten95] for details.

  • Agile documentation can be created by splitting the team into pairs, and asking them to document in parallel on different whiteboards. One pair will sketch a logical view of the architecture (perhaps loosely in UML notation) and write some related whiteboard notes, emphasizing the key noteworthy elements in that view. Another pair will sketch a deployment view, another the security view, and so on. A digital picture of each whiteboard is taken, and the pictures inserted on separate Wiki pages, one page for each architectural view. Then, the pairs type in some supporting text on the Wiki page below the picture. Using this approach, I once coached a team that needed only three hours to create the maintenance documentation.

  • Some insights are worth capturing with a digital movie; it is quick, low effort, and often rich with information. Place the movie file on the project Wiki. For example, consider an interview with the architect structured so that she discusses each architectural view (logical, deployment, …) in turn. They may be situated at a whiteboard (for sketching) while being filmed. Likewise with an experienced maintenance person.


How can I create a work breakdown structure (WBS) without a weekly schedule, or an iteration-by-iteration schedule?


The key point to appreciate is that a WBS is not, or at least should not be, a schedule. It should be a breakdown of work or tasks, independent of how or when they are handled.

Some WBSs are organized at the top level by major project phases: a phase-oriented WBS. Such a schedule-oriented, predictive planning approach is not consistent with evolutionary development and adaptive planning.

Some WBSs are organized by a decomposition of tasks within major software design elements (subsystem-1 tasks, subsystem-2 tasks): a design-oriented WBS. This is acceptable if the chosen top-level design elements are sufficiently general or high-level to be guaranteed correct (for example, vague elements such as "UI layer"). However, a design-oriented WBS is usually a dangerous approach, since in evolutionary development there should not be a fixed, up-front decision on the major design elements; they need to be discovered and evolved during the early exploratory iterations.

A better approach is a discipline-oriented WBS whose top-level elements are major project disciplines with activities that occur in parallel throughout the project (test, change management, project management, development, design, environment, requirements analysis). During an iteration planning session, items from this WBS are chosen (i.e., scheduled) for the iteration.
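A discipline-oriented WBS and the per-iteration selection from it might be sketched like this (the discipline names come from the text above; the task names are invented):

```python
# A discipline-oriented WBS: top-level elements are ongoing disciplines,
# not phases or design elements, so the WBS itself carries no dates.
wbs = {
    "requirements analysis": ["run workshop 2", "detail use case: billing"],
    "development": ["implement billing service", "spike messaging library"],
    "test": ["load-test prototype", "extend acceptance suite"],
    "project management": ["iteration 3 planning", "risk list update"],
}

def plan_iteration(wbs, chosen):
    """At the iteration planning session, items are picked (i.e.,
    scheduled) from the WBS for this iteration only."""
    return [task for tasks in wbs.values() for task in tasks
            if task in chosen]

iteration_3 = plan_iteration(wbs, {"run workshop 2",
                                   "implement billing service"})
```

Scheduling thus happens one iteration at a time, while the WBS remains a stable, date-free inventory of work.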

Agile and Iterative Development: A Manager's Guide (Agile Software Development Series, 2003)