Agile Fact or Fiction (Continued)

In this section, we conclude the fact or fiction discussion we began in Chapter 2, extending it beyond team structure and human factors. We’ll cover the following categories:

  • Modeling and documentation

  • Change and change management

  • Customer- and contract-related issues

  • Working practices

  • Planning and design

This discussion should help to highlight some of the thinking behind the core agile subset we presented earlier in this chapter. Plus it should provide a bit of light relief!

Modeling and Documentation

One of the more controversial areas of agile development is its apparent disdain for modeling and documentation. Of course, “not doing documentation” is not what agile development is about—it’s about being realistic about the amount of documentation that should be done. In this section, we tackle this and other similar agile misconceptions.

Being Agile Means You Don’t Need to Write Down Requirements

  • Doug:   There’s the notion espoused by the XP community that requirements are properly handled by jotting down a few notes on index cards. In my opinion, this is a poor way to go about developing software. So while this may be widely perceived as an “agile practice,” it is fiction in my opinion. Our example for this book starts with a brief statement of written requirements, and as the development proceeds, we refer back to the requirements and show how the design and the code satisfy them.

  • Matt:   Fiction. I don’t think any agile methodology in existence says that you shouldn’t write down requirements (at least in some lightweight form or other). XP gets there with user stories, though over the last couple of years these have tended toward the even more lightweight (a single sentence per story). The rest is left to acceptance tests (defining requirements through a set of executable rules) and conversations with the team of on-site analysts. But, human nature being what it is, not having a permanent record of agreements between you and the customer can come back to bite you. So, fewer detailed written requirements means being able to defer decisions until later, but at an increased risk of a disillusioned, confused, and angry customer (and disillusioned, confused, and angry developers!) if change management isn’t handled incredibly well.

  • Mark:   Fiction, on a project of any size at least. The objectives of writing down requirements are to a) make sure you understand the requirements yourself; b) give your customer the maximum opportunity to correct any misunderstandings you may have about the software (and this is important because it’s a lot more cost-effective to change a few words in a document than to rewrite the software); c) give your customer the chance to spot any omissions; d) communicate the requirements to a wider team audience; and e) assist in prioritizing requirements. There are, however, many ways in which you can write down the requirements. On my current project, we use a customized version of an open source software tool called Gnats. It’s particularly useful in planning and prioritizing in what is a very incremental project. I do think it’s a good idea to have requirements stored in some electronic form, however.

Being Agile Means You Should Never Write Documentation

  • Matt:   Definitely fiction, but then, I don’t think any agile gurus are really saying this. A lot of them are saying that you can get away with writing less documentation, and some even say that you can write almost no documentation, with the caveat that the source code itself is written clearly enough to be self-documenting and everyone on the team (including the customer) works very closely together.

    But source code doesn’t provide complete insight into a design. Sure, you could use a tool to reverse engineer the code into some form of documentation, but the tool has only the information available in the code to do this. Also, you could look at the code for module A to analyze why it needs to communicate with module B, but it would be much easier to simply look at a high-level design document that tells you the End-Of-Week payroll module uses the Tax Calculation module after allocating wages but before deductions have been calculated, for example. Effective documentation is an essential part of agility, because it allows you to find answers quickly and move on to the next problem. But it becomes less agile when you have too much documentation, or a design document is simply paraphrasing the source code, or you have documentation just for the sake of having some documentation.

    Does having less permanent documentation allow us to be more agile? Probably, because there’s less stuff to change, but possibly at the cost of higher risk—a lower “bus number” and so on. There’s an agile sweet spot to be found, where you write down just enough documentation (and write down the right documentation) to keep the project afloat without overloading it.

  • Doug:   Fiction. Do the agile gurus say this? Maybe not precisely in those words, but it certainly gets implied from their remarks, and if you read between the lines it is quite often the message that comes across. One quote that immediately comes to mind is Bob Martin’s “Extreme Programmers are not afraid of oral documentation.”[14.]

    Of course, sending this message to programmers is like selling catnip to cats. Most programmers prefer writing code to documenting their work, but this doesn’t make it a good practice. It’s kind of like cooking and washing dishes. It’s more fun to cook than to clean up afterward, but you have to clean up just the same.

    The other fundamental point here is that documentation is first and foremost a communication tool. So it makes no sense to “optimize communication” by eliminating documentation.

  • Mark:   Fiction, but it’s a matter of emphasis. The overall goal is to deliver a software system that meets our user’s needs, not to produce a documentation set per se. However, documentation—and, perhaps more important, the models it may well contain—can be of benefit if it helps us in meeting our key objectives. And we certainly believe that in some circumstances it can do that. That’s what this book is about.

Being Agile Means You Should Never Use Development Tools

  • Mark:   Fiction. If a tool helps you deliver software more quickly, use it. However, tools can be very expensive, and they can often cause more problems than they solve. Do be careful you aren’t just being sucked into using something because of marketing hype or corporate policy (which is usually the result of marketing hype aimed at higher levels of management). If it is a new tool, try it out on a subset of the project before committing the whole team to it!

  • Matt:   Quite the reverse, if we’re talking about programming or code-level design tools. However, the agile misconception tends to be more around the use of analysis tools, CASE tools in particular. Sometimes, a CASE tool slows you down and can be overkill for the task at hand. At other times, a CASE tool can be a lifesaver, providing a data dictionary of interconnected domain objects; a binding, cohesive overview of the design; and a way of identifying good designs that we might not have spotted if we were immersed deep down in the code. It can save a lot of “baby-step” refactorings that might have eventually arrived at the same good design but with a lot more time and effort. Also, code-generator tools (not just the MDA variety) are increasingly generating large parts of applications from analysis-level descriptions of the problem. It doesn’t get much more agile than that.

  • Doug:   Fiction. Most of the excuses for not using tools don’t really hold up anymore. Enterprise Architect, the tool we use for the example in this book, costs about $150, is extremely intuitive and easy to use, and supports both forward and reverse engineering. We used the reverse engineering capability extensively in making sure the model and code synced up over successive iterations and releases.

Being Agile Means You Don’t Use UML or Other Modeling Techniques

  • Mark:   Fiction. If you choose the right techniques from UML or other places, ensure you understand why you are using them and what value they add, and know when to stop using them, then modeling can speed up your development efforts. It’s a lot quicker to update a model on a whiteboard than it is to endlessly refactor functioning code as you uncover yet another thing you hadn’t thought about.

  • Doug:   Fiction, as I hope we demonstrate with the example project in this book. Any and all techniques that help get the job done efficiently and correctly should be used. If you know how to make use of it, this certainly includes UML.

    I’d go so far as to say this is not only fiction, but also utter rubbish.

  • Matt:   Pretty much every agile process, including XP, advocates using at least some form of modeling technique in addition to programming, so definitely fiction.

Being Agile Means You Should Always Prove Concepts in Code As Working Software

  • Mark:   Fact. Architectural spikes, screen prototypes, and so on are examples of this principle in action. It’s all about risk minimization: communication risks, technology integration risks, and so forth. Models, despite their usefulness, don’t actually have to work (“No one ever got fired for producing the wrong model,” as John Daniels once said to me).

  • Doug:   Fact. Modeling is useful for about 300 different reasons, but at the end of the day, if you can’t build it, it’s not a good model.

  • Matt:   This is still true and probably always will be. But the distinction between a specification or model and source code/working software is becoming blurred, as more and more code gets generated directly from the spec.

Being Agile Means Oral Communication Is Always the Best Method

  • Mark:   Fiction. To say that oral communication is always the best form of communication doesn’t make a lot of sense to me. There are many forms of communication we can use—oral, SMS (much used in the UK and Europe), e-mail, written document, whiteboards, to name a few—and each has its place. If I want a list of IP addresses for 20 machines, I’ll ask someone to e-mail them to me, and then I’ll have a reference. If I want to know what the person sitting next to me is working on, I’ll ask him directly. If I want to discuss a design issue, I might well go and discuss it in front of a whiteboard while drawing some models to clarify details. In some circumstances, I’ll go and put those informal models into a CASE tool, as I know there are some details that need to be fleshed out and thought through a bit more. And if it’s an important model, I’ll print out the end result and put it on the wall where everyone on the team can see it. Some things do need to be documented for later reference.

    However, having said all that, I think that the motivation behind this type of statement is to stress the importance of oral communication in many circumstances. I’ve worked in environments where two people sitting next to each other will use e-mail to communicate rather than talk to each other—and this is equally stupid!

    The overriding rule has to be to engage your brain and communicate in the most effective manner.

  • Matt:   Fiction. There’s a common attitude in the agile world that talking to each other is more precise and less prone to ambiguity than writing something down. Part of the reasoning behind this is that when people talk, they communicate not just with their words, but also with their body: raised eyebrows, rolling eyes, sweaty palms, tone and inflection of voice, hesitation, that sort of thing. This is great for transient information (“What are you working on right now?” or “How do I do this?” or “Do you really know the answer or are you just guessing?”), but it’s not so good for things like agreed-upon customer requirements, records of design decisions, and so on.

  • Doug:   Fiction. Our project for this book, for example, was developed entirely with a remote, off-site customer, with written requirements and e-mailed use cases and UML diagrams. I don’t think it kept us from being agile in the slightest. It certainly didn’t keep us from being successful on the project, whether you regard our efforts as “agile” or not. In the real world, it’s often just not practical to have a full-time on-site customer (or customer team) co-located with a programming team for months at a time.

Change and Change Management

Agility is about change. More specifically, it’s about managing change: allowing the customer to change her mind about what she wants the finished product to contain. But of course, agility isn’t a silver bullet. Although it puts practices in place to ease the difficulties commonly associated with change, it doesn’t (for example) make change free.

Being Agile Means You Should Encourage Your Customer to Think That Change Is Free and Won’t Affect Your Overall Development Timescales/Costs/Deliverables

  • Matt:   A customer can always change his mind; it’s his prerogative. But he may not be able to do anything about it, of course! The problems that arise often have less to do with the process and more to do with the contract (which in many ways drives the process). If the customer is locked into a particularly nasty contract that states the requirements up front in iron-clad, unchangeable legalese, then the project will probably fail, or what is eventually delivered won’t be very close to what the customer really needs, which is pretty close to failure anyway. So I’d say that this statement is fiction, because the customer can change his mind anyway, and it isn’t really agility that allows or prevents a change in the requirements. Where agile practices do help is in reducing the pain of making a change midway through the project.

  • Mark:   Change is always possible, even when using a waterfall approach to software development. The problem with the waterfall approach was that the analysis and design periods were often so long—quite often months—that changes put them into a continual process of flux, and they consequently never actually got finished before the customer decided to pull the plug because nothing concrete had been delivered. This was especially true on internal software development projects.

    More-traditional software houses had some success with a waterfall approach because they could lock the customer (often at the customer’s own request) into a fixed-price development (based on a fixed set of requirements, of course) and then change-control the customer (getting more money, of course) for every subsequent requirements change. The customer couldn’t pull out—he’d committed to the project, so he’d have no choice but to pay up. While this worked in some way at a commercial level, I don’t think it delivered real value for the money, and I think it often left a lot of people on both sides feeling frustrated.

    Anyway, coming back to the question, change is always possible—the question is at what cost and in what timescale. It’s important to understand that anyone who says you can get change for free is being either dishonest or naive. Agile, iterative, and incremental approaches to software development attempt to mitigate the cost of change using a variety of techniques, from incremental delivery to design techniques that minimize the cost of change. But remember, once pen has been put to paper or fingers have been put to the keyboard, there is an additional cost involved. After all, if the change wasn’t necessary, it wouldn’t be necessary to change models or code, would it?

  • Doug:   Fiction that you should do this, but in some cases fact that it happens. And, in my opinion, this is one of the great myths of agile development. Many of the popular agile approaches involve rewriting code over and over again. Developers are encouraged to repeatedly toss out and rewrite code. But nobody bothers to tell the poor customer, who has been sold on the virtues of agility, that this approach actually costs time and money.

Customer- and Contract-Related Issues

Increased customer involvement is one of the areas on which agile methods place a particularly high emphasis. Of course, customer involvement isn’t just about getting the requirements right (which is pretty fundamental); it’s also about negotiating a suitable contract to suit the type of development being undertaken.

Being Agile Means You Can’t Undertake Fixed-Price Contracts

  • Matt:   Fiction, but only just! Agile methods tend to be geared toward making changes as the project fishtails toward a fixed deadline. If the deadline is fixed but the requirements are changing, then the overall price is something that you probably wouldn’t want to fix (except in the sense that the price per iteration, or per release, may be fixed). But that doesn’t mean you can’t do agile development to a fixed price. As I mention elsewhere, the contract drives the process to an extent; you have to tailor the process to suit the contract.

  • Doug:   Fiction for “any agile approach” but fact with some agile approaches. The whole notion of an “optional scope contract” as defined in XP is the example that jumps to mind.

  • Mark:   Fact and fiction. Fact in that, as I’ve already mentioned, the whole concept of large fixed-price developments relies on the customer knowing every requirement in detail at the start of the project (an unlikely scenario), and this type of project is the antithesis of agile software development. Fiction in that it is, however, possible to fix the cost of a number of small-ish increments (1–3 months in duration, depending on the project) by incrementally phasing requirements gathering and costing (for the next increment) on top of the software delivery of the current one. This approach enables the customer to bite off well-costed chunks of the development, while also having the opportunity to request change in what would be “midproject” in a traditional-style fixed-price development.

Being Agile Means You Must Have a Full-Time On-Site Customer and All the Developers Should Talk to Him

  • Mark:   Fiction. Of course, on any project it is important that your customer understand his commitments and responsibilities, but a dedicated full-time on-site customer is an unrealistic expectation in most circumstances. No project I have ever worked on has had such a beast. I also wonder whether this is actually necessary in many circumstances, and this relates to the second part of this question: should all developers interact with the customer?

    While I’m not in favor of stopping anyone from talking to the customer if a question needs answering, I do believe that some people are better at getting information in a concise, effective, and timely manner than others. I also believe there is a particular skill to taking a somewhat ill-defined requirement and turning it into a set of commands that can be issued to a computer—not just in the UI design sense, but also in terms of coming up with a minimal set of operations that meet the needs of the customer. Getting human-computer interaction right is difficult, and not everyone can do it.

  • Doug:   Fiction. And once again, I’m hoping that our example project demonstrates this successfully. I was the customer for this project, and I don’t think I spent more than 8 hours in the same area code as the developers while the code was being written, although I did spend 3 days on-site for the initial modeling workshop, where we defined the use cases. But I brought a written set of requirements with me to that workshop.

    As I recall, I made three trips to the location where the developers were working, and virtually all of the remaining communication was by e-mail. I met with the developers for a couple of hours each time I visited.

  • Matt:   If you can get a permanent on-site customer, that’s great—it really can make a big difference. But it’s probably a bit much to ask of the customer, and generally speaking, it isn’t likely that you’ll find a customer who is prepared to do this for a whole project. Even the proponents of XP, which propagated the idea of an on-site customer, have changed their minds and gone instead for a team of business analysts who act as a proxy between the customer and the programmers. But the problem with the on-site customer practice (whether it’s an analyst or the real customer) is that it goes hand in hand with the concept of not defining the requirements in detail up front, so it runs the risk of not discovering entire systems that need to be developed (which might take several months, a year, or whatever) until late in the project. Or insufficient time spent exploring the problem domain might result in an immature domain model, which results in software that is more complex than it needs to be and therefore more expensive to change later on when the real requirements emerge.

Working Practices

Agility changes the way team members work together at a fundamental level. However, if members of a “nonagile” team are creating unit tests and integrating their code frequently into PVCS (or their source control system of choice), chances are they’re already more agile than they thought they were.

Being Agile Means You Must Use Automated Tests

  • Matt:   Fact, especially if the tests are written up-front, as fine-grained unit tests (i.e., micro tests). Unit/acceptance tests have become a vital element of developing software, and not just in the agile world. In particular, when making design changes (or factoring in new or changed requirements), a comprehensive suite of tests can save a lot of debugging time. There’s a cautionary note, though, which is that teams can become overconfident that their test suite will save them. It will save them from the defects that they’re expecting to occur, but it’s the unexpected ones that always catch people out.

  • Doug:   This is a tough one, I guess because of the word “must.” I think that in most cases this would be a fact, but there are some kinds of systems in which these techniques are not as useful as others. In our book example, which is a map-based system, it turned out that visual inspection of a comprehensive set of maps was the most efficient way to proceed. So we still had a strong emphasis on testing, but it wasn’t the classic regression-test environment for a number of reasons. Which means that, I guess (with some reluctance), I’ll have to say fiction, because of the “must.”

  • Mark:   Automated tests mean you can have confidence that your code is working correctly at any point in time and also that when necessary you can “refactor with confidence.”
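
To make the “refactor with confidence” point concrete, here’s a minimal sketch of the kind of fine-grained, automated unit test being described, written with Python’s unittest module. The WageCalculator class and its rounding rule are hypothetical, invented purely for illustration; the point is that with a handful of tests like these running on every change, the body of gross_pay() can be reworked freely, and a green bar tells you the behavior still holds.

    import unittest

    # Hypothetical class under test, invented purely for this illustration:
    # it computes gross pay and rounds the result to the nearest cent.
    class WageCalculator:
        def gross_pay(self, hours, hourly_rate):
            return round(hours * hourly_rate, 2)

    class WageCalculatorTest(unittest.TestCase):
        def test_pay_is_hours_times_rate(self):
            self.assertAlmostEqual(337.50, WageCalculator().gross_pay(37.5, 9.00), places=2)

        def test_pay_is_rounded_to_nearest_cent(self):
            # 3 hours at 10.3333 per hour is 30.9999; we expect 31.00 back.
            self.assertAlmostEqual(31.00, WageCalculator().gross_pay(3.0, 10.3333), places=2)

    if __name__ == "__main__":
        unittest.main()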

Being Agile Means You Must Continuously Integrate (That Is, Check Your Code Back into the Source Control System After Every Change and Make Sure It All Still Works)

  • Mark:   Fact. Continuous integration means you are reducing the risks of technology or code incompatibilities. You know where you stand at any point in time.

  • Doug:   Fiction. You should be able to keep a chunk of code checked out for a couple of weeks while you work on it, without being deemed nonagile. On any project of any decent size and complexity, there are times you want to “own” some stuff for awhile and make it work right before trying to integrate it with everybody else’s stuff.

    I once had a manager who required me to check uncompiled/untested code into source code control, so if I wanted to add a semicolon I had to explain why I was doing it. And I was the only programmer on the project. It was asinine.

  • Matt:   Fact-ish. Continuous integration, where you check in code almost at the point where you changed it and got a green bar, reeks of idealism. At a lesser extreme, though, it does pay to integrate your code at least once a day and have an automated build running, also at least once a day.[15.]

    It’s common nowadays to have a dedicated build PC that does an automated hourly build. It gets the latest source out of the source control system, builds it, and runs all the unit tests. If there’s any failure along the way (particularly for the compiling part), an e-mail gets sent to the whole team, alarms go off, and so forth.

    A side effect I’ve noticed is that individual team members prefer to keep code out of source control (or label it so it’s hidden from the automated build) for longer periods of time while they’re getting it working, so as to avoid that dreaded “build failed” or “tests failed” e-mail being sent to the whole team. As long as the code they’re working on is sufficiently insular, this generally doesn’t cause a problem, but there’s a lot to be said for working in smaller increments and verifying that what you’ve written works so far.

    So I’d say that if individual team members are integrating code less often than once a day, then your team is not being as agile as it could be, and if you don’t have an automated build running at least once a day, that definitely isn’t agile!
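
Matt’s dedicated build PC can be as simple as a scheduled script. The sketch below is one hypothetical way to wire up that loop in Python, run hourly from cron: it assumes a CVS working copy, an Ant target that compiles the code and runs the unit tests, and a local mail server for notifications. None of the paths, commands, or addresses come from the book’s project; dedicated tools such as CruiseControl package up this same update/build/notify cycle.

    import subprocess, smtplib
    from email.mime.text import MIMEText

    WORK_DIR = "/builds/mapplet"            # local checkout of the project (hypothetical path)
    BUILD_CMD = ["ant", "clean", "test"]    # compile and run all the unit tests
    TEAM = "team@example.com"

    def run(cmd):
        """Run a command in the working copy, capturing its output."""
        return subprocess.run(cmd, cwd=WORK_DIR, capture_output=True, text=True)

    update = run(["cvs", "-q", "update", "-dP"])   # get the latest source
    build = run(BUILD_CMD)                         # build it and run the tests

    if update.returncode != 0 or build.returncode != 0:
        # Something failed: mail the combined log to the whole team straight away.
        log = update.stdout + update.stderr + build.stdout + build.stderr
        msg = MIMEText(log)
        msg["Subject"] = "BUILD FAILED"
        msg["From"] = "build@example.com"
        msg["To"] = TEAM
        smtplib.SMTP("localhost").send_message(msg)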

Being Agile Means You Must Release a (Production) Version of the System Every Few Weeks (the Fewer, the Better)

  • Mark:   Strictly speaking, I’d say fiction, but there is a dilemma here. On the one hand, you want to release a production version of the system to your customer as often as possible, as this gives her access to new features as soon as possible and enables you to get feedback from her. On the other hand, there is always an overhead of making a full production release of the system—and the larger your code base, the bigger this overhead becomes—even if you’re undertaking automated tests and continuously integrating. The reason for this is that some activities or deliverables (e.g., migrating live data, training users, creating release notes, updating the user manual, performing acceptance testing, etc.) are inherently related to production releases (as opposed to internal releases). The more production releases you undertake, the less time there is for raw development. Ultimately, I think you have to discuss this trade-off with your customer. She may not be happy if 40% of potential development time is spent on production release–related activities.

  • Doug:   Fiction. Not only is this not necessarily true, but also there are some real serious drawbacks to mandating this practice.

  • Matt:   It definitely pays to never be far away from a working version of the software you’re creating. Being able to run tests end to end helps to identify issues early. But this is a far cry from actually releasing working software to the customer every few weeks. QA won’t be a happy bunch, for starters. Having said that, it is essential that the customer gets to see progress on the product at least every few weeks, and that the customer gives feedback on whether the product is doing what he was expecting.

Being Agile Means the Project Should Make Frequent Short Releases (Once Every 2–3 Months, for Example)

  • Matt:   Yep, definitely fact—with a few caveats, the most obvious one being that the product should only release when it’s ready, when QA gives it the “all clear” (or at least the “mostly clear”). Another caveat is that an unrealistic release cycle shouldn’t be forced on a project. If it’s going to ruffle feathers, exhaust the testers, result in rushed code, or whatever, then the organization probably isn’t ready yet. In the meantime, the project can still adopt other agile practices and get its “agile badge.”

  • Mark:   On balance, fact. While releasing every 2–3 weeks can mean the overhead of doing production releases may be excessive, there are significant risks involved in not releasing something at least once every 3 months, in terms of both visibly demonstrating progress made to your customer and also getting feedback from the customer, although the latter can be mitigated by doing interim “investigative” releases for your customer to play with.

  • Doug:   Fact. Avoiding lengthy intervals (i.e., those lasting several months) without the customer seeing any new functionality is one of the keys to being agile.

Being Agile Means Prototypes Should Always Be Evolved into Production Systems

  • Matt:   There are really two different kinds of prototype. There’s the “big” kind, the proof-of-concept type, which has been developed using some up-front design and for which some requirements analysis was done, and which could feasibly stand as a product in its own right (albeit a slightly wobbly one). And then there’s the kind of small, ad-hoc prototype that is a tiny subset of the overall system (a “spike” in XP terms) and that is used to investigate a particular part of the design (usually for estimating purposes).

    Whichever type of prototype you start with, turning it into a real production system will always involve some risk. You need to do some work to make it production-worthy (e.g., refactoring the design, bringing the code in line with the design documentation and vice versa, and so on). We show some of this in Chapter 8. But another (probably easier) alternative is to scrap the prototype and start over with a new system, using what you learned from the prototype to accelerate development. Funnily enough, our example project in this book (the ESRI mapplet) starts with a prototype that then gets developed further into a production system. But the initial version of the mapplet was solid enough that the team was able to do this. Also, the requirements didn’t change for the next release, but the initial set of requirements was added to. If the prototype had revealed that the first set of requirements was plain wrong (or that the design wasn’t appropriate), we would have scrapped it and started over.

  • Doug:   I suppose it’s a fact that this can happen. Whether or not it’s the best way to develop software is another question entirely, because there are significant issues associated with doing it, although I’m certainly in favor of doing a healthy amount of prototyping. Should it always happen, in the name of agility? I’d say fiction.

    One of the big differences between prototype code and code that has been designed following a use case–driven approach is that prototypes tend to deal mainly with sunny-day scenarios, while use cases (should) consider both sunny- and rainy-day cases (basic and alternate courses of action) for each scenario. In many cases, the structures put in place to support sunny-day scenarios don’t hold up properly when you start throwing all the special cases, exceptions, and error conditions into the mix. So evolving a prototype into production code isn’t always the best idea. In our example project, there was indeed a prototype release, but a substantial amount of engineering was done to it before it became the production system.

  • Mark:   If you’re going to develop something called a “prototype” (rather than, say, the first release of the system), I’d say fiction. The main reason being that the moment something has the prototype label associated with it, developers tend to drop any consideration of quality. Personally, I’d avoid calling something a prototype if I had any interest whatsoever in using the source code in a later production release.

Being Agile Means Refactoring Is Still Useful After Doing Use Case–Driven Modeling

  • Doug:   Refactoring can still be useful, but the scope of the improvements that will be made to code as a result of refactoring will be dramatically reduced when compared to an approach in which refactoring is a primary design technique and insufficient attention is paid to up-front design. There might still be (relatively minor) code-level improvements that can be made, but these will generally not be make-or-break architecture/design issues. So, refactoring isn’t rendered completely useless by use case–driven design, but its importance is greatly diminished.

  • Matt:   Fact. No matter how good your up-front design skills, the design will change (even a little bit) once you begin coding. Effective up-front design reduces the extent that this happens, but when it does happen, you’ll want to be able to make changes to the code safely. This means refactoring, which means having a set of unit tests in place.

  • Mark:   Fact, with the caveat that refactoring shouldn’t be considered an alternative to doing some analysis and design. However, one reason you might want to refactor the code (after doing your modeling and so forth to make sure you get it right) is if you have a rule that says, “Thou shalt not screw up the existing system by checking in crap code to CVS.” It means that developers can’t check stuff into CVS unless they are 99% sure it works. If they’re working on a big change that takes days, this leaves them vulnerable to other changes and merge conflicts, so if they don’t want this, they have to structure their changes into stages—each of which demonstrably won’t screw up the existing system. Refactoring the existing design to accommodate the new functionality (but not yet to implement the new stuff) and validating it with unit tests helps here.
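
As a small, hypothetical illustration of the staged, test-guarded changes Mark describes (the code is not from the book’s project), here is a refactoring step taken on its own: an existing discount rule is restructured from an if/else chain into a lookup table to make room for new customer categories in the next increment, without adding them yet. A unit test pins the current behavior, so the intermediate check-in demonstrably doesn’t break the existing system.

    import unittest

    # After the refactoring: the old if/else chain has been replaced by a table,
    # so adding a new customer category later is a one-line change. Behavior for
    # the existing categories is deliberately identical.
    DISCOUNTS = {"standard": 0.0, "silver": 0.05, "gold": 0.10}

    def discounted_price(price, category):
        return price * (1.0 - DISCOUNTS[category])

    class DiscountTest(unittest.TestCase):
        def test_existing_categories_are_unchanged(self):
            # These expectations were written against the old implementation,
            # so a green bar here means the refactoring preserved behavior.
            self.assertAlmostEqual(100.0, discounted_price(100.0, "standard"))
            self.assertAlmostEqual(95.0, discounted_price(100.0, "silver"))
            self.assertAlmostEqual(90.0, discounted_price(100.0, "gold"))

    if __name__ == "__main__":
        unittest.main()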

Planning and Design

On the surface, agile methods seem to have thrown away the rulebook when it comes to planning and design. But look beneath the surface, and you’ll see some marked similarities with traditional development methods.

Being Agile Means You Don’t Have to Worry About Design Because You Can Refactor Later

  • Matt:   Definitely fiction. On a very small project (like a few classes), you might just get away with it, but that’s about it.

  • Doug:   Fiction, fiction, fiction. (This is a pet peeve of mine.)

  • Mark:   I’ve heard of some “gurus” saying things like “You shouldn’t even worry about this afternoon—just focus on the tests you’re currently trying to satisfy, and refactoring will sort out any issues later on.” To me this is, to borrow a phrase from Doug, “plain moronic.” Let’s get one thing straight: it’s cheaper to change a class model on a whiteboard, for example, than it is to change the corresponding implementation code. So to say “Don’t think ahead to this afternoon” is pretty darn stupid, not to mention an incredible waste of the customer’s money.

    If you know you have ten requirements to implement over the next few weeks, then getting the design right (or at least as right as you can) up front has got to be the right thing to do. And modeling techniques like class diagrams have been specifically developed to assist you in thinking through design issues in a cost-effective manner.

    Of course, if you don’t actually know what the ten requirements are (because you haven’t done any analysis), then getting the design right is going to be somewhat problematic. But the answer to this problem is to do the analysis and write down your requirements, not to bury your head in the sand and say, “Okay, let’s find out what we’re doing this morning and let this afternoon worry about itself later.”

    Having said this, you may come up against unknowns (e.g., technology issues) that make it difficult to make the right design decision. This is when you should do some investigative coding—a spike, to borrow an XP term—to tie down the unknowns. This, however, has to be done in a directed manner, and it must be targeted at finding the answers to specific questions. In the early days of my current project, the team went off in a frenzy of coding and investigation for a week or so. At the end of this, we all sat down and tried to review what we’d found out. Surprisingly, the answer was “little of any use”!

    As we discussed earlier on, old-style waterfall development processes often had very large analysis and design stages up front. The problem is that while a core set of requirements was often stable, others weren’t—or at least they hadn’t been thought through properly or were likely to become obsolete before the system actually got implemented.

    Another possibility is that your customers don’t actually understand your requirements documentation; they can’t visualize the system from them. The customers may agree with the documentation but later come back and tell you that you’ve implemented the wrong system. If your requirements are unstable or not well understood, then you may design the perfect solution to the wrong problem!

    Another point worthy of note is that models can be wrong. John Daniels once said to me, “No one ever got sacked for getting a model wrong.” The point he was making was that code has to actually run—and therefore be testable—whereas models don’t. There is always an inherent risk that the models may actually be incorrect and that once in a while, at least, we need to “prove it in code” and get some concrete feedback.

    So there is a dilemma. On the one hand, a small window of design look-ahead can cause a lot of unnecessary refactoring later on. On the other hand, a large window of design look-ahead offers the prospect of less refactoring but at the risk of getting it wrong in one way or another.

    So the answer to your question is, “Do as much design up front as you can, given your confidence level in the stability and correctness of the requirements, the ability of the team to model effectively, the team’s understanding of the technology base, how much the customer actually trusts you, and myriad other issues.” It boils down to you making a judgment call.

Being Agile Means You Don’t Need to Plan Ahead or Worry About the Future (You Can’t Plan Anything Because You Don’t Know What’s Going to Happen)

  • Mark:   Fiction. Agile projects certainly require some planning, but it’s important to bear in mind that the effort you put in and level of detail you go into should be based on how far into the future you’re looking. Plan in detail for the short term (current increment) and in broad strokes for the long term (tentative increment use case or feature list).

  • Doug:   Fiction. You may not (and probably won’t) be able to plan perfectly. The future always brings uncertainty. But that doesn’t mean you shouldn’t take your best shot at anticipating, because there are usually many things that can be (and should be) anticipated.

  • Matt:   Fiction, and fact. You definitely do need to plan ahead, but it’s true that you don’t know what’s going to happen. So what use is a plan? A plan would be useless if its purpose was to set in stone precisely what will happen and when. This would mean binding the project plan very closely to the design (“Bob will take 3 days to write the SQL mapping for XYZ module,” etc.), and we all know that the design will shift and warp throughout the project. So what we need (and what agilists have already recognized) is that our projects need to be planning driven rather than plan driven.

Being Agile Means You Don’t Need to Try to Get It Right the First Time

  • Doug:   Fiction. Again, if you strive for perfection, you’ll probably wind up with high quality. If you don’t strive for perfection or high quality, you’ll probably wind up with crap. This is true with pretty much everything in life.

  • Mark:   Fiction. While I’m certainly not against refactoring in principle when it’s really necessary, one of the biggest fears I have about a culture of continuous refactoring is that it can give developers an excuse to be sloppy (“We don’t need to think about this very much because if we get it wrong, we can always refactor it later”). There’s a danger of putting your project into continuous prototype mode (“We’ll sort that out later”) and ignoring real issues. You should always try to get it as right as possible first time.

  • Matt:   I’ll go against the grain here and say fact, because you basically know that the design is going to change, so why even bother trying to get it right first time? Just kidding—I think Doug and Mark were worried there for a second! Seriously, though, this is at the heart of agile design, so we shouldn’t dismiss it outright. Even the most die-hard agilists will at least try to get a good design the first time, but they might not spend, say, 6–12 months sweating over it.

    Even though it’s important to absolutely try to get the design right the first time, it’s equally important to recognize that the first design you create will never be 100% correct, and it may even be completely wide of the mark, so you can limit the damage by spending less time on the design before you see source code. That’s not the same as saying “Leap straight into coding” or “Do 10 minutes of up-front design,” though, even if you’re following TDD (see Chapter 12). Instead, break the project into smaller sections and design in detail for each subsection before coding.

Being Agile Means You Mustn’t Do Any Design Up-Front

  • Doug:   [Screams “Nooooooooooooo!” quietly under his breath] Fiction.

  • Matt:   Fiction, but this is one of those situations where agilists and nonagilists talk at cross-purposes. One side says, “Agile method A is bad because it doesn’t involve any up-front design.” The other side responds, “What rubbish! We do up-front design all the time—usually for 10 minutes at the start of each day’s programming.” So, of course, it all depends on your definition of “up-front design.” Big Design Up Front (BDUF) usually means 6+ months of design during which no programming takes place. That’s bad. Up-front design might mean a month of intensive design work, or a week, or 10 minutes at the start of each day. There’s a definite cut-off point, though: do too much up-front design work and you stop being agile, because the customer has to wait too long before he sees any working software—the feedback loop isn’t tight enough. I would say that before a large-ish project, a month of design work (including rapid prototyping to prevent the designs from getting too naïve) is essential. It helps to arrange all your ducks in a row before you get started. After that, design as you go along, module by module.

  • Mark:   Fiction. This relates directly to the last point. If you’ve agreed on the contents of your current production increment, and you’re confident they’re stable, then why wouldn’t you undertake some design and try to get it right first time? You may make some mistakes (as you haven’t proven your design in code yet), but by undertaking some up-front design, you’re going to reduce the amount of refactoring you need to do—assuming you have the skills to do so. Note, however, that there’s nothing to stop you from doing some coding as part of your design phase if you’re not sure you have it right.

Being Agile Means You Should Never Design Flexibility into Your Software

  • Mark:   It depends on what your definition of “flexibility” is. I think there’s an implication in this question that the “flexibility” being talked about isn’t actually needed in the current increment—that it is there for some future, as yet undefined purpose. Some software developers have a tendency to drastically overengineer solutions based on guesswork about the future. Given this, I’d say fact. Flexibility should be driven by known needs.

  • Doug:   Fiction. While none of us may be able to correctly predict the future on a consistent basis, this doesn’t mean that we should pretend the future won’t happen.

  • Matt:   This is definitely not what agile processes are telling us, although it isn’t a million miles away either. Spending ages creating complex layers of indirection ironically makes the software more difficult to change (given that layers of indirection are supposed to make the design more flexible). Instead, agilists tell us to make the software easy to change by keeping it simple, which is more realistic.
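
As a throwaway, entirely hypothetical illustration of Matt’s point (not from the book’s project), compare a speculative layer of indirection with the simple version that meets the one requirement actually on the table:

    # Speculative version: an abstract formatter, a subclass, and a factory,
    # all anticipating report formats nobody has asked for yet.
    class ReportFormatter:
        def format(self, lines):
            raise NotImplementedError

    class PlainTextFormatter(ReportFormatter):
        def format(self, lines):
            return "\n".join(lines)

    class FormatterFactory:
        def create(self, kind="plain"):
            return PlainTextFormatter()   # only one kind actually exists

    # Simple version: does the same job in one obvious function.
    def format_report(lines):
        return "\n".join(lines)

    print(format_report(["Payroll run complete", "42 employees processed"]))

The simple version is just as easy to extend later: when a real second report format finally turns up, you’ll know what the abstraction needs to look like, and you can introduce it then.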

Concluding Fact or Fiction

As you can see, when it comes to defining what agility is or isn’t, and what it should or shouldn’t be, some differences of opinion arise—even in a group of people who for the most part agree on how software should be developed. The definition of agility has become quite nebulous, and people in our industry risk losing sight of some “traditional” core principles that remain valid, even with the advent of agility.

Agility doesn’t make up-front design obsolete. It also doesn’t make in-depth requirements analysis unnecessary. If anything, we rely on these disciplines more than ever so as to better achieve the agile goals.

Software agility boils down to simply giving the customer what he wants, safely, and in a reasonable time frame. If the requirements change, we need to change with them—even late in the project—so that we’re still delivering what the customer wants. (But see the caveat in the sidebar “Change Is Good, but It’s Also Expensive” earlier in this chapter.)

[14.]Mark Collins-Cope, “Interview with Robert C. Martin,” ObjectiveView, www.iconixsw.com/ObjectiveView/ObjectiveView4.pdf, Issue 4, p. 36.

[15.]See www.martinfowler.com/articles/continuousIntegration.html.


