Agile ICONIX: The Core Subset of Agile Practices

In this section, we define the core subset of agile practices that is Agile ICONIX. The intention is to be able to apply this minimal set of practices to a project to safely achieve the agile goals that we defined in Chapter 1.

Just as ICONIX Process represents a core subset of UML (as we showed in Figure 3-1), Agile ICONIX represents a core subset of agile practices (see Figure 4-1). For a more detailed diagram of how the agile practices work together, see Figure 4-2.

Figure 4-1: ICONIX core UML subset and core agile subset

Figure 4-2: Core agile practices

As we’ll explore in the next section, our core subset is based on the premise that the Agile Manifesto (which values “individuals and interactions over processes and tools”), while close, is off by just a little bit. Our refactored manifesto makes the (somewhat heretical) claim that people and processes and tools are all intrinsically important to success on a project. To us, it doesn’t make sense to say that one is more important than the others, because they’re all mandatory. The core subset we present here is based on that premise.

To recap Chapter 1, here are the goals of agility derived from the Agile Manifesto and various books and discussions on agility, reduced to four bullet points in the interest of information boil-down:

  • Respond to changing requirements in a robust and timely manner.

  • Improve the design and architecture without massively impacting the schedule.

  • Give the customer(s) exactly what they want from this project for the dollars they have to invest.

  • Do all this without burning out your staff to get the job done.

Any practices that don’t contribute toward fulfilling these agile goals are extraneous to our cause, so we leave them out.

The agile goals could be further summed up as

Give the customer(s) what they want within a reasonable time frame without botching it up.

Note 

Why might the task be botched up? Possibly because, in trying to deliver working software in record time, we cut too many corners (requirements exploration, up-front design, unit testing, QA testing, and so forth). It’s an all too common story, and one that we can learn from.

To support the agile goals, we also need to be able to accurately predict when each requirement will be finished. To do this, we need a way of breaking down requirements into deliverables—estimable fine-grained chunks of work.

We also need to be able to quickly estimate the impact of making a change in the requirements and present this information in such a way that the customer can see exactly how his change requests are affecting the project’s timescale.

WHY CAN’T WE JUST SET THE REQUIREMENTS IN STONE?

In an ideal world, our customers would be omniscient; have infinite, logical, forward-thinking ability; be able to foresee changes in their business way before they happen; and never make mistakes. Similarly, the development team would never make mistakes, always completely understand all requirements, have a full understanding of the latest technology changes that have hit the industry, and always work perfectly as a team.

Unfortunately, this isn’t the case in most circumstances. Like the customer, team members learn as they go along, and they figure out better technical and business solutions over time. So we have to be prepared to accept that changes to the software may be required for a variety of reasons. What we want to do is minimize the impact of that change when it does happen. It’s important to educate customers that there is a cost to change, though.


The practices and principles we describe here are definitely not new; they’re derived (or borrowed directly) from a variety of agile methodologies. What’s new in this book is that we’ve attempted to decide which practices are essential and which we might be able to leave out. Often, what you leave out is more important than what you leave in. Certainly, the core subset approach has worked well with the ICONIX approach to UML. The intention is to gather up all these different interpretations of how to be agile and present a single, boiled-down, working set of practices and principles.

To work really well, the agile practices and principles listed here must be used in conjunction with an up-front analysis and design modeling process, and we can’t think of a better one than ICONIX Process, a minimal set of object modeling practices that we define in Chapter 3.

Agile/ICONIX Practices

These practices, used together, are the bare minimum set that can be used to achieve the agile goal of safely giving the customer what he wants in a short time frame. We’ve divided the practices into two parts: the ICONIX Process practices and the “true” agile practices.

ICONIX Process Practices

These practices cover ICONIX Process as described in Chapter 3 (practice 3, interaction design, is explored further in Chapter 10).

  1. Requirements analysis and disambiguation

  2. Domain analysis (capturing a common language that can be shared by everyone involved in the project, including the customer and users)

  3. Interaction design (insert your favorite interaction design technique here) to reduce requirements churn and therefore also reduce the need to respond to changing requirements

  4. Prefactoring/model-driven refactoring (applying the right practices, as described in this book, to get the design as good as possible up front)

Agile Practices

The remaining practices can be thought of as “traditional” agile practices.

  1. Aggressive testing

  2. Frequent small releases[1.]

  3. Frequent integration (aka very-nearly-continuous integration; integrating all production code at frequent intervals)

  4. Synchronizing model and code after each release

  5. Agile planning

  6. Responsible ownership/collective responsibility

  7. Optimizing team communication

  8. Short iterations (for planning purposes; dividing a release into fixed-size iterations)

Figure 4-2 shows the way these practices relate to each other. (You can also download the latest version of this diagram from here: www.softwarereality.com/design/agileiconix.jsp.)

Practices 1 to 3 in the list might not, on the surface, seem like “true” agile practices. However, practice 1 manifests in XP as the so-called on-site customer,[2.] and even practice 2 gets some exposure in XP in the form of the “metaphor.” Practice 3, while not mentioned as a practice in other agile processes, is certainly good advice that most agile adherents would give. We include these practices up front because we view them as linchpins in the success of your project.

In fact, the practices as listed here are ordered from most indispensable to most flexible. You can use this list as an “adoption roadmap” when applying these practices in your own project.

As this is a core subset, we suggest that you adopt all these practices in your project as a bare minimum. However, we also have to be realistic: it would be a bit much to introduce all these practices at once and expect everyone on the team to instantly adapt, to shift their minds wholesale to this new way of working. Adopting the practices a few at a time makes more sense. Start with the first six practices (these are the core, essential practices), and once you have those down, consider adopting the remaining practices one by one.

The arrows in Figure 4-2 show the feedback that takes place between the analysis and design stages, facilitated by the three practices in the middle (prefactoring, synchronizing model and code, and optimizing team communication). Agile ICONIX is intensely feedback driven. Each stage of the process is validated with some form of feedback, whether it’s a model review (e.g., via robustness diagrams), unit tests helping to validate the design (and, of course, whether the code works), or customer feedback on an early prototype.

The two practices on the left of the figure, agile planning and frequent small releases, govern (and are governed by) all the other practices, again through a process of intensive feedback.

Let’s look at the practices in a little more detail.

1. Requirements Analysis and Disambiguation

This practice involves talking to the customer and end users, getting a properly trained business analyst to extract the correct requirements up front, and using robustness analysis (see Chapter 3) to disambiguate the functional requirements.

“Talking to the customer and end users” may seem obvious, but the point is really that it takes considerable skill to extract the correct requirements from them. It’s a skill that is separate from design and programming skills, so it wouldn’t be entirely reasonable to expect designers and programmers to perform this task.

Our goal with this practice is to reduce requirements churn (i.e., rapidly changing requirements). Sometimes, the requirements will change because the business changes, but more often, requirements change because they were suddenly discovered to be incorrect. The correct requirements are there, buried beneath the veneer of the customer’s and users’ confusion. A variety of techniques, including domain analysis (see the next practice), can help to clarify what the project needs to do, both for you and for the customer.

2. Domain Analysis

Domain analysis involves analyzing the business domain and creating a domain model (see Chapter 3).[3.]

After the domain model is created, it will continue to evolve throughout the project, and it should always be kept up to date. This is vital, because its key purpose is to eliminate ambiguity.

The domain model also provides the basis for the system design, so we end up designing the system using the exact same model that the businesspeople used to define the requirements.
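
To make this concrete, here’s a minimal sketch (our own hypothetical example, not the project from Chapter 8) of how domain model terms can flow directly into class skeletons; the class and property names are taken verbatim from the business vocabulary:

```csharp
// A hypothetical domain model sketch: class names and relationships come
// straight from the language the customer and users speak, so the design
// and the requirements share one vocabulary.
using System.Collections.Generic;

public class Book
{
    public string Title;
    public string Isbn;
}

public class Order
{
    private readonly List<Book> items = new List<Book>();

    public void Add(Book book) { items.Add(book); }
    public int ItemCount { get { return items.Count; } }
}

public class Customer
{
    public string Name;
    public Order CurrentOrder = new Order();
}
```

Because the businesspeople talk about customers placing orders for books, the code talks about exactly the same things, so ambiguity has far fewer places to hide.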

3. Interaction Design

Note that “design” in this case means designing the UI and the way that users interact with it. Interaction design means

Designing interactive products to support people in their everyday and working lives.[4.]

Interaction design, performed as early in the project life cycle as possible, can have a profound effect on the team’s understanding of the problem domain. Interaction design is both an art and a science in its own right; you could spend years studying how to make software usable and how to make it fulfill the real goals of the end users. However, to get you started, in Chapter 10 we describe how persona analysis (a staple ingredient of interaction design) can be combined with the ICONIX Process in an agile project.

4. Prefactoring/Model-Driven Refactoring

We introduced the term “prefactoring” in Chapter 3. It means

Using up-front design modeling to refactor and improve the design before writing the code.

More specifically, it means mapping the set of software functions you’ve identified to implement the (disambiguated) requirements onto the set of classes you’ve discovered.

Doing just enough prefactoring can help to reduce the amount of refactoring you’ll need to do later.[5.] Prefactoring also helps to provide feedback into the analysis stage, by providing a review of the analysis model (the Requirements Review and Preliminary Design Review milestones; see Chapter 3).

Model-driven refactoring can be broken down into the following steps (as applied to iteration n+1). The intent is to synchronize the class diagrams with the existing code (as per iteration n):

  1. Write the new use cases and perform robustness analysis.

  2. Draw new sequence diagrams over class diagrams, noting any changes to existing class structure/relationships, and so on (refactoring the model for the new requirements, effectively).

  3. Refactor the existing code to the new class structure as modeled without adding the new functionality (yet), and rerun the tests to show we haven’t broken the system.

  4. Add the new functionality and the unit tests (we write the unit tests first if we’re following a TDD approach; see Chapter 12).
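
To make steps 3 and 4 concrete, here’s a minimal sketch (hypothetical classes, and assuming NUnit as the test framework). In step 3 we restructure the code to match the new class diagram without changing behavior, and the existing test must stay green; only in step 4 would we add the new functionality (say, per-region shipping) along with its own tests:

```csharp
using NUnit.Framework;

// Step 3: ShippingCalculator is extracted from Order to match the new
// class diagram, but the behavior is deliberately left unchanged.
public class ShippingCalculator
{
    public decimal CostFor(decimal orderTotal)
    {
        return 5.00m;   // same flat rate as before the refactoring
    }
}

public class Order
{
    private readonly ShippingCalculator shipping;
    public decimal Total;

    public Order(ShippingCalculator shipping) { this.shipping = shipping; }

    public decimal TotalWithShipping()
    {
        return Total + shipping.CostFor(Total);
    }
}

[TestFixture]
public class OrderTests
{
    // This pre-existing test must still pass after the step 3 restructuring,
    // proving the refactoring hasn't broken the system. Step 4 (per-region
    // shipping, for example) would then add new code and new tests.
    [Test]
    public void ShippingCostIsUnchangedByRefactoring()
    {
        Order order = new Order(new ShippingCalculator());
        order.Total = 20.00m;
        Assert.AreEqual(25.00m, order.TotalWithShipping());
    }
}
```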

This sequence of steps assumes that we have a good set of functional unit tests covering the existing functionality, which leads us neatly to the next practice.

5. Aggressive Testing

After you’ve begun writing code, the design will inevitably change somewhat (though hopefully to a much lesser extent than if you’d simply begun writing the tests and code without any preliminary design work). This means refactoring, which in turn means having a healthy number of unit tests at your disposal to help catch errors while making the changes.

TDD, which involves designing at the class and method levels using tests, can be used in conjunction with ICONIX Process to create a more rigorous design and a set of highly focused unit tests. Unit tests of this kind are sometimes referred to as micro tests because of their fine-grained nature; there may be several tests for each method on a class.
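
As an illustration, here’s what micro tests might look like for a single method on a hypothetical PriceCalculator class (again assuming NUnit). Note that one method attracts several small, sharply focused tests:

```csharp
using NUnit.Framework;

// Hypothetical class under test: 10% discount on orders of 100.00 or more.
public class PriceCalculator
{
    public decimal Discount(decimal orderTotal)
    {
        return orderTotal >= 100.00m ? orderTotal * 0.10m : 0.00m;
    }
}

[TestFixture]
public class PriceCalculatorDiscountTests
{
    private readonly PriceCalculator calc = new PriceCalculator();

    [Test]
    public void NoDiscountJustBelowThreshold()
    {
        Assert.AreEqual(0.00m, calc.Discount(99.99m));
    }

    [Test]
    public void DiscountAppliesAtExactThreshold()
    {
        Assert.AreEqual(10.00m, calc.Discount(100.00m));
    }

    [Test]
    public void DiscountGrowsWithOrderTotal()
    {
        Assert.AreEqual(20.00m, calc.Discount(200.00m));
    }
}
```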

We discuss the combination of ICONIX Process and TDD in Chapter 12.

6. Frequent Small Releases

It pays to release the product to the customer often, so that you can get feedback as early as possible on whether the product’s heading in the right direction. If you leave a release for more than 3 months, that’s taking a risk. Conversely, if you release more often than once a month, then the process can become clumsy, with too many releases for the customer, the users, and your own QA team to handle effectively.

It’s common for the first release to take longer than the rest, because the first release often creates a technical foundation for the whole system.

7. Frequent Integration

Each individual (or pair) should integrate all his latest production code into the source control system at least once per day.[6.] In addition, an automated build should check the code out of source control, build it, and run all the unit tests at least once per day (though once per hour is better: if the automated build is running once per day anyway, why not once per hour? It takes no extra effort, and it provides earlier feedback when things go wrong, such as when someone checks in code that doesn’t compile).

This practice is a slight step back from the common notion of continuous integration[7.] (which implies that throughout the day you continually integrate code into the source control system, almost as soon as you’ve typed it in and it’s been shown to pass the unit tests). The longer you leave integration, the more problematic it eventually becomes. However, continuous integration can also be problematic, because sometimes it’s just easier to get something finished before checking it in and sharing it with the rest of the team.

So while frequent integration is a useful and fundamentally important guideline, it is just that: a guideline. There are occasions when a programmer may need to “drop out” of the core development trunk to work separately for a couple of days. However, in the meantime, the rest of the team is continuing the practice—that’s essential. We cover some of the reasons for this in the “Agile Fact or Fiction (Continued)” section later in this chapter.

As described in Martin Fowler’s article on the subject,[8.] continuous integration can involve just daily integration, so the difference here is primarily one of name: frequent integration emphasizes the importance of making sure your code works (and doesn’t break anything else) before checking it in.
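
To show how little machinery the automated build needs, here’s a minimal sketch of an hourly build loop. The checkout, build, and test commands are placeholders; substitute whatever your source control system and build tools actually use:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class HourlyBuild
{
    // Runs an external command and returns true if it exited successfully.
    static bool Run(string command, string args)
    {
        using (Process p = Process.Start(command, args))
        {
            p.WaitForExit();
            return p.ExitCode == 0;
        }
    }

    public static void Main()
    {
        while (true)
        {
            // Placeholder commands: check out, build, then run all unit tests.
            bool ok = Run("svn", "checkout http://example.com/repo/trunk build")
                   && Run("msbuild", "build/Product.sln")
                   && Run("nunit-console", "build/Product.Tests.dll");

            Console.WriteLine("{0}: build {1}", DateTime.Now, ok ? "OK" : "BROKEN");
            Thread.Sleep(TimeSpan.FromHours(1));   // once per hour
        }
    }
}
```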

8. Synchronizing Model and Code

With this practice, we’re essentially enabling the next bout of work (“enabling the next effort” to describe it in Agile Modeling terms). After each release, pause briefly to bring the code and the design back in sync. This might mean updating the code, or (more likely) it means updating the model. But only update what you need to—if a diagram is no longer needed, discard it.

This practice also allows the design to feed back into the analysis model. In other words, having implemented part of the system and released it to the customer, you may find that part of the original use cases or domain model just didn’t map correctly to the implementation. This is a good opportunity to correct the analysis work, resetting the foundations ready for work to begin apace on the next release.

This practice gets its own chapter and is tied in with our example C# project (see Chapter 8).

9. Agile Planning

An agile project is one that is planning driven rather than plan driven. In traditional plan-driven projects, the plan (created up front) becomes the gospel by which the project will be driven and ultimately judged. Conversely, in an agile planning–driven project, a lot more planning goes on, in smaller increments and finer detail, throughout the project. This practice is also sometimes referred to as adaptive planning, because the plan is adapted to the changes in the project.

We cover agile planning in more detail in Chapter 9.

10. Responsible Ownership/Collective Responsibility

This is a reworking of collective ownership (a common agile practice in which everyone on the team “owns” the entire code base; there’s no specialization as such). Because we’re not mandating pair programming, we don’t specifically need collective ownership. So instead we can do something a bit healthier: allow people to “own” specific areas of the system, but make everyone collectively responsible for the overall code base. This is a much more natural way of working, and it means that while programmers aren’t limited to one particular niche in the project, they do get a sense of ownership and therefore take more pride in their work.

The corollary to this advice is that making someone responsible for something she has no control over is a perfect recipe for stress. For example, letting Fred own a piece of the code base but making Alice responsible for it could lead to arguments of the most vicious order. How well this practice works (or doesn’t) depends entirely on the personalities involved. Similarly, “true” collective ownership doesn’t work for everyone. Probably more than any other practice listed here, this one needs to be tailored and closely monitored.

As a result, this practice has been deliberately left so that, as a project leader, you can take from it what you want to. If you want to go with XP’s “extreme” collective ownership, it still fits in with the other practices here. Or you might prefer to scale it back (the other extreme being individual code ownership, where each person specializes in specific areas of the code base). The practice as described here is somewhere between the two extremes, allowing programmers to take individual pride and ownership in their work but avoiding the problems typically associated with specialization.

11. Optimizing Team Communication

This practice primarily means looking at the way your team is currently organized and moving team members around (both physically and in terms of their responsibilities) to improve the level and quality of communication.

While intrateam communication is an important part of this practice, communication between the programmers and the business analysts is also important. If the analysts are too far removed from the technical realities of a project, they may produce a set of behavior requirements that is far from the optimal solution it could be. Early feedback from the programmers about what is and isn’t technically feasible within the customer’s time constraints can save time and expense.

Similarly (and perhaps more important), the programmers need a solid grasp of exactly what they’re supposed to be creating. Getting feedback from the analysts on the new software as it’s being created helps to identify mistakes early. So try to seat the programmers near the analysts; that way, the programmers are more likely to call the analysts over to show them an early working version of the system and get their feedback on whether it meets the real requirements.

For many people, human factors are what software agility is all about: moving away from the cold, logical, dispassionate process definitions of old and addressing the fact that people are the main ingredient in any software project. Observing human factors means keeping an eye on the way teams interact and arranging the team members so that they’re working at their optimum level. For example, it’s good to try and seat people working on the same part of the project in the same room so that they’re more inclined to communicate.

Team size also plays a large part in the level (and quality) of communication that goes on. The smaller the team, the better the quality of communication tends to be, and the less costly it is. Smaller teams are also generally more productive: as the number of people on your team increases, so does the effort involved in communicating and coordinating activities. Large teams are generally less efficient, so it’s always best to start small and add new team members only if you really have to.

Tip 

On large-scale projects, effectively dividing the project into smaller, self-contained subprojects can help, but this is only really “agile” if each team also maintains its own self-contained domain model and code base. If it’s difficult to refactor (both the domain model and the code design), then people won’t.

Human factors and communication issues are very important, but a detailed analysis of these topics is beyond the scope of this book. We recommend you take a look at Alistair Cockburn’s book Agile Software Development for more on the subject.[9.] (We also touch on the subject in Chapter 2.)

12. Short Iterations

A project often finds a natural rhythm. For example, each week on Monday morning, we have a team progress meeting where we discuss issues from last week and sort out what each person will do this week. It pays to make the planning process follow the same rhythm, by dividing the project into fixed-size planning iterations of 1 week each.

Note 

What happens if the iteration isn’t done by Friday? Do you work the weekend or slip the schedule? The purpose of the planning iteration isn’t to set a deadline by which all scheduled work must be wrapped up “or else.” Instead, it’s a way of monitoring progress—the amount of work getting done by the team during each week. So if the work scheduled for this week isn’t finished by Friday, then of course the work carries over into the following week, but the planning iteration provides an early indication that the team is underestimating its work.

Matt has found 1 week to be the ideal iteration size, as the weekend provides a natural break between iterations and, of course, it helps to get everyone focused on their work on a Monday morning when they might otherwise be bleary-eyed and wishing they were still on their weekend fishing trip. Doug, on the other hand, prefers the Zen-like simplicity of keeping the releases short and not subdividing them into weekly iterations. You should tailor the iteration size to suit your organization and project.

We’ve put this practice last on the list (suggesting that it’s the one you might add to your project last of all) because you could get by without it. Having said that, it’s also one of the most natural of these practices for your team to adopt. Getting everyone in sync at the start of the week, allocating work, checking up on progress from the previous week, updating the plan, and just getting people talking to each other about their areas of the system—once you get into the habit of doing this each week, it’s a difficult habit to break!

Tracking progress each week can also help to track project velocity over time (see the earlier agile planning practice).
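
The arithmetic behind velocity tracking is simple enough to sketch in a few lines. This is our illustration of the general technique, not a tool from this book; the numbers are invented:

```csharp
using System;
using System.Linq;

public static class VelocityTracker
{
    public static void Main()
    {
        // Units of estimated work actually completed in each weekly iteration.
        int[] completedPerWeek = { 8, 10, 7, 9 };

        double velocity = completedPerWeek.Average();   // average units per week
        int remainingUnits = 45;                        // estimated work left
        double weeksLeft = remainingUnits / velocity;

        Console.WriteLine("Average velocity: {0:F1} units/week", velocity);
        Console.WriteLine("Forecast: {0:F1} weeks remaining", weeksLeft);
    }
}
```

Comparing this forecast against the plan each Monday is exactly the kind of early feedback the planning iteration is meant to provide.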

Agile Values That Drive the Agile Practices

The first three values we describe in this section are borrowed directly from XP and can be traced back to the agile goals. As a set of values by which to drive your project, they can’t be faulted (although we have adapted them slightly to present the “digest” version here).[10.] We’ve also added a value of our own: judgment and balance.

  • Communication

  • Early feedback

  • Simplicity

  • Judgment and balance

You can think of these values as the glue between the goals and the practices. The goals are to maximize the values; the practices are how we go about doing that.

Communication

We see communication as an integral part of early feedback, but we’ve included communication separately because it’s such a vital and fundamental aspect of developing software. If the various members of your team don’t talk to each other regularly about the system they’re building together, then the project is probably doomed.

As with early feedback, communication is also about talking to the customer and end users and finding out what they want, either through traditional (but effective) interview techniques or by regularly showing them incremental releases of the product and getting their feedback as early in the product life cycle as possible.

Note 

The primary purpose of a model (besides designing, of course) is to communicate the design (and/or requirements) to others. Modeling is a communication tool. Always model for readability, because if nobody else can read it, it isn’t worth much.

Early Feedback

Early feedback means finding out as soon as possible if you’re “straying from the path of rightness.” It includes such things as getting the customer to review an early release of the system, finding out as soon as possible if a change you made to the code has broken anything, and getting peer feedback on a design decision before it gets coded up.

It also means monitoring and adjusting the development process as you go along, based on whether current practices appear to be working or not. To rework the old saying, “If it is broke, fix it immediately!”

Note 

The saying just mentioned is especially true when it’s the build that’s broken, and it’s one of the reasons an automated hourly build is preferable to a daily build. The most important time in any defect’s life cycle is the first 24 hours after it has been injected into the system. If you can catch the defect in that time, it will be much, much easier to repair than if you catch it at a later date.

Simplicity

Simplicity means, simply, to keep it simple. Okay, we should probably elaborate on that a little bit. Simplicity, when it comes to software development, means not doing more than you really need to.

Note 

It’s worth stressing that “keep it simple” doesn’t mean “don’t do the alternative courses.” We’ll try to drive the following point home throughout this book: the alternative courses are a vital part of development and need to be taken into account during design.

In terms of what the software does, simplicity means that the software should do what the customer has asked for and nothing more. In determining what the customer is really asking for, you can often make the software simpler by identifying patterns in the requirements (for example, combining elements of the product’s UI that fulfill the same requirements). One way to do this is to define the problem we’re setting out to solve in terms of a domain model and to use domain analysis to break the problem down further, producing a simpler solution.

In terms of how the software works, simplicity means keeping the design simple and not overdoing it with layers of abstraction and indirection. Complex code is generally written that way because the programmers thought that putting all those layers in there would make the code easier to change. Ironically, the opposite happens: a simple change ends up involving modifications in each and every layer. Simplicity can be taken to an undesirable extreme, however—for example, abandoning business objects and putting SQL code or application logic directly in your presentation tier (e.g., JSP pages). While this might suffice for very small projects, it’s mostly something to be avoided.

Judgment and Balance

We were tempted to call this principle “moderation in all things,” but even moderation, per se, can’t always be called a good thing. There are trade-offs to be made at all points in the development process. Trade-offs require understanding both the pros and cons of doing things, evaluating your current circumstances, and making a decision. Here are a couple of examples:

  • The company will go out of business unless we release in the next 2 weeks. This requires extreme action! There are obvious downsides, but they have to be lived with.

  • The persistence mechanism doesn’t perform very well. But this isn’t causing a problem at the moment, and there are other issues causing problems. So we choose to ignore this issue for the time being (while also trying to judge when it will become a real problem).

However, judgment and balance may well mean taking a moderate approach, and moderation in turn means finding a balance somewhere between adopting a practice completely and unconditionally (“pushing the dial all the way up to 10”) and not doing it at all. For example, in Extreme Programming Refactored: The Case Against XP, the refactored process that we describe is mostly about finding ways to tone down XP’s “extreme” approach and let things simmer gently rather than boil over.

[1.]As we describe in Chapter 9, there are three types of release: an internal, investigative release seen only by the development team; a customer-visible investigative release; and a customer-visible production release. There are also different stages of release (release to testing, beta release, full customer ship, and so on). The term “frequent small releases” typically refers to the “customer-visible, full-on, customer-ship, bring-it-on production release,” but a release can actually be any of these types.

[2.]On-site customer is an innocent-sounding name for something that actually turns out to be a team of customer analysts equal in size to, or larger than, the team of programmers, who must all squeeze into the same room. Picture a roomful of programmers being squeezed out by a whoop of sharp-suited, clipboard-wielding businesspeople.

[3.]Eric Evans, Domain-Driven Design: Tackling Complexity in the Heart of Software (New York: Addison-Wesley, 2003), p. 24.

[4.]Jennifer Preece, Yvonne Rogers, and Helen Sharp, Interaction Design: Beyond Human-Computer Interaction (Hoboken, NJ: John Wiley & Sons, 2002), p. 6.

[5.]See Chapter 8 of this book for a hands-on example of model-driven refactoring in action.

[6.]The obvious exception (at least we hope it’s obvious!) being if the code isn’t compiling or if it’s causing unit tests to fail.

[7.]See http://c2.com/cgi/wiki?ContinuousIntegration. As noted on this web page, continuous integration should really be called “continual integration”: a discrete action/event repeated endlessly, such as the sound made by a playing card stuck in bicycle spokes.

[8.]See www.martinfowler.com/articles/continuousIntegration.html.

[9.]Alistair Cockburn, Agile Software Development (New York: Addison-Wesley, 2001).

[10.]As our regular readers will know, it’s XP’s combination of practices (driven by the values) that lets it down. See our other book, Extreme Programming Refactored: The Case Against XP (Apress, 2003), for an in-depth analysis of why we believe this to be the case.


