Persistent Questions


Over the last five years, people keep asking the same sorts of questions: about the limits of the agile model; about agile's relationship with the Software Engineering Institute's Capability Maturity Model Integration (SEI's CMMI), with ISO 9001 conformance, with customer relations, and with domain modeling. They also ask how to introduce agile into a company, what "more agile" means, and how to tell whether their organization is agile at all.

This is the place to consider those questions.

Sweet Spots and the Drop-Off

When applying good practices, there is a gradient and a steep drop-off. Along the gradient, doing more and better is good. But across the drop-off, doing less can be catastrophic. I suggest that getting all the key practices above the drop-off is more important than getting just one or two of them to the top of the gradient.

Recall the five sweet spots of agile development mentioned on p. 222:

  • Two to Eight People in One Room

  • Onsite Usage Experts

  • One-Month Increments

  • Fully Automated Regression Tests

  • Experienced Developers

It is not the case that if you have those sweet spots in place, then you can use the agile model. Rather, the closer you can get to those sweet spots, the easier it is to take advantage of the agile model, and the lighter and faster you can move.

For the first one, colocation, we see that if the people are not in the same room but in adjacent cubicles, communication is not as good (this is the gradient) but is still adequate for even the Crystal Clear model to work. When people move farther apart than the length of a school bus[32] and around a corner or two, their communication becomes so constrained that very particular dangers arise; this is the drop-off.

[32] See the Bus-Length Communication Principle, p. 102.

Similarly, it is not really necessary to have a full-time, onsite expert user in order to get good usage and usability information. I have seen projects do very well with expert users spending two hours a week onsite with phone calls during the rest of the time. This is the gradient. Less than once per month is on the low side of the drop-off; there just is not enough information and feedback from the expert, and the project is in danger of delivering the wrong system.[33] Most projects never get to see a real user expert at all.

[33] See the research study described in (Cockburn 1998, p. 134) on "user links" as critical to project success.

One-week, one-month, and three-month deliveries all lie on the gradient. One-year deliveries are off the drop-off.

Continuous, daily, even every-other-day integrations all lie on the gradient. Weekly integration and manual testing are off the drop-off.
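To make "fully automated" concrete, here is a minimal sketch of a self-judging regression test, written with Python's unittest; the function under test is a hypothetical stand-in for your own code. The point is that the suite runs and passes judgment with no human eyeballing the output, so it can run unattended at every integration.

    import unittest

    def parse_order_total(line):
        """Hypothetical function under test: pull the amount out of
        a line such as 'TOTAL: 41.50'."""
        return float(line.split(":")[1])

    class RegressionTests(unittest.TestCase):
        def test_parses_simple_total(self):
            self.assertEqual(parse_order_total("TOTAL: 41.50"), 41.50)

        def test_tolerates_extra_whitespace(self):
            self.assertEqual(parse_order_total("TOTAL:   7.00  "), 7.00)

    if __name__ == "__main__":
        # Exits nonzero on failure, so a build script can gate on it.
        unittest.main()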

Having all competent and experienced developers is really great. One competent and experienced developer for every three not-so-outstanding or young and learning developers is still on the gradient. Having only one competent and experienced developer for seven or ten of the others is off the drop-off.

Understanding the gradient and drop-off helps us understand the different agile methodologies being published. One of the goals of XP is to get as high up the gradient as possible. Thus, the original XP called for establishing the center of each sweet spot.

The goal of Crystal is project safety, with regular, adequate deliveries. Therefore, Crystal only calls for establishing the top of the drop-off. Every improvement from there is left to the discretion of the team.

Think about where your practices are relative to the drop-off. Consider that getting all of the practices above the drop-off is more important than getting just one or two of them to the top of the gradient.

Fixed-Price, Fixed-Scope Contracts

My debut as a lead consultant was on an 18-month, $15 million fixed-price, fixed-scope contract, with 45 people on the development team by the end. I found on that project that all the ideas we later wrote into the agile manifesto were needed to pull off the contract: incremental development, reflection, frequent integration, colocation, and close user involvement (we succeeded without automated tests, which these days I would try harder to get in place).

Most fixed-price, fixed-scope contracts are priced aggressively and typically require large amounts of overtime. Their problem is not rapidly changing requirements but development inefficiency. Fortunately, the agile ideas increase development efficiency, and there is almost always wiggle room in the development process that the team can exploit (see "Process: the 4th Dimension" [Cockburn 2003c]).

Thus, I came to the writing of the agile manifesto through the door marked "efficiency" instead of the door marked "changing requirements."

If you are working on fixed-scope contracts, remember two things:

  • At the end of each iteration or delivery, if you need to keep the scope constant, simply don't change the requirements! There is nothing in the agile manifesto that says you must change the requirements at the end of an iteration; it says only that agile development accommodates requirements changes in those situations where they are appropriate.

  • The contract puts a bounding box on the time, scope, and money you can apply. Within those limits, you can colocate the team, work incrementally, pay attention to community and communication, automate testing, integrate often, reflect and improve, and involve users.

Agile, CMMI, ISO9001

Rich Turner highlighted for me the key difference between the CMMI and the agile methodologies: The agile methodologies are focused at the project level, whereas the CMMI is focused at the organizational level.

There are additional philosophical differences that can be debated, but one must first take into account that their targets are different.

An agile initiative addresses the question: How do we make this software on this project come out well? A CMMI initiative addresses the questions: How well is the organization doing at being what it wants to be? Do the different groups have in common what they are supposed to have in common? Do they train newcomers to behave in the ways the group wants? And so on.

Thus, it is not a meaningful question to ask whether XP is a CMMI level 3 methodology because XP does not contain any rules about how to train an XP coach or how to detect whether people are pair programming or refactoring (or why to bother detecting that). Yet those are the things the CMMI wants the organization to think about.

Since the writing of the agile manifesto, an increasing number of agile practitioners have been asked to help install agile processes across an entire organization. Interestingly, we find ourselves asking the same questions the CMMI assessment process asks: Is your team really doing daily stand-up meetings, and how can you tell? What is the training for a new Scrum Master? Where are your reflection workshop outputs posted? Where is your information radiator showing the results of the latest build?

It may be time for agile practitioners to take a close look at the lessons learned from the CMMI as it relates to creating common processes across an organization.

Taking into account that the targets are different, we find that

  • Getting an agile development shop assessed at CMMI level 2 or 3 may not be so difficult, especially as the SEI is starting to train assessors to work with agile processes.

  • Deep philosophical differences remain between the CMMI approach and the agile approach.

There are two very fundamental philosophical differences between CMMI and agile that generate tension between the two worlds.

The first difference is this:

  • The CMM(I) is based on a statistical process assumption rejected by the authors of the agile manifesto.

Process engineering literature categorizes processes as either empirical or theoretical (also called defined). A theoretical, or defined, process is one that is understood well enough to be automated (quite a different definition of defined from the one the CMMI uses!). An empirical one needs human examination and intervention. The textbook by Ogunnaike (1992) has this to say:

"It is typical to adopt the defined (theoretical) modeling approach when the underlying mechanisms by which a process operates are reasonably well understood. When the process is too complicated for the defined approach, the empirical approach is the appropriate choice."

For a defined or theoretical process to work, two things must be true. First, prediction must be possible; the system must be predictable in its response to variations. Second, one must know and be able to measure all the parameters needed to make the prediction.

I don't think either is satisfied in software development. First, we don't yet know the relevant parameters to measure. The purpose of the agile movement, and of this book, has been, and to some extent still is, to get people (including researchers) to pay attention to new parameters, such as quality of community, amicability, personal safety, and factors affecting speed of communication ("erg-seconds per meme").

Second, even when we get far enough to name the relevant parameters, I think it quite likely that team-based creative activities such as software development are chaotically sensitive, meaning that a tiny perturbation can cause an arbitrarily large effect. As one example, I once watched in amazement as a high-level employee quit the organization in anger over what seemed to some of us to be a small offhand comment. I'm sure she had been subjected to this comment and similar ones before, but on this day, the same comment was just one too many, and she quit.

The CMMI ladder is built on the assumption that both conditions are satisfied.[34] This assumption is embedded in levels 4 and 5. At level 4, the organization captures numerical metrics about the processes. At level 5 those metrics are used to optimize the process. The assumption is that gaining statistical control of the software development process is the key to doing better.

[34] Ilja Preuß mentions that the book Measuring and Managing Performance in Organizations [Austin 1996] has interesting material about what dysfunctions arise if you work to such an assumption where it is wrong.

The second philosophical difference between CMMI and agile is this:

  • The CMMI places optimization of the process at the final level, level 5. Agile teams (teams using Crystal, in particular) start optimizing the process immediately.

At the very moment of starting their agile work, the team will be challenged to ask themselves how they can do better.

In asking how to do better, the statistical process assumption shows up again. In what follows, I will speak for Crystal in particular, in order not to speak incorrectly for anyone else.

In the reflection workshop or post-iteration retrospective, the process and the project's condition are evaluated by people's emotional responses in addition to numerical metrics.

My working assumption is that a human, as a device, is particularly good at taking in a lot of low-grade analog information along various channels and summarizing it in a feeling or emotional statement. Thus, when a person says they feel "uneasy" about the state of the project or "uncomfortable" with the last iteration's communication patterns, they are in fact saying a great deal, even when they can't provide numerical values or details. It is the very fact of feeling uncomfortable that the team must attend to.

This view, which seemed out of fashion at the time I first formulated it, has been getting attention in recent years. Several books have been written around it, most notably Blink (Gladwell 2005), Sources of Power (Klein 1999), and Sketches of Thought (Goel 1995).

From sharing their feelings about what is going on, the team is in a position to start improving their working habits long before they get statistical data, and even before they are consistent in their habits. This is fundamentally different from the CMMI model.

There is one final, significant point to be made about combining agile and CMMI: It appears to be easier to incorporate agile practices into an organization already assessed at CMMI level 3 or higher than to move an agile project team into CMMI.

A CMMI level-3, level-4, or level-5 organization has already demonstrated the ability to adopt an organization-wide process and to train the people and the leaders in their roles. The agile elements they will add are unlikely to endanger that rating in any way:

  • Colocation. The team may not be colocated initially, but if they choose to colocate, that will not affect any other part of their process.

  • Daily stand-ups. Getting the team together daily to talk about what they are doing should not affect any of the team's other rituals or deliverables.

  • Incremental development. Many CMMI-certified shops do not do incremental development, but working in increments should not break any part of their process.

  • User viewings. Having one or more users visit and give feedback to the development team each week is a good practice and will not affect any part of their documented process.

  • Continuous builds, automated regression unit and acceptance tests. These are probably already recommended in their process, and nothing is harmed by turning them up.

  • Reflection workshops. Getting together each month to discuss how to do better will not endanger any process assessment. The tricky part comes if the team decides it needs to change some part of its process to do better and that change puts them out of step with the greater organization's process.

In other words, a CMMI organization should, theoretically, have no trouble in adopting the key recommendations of the agile approach. Most organizations striving for CMMI assessment that I have talked to are unwilling to do those things because they don't see them as competitive drivers.

The agile organizations I have visited generally have no interest in CMMI certification at all, viewing the CMMI ladder as a simple waste of money. That is, they don't need the CMMI certification to get contracts, but it will cost time and money to get that certification.

The way I understand these two views is that organizations that profit from having a CMMI level-3 assessment for certain government contracts are not competitively hampered by lacking the levels of productivity and responsiveness offered by the agile approach. Organizations operating in a high-flux, competitive environment and needing the agile approach for organizational survival can't afford to, or don't see the value in, spending the money or time to create the process superstructure required to get the CMMI level certifications.

The difference here is based not on philosophy but on priorities created by the business operating environments.

The CMMI question transfers to the equivalent question about ISO9001 certification. ISO9001 certification is easier, in one sense: there is only one step on the ladder. The statistical assumption and the late optimization objections therefore don't apply.

One company, Thales Research and Technology (TRT UK), as part of its Small Systems Software Engineering activity, ran a trial project using the Crystal Clear methodology and had an ISO9001 auditor evaluate what it would take to get that project's process to pass an ISO9001 audit. The report is too long to reprint here; it is presented in Crystal Clear (Cockburn 2005a), pp. 293-298. The interested reader is encouraged to examine that report. The sort of comment we see from the auditor, though, is of this nature:

"In order to be fully compliant with this clause of ISO 9001, whiteboard records [ed. note: they used printing whiteboards] would need to indicate who was involved with the viewings and workshops. In addition, the Sponsor and/or User should be present at some or all in order to ensure reviewer independence. Actions should be clearly identified to enable tracking at the next workshop. The whiteboard records would need to be kept for the mandatory records of review required by ISO 9001. (Cockburn 2005a, p.296)."

In other words, there was not a problem in using design and reflection sessions at whiteboards, as long as everyone present signed their names on the whiteboard and dated it (and then either photographed or printed it).

When the team updates their working conventions, as called for in Crystal, they would presumably similarly print, sign, and date the new list. They would then use that as the new process to evaluate against.

Another View of Agile and CMMI

by Paul McMahon

While in some cases I agree it is "easier to incorporate agile practices into an organization assessed at CMMI level 3 or higher than to move an agile project team into CMMI," my experiences indicate that this is not always true.

I am helping a number of organizations with high CMMI maturity (levels 3, 4, 5) become more agile. But I am also helping small organizations that started out agile to use the CMMI model for process improvement. Last year I participated in a formal appraisal for a small company that is very Scrum-oriented, and they achieved a CMMI level 3 rating. If you use the CMMI model as it was intended, I don't believe you have to "move the team into CMMI."

The CMMI model is a reference framework for process improvement. It isn't a set of required practices. In the case I mentioned, we didn't change the behavior inside the small company, except in isolated instances where change was clearly beneficial. For the most part, we demonstrated where what they do (largely Scrum) meets the intent of the level 2 and level 3 reference practices. To accomplish this requires a good understanding of the CMMI model, including the use of what is referred to as "alternate practices."

As an example, the agile daily standup meetings can be viewed as an "alternate practice" within the CMMI model. These daily meetings along with the follow-up actions by the Scrum Master can achieve the intent of a number of the specific practices within the Project Monitoring and Control Process Area.

User viewings meet part of the need for stakeholder involvement, which is an important aspect of the Project Planning and Requirements Management Process Areas and the Generic Goals within the CMMI model.

With respect to reflection workshops, the CMMI model actually recognizes that change is a good thing. It expects the team to capture lessons learned and improve the process. So reflection workshops provide lessons learned, feedback, and improvement, as required within the generic goals of the model. The one subtle difference is that if you are going for level 3, then improvement recommendations that come out of the team should not be limited to the team but should also be communicated to the organization level so that other teams in the organization can benefit from what you have learned.

Attaining CMMI level 3 doesn't have to mean more detailed process definitions. It means that the processes that work for an organization are shared across that organization and adhered to. Nothing says they can't be "agile processes." Institutionalization is an important part of the model, which means the processes don't break down in times of crisis and aren't ignored when new people are brought in. In my view it is often easier to institutionalize agile practices because agile teams usually believe strongly in how they do their job.

As another example, the agile company I have been helping has an open communication culture where lead engineers aren't afraid to walk into a senior manager's office as soon as they know they have a risk or just need help. The team members are also encouraged to raise risks at the daily standup meetings, and they often do.

At first, based on the Risk Management Process Area in the CMMI model, we created a form to capture risks more formally, but the form was not well received in the organization. So instead we defined a risk process that captured exactly what the team does and then trained new personnel in the organization's expected risk management behavior. We did require that they capture their risks in a periodic briefing to Senior Management, but we continued to encourage the informal and immediate communication that was already working well.

Our lead appraiser had no trouble at all with processes like Risk Management that clearly were institutionalized across the organization and met the intent of the process areas, as long as the "direct artifacts" and "affirmations" substantiated the effectiveness of the process (direct artifacts and affirmations are discussed later).

My experience has been that when we try to "formalize" effective processes, too often this results in negative side effects because we unintentionally change what works. As an example, if we required formal written meeting minutes for the agile daily standup meetings, people would be less likely to speak openly. I have observed this type of behavior change on numerous occasions.

Another way to look at what we did was to capture what the successful people in the organization were already doing (in this case, mostly Scrum) and then share it across the organization. This is an effective way to use the CMMI model and agile methods together. These examples demonstrate that agile practices can actually help achieve CMMI goals if the model is used correctly as a framework.

Be aware that one of the keys to this working is to get a CMMI appraisal lead who understands how the CMMI model is supposed to be used and who understands agile methods. For example, you need to have a lead appraiser who isn't afraid to use "alternate practices" as they were intended by the model. Some lead appraisers discourage the use of alternate practices out of fear that it will be perceived as "trying to get out" of a required practice. This can be a legitimate concern, but it shouldn't be used as a reason to change what is already working in a successful agile organization.

Although it is true that some "organizations striving for CMMI assessment" are "unwilling" to employ agile practices, my experience indicates that many organizations involved with DoD work are actually very willing and interested.

I have worked with multiple large U.S. Defense contractors who came to me, already having achieved a CMMI level 3, 4, or 5 rating, but wanting to infuse more agile practices into their work processes. Their motivation has come from multiple sources, including customer interest, developer interest, and a belief that to continue to be successful (and maybe to survive), increased agility will be required.

A question I am hearing frequently from large DoD contractors is, "How can we infuse more agile operations into our existing processes?" In one case, we created an agile developer's guide to help one contractor's projects tailor their traditional company process assets using agile practices.

The DoD acquisition community has also expressed interest through questions like, "How can we get our people trained in agile methods so that we can be more effective at evaluating the agile implementations of our contractors?" In this case, we are training acquisition community personnel and providing evaluations and recommendations on DoD projects that are already moving toward increased agility.

It is certainly true that many "Organizations operating in a high-flux, competitive environment and needing the agile approach for organizational survival can't afford to (or don't see the value in) spending the money or time to create the process superstructure."

But, at the same time, my experience indicates that you don't necessarily need a "process superstructure" to achieve a high CMMI Process Maturity Rating. Let me explain why I believe this and why so many organizations still go the "process superstructure" route.

Attaining a staged CMMI level 3 rating involves 18 process areas and 140 expected specific practices. Today, many CMMI-based process improvement initiatives focus on the stepwise procedural definitions, which can be implied (although erroneously, in my view) from the specific practices. This approach often leads organizations to produce explicit objective evidence for each expected practice. These efforts frequently lead to team frustration and reduced performance. Comments such as "Our processes don't reflect what our people really do" and "Our processes force us to do non-value-added work" are not uncommon.

I believe this approach goes wrong based on the way objective evidence is handled. Objective evidence (OE) is critical to the appraisal method, but more than one type of OE is allowed. To achieve an expected practice, an organization's adherence to the practice must be verified by an appraisal team through what is referred to as direct and indirect artifacts.

Direct artifacts relate to the real intent of the practice. Examples are requirements, design, code, and test artifacts. (Note here that the model doesn't dictate the form these direct artifacts must take nor the order in which they must be produced.) Indirect artifacts are a consequence of performing the practice, but they are not the reason for the practice, nor are they required by the practice. Examples are meeting minutes and status reports.

But this is where I have seen many process improvement initiatives go off track. Too often I have found organizations being driven to perform unnatural and non-value-added behaviors to produce unnecessary indirect OE, rather than maintaining focus on the real target: quality products produced for their customers (direct OE).

My point is that indirect artifacts (e.g., meeting minutes, status reports, and so on) are not required by the model. I am not saying that meeting minutes and status reports are not important, but I am saying that you can use what are called "affirmations" during an appraisal instead. This means you interview people, and they tell you what they do.

For example, the appraisal team can interview an agile team, and the project team members tell the appraisal team what they do. As an example, a response from a team member might be, "We hold daily standup meetings where the team members report status and the team lead listens and then removes obstacles."

The appraisal team takes notes on what they hear (affirmations). These notes are then used as evidence that the organization is meeting the intent of many specific practices in the model. We don't have to force unnatural and non-value-added behavior and extra time-consuming work to create unnecessary artifacts to get ready for a CMMI appraisal. The direct artifacts and the affirmations are sufficient.

So why then do many large DoD contractors spend huge amounts of money to get ready for a CMMI appraisal? One reason I have observed is that many contractors don't trust what their people will say in those interviews. They therefore force the creation of huge amounts of indirect evidence (e.g., meeting minutes, status reports, etc.) to "reduce the risk" that someone might say the wrong thing, which might lead to an unsuccessful appraisal.

There is a lot of negative talk today concerning the effectiveness of the CMMI model. One of my government clients told me that he doesn't care anymore about the CMMI rating because he can't see the difference in performance between an organization that says it is level 5 and one that says it is level 2. My view is that the problem isn't with the model but with the way companies apply it, focusing on the rating rather than its real intent. In my opinion, agile methods can actually help us use the CMMI model as it was intended: for real process improvement.

Our philosophy, with the small agile company that had a business goal to achieve CMMI level 3, was not to drive the team to perform any unnatural or non-value-added acts in preparation for the appraisal. The team's affirmations and the direct artifacts became the real proof: the products they produced, their satisfied customers, and their business success (the business is currently growing 30% a year).


When to Stop Modeling (Reprise)

If you are not doing any modeling at all on your project, there is a strong possibility that you can benefit by thinking carefully about the structure of your domain (the domain model). If you think that your models need to be complete, correct, and true before you start coding, you are almost certainly doing too much.

At the time of writing the first edition of this book, most people modeled either too much (because they thought modeling was good) or not at all (typically because they didn't think about it at all). When agile development became fashionable, overzealous would-be agilists proclaimed that modeling was bad. Once again they gave incorrect advice.

Scott Ambler, in his book Agile Modeling (2002), set out to correct both imbalances. He describes, for those who model too much, lighter and simpler ways to model. For those who don't bother to model at all, he encourages lightweight modeling to assist in thinking and communication.

Scott's approach fits well with the text I wrote on page 36:

"Constructing models is not the purpose of the project. Constructing a model is only interesting as it helps win the game.

"The purpose of the game is to deliver software. Any other activity is secondary. A model, as any communication, is sufficient, as soon as it permits the next person to move on with her work.

"The work products of the team should be measured for sufficiency with respect to communicating with the target group. It does not matter if the models are incomplete, drawn with incorrect syntax, and actually not like the real world if they communicate sufficiently to the recipients . . .

"Some successful project teams built more and fancier models than some unsuccessful teams. From this, many people draw the conclusion that more modeling is better.

"Some successful teams built fewer and sloppier models than some unsuccessful teams. From this, other people draw the conclusion that less modeling is better.

"Neither is a valid conclusion. Modeling serves as part of the team invention and part of their communication. There can be both too much and too little modeling. Scrawling on napkins is sufficient at times; much more detail is needed at other times . . .

"Thinking of software development as a cooperative game that has primary and secondary goals helps you develop insight about how elaborate a model to build or whether to build a model at all."

The difficult part of this advice is that it requires you to periodically stop, think about what you've been doing, and discuss with your colleagues whether to turn the dial labeled modeling up or down. There is no correct answer. There is only an answer that better fits your situation than some other answer.

The answer is complicated by the Shu-Ha-Ri levels of the people in the room. Some Ri-level people will step to the whiteboard or CASE tool and model almost as fast as they can talk. Other Ri-level people will do the modeling in their spoken language, in their tests (using TDD or XXD[35]), and in their code. A Ri-level person's advice to a Shu-level person is likely to be to copy the Ri-level person's style. That may or may not suit the Shu-level person.

[35] XXD is described on page 275.

My suggestion is that Shu-level developers need to model and discuss those models with the most experienced people around them in order to learn how to think about their models. As part of their professional growth, they can learn to model in both UML and in tests. CRC cards and responsibility-based design (Beck 1989, Evans 2003) are very good ways to get started with learning to think about models.
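For readers who have not seen one, a CRC card holds just a class name, its responsibilities, and its collaborators, usually handwritten on an index card. Below is a hypothetical sketch, in Python, of how the content of one card might be carried into a class skeleton; the domain and all the names are invented for illustration.

    class OrderFulfiller:
        """CRC card carried into code (hypothetical example).

        Class:            OrderFulfiller
        Responsibilities: reserve stock for an order; schedule the
                          shipment; report items that cannot be filled
        Collaborators:    Inventory, Shipper
        """

        def __init__(self, inventory, shipper):
            self.inventory = inventory   # collaborator: answers for stock
            self.shipper = shipper       # collaborator: moves the goods

        def fulfill(self, order):
            # Responsibility: reserve stock, collecting what cannot be filled.
            unavailable = [item for item in order.items
                           if not self.inventory.reserve(item)]
            if not unavailable:
                # Responsibility: schedule the shipment via the collaborator.
                self.shipper.schedule(order)
            return unavailable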

The tangible artifact that results from the modeling activity can take many forms: snapshots of whiteboards, UML diagrams in a CASE drawing tool, collections of index cards taped to the wall, videotaped discussions of people drawing and talking at the whiteboard; all these are useful in different situations.[36]

[36] I collected a number of photos and drawings as work samples in Crystal Clear (Cockburn 2005a). There isn't space to reprint all those again here. The interested reader can look through those and other work samples there.

Always remember to pay attention to both goals of the game: deliver this system, and set up for the next round of the game and the next people who will show up for that game.

For comparison, here are comments from three other agile experts on the subject of how much modeling to do:[37]

[37] All three are from http://blogs.compuware.com/cs/blogs/jkern/archive/2006/03/29/Depth_of_Initial_Modeling.aspx.

Jon Kern: "I am subconsciously stomping out risk to a given level so that I can proceed with some degree of confidence. If one part of the model remains pretty lightweight because you have done it before 100 times, so be it. If another part of the model got very detailed due to inherent complexity and the need to explore the depths looking for 'snakes in the grass'well, that just is what it is!"

David Anderson: "I think I'd stophave enough confidencewhen three features fail to add any significant new information to the domain modeli.e. no new classes, associations, cardinality or multiplicity changes. Not significant would be methods or attribute changes. When it looks like the shape will stand up to a lot of abuse without the need for refactoring then you are ready to start building code.

However, in recent years I've done two layers of domain modeling. The first with the marketing folks to help flush out requirements and help them understand the problem space better. I have a lower bar for 'done' in this early stage.

The second phase domain modeling done by architects and senior/lead devs has this very well defined bar that the shape must hold because we don't want to pay a refactoring penalty later."

Paul Oldfield: "I have similar experiences [as Jon]I always know when I know enough about the domain to move on, and when I need to know more. The only time I have a problem is when I can't go back for more information later. For me the key question is, do I know enough to keep me busy until the next time I get contact? If I'm fronting for a team, the question is very similarDo I know enough to keep the team busy until next time?"

See what your team thinks is the appropriate threshold for modeling.

The High-Tech/High-Touch Toolbox

The editors at CrossTalk magazine asked me to write an article about what the agile "toolbox" contains (Cockburn 2004c). This turned out to be doubly interesting because the agile toolbox contains both social and technological tools and both high-tech and high-touch tools.

High-tech tools are those that use subtle or sophisticated technology. These include performance profilers and automated build-test machines. High-touch tools are those that address social and psychological needs of people. These include daily stand-up meetings, pair programming, group modeling sessions, and reflection workshops.

People often find technology depersonalizing, so we tend to see high-tech tools as low-touch and tend to see high-touch tools delivered in low-tech form. Thus, agile developers will often express an interest in using low-tech tools for their social sessions, such as paper flipcharts and index cards ("so we can touch them, move them, and remember the discussion by the stains on the napkins"). More recently, people are using high-bandwidth communication technology to increase the touch factor on distance communications (and indeed, the touch factor is one of the reasons people might prefer watching videotaped discussions over reading paper documents).

Agile teams use an interesting mix of high-tech and high-touch tools and an interesting mix of technological and social tools.

The technological tools are interesting in their diversity. There are the well-known low-tech, high-touch tools: lots of wall space, index cards, sticky notes, flip charts, and whiteboards. There are the well-known high-tech tools for automated testing, configuration management, performance profiling, and development environments.

There are automation tools that had little popularity before the agile wave hit but that have become indispensable for the modern agile team. Foremost among these are continuous integration engines such as CruiseControl. These run the build every twenty or thirty minutes, run the automated unit and acceptance tests, and notify the development team (in all their various locations) about the results of the tests (often by announcing whose code failed). The continuous integration engine holds the team together both socially and across time zones.
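As a rough picture of the mechanism, here is a minimal sketch of that loop in Python. The build and test commands are placeholders for whatever your project uses; a real engine such as CruiseControl adds change detection, build logs, and richer notification.

    import subprocess
    import time

    BUILD_CMD = ["make", "build"]   # placeholder: your build command
    TEST_CMD = ["make", "test"]     # placeholder: your regression suite

    def notify(message):
        # Placeholder: a real engine emails the team, updates a status
        # page, or lights a build lamp at every site.
        print(message)

    def integration_loop(interval_minutes=30):
        while True:
            if subprocess.run(BUILD_CMD).returncode != 0:
                notify("Build FAILED")
            elif subprocess.run(TEST_CMD).returncode != 0:
                notify("Build OK, tests FAILED")
            else:
                notify("Build and tests passed")
            time.sleep(interval_minutes * 60)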

There are the distance-communication tools. These include instant messaging tools with conversation archives, microphones, and web cameras to get "real-person" interaction, speakerphones for daily stand-up meetings across time zones, technical documentation on internal wiki sites, and PC-connected whiteboards to keep records of design discussions. All these serve to address the absence of frequent personal encounters between the people on distributed teams.

Most interesting, however, is the specific inclusion by agile teams of social tools. The top social tools are to colocate the team and attack problems in workshop sessions. Other social tools revolve around increasing the tolerance or amicability of people toward each other, giving them a chance to alternate high-pressure work with decompression periods, and allowing them to feel good about their work and their contributions.

The following is the list of tools from the article:

  • Social roles such as coach, facilitator, and Scrum Master.

  • Colocated teams, for fast communication and also the ability to learn about each other.

  • Personal interaction, within and across specialties.

  • Facilitated workshop sessions.

  • Daily stand-up status meetings.

  • Retrospectives and reflection activities.

  • Assisted learning provided by lunch-and-learn sessions, pair programming sessions, and having a coach on the project.

  • Pair programming, to provide peer pressure as well as camaraderie, better pride in work, and energy load balancing.

  • A shared kitchen.

  • Toys, to allow humor and reduce stress.

  • Celebrations of success and acknowledgment of defeat.

  • Gold cards issued at an established rate, to allow programmers to investigate other technical topics for a day or two.

  • Off-work get-togethers, typically a Friday evening visit to a nearby pub, wine-and-cheese party, even volleyball, foosball, or Doom competitions.

  • Posting information radiators in unusual places to attract attention (I once saw a photo of the number of open defects posted in the bathroom!).

The interesting thing is probably less that agile teams do these things than that they consider them essential tools.

The Center of Agile

What would it mean to get "closer to the center" of agile development? This is a question I often hear debated. The trouble is, there is no center.[38]

[38] Rather like certain old European cities. The center of the city is the "old town." But there is no center to the old town. It is just a maze of twisty little passages, all different. Once you are there, it doesn't make any sense to ask, "How do I get closer to the center from here?"

Most development teams are so far away from doing agile development that it is easy to give them a meaningful "more agile" direction:

  • Decentralize decision making

  • Integrate the code more often

  • Shorten the delivery cycles

  • Increase the number of tests written by the programmers

  • Automate those tests

  • Get real users to visit the project

  • Reflect

The problem arises only when the team is doing relatively well and asks, "What is more agile?", thinking of course that more agile equates with better.

When we wrote the agile manifesto, we were seeking the common points among people who have quite different preferences and priorities.

  • For some, testing was important.

  • For some, self-organization was important.

  • For some, frequent deliveries were important.

  • For some, handling late-breaking requirements changes was important.

These are fine things to strive for, and they are typically missing on the average team. After a team gets a threshold amount of each in place, however, it is not clear which is most important, which is the "center" of the word agile. None of those is most important across all project teams. Each team must decide what is most important for it to address at this time. Eventually, there's no point in asking, "Where is more agile from here?"

How Agile Are You?

Agile developers dread being asked how to evaluate how "agile" a project team is. Partly this is due to fear of management-introduced metrics. Partly this comes from an understanding that there is no "center" to try to hit and therefore no increasingly high numeric score to target.

Scoring Agility Points

Much to my chagrin and frustration, and over my strenuous objections, the (really very experienced) Salt Lake City agile round table group decided to use the table in this section to evaluate the projects they were involved in. After they had assessed themselves for each property in the table, they asked: "But where's the final score?"

To my further chagrin and frustration, and over my further strenuous objections, they found ways to score each property (and argued over them to try to raise their scores). After they had added up all the numbers and declared a winner, one of them said, "But this is no good; we got a high agility score, and we haven't delivered any software in two years!"

Which, as far as I'm concerned, proves my point.[39]


[39] See also the somewhat tongue-in-cheek blog entry: http://alistair.cockburn.us/index.php/Agile_machismo_points.

Nonetheless, a manager, the sponsoring executive, or the team itself will want to track how the team, or different teams, are doing with the move into agile territory. Scott Ambler wrote a short test that provides a good starting point (Ambler 2005). He wrote:

"My experience is that if the team can't immediately fulfill the first two criteria then there's a 95% chance that they're not agile.

"They

  1. can introduce you to their stakeholders, who actively participate on the project.

  2. can show, and run, their regression test suite.

  3. produce working software on a regular basis.

  4. write high-quality code that is maintained under CM control.

  5. welcome and respond to changing requirements.

  6. take responsibility for failures as well as successes.

  7. automate the drudgery out of their work."

The following is a more detailed way to examine how you are doing. It occurred to me during a visit to a company that had a number of projects trying out different aspects of the agile approach. We characterized each project by how many people were involved and the project's iteration length. Because iteration length is secondary to frequency of delivery (see "Iterations Must be Short," page 244), we included iteration length only to characterize the projects, not to evaluate them.

We filled out this table, marking how frequently the deliveries, the reflection workshops, the viewings, and the integrations had happened. Osmotic Communication got a "Yes" if the people were in the same room; otherwise we wrote down the team's spread. Personal safety[40] got "No," "1/2," or "Yes," depending on a subjective rating. (If you don't have personal safety on your projects, can you post a "No"?)

[40] From Crystal Clear, p. 29: "Personal Safety is being able to speak when something is bothering you without fear of reprisal. It may involve telling the manager that the schedule is unrealistic, a colleague that her design needs improvement, or even letting a colleague know that she needs to take a shower more often. Personal Safety is important because with it, the team can discover and repair its weaknesses. Without it, people won't speak up, and the weaknesses will continue to damage the team."

For focus, we put down "priorities" if that was achieved, "time" if that was achieved, or "Yes" if both were achieved. For automated testing, we considered unit and acceptance testing separately (none had automated acceptance testing).

We marked some of the times with "!" if they seemed out of range. Seeing a "!" in one place leads you to look for a compensating mechanism in another part of the chart. For example, both SOL and EVA had only one delivery after a year. Both received "!" marks by that delivery frequency. However, SOL has a compensating mechanism, a user viewing every week. EVA, which had no compensating mechanism, did all the agile practices except getting feedback from real users (missing both deliveries and user viewings). When that company finally produced a product, the product was rejected by the marketplace, and the company went bankrupt (this should highlight the importance of "Easy Access to Expert Users").

Use the table in Figure 5.1-19 to get all of the properties above the drop-off (see "Sweet Spots and the Drop-Off," p. 290), but don't try for a summary score. Instead, use the table to decide what to work on next. I would, of course, post the chart as an information radiator and update it monthly.

Figure 5.1-19. A way to track projects' use of the agile ideas.

Project                         | EBU        | SOL        | EVA      | GS         | BNI          | THT       | Ideal
--------------------------------|------------|------------|----------|------------|--------------|-----------|-----------
# People                        | 25         | 25         | 16       | 6          | 3            | 2         | <30
Frequent Delivery               | 2 weeks    | !1 year!   | !1 year! | 1 month    | 2 months     | 4 months! | <3 months
User Viewings                   | 2 weeks    | 1 week     | !1 year! | 1 month    | 1/iteration  | 1 month   | <1 month
Reflection Workshops            | 2 weeks    | 1 month    | 3 weeks  | No         | No           | 1 month   | <1 month
Osmotic Communication           | 1 floor    | 1 floor    | Yes      | Yes        | 1 floor      | Yes       | Yes
Personal Safety                 | 1/2        | Yes        | 1/2      | Yes        | Yes          | Yes       | Yes
Focus (priorities, time)        | priorities | Yes        | Yes      | priorities | priorities   | Yes       | Yes
Easy Access to Expert Users     | No         | 1 day/week | No       | No         | voice, email | Yes       | Yes
Configuration Management        | Yes        | Yes        | Yes      | Yes        | Yes          | Yes       | Yes
Automated Testing               | No         | No         | unit     | No         | No           | unit      | Yes
Frequent Integration            | 3/week     | 3/week     | daily    | 1/week     | monthly      | 1/day     | continuous
Collaboration across Boundaries | Yes        | Yes        | No       | No         | 1/2          | Yes       | Yes
Iteration length                | 2 weeks    | 1 month    | 3 weeks  | 1 month    | 2 months     | 1 month   | <2 months
Exploratory 360°                | No         | Yes        | No       | No         | No           | Yes       | Yes
Early Victory                   | Yes        | Yes        | Yes      | No         | No           | Yes       | Yes
Walking Skeleton                | Yes        | Yes        | Yes      | No         | No           | Yes       | Yes
Incremental Rearchitecture      | Yes        | Yes        | Yes      | No         | No           | Yes       | Yes
Information Radiators           | Yes        | Yes        | Yes      | Yes        | No           | Yes       | Yes
Pair Programming                | No         | No         | Yes      | No         | No           | No        | Maybe
Side-by-Side Programming        | No         | No         | No       | No         | No           | No        | Maybe
Test-First Development          | No         | No         | Yes      | No         | No           | No        | Maybe
Blitz Planning                  | Yes        | Yes        | Yes      | No         | No           | Yes       | Yes
Daily Stand-up Meeting          | Yes        | Yes        | Yes      | Yes        | Yes          | No        | Yes
Agile Interaction Design        | No         | No         | No       | No         | No           | No        | Yes
Burn Charts                     | No         | Yes        | No       | No         | No           | Yes       | Yes
Dynamic Priority Lists          | Yes        | No         | Yes      | No         | 1/2          | Yes       | Maybe


Introducing Agile

The question "How do I introduce agile software development into my organization?" has no tidy answer. Here are three ways to rephrase the question to highlight the different unanswerable questions this one contains.

  • "How do I get my company to change its corporate culture?"

  • "How do I get someone else to change their way of working to match mine?"

  • "How do I convince my boss that his way of doing things is wrong and mine is right?"

Eric Olafson, the CEO of Tomax (whose move to agile development is described on page 316), once offered an excellent insight into the problem. He said:

"You don't transition your projects to agile projects; you have to transform your people to think in agile ways."

Transitioning projects you can put on a schedule. Transforming people you can't. Many executives since then have echoed Eric's insight.

I wish I could write that in the last five years I have seen enough organizations move to agile development to formulate a good success strategy. Unfortunately, I haven't, and have instead become wary of large organizational initiatives. My own skeptical phrasing is:

"For any organization-wide initiative X, X gets rejected by the organizational antibodies."[41]

[41] Thanks to Ron Holiday for this outstanding concept ("organizational antibodies")!

Agile development is just one value we can plug in for X.

The failure often comes from a backlash effect that shows up after one or more projects have delivered successfully, and sometimes even because the project(s) delivered successfully. Apparently, simply delivering projects successfully is not sufficient to make a change stick.[42] For more discussion on organizational change, see "Business as a Cooperative Game" on page 11.

[42] This is a good time to reread Gerald Weinberg's The Psychology of Computer Programming (1998). His writings contain many stories in this vein.

Can we say anything about introducing agile development into an organization? The book Fearless Change (Manns 2004) contains some observations from change groups. Here are some starter dos and don'ts from other teams. Bear in mind that many of these ideas help you get started, but none is adequate to make the change stick.

Don't make an executive proclamation that all development will be done using the agile approach and then purchase training for everyone in the company. This approach will burn up a lot of money quickly and fuel the resistance movement from the beginning.

Do seek support at both the highest and lowest levels of the organization. The programmers, at least, have to want the change, and they need the input and support of some of the executive team. Either group without the other will become isolated and unable to proceed.

Do get one project team working successfully.

Do staff that team with at least two people who are competent, respected, and can pull off the agile approach.

Don't shower that team with an undue amount of support and attention. The other teams will just feel left out and get annoyed at that team.

Do get experts in to help. Place them in the teams, not outside as advisors.

Do consider the users and the executive sponsors part of the team ("There's only us").

Do reflect on what you are doing each month and change your working conventions to get better. If you are not changing something, or trying out something new, each month for the first few months, then you probably aren't getting a taste of it yet.

Do deliver real software to real users quarterly or, better yet, monthly.

Do prepare for a backlash to come from somewhere you can't see.

Finally, one word of comfort: The higher-level executives probably want what the agile approach offers, which is

"Early and regular delivery of business value."

I have not yet met an executive who finds that a troubling proposition. They usually agree that the development process needs lightening, testing needs to be beefed up, and users need to be more involved. The increased visibility given by agile status reports is a comfort to them. They appreciate being able to make mid-course corrections. Early and regular delivery of business value helps the bottom line.

I include in this chapter two experience reports on introducing agile from the top: the following report, contributed by Bud Phillips, VP of Capital One, and the one on page 316, by Eric Olafson, CEO of Tomax. Notable in Bud's report is that they were introducing agile from the operations side of the business, not the IT side. I take special note of Bud's comment that they considered the total value chain before embarking on their journey, and checked each activity for its contribution to the total value chain.

Introducing Agile from the Top

by Bud Phillips

Since 2003, my colleagues and I have been revolutionizing the customer acquisitions process for Capital One. We have been using a fusion of lean thinking, value chains, Six Sigma, and Scrum/agile software development. Most observers tell us we have been successful, but, like the rest of the company, we are constantly striving to improve and we feel as though we are only beginning to deliver value for divisions of Capital One. Along the way, we have learned a great deal:

  1. Most companies do not consider their processes in terms of the value they create; they just think of these processes as work. We know it is vital not only to understand the process of how work is really happening, but also to link these processes to the services that they create and to the value that customers get from these services.

  2. In undertaking an initiative like this you first need to convince your people to "forget" much of what they "know," because it is probably based on a functional and sequential way of seeing work, and that way of seeing work is limited and incomplete. While the scope of this "seeing" varies, ultimately, you need to get people to understand and know how to act on the entire "system of the work."

  3. Scrum, agile, and iterations are great ideas. We believe, however, that if any of these processes is "trapped" inside a narrow organizational definition of "IT development," then they do not work to their fullest potential. We take into consideration the value chain of performers, from marketers to product people to operators to developers to testers, as a way to form powerful agile teams. At Capital One, we call them Integrated Delivery Teams.

  4. If you cannot state the capacity of your processes (i.e., how much of your service you can provide), then it's impossible to really manage the system of the work. If someone says that you can flex capacity up and down, you should be immediately skeptical because it probably is not possible, particularly not within a timeframe that matters.

  5. After we build a team, and it gels, we like to keep it together; it is extraordinarily wasteful to form, disband, and re-form teams. We consider a good team a truly great asset; why throw it away? At Capital One, we believe in the power of teams to get better outcomes. Individuals can spark and catalyze, but teams get results. Perhaps our most successful investment has been in developing our people to work successfully in teams.

  6. From an execution perspective, we always start with thinking about "pull flow": how can we get the work to flow smoothly, at the demand of our customers, with a predictable cadence, for both production processes as well as new development processes? Flow across the system of the work is by far the unifying principle of Lean, Agile, and Six Sigma.

  7. The leadership challenge results from "listening to the work" and then figuring out how to enable performers (who, after all, are the ones creating value) to take pride in their work, to understand it as a system, and to be committed to creating and providing value.

  8. Achieving and maintaining a culture of both radical and continual improvement requires leaders to encourage curiosity (always ask, "Why do we do things this way?") and to build a culture that makes it "safe" to challenge others. Nothing is better than passionate people who want to deliver value, and who understand how their work relates to the system.

  9. Leading at its very essence means being Lean and Agile. If you cannot fully incorporate lean pull flow and agile predictability into how you lead, including how you get information, make decisions, and relate to people, then you better quit trying to effect this change. To be successful, your team must think of the new methodology as principles and a philosophy of execution rather than a set of individual techniques.

  10. Developing and using good data about our services, processes, and performance is key. But, we are careful to use most data for diagnosis, not for people performance management.

  11. Many people do not truly question why work happens in a particular waythey simply accept it and try to do a good job. A few people, when confronted with the questions of why, can really get on board and do something. The majority of people need more time, more explanation, and more persuading.

  12. A sense of joy at trying to achieve perfect pull flow is what keeps us going. You have to be persistent because having the vision of what "could be" is the easy part; doing it is more difficult.

  13. Organizational structure is not likely to be helpful, so you have to get comfortable with seeing past the organization in order to understand how the work really happens, and then you have to be able to work through the organization to get things done. Success in execution can lead to a decision to reorganize to support that execution, but functional thinking about org structure is very hard to overcome!

Action, experimentation, and innovation are our cultural preferences. We want to learn what to do by doing, and continually adjusting from what we learn. To be successful in building a Well Managed Lean Agile Execution Infrastructure requires deep reservoirs of patience, persistence, and resolve.



