Results from the Experience


Hypothesis 1

The first challenge I address is the difficulty we encountered because no holistic picture of the application was available to everyone during the development process. Although much planning and analysis had been performed before our project switched to an XP approach, once that approach was adopted, our primary roadmap was the set of story cards we developed and arranged in a large spreadsheet. At best, a more holistic picture of the application existed in the minds of those analysts on our team with extensive experience in the leasing business. But for those without such experience, which included most members of the team, the application appeared as a collection of independent parts, with no clear image of how they connected into a whole. We refrained from developing any further up-front documentation or a graphic of the whole application in order to reap the purported benefit of XP's "agility." The story cards, we thought, would be enough of a guide.

But the absence of a readily available, holistic picture in this case contributed to a number of problems. Because leasing has exceptionally complex business logic, which is often counterintuitive, team members without direct business experience in leasing tended not to understand how their particular stories fit in with or depended on other stories in the whole scheme of things. Thus, when they were charged with implementing new stories, they often left out tests that would verify proper functioning of the newer stories in conjunction with the older ones previously developed. As the iterations accumulated and the complexity of the system grew, even our analysts with extensive leasing experience were frustrated: they no longer knew the connections among the parts well enough to write tests adequate to verify completed stories. So, as you might expect, new cards were often "finished" in the eyes of their owners when, in fact, they were not.
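The chapter contains no code, but a minimal sketch may make the missing kind of test concrete. Assuming a Java/JUnit environment (the project's actual stack is not described here), and with Lease and BillingSchedule as purely hypothetical stand-ins for our domain classes, the second test below is the sort of cross-story check that card owners tended to omit: it verifies a new "terminate" story against the older billing story rather than in isolation.

    import junit.framework.TestCase;

    // Hypothetical, highly simplified domain classes, for illustration only.
    class BillingSchedule {
        private int remainingPayments;
        BillingSchedule(int payments) { remainingPayments = payments; }
        int getRemainingPayments() { return remainingPayments; }
        void bill() { if (remainingPayments > 0) remainingPayments--; }
        void cancelRemaining() { remainingPayments = 0; }
    }

    class Lease {
        private final BillingSchedule schedule;
        private boolean terminated = false;
        Lease(BillingSchedule schedule) { this.schedule = schedule; }
        BillingSchedule getSchedule() { return schedule; }
        boolean isTerminated() { return terminated; }
        // The "new story": terminating a lease.
        void terminate() {
            terminated = true;
            schedule.cancelRemaining(); // interaction with the older billing story
        }
    }

    public class LeaseTerminationTest extends TestCase {
        // Verifies the new story in isolation, the test owners rarely missed.
        public void testTerminateMarksLeaseTerminated() {
            Lease lease = new Lease(new BillingSchedule(12));
            lease.terminate();
            assertTrue(lease.isTerminated());
        }

        // Verifies the new story against previously built functionality,
        // the kind of cross-story test that was easy to leave out.
        public void testTerminateCancelsRemainingBilling() {
            Lease lease = new Lease(new BillingSchedule(12));
            lease.getSchedule().bill(); // one payment already billed
            lease.terminate();
            assertEquals(0, lease.getSchedule().getRemainingPayments());
        }
    }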

When this became evident, we wrote new story cards to cover what we had missed. But there was a general feeling of frustration among card owners when they discovered they had missed some dependency and, at times, little inclination to acknowledge that their card actually included the dependencies in question. A common response from many team members to the difficult task of developing and testing a new bit of functionality in all its entanglements with existing functionality was to quip, "Why don't we just use the XP approach?" The implication is that XP looks at the whole as a collection of atomistic stories, stories that can be treated as independent of one another. Although that may tend to be the case in some simpler applications, it was hardly the case in this one.

As the application grew in size and complexity with each new iteration, the amount of "analysis" required to compose a sufficient list of tests for each new story increased tremendously. Developers and analysts alike tended to know the local functionality of a limited number of stories or sections of code, but hardly anyone had a comfortable grasp of how it was all connected, despite the extensive lines of communication among team members. For new members joining the team after the beginning of the project, the effort to understand the significance of the part without knowing the whole was especially daunting.

What was lacking, we think, was some easily accessible "picture" of the whole so that both developers and analysts adopting a new story could readily "see" what sort of connections with other parts they would have to test to complete the card. This need not have been a static picture produced before the first line of code was written. But at the very least, it would have to have been available in an updated, cumulative form at the beginning of each iteration so that story owners could take their bearings and reliably estimate the scope of the stories and the requisite tests. No one could conceive of a usable "metaphor" to help "everyone on the project understand the basic elements and their relationships" [Beck2000], because leasing is too complex a business to be productively communicated through a simpler image. A more traditional picture or graphic was needed and would have helped us tremendously, but producing and maintaining it would have forced us to divert resources to the sort of design overhead that XP deemphasizes for the sake of agility.

To be sure, some of the difficulties here are the fault of the nature and size of our project. Less complex business domains are more easily described with metaphoric images, and less complex applications will have many fewer dependencies among stories. Nonetheless, we found ourselves too easily lulled into the belief that most stories are independent of one another [Beck+2001] and that we could find a useful metaphor for a very complex system. If XP is to be used for developing complex applications, we suggest some mechanism for constantly reminding everyone that functionality divided into distinct stories does not imply the independence of the stories. This is where a single, integrated "picture" needs to supplement a spreadsheet of separable story cards.

Hypothesis 2

Related to this problem was the difficulty we encountered in dividing the whole into story cards. To borrow an image from Plato, as well as from your own kitchen, you would think the division of the whole application into its story parts would be like cutting a whole chicken into its familiar pieces. Even though the joints may not be immediately visible, there are places to do the dividing that are more appropriate than others. But if you are cutting up the chicken for the first time, as I once did in my parents' kitchen, you may encounter great difficulty because you do not know where the joints are.

It took us a good deal of practice to find the joints, particularly because we had many different cues for locating them. One cue was to divide into cards in ways that had worked well in earlier iterations. Although this tended to work early on, it became unreliable as the application grew in size and complexity. The sort of functionality that was an independent chunk in iteration 4 had become, by iteration 14, intertwined with several other chunks completed in the meantime. Thus, on more than one occasion, we discovered that we had missed testing one or more interactions because the list of story cards by itself included no built-in guide to interactions among past and present cards. Here again, a regularly updated picture of the whole application would have helped tremendously. Without one, we found ourselves devoting much time to continually reviewing and rewriting story cards to try to keep track of new functional interactions in the card texts. We eventually devoted one full-time analyst to developing and managing the story card list with our customer.

Another guide we used to distinguish story cards was to divide by bits of functionality that would produce some new, visible business value in the build we delivered to the customer at the end of each iteration. The goal was to ensure that the customer could see ongoing progress. But dividing at these "joints" often turned out badly. For example, we played one story card early on entitled "Terminate lease billing schedule." At first glance, this distinct mechanism seemed like a perfect candidate for a story card because it encompassed a clear function that the customer could understand and actually use when finished. But as we began to implement it in a one-month iteration, we discovered that our desire to deliver the whole function at once had led us to badly underestimate the time needed for the card. Luckily, the analyst for the card had devised her functional tests in such a way that we were able to divide the card into smaller cards along the lines of those tests. Thus, although the customer did not have the whole of the termination functionality available for use after one iteration, some testable part of it was finished. Over the next two iterations, the other, more granular cards were finished. In the process, however, we learned our lesson: do not divide automatically at joints of fully usable functionality. But this meant that we had to prepare our customer to be patient with partially finished business functions after certain iterations were completed.

From this experience, we perceived a precarious tension between two goals of XP. On the one hand, iterative development promotes the impression that the customer receives some level of a usable application at frequent intervals and can, as a result, decide to terminate a project at many stages and still walk away with an application having business value. On the other hand, the division of development into iterative chunks often makes it impossible to deliver functionality that the customer can actually use for his business at the end of any particular iteration. In the case of our "Terminate lease billing schedule" example, the chunk we delivered at the end of the first iteration could be used and tested, but from a business perspective, it was valueless without the other chunks that were completed in subsequent iterations.

In sum, dividing story cards well means not following any particular guide too rigidly. To return to the example of the joints of the chicken, if you insist on quartering a chicken, some of the cuts may be easy because they happen to fall at natural joints, but that last cut through the breastbone will create much additional toil. We found that trying to adhere too rigidly to dividing cards by deliverable functionality or by past experience often created more toil rather than less.

Hypothesis 3

Our customer/partner devoted a team of its employees full-time to this project, but they were not on-site with our development team. This fact contributed to the expected problems in the efficiency of communication between customer and developer, but these were not the most difficult challenges that confronted us in this area. Because of the breadth and complexity of the application, it was impossible for us to have the XP ideal of a customer who was also an end user. In a typical leasing business, the person responsible for managing accounts receivable for its customers is not the person who handles end-of-lease transactions or the person who books the original lease. The application we were building, however, required a "customer" who was simultaneously familiar with all these dimensions of the business and with how all of them needed to work together. Moreover, our customer's business was itself divided into multiple leasing divisions, no two of which had identical business processes or requirements. To top it off, the way our customer did business often deviated from typical practices in the leasing industry as a whole.

This meant that our customer was in fact several distinct and different "customers," each having peculiar requirements that were not always compatible with one another. To be sure, much of this was due to the peculiar circumstance of our trying to build a custom product for one company and a generic product for an entire industry at the same time. Nonetheless, we suspect that more often than not, typical customers for larger applications will be more multifaceted than the ideal customer who speaks with a single voice.

To handle the competing "voices" among our various customers, we instituted "issue" cards in addition to development cards. An issue card stated the particular business function that needed to be addressed, and a team of business domain experts from our team and the customer's met periodically to resolve how the functionality should be developed. When agreement was finally reached, the issue card was turned into the appropriate story cards. Here again, though, the complexity of our project added another weight that reduced XP's agility.

Hypothesis 4

That our customer, despite its multifaceted nature, should determine the functionality of the system we built was never at issue, and the customer team felt comfortable in that role. But when it came time for that team to develop the set of functional tests that would prove the completion of the functionality they had requested, their comfort level was much lower. Part of this, we think, was due to the view prevalent among nontechnical professionals that computer applications are complex and difficult, so it is fine to use them but scary to peek under the covers at all. We made an extraordinary effort to convince our customer's team that they needed not just to specify the functionality to build, but also to develop the tests that would verify its completion. They eventually did so, but only after having relied for a long time on many, many samples from our own analysts. They were simply not used to the analytic process a typical software analyst goes through when figuring out how many tests, covering which functions, would constitute a complete verification of a new card.

There was a clear difference, in our minds, between devising a business scenario and devising a functional test. In the former case, you make sure that, say, when you dispose of a particular asset from inventory, the correct accounting transactions are performed. In the latter case, you verify everything tested in the business scenario, but you also verify the proper functioning of all the negative and atypical actions that could occur in the process: widget actions on screens, behind-the-scenes dependencies, and so on. Our customer team did not need much coaching to provide us with the business scenarios, but the functional test itself, in all its detail, required much more training with the customer than we had anticipated. In this respect, we think the oft-described ideal of the XP customer working directly with the developer, although surely realized in some cases, is not typical and thus underestimates the need for the traditional analyst intermediary.
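To make the distinction concrete (the chapter itself shows no code), here is a hedged sketch in Java with JUnit; the Inventory and Ledger classes and their methods are invented for this example, not taken from our application. The first test is the business scenario our customer could readily supply; the other two are the negative cases a functional test must add.

    import junit.framework.TestCase;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical, simplified classes standing in for real inventory and accounting.
    class Ledger {
        private final List entries = new ArrayList();
        void post(String entry) { entries.add(entry); }
        int size() { return entries.size(); }
    }

    class Inventory {
        private final List assets = new ArrayList();
        private final Ledger ledger;
        Inventory(Ledger ledger) { this.ledger = ledger; }
        void add(String assetId) { assets.add(assetId); }
        void dispose(String assetId) {
            if (!assets.remove(assetId)) {
                throw new IllegalArgumentException("unknown asset: " + assetId);
            }
            ledger.post("credit:asset:" + assetId);   // remove asset from the books
            ledger.post("debit:disposal:" + assetId); // record the disposal
        }
    }

    public class AssetDisposalTest extends TestCase {
        // Business scenario: the happy path the customer described.
        public void testDisposalPostsTheCorrectTransactions() {
            Ledger ledger = new Ledger();
            Inventory inventory = new Inventory(ledger);
            inventory.add("FORKLIFT-7");
            inventory.dispose("FORKLIFT-7");
            assertEquals(2, ledger.size());
        }

        // Functional tests add the negative and atypical cases.
        public void testDisposingAnUnknownAssetIsRejectedAndPostsNothing() {
            Ledger ledger = new Ledger();
            Inventory inventory = new Inventory(ledger);
            try {
                inventory.dispose("FORKLIFT-7");
                fail("expected disposal of an unknown asset to be rejected");
            } catch (IllegalArgumentException expected) {
                assertEquals(0, ledger.size());
            }
        }

        public void testDisposingTheSameAssetTwiceIsRejected() {
            Ledger ledger = new Ledger();
            Inventory inventory = new Inventory(ledger);
            inventory.add("FORKLIFT-7");
            inventory.dispose("FORKLIFT-7");
            try {
                inventory.dispose("FORKLIFT-7");
                fail("expected a second disposal to be rejected");
            } catch (IllegalArgumentException expected) {
                assertEquals(2, ledger.size()); // no extra transactions posted
            }
        }
    }

Even in this toy form, the functional test is several times the size of the scenario, which matches the training burden described above.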

Where XP Worked Well

Despite the various ways in which we found XP in need of supplemental procedures and artifacts for our unusual project, we came to appreciate many of its basic practices. Being forced to articulate and develop the functional tests at the beginning of each iteration's development was very healthy. Too often, when functionality is designed first and the tests are devised only much later, after development, a disconnect arises between the original design and the tests. Compressing this interval into a short iteration makes such discrepancies far less likely.

The frequency of deadlines in the iterative process tended to keep us focused and productive on the particular cards we had adopted. We tried to find the optimal iteration length for our project, starting with one-month iterations (which seemed a bit too long) and then changing to two-week iterations (which seemed a bit too short). Our individual focus was also greatly encouraged by the fact that the owners of cards were responsible for estimating those same cards. It was much more difficult for someone to acknowledge that something could not meet a deadline when that confession also implied that the person had estimated the task badly. We soon learned that task estimation and ownership need to extend not just to developers, but to all roles in the project.

The practice of giving individuals ownership of their own problems also made it possible for several individuals to employ their peculiar intelligence to solve many problems. One case in particular stands out. We attempted to implement one card dealing with a very complicated piece of functionality during iteration 6, and it soon became apparent that the original strategy we had developed would be cumbersome and, in the end, perhaps unacceptable. Seeing this, we assigned time to one of our business domain analysts to "think through" the card again and propose an alternative way of implementing the functionality. He figured out a substantially more elegant and efficient way to implement the functionality on the card, something that would not have been possible had we felt obliged to implement exactly what we had been told to do.

This case led us to introduce "analysis" cards in addition to regular development cards. For particularly complex bits of functionality, typically those with many dependencies, we allotted analysis time for someone during an iteration to carefully flesh out all the test cases that would be needed to implement the card in question. During the subsequent iteration, that card was played like any other. The amount of time required to think through a sufficient list of functional tests varied greatly from card to card, so we had to implement provisions like this to accommodate those differences.


