Results from the Experience


Billing and Contracts

Based on a recommendation from "Uncle Bob" Martin, we decided to bill by the iteration instead of by the hour to prevent any squabbling over hours worked. This also eased the time tracking burden on the developers and made it easier to swap developers when needed. More subtly, we felt that this helped show the client that we were sharing the risk with them and that we were invested in the project.

We found later that this also acted as a feedback mechanism: overtime is an effective cure for developer overreaching. We developers had nobody to blame but ourselves if our optimism resulted in long hours. Of course, this can be taken only so far; we still believe that we must go back to the customer and reduce scope when we find that the estimates are way off the mark.

The contract that we agreed on with our client did not attempt to nail down the deliverables for the seven-week life of the project; that isn't possible when you're going through iteration planning and changing direction every two weeks. We agreed on the size of the team and that we would augment the contract every two weeks with a list of stories for the next iteration.

Iteration Planning

We anticipated problems in convincing the customers to narrow iteration scope to a set of specific stories that made sense for both business and development. However, this negotiation actually went quite well: our customers were willing to listen to our reasons for wanting to do some stories ahead of others (though we tried to keep such dependencies to a minimum) and for giving high-risk stories a large estimate.

The customers loved the quick turnaround of the planning game. They were afraid that it would take us much longer to become productive than it did and were pleasantly surprised that we were able to identify the important stories as quickly as we did.

Team Size

Our team size for this project was four developers and one tester, whereas we had identified an ideal team size of eight developers and one tester and usually had at least six developers on previous projects. We found that there were some advantages and disadvantages to this smaller team size. On balance, we'd probably still prefer to have six to eight developers in a team.

Pros
  • We have struggled with whether everybody who is part of the development team should be in the planning game: progress sometimes bogs down with too many participants, but a lot of knowledge transfer does take place. This was not a problem here with the smaller team size.

  • Communication and the coordination of schedules were easier.

Cons
  • It was a little more difficult to recover when a developer had to leave for personal reasons.

  • With fewer developers, the overhead of tracking the project made up a more visible and significant share of the total time spent on it.

  • We had a smaller spectrum of skills available among the developers.

  • With a smaller number of stories in each iteration, we sometimes found that one of the assigned tasks was a bottleneck because other tasks depended on it.

Stand-up Meetings

How do you avoid discussing design during stand-ups? We had repeated problems in limiting the length of our daily stand-ups; the issues raised were often valid and needed further amplification.

One device that worked was to write on a whiteboard all the issues that people raised for discussion during the stand-up. Then, after the stand-up was completed, those who needed to be involved in each discussion could split into smaller groups and follow through on the "promise to have a conversation."

Another minor innovation was the use of an egg timer to limit stand-ups. We set the egg timer to 10 or 15 minutes at the start of the meeting. Then the person who was talking at any time during the stand-up had to hold the timer in their hand. This acted as a reminder to the speaker to be brief and a reminder to others in the stand-up to avoid any private conversations while someone else had the floor.

We found that the client wanted a much more detailed status than we had planned to supply; they were used to the traditional spreadsheet view of project tasks, with a percent complete for each task. We compromised with a daily status message that summarized the state of the project and the outstanding issues; most of the work to compile this daily message was done by one person at each stand-up meeting.

Pairing with Client Developers

We knew from day one that we would need to hand off our code to the customer's in-house developers. For much of the project, their technical lead was on-site and worked with us. Four other newly hired developers paired with us for different stretches of the last three weeks.

The objective of accelerating knowledge transfer by means of pairing worked very well. It also helped that XP is a cool new methodology that many developers are eager to experience. But that initial enthusiasm was probably sustained only because these developers were able to make constructive contributions. One developer didn't like the idea of pairing at all and quit when he found out that he would have to pair at least until the code was handed off to the client company.

Our experience with the technical lead was more complex. He was technically very capable and made several suggestions and observations that led us down unexplored paths. However, pairing with him was hampered by the fact that he was playing multiple roles, trying to get the most value for the client's investment while still acting as a developer. Therefore, he tried to steer us toward solving the difficult technical problems that he thought would crop up later, instead of focusing on the stories and tasks immediately at hand.

Finally, at the end of the sixth week, the team captain (our instantiation of the "coach" role) and another developer both had to leave the team unexpectedly. We introduced two new developers to the team and were able to deliver all agreed-on functionality on schedule, which further validated the worth of pairing and shared code ownership.

Story Estimates

During the first iteration, we felt the natural pressure to please the customer and bit off more than we could chew. We found that our attention to tracking and to promptly creating the acceptance tests, as well as our discipline in sticking to our XP practices, all suffered when we were under the gun. We continued to practice test-first programming but neglected to pair up when we thought the tasks being tackled were simple enough not to need the spotlight of a continuous code review.

As our customers came to trust us more in later iterations, we felt less pressure to prove that we were delivering value by stretching ourselves to the point that our discipline degenerated. We also learned from our failure to meet the optimistic velocity of the first iteration: We reduced our velocity by about 20% from the first to the fourth and last iteration and felt that the quality of our code improved as a result.

Bug Fixes

By the third iteration, a substantial fraction of our time was spent resolving bugs from previous iterations, and we found that we had to take this into account when estimating our velocity. Part of the problem was that our client was new to the idea that we could throw away work when we realized, after it had been completed, that it should be done differently. They saw these issues as bugs; we saw them as new stories in the making.

The "green book" (Planning Extreme Programming) suggests that significant bugs should become stories in future iterations [Beck+2001]. We probably should have tried harder to convince our client that this was the right course to follow there's a natural tendency for the client to feel that bug fixes are "owed" to them.

One approach to bug fixes that worked quite well was to have one pair at the start of each new iteration working on cleaning up significant bugs: those the customer had decided definitely needed immediate attention. At times we had significant dependencies on one or two of the tasks in the new iteration. Especially in that situation, we found it an efficient use of our developer resources to have one pair working on bug fixes while these foundational tasks were tackled by others.

Overall, we were not satisfied with our handling of bug fixes during this project; we wanted to convert them into stories, but our customers always felt that they were "owed" bug fixes as part of the previous iteration, above and beyond our work on new stories.

Acceptance Testing

One thing we realized was the importance of a full-time tester in keeping the developers honest. When we did not have a full-time tester for the first iteration, we got 90% of every story done, which made the client very unhappy during acceptance tests: they perceived that everything was broken. We also found that our tester provided an impartial source of feedback to the developers on their progress.

We made one modification to our process specifically to facilitate testing. Our tester felt burdened by having to ask the developers to interrupt their paired development tasks whenever she needed a significant chunk of their time. So we decided to assign the role of test support to a specific developer for each week of the project.

Because we had multiple customer representatives, we found that a couple of times one customer helped create the acceptance tests, but a different one went through them with our tester at the end of the iteration. This caused many delays for explanation during acceptance testing and some confusion over whether the acceptance tests had been correctly specified. We concluded that in the future we would push strongly for one customer to both help create and approve the acceptance tests.

Unit Testing

Our unit tests proved invaluable in ensuring the quality of our code; we found on numerous occasions that refactorings in one area of the code caused side effects elsewhere, which we caught with our unit tests. Because we relied so heavily on the unit tests, we ran them constantly, and their running time grew to almost two minutes. Our first response was to do some refactoring to reduce this time. We then made use of the flexibility of Apache Ant's XML configuration to sometimes run only a specified subset of all the unit tests.
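
A build-file fragment along these lines illustrates the idea. This is a sketch rather than our actual build file; the property, path, and classpath names are illustrative. A wildcard default runs every test, and overriding the property on the command line selects a subset:

    <!-- Defaults to all tests; override with, e.g.,
         ant test -Dtest.includes=**/billing/*Test.java -->
    <property name="test.includes" value="**/*Test.java"/>

    <target name="test">
      <junit printsummary="yes" haltonfailure="no">
        <classpath refid="test.classpath"/>
        <formatter type="plain"/>
        <batchtest todir="${test.report.dir}">
          <fileset dir="${test.src.dir}" includes="${test.includes}"/>
        </batchtest>
      </junit>
    </target>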

During one iteration, we implemented a story that required a multithreaded producer-consumer engine that was difficult to test using JUnit. We created pluggable stubs for each module of the engine so we could test any one module while simulating the functionality of the other modules.
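
As a minimal sketch of that arrangement (the interface and class names here are illustrative, not our actual ones), the engine depends only on small interfaces, so a JUnit test can plug in stubs and drive a single module synchronously, without timing races:

    // Each type would live in its own source file.
    public interface MessageSource {
        String next() throws InterruptedException; // null signals "done"
    }

    public interface MessageSink {
        void accept(String message);
    }

    // The engine module under test: moves items from source to sink.
    public class Engine implements Runnable {
        private final MessageSource source;
        private final MessageSink sink;

        public Engine(MessageSource source, MessageSink sink) {
            this.source = source;
            this.sink = sink;
        }

        public void run() {
            try {
                String message;
                while ((message = source.next()) != null) {
                    sink.accept(message);
                }
            } catch (InterruptedException e) {
                // interruption means shutdown
            }
        }
    }

    // The test stubs both neighbors and runs the engine on the test
    // thread, so only this module's logic is exercised.
    public class EngineTest extends junit.framework.TestCase {
        public void testDrainsSourceIntoSink() {
            final java.util.List seen = new java.util.ArrayList();
            MessageSource stubSource = new MessageSource() {
                private int count = 0;
                public String next() {
                    return count < 3 ? "msg" + (count++) : null;
                }
            };
            MessageSink stubSink = new MessageSink() {
                public void accept(String message) { seen.add(message); }
            };
            new Engine(stubSource, stubSink).run();
            assertEquals(3, seen.size());
            assertEquals("msg0", seen.get(0));
        }
    }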

Metrics

As a means of encouraging the creation of unit tests, we wrote a simple script that traversed our source tree daily and sent e-mail with details of unit tests written, organized by package and class.
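
The script itself is not reproduced here; a minimal sketch of the same idea in Java, counting test classes per package (the e-mail step is omitted), might look like this:

    import java.io.File;

    // Walks a source tree and reports how many *Test.java files each
    // package contains; in practice, a job like this ran daily and
    // mailed its output to the team.
    public class TestCensus {
        public static void main(String[] args) {
            File root = new File(args.length > 0 ? args[0] : "src");
            scan(root, "");
        }

        private static void scan(File dir, String pkg) {
            File[] entries = dir.listFiles();
            if (entries == null) return;
            int testClasses = 0;
            for (int i = 0; i < entries.length; i++) {
                File entry = entries[i];
                if (entry.isDirectory()) {
                    String child = pkg.length() == 0
                            ? entry.getName()
                            : pkg + "." + entry.getName();
                    scan(entry, child);
                } else if (entry.getName().endsWith("Test.java")) {
                    testClasses++;
                }
            }
            if (testClasses > 0) {
                System.out.println(pkg + ": " + testClasses + " test classes");
            }
        }
    }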

We also used JavaNCSS (distributed under the GNU GPL), which generates global, class, and function-level metrics for the quality of code. We automatically generated these metrics daily and wrote the results to the project wiki to help us determine what parts of the code were ripe (smelly?) for refactoring and whether test coverage was adequate.

In addition to these automatically generated metrics, our tester manually created a graph of the acceptance tests, showing acceptance tests written, run, passed, and failed. This information was posted on the development room whiteboard along with the tasks and status of the project. A snapshot of the current state of the project was thus available on the whiteboard, while a more detailed view could be found on our project wiki.

The Grade Card

After each iteration, we graded ourselves on the main XP practices and a few other aspects of the process that we felt were important (tester-to-developer communication, clarity of the stories, accuracy of estimates). The scores showed us the areas where the development team needed to focus and provided some useful and immediate feedback on the process. They served as a check against our sacrificing the long-term benefits of sticking with the process for the short-term benefits of churning out more code. We found that our scores improved substantially with each iteration, with the lowest grade in the final iteration being a B. We made these grade cards (see Table 31.1 for an example) available publicly on the wiki, although we did not invite the customer into the grading process. We will consider that step for future projects, at least after a couple of iterations, when some trust has developed between developers and clients.

Object-Oriented Databases

We were fortunate to be able to use an object-oriented database management system (OODBMS), rather than a traditional relational database management system (RDBMS), which enabled us to treat the domain model as identical to the persistence model and therefore to be agile when refactoring the code. It's much more difficult to refactor the model when the data representation cannot be changed at will.
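
As a sketch of what this buys you: the API below is the standard JDO interface (javax.jdo), which several object databases implement; it is illustrative rather than the exact product we used, and the class names are hypothetical. The domain class itself is the stored model, so renaming a field or extracting a class during a refactoring requires no relational schema or O/R mapping migration:

    import javax.jdo.PersistenceManager;

    // A plain domain class: this *is* the persistence model.
    public class Trade {
        private String symbol;
        private int quantity;

        public Trade(String symbol, int quantity) {
            this.symbol = symbol;
            this.quantity = quantity;
        }
    }

    // Persisting the object graph directly; no mapping layer sits
    // between the domain model and what is stored.
    public class TradeStore {
        private final PersistenceManager pm;

        public TradeStore(PersistenceManager pm) { this.pm = pm; }

        public void save(Trade trade) {
            pm.currentTransaction().begin();
            pm.makePersistent(trade);
            pm.currentTransaction().commit();
        }
    }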

Delivery Day

One aspect of XP that we had to rethink in our circumstances was the amount of documentation that was necessary. Because the client developers would be responsible for maintenance and enhancements, we needed more documentation than for an in-house project. So we put together an overview of the design along with some automatically generated UML diagrams and made sure that all our classes had Javadoc comments. We also added some installation and release documents and a developer FAQ. For most outsourcing situations, this level of documentation is probably necessary.

Table 31.1. Grade Card for the XP Project

    Category                                  Grade
    -----------------------------------------------
    Met customer expectations                 B
    Customer-developer communication          A
    Tester-developer communication            B
    Clarity of stories (planning game)        A
    Accuracy of estimates/risk assessment     B
    Design simplicity                         A
    Tracking                                  A
    Unit tests                                B
    Stand-up meetings                         B+
    Pairing                                   B
    Refactoring                               A
    Build handoff to test                     D

The final iteration was completed on Wednesday of the seventh week, with the contract concluded on Friday. As delivery day approached, we noticed how different this was compared with past experiences at other companies. The developers all left before sunset on the day before D-day, and the atmosphere during the last couple of days was relaxed and cordial, even celebratory. For our final handoff to the client, we followed Alistair Cockburn's recommendation to videotape a design discussion. We pushed all our code and documentation over to their CVS repository, burned a CD containing the code and documentation, and celebrated with beer and foosball.

All in all, we were pleasantly surprised with the way our customers (both the developers and the businesspeople) embraced XP during this project. They had some knowledge of XP when we started and were eager to learn more. The CTO had previously worked on a similar application and had some definite ideas on how to handle the complexities of the project, but was still receptive to an incremental and iterative approach. We found that XP worked well when dealing with our technically sophisticated customers. Rather than going around in circles when we disagreed, we could prove (or disprove) our design ideas using the concrete feedback of our code.


