Chapter 26. Looking Back for the Future


We made it through the first iteration! We safely navigated the curves and switchbacks, didn't lose too much time on the hills, and didn't need the runaway truck ramp on the downhill grades. We finished the stories, including the acceptance tests. We still have a few minor defects, but they're being tracked, and we'll be able to fix them in the next iteration. The customer is satisfied with the software. It's time to celebrate!

Some teams don't spend enough time celebrating successes. Even small wins such as completed tasks should be marked with some little reward. Successfully delivering an iteration ought to bring at least a box of doughnuts! We all work better with a pat on the back.

Yes, we're pleased with ourselves and deserve positive reinforcement. Still, there's always room for improvement. Now, while the iteration is fresh in our minds, is a good time to look back and see what we could change to help the next iteration go even more smoothly.

At its heart, quality assurance means learning from our mistakes so we can improve the quality of our product. Hold a retrospective at the end of each iteration to review what you did well, what new ideas worked out great, and what areas still need work. Martin Fowler included this as an XP practice in a presentation at XP Universe 2001. True, the retrospective is not a function of the team's tester; it's an activity that involves the entire team. We include it here to give you another tool to help the team improve with each iteration.

There are a couple of simple ways to conduct a retrospective (let's not call it a post-mortem; that sounds so negative). One is the iteration grade card, shown in Table 26.1.

By storing the grade cards online (simply keeping them on a wiki will do), you can track your progress over time. This feedback helps you, the team's quality watchdog, identify areas where improvements will lead to a higher-quality product.
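If the grade cards live on the wiki in a reasonably structured form, even a few lines of code can summarize them. Here is a rough sketch of one way to average each category's grade across iterations so trends stand out; it's purely illustrative (not something from Lisa's team), and every name in it is made up.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: track grade-card scores per category across
// iterations and average them, ignoring iterations graded N/A (null here).
public class GradeCardHistory {

    private final Map<String, List<Integer>> history =
        new LinkedHashMap<String, List<Integer>>();

    // Record one iteration's grade for a category; pass null for N/A.
    public void record(String category, Integer grade) {
        List<Integer> grades = history.get(category);
        if (grades == null) {
            grades = new ArrayList<Integer>();
            history.put(category, grades);
        }
        grades.add(grade);
    }

    // Average grade for a category over all recorded iterations.
    public double average(String category) {
        List<Integer> grades = history.get(category);
        if (grades == null || grades.isEmpty()) {
            return 0.0;
        }
        int sum = 0, count = 0;
        for (Integer grade : grades) {
            if (grade != null) {
                sum += grade;
                count++;
            }
        }
        return count == 0 ? 0.0 : (double) sum / count;
    }
}

Comparing those averages from one iteration to the next shows at a glance whether, say, pairing or accuracy of estimates is trending up or sliding.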

Another similar, equally useful exercise is to have everyone on the team think about three areas:

  • What we should continue doing

  • What we should start doing

  • What we should stop doing

Table 26.1. The iteration grade card

Category                                      Grade (0-10 or N/A)   Comments
-----------------------------------------------------------------------------
Clarity of stories (planning game)
Customer/developer communication
Tester/developer communication
Standup meetings (developer communication)
Unit tests written (test-first programming)
Pairing
Design simplicity
Refactoring
Tracking
Build handoff to test
Documentation
Accuracy of estimates (risk assessment)
Overtime
Met customer expectations

A common way to do this is to distribute reams of sticky notes to the team and have them write items for each category. Put these all on a whiteboard and start sorting them to see if they fall into any particular patterns. It's a good idea to have people think ahead on this subject, so they're ready when you hold the retrospective meeting.

Here's a sample result from a retrospective meeting held by Lisa's team early in a project. Note that this doesn't have to be a pretty document. We don't even really care about grammar and spelling. This is reproduced pretty much as it was recorded, except that the names of people who took responsibility for working on various areas have been changed.

Retrospective for Release 1, Iteration 1

Continue (What did we do well?)

  • Wrote more tests

  • It works - 3 stories

  • Pairing

  • Adapt to change

  • Better understanding of app, better documentation on stories and acceptance tests

  • Development environment getting better

  • Reduced travel - recognized where it wasn't working

  • Build process has improved

What did we learn?

  • Looking at acceptance tests helps to make functionality clear and drive out "hidden" obstacles …

  • Some acceptance tests written up front to help with iteration planning

  • Communication better - getting team together to hear out vision of overall solutions

  • Initial iterations are bumpy

  • Architecture of overall system

  • Build/integration environment

Start (What can we do better?)

Most important issues

  • How we are distributing work between sites. Not a clear delineation between "front-end" work and "back-end" work. (Joe, Tom, Sue)

    - Remote facility not good for team development, not enough connections, get stuck in cubes

    - Problem with productivity (or feeling of productivity) when in remote facility

  • Keep standups short (Joe, Betty)

    - Online status only

    - Offline pairing and tasks for the day

  • Do travel on an iteration basis:

    - Travel for one iteration (2 weeks) for consistency across iteration

    - Try to get pockets of knowledge spread across the whole team so any work can be done anywhere (Ginger)

  • Need more help automating and writing acceptance tests (responsible parties)

Medium importance

  • Difficult to learn what's going on as new person. Maybe have a team buddy?

  • New people in main office for longer

  • Split up the work better to help get outside team members involved

Others

  • Testing strategy (Lisa, Sue)

    - Test first

    - For GUI work we can use WebART, but you can't "test first" with WebART

    - Need to look into setting up WebART for everyone to use easily

  • Pair switching and using a consistant IDE, or at least a default JBuilder

  • Checklist that points to the right things on wiki (Tom)

  • Teambuilding (Joe)

Standups and the wiki are discussed in Chapter 30.

Here are notes from a much later retrospective in the same project. The format of the report has evolved a bit over time. No people were assigned by name to any areas, because the team had decided to focus as a team on just a few areas, chosen by a vote. Your retrospective format might also evolve over time. The critical point is to work to keep your practices sharp and improve the team's effectiveness.

Retrospective, Release 3, Iteration 2

During our last retrospective, we defined these issues to work on:

  • Knowledge sharing: brown bags [informal lunch meetings] about the application and general topics of interest (e.g., new approaches to acceptance testing used by Project XYZ)

  • Testing more on the integration box and using HTTPUnit for acceptance tests

  • Demo machine

How did we do?

Stop (What slowed our progress?)

  • Code and coding: using multiple source-code control systems, checking in code without testing it on the integration environment, checking in non-compiled code

  • Delivery: waiting for a pair or task to start working and take initiative if you have nothing to do

  • Documentation: using old versions of requirements

  • Iteration planning: starting the iteration with requirements that are not complete

  • Pairing: switching pairs arbitrarily (only when pair has brought something to completion); using dialup for remote collaboration

  • Shared resources: allowing outside parties to control the database

  • Tasks: having uneven distribution of task ownership (each person should own at least one task, preferably a couple)

Start (What can we do better?)

  • Code and coding: write code that communicates; organize code consistently; edgetting [checking out] files before modifying them; refactoring big units of functionality; creating/updating/sharing class diagrams; commenting all code; add history comments in source code

  • Documentation: develop strategy for having current docs available to everyone; update functional specs regularly and distribute to all developers (developers should read and follow)

  • Environments: have a demo machine so work is not slowed; do something about the environment as it takes too long to download latest code (tar file does not always contain latest code)

  • General: reading e-mail and meeting announcements; having project manager more available and involved in local effort

  • Knowledge sharing: haven't done brown bags yet; need brown bags or formal training around testing; when you sense somebody doesn't understand take time to teach them

  • Requirements: talk with requirements analysts about stories not only at beginning but during development to check direction; updating reqs documents continually and distributing to each developer

  • Standups: stagger the standups for the two teams so people on both teams can attend both (don't overlap with new SWAT standup)

  • Tasks: have more diverse tasks to avoid a development bottleneck (and frustration)

  • Teams: have both teams communicate occasionally so there is common direction

  • Testing: ensure acceptance tests are robust and test records are valid; reviewing acceptance tests internally before reviewing with customers; consistently reviewing acceptance tests with users; running the past acceptance tests consistently (including 14.3 and 14.5 WebART tests) as regression testing at least once per iteration; refactoring tests; using naming conventions for acceptance tests and cases; solve the testing problem with Sue's machine; running tests on integration machine every time a build is pushed out

Continue (What did we do well?)

  • Code and coding: refactoring test and production code

  • General: the great work … this team rocks; snacks

  • Pairing: effective pairing; remote pairing

  • Requirements: challenge the requirements analysts with questions about unclear and missing requirements

  • Standups: having them on time; keeping them short

  • Tasks: having story leads

  • Testing: automating as many acceptance tests as possible; getting tests working on integration machine; test-first practices; using HTTPUnit for acceptance testing; enhancing testing practices

Out of these, all of which are important, the team votes on three or four Stop, Start or Continue items to focus on for the next iteration. It doesn't mean we'll forget about the lower-priority items, but we want to see a difference in the top few. For example, for the above retrospective, we might vote to do more refactoring, more regression testing, and make sure the team fully understands the customer's requirements before starting development for the next iteration.
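The notes above mention acceptance testing with HTTPUnit. For readers who haven't used it, here is a rough sketch of what such a test might look like; it is our illustration, and the URL, form field names, and expected page title are invented rather than taken from this project.

import junit.framework.TestCase;

import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebResponse;

// Illustrative HTTPUnit acceptance test; all application details are placeholders.
public class LoginAcceptanceTest extends TestCase {

    public void testSuccessfulLogin() throws Exception {
        WebConversation conversation = new WebConversation();
        WebResponse loginPage =
            conversation.getResponse("http://localhost:8080/app/login");

        // Fill in and submit the first form on the login page
        WebForm form = loginPage.getForms()[0];
        form.setParameter("username", "testuser");
        form.setParameter("password", "secret");
        WebResponse result = form.submit();

        // Acceptance criterion: a successful login lands on the welcome page
        assertEquals("Welcome", result.getTitle());
    }
}

Because a test like this is just a JUnit test, it can run with the rest of the suite, which makes it straightforward to include in the regression runs the notes call for.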

As each iteration proceeds, note anything you want to keep in mind before the next iteration starts. We've found that soliciting ideas about how the iteration went before the meeting speeds up the retrospective.


