Phase: Adapt

Practice Product, Project, and Team Review and Adaptive Action

Objective

The objective of the review and adaptive action practice is to ensure that frequent feedback and high levels of learning occur in multiple project dimensions.

Discussion

There are two main reasons for conducting review and adaptive action sessions at the end of an iteration or milestone. The first reason is obvious: to reflect, learn, and adapt. The second is more subtle: to change pace. Short iterations give a sense of urgency to a project because a lot has to happen in a few weeks. Team members work at a quick pace, not hurrying, but working quickly at a high level of intensity. The end-of-iteration review period should be more relaxed, a brief time (normally a day or so) in which the team reflects on the last iteration and plans ahead for the next. Most teams need this break in intensity periodically to gather their energy for the next iteration. During this reflection period, four types of reviews are useful: product functionality from the customer team's perspective, technical quality from the engineering team's perspective, team performance checkpoints, and a review of overall project status.

Customer Focus Groups

Customer focus group (CFG) sessions demonstrate ongoing versions of the final product to the customer team in order to get periodic feedback on how well the product meets customer requirements. While CFGs are conducted at the end of iterations and milestones, they should be scheduled at the beginning of the project to ensure the right participants are available at the right times.

A CFG is one form of acceptance testing for a product. Individuals from the customer team, together with developers, meet in a facilitated session in which the product features are demonstrated to the customer team. The sessions follow scenarios that demonstrate the product's use by customers. As the "demonstration" proceeds, change requests are generated and recorded.

While customer team representatives work with the engineering team throughout a development iteration, a CFG brings a wider audience into the evaluation process. For example, whereas one or two individual customers from manufacturing might be involved in the day-to-day work with a project team on a manufacturing software application, eight to ten might be involved in a CFG. This wider audience participation helps ensure that features don't get overlooked, the product encompasses more than the viewpoint of a few people, confidence in the product's progress increases over time, and customers begin to become familiar with the product before actual deployment. These review sessions typically take two to four hours, but this timeframe is highly dependent on the type of product and the iteration length. CFG reviews are wonderful vehicles for building partnerships between the customer team and the development team. [1]

[1] Who participates in CFG sessions depends on whether the product is for internal or external customers. In the case of external customers, product marketing has to determine when, and if, external customers will be brought in to review the product. Considerations that impact these decisions include confidentiality, beta testing strategies, and early sales potential.

In a shoe development process, for example, designers work with ideas, sketches, and then more formal CAD drawings. At several points in the process, the designers take their ideas over to the "lab," where technicians build mock-ups of the shoe. These mock-ups are wearable, usable shoes built in very small quantities. At the end of a milestone, the shoes can be shown to the marketing staff for their feedback or even to a selected group of target customers.

The definition of acceptance testing varies by industry, but in general CFG reviews provide a wider focus than acceptance testing. Acceptance testing concentrates on system behavior related to critical engineering design parameters, while CFGs focus on how the customers use the product. CFG reviews gather feedback on look and feel, general operation of the product, and the use of the product in business, consumer, or operational scenarios. For example, a specific acceptance test could measure the heat dissipation of an electronic instrument, or a software acceptance test case might ensure a business rule is properly calculated. Running exhaustive engine, electronic, and hydraulic tests to check predetermined values would be part of an airplane's acceptance testing. Actual flight testing, which exercises the product under conditions of actual use, would be similar to a CFG.

CFG review sessions:

  • Should be facilitated
  • Should be limited to eight to ten customers (Development teams are present but have limited involvement; they are primarily observers.)
  • Review the product itself, not documents
  • Focus on discovering and recording desired customer changes, but not on gathering detailed requirements (if, for example, new features are identified)

CFGs are particularly useful in distributed development scenarios in which daily contact between development teams and customers is difficult. When teams have less-than-optimal contact with customers during iterations, end-of-iteration focus groups can keep the team from wandering too far off track. [2]

[2] For a detailed examination of customer focus groups, see (Bayer 2001).

Customer change requests are recorded for review by the project team after the focus group session. It's best to wait until after the CFG to do this because analysis of these requests often leads to technical discussions that aren't relevant to many of the customer participants. Furthermore, the engineering team's initial response to changes tends to be defensive ("That will be difficult or expensive"), which customers may interpret as a negative response to their suggestions. This environment discourages further suggestions, and sessions lose their effectiveness. The better approach is for the technical team to evaluate the requests the next day and then discuss options with the product manager. Normally, 80% or more of the requests can be handled with little effort, while the others may require additional study or fall outside the project's scope. Accumulated small changes are handled using the time allocated to the rework and contingency cards described in Chapter 6. Significant changes and new features are recorded on feature cards that will serve as input to the next iteration planning session.

Technical Reviews

One of the key principles of exploratory, agile projects is to keep the cost of iteration low (both during development and after first deployment) so that the product itself can adapt to changing customer needs. Keeping the cost of iteration low and adaptability high depends on unceasing attention to technical excellence. Poorly designed, poor-quality, defect-prone products are expensive to change and therefore inflexible in the face of changing customer demands. When customers buy a $500,000 biomedical device, they want it to respond to future needs (within limits) quickly and cost effectively. Often, the flexibility of an industrial product, such as a biomedical device, depends on the flexibility of its embedded software.

Periodic technical reviews, both informal and regularly scheduled, provide the project team with feedback on technical problems, design issues, and architectural flaws. These reviews should address the key technical practices of simple design, continuous integration, ruthless testing, and refactoring to ensure that they are being effectively implemented. Technical reviews are also a collaborative practice and as such contribute to relationship building within the team. As always, these reviews should be conducted in the spirit of agile development: simple, barely sufficient, minimal documentation, short sessions, lots of interaction.

Technical reviews, informal ones at least, occur continuously during an iterative delivery cycle. However, at periodic intervals, and at least once per milestone, a scheduled technical review should be conducted. It should not take more than a couple of hours, except in special situations.

Technical review sessions:

  • Are facilitated
  • Are generally limited to two to six individuals who are competent to evaluate the technical material
  • Review the product, selected documents, and statistics, such as defect levels (The technical team should take time to reflect on the overall technical quality of the product and make recommendations about refactoring, additional testing, more frequent integration, or other technical adaptations.)

As with customer change requests, technical change requests can be handled in the time allocated to the rework and contingency cards. [3]

[3] There are several good references on the practice of peer reviews. For example, see (Wiegers 2001).

Team Performance Evaluations

A fundamental tenet of APM is that projects are different and people are different (and thus teams are different). Therefore, no team should be shoehorned into the same set of processes and practices as another. Project teams should work within an overall framework and guidelines (such as this APM framework and its associated guiding principles), but they should be able to adapt practices to meet their unique needs. Self-organizing principles dictate that the working framework should grant the team as much flexibility and authority to make decisions as possible. Self-disciplinary principles dictate that once the framework has been agreed upon, team members work within that framework. Assessments of team performance should touch on both of these factors.

While people want a degree of flexibility, they also get very tired of constantly starting with a blank sheet of paper, especially when they know they can use established techniques from similar projects in the same company. For example, changing documentation formats from project to project can be a source of frustration to project team members. Starting with a common framework and adapting it based on project and team needs can eliminate many of these frustrations.

Many project management methodologies recommend doing retrospectives at the end of a project. This may be fine for passing learning on to other teams, but it doesn't help improve performance during a project. Iteration or milestone retrospectives of even an hour or two give teams an opportunity to reflect on what is working and what isn't. In coming up with this assessment, the team will want to examine many aspects of the project, asking questions like "What went well?" "What didn't go as well?" and "How do we improve next iteration?" The team might also ask Norm Kerth's interesting question, "What don't we understand?" [4]

[4] The best reference book on conducting retrospectives is (Kerth 2001).

The information shown in Figure 8.1 can be used as a starting point for evaluating team performance. The team evaluates itself in two dimensions, delivery performance and behavior, on a three-point scale: below standard, at standard, or above standard. On delivery performance, the team members are asking themselves the fundamental question, "Did we do the best job we could do in the last iteration?" Notice that the question isn't related to plans but to the team's assessment of its own performance. Whether teams conform to plan or not depends on both performance and the accuracy of the plan (so one piece of this evaluation might be for the team members to assess how well they planned the iteration). A team could meet the plan and still not be performing at an optimal level. In a well-functioning team, members tend to be open and honest about their performance. The team discussion, not the assessment chart itself, is the important aspect of this exercise.

Figure 8.1. Team Self-Assessment Chart (for each milestone, Mn)


The second aspect of the evaluation is team behavior, in which the team, again, assesses its own performance. This evaluation involves answering the questions, "How well are we fulfilling our responsibilities?" and "How well is the organization fulfilling its responsibilities?" Answering these two questions could generate a raft of other questions, such as:

  • Are all team members participating in discussions?
  • Is someone regularly absent from daily meetings?
  • Are team members being accountable for their commitments?
  • Is the project manager micro-managing?
  • Does the team understand how and why key decisions were made during the last iteration?

The team members assess their overall behavior and develop ideas for improvement. For teams that are new to using agile practices, a questionnaire to help them measure their "agility rating" could also be useful.

Finally, the team should evaluate processes and practices that are related to team behavior but not explicitly covered by Figure 8.1. While the team may not want to evaluate the overall development framework at each milestone, it should assess and adapt individual practices to better fit the team. For example, while a team wouldn't decide to eliminate requirements gathering, it might alter the level of ceremony and detail of the requirements documentation. The team might determine that daily integration meetings between feature teams be changed to twice-weekly meetings attended by two members from each feature team. The team might decide that three-week iterations are causing too much overhead. They could switch to four-week iterations and evaluate the impact.

There are a myriad of ways in which teams could adjust their processes and practices. The crucial thing is that they view processes and practices as adjustable and that they not feel the need to continue activities that are not contributing to the goals of the project.

Project Status Reports

Project status reports should have value to the project manager, the product manager, key stakeholders, and the project team itself. The reporting of information should drive activities aimed at enhancing performance. Developing the reports should help the project and product managers reflect on the overall progress of the project: to separate the forest from their daily battle with the trees. The number and frequency of reports and the information in the reports need to match the size, duration, and importance of the project.

Part of the project manager's job involves managing stakeholders, particularly those in upper management, and providing information to them. What stakeholders ask for may be very different from what is needed to manage the project, but the project manager neglects this other information, and periodic interactions with those stakeholders, at her peril. Managing the expectations of various stakeholders can be a delicate balancing act.

Attending status meetings, giving management presentations, gathering accounting information, and a raft of similar activities can drain valuable time from delivering product. At the same time, management and customers are spending money for a product, and they aren't receptive to being told, "Just wait six months until we are finished." Managers have a fiduciary responsibility, and they need periodic information to fulfill that duty. Customers and sponsors need information to make project tradeoff decisions. Status reports must provide information to assist in answering questions such as, "Is the prognosis for the product such that it is no longer economically feasible?" and "Should features be eliminated to ensure we make the product release schedule?"

Most status reporting looks at what was accomplished within the prior reporting period and focuses on the three-legged stool of project management: scope, schedule, and cost. The tradeoff matrix introduced in Chapter 5 contains four attributes to consider in analyzing changes and making decisions: scope, schedule, resources, and stability (defect levels, one aspect of technical quality). In evaluating scope, the team needs to examine not only features delivered versus features planned, but also the value of those features delivered. Finally, since uncertainty and risk drive many agile projects, the team should monitor whether risk and uncertainty are being systematically reduced.

While each of the status reports or charts identified below provides useful information to the development team, the customer team, and management, the "parking lot" graphic provides an excellent visual picture of any project's overall progress. In Chapter 6, Figure 6.7 showed a parking lot diagram used for project planning. Similar diagrams, Figures 8.2 and 8.3, are used here as the basis for status reporting. In the figures, the bar just above each scheduled delivery date indicates the percentage of the features that have been completed (partially completed features are excluded). Colors enable a quick analysis of the project's progress, especially as the project continues and the colors change from month to month. A white box indicates that no work has begun on the activity, while a blue box (light shading in figure) indicates work has begun on some of the features. A green box indicates that the features within have been completed, while a red box (heavy shading in figure) indicates that at least one scheduled feature has not been delivered in its planned iteration. Figure 8.2 shows project progress by business activity area, or product component, and Figure 8.3 shows an alternative presentation of the status by month.
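As a concrete illustration of these color-coding and percent-complete rules, the following minimal sketch shows how a reporting tool might derive the status of one parking lot box. It is written in Python, and the class and field names are assumptions for illustration only, not part of the parking lot technique itself.

```python
from dataclasses import dataclass

@dataclass
class ComponentStatus:
    name: str
    planned_features: int
    completed_features: int         # partially completed features are excluded
    work_started: bool              # any feature in the box has been started
    missed_planned_iteration: bool  # a scheduled feature slipped its iteration

def parking_lot_color(c: ComponentStatus) -> str:
    if c.missed_planned_iteration:
        return "red"    # at least one scheduled feature was not delivered on time
    if c.completed_features == c.planned_features:
        return "green"  # all features in the box are complete
    if c.work_started:
        return "blue"   # work has begun on some of the features
    return "white"      # no work has begun on the activity

def percent_complete(c: ComponentStatus) -> int:
    return round(100 * c.completed_features / c.planned_features)

# Hypothetical example component
billing = ComponentStatus("Customer Billing", 12, 9, True, False)
print(parking_lot_color(billing), f"{percent_complete(billing)}% complete")  # blue 75% complete
```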

Figure 8.2. Project Parking Lot Report by Business Activity (adapted from Jeff DeLuca's work)


 

Figure 8.3. Project Parking Lot Report by Month (adapted from Jeff DeLuca's work)


 

Scope and Value Status

In an agile project, scope performance can be measured by comparing features completed against features planned, by iteration, as shown in Figure 8.4. (This chart, and a detailed listing of features and their status, can be used as supplementary detail to the parking lot report.) In general, since features can be added or deleted by the customer team over the life of a project, development teams should be evaluated based on the number of features delivered rather than the specific features delivered. If the team delivers 170 of 175 planned features, then its performance is very good, even if 50 of those features are different from those in the original plan.

Figure 8.4. Delivery Performance

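The count-based scope measure behind a chart like Figure 8.4 can be computed very simply, as in the sketch below; the iteration names and feature counts are made-up assumptions, not data from the book.

```python
# Planned vs. delivered feature counts, by iteration (illustrative numbers)
iterations = {
    "Iteration 1": (30, 28),
    "Iteration 2": (35, 36),
    "Iteration 3": (40, 37),
}

planned_total = sum(planned for planned, _ in iterations.values())
delivered_total = sum(delivered for _, delivered in iterations.values())
print(f"Delivered {delivered_total} of {planned_total} planned features "
      f"({100 * delivered_total / planned_total:.0f}%)")
```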

Scope tells us the raw volume of feature deliverables, but not how valuable they are. Since the objective of agile development is to deliver high-value features early, in some cases to achieve early ROI, one potentially beneficial report is "features and value delivered," as shown in Figure 8.5. For this report, the customer team needs to apportion the product's value to individual features or to the broader level of a component or business activity. If the development team is burdened with estimating the "cost" of each feature, then the customer team (including the product manager) should be burdened with estimating the "value" of each one. As Tom DeMarco and Tim Lister (2003) say, "Costs and benefits need to be specified with equal precision." With incremental cost and value information for every feature, the stakeholders can make much better tradeoff decisions.

Figure 8.5. Features and Value Delivered


Let's assume that the net present value (NPV) of the revenue stream from the 175-feature product mentioned above is $15 million, and the development cost has been estimated at $3 million. The product manager would apportion the $15 million to each of the 20 to 30 components (rather than to all 175 features). Once this analysis and allocation were completed, a value performance chart like that in Figure 8.5 could accompany the scope performance report. By recognizing value delivered, even using a cursory valuation of features, the project team, customer team, and executives have better information with which to make project decisions. [5]

[5] An extension to this idea would be to use feature value in an earned value analysis (EVA). Using feature values can turn the "V" in EVA into a real value measure.
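To make the apportionment arithmetic concrete, here is a rough sketch of allocating the $15 million NPV across components and totaling the value delivered so far. The component names and percentage splits are invented for illustration; the book does not prescribe this particular breakdown.

```python
npv_total = 15_000_000  # the $15 million product NPV from the example above

# Product manager's value allocation across components (hypothetical split)
component_share = {
    "Order Entry":      0.30,
    "Customer Billing": 0.25,
    "Inventory":        0.20,
    "Reporting":        0.15,
    "Administration":   0.10,
}

completed = {"Order Entry", "Customer Billing"}  # components delivered so far
value_delivered = sum(share * npv_total
                      for name, share in component_share.items()
                      if name in completed)
print(f"Value delivered to date: ${value_delivered:,.0f} of ${npv_total:,.0f}")
# -> Value delivered to date: $8,250,000 of $15,000,000
```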

For a simple value assessment, the customer team can rank order features using an informal value assessment and then indicate its satisfaction with the delivered features from each iteration on a five-point scale, as shown (in combination with a technical quality assessment) in Figure 8.6. Finally, as systematic risk and uncertainty reduction is a strong indicator of increasing value, a technical risk and uncertainty assessment, as shown in Figure 8.7, can be beneficial.

Figure 8.6. Product and Technical Quality Assessment


 

Figure 8.7. Technical Risk and Uncertainty Assessment


Since delivering customer value and agility may be more important than meeting cost budgets, agility measurements can also be useful. At a feature level, the team can track and report on changes each iteration: original features planned, original features deleted or deferred, and new features added. The team can also track the change requests from the customer focus groups and report changes requested, implemented, deleted, or deferred. These reports showing the team's response to customer-requested changes can assist in explaining variances in schedule or costs.
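The sketch below shows one way such per-iteration change measures might be recorded; the field names and figures are assumptions for illustration rather than a reporting format from the book.

```python
from dataclasses import dataclass

@dataclass
class IterationChurn:
    planned: int               # features originally planned for the iteration
    deleted_or_deferred: int   # original features dropped or pushed out
    added: int                 # new features accepted during the iteration
    cfg_requests: int          # change requests raised in the customer focus group
    cfg_implemented: int       # of those, how many were implemented

    @property
    def net_scope_change(self) -> int:
        return self.added - self.deleted_or_deferred

# Hypothetical third iteration
iteration3 = IterationChurn(planned=40, deleted_or_deferred=4, added=7,
                            cfg_requests=25, cfg_implemented=21)
print(f"Net scope change: {iteration3.net_scope_change:+d} features; "
      f"CFG requests implemented: {iteration3.cfg_implemented}/{iteration3.cfg_requests}")
```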

Schedule Status

Schedule reports can take a variety of shapes depending on the organization's standard practices. Figure 8.8 is an example that shows projected end dates (in elapsed weeks) for a project. During the replanning for each iteration, the team estimates, based on progress and feature changes, the projected number of weeks for the entire project. Figure 8.8 shows high, probable, and low schedule estimates. Notice that the range of these estimates is wider at the beginning of the project (greater uncertainty) and narrower at the end (greater certainty). A range that isn't narrowing indicates that uncertainty and risk are not being reduced adequately and the project may be in danger.

Figure 8.8. Projected Schedule

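A minimal sketch of the narrowing-range signal described above follows; the projected-week figures are illustrative assumptions, not values from Figure 8.8.

```python
# (low, probable, high) projected project length in weeks, one tuple per iteration
projections = [
    (30, 38, 48),
    (32, 37, 44),
    (34, 36, 40),
]

spreads = [high - low for low, _, high in projections]
narrowing = all(later <= earlier for earlier, later in zip(spreads, spreads[1:]))
print("Estimate spread by iteration:", spreads,
      "(narrowing)" if narrowing else "(NOT narrowing: risk is not being reduced)")
```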

 

Cost (Resource) Status

Cost reports can also take a variety of shapes depending on an organization's practices. Although accounting reports are outside the scope of this book, one key number that many managers watch is expected cost to complete, which on an agile project should be calculated at least once per month.
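One common way to produce an expected-cost-to-complete figure is a simple burn-rate extrapolation from features completed. The sketch below illustrates that idea with made-up numbers; it is one reasonable approach, not a calculation prescribed by the book.

```python
cost_to_date = 1_100_000                  # dollars spent so far (hypothetical)
features_done, features_total = 70, 175   # completed vs. total planned features

cost_per_feature = cost_to_date / features_done
expected_cost_to_complete = cost_per_feature * (features_total - features_done)
estimate_at_completion = cost_to_date + expected_cost_to_complete
print(f"Expected cost to complete: ${expected_cost_to_complete:,.0f}; "
      f"estimate at completion: ${estimate_at_completion:,.0f}")
```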

Quality Status

As with other project measurements, there is a wide range of quality metrics, many of which are product dependent. One important aspect of quality is a team's assessment of its work, as shown in Figure 8.6. Given the results of technical reviews, defect reports (e.g., find and fix rates), and the team's sense of the project's "feel" or "smell," [6] this chart plots the level of technical quality, as assessed by the team, each iteration. Another example of a quality measure in software development is the growth of test code compared to executable code; both should be growing proportionally.

[6] In Extreme Programming, aspects of quality are evaluated by "smell," a term that conveys an intangible, but at the same time a very real, evaluation.
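The test-to-executable-code comparison can be tracked as a simple ratio per iteration, as in the illustrative sketch below; the line counts are assumptions.

```python
# (iteration, production LOC, test LOC) - illustrative numbers only
history = [
    (1, 12_000, 4_100),
    (2, 20_000, 7_000),
    (3, 31_000, 10_500),
]

for iteration, production_loc, test_loc in history:
    print(f"Iteration {iteration}: test/production ratio = {test_loc / production_loc:.2f}")
# A steadily falling ratio suggests test code is not keeping pace with the product.
```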

Project Team Information

It's not just executives and product managers that need project status information; team members do also. Agile projects are open projects, meaning that project information is widely shared among all team members and with customers and stakeholders. Project teams need to have key information readily available and communally shared. It needs to be visual and prominently posted, either on walls or a whiteboard for collocated teams, or on a common virtual whiteboard for distributed teams. Alistair Cockburn (2002) calls these displays "information radiators": "An information radiator displays information in a place where passersby can see it. With information radiators, the passersby don't need to ask questions; the information simply hits them as they pass."

Visual displays of project information need to concentrate on vision, focus, scope, and issues. Teams often get so caught up in details that the big picture gets lost. Arguments over details can frequently be resolved by reviewing the product vision information or the guiding principles. Risk and issue lists are used to jog team members' consciousness so they are periodically thinking about solutions. A Web site will be needed to display this information for distributed teams.

Adaptive Action

As mentioned earlier, the term "adaptive action" conveys a sense of responding rather than correcting. In answering the three questions on value, progress, and adaptation posed at the beginning of this chapter, there are three further detailed questions to be asked: Where are we? Where did we plan to be? Where should we be? Adaptive actions run the gamut, from minor tweaks to the next iteration's planned features, to adding resources, to shortening the project's schedule (with appropriate feature adjustments). Adaptive adjustments can impact technical activities (e.g., allocating more time for refactoring) or modify delivery processes to make them more effective. Any of the four review types (product, technical, team, and project status) can result in adaptive actions.

The two fundamental categories of risk in new product development are technical performance risk (whether we can actually build the product to meet the specifications and performance requirements within the time allotted) and marketing risk (whether customers will buy the product or, for an internal product, use the product to achieve business value). Since the product development process "purchases" information that reduces these risks, the constant attention of the management team should be on activities that systematically deliver value and reduce risk. Adaptive actions should use these two issues as focal points.
