Scope Complete Milestone and Its Deliverables

The Developing Phase's primary goal is to reach the Scope Complete Milestone. The seven deliverables required to meet this milestone are:

  • Revised Functional Specification This deliverable updates the design to reflect the developed product. The revised Functional Specification reflects any agreed-to changes occurring in the design during the Developing Phase. This specification can also identify new features that the team may consider for future versions.
  • Revised Master Project Plan This document shows how the team plans to execute the last stage of the development process, the Stabilization Phase, including deliverables the team intends to deploy. The Revised Master Project Plan reflects any changes that occurred during the Developing Phase.
  • Revised Master Project Schedule This schedule outlines the time required for the Stabilization Phase and its deliverables. Like the Revised Master Project Plan, this schedule reflects any changes that occurred during the Developing Phase.
  • Revised Master Risk Assessment Document This document shows possible risks to the project, and outlines how the team plans to handle each risk. The revised Master Risk Assessment contains both preexisting risks and newly identified risks. This assessment also describes risk management plans and their progress to date.
  • Source code and executables This set of deliverables is a feature-complete release of the actual application. The source code and executables represent the application with all the product features complete and constitute the team's first attempt at a release candidate.
  • User performance and support materials These materials document how the application works, for both users and support teams. Artifacts that support user performance range from product wizards to user documentation and classroom training materials. Additionally, artifacts for operations teams that will support the application in production will describe how the application is installed, configured, administered, and used.
  • Testing elements This set of deliverables describes how the application will be validated, and what application elements are to be tested, during the Stabilization Phase. Testing elements are test plans, specifications, cases, and scripts that cover the full feature set of the newly baselined product. The Testing team should create automated test scripts for use in the Developing and Stabilization Phases.
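The automated test scripts called for above can be simple, self-checking routines that run unattended. The following is a minimal sketch in Python; the `calculate_total` feature and its expected values are hypothetical stand-ins for real product features:

```python
# Hypothetical feature under test -- a stand-in for any baselined product feature.
def calculate_total(prices, tax_rate):
    """Return the taxed total for a list of line-item prices."""
    return round(sum(prices) * (1 + tax_rate), 2)

# Automated test cases: each returns True on pass, so a script can run
# the full suite unattended and report a single pass/fail summary.
def test_basic_total():
    return calculate_total([10.0, 5.0], 0.10) == 16.5

def test_empty_order():
    return calculate_total([], 0.10) == 0.0

def run_suite():
    """Run every test and report (passed, failed) counts."""
    results = [t() for t in (test_basic_total, test_empty_order)]
    return results.count(True), results.count(False)
```

Because the scripts report a machine-readable result, they can be rerun unchanged at every interim milestone and again during the Stabilization Phase.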

Interim Milestones

To accomplish its project goals successfully, the team must break the Developing Phase into manageable portions by setting interim product delivery milestones to work toward throughout the phase. During the Developing Phase, interim milestones encourage a product-shipping mindset. While the team may choose to include other milestones, basic interim milestones during the Developing Phase are typically represented as one or more of the following: internal release, alpha release, and beta release, which lead to the Scope Complete Milestone's product deliverable.

Interim milestones primarily measure the progress of building a product. Code creation is often done by different teams in parallel and in segments for different product features, so the team needs a way to measure progress as a whole. To provide solid boundaries for interim milestones, internal releases force the product team to synchronize the code at a product level. This synchronization process is sometimes called integration testing.

Typically, internal releases on medium-size projects are one to three months apart and on large projects two to four months apart. However, the number and frequency of releases will vary depending on the project's size and duration.

The team should also try to achieve user interface and database freeze points during development. Although not formal interim milestones, these points are ideally accomplished early in the Developing Phase. Deliverables such as user education training documents can have a strong dependency on the user interface, because the user support and training material is built around it. Also, the feature teams often begin implementing functionality against the database structure early in the Developing Phase. Thus, the earlier in the Developing Phase the user interface and database design are frozen, the fewer changes the dependent documentation and code will require.

The Developing Phase typically acts as an iterative process to reach a well-defined end result. Specific interim releases can have goals for the product team, such as testing a particular feature, portion of the user interface, or product deployment capabilities. Some product releases may be externally focused toward customers and users. Early releases should incorporate high-priority product features to ensure that the team can deliver such features. Early internal releases can address high-risk architectural areas to determine feasibility or identify development changes required to minimize the cost and effect of design changes.

Breaking up large projects helps the team focus on more actionable subsets that can direct daily progress. Any minor corrective actions occurring early in the development process can also decrease the cost of changes. The product mindset of internal and external releases can increase quality by providing a more stable base for new development and allowing the team to fix bugs closer to the time at which they occur rather than toward the end of the project.

Internal Product Releases

Internal releases are similar to the overall product concept of versioned releases. Both involve incrementally adding functionality to a known baseline. These internal releases also help the team stabilize the product, achieve quality goals, and practice shipping the product. When a team uses internal releases during the developing cycle, it is essentially establishing a known baseline, which provides a further measuring tool. The team can then compare the functional specification implementation with the actual product's business need, and additionally identify design issues not discovered during the Planning Phase.

Internal releases happen throughout the entire Developing Phase. Each release is designed to expand on previous releases, as noted in Figure 12.3. Because each release can be viewed as a self-contained product, the team gains the additional benefit of reinforcing the product mindset during the development process. Reviewing the results of an internal release is a necessary part of turning the team into a learning organization that repeats best practices and learns from mistakes.

Figure 12.3 Internal releases during the Developing Phase

Each internal release has a quality bar that must be reached before the team can achieve the internal milestone. A quality bar is a quality level measurement that the team has clearly stated and must achieve for the internal release. Each internal release has a small Stabilizing Phase, during which the team can bring the product up to quality standards and practice the stabilizing process.
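A quality bar works best when it is expressed as explicit, measurable criteria. The following sketch shows one way to encode such a bar; the metric names and threshold values are illustrative assumptions, not values prescribed by MSF:

```python
# Illustrative quality bar for an internal release. The metric names and
# threshold values here are assumptions chosen for the example, not
# MSF-prescribed numbers.
QUALITY_BAR = {
    "max_severity_1_bugs": 0,    # no system-failure bugs outstanding
    "max_priority_1_bugs": 0,    # no showstoppers outstanding
    "min_test_pass_rate": 0.95,  # at least 95% of automated tests passing
}

def meets_quality_bar(sev1_bugs, pri1_bugs, test_pass_rate):
    """Return True only if every stated quality criterion is satisfied."""
    return (sev1_bugs <= QUALITY_BAR["max_severity_1_bugs"]
            and pri1_bugs <= QUALITY_BAR["max_priority_1_bugs"]
            and test_pass_rate >= QUALITY_BAR["min_test_pass_rate"])
```

Stating the bar this concretely removes ambiguity about whether an internal milestone has actually been achieved.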

As the project team develops the product, it uses successive internal releases to incrementally add feature subsets until the product is complete. With each internal release, the overall development scope increases until the entire product is complete. If a team hasn't planned properly during the Planning Phase, it will be much more difficult to separate its development into interim releases, and to implement each successive release.

External Product Releases

Although this practice does not strictly adhere to MSF, we recommend that external product releases also occur during the Developing Phase. These releases help synchronize the coding process with the other project team members' responsibilities. These external releases demonstrate successful execution of the project plan to customers, users, and other external project stakeholders.

Depending on the project's size and duration, the team may produce one alpha release or several; an alpha release is rarely feature-code complete. An alpha release enables the project team to practice its ability to fully release a product, and also begins to test other product-related deliverables, such as user and operations support materials.

Beta external releases are typically very close to feature-code complete, though usually without performance enhancements.

Revised Functional Specification

As the Planning Phase's primary deliverable, the Functional Specification is the paramount input resource for the Developing Phase. The Functional Specification is the compass that guides the project downstream. While a team can't plot the exact course for the project, the team can use the Functional Specification as a guide. Interim milestones are tools that can be used to make the Functional Specification a reality while revising it as needed.

Reviewing the Functional Specification

The revised Functional Specification doesn't have to be perfectly integrated with the interim milestones. However, before the Scope Complete Milestone can occur, the Functional Specification does need to reflect any differences between the delivered application and the original design.

The team should not be surprised if new changes result from the implementation of the interim milestones. As you may recall, one of the goals of the Functional Specification is to help provide team consensus. Again, just as the project team reviews the original Functional Specification, changes to the Functional Specification should also be reviewed for the following reasons:

  • Everyone with a stake in the Functional Specification is allowed to take part in creating it.
  • A variety of people are involved in making sure that audience needs are met.
  • The Functional Specification serves as a method of communicating what is going to be built.
  • A forum for negotiating and achieving buy-in is provided.

Revising the Functional Specification keeps the design aligned with reality, assuring the team that business needs are being met. Each team role can review the Functional Specification using the criteria in Table 12.1.

Table 12.1 Reviewing the Functional Specification

Team role | Role in reviewing the Functional Specification
Customer | Functionality of the software product created in interim milestones meets business needs.
Product Management | Solution meets known requirements, or the Functional Specification is reviewed and revised to match changes to requirements.
Program Management | Program Managers believe that responsibilities for each specified function are clear, and that committed schedules are realistic.
Development | Implement features in the solution based on Functional Specification requirements. Re-evaluate risks daily to ensure that implementation is achievable. Discuss issues that arise with the team, and document where changes need to occur in the Functional Specification and schedule. Ideally, adequate planning has been done and changes will be marginal.
Testing | The goal for testing in the Planning Phase was a defined strategy for test platforms, scripts, and data, and a commitment to testing all aspects of the Functional Specification. In the Developing Phase, testing should align its goals with the interim milestones the team has chosen. Testing can only test what the development team has built.
User Education | Take part in alpha and beta tests to ensure that product features are usable, though not necessarily bug-free. Users should be provided with adequate information on how to work with the product, although formal help files and documentation may not be available during the Developing Phase.
Logistics Management | Work with alpha and beta testing to support and deploy the product. As the product is deployed, the Logistics team can provide information needed to change the Functional Specification to ensure a smoother product deployment.

As an example of revising the Functional Specification, let's examine the development of the model Resource Management System (RMS) application. During development of the RMS application, a problem was discovered involving the Web client, ADO disconnected Recordset objects, and Windows NT security: when using a disconnected Recordset object, the user's security ID was not being passed properly to the middle-layer business objects in MTS. The team was forced to make slight changes to the application's architecture. Although the Web client and ADO disconnected Recordsets were tested during the Planning Phase's proof-of-concept, the integration of Windows NT security was not included in the proof-of-concept, so the design problem was not discovered until the Developing Phase. Fortunately, a software patch that resolved the problem was released before the Developing Phase was complete. Unfortunately, the patch was not released in time to be implemented in the team's first release of the application, so the RMS team identified a design change for the product's next version.

Revised Master Project Plan

The Master Project Plan needs to be updated to match planned implementation details against reality. While the Master Project Plan shouldn't be used to manage the project's day-to-day needs, the plan serves as a "big picture" to tell the team where the project is heading.

As the project evolves, team and work role plans continue to be carried out, task execution is tracked against the plan, and efforts to integrate the plan into the product continue to be synchronized. The overall owners of the Master Project Plan are the Program Managers. Each team role develops and maintains its own realistic project plan within the overall Master Project Plan. The development, test, training, user support, communications, and deployment plans are updated to reflect changes caused by details found in implementation.

Revised Master Project Schedule

The Master Project Schedule contains information on the following schedules:

  • Development
  • Internal and external product ship dates
  • Test
  • Training
  • User support
  • Communications
  • Deployment

The development schedule is the driving force during the Developing Phase; therefore, the other schedules should integrate at the interim milestones. Ultimately, a revision of the Master Project Schedule results from changes occurring in any of the project schedules.

During the Planning Phase, and as part of the development schedule details, the team can create release dates for interim milestones to give the team a tangible target to pursue. During the Developing Phase, when the various teams target interim milestones, changes to the schedules should be noted and passed to Program Management. Program Management must assess how individual updates will affect the overall schedule and work with the other team roles to determine the effect on the overall project. When conflicts arise, Program Management is responsible for driving the tradeoff decisions that will affect the schedule. Once the decisions are made, the updated schedules are compiled into a revised Master Project Schedule.

Revised Master Risk Assessment Document

The Master Risk Assessment Document is first created as a deliverable during the Envisioning Phase. This document is owned by the Program Management team, which also has responsibility for ensuring that the revision is complete and accurate. The revised Master Risk Assessment Document represents additional detailed risk assessments from various team members. The team can use this aggregated version of lower-level assessments to get an overall view of risks.

The Master Risk Assessment Document will help synchronize risk assessments across a team. This assessment will also aid in prioritizing the team's decisions about risk management. Risk management is still handled by the individual team roles responsible for given risks.

Source Code and Executables

The source code and executables represent a team's code- and feature-complete delivery. As the code produced by programmers begins to meet the functional requirements, the quality of that code should also be considered.

Efficiency in developing the source code and executables is critical for delivery of a timely product. Unfortunately, however, many teams sacrifice code quality when rushing to finish a program by a particular deadline. It is important to remember that high code quality must be maintained throughout the project, even under tight deadlines; the project may cost more in the long run if developers cut corners on source code in the short-term interest of saving time.

Although management must take responsibility for decisions that compromise quality to meet deadlines, programmers can also lower the quality bar in the interest of saving time. If no one monitors the quality of the code that programmers write, it should be no surprise when that quality deteriorates.

To minimize chances of sacrificing quality to save time, the team should set standards that:

  • Force developers to maintain a methodical and disciplined approach to coding, regardless of pressure to take shortcuts.
  • Remind developers constantly that code's internal quality is important.

The decision to use standards and implement a review process to ensure that such standards are being used will affect the way that programmers approach coding. By making it clear that such standards are mandatory rules, not optional guidelines, the team will ensure that its members realize that meeting standards is an integral part of their jobs rather than an optional extra to address when convenient.

Coding standards have traditionally focused on the following topics:

  • Naming
  • Layout
  • Commenting
  • Coding dos and don'ts

The emphasis on such topics is usually on writing shareable code—that is, code that other programmers can read and reuse easily. Shareable code is beneficial, even if one programmer exclusively works with the code. When returning to shareable code three months after the initial development effort, the programmer will be more likely to be able to recognize and work with the code again.

Another goal of traditional coding standards is to assist in making the code more robust. For example, such coding standards should mandate error handling, prohibit GoTo statements, and so on. Program elements, files, functions, variables, and constants must all follow naming conventions. Such requirements aid in project organization. Also, prefixing elements to indicate the data type, as in Hungarian notation, is commonplace.
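Naming rules like these can be checked mechanically rather than by eye. Below is a small sketch of such a check; the prefix table is a deliberately tiny illustration, and a real coding standard would define its own, larger set of prefixes:

```python
import re

# Illustrative Hungarian-notation prefixes (string, numeric, boolean,
# floating point). These are assumptions for the example; a real coding
# standard would define its own table.
TYPE_PREFIXES = ("str", "n", "b", "f")

# A conforming name is a known prefix followed by a capitalized
# descriptive name, e.g. strUserName or nRecordCount.
NAME_PATTERN = re.compile(r"^(%s)[A-Z]\w*$" % "|".join(TYPE_PREFIXES))

def follows_naming_convention(identifier):
    """Return True if the identifier follows the prefix convention."""
    return bool(NAME_PATTERN.match(identifier))
```

A check like this can run as part of a code review or an automated build step, which keeps the standard enforced even under schedule pressure.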

User Performance and Support Elements

Elements that support user performance range from in-product wizards to user documentation and classroom training materials. Together the team should focus on building the product as described in the Functional Specification, and on working with users to ensure product reliability from one internal release to another.

As Steve Maguire noted in Debugging the Development Process (Microsoft Press, 1994):

If users think that you're on the right track when reading the documentation, it goes a long way towards assisting the programmer in writing the code. Programmers need to program with the users in mind when they implement their code. Also, if the programmer thinks a task is not intuitive or slow, it's likely that the user will also.

When testing the product's functionality during interim milestones, beta testers should not use the portions of the application that do not meet the quality bar desired for the product's code. One or two malfunctioning parts of the application may give a negative impression to testers for overall functionality. In beta testing, ideals aren't always going to be met. Therefore, the objectives for testing need to be clearly stated so that beta testers know specifically what to test and what to ignore.

Testing Elements

In this section, we'll discuss various testing tools that can be used to help design and develop a solid product. We'll discuss integrated milestone testing and review the zero-defect mindset, code reviews, and the daily build.

Integrated Milestone Testing

The testing process is not limited to the Stabilizing Phase, but is an integral part of the development process as well. At the Project Plan Approved Milestone, the team baselines the test plan and begins work on a more detailed test specification that describes how the team will test individual features.

The test specification is finished at the Scope Complete Milestone because, at that point, the feature set should not grow further. Because the Scope Complete Milestone represents a feature-complete baseline product, the team can consider the product to be in alpha form for the Stabilizing Phase. This alpha product should not be confused with the alpha and beta testing performed in the Developing Phase's interim milestones. Interim milestones in the Developing Phase also contain a subset of actions to be performed during the Stabilizing Phase.

The transition between the Developing Phase and the Stabilizing Phase is characterized by the transition from coverage to usage testing. Following are examples of types of coverage testing that might be done on a typical project.

  • Unit and functional tests These make up the majority of manual testing. Developers perform unit tests with the goal of discovering bugs before testers find them. Check-in, build verification, and regression tests, by contrast, tend to be automated, because they are run repeatedly throughout the development process.
  • Functional tests These focus on making sure that features are present and functioning properly.
  • Check-in tests These are quick, automated tests that developers perform before checking in code.

Additional testing includes:

  • Build verification tests These are run after the daily build to verify that the product has been built successfully. Build verification tests are also often referred to as smoke tests.
  • Regression tests These are automated tests that run after the daily build to ensure that the code has not regressed in quality.
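A build verification ("smoke") test of the kind listed above can be as simple as launching the freshly built product and confirming that it exits cleanly. The following is a minimal sketch; the stand-in command substitutes for the real daily-build executable:

```python
import subprocess
import sys

def smoke_test(command):
    """Build verification test: launch the freshly built product and
    return True if it starts and exits with status 0."""
    result = subprocess.run(command, capture_output=True, timeout=60)
    return result.returncode == 0

# Stand-in "product": any command that should exit successfully. A real
# smoke test would launch the actual daily-build executable here.
BUILD_OUTPUT = [sys.executable, "-c", "print('product started')"]
```

Because the test only checks that the product launches, it stays fast enough to run after every daily build.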

Following are examples of usage testing that might be done on a typical project:

  • Configuration tests These confirm that a product runs on the target hardware and software.
  • Compatibility tests These involve examining how a program interacts with other programs, potentially even previous versions of the principal program being tested.
  • Stress tests These focus on pushing a product to its limits. Conditions tested include low memory capacity, full disks, high network traffic, and high numbers of users.
  • Performance tests These document how quickly the product performs. Configuration, compatibility, and stress testing may also play a role in performance testing.
  • Documentation and help file tests These focus on errors in documentation and help files, including content defects as well as product deviations.

Alpha tests are the first uses of the product as a whole, typically by resources outside the product team. Beta tests are product trials conducted by a subset of external users to discover issues that the product team did not find. Despite the widely held perception that the two terms are synonymous, defects are actually a class of bugs; a bug is any issue that arises from using the product being developed.

As we previously discussed, classifying bugs helps in identifying priorities and risks and to prepare the team to resolve the bugs. Severity and priority are the two main issues for classifying bugs. Severity relates to the impact of the bug on the overall product if the bug is not fixed; priority is simply the team's measure of the bug's importance to product stability.

Typical severity-level classifications are:

  • Severity 1 System failure; a bug that causes the system to fail or risk total data loss.
  • Severity 2 Major problem; a bug that represents a serious defect in software function, but does not necessarily risk total system failure or data loss.
  • Severity 3 Minor problem; a bug that represents a defect in software function, but without much risk of lost data or work.
  • Severity 4 Trivial; a bug that represents a primarily cosmetic problem, which most users are unlikely to notice.

Typical priority-level classifications are:

  • Priority 1 Highest priority; with these "showstopping" bugs, the product cannot ship, and often, the team cannot achieve the next interim milestone.
  • Priority 2 High priority; with these major bugs, the product cannot ship, but the team may be able to achieve the next interim milestone.
  • Priority 3 Medium priority; with these bugs, the product can ship, and the team can also achieve interim milestones. These bugs are low enough in priority that they tend to be fixed only if there is enough time near the end of the project, and if fixing them does not create a significant risk.
  • Priority 4 Low priority; with these bugs, testers typically make enhancement requests. Bugs with such low priority are often negligible and not worth fixing.

Typically, a product cannot ship with known severity 1 or priority 1 bugs.
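The two classification scales and the shipping rule above can be encoded directly, which makes triage queries mechanical. In the sketch below, the numeric levels and the ship-blocking rule come from the text, while the `Bug` structure itself is an illustrative assumption rather than a prescribed format:

```python
class Bug:
    """A reported bug classified on the two scales described above:
    severity (impact if not fixed) and priority (importance of fixing)."""

    def __init__(self, title, severity, priority):
        assert severity in (1, 2, 3, 4) and priority in (1, 2, 3, 4)
        self.title = title
        self.severity = severity
        self.priority = priority

def blocks_ship(bugs):
    """Apply the rule that a product cannot ship with known
    severity 1 or priority 1 bugs."""
    return any(b.severity == 1 or b.priority == 1 for b in bugs)
```

The same structure also supports the earlier interim-milestone rule, since priority 1 bugs block the next milestone as well as the ship date.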

Resolving a bug is an interim step toward closure, which occurs only after a tester determines that fixing the bug did not create another problem. Closure also occurs if a tester determines that a particular bug is unlikely to surface again.

Bugs are typically resolved as:

  • Fixed The developer has fixed the bug, tested the fix, checked in the code, assigned the fix to a release number, and assigned the bug back to the tester who reported it.
  • Duplicated The bug reported is a duplicate of another bug already recorded in the bug database. The duplicate bug should be closed and linked to the original bug.
  • Postponed The bug will not be fixed in the current release, but might be fixed in a subsequent one. This designation should be used when the team sees value in fixing the bug, but does not have the time or resources to correct it during the current release being tested.
  • By design The behavior reported in a particular bug is intentional and acknowledged in the Functional Specification.
  • Can't reproduce The developer can't verify the existence of the bug with any level of consistency.
  • Won't fix The bug will not be fixed in the current release, because the team does not think fixing the bug is worth any effort.
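These resolution states, and the rule that closure requires tester verification, can be modeled explicitly in a bug-tracking tool. A minimal sketch (the state names come from the list above; the closure rule's encoding is an illustrative assumption):

```python
from enum import Enum

class Resolution(Enum):
    """The resolution states listed above. Resolution is an interim step:
    a bug is closed only after a tester verifies the outcome."""
    FIXED = "fixed"
    DUPLICATED = "duplicated"
    POSTPONED = "postponed"
    BY_DESIGN = "by design"
    CANT_REPRODUCE = "can't reproduce"
    WONT_FIX = "won't fix"

def can_close(resolution, tester_verified):
    """Closure requires both a recorded resolution and a tester's
    verification that the fix did not create another problem."""
    return resolution is not None and tester_verified
```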

Zero-Defect Mindset

As discussed earlier, a zero-defect mindset for the entire project team is a critical success factor for the product. Also, a zero-defect mindset represents the team's commitment to achieve a quality product, specifically by building the quality into the product at the time the team does the work.

Having a zero-defect mindset does not mean developing a product with absolutely no defects. The zero-defect mindset is simply a goal to which the team can aspire. Likewise, zero-defect deliverables do not necessarily have any defects, but do meet a predetermined quality bar. Zero-defect milestones, in turn, require the product to meet a predetermined level of quality before such milestones can be achieved.

The central benefit of a zero-defect mindset is that it gives quality a high priority and visibility in the project. Because high quality is a basic customer need, a by-product of the zero-defect mindset is a focus on meeting customer needs.

The Daily Build

Although it can be difficult to implement, executing a daily build provides great benefits. A daily build is simply compiling and integrating the application's source code into a deliverable executable. As the name implies, the daily build should occur every day. In practice, however, the daily build concept can be applied in a slightly longer time frame, but should not exceed a three- to four-day time period. In Dynamics of Software Development, Jim McCarthy notes:

It's easy to be delusional when you're creating software but, in the face of the daily build, much potential for fantasy is harmlessly discharged.

One of the strengths of a daily build is that it's available publicly to anyone on the project team wanting to assess the progress of a project. This build indicates progress by identifying that the product is moving forward as a whole rather than simply in individual pieces.

The build provides the definitive status of team and product progress by allowing the team to examine the product holistically with little room for interpretation. As Jim McCarthy mentions in Dynamics of Software Development, the daily build serves as the heartbeat of the project:

If the daily build fails … the 'heartbeat' monitors start to screech insistently demanding emergency attention.

He continues by saying that a weak pulse indicates a struggling project, whereas a strong pulse serves to reassure the team.

The concept of a daily build has many benefits, but fundamentally, a daily build gives the product life during the development process. Another benefit of the daily build lies in the act of putting the product's pieces together while exposing elements that aren't working properly. Pieces that don't fit properly into the product highlight the product's integration issues. Building their own pieces into the product forces team members to synchronize their efforts.

Yet another benefit lies in having the team test each daily build to determine product status and quality. Team and customer morale improves when they see the product's progress from build to build. The frequency of daily builds can also benefit the team by enabling team members to pinpoint the source of a defect more efficiently. Frequent builds also enable team members to maintain synchronization more easily.
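A daily-build driver in this spirit runs each build step in order and reports the first failure, so the team sees immediately where the "heartbeat" stopped. The sketch below uses stand-in commands in place of the project's real compiler and smoke tests:

```python
import subprocess
import sys

def daily_build(steps):
    """Run (name, command) build steps in order. Return the name of the
    first failing step, or None if the whole build succeeds."""
    for name, command in steps:
        result = subprocess.run(command, capture_output=True)
        if result.returncode != 0:
            return name  # the point where the build's "heartbeat" stopped
    return None

# Stand-in build steps; a real driver would invoke the project's
# compiler, linker, and build verification tests here.
STEPS = [
    ("compile", [sys.executable, "-c", "pass"]),
    ("smoke test", [sys.executable, "-c", "print('ok')"]),
]
```

Run unattended every night, a driver like this gives the whole team an unambiguous daily status: either the build succeeded, or a named step needs emergency attention.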

Microsoft Corporation - Analyzing Requirements and Defining Solutions Architecture. MCSD Training Kit
Year: 1999