Objective 2: Iteratively Develop a Complete Product That Is Ready to Transition to Its User Community

Describe the Remaining Use Cases and Other Requirements

As you implement and test a use case, you often need to revisit at least some of the detailed requirements, and in many cases you may even want to rethink the entire use case as you come up with better solutions. If the analyst and developer are different people, they need to discuss jointly how to interpret requirements that are unclear or could be improved. Analysts should have a very good understanding of the business needs, but may have trouble coming up with an optimal solution (they may be blinded by how business is done today). Developers can in many cases come in with fresh eyes and find new, innovative ways to address identified business needs. To build really great applications, it is important to maintain an open dialog between analysts and developers so that the fresh perspective developers may provide is not lost.

Nonessential use cases and those with no major architectural impact are generally skipped in Elaboration. For example, if a general print feature has already been implemented and there is a use case for maintaining certain information, you can be fairly certain that adding a use case to print that information will not significantly impact the architecture. Also, some systems contain many similar use cases, providing the same general sort of functionality but for different entities or different actors, with different user interfaces. These types of use cases are often left to be detailed in Construction, along with partially detailed use cases: those that have been detailed for the main flow, or a few, but not all, flows of events.

Many nonfunctional requirements, such as performance requirements or requirements around application stability, are essential to getting the architecture right, and most of them should have been properly documented by the end of Elaboration.
You may, however, need to add to or detail some of these as you learn more about the system.

Fill in the Design

In Elaboration, you defined the subsystems and their interfaces, key components and their interfaces, and architectural mechanisms. If you have a layered architecture, you implemented or acquired the hard part of the lower layers (the infrastructure) and the architecturally significant use cases. For each iteration in Construction, focus on completing the design of a set of components and subsystems and a set of use cases. For more information on use-case design, see Design Use-Case Realizations and Components in Chapter 17. As you implement components (consisting primarily of interfaces and stubs), you will see the need to create additional supporting components as you gain a better understanding of the system.

In the earlier Construction iterations, focus on addressing the highest risks, such as those associated with interfaces, performance, requirements, and usability. Do this by designing, implementing, and testing only the most essential scenarios for your selected use cases. In later Construction iterations, focus on completeness until you eventually design, implement, and test all scenarios of the selected use cases.

Design the Database

During Elaboration, you made a first-draft implementation of the database. In the Construction phase, additional columns may be added to tables, views may be created to support query and reporting requirements, and indexes may be created to optimize performance, but major restructuring of tables should not occur (that would be a sign that the architecture had not stabilized and that the Construction phase was started prematurely).

Implement and Unit-Test Code

Iteration planning is primarily a matter of deciding which use cases to implement and test, and when. Use-case implementation is done component by component.
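The additive, non-restructuring database evolution described under Design the Database can be sketched against an in-memory SQLite database. The orders table and its columns are invented for illustration; the point is that Construction-phase changes add columns, views, and indexes without reshaping existing tables.

```python
import sqlite3

# In-memory database standing in for the first-draft schema from Elaboration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

# Construction-phase changes are additive: a new column, a reporting view,
# and a performance index -- no restructuring of the existing table.
conn.execute("ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'open'")
conn.execute(
    "CREATE VIEW open_orders AS "
    "SELECT id, customer, total FROM orders WHERE status = 'open'"
)
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# Existing code that inserts and queries orders keeps working unchanged.
conn.execute("INSERT INTO orders (customer, total) VALUES ('Acme', 125.0)")
rows = conn.execute("SELECT customer, total FROM open_orders").fetchall()
print(rows)  # [('Acme', 125.0)]
```

If a change of this kind cannot be expressed additively, that is the warning sign the text mentions: the schema, and possibly the architecture, was not stable when Construction began.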
Generally, by the time you get to Construction, some of the components have already been implemented or partially implemented, and for layered system architectures, most of the components in the lower layers are already implemented. Figure 8.5 shows how component implementations generally evolve over time.

Figure 8.5. Evolution of Components over Time. As time progresses, components become more and more complete, with lower layer components being finished more rapidly. Some higher layer components need to be implemented to drive requirements down to lower layers and to enable effective testing of lower layer components.
Developers need to test their implementations continuously to verify that they behave as expected. To test a component, you may need to design and implement test drivers and test stubs that emulate the other components it interacts with. A visual modeling tool may be able to generate these test drivers and stubs automatically. Once you have the stubs, you can run a number of test scenarios. Usually, test scenarios are derived from the use-case scenarios in which the components participate, since the use-case scenarios identify how the components will interact when users are running the application. You also look at the nonfunctional requirements to understand any other constraints that need to be tested.

Do Integration and System Testing

When producing a build, components are integrated in the order specified in the integration build plan. Usually, the build is subjected to a minimal integration test by the integration team before being fully tested. To increase quality, continuously integrate and test your system. To minimize testing costs, automate regression testing so you can run hundreds or thousands of regression tests daily or weekly against the current build, thereby ensuring that newly introduced defects are found rapidly. The following steps will help you in your testing effort:
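The test-stub technique described above can be sketched in Python. The component and method names here (OrderProcessor, an inventory component with a reserve operation) are hypothetical, invented for illustration: a stub emulates a collaborating component that is not yet implemented, so the component under test can be exercised in isolation, and the resulting tests can then run unattended as part of an automated regression suite.

```python
import unittest
from unittest.mock import Mock

# Hypothetical component under test: it depends on an inventory component
# that may be unimplemented or only stubbed at this point in Construction.
class OrderProcessor:
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty):
        # The collaborating inventory component is replaced by a stub in tests.
        return "confirmed" if self.inventory.reserve(item, qty) else "backordered"

class OrderProcessorTest(unittest.TestCase):
    def test_confirms_when_stock_is_available(self):
        stub = Mock()                      # test stub emulating the inventory component
        stub.reserve.return_value = True
        self.assertEqual(OrderProcessor(stub).place_order("widget", 2), "confirmed")

    def test_backorders_when_stock_is_missing(self):
        stub = Mock()
        stub.reserve.return_value = False
        self.assertEqual(OrderProcessor(stub).place_order("widget", 2), "backordered")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Because such tests need no human interaction, they are exactly the kind that can be run daily or weekly against each build for regression purposes.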
See Chapter 18 for more information on testing.

Early Deployments and Feedback Loops

Performing frequent builds forces continuous integration and verification that the code works. Integration and system testing also reveal many quality issues. Additionally, it is crucial to get early feedback on whether the application is useful and provides the desired behavior, by exposing it to actual users. For example, the application may be performing according to requirements, but the requirements may not quite make sense. This is especially important when developing unprecedented applications or applications in unfamiliar domains, where it is difficult to assess what the real requirements are. Future users of the system often do not want to, or are not able to, spend time on early versions of the application. It may, for example, be hard to convince any one user to spend time providing you with feedback, since the benefits may not be obvious to the users. This is often the case when building commercial products, where the identity of future users is unknown. During early stages of Construction, the application may be hard to use, cumbersome to install, and filled with workarounds, so much so that it is difficult to put it in the hands of the target user group without active hand-holding. Based on your needs for feedback and the availability of customers to provide it, you should choose an approach for getting feedback that provides value both to the development team and to the future users of the system. These approaches include
Typical results of successful early deployments and feedback loops include verification of whether requirements are right or need to be modified, feedback on usability and performance, and identification of insufficient capabilities. Testing in a development environment that is not equivalent to the target (production) environment may produce misleading results. Organizations that focus on tight quality control may need to invest in a separate environment that is equivalent to the target environment. This simulated environment enables frequent test builds and more accurate test results.

Prepare for Beta Deployment

A beta deployment is "prerelease" testing in which a sampling of the intended audience tries out the product. Beta deployment is done at the end of the Construction phase and is the primary focus of the Transition phase. A successful beta program needs to be prepared in Construction. Beta testing serves two purposes: First, it tests the application through a controlled actual implementation, and second, it provides a preview of the upcoming release. The deployment manager needs to manage the product's beta test program to ensure that both of these purposes are served. It is important to get a good sampling of the intended audience by making sure that you have both novice and experienced users, and users in different environments and with different needs. This variety will help ensure that all aspects of the product are properly tested. It is also essential that the product is complete, based on the scope management that has occurred during the iterations. Although all features should be implemented, it is acceptable to have some unresolved quality issues, such as an unstable element (as long as it does not cause data loss), Help files or dialog boxes with less than optimal crispness in their guidance, or a partial implementation of a rarely used function.
You need to include installation instructions, user manuals, tutorials, and training material, or you will not get feedback on them from the beta testers. This supporting material is essential, but unfortunately it often is not included.

Prepare for Final Deployment

For many projects, you need to prepare for the final deployment in Construction (and sometimes earlier, during Elaboration). These activities typically include
We describe these and other activities related to the final deployment in more detail in Chapter 9. For our three example projects, the work in Construction is not significantly different with respect to coding, integration, and testing, except for the number of people involved. There are, however, differences in the activities related to deployment, which are outlined here: