Equally important when completing a project is learning from its successes and failures. As a good practice, we recommend a wrap-up session with the entire team after each software release. It gives us an opportunity to celebrate as well as to brainstorm about best practices, newly used tools, and ways to improve processes, communication, or customer relationships.
The project has concluded, and we now want to take a closer look at some of the practices we applied throughout the book, analyzing what worked well and where we see room for improvement.
13.5.1 Project Planning
Throughout almost all chapters in this book, we had to revisit project planning to adjust to events and new circumstances. Revising a project plan is a necessity in most software projects and does not mean that the initial plan was simply bad. Far worse is to keep working from an outdated project plan and to fail to communicate with the customer throughout the project.
Within this project we had a very good experience planning the entire project in dedicated phases and relatively small iterations. All iterations had clearly defined go/no-go criteria, a practice that allowed us to identify and address project and schedule risks at any given time. The Unified Process worked well for this project, and we will continue to follow it for similar projects.
13.5.2 Requirements Refinement and Customer Feedback
In most projects, requirements change or are added throughout the life cycle. For the Unified Process, which is an iterative and requirements-driven approach, refining requirements in all iterations is part of the process itself. Refined or even newly added requirements are not synonymous with feature creep; instead, they allow us to identify new risks and adjust the project plan and schedule if necessary. Involving the customer throughout these iterations provides valuable feedback on features and priorities.
13.5.3 Prototyping
Prototypes can provide valuable input for design decisions that deal with technically challenging aspects, and prototypes can even be used to evaluate new technologies, as exercised in Chapter 3. Specifically, throw-away prototyping, where code developed during prototyping is not intended to be used in the product, seems to be a good way of separating out single aspects and solving them one by one instead of "prototyping" the entire product up front. Clearly defined goals also help us stay focused on the issue under investigation.
Despite the name, the code and findings of a throw-away prototype are not necessarily discarded. Instead, they should be archived and referenced in design documents or used for training purposes. For example, the SmartNotes application from Chapter 3 can be incorporated into our in-house .NET overview training as a hands-on session.
13.5.4 Use of Unified Modeling Language and Design Patterns
We often used UML diagrams for visualization, a practice that helped in the abstraction and analysis of problems. The advantage of using the standardized notation of UML is that it is widely acknowledged as the industry standard and therefore is understood by many managers, developers, and testers in the field. This is particularly important when external contractors are added to a project.
The UML standard covers a variety of notations and diagrams for different purposes, several of which were used within this project.
Furthermore, a large number of design patterns documented in UML can be seamlessly integrated into the project.
13.5.5 Software Documentation
Proper documentation of software's requirements, function, design, and code greatly improves quality and maintainability. In this project we created all software documentation as XML documents or as XML comments embedded in the source code, together with XSL stylesheets for formatting. The structure enforced by XML greatly simplified tracing all requirements through the workflows. Visual Studio supports generating complete source documentation as a single XML document, including custom tags, so tracing requirement keys to source code required only a very short XSL formatting file. Visual Studio's integrated support for embedded code comments also made it easy to create complete, hyperlinked source documentation in HTML format.
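As a minimal sketch of the embedded XML documentation style described above, consider the following class. The class, method, and comment text are hypothetical stand-ins, not taken from the project's actual code:

```csharp
// Hypothetical sketch of embedded XML documentation; the class and
// method are illustrative, not from the photo editor's real code base.
namespace PhotoEditor
{
    public class CropTool
    {
        /// <summary>Computes the size of a crop region.</summary>
        /// <param name="width">Width of the crop region, in pixels.</param>
        /// <param name="height">Height of the crop region, in pixels.</param>
        /// <returns>The number of pixels in the cropped region.</returns>
        /// <remarks>Compiling with the /doc compiler option collects these
        /// comments, including any custom tags, into a single XML file
        /// that an XSL stylesheet can render as hyperlinked HTML.</remarks>
        public int CropArea(int width, int height)
        {
            return width * height;
        }
    }
}
```

Compiling with `/doc:PhotoEditor.xml` extracts all such comments into one XML document, which is then the input to the XSL transformation.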
However, we also observed a couple of drawbacks. First, writing large documents such as the requirements specification in a simple text editor is not to everyone's taste. Unless you are familiar with a feature-rich editor, you can easily lose track of the document's structure while working on these plain text files. Furthermore, the formatting errors we previously encountered in documents created in Word have now been replaced by spelling errors due to the missing spell-checker. We therefore need to find a tool that combines the advantages of both approaches: either word-processing software that can export structured XML documents (or offers third-party tools to do so), or a text editor that integrates features such as spell-checking and syntax highlighting to ease the editing process.
Second, despite the polished look of the generated HTML source code documentation, many links lead to pages that contain very little useful information. Much of the added functionality is based on the .NET Framework class library, which is not linked into the generated HTML documentation. The few comments on newly added methods are spread loosely throughout the documentation and usually do not give the reader a good understanding of what is implemented and how.
We can try to increase the usefulness of the generated HTML documentation by changing our processes. For example, we could make the generated documentation part of every code review, examining the HTML documentation before the actual implementation. Reviewers should then understand the concepts of the implementation before looking at the code itself.
13.5.6 Automated Testing
Considerable effort was spent throughout the project on implementing unit tests, which allowed potential design issues, bugs, and even problems in third-party software to be identified early on. We did not experience any major problems during integration of the components, nor did we find any bugs during our integration/system testing. This result can be credited to the strict process that required unit tests for each iteration in the construction phase and tied them directly to the go/no-go criteria for those iterations. Fully automated tests based on the NUnit framework allowed us to rerun all tests very frequently and thereby discover potential side effects. Even during development of the Web application, unit testing was a substantial factor in quality improvement.
In later projects we will definitely continue developing unit tests with the NUnit framework. The command-line version of the framework should be used to introduce nightly test runs, combined with a nightly build to ensure that code changes do not break already implemented functionality.
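A minimal NUnit fixture in the style used throughout the project might look as follows. The `CropTool` class under test is a hypothetical stand-in for one of the project's real components:

```csharp
using NUnit.Framework;

// Hypothetical class under test, included so the sketch is self-contained.
public class CropTool
{
    public int CropArea(int width, int height)
    {
        return width * height;
    }
}

// Illustrative NUnit fixture: the [TestFixture] and [Test] attributes let
// the NUnit runner discover and execute the test automatically.
[TestFixture]
public class CropToolTest
{
    [Test]
    public void CropAreaReturnsPixelCount()
    {
        CropTool tool = new CropTool();
        Assert.AreEqual(200, tool.CropArea(20, 10));
    }
}
```

Running such fixtures through the console runner (for example, `nunit-console PhotoEditor.Tests.dll`, assembly name illustrative) is what makes the nightly test runs mentioned above scriptable.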
The table format used in the integration/system test specification has proven to be a good choice. Any tester in our organization was able to execute the tests, and the format left no room for interpretation. In future releases we want to start automating the collection and evaluation of test results by further refining the generated HTML test specification so that the test results are collected directly from the selected radio buttons. For this purpose the test specifications will be accessible through the intranet server to authenticated testers, who can then enter the test results online via a tablet PC.
13.5.7 Error Handling through Exceptions
Using exceptions to propagate fatal errors (instead of using function return status) greatly increased our productivity. With the exception publisher, additional information about the error, including the source code location, was immediately available. When we used function return status instead of exceptions, the only information a tester had was that something had failed; it was then up to a developer to "dig in" and debug step-by-step through hundreds of lines of code until reaching the area where the error occurred.
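The difference can be sketched in a few lines. In this hypothetical example (the `LoadImage` method and file name are illustrative), the exception carries both a message and a stack trace, so the failing location is visible without stepping through the code:

```csharp
using System;

class Loader
{
    // Propagates a fatal error as an exception instead of a status code.
    static void LoadImage(string path)
    {
        if (!System.IO.File.Exists(path))
            throw new ApplicationException("Image not found: " + path);
        // ... load the file ...
    }

    static void Main()
    {
        try
        {
            LoadImage("missing.jpg");
        }
        catch (ApplicationException ex)
        {
            // The message says what failed; the stack trace says where
            // (down to the method, and the line when debug symbols are
            // present), replacing step-by-step debugging.
            Console.WriteLine(ex.Message);
            Console.WriteLine(ex.StackTrace);
        }
    }
}
```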
13.5.8 Designing for Extensibility
Most software projects start small. However, in many cases numerous new features are added over time, often exceeding the original number of lines of code by an order of magnitude. It is therefore desirable to design a product for extensibility early on.
This may seem obvious now, but from our experience it is often neglected in the beginning stage of a project. Separating software not only logically but also physically (for example, into dynamically loaded plugins) can speed build time as well as help us reuse those modules in other projects. Modules that have not changed during a specific software release can be taken from an archive and do not necessarily need to be built from scratch or retested. In the photo editor application, all special effects and image filter operations are encapsulated into a dynamically loadable plugin, reducing build and testing effort in future releases and also allowing us to deploy bug fixes or performance improvements tailored to only one specific operation.
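The plugin approach described above can be sketched with .NET reflection. The `IImageFilter` interface and assembly handling here are hypothetical simplifications of the photo editor's real plugin contract:

```csharp
using System;
using System.Reflection;

// Hypothetical plugin contract; the photo editor's actual interface for
// special effects and image filters will differ.
public interface IImageFilter
{
    string Name { get; }
}

class PluginHost
{
    // Loads a plugin assembly at run time and instantiates every
    // concrete type that implements IImageFilter. Because filters live
    // in separately built DLLs, a bug fix to one filter can be deployed
    // without rebuilding or retesting the rest of the application.
    static void LoadFilters(string assemblyPath)
    {
        Assembly plugin = Assembly.LoadFrom(assemblyPath);
        foreach (Type t in plugin.GetTypes())
        {
            if (typeof(IImageFilter).IsAssignableFrom(t) && !t.IsAbstract)
            {
                IImageFilter filter = (IImageFilter)Activator.CreateInstance(t);
                Console.WriteLine("Loaded filter: " + filter.Name);
            }
        }
    }
}
```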
13.5.9 Code Reuse
Building software is a time-consuming, resource-intensive, and therefore very expensive process. To stay competitive, it is good practice to write reusable software components. This must be considered when you plan a project, because developing reusable software initially demands greater effort in design, documentation, and testing.
.NET's Platform Invocation service (PInvoke) also allows the seamless integration of unmanaged legacy code such as COM objects. Although programmers tend to have reservations about using someone else's creation, there is no need to reinvent the wheel if an old COM object that has been stabilized and optimized over time still fulfills the requirements of a given task. PInvoke is an important feature that allows us to migrate applications to .NET gradually, without introducing the major risks of rewriting the existing code.
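In its simplest form, PInvoke declares a native entry point with the `DllImport` attribute. The sketch below calls the Win32 `GetTickCount` function from `kernel32.dll` (a real, Windows-only API); a legacy C DLL is declared the same way, substituting the DLL name and function signature:

```csharp
using System;
using System.Runtime.InteropServices;

class NativeMethods
{
    // PInvoke declaration: the runtime loads kernel32.dll and marshals
    // the call to the unmanaged GetTickCount function, which returns
    // the milliseconds elapsed since the system was started.
    [DllImport("kernel32.dll")]
    static extern uint GetTickCount();

    static void Main()
    {
        Console.WriteLine("Milliseconds since boot: " + GetTickCount());
    }
}
```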