Validation is an essential part of engineering practice, and validating documentation is no different. Recall the seventh rule of sound documentation: Review documentation for fitness of purpose. Validation lets you make certain that the documentation you have expended considerable effort to produce will in fact be useful to the communities you aimed to serve. It is about making sure that the architecture you have designed is documented well enough that people can understand and use it.
A review can answer a number of questions about the architecture documentation's consistency:
A review requires reviewers. Reviewing for the first question, about consistency with stakeholder needs, requires stakeholders. Good architectural practice calls for early identification of stakeholders, and their input can be solicited during the planning of the documentation suite. Ask them, individually, in small groups, or in a documentation planning workshop, what their documentation concerns are. They can articulate those concerns as scenarios. "I want to know how to exercise the variability in an architecture," a stakeholder might say, "so that I can field a product in a product line." Or, "I will need to know how to change the architecture to accommodate the next generation of middleware," another might say. You can use these scenarios to help plan the documentation package and then later to validate it by asking the stakeholders to walk through the documentation and carry their scenarios out.
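A simple record-keeping aid can make these scenario walkthroughs auditable. The sketch below is only illustrative; the stakeholder roles and scenario texts are taken from the examples above, and all field and function names are invented:

```python
from dataclasses import dataclass

@dataclass
class ReviewScenario:
    """A stakeholder scenario used to validate the documentation suite."""
    stakeholder: str
    scenario: str
    outcome: str = "not exercised"  # becomes "passed" or "failed" after walkthrough

# Hypothetical scenarios gathered from the stakeholder interviews described above.
scenarios = [
    ReviewScenario("product builder",
                   "Exercise the variability in the architecture to field a product"),
    ReviewScenario("maintainer",
                   "Change the architecture to accommodate the next middleware generation"),
]

def unresolved(scenarios):
    """Scenarios the stakeholders have not yet carried out successfully."""
    return [s.scenario for s in scenarios if s.outcome != "passed"]

# Before any walkthrough, every scenario is still open.
open_items = unresolved(scenarios)
```

Keeping the outcomes with the scenarios gives the documentation planners a running list of which stakeholder concerns the package has been shown to satisfy.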
This technique is a form of active design review, whereby reviewers are actively engaged to exercise the artifact they are reviewing, not just look it over and scan for defects. For all the questions cited, active design reviews are recommended. Here is what David Weiss, one of the creators of active design reviews, has to say about them:
Starting in the early 1970s, I have had occasion to sit in on a number of design reviews, in disparate places in industry and government. I had a chance to see a wide variety of software developers conduct reviews, including professional software developers, engineers, and scientists. All had one thing in common: the review was conducted as a (usually large) meeting or series of meetings at which designer(s) made presentations to the reviewers, and the reviewers could be passive and silent or could be active and ask questions. The amount, quality, and time of delivery of the design documentation varied widely. The time that the reviewers put into preparation varied widely. The participation by the reviewers varied widely. (I have even been to so-called reviews where the reviewers are cautioned not to ask embarrassing questions, and have seen reviewers silenced by senior managers for doing so. I was once hustled out of a design review because I was asking too many sharp questions.) The expertise and roles of the reviewers varied widely. As a result, the quality of the reviews varied widely. In the early 1980s, Fagan-style code inspections were introduced to try to ameliorate many of these problems for code reviews. Independently of Fagan, we developed active design reviews at about the same time to ameliorate the same problems for design reviews.
Active design reviews are designed to make reviews useful to the designers. They are driven by questions that the designers ask the reviewers, reversing the usual review process. The result is that the designers have a way to test whether or not their design meets the goals they have set for it. To get the reviewers to think hard about the design, active reviews try to get them to take an active role by requiring them to answer questions rather than to ask questions. Many of the questions force them to take the role of users of the design, sometimes making them think about how they would write a program to implement (parts of) the design. In an active review, no reviewer can be passive and silent.
We focus reviewers with different expertise on different sets of questions so as to use their time and knowledge most effectively. There is no large meeting at which designers make presentations. We conduct an initial meeting where we explain the process and then give reviewers their assignments, along with the design documentation that they need to complete their assignments.
Design reviews cannot succeed without proper design documentation. Information theory tells us that error correction requires redundancy. Active reviews use redundancy in two ways. First, we suggest that designers structure their design documentation so that it incorporates redundancy for the purpose of consistency checking. For example, module interface specifications may include assumptions about what functionality the users of a module require. The functions offered by the module's interface can then be checked against those assumptions. Incorporating such redundancy is not required for active design reviews but certainly makes it easier to construct the review questions.
Second, we select reviewers for their expertise in certain areas and include questions that take advantage of their knowledge in those areas. For example, the design of avionics software would include questions about devices controlled or monitored by the software, to be answered by experts in avionics device technology, and intended to insure that the designers have made correct assumptions about the characteristics, both present and future, of such devices. In so doing, we compare the knowledge in the reviewers' heads with the knowledge used to create the design.
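The first use of redundancy, checking a module interface against its documented assumptions about user needs, can be sketched mechanically. The module contents and names below are invented for illustration and are not part of Weiss's method:

```python
# A minimal sketch of redundancy-based consistency checking: the documented
# assumptions about what functionality users require are compared against the
# functions the module's interface actually offers. Each mismatch is a
# candidate review question for the designers.

interface_spec = {
    # Functionality the module's users are assumed to require.
    "assumptions": {"open_channel", "close_channel", "send_message"},
    # Functions the interface actually offers.
    "functions": {"open_channel", "close_channel", "receive_message"},
}

def consistency_gaps(spec):
    """Return assumptions with no supporting function, and functions that
    no documented assumption calls for."""
    assumed = spec["assumptions"]
    offered = spec["functions"]
    return {
        "unsupported_assumptions": sorted(assumed - offered),
        "unmotivated_functions": sorted(offered - assumed),
    }

gaps = consistency_gaps(interface_spec)
```

Here the check would surface that `send_message` is assumed but never offered, and that `receive_message` is offered with no assumption motivating it; either finding is exactly the kind of false assumption or omission an active review is meant to catch.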
I have used active design reviews in a variety of environments. With the proper set of questions, appropriate documentation, and appropriate reviewers, they never fail to uncover many false assumptions, inconsistencies, omissions, and other weaknesses in the design. The designers are almost always pleased with the results. The reviewers, who do not have to attend a long, often boring, meeting, like being able to go off to their desks and focus on their own areas of expertise, with no distractions, on their own schedule. One developer who conducted an active review under my guidance was ecstatic with the results. In response to the questions she used, she had gotten more than 300 answers that pointed out potential problems with the design. She told me that she had never before been able to get anyone to review her designs so carefully.
Of course, active reviews have some difficulties as well. As with other review approaches, it is often difficult to find reviewers who have the expertise that you need and who will commit the time that is required. Since the reviewers operate independently and on their own schedule, you must sometimes harass them to get them to complete their reviews on time. Some reviewers feel that there is a synergy in large review meetings that ferrets out problems that may be missed by individual reviewers carrying out individual assignments. Perhaps the most difficult aspect is creating design documentation that contains the redundancy that makes for the most effective reviews. Probably the second most difficult aspect is devising a set of questions that force the reviewer to be active. It is really easy to be lured into asking questions that allow the reviewer to be lazy. For example, "Is this assumption valid?" is too easy. In principle, much better is "Give two examples that demonstrate the validity of this assumption, or a counterexample." In practice, one must balance demands on the reviewers with expected returns, perhaps suggesting that they must give at least one example but two are preferable.
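The distinction between lazy and active questions can even be screened for roughly. The sketch below uses invented keyword heuristics (not part of Weiss's method) to flag questions that can be answered passively with a yes or no, versus those that demand examples or counterexamples:

```python
# Illustrative heuristic: yes/no openers let a reviewer be passive, while
# imperative openers force the reviewer to produce something. The keyword
# lists are assumptions for this sketch, not an established taxonomy.

PASSIVE_OPENERS = ("is ", "are ", "does ", "do ", "can ")
ACTIVE_OPENERS = ("give ", "list ", "write ", "show ", "describe ")

def is_active(question: str) -> bool:
    """Return True if the question demands work from the reviewer."""
    q = question.strip().lower()
    return q.startswith(ACTIVE_OPENERS)

questions = [
    "Is this assumption valid?",  # lazy: answerable with a shrug
    "Give two examples that demonstrate the validity of this assumption, "
    "or a counterexample.",       # active: forces engagement
]
active_questions = [q for q in questions if is_active(q)]
```

Such a filter could serve as a quick sanity pass over a question set before review assignments go out, though judging whether a question truly forces engagement remains a human call.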
Active reviews are a radical departure from the standard review process for most designers, including architects. Since engineers and project managers are often conservative about changes to their development processes, they may be reluctant to try a new approach. However, active reviews are easy to explain and easy to try. The technology transfers easily and the process is easy to standardize; an organization that specializes in a particular application can reuse many questions from one design review to another. Structuring the design documentation so that it has reviewable content improves the quality of the design even before the review takes place. Finally, reversing the typical roles puts less stress on everyone involved (designers no longer have to get up in front of an audience to explain their designs, and reviewers no longer have to worry about asking stupid questions in front of an audience) and leads to greater productivity in the review.
A Glossary Would Have Helped
A colleague told me recently about an architecture review he attended for a distributed military command-and-control system, a major function of which was the tracking of ships at sea. "A major topic of interest was how the common operational picture handled tracks," he wrote. "But it was clear that the word track was hopelessly overloaded. The person making the presentation caused some of this confusion by using the word track to mean the following:
The age, accuracy, and implicit history of each type of track mentioned are different. The person making the presentation was knowledgeable and easily changed context to answer questions as necessary. But the result was that people left the meeting with different impressions of the details of the system's capabilities and were somewhat confused as to how the common operational picture was to be displayed on each type of mobile platform and ground station."
A glossary was sorely needed, my colleague agreed. Even if everyone on your project has the identical vision for each of your specialized terms (which is highly unlikely), remember the wide audience of stakeholders for whom you're preparing your documentation. Taking the time to define your terms will reduce confusion and frustration later on, and the effort will more than likely pay for itself in saved time and rework.