Testing a Solution


Clarity Point

Testing during a Build Track involves assessing the state of quality of a solution from a specification compliance perspective (referred to as verification). Making sure a solution works as expected from a user perspective is performed during a Stabilize Track (referred to as validation).


As mapped out in a test plan, testing either precedes a build effort (e.g., test-driven development) or follows a build effort. Leading with testing is a form of further specifying a solution and validating a design. It also helps with establishing a shared vision because a solution is wholly described by a set of test cases. Many methodologies (e.g., eXtreme programming) exist and even more books describe them, so I do not explain them herein.

The other mode of testing trails a build effort. Depending on selected build methodology, testing could be performed in conjunction with building and designing as part of very small iterations (e.g., design a little, build a little, test a little) or longer iterations with more solution development between testing. As such, it is hard to provide guidance and best practices other than that commonly found across methodologies. Because most methodologies have the concept of a "test team," the following likewise references a test team, but keep in mind that it could also be a virtual team.

Either way, it is important to understand that testing is not just for testers. Testing can be performed by anyone helping a team member improve the completeness and/or quality of his or her work item deliverable. This includes a wide spectrum of testing options from informal testing such as peers checking each other's work (i.e., buddy testing) to formal testing where an independent team runs through a rigorous battery of tests. The goal of having everyone testing is to help expose issues, uncover design flaws, and identify unexpected behavior.

In the beginning of a Build Track, a test team starts to refine test scenarios drafted during planning and, in accordance with a Test Plan, develops the necessary elements needed for testing. This includes test cases with expected outcomes, test harnesses to drive testing, automated and manual test scripts, and test data for both positive and negative testing.[1] It means making test goals real and measurable for the various aspects of a solution. It means mapping out the details of how to implement the various types of testing to be performed in a Build Track as well as in the Stabilize and Deploy Tracks.

[1] Positive testing is testing within specified parameters (e.g., entering a realistic numerical age within an age field). Negative testing is testing outside of the specified parameters (e.g., entering letters or a negative number within an age field).
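The distinction can be sketched in code. Below is a hypothetical `parse_age` validator (not from the book) with one positive test inside the specified parameters and several negative tests outside them:

```python
def parse_age(value: str) -> int:
    """Parse an age field, accepting only whole numbers in a realistic range."""
    age = int(value)  # raises ValueError for non-numeric input such as letters
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

# Positive test: input within the specified parameters succeeds.
assert parse_age("42") == 42

# Negative tests: letters, a negative number, and an unrealistic value
# are all outside the specified parameters and must be rejected.
for bad in ("abc", "-5", "200"):
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: the validator rejects out-of-spec input
    else:
        raise AssertionError(f"accepted invalid age: {bad!r}")
```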

Testing basically involves two areas of activity: finding where gaps are between what was built and what was specified (later in a Stabilize Track, what was expected is also included); and managing gap remediation. Typically, gaps are identified by different types of testing (e.g., regression testing) and resolved through a structured process (e.g., issue tracking cycle). Each of these areas is discussed next.

Types of Tests

Gaps in what was built versus what was specified are identified throughout a build process by using different types of tests. Although many testing philosophies, approaches, and ways to instrument testing exist, what needs to be accomplished through the different types of tests (sometimes called by different names) is basically the same. The types of tests typically involve team members testing their own work item deliverables (e.g., unit testing for software developers); integrating their work item deliverables into a larger collection of deliverables built by other team members (e.g., integration testing); and verifying that what was previously built is still functional with the addition of new solution elements (e.g., regression testing).
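As an illustration only, here is what a developer's unit test for a single work item deliverable might look like, using Python's standard `unittest` module (the `apply_discount` function is an invented example, not from the book); a regression run would simply re-execute the accumulated suite after each new solution element is added:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Work item deliverable under test: price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountUnitTests(unittest.TestCase):
    """Unit tests: the developer verifies their own deliverable in isolation."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25.0), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150.0)

# Run the suite programmatically; a regression pass re-runs the full
# accumulated suite whenever new solution elements are integrated.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountUnitTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Integration tests follow the same mechanics but exercise several deliverables wired together rather than one in isolation.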

Although what is tested and how it is tested should be spelled out in a test plan, the concept of coverage testing is commonly used during a Build Track. Coverage testing indicates the collective volume of testing from a whole solution perspective (e.g., test cases encompass testing 83 percent of a solution). Coverage is achieved by collectively testing the various features and capabilities contained within a solution. The level of testing completeness can vary depending on what is called for in a test plan. Typically, critical areas of a solution are thoroughly tested while the less critical areas might call for only spot testing (i.e., sampling).
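At its simplest, feature-level coverage is just the fraction of a solution's features and capabilities that have test cases, which can be computed directly. The feature names below are invented for illustration:

```python
# Hypothetical inventory of a solution's features and the subset that
# currently has test cases mapped to it.
features = {"login", "search", "checkout", "reporting", "admin", "export"}
tested = {"login", "search", "checkout", "reporting", "admin"}

coverage = len(tested & features) / len(features)
print(f"feature coverage: {coverage:.0%}")  # 5 of 6 features -> 83%
```

Critical features would each carry many test cases; less critical ones might be spot tested, so this single number is only a coarse, whole-solution indicator.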

Lesson Learned

Teams typically find it relatively easy to assemble tests that cover up to 60 percent of a solution's features and capabilities. They often need to get increasingly creative to come up with test cases to raise the coverage to 95 percent. These test cases typically are situations that infrequently happen in production. To raise the coverage to 100 percent typically involves test cases that are not representative of normal production activities. These test cases often involve contrived situations.


Issue Tracking and Remediation

Testing identifies gaps between current solution behavior and specified behavior. These gaps are referred to by a variety of names (e.g., bugs, defects, issues, problems). For the purposes of this discussion, these gaps are referred to herein as solution issues. Issues also arise from project governance and are referred to as project issues, as depicted in Figure 9-2. All issues need to be tracked, and those deemed necessary to address and mitigate for a given release must be worked into a project work queue (e.g., referred to in software development as defects or bugs that must be addressed before solution release). This is to say that some issues are acknowledged but not addressed for a given solution release, if at all.

Figure 9-2. Two sources of issues


As should be outlined in the Risk and Issue Management Plan, an issue-tracking process is needed to handle issues systematically, efficiently, and expediently. As with much within MSF, the process can be formal or informal. At its core, it needs to address the intent of the process steps shown in Figure 9-3.

Figure 9-3. Issue tracking process


Step 1: Report

Issues can be identified and logged into the issue-tracking tool by anyone. As expected, the issue should be described as completely as possible so the team can understand the issue, its source or stimulus, and its perceived impact. It sometimes helps if the originator assigns a perceived rating for the issue in terms of severity (i.e., the degree of perceived impact) and priority (i.e., the degree of perceived urgency).
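A reported issue might be captured with a record like the following sketch (the field names and 1-to-4 rating scales are assumptions for illustration, not a schema the book defines):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IssueReport:
    """Fields an originator fills in when logging an issue."""
    title: str
    description: str       # as complete as possible: symptoms, source/stimulus
    perceived_impact: str  # what the originator believes is affected
    severity: int          # 1 (cosmetic) .. 4 (blocking); originator's perception
    priority: int          # 1 (someday) .. 4 (urgent); originator's perception
    reported_by: str = "unknown"
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

issue = IssueReport(
    title="Age field accepts negative numbers",
    description="Entering -5 in the age field is saved without an error.",
    perceived_impact="Invalid data reaches the reporting subsystem",
    severity=3,
    priority=2,
    reported_by="tester_01",
)
```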

Step 2: Prioritize and Assign

Once the issue has been submitted, it is reviewed and its ratings are revisited and adjusted, if necessary, to make sure they are consistent with established project ratings (i.e., the originator might perceive ratings differently than a project issue rating scale would). As with the risk impact ratings discussed in Chapter 5, "Managing Project Risks," issues are prioritized based on a combination of their severity and priority.

Issues are then assigned to an appropriate team member for resolution. As part of the assignment step, work queues for potential assignees are considered. Once an assignment is made, issue resolution is added to that person's work queue. Note that sometimes an issue is assigned to the appropriate team lead so that person can decide who on the team should resolve the issue.
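One plausible way to combine the two ratings and weigh assignee work queues (the multiplication rule, queue sizes, and names here are invented, not prescribed by MSF) is:

```python
# Open work items currently queued per potential assignee (hypothetical).
work_queues = {"dev_a": 4, "dev_b": 2, "lead_c": 7}

def triage(severity: int, priority: int) -> int:
    """Combined rank: higher means address sooner (each rating on a 1-4 scale)."""
    return severity * priority

def assign() -> str:
    """Assign to the team member with the lightest work queue."""
    assignee = min(work_queues, key=work_queues.get)
    work_queues[assignee] += 1  # resolving the issue joins their queue
    return assignee

rank = triage(severity=3, priority=2)  # 6 on a 1-16 scale
owner = assign()                       # dev_b had the shortest queue
```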

Step 3: Resolve

It might be hard to believe, but sometimes the best course of action to resolve an issue is not to take any action. As such, an issue is first reviewed and a determination is made as to how to resolve it. The following are typical issue resolution recommendations:

  • Fix: Resolve for this solution release

  • Duplicate: Has already been identified

  • Postpone: Defer to a later release

  • Can't reproduce: Could not be reproduced in a controlled environment

  • By design: Not an issue because solution performs as specified

  • Decide not to fix: Known issue that will not be addressed (i.e., willing to live with issue)

  • Feature request: Not an issue because it ends up being a new feature not currently supported by a solution
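In an issue-tracking tool, these recommendations are typically a closed set of codes. A minimal sketch (the enum values mirror the list above; the "keeps active" grouping anticipates Step 4, where deferred issues remain open for the next release):

```python
from enum import Enum

class Resolution(Enum):
    """Typical issue resolution recommendations."""
    FIX = "resolve for this solution release"
    DUPLICATE = "has already been identified"
    POSTPONE = "defer to a later release"
    CANT_REPRODUCE = "could not be reproduced in a controlled environment"
    BY_DESIGN = "solution performs as specified"
    WONT_FIX = "known issue that will not be addressed"
    FEATURE_REQUEST = "a new feature not currently supported by the solution"

# Resolutions that defer action keep the issue active for the next release.
KEEPS_ISSUE_ACTIVE = {Resolution.POSTPONE}
```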

Step 4: Close

Once the recommended resolution is considered and approved, the assignee takes the appropriate action to close the issue for the current release. Those recommendations that defer action keep the issue active for consideration in the next release.

Step 5: Retire

Issues that are closed out (i.e., are resolved or no longer tracked for later consideration) are retired. If possible, lessons learned are extracted and shared with the team.
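Taken together, the five steps form a simple lifecycle that a tracking tool can enforce. A minimal state-machine sketch of the cycle in Figure 9-3 (state names and the rework transition are assumptions for illustration):

```python
# Allowed transitions for the report -> prioritize/assign -> resolve ->
# close -> retire cycle. A rejected resolution loops back for rework.
TRANSITIONS = {
    "reported": {"assigned"},
    "assigned": {"resolved"},
    "resolved": {"closed", "assigned"},  # rework if the resolution is rejected
    "closed": {"retired"},
    "retired": set(),                    # terminal: lessons learned extracted
}

def advance(state: str, new_state: str) -> str:
    """Move an issue to a new state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "reported"
for step in ("assigned", "resolved", "closed", "retired"):
    state = advance(state, step)
```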

Lesson Learned

Use a project's issue database as a starting point for a help desk knowledge base.





Microsoft® Solutions Framework Essentials: Building Successful Technology Solutions
ISBN: 0735623538
Year: 2006
Pages: 137
