One of the management tasks that runs concurrently with your development is test case management. Good code is derived from good tests. Moreover, as with any aspect of a software development project, careful planning determines the effectiveness of your test approach. Truth be told, it's easy to get lost in all of Team System's testing capabilities. This section runs through a practical scenario and shows you how tests are managed end to end.
In this section, we assume you are familiar with Team System tests and have successfully run them. If you are new to the Team System Test Framework, we strongly encourage you to read and try the demos and walkthroughs for Team Edition for Software Testers on the MSDN Web site (http://msdn2.microsoft.com/en-us/library/ms182409.aspx) or look at Professional Visual Studio 2005 Team System (ISBN: 0764584367) for a deeper exploration of the topic.
Test case management involves more than the organization of tests. You need to manage how to properly write test specifications, how to automate and monitor your test runs, how to write effective bug work items, and so forth. The diagram in Figure 13-31 shows one of the many ways test cases can be managed within Team System.
As you can see, test cases are written in parallel with the development workflow to verify the functionality of each feature (or task). Let's look at the process if you are using MSF for Agile Software Development. At the beginning of the project, the project manager receives a list of scenarios and quality-of-service requirements from the business analyst. At this point, the project manager must break down the scenarios into development tasks and, together with the designated tester, work out tests for each scenario. You may want to divide your tests into classes: build verification tests (BVTs), iteration (or milestone) tests, and daily (or nightly) tests.
Let's say we are developing a Web front-end log-in page for an ASP.NET application. Most applications go through a mock-up phase before development. The target application has to have the same look, feel, and functionality as the mock-up graphic (shown in Figure 13-32).
As a project manager, you are given the feature to build as a requirement (or scenario). In MSF, scenarios are written out very specifically. For example, "Harry, the manager, wants to access the administrative site. He clicks the log-in button on the main page of the portal and is led to a page with a user name and password field. He visits the site quite often; therefore, he clicks the Remember Me check box to save his credentials. He then clicks the log-in button to quickly enter the desired site." Some business analysts and project managers prefer to write more generic functional specifications, such as "Build log-in screen to access administrative site." The project manager then has to decompose this scenario or requirement into a set of development tasks. Figure 13-33 shows our scenario as a summary task, with its subtasks, in Microsoft Project.
As the project manager starts figuring out the development tasks, it's also a good time to work out the test process for each task (or feature). In Microsoft Excel, you create test scenarios and test tasks that correspond to your development scenarios and tasks (as shown in Figure 13-34). Note that you can create a new work item type called "test case" to manage all your tests. A question you might be asking is: why create a new work item for each scenario and task? Isn't that duplication? There are two answers to this question. First, you are establishing a workflow for your tester separate from your development workflow. (Team System does not support assigning a work item to two individuals, although you can assign a work item to an entire group in special cases.) Second, these test work items fully document your test approach.
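If you do create a custom "test case" work item type, it is defined in the work item type XML schema and imported into your Team Project with the witimport command-line tool. The fragment below is an abbreviated, hypothetical definition, not a complete one; a real type definition also requires WORKFLOW and FORM sections, which are omitted here for brevity:

```xml
<WITD application="Work item type editor" version="1.0">
  <WORKITEMTYPE name="Test Case">
    <DESCRIPTION>Tracks the authoring and execution of a test.</DESCRIPTION>
    <FIELDS>
      <FIELD name="Title" refname="System.Title" type="String">
        <REQUIRED />
      </FIELD>
      <!-- Additional fields here: test category, automation status, and so on. -->
    </FIELDS>
    <!-- WORKFLOW and FORM sections omitted for brevity. -->
  </WORKITEMTYPE>
</WITD>
```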
Sara Ford has a great blog post called "Developing a Test Specification" in which she describes the elements of a great test plan and specification. You can learn more by reading her post at http://blogs.msdn.com/saraford/archive/2004/10/28/249135.aspx.
As soon as features are implemented, tests are set up side-by-side to make sure that the application is functionally correct. In our scenario, we are building a Web application; therefore, the most logical way to test it is using Team System Web tests. In Figure 13-35, we've built Web tests for two test tasks:
Verify successful login
Verify unsuccessful login
In the real world, you will likely want to set up different log-in verification Web tests according to personas, using different credentials for each persona. We can roll up all these Web tests into a single Ordered test, which represents one of our main scenarios (also shown in Figure 13-35).
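Conceptually, an Ordered test simply executes its member tests in a fixed sequence and rolls their outcomes up into a single result for the scenario. The following is a rough sketch of that aggregation, shown in Python for brevity (the test names are stand-ins for the Web tests above, not real Team System APIs):

```python
# Sketch of Ordered-test semantics: run member tests in a fixed sequence
# and aggregate the outcomes into a single pass/fail result.
# (Team System does this for you; this only illustrates the idea.)

def verify_successful_login():
    # Stand-in for the "Verify successful login" Web test.
    return True

def verify_unsuccessful_login():
    # Stand-in for the "Verify unsuccessful login" Web test.
    return True

# The Ordered test for our "Build log-in screen" scenario.
ordered_test = [verify_successful_login, verify_unsuccessful_login]

def run_ordered(tests):
    """Run tests in order; the run passes only if every member test passes."""
    results = [test() for test in tests]
    return {"total": len(results), "passed": sum(results), "outcome": all(results)}

print(run_ordered(ordered_test))
# prints: {'total': 2, 'passed': 2, 'outcome': True}
```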
Note that you can use the Test Manager to manage, query, order, and filter your tests. You can access the Test Manager by selecting Test⇨Windows⇨Test Manager, or click the Test Manager button on the Test Tools toolbar. To run a test, you can right-click and select Run Checked Tests. You can disable a test by right-clicking and selecting Disable. Test Lists also allow you to filter, sort, and group tests. You can create a new test list by selecting Test⇨Create New Test List.
Manual testing is the process of testing software by hand and writing down the steps to reproduce bugs and errors. Before you start manually testing an application, you should formulate a solid set of test cases. Automated tests are predefined in scope, and can only confirm or deny the existence of specific bugs. However, an experienced tester has the advantage of being able to explore facets of the application and find bugs in the most unlikely places. Because software isn't completely written by machines (yet), human error can creep into the design and structure of an application. It takes the eye of a good tester to find these flaws and correct them. You can create and structure manual tests in the same way as your automated tests (as shown in Figure 13-36).
You can also use manual tests to document how to reproduce (repro) a bug. The Microsoft Word file can then be attached (or linked) to a bug work item and assigned to the developer.
What kind of tests can be automated? Team System provides several test types that support automation. These test types include:
Web tests - Web tests enable verification that a Web application's behavior is correct. They issue an ordered series of HTTP requests against a target Web application and analyze each response for expected behaviors. You can use the integrated Web test recorder to create a test by observing your interaction with a target Web site through a browser window. Once the test is recorded, you can then use that Web test to ensure that you get the same results every time. The great thing about a Web test is that you can perform regression testing on a Web site when a new feature is added without having to manually go through the steps. (The tester can intervene only when a bug or error is found.)
Unit tests - Unit testing involves writing code to verify a system at a lower and more granular level than with other types of testing. Unit tests are written to ensure that code performs as the programmer expects - you assert the functionality, inputs, and error handling capabilities of your application. If new code "breaks" the way the application is supposed to work, your unit test will generate errors.
Generic tests - Generic tests enable you to reach outside of the test framework by running external executable applications, which return results back to Team Test. Generic tests are useful for supporting current test processes that you may already have in place by wiring them to Team Test. They are also useful when there are complex steps to be performed by third-party tools, or when you want to create your own external testing tools.
Load tests - Load tests are used to verify that your application will perform as expected while under the stress of multiple concurrent users. You configure the levels and types of load you wish to simulate and then execute the load test. A series of requests will be generated against the target application and Team System will monitor the system under test to determine how well it performs.
Code analysis - Code analysis consists of a verification of your code against set rules and best practices. When dealing with unmanaged code, the code analysis tool is called PREfast. In Team System, the feature is referred to as "Unmanaged Code Analysis for C/C++." For managed .NET applications, the engine that drives code analysis is FxCop. You can consider code analysis an automated code review of sorts. Nothing can really replace a human code review, but code analysis excels at verifying that your code adheres to your guidelines and best practices, and at looking for issues with regard to nonfunctional requirements (for example, security, performance, and internationalization issues).
Performance tests - Profiling is the process of observing and recording metrics about the behavior of an application. Profilers are tools used to help identify application performance issues. Issues typically stem from code that performs slowly or inefficiently or code that causes excessive use of system memory. A profiler helps you to more easily identify these issues so they can be corrected. Profiling sessions can be automated alongside other types of tests.
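Of these, unit tests are the ones you write entirely in code. In Team System they are authored in C# or Visual Basic using attributes such as [TestMethod], but the underlying pattern, asserting expected outputs and expected failures, is the same in any unit-testing framework. Here is a minimal sketch of that pattern (shown in Python's unittest for brevity; the login_check function is hypothetical):

```python
import unittest

# Hypothetical function under test: validates a user name/password pair.
def login_check(user, password):
    if not user or not password:
        raise ValueError("credentials required")
    return (user, password) == ("harry", "s3cret")

class LoginTests(unittest.TestCase):
    def test_valid_credentials(self):
        # Assert the expected functionality.
        self.assertTrue(login_check("harry", "s3cret"))

    def test_invalid_credentials(self):
        # Assert the expected failure path.
        self.assertFalse(login_check("harry", "wrong"))

    def test_missing_credentials(self):
        # Assert the error-handling behavior.
        self.assertRaises(ValueError, login_check, "", "")

# Run the suite and report the counts.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"ran={result.testsRun} failures={len(result.failures)} errors={len(result.errors)}")
# prints: ran=3 failures=0 errors=0
```

If new code "breaks" login_check, the corresponding assertion fails and the run reports it, which is exactly the safety net described above.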
Now that we've identified the kinds of tests we can automate, let's look at the different ways we can trigger test automation within Team System:
Code check-in integration - You can create a test check-in policy by right-clicking your Team Project in Team Explorer and selecting Team Project Settings⇨Source Control. If you click the Add button under the Check-in Policy tab and select Testing Policy, you are prompted for a metadata (.vsmdi) file. Once the policy is in place, every time the developer checks in code, your tests will run to verify that code. If the tests fail, Team System automatically generates a policy violation, and the developer will have to fix the errors (or override the policy) to proceed. The disadvantage of this approach is that the violation is immediately visible only to the developer (although it is possible to capture policy-violation overrides using the extensibility capabilities of Team Foundation Server and alert the project manager or tester).
Build integration - Tests can be automated as part of a build type, or triggered as a custom EXEC build task. The tester will then have to refer to the hourly, daily (or nightly) build logs to find any build breaks and track down testing errors.
Eventing service and extensibility - The Team Foundation Server Eventing service can be used to trigger test runs. Events can be triggered from work items, version control - almost any feature of Team System. Once an event is detected, a handler can launch your custom application or tool. You can also programmatically launch test runs.
Command-line integration - MSTEST.EXE, the command-line tool for Team Edition for Software Testers, can be used to launch a test. You can use the Windows Scheduled Tasks (schtasks.exe) tool to execute Ordered tests using the command-line tool at predetermined times (such as a nightly test).
To get the benefit of check-in and build integration, you must place your test cases in an ordered list.
Let's take a closer look at the command-line testing tool (MSTEST.EXE). The command-line testing tool can trigger individual tests or .vsmdi files. The .vsmdi file contains your custom test lists and refers to the tests within your lists. Here are the options available with MSTEST:
/noisolation - This option runs the tests in-process (within the MSTest.exe process) rather than in a separate process.
/publish - This option publishes the test results to a Team Foundation Server. You also need to specify the build with which you want to associate the results (/publishbuild), the flavor (release or debug, using /flavor), the platform (x86, for example, using /platform), and the Team Project (/teamproject).
/resultsfile - The /resultsfile option generates a file with the test results (in XML format). Simply specify the directory and filename you want (for example, C:\results.xml).
/runconfig - This denotes the path to the run configuration file you want to execute.
/testcontainer - This specifies the file containing the test you want to load and run.
/testlist - You can specify which tests you want to launch from a test list using the /testlist option; use it together with /testmetadata.
/testmetadata - You can use /testmetadata to specify the metadata (.vsmdi) file from which tests are loaded. Write out the full path to the file in question.
To publish anything to Team Foundation Server, you need at least one successful build in one of your Team Projects. To create a successful build, check in some code, create a build type, and build it once. To learn why builds are required to publish tests, refer to Chapter 16.
Let's say we want to trigger the Ordered test corresponding to our test case "Build login screen to access administrative site." First, open the Visual Studio 2005 Command Prompt (Start⇨All Programs⇨Microsoft Visual Studio 2005⇨Visual Studio Tools⇨Visual Studio 2005 Command Prompt). Then specify the test you want to run (Verify Administrative Screen to Verify Administrative Screen.orderedtest) and the file and directory where you want to output the results (C:\results.xml):
C:\Program Files\Microsoft Visual Studio 8\VC>mstest /testcontainer:"C:\TestLoginSite\TestLoginSite\Verify Administrative Screen to Verify Administrative Screen.orderedtest" /resultsfile:"C:\results.xml"
Once you run the test, you'll notice that the results.xml file is quite long. If you scroll down the file a bit, you will find the result node with the results of your test. Notice that the Ordered test contains three tests, three tests were executed, and three passed:
<result type="Microsoft.VisualStudio.TestTools.Common.RunResultAndStatistics">
  <runInfoList type="System.Collections.Generic.List`1[[Microsoft.VisualStudio.TestTools.Common.RunInfo, Microsoft.VisualStudio.QualityTools.Common, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]]">
    <_size type="System.Int32">0</_size>
    <_version type="System.Int32">0</_version>
  </runInfoList>
  <totalTestCount type="System.Int32">3</totalTestCount>
  <executedTestCount type="System.Int32">3</executedTestCount>
  <passedTestCount type="System.Int32">3</passedTestCount>
  <stdout type="System.String" />
  <stderr type="System.String" />
  <debugTrace type="System.String" />
  <outcome type="Microsoft.VisualStudio.TestTools.Common.TestOutcome">
    <value__ type="System.Int32">11</value__>
  </outcome>
  <counters type="System.Int32">0,0,0,0,0,0,0,0,0,0,3,0,6,0</counters>
  <isPartialRun type="System.Boolean">False</isPartialRun>
</result>
Using the command-line tools, you can copy the XML results file to a virtual directory, read it with an ASP.NET page, and create a custom dashboard for your testers. CruiseControl.NET ships with an .xsl file called MsTestSummary.xsl, which may save you some time in formatting the output.
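The summary counters are easy to pull out of the results file with a few lines of script. Here is a minimal sketch, using Python for brevity (an ASP.NET dashboard page would do the same thing with System.Xml); the embedded sample is trimmed down to just the counter elements from a result node like the one above:

```python
import xml.etree.ElementTree as ET

# Trimmed-down sample of an MSTest results file: only the summary
# counters from the result node are kept for this illustration.
RESULTS_XML = """
<result type="Microsoft.VisualStudio.TestTools.Common.RunResultAndStatistics">
  <totalTestCount type="System.Int32">3</totalTestCount>
  <executedTestCount type="System.Int32">3</executedTestCount>
  <passedTestCount type="System.Int32">3</passedTestCount>
</result>
"""

def summarize(xml_text):
    """Pull the run summary counters out of the result node."""
    root = ET.fromstring(xml_text)
    total = int(root.findtext("totalTestCount"))
    executed = int(root.findtext("executedTestCount"))
    passed = int(root.findtext("passedTestCount"))
    return total, executed, passed

total, executed, passed = summarize(RESULTS_XML)
print(f"{passed}/{executed} executed tests passed (of {total} total)")
# prints: 3/3 executed tests passed (of 3 total)
```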
Once the tester has analyzed the results of the test, they can assign bugs, rerun tests, and create new tests (according to the requirements). Test results (.trx) files can be attached or linked to a bug work item. There are many integration possibilities out there.
What about automated UI testing for WinForms? This is probably one of the biggest feature requests that keeps coming up for Team System. Microsoft has included a UI Automation framework within the .NET Framework 3.0. Specifically, UI Automation is an accessibility feature of the Windows Presentation Foundation. To learn more about the UI Automation framework, visit the Windows SDK (http://windowssdk.msdn.microsoft.com/en-us/library/ms747327.aspx) and the UI Automation forums on the Microsoft MSDN Web site at http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=352&SiteID=1&PageID=0.