Create a unit test plan.
Instrument and debug a Windows service, a serviced component, a .NET Remoting object, and an XML Web service.
Provide multicultural test data to components and applications.
Testing is the process of executing a program with the intention of finding errors (bugs). By "error," I mean any case in which the program's actual results fail to match the expected results. Expected results might cover not only the correctness of the program but also other attributes such as usability, reliability, and robustness. The process of testing can be manual, automated, or a mix of both techniques.
Correctness, Robustness, and Reliability Correctness refers to the capability of a program to produce expected results when the program is given a set of valid input data. Robustness is the capability of a program to cope with invalid data or operations. Reliability is the capability of a program to produce consistent results on every use.
In a world of increasing competition, testing is more important than ever. A software company cannot afford to ignore the importance of testing. If a company releases buggy code, not only will it spend more time and money fixing and redistributing the corrected code, but it will also lose the goodwill and business of potential customers. In the Internet world, the competition is not even next door; it is just a click away!
Create a unit test plan.
A test plan is a document that guides the process of testing. A good test plan should typically include the following information:
Which software component needs to be tested?
What parts of a component's specification are to be tested?
What parts of a component's specification are not to be tested?
What approach needs to be followed for testing?
Who will be responsible for each task in the testing process?
What is the schedule for testing?
What are the criteria for a test to fail or pass?
How will the test results be documented and disseminated?
Incremental testing (sometimes also called evolutionary testing) is a modern approach to testing that has proven very useful for Rapid Application Development (RAD). The idea here is to test the system as you build it. Three levels of testing are involved:
Unit Testing - Involves testing the elementary unit of the application (usually a class).
Integration Testing - Tests the integration of two or more units, or the integration between subsystems of those units.
Regression Testing - Usually involves repeating the unit and integration tests whenever a bug is fixed, to ensure that no old bugs have recurred and that no new bugs have been introduced. You should also run your regression tests whenever you have modified or added code, to make sure that the new code does not have unintended consequences.
Units are the smallest building blocks of an application. In Visual Basic .NET, these building blocks are often a component or a class definition. Unit tests involve performing basic tests at the component level to ensure that each unique execution path in the component behaves exactly as documented in its specifications.
Often the same person who writes the component also does unit testing for it. Unit testing typically requires writing special programs that use the component or class under test. These programs are called test drivers; they are used throughout the testing process, but are not part of the final product.
NUnit NUnit is a simple framework that enables you to write repeatable tests in any .NET language. For more information, visit http://www.nunit.org/.
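NUnit tests follow the same xUnit pattern found in most languages. As an illustrative sketch only (written with Python's built-in unittest module rather than a .NET language, and using an invented TemperatureConverter class), a simple test driver for one unit looks like this:

```python
import unittest

# Hypothetical class under test (invented for illustration).
class TemperatureConverter:
    def celsius_to_fahrenheit(self, celsius):
        return celsius * 9.0 / 5.0 + 32.0

# Test driver: exercises the unit in isolation. It is used throughout
# the testing process but is not part of the final product.
class TemperatureConverterTests(unittest.TestCase):
    def setUp(self):
        # Fresh object for every test, so tests cannot affect each other.
        self.converter = TemperatureConverter()

    def test_freezing_point(self):
        self.assertEqual(self.converter.celsius_to_fahrenheit(0), 32.0)

    def test_boiling_point(self):
        self.assertEqual(self.converter.celsius_to_fahrenheit(100), 212.0)

# Run the suite (exit=False keeps the interpreter alive after the run).
unittest.main(argv=["tests"], exit=False)
```

The same setup/assert structure carries over directly to NUnit test fixtures in a .NET language.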
Some of the major benefits of unit testing are as follows:
It allows you to test parts of an application without waiting for the other parts to be available.
It allows you to test those exceptional conditions that are not easily reached by external inputs in a large integrated system.
It simplifies the debugging process by limiting the search for bugs to a small unit when compared to the complete application.
It avoids lengthy compile-build-debug cycles when debugging difficult problems.
It enables you to detect and remove defects at a much lower cost compared to other, later stages of testing.
Integration testing verifies that the major subsystems of an application work well with each other. The objective of integration testing is to uncover the errors that might result because of the way units integrate or interface with each other.
If you visualize the whole application as a hierarchy of components, integration testing can be performed in any of the following ways:
Bottom-up approach - In this approach, testing starts with the smallest subsystems and gradually progresses up the hierarchy to cover the whole system. This approach might require you to write a number of test-driver programs that test the integration between subsystems.
Top-down approach - This approach starts with the top-level system to test the top-level interfaces, then gradually works down to test smaller subsystems. You might be required to write stubs (dummy modules that mimic the interface of a module but have no functionality) for the modules that are not yet ready for testing.
Umbrella approach - This approach focuses on testing the modules that have a high degree of user interaction, with stubs normally used in place of process-intensive modules. It enables you to release GUI-based applications early and to increase their functionality gradually. The approach is called "umbrella" because when you look at the application hierarchy (as shown in Figure 9.1), the input/output modules generally lie on the edges, forming an umbrella shape.
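To make the notion of a stub concrete, here is a minimal sketch (in Python, with invented names): a top-level OrderProcessor is tested before the real tax-calculation module exists, so a stub mimics that module's interface and returns a canned value.

```python
class TaxCalculatorStub:
    """Stub: mimics the tax calculator's interface but has no real logic."""
    def get_tax(self, amount):
        # Canned response: a stub returns a fixed, predictable value
        # so the caller can be tested before the real module is ready.
        return 8.0

# Top-level unit under test; the tax calculator is injected, so either
# the stub or the real module (once written) can be plugged in.
class OrderProcessor:
    def __init__(self, tax_calculator):
        self.tax_calculator = tax_calculator

    def total(self, subtotal):
        return subtotal + self.tax_calculator.get_tax(subtotal)

processor = OrderProcessor(TaxCalculatorStub())
print(processor.total(100.0))  # 108.0 with the stubbed tax
```

When the real tax module is finished, it replaces the stub without any change to OrderProcessor, and the same tests are rerun against the integrated pair.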
Regression testing should be performed any time a program is modified, either to fix a bug or to add a feature. The process of regression testing involves running all the previous tests plus any newly added test cases to test the added functionality. Regression testing has two main goals:
Verify that all known bugs are corrected.
Verify that the program has no new bugs.
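As a small illustration of both goals (in Python, with an invented example), a regression suite reruns the existing tests and adds a new case that pins down the bug that was just fixed:

```python
# Hypothetical function that once crashed on empty input; the empty-list
# check is the bug fix under regression test.
def average(values):
    if not values:  # fix: previously raised ZeroDivisionError
        return 0.0
    return sum(values) / len(values)

def run_regression_tests():
    # Existing tests: verify that no old behavior has broken.
    assert average([2, 4, 6]) == 4.0
    assert average([5]) == 5.0
    # New test: verify that the fixed bug stays fixed.
    assert average([]) == 0.0
    return "all tests passed"

print(run_regression_tests())
```

Keeping the new case in the suite permanently ensures that if the bug ever recurs, the very next regression run catches it.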
Limitations of Testing Testing can only show the presence of errors; it can never confirm their absence. Various factors, such as the complexity of the software, requirements such as interoperability with various software and hardware, and globalization issues such as support for multiple languages and cultures, can create far more input data and execution paths than can ever be tested. Many companies do their best to capture most of the test cases by using automation (computer programs that find errors) and beta testing (product enthusiasts who find errors). Despite the effort invested, errors still exist in shipping products, as any user of software well knows.
Testing an application designed for international usage involves checking the country and language dependencies of each locale for which the application has been designed. While testing an international application, you need to consider the following guidelines:
Test the application's data and user interface to make sure that they conform to the locale's standards for date and time, numeric values, currency, list separators, and measurements.
If you are developing for Windows 2000 or Windows XP, test your application on as many language and culture variants as necessary to cover your entire market for the application. These operating systems support the languages used in more than 120 cultures/locales.
Prefer using Unicode for your application. Applications that use Unicode run on Windows 2000 and Windows XP without any changes. If your application instead uses Windows code pages, you need to set the culture/locale of the operating system to match the localized version of the application that you are testing and reboot after each change.
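The difference is easy to demonstrate. In this Python sketch, a string mixing Latin, Greek, and Japanese text round-trips losslessly through Unicode (UTF-8) but cannot be represented in any single Windows code page, such as cp1252:

```python
# Mixed-script test data: Latin, Greek, and Japanese characters.
text = "Résumé / Σύνοψις / 履歴書"

# Unicode (here encoded as UTF-8) represents every character losslessly.
utf8_bytes = text.encode("utf-8")
assert utf8_bytes.decode("utf-8") == text  # round-trips without loss

# A single Windows code page cannot: cp1252 covers only the Latin part.
try:
    text.encode("cp1252")
    code_page_ok = True
except UnicodeEncodeError:
    code_page_ok = False
print(code_page_ok)  # False: Greek and Japanese fall outside cp1252
```

This is why a code-page application must be retested under each target locale's system code page, whereas a Unicode application handles all the scripts at once.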
Carefully test your application's logic for setting an appropriate language. In Web-hosted applications, it might be difficult to determine the user's preferred language from his Web browser settings. You might want to allow the user to select a language instead.
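One way to implement such logic (sketched in Python, with invented function names) is to prefer an explicit user selection, fall back to the quality-weighted Accept-Language header sent by the browser, and finally fall back to a default:

```python
def parse_accept_language(header):
    """Parse an HTTP Accept-Language header into (tag, quality) pairs,
    sorted by descending quality (a missing q-value means q=1.0)."""
    languages = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            try:
                quality = float(q)
            except ValueError:
                quality = 0.0
        else:
            tag, quality = piece, 1.0
        languages.append((tag.strip(), quality))
    return sorted(languages, key=lambda item: -item[1])

def choose_language(header, supported, user_choice=None, default="en"):
    """Prefer the user's explicit selection, then the browser header,
    then the application default."""
    if user_choice in supported:
        return user_choice
    for tag, _quality in parse_accept_language(header):
        if tag in supported:
            return tag
    return default

print(choose_language("fr-FR;q=0.9, de;q=0.8, en;q=0.7",
                      supported={"de", "en"}))  # "de"
```

Testing this logic means feeding it headers from real browsers in each target locale, including malformed and empty headers, and verifying that the user's explicit choice always wins.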
While testing a localized version of an application, make sure that you use input data in the language supported by that version. This makes the test scenario close to the scenario in which the application will actually be used.