Test Automation versus Manual Testing


All software development projects have a finite schedule and budget. It is up to the software test engineers and the project managers to decide what tests to implement to reach the goal of producing stable and reliable software.

There is quite a bit of debate within the developer community as to the merits of manual testing versus automated testing. Extreme Programming (XP) practitioners typically favor automated testing in the interest of saving time. Automated tests can verify results, but are constrained within a set of predefined problems. Manual testing can undoubtedly uncover problems that are not easily found using standard testing tools.

We will look at the features of both automated tests and manual tests. You can then make an informed decision as to what tests will give you the best results for the projects on which you are working.

Test automation

Automated tests are very handy because you can implement them as part of the build process using Team Foundation Build (you can learn more about Team Foundation Build in Chapter 24). For obvious reasons, manual tests are not designed to be used within an automated test environment. Team System has several built-in automated tests, including load tests, generic tests, and unit tests. Test automation makes a great deal of sense for medium to large software projects.

Automated tests are easily deployed, can quickly be executed by the build server, and are highly suited for repetitive tasks (you can run automated tests hundreds of thousands of times with ease). This makes regression testing a snap. (Regression testing is the process of retesting your application to make sure you don't introduce new bugs while fixing other bugs.) Automated tests hook into Team Foundation Build, enabling you to focus on other tasks. You can have a farm of build servers running through a battery of automated tests, freeing your testers to deal with "show-stopping" problems.
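To make the regression idea concrete, here is a minimal sketch of an automated regression suite. It is written with Python's unittest module purely for illustration (a Team System project would use MSTest-style unit tests, but the pattern is identical), and calculate_discount is a hypothetical function under test:

```python
import unittest

def calculate_discount(price, percent):
    """Hypothetical production function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    """Rerun this suite after every change; any failure is a regression."""

    def test_typical_discount(self):
        self.assertEqual(calculate_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(calculate_discount(49.99, 0), 49.99)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 150)

# Run the suite programmatically, as a build server would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite is code, a build server can execute it after every check-in at essentially zero marginal cost, which is exactly what makes automated regression testing cheap once the tests exist.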

However, testing is an iterative process — you should constantly look at the effectiveness of your tests and adjust as needed during the lifetime of your project. By refining the test process, you can document best practices to carry forward in future projects.

Test automation works really well for long-term projects. One of the main reasons for this is cost: designing an automated test framework and elaborate test cases requires many work hours. This is not to say that it isn't a good idea to automate tests in smaller projects; in fact, many Agile practitioners swear by automation. In some cases, manually verifying a bug fix may take just a few minutes. In the end, the decision should be based on the effective prioritization of your time and cost.

Use automated tests to review your application structure and code. Automated tests are not really effective for usability (and other user-centric issues). It would be very difficult (and impractical) for a human tester to manually generate a code coverage report for an application with millions of lines of code (such as the Windows operating system). This is an area where automated tests really shine — machines can perform tests such as code coverage and static and dynamic code analysis with little to no personnel required.
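To illustrate what coverage tooling does under the hood, here is a toy sketch using Python's stdlib trace module (Team System generates coverage reports through Visual Studio's own instrumentation, not like this; classify is a hypothetical function with a branch that is easy to miss):

```python
import trace

# Toy function: the "negative" branch is only covered by a negative input.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# Count which lines execute when only one input is exercised.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)          # only the non-negative path runs

hit = {lineno for (_filename, lineno) in tracer.results().counts}
first = classify.__code__.co_firstlineno
# Line first + 2 holds `return "negative"`; it never executed.
negative_branch_covered = (first + 2) in hit
print("negative branch covered:", negative_branch_covered)
```

The uncovered line pinpoints an untested branch. Doing this bookkeeping by hand across millions of lines is exactly the impractical task the text describes; machines do it as a side effect of running the tests.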

There are, however, tools that bridge the gap. One example is the Framework for Integrated Testing (FIT) by Ward Cunningham and related tools (such as FITnesse). FIT excels at running acceptance and regression tests. FIT allows a customer or tester to model how an application should behave using simple tables (authored in HTML, or in tools such as Microsoft Word). Using "fixtures" (code written by programmers), FIT compares the expected values in the table with the actual output of the software. This makes it easy for a nontechnical user to interact with a complex application. You can learn more by visiting the following website: http://www.fit.c2.com/.
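The fixture idea can be sketched in a few lines. This is not the actual FIT API, just a simplified Python illustration of the concept: the customer writes rows of inputs and expected values, and a programmer-written fixture runs the system under test (here a hypothetical order_total function) and marks each row pass or fail:

```python
# Each row stands in for a row the customer wrote in a table:
# inputs plus the result they expect.
table = [
    {"price": 100.0, "quantity": 2, "expected_total": 200.0},
    {"price": 9.99,  "quantity": 3, "expected_total": 29.97},
]

def order_total(price, quantity):
    """Hypothetical system under test."""
    return round(price * quantity, 2)

def run_fixture(rows):
    """FIT-style check: compare each expected cell with the actual value."""
    results = []
    for row in rows:
        actual = order_total(row["price"], row["quantity"])
        if actual == row["expected_total"]:
            results.append("pass")
        else:
            results.append(f"fail (got {actual})")
    return results

print(run_fixture(table))
```

The real FIT renders these pass/fail marks back into the customer's table (green and red cells), so nontechnical stakeholders can read the results without touching the fixture code.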

Here is a short list of some of the processes you may want to automate:

  • Web and load testing: Web and load testing enable you to automatically test the performance of your web application using different browser, traffic, and network simulations, bandwidth profiles, server load scenarios, and other environmental factors. Web and load tests can also be used to implement functional and smoke testing. You can find more information about these tests in Chapter 15.

  • Structural testing: Some tests are best left to a computer. For example, in Chapter 10, you learned that the Application Verifier can dynamically detect memory-handling errors. Some of the errors are far from intuitive (a stack corruption may cause errors and instability in your system long after a defect has been injected in your code). Structural testing can help you uncover these errors and issues.

  • Performance testing: Performance testing (and tuning) is an essential part of any application development cycle. You can test your application using both the sampling and instrumentation methods, and the best part is that both can be automated. Performance testing is covered in depth in Chapter 12.

  • Regression testing: Using automated tests, you can easily verify that the code that is checked in meets a certain level of quality. If you uncover specific errors using automated tests, it is quite easy to rerun the tests to make sure the problems have been corrected.

  • Generic tests: Team System offers great extensibility features, including the capability to formulate new kinds of tests and bring in external test results that are handled with the same instrumentation and tools as your built-in tests. Generic testing is discussed at length in the next chapter.

  • Stress testing: You can simulate the stress of thousands of users accessing your application. This is not possible using manual testing (unless you have thousands of testers on staff!).
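The stress-testing point above can be sketched with nothing more than a thread pool. This is a toy simulation, not Team System's load-test engine: handle_request is a hypothetical stand-in for a call to the application under test, and the sleep simulates server work:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for one simulated user's request to the application."""
    start = time.perf_counter()
    time.sleep(0.01)                     # simulated server-side work
    return time.perf_counter() - start   # observed latency

def run_load_test(num_users=100, concurrency=20):
    """Fire num_users simulated requests, concurrency at a time."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(handle_request, range(num_users)))
    return {
        "requests": len(latencies),
        "mean_latency": statistics.mean(latencies),
        "p95_latency": sorted(latencies)[int(0.95 * len(latencies))],
    }

report = run_load_test()
print(report)
```

Even this toy version makes the argument for automation: scaling num_users from 100 to 10,000 is a one-character change, whereas scaling a manual test team is not.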

Manual testing

Manual testing is the process of testing software by hand and writing down the steps to reproduce bugs and errors. Before you start manually testing an application, you should formulate a solid set of test cases. Automated tests are predefined in scope, and can only confirm or deny the existence of specific bugs. However, an experienced tester has the advantage of being able to explore facets of the application and find bugs in the most unlikely places. Because software isn't completely written by machines (yet), human error can creep into the design and structure of an application. It takes the eye of a good tester to find these flaws and correct them.

Accuracy is one of the main challenges of running effective manual tests. If your tester isn't using careful documentation and a solid methodology, he or she may miss bugs. Manual tests are well suited for small or one-shot development projects. Again, cost becomes an issue; due to small budgets and lack of resources, it may be impractical to implement large-scale custom testing frameworks. Small projects usually have aggressive development schedules, so you have to make the best use of your time.

Even if you are an Agile developer, you shouldn't disregard manual testing. As mentioned before, some tasks shouldn't be automated. For example, the most effective way of tracking down user interface and usability problems in your application is by using manual testing techniques.

In most development environments, manual tests are performed as a prelude to automated tests. Manual testing practices may also be incorporated into the bug documentation process. Here are some of the tasks you can't easily automate:

  • Error management: Does your application behave well when an error is encountered? Are the error messages presented to the end users descriptive enough to help guide them through your application? Unit testing should be used as your first line of defense. Error management testing looks at the interaction between the user and your application once it has been compiled.

  • Deployment, configuration, and compatibility testing: How easily does your application deploy? Will your application work with all versions of the .NET Framework? Deployment testing involves testing an application with a variety of platforms, systems, and hardware configurations, and in production environments. The deployment manager role usually handles this task (see Chapter 22 for more details).

  • Quality assurance (QA)/user acceptance testing: In any development process, the client (or business analyst) has to sign off on features. To ensure that these requirements are met, user acceptance testing is required. Quality assurance testing involves ensuring that all procedures and requirements have been followed, tests have been performed, and the end product meets a set standard of quality.

  • Localization testing and requirements: Does your application behave correctly when it's localized in another language? Will fonts and words disappear or align incorrectly? Does it handle bidirectional (right-to-left) text correctly? If you are writing software for a global market, you should be manually testing your application under many globalization scenarios and conditions.

  • User interface testing: A lot of work is being done with automated user interface test frameworks. However, nothing can replace a good tester who can evaluate whether the application meets accessibility and visual requirement guidelines.

  • Usability testing: Does the user have to perform unnecessary steps to access a feature? Is the application intuitive (or counterintuitive)? What's most important in usability testing is setting up a proper test methodology. Some of the tests that fall under this category include exploratory, assessment, comparison, and validation testing.

  • Black-box (or functional) testing: Black-box tests are performed by testers, and will sometimes yield unexpected results. How will your application behave if you enter a very large amount of text in a text field? All inputs in your application are tested without knowledge of the code or the expected output. One of the popular functional tests you can try on your application is a smoke test — a run-through of a piece of software after a major build or update. (The term "smoke test" comes from hardware testing — if your electronics project starts to spark or produce smoke, you know you have a problem!)

  • Recovery and fail-over testing: If an application crashes, is there any way for end users to recover their data? Recovery testing looks at how your application copes with disastrous scenarios.

  • Security testing: Will the application work under a least-privileged user account? Does your application violate basic Windows security principles (such as writing to HKEY_LOCAL_MACHINE)? The Application Verifier component of Team System can automatically uncover many types of security violations. The best way to implement security in your development environment is by enforcing best practices (using check-in policies), writing good inline documentation, and performing regular manual audits on your code.
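Black-box probing of the kind described above starts as manual exploration, but once a tester finds interesting inputs (the "very large amount of text" case, empty input, unusual Unicode), those probes can be captured in a small script and rerun. The sketch below is hypothetical: save_comment stands in for an input handler under test, and in practice the tester would drive these inputs through the UI:

```python
def save_comment(text):
    """Hypothetical input handler under black-box test.
    Rejects input that would overflow its storage column."""
    if text is None:
        raise ValueError("comment required")
    if len(text) > 4000:
        raise ValueError("comment too long")
    return text.strip()

# Black-box probes: we know only inputs and observable behavior, not the code.
probes = {
    "typical": "Great product!",
    "empty":   "",
    "huge":    "x" * 1_000_000,   # the "very large amount of text" case
    "unicode": "héllo \u202e world",
}

results = {}
for name, value in probes.items():
    try:
        results[name] = ("ok", save_comment(value))
    except ValueError as exc:
        results[name] = ("rejected", str(exc))
print(results)
```

The human judgment (which inputs are worth probing, and whether "ok" on an empty comment is actually acceptable behavior) stays manual; only the repetition is automated.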

Professional Visual Studio 2005 Team System (Programmer to Programmer), ISBN 0764584367