13.4. Testing

Testing is probably the most important quality control tool in software development. The term testing in this context refers to systematic, automated, reproducible testing, rather than the ad-hoc approach that is still dominant in many development efforts. This formal approach generates objective, measurable test results that can be used to assess the quality of the created software artifact.

Testing is best grouped into different categories, depending on the required objective and level of granularity. First, load testing and functional testing must be distinguished.

Load testing means testing a component under a specific load for a defined time. It is crucial for judging whether the software can meet any required SLAs. Load testing normally requires that the test be conducted against an environment in which all the backend systems of the component are available and perform and scale as they will in the live environment. Otherwise, the response times or stability numbers don't mean much. For example, if a test is carried out against a simulation of a message queueing system, there is no way of knowing whether the behavior and failures of the actual system will keep the performance of the tested component within the required range.
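To make this concrete, the following is a minimal sketch of what a hand-rolled load test could look like in Java. The CustomerRetentionClient class and its getCampaignsForCustomer method are hypothetical placeholders for the component under test; in practice, one of the dedicated load testing tools discussed later in this section would normally be used instead of custom code.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal load-test sketch: a fixed number of worker threads call the
// service for a defined time and record per-call latencies.
public class SimpleLoadTest {

    public static void main(String[] args) throws InterruptedException {
        final int threads = 20;
        final long end = System.currentTimeMillis() + 60000; // run for one minute
        final List<Long> latencies =
                Collections.synchronizedList(new ArrayList<Long>());

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    // Hypothetical client for the service under test.
                    CustomerRetentionClient client = new CustomerRetentionClient();
                    while (System.currentTimeMillis() < end) {
                        long start = System.currentTimeMillis();
                        client.getCampaignsForCustomer("4711"); // call under load
                        latencies.add(System.currentTimeMillis() - start);
                    }
                }
            });
            workers[i].start();
        }
        for (int i = 0; i < threads; i++) {
            workers[i].join();
        }

        // Report the 95th-percentile response time (assumes at least one call completed).
        Collections.sort(latencies);
        long p95 = latencies.get((int) (latencies.size() * 0.95));
        System.out.println(latencies.size() + " calls, 95th percentile: " + p95 + " ms");
        // A real test would compare this number against the SLA threshold.
    }
}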

Functional testing means ensuring that the operational results of a software component are consistent with expectations. Functional tests that execute a single call with a given set of parameters and then compare the result of the call with the expected result are referred to as unit tests. Several unit tests can be chained into a testing series, testing several related and possibly sequential actions. In addition, test robots can automate tests of an entire application frontend by simulating user actions, again comparing results with expectations. Automated test tools can execute thousands of tests in short periods of time, usually far more than can be done manually. This special form of chained unit testing is commonly known as an end-to-end functional test. When a single component, such as an individual service, is tested, functional testing might well allow for a certain part of the application to be simulated. For example, persistence using an object-relational mapping library can be replaced with a simulation of that library. The upside of this approach is that database setup scripts and resources need not be available and initialized at the time of testing, reducing testing time and speeding up the quality assurance process. In contrast, when a component is functionally tested with all its backend components available, this is referred to as integration testing for this component.
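As a sketch of this approach, the following JUnit test exercises a hypothetical CustomerRetentionService with its persistence layer replaced by an in-memory simulation. All class names (CustomerDao, InMemoryCustomerDao, and so on) are assumptions made for the purpose of illustration; the point is that no database needs to be set up or initialized to run the test.

import junit.framework.TestCase;

public class CustomerRetentionServiceTest extends TestCase {

    private CustomerRetentionService service;

    protected void setUp() {
        // In-memory simulation of the ORM-backed persistence layer.
        CustomerDao dao = new InMemoryCustomerDao();
        dao.save(new Customer("4711", "Jane Doe"));
        service = new CustomerRetentionService(dao);
    }

    public void testKnownCustomerReceivesOffer() {
        // A single call with a given parameter set, compared to the expected result.
        RetentionOffer offer = service.createOffer("4711");
        assertEquals("Jane Doe", offer.getCustomerName());
    }

    public void testUnknownCustomerIsRejected() {
        // Invalid input must be rejected rather than silently accepted.
        try {
            service.createOffer("9999");
            fail("expected UnknownCustomerException");
        } catch (UnknownCustomerException expected) {
            // this is the expected behavior
        }
    }
}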

Of course, some overlap exists between the test types because load test tools often provide some mechanism for result checking and unit test tools provide some mechanism for generating increased load. Still, the test scenarios described remain different because they address different problems, often at different stages in the development lifecycle.

Systematic testing, in particular functional development-time testing, has become widely popular with the advent of agile development methodologies such as Extreme Programming. However, it often poses a non-trivial problem: deciding which of the created artifacts justify the creation of a dedicated test. Test design is by no means easy because any functional test must be reproducible and must achieve as much coverage as possible. The danger of "testing the obvious" is real, and even large test sets have limited value if they break the first time the components are called with unexpected parameters. In addition, building tests is development work in its own right and might require building dedicated software components, for example, a simulation of a backend or initialization scripts for databases. Still, tests must be as simple as possible to avoid the need to create a "test for the test."

The nature of SOAs can facilitate finding the most important functional test cases. Mission-critical enterprise applications might be rendered useless if one of the service components stops functioning properly after a new release. For this reason, the service component itself is the prime candidate for functional, integration, and load testing. This does not mean that end-to-end testing or testing of single libraries will no longer be required. It merely dedicates a large portion of the testing effort to testing services.

Consider the example in Figure 13-14, which shows a customer retention service that is composed of multiple services. Two of these services are shown in the figure: a printing service and a service that provides basic customer data. The customer retention service has multiple clients, among them a browser-based call center application that supports telephone marketing to the existing customer base and a number of batch programs that are used to create mailings to customers. The system spans various operating systems and programming languages.

Figure 13-14. The customer retention program consists of a customer retention service that is written in J2EE and deployed on a Windows platform. It relies on an existing mainframe-based customer service and a printing service based on a Unix platform. Call center clients connect using a Web application, and a number of Windows-based batch programs are used to create mass mailings.


As the new customer retention service and its clients are created, testing is traditionally confined to ad-hoc testing: call center agents would test the HTML frontend for the call center, while printouts from the print service would be checked manually.

To perform testing in a more meaningful manner, the test should be automated using a test driver. This is illustrated in Figure 13-15. In this case, the backend services are not real services but simulations that behave in the way that the real services would. This enables us to test and debug the newly created business logic in the customer retention service without using valuable mainframe computing time or printing hundreds of sheets of paper. The driver tests the functioning of the customer retention service by comparing results to expectations. It also checks the results that are created in the printing service simulation.

Figure 13-15. Using a test client to test methods of the customer retention service. In this scenario, the service relies on a simulation on its dependent services.
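The following sketch shows what such a test driver might look like as a JUnit test. All names are hypothetical: both backends are replaced by simulations, and the printing service stub records print jobs in memory instead of producing paper so that the test can inspect them afterwards.

import junit.framework.TestCase;

public class CustomerRetentionDriverTest extends TestCase {

    public void testMailingCreatesOnePrintJobPerCustomer() {
        // Simulated backends instead of the real mainframe and print service.
        CustomerServiceStub customers = new CustomerServiceStub();
        customers.add(new Customer("4711", "Jane Doe"));
        customers.add(new Customer("4712", "John Doe"));
        PrintingServiceStub printer = new PrintingServiceStub();

        CustomerRetentionService service =
                new CustomerRetentionService(customers, printer);
        service.runMailingCampaign("summer-campaign");

        // Check the results created in the printing service simulation.
        assertEquals(2, printer.getRecordedJobs().size());
        assertTrue(printer.hasJobFor("4711"));
    }
}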


In a second scenario, shown in Figure 13-16, the HTML user interface is tested using a test robot. To start the test, the test robot initializes the mainframe customer service. The robot then performs various actions at the user interface, checking whether the results are in accordance with expectations. Finally, it checks the database of the printing service to determine whether the correct number and type of printouts have been created during the simulated call center interaction. Apart from the actual printing, this example provides almost a full end-to-end test scenario, in which all the components are properly integrated.

Figure 13-16. In this scenario, the behavior of the graphical user interface is tested against the actual mainframe-based customer service. The printing service is still simulated in this test.
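A robot-style test of the HTML frontend might be sketched as follows, here using the open source HttpUnit library as one possible way to simulate a browser. The URL, the form field names, and the PrintingServiceStub helper are assumptions made for illustration only.

import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebResponse;
import junit.framework.TestCase;

public class CallCenterFrontendTest extends TestCase {

    public void testOfferDialogCreatesPrintout() throws Exception {
        // Precondition: the mainframe customer service has been initialized
        // with a known test customer, e.g., by a setup script.
        WebConversation browser = new WebConversation();
        WebResponse page =
                browser.getResponse("http://callcenter.example.com/retention");

        // Simulate the agent entering a customer number and submitting the form.
        WebForm form = page.getForms()[0];
        form.setParameter("customerId", "4711");
        WebResponse result = form.submit();
        assertTrue(result.getText().indexOf("Offer created") >= 0);

        // Finally, inspect the (still simulated) printing service.
        PrintingServiceStub printer = PrintingServiceStub.connect();
        assertEquals(1, printer.countJobsFor("4711"));
    }
}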


To create a satisfactory test suite, you will need many more tests than those illustrated here. In particular, test scenarios will include some load testing. Of course, the customer retention service and the print service will usually have their own tests in place. In addition, each test will include numerous calls with different parameter sets that simulate boundary conditions as well as invalid input.

The previous examples make clear that any test must be repeatable. For a functional test of a service, this means in particular that it must be repeatable with a new version of the service, using the old parameter set for input and the old expected output. This ensures that a new software component is essentially "backward-compatible" with the older component. Such a test is referred to as a regression test. Regression tests are mostly functional tests, but there might also be a need to create regression tests that are load tests, ensuring that the performance of a new software version still delivers appropriate response times and throughput.
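One simple way to implement such a regression test is to replay a recorded set of inputs and expected outputs against the new service version, as in the following sketch. The file format and the service facade are hypothetical; the essential point is that the old parameter set and the old expected results are reused unchanged.

import java.io.BufferedReader;
import java.io.FileReader;
import junit.framework.TestCase;

public class CustomerRetentionRegressionTest extends TestCase {

    public void testNewVersionAgainstRecordedResults() throws Exception {
        CustomerRetentionService service = new CustomerRetentionService();
        BufferedReader in = new BufferedReader(new FileReader("regression.csv"));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                // Each recorded line: customerId;expectedOfferCode
                String[] fields = line.split(";");
                String actual = service.createOffer(fields[0]).getOfferCode();
                assertEquals("regression failure for customer " + fields[0],
                        fields[1], actual);
            }
        } finally {
            in.close();
        }
    }
}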

When testing services, regression tests should cover all reusable services. Here, it will usually be necessary to test a number of calls in sequence to ensure proper operation. Regression tests should particularly be created for basic services so that basic services can be updated and tested on their own. Regression tests on services will often require actual backend operation, not only for load testing but also for functional testing. This is because, due to their very nature, many services will be technology gateways or adapters whose main purpose, and therefore the main objective in testing them, lies in accessing various backend systems.

Tests will usually be conducted using a general test driver: an environment that is capable of running tests and reporting the test results (see Figure 13-17). Tests are usually defined using scripts or mainstream programming languages. Test drivers often also provide mechanisms to trigger the initialization of resources upon which the tests rely, for example, initializing a database in a certain state or providing a specific file for input. Generic test drivers are available for both load and unit testing. Popular load testing tools include Mercury Interactive's LoadRunner, The Grinder, and JMeter. To some extent, they can also be used as end-to-end tools, particularly when testing Web applications. End-to-end testing is traditionally the domain of test robots such as Rational Robot. Functional test tools include, for example, JUnit and NUnit. However, in the unlikely event that none of the available tools meets the particular needs of the tester, most tools on the market can be easily extended and customized.

Figure 13-17. A generic test driver used in an end-to-end functional test for a service. The database is initialized into a defined state, and a sequence of tests is run against the service using the generic test driver. Results of the tests are logged and can later be analyzed.
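With JUnit as the test driver, such a setup might be sketched as follows. The DatabaseFixture helper is a hypothetical stand-in for whatever mechanism loads the database into its defined state; the suite itself chains the individual tests into a sequence.

import junit.extensions.TestSetup;
import junit.framework.Test;
import junit.framework.TestSuite;

public class ServiceTestDriver {

    public static Test suite() {
        TestSuite sequence = new TestSuite("customer retention service tests");
        sequence.addTestSuite(CustomerRetentionServiceTest.class);
        sequence.addTestSuite(CustomerRetentionRegressionTest.class);

        // One-time initialization of the resources the tests rely upon.
        return new TestSetup(sequence) {
            protected void setUp() throws Exception {
                DatabaseFixture.loadInitialState("retention-testdata.sql");
            }
        };
    }
}

The suite can then be executed from a console runner such as junit.textui.TestRunner or from a build tool, either of which logs the results for later analysis.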


Create a Regression Test Environment for Most Services

Every basic service and most other services should be complemented by a full regression test environment. Regression testing will create confidence for users and maintainers of the service.


Note that test definitions should be maintained with the actual application and service source code in configuration management because they are a vital element of the final delivery. In fact, one might argue that it is the tests that enable confident reuse of the service. Functional tests should be an integral part of any build process. Indeed, some configuration management tools (e.g., Continuus) and various build tools (e.g., Jakarta Ant) provide out-of-the-box support for test execution and test result reporting.


