Test Cases


Much work has been done to make distributed system infrastructures as abstract as possible so that users have little to worry about with respect to distribution semantics. Each vendor works to make its product conform to the standard it addresses. Together, these two factors make it possible to have a set of model-specific and then application-specific test cases.

Model-specific Tests

Each standard model results in its own set of design patterns. This in turn results in a set of test patterns.

Tests for the Basic Client/Server Model

We have already described a couple of types of tests for the client/server model; however, the basic client/server model has a number of variations. In the following test pattern, the design pattern under test is a widely used variant named distributed callbacks.

Problem: The synchronous messaging between two objects is modified to be asynchronous messaging by adding a Callback object. The client constructs a Callback object and sends it a request and the address of a server. The Callback object submits the request to the server synchronously. When an answer is received, the Callback object forwards the answer to the Client object.
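The pattern can be sketched in a few lines. The sketch below is illustrative, not the book's code: the `Server` interface, the single-threaded executor, and the `Consumer` used as the client's reply handler are all assumptions standing in for the real infrastructure.

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

// Hypothetical server interface: answers a request synchronously.
interface Server {
    String handle(String request);
}

// The Callback object: submits the request to the server on its own
// thread, then forwards the answer to the client when it arrives.
class Callback {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    void submit(Server server, String request, Consumer<String> client) {
        worker.submit(() -> {
            String answer = server.handle(request);   // synchronous call to the server
            client.accept(answer);                    // forward the answer to the client
            worker.shutdown();
        });
    }
}

public class CallbackDemo {
    static String lastAnswer;

    public static void main(String[] args) throws Exception {
        Server server = request -> "echo:" + request;
        CountDownLatch done = new CountDownLatch(1);
        new Callback().submit(server, "ping", answer -> {
            lastAnswer = answer;   // client receives the answer asynchronously
            done.countDown();
        });
        // The original client thread is free to do other work here.
        done.await();
        System.out.println(lastAnswer);
    }
}
```

Because the client's reply handler runs on the Callback's thread, not the client's, the race conditions described below become possible.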

Context: The code for the design under test is being used because the designer wants to be able to do other work while this message is being answered. Potentially, the original thread will complete its work before the answer is ready.

Forces: Functional tests may pass when executed once, but race conditions can lead to inconsistent results, so repeating the same tests may produce different results. Numerous factors affect the visibility of failures caused by race conditions.

Solution: Construct test suites that execute each test case multiple times. The test suite should adjust factors to make race conditions more visible. The system should be set back to its original state after each test. The tests should include the following (see Figure 8.10):

  • A test in which the server returns the expected result almost immediately.

  • A test in which the client is deleted before the callback fires.

  • A test in which the server throws an exception.

  • A test in which the server is deleted before returning a value.
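A suite driver for such cases can be sketched as follows. The driver repeats one test case many times while varying a timing factor and restoring the system state before each run; the delay values and the shared counter are illustrative stand-ins for the real system.

```java
import java.util.concurrent.*;

// Sketch of a suite driver that repeats each test case under varied
// timing to make race conditions more visible.
public class RepeatedRaceSuite {
    static int sharedState;                       // stand-in for system state

    static void reset() { sharedState = 0; }      // set system back to original state

    // One test case: a server that answers after a configurable delay.
    static boolean serverReturnsPromptly(long delayMillis) throws Exception {
        reset();
        ExecutorService server = Executors.newSingleThreadExecutor();
        Future<Integer> answer = server.submit(() -> {
            Thread.sleep(delayMillis);
            return 42;
        });
        int result = answer.get(1, TimeUnit.SECONDS);
        server.shutdown();
        return result == 42;
    }

    public static void main(String[] args) throws Exception {
        // Execute the same case repeatedly, adjusting the timing factor.
        for (long delay : new long[] {0, 1, 5, 10}) {
            for (int run = 0; run < 20; run++) {
                if (!serverReturnsPromptly(delay)) {
                    throw new AssertionError("failed at delay " + delay);
                }
            }
        }
        System.out.println("all repetitions passed");
    }
}
```

A real driver would vary additional factors (machine load, network latency) and cover the deletion and exception cases in the list above as well.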

Figure 8.9. Adding callbacks to a client/server pattern


Figure 8.10. Testing the distributed callback pattern


Tests for the Generic Distribution Model

Now let's return to the generic distributed architecture and consider some tests. To organize this, we have completed test plans for the Provider and the Requester objects, as shown in Figure 8.11 and Figure 8.12. These plans are not specific to the semantics of any particular application, but they do address the general function of each component.

Figure 8.11. A component test plan for the provider


Figure 8.12. A component test plan for the requester


Testing Every Assumption

The different models of distribution make very different assumptions about the type of application or the deployment environment. These should be the focus of tests. Some of these should be done during the Guided Inspection phase while others will have to wait for an executable.

Language Dependence Issues

The RMI model assumes that the part of the system for which it is being used is written entirely in Java. CORBA makes no assumptions about the languages of interacting requesters and providers. Specific tests should be designed for the points where two components written in different languages interact through the infrastructure. The code of the infrastructure is tested and will handle the transfer correctly; however, the application code may not be correct. Depending on the infrastructure, the programmers have some degree of control (hence the possibility of mistakes exists) and must manually ensure that the data types used in the two classes are compatible. The documentation for the infrastructure may do a less than perfect job of explaining what is possible. We recently experienced errors when passing an array between a Java requester and a C++ provider due to incorrect documentation. The test cases not only detected a failure, but they also provided a pointer to the cause of the problem.

During guided inspection, the inspectors should determine that the correct mappings are being used. CORBA, for example, uses a set of types that are very compatible with most C++ types. The variation between CORBA types and Java types is much greater. Java also does not directly provide support for "out" parameters. The inspection should determine that return objects are being handled properly.
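Because Java lacks "out" parameters, the IDL-to-Java mapping wraps them in holder classes whose public field the callee fills in. The hand-written holder below mimics the shape of a generated one; the `divide` operation and its IDL signature are illustrative, not from any real ORB.

```java
// A hand-written holder in the style of the classes the IDL compiler
// generates for "out" parameters. The inspection should confirm that
// such holders are read only after the call has filled them in.
class IntHolder {
    public int value;              // filled in by the callee
}

public class OutParameterDemo {
    // Hypothetical IDL:
    //   void divide(in long a, in long b, out long quotient, out long remainder)
    static void divide(int a, int b, IntHolder quotient, IntHolder remainder) {
        quotient.value = a / b;
        remainder.value = a % b;
    }

    public static void main(String[] args) {
        IntHolder q = new IntHolder();
        IntHolder r = new IntHolder();
        divide(17, 5, q, r);
        System.out.println(q.value + " remainder " + r.value);
    }
}
```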

Platform Independence Issues

Basically, all of the models of distribution are independent of the platform on which they run, although DCOM is used primarily on Intel-compatible platforms at this time. However, the bigger issue of the deployment environment remains critical. Implicit requirements about the size of available memory or processor speed can still cause the software to behave differently on one machine than on another.

One technique we have found useful is to provide deployment tests with a product release. Each user can then run these tests after installation to determine whether the application is operating correctly. We will discuss this in more detail in Chapter 9.

Infrastructure Tests

The infrastructure delivered from a vendor is "trusted" code that will not be subjected to detailed testing. However, there are situations beyond the control of the vendor that can corrupt the infrastructure. For example, the stubs and skeletons needed in a CORBA implementation are produced automatically by a compiler from the IDL specification. Developers will often edit these default implementations; once edited, this is no longer trusted code. There should be tests that at least exercise all of the modified code.

Compatibility Tests

When a new version of the infrastructure is released, compatibility tests should be run to determine if modifications in the application are required. This is usually done by a designated group on a project. This is the same type of testing needed for new versions of frameworks or even tools.

Testing the Recovery of Failures

One of the critical differences in a distributed system is the possibility of partial failure due to the breakdown of one of the machines hosting the system. As a part of the deployment testing effort, the following type of test case should be built using the distributed test harness illustrated in Figure 8.8:

  1. Construct a system configuration in which there is a "main" machine running the locator portion of the infrastructure (this may not be possible for all types of systems), and instantiate a server on a specific machine.

  2. Once the server has registered with the infrastructure, have the test driver on the server's machine display a dialog box requesting that the tester remove the network cable from that machine.

  3. When the tester selects OK on the dialog box, this test driver sends a message to the main test driver.

  4. The main test driver then initiates a sequence in which the application attempts to contact the server that is now unavailable.

The ability to recognize that the server is not available and to handle the failure gracefully is one of the implicit requirements we discussed in Implicit Tests, on page 288. The correctness of the implementation relies on the experience of the individual developer as opposed to detailed specifications.
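The check the main test driver performs can be sketched as follows. `ServerUnavailableException` and the `Locator` interface are illustrative stand-ins for the infrastructure; the point is that the requester must degrade gracefully rather than crash when the call fails.

```java
// Sketch of the recovery check run after the network cable is pulled.
class ServerUnavailableException extends Exception {
    ServerUnavailableException(String m) { super(m); }
}

interface Locator {
    String call(String serverName, String request) throws ServerUnavailableException;
}

public class PartialFailureTest {
    // The requester under test: returns a user-visible status instead of
    // propagating the infrastructure failure.
    static String contactServer(Locator locator, String name) {
        try {
            return locator.call(name, "status");
        } catch (ServerUnavailableException e) {
            return "SERVER UNAVAILABLE - please retry later"; // graceful handling
        }
    }

    public static void main(String[] args) {
        // Simulate the unplugged machine: every call fails.
        Locator dead = (server, request) -> {
            throw new ServerUnavailableException(server + " unreachable");
        };
        String status = contactServer(dead, "inventoryServer");
        System.out.println(status);
    }
}
```

In the real harness the `Locator` would be the actual infrastructure rather than a simulated one; the assertion is the same: the application recognizes the failure and reports it rather than hanging or crashing.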

Dynamic Modification of Infrastructure

CORBA infrastructure implementations provide means by which the infrastructure can be modified during program execution. One vendor, for example, provides the ability to add or remove "filters" from the pathway between requester and provider during execution. These modifications change the configuration of the system and can change its timing and execution path. Since these modifications usually occur in specific situations, tests should be constructed that exercise each possible configuration, given the dynamic components that are available.

Logic-Specific Test Cases

The types of logic defects that can occur in a distributed system are not that different from those in a sequential system, with a couple of exceptions.

Different Sequences of Events

With asynchronous messages between processes, events may occur in a variety of sequences. A requester may send several requests in a short period of time and not wait for any of them to complete. The order in which these requests return can vary considerably from one execution to another. If the design assumption is that it makes no difference in what order the replies are received, the testing obligation is to test as many of the combinations as possible. The statistical sampling techniques discussed in Chapter 6 can be used to determine the minimum number of possible tests.
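When the design claims the reply order makes no difference, a test can feed every permutation of the replies to the requester's handler and check that the final state is identical. The three-reply aggregate below is an illustrative stand-in for the real requester logic; with more replies, the sampling techniques from Chapter 6 replace exhaustive permutation.

```java
import java.util.*;

// Checks that the handler under test is order-independent by running it
// on every permutation of the same set of replies.
public class ReplyOrderTest {
    // Handler under test: accumulates replies; claimed to be order-independent.
    static int aggregate(List<Integer> repliesInArrivalOrder) {
        int total = 0;
        for (int r : repliesInArrivalOrder) total += r;
        return total;
    }

    // Generates all permutations of items into out.
    static void permute(List<Integer> items, int k, List<List<Integer>> out) {
        if (k == items.size()) { out.add(new ArrayList<>(items)); return; }
        for (int i = k; i < items.size(); i++) {
            Collections.swap(items, k, i);
            permute(items, k + 1, out);
            Collections.swap(items, k, i);
        }
    }

    public static void main(String[] args) {
        List<List<Integer>> orders = new ArrayList<>();
        permute(new ArrayList<>(List.of(10, 20, 30)), 0, orders);
        int expected = aggregate(orders.get(0));
        for (List<Integer> order : orders) {
            if (aggregate(order) != expected) {
                throw new AssertionError("order-dependent result for " + order);
            }
        }
        System.out.println(orders.size() + " orderings checked");
    }
}
```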

Requested Object Unavailable

Many systems allow users to enter the names of providers or other resources. Users may misspell names, omit a portion of a name, or request a resource that was once available but no longer exists. This is certainly a common occurrence with Internet browsers. It is slightly different from the previous case, in which the object is registered but unavailable due to machine failure: in the partial failure case the infrastructure returns a null pointer, whereas in this case the infrastructure may throw an exception. This type of fault, and the test cases to detect it, only makes sense when the provider's identification is acquired dynamically. The testing objective here is to determine whether the exception is caught in an appropriate location in the requester, and whether the application can abort the operation gracefully and give the user another chance to supply the address of the provider or some other appropriate response.
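The expected requester behavior can be sketched as follows. `NameNotFoundException` and the registry map are illustrative, not a real ORB naming API; the point under test is that the exception is caught and the user gets another chance.

```java
import java.util.*;

// Sketch of the name-lookup case: the infrastructure throws when the
// requested provider is not registered, and the requester catches the
// exception and prompts for another name.
class NameNotFoundException extends Exception {
    NameNotFoundException(String m) { super(m); }
}

public class LookupRetryTest {
    static final Map<String, String> registry = Map.of("printServer", "host-a:9001");

    static String resolve(String name) throws NameNotFoundException {
        String address = registry.get(name);
        if (address == null) throw new NameNotFoundException(name);
        return address;
    }

    // Tries each name the "user" enters until one resolves; returns null
    // if the user gives up.
    static String resolveWithRetry(Iterator<String> userInput) {
        while (userInput.hasNext()) {
            try {
                return resolve(userInput.next());
            } catch (NameNotFoundException e) {
                // abort this attempt gracefully; loop prompts for another name
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // The first entry is misspelled; the second succeeds.
        String address = resolveWithRetry(List.of("printSever", "printServer").iterator());
        System.out.println(address);
    }
}
```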

Test Case Summary

Use a test suite that achieves the following coverage:

  1. every method of each standard interface

  2. every SYN-path

  3. every logical control path

Apply the test suite repeatedly using variations in the following factors:

  1. load of applications running on the same systems

  2. load of user input into the overall system

  3. connections between machines

  4. configurations of the infrastructure
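The factor variations above can be driven mechanically: enumerate the cross product of the factor levels and run the whole suite once per combination. The factor names and levels below are illustrative, and `runFullSuite` is a hypothetical hook for the real suite driver.

```java
import java.util.*;

// Enumerates combinations of test-environment factors so the suite can
// be applied repeatedly under each one.
public class FactorSweep {
    public static void main(String[] args) {
        String[] machineLoad = {"idle", "loaded"};      // load on the same systems
        String[] userLoad    = {"single-user", "peak"}; // user input into the system
        String[] network     = {"LAN", "WAN"};          // connections between machines
        List<String> combinations = new ArrayList<>();
        for (String m : machineLoad)
            for (String u : userLoad)
                for (String n : network) {
                    combinations.add(m + "/" + u + "/" + n);
                    // runFullSuite(m, u, n);  // hypothetical suite driver
                }
        System.out.println(combinations.size() + " configurations");
    }
}
```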



A Practical Guide to Testing Object-Oriented Software
ISBN: 0201325640