Testing the Pieces


Recall from Chapter 2, "The Software Development Process," the various models for software development. The big-bang model was the easiest but the most chaotic. Everything was put together at once and, with fingers crossed, the team hoped that it all worked and that a product would be born. By now you've probably deduced that testing in such a model would be very difficult. At most, you could perform dynamic black-box testing, taking the near-final product as one entire blob and exploring it to see what you could find.

You've learned that this approach is very costly because the bugs are found late in the game. From a testing perspective, there are two reasons for the high cost:

  • It's difficult and sometimes impossible to figure out exactly what caused the problem. The software is a huge Rube Goldberg machine that doesn't work: the ball drops in one side, but buttered toast and hot coffee don't come out the other. There's no way to know which little piece is broken and causing the entire contraption to fail.

  • Some bugs hide others. A test might fail. The programmer confidently debugs the problem and makes a fix, but when the test is rerun, the software still fails. So many problems were piled one on top of the other that it's impossible to get to the core fault.

Unit and Integration Testing

The way around this mess is, of course, to never have it happen in the first place. If the code is built and tested in pieces and gradually put together into larger and larger portions, there won't be any surprises when the entire product is linked together (see Figure 7.3).

Figure 7.3. Individual pieces of code are built up and tested separately, and then integrated and tested again.


Testing that occurs at the lowest level is called unit testing or module testing. As the units are tested and the low-level bugs are found and fixed, they are integrated and integration testing is performed against groups of modules. This process of incremental testing continues, putting together more and more pieces of the software until the entire product (or at least a major portion of it) is tested at once in a process called system testing.

With this testing strategy, it's much easier to isolate bugs. When a problem is found at the unit level, the problem must be in that unit. If a bug is found when multiple units are integrated, it must be related to how the modules interact. Of course, there are exceptions to this, but by and large, testing and debugging are much more efficient than testing everything at once.

There are two approaches to this incremental testing: bottom-up and top-down. In bottom-up testing (see Figure 7.4), you write your own modules, called test drivers, that exercise the modules you're testing. They hook in exactly the same way that the future real modules will. These drivers send test-case data to the modules under test, read back the results, and verify that they're correct. You can very thoroughly test the software this way, feeding it all types and quantities of data, even ones that might be difficult to send if done at a higher level.

Figure 7.4. A test driver can replace the real software and more efficiently test a low-level module.


Top-down testing may sound like big-bang testing on a smaller scale. After all, if the higher-level software is complete, it must be too late to test the lower modules, right? Actually, that's not quite true. Look at Figure 7.5. In this case, a low-level interface module is used to collect temperature data from an electronic thermometer. A display module sits right above the interface, reads the data from the interface, and displays it to the user. To test the top-level display module, you'd need blow torches, water, ice, and a deep freeze to change the temperature of the sensor and have that data passed up the line.

Figure 7.5. A test stub sends test data up to the module being tested.


Rather than test the temperature display module by attempting to control the temperature of the thermometer, you could write a small piece of code called a stub that acts just like the interface module by feeding "fake" temperature values from a file directly to the display module. The display module would read the data and show the temperature just as though it was reading directly from a real thermometer interface module. It wouldn't know the difference. With this test stub configuration, you could quickly run through numerous test values and validate the operation of the display module.

An Example of Module Testing

A common function available in many compilers is one that converts a string of ASCII characters into an integer value.

This function takes a string of numbers, - or + signs, and possibly extraneous characters such as spaces and letters, and converts it to a numeric value. For example, the string "12345" gets converted to the number 12,345. It's a fairly common function that's often used to process values that a user might type into a dialog box, such as someone's age or an inventory count.

The C language function that performs this operation is atoi(), which stands for "ASCII to Integer." Figure 7.6 shows the specification for this function. If you're not a C programmer, don't fret. Except for the first line, which shows how to make the function call, the spec is in English and could be used for defining the same function for any computer language.

Figure 7.6. The specification sheet for the C language atoi() function.


If you're the software tester assigned to perform dynamic white-box testing on this module, what would you do?

First, you would probably decide that this module looks like a bottom module in the program, one that's called by higher-up modules but doesn't call anything itself. You could confirm this by looking at the internal code. If this is true, the logical approach is to write a test driver to exercise the module independently from the rest of the program.

This test driver would send test strings that you create to the atoi() function, read back the return values for those strings, and compare them with your expected results. The test driver would most likely be written in the same language as the function (in this case, C), but it's also possible to write the driver in another language as long as it can interface with the module you're testing.

This test driver can take on several forms. It could be a simple dialog box, as shown in Figure 7.7, that you use to enter test strings and view the results. Or it could be a standalone program that reads test strings and expected results from a file. The dialog box, being user driven, is very interactive and flexible; it could even be given to a black-box tester to use. But the standalone driver can be very fast, reading and writing test cases directly from a file.

Figure 7.7. A dialog box test driver can be used to send test cases to a module being tested.


Next, you would analyze the specification to decide what black-box test cases you should try and then apply some equivalence partitioning techniques to reduce the total set (remember Chapter 5?). Table 7.1 shows examples of a few test cases with their input strings and expected output values. This table isn't intended to be a comprehensive list.

Table 7.1. Sample ASCII to Integer Conversion Test Cases

Input String    Output Integer Value
"1"             1
"-1"            -1
"+1"            1
"0"             0
"-0"            0
"+0"            0
"1.2"           1
"2-3"           2
"abc"           0
"a123"          0
...and so on


Lastly, you would look at the code to see how the function was implemented and use your white-box knowledge of the module to add or remove test cases.

NOTE

Creating your black-box test cases based on the specification, before your white-box cases, is important. That way, you are truly testing what the module is intended to do. If you first create your test cases from a white-box view of the module, by examining the code, you will be biased into creating test cases based on how the module works. The programmer could have misinterpreted the specification, and your test cases would then be wrong. They would be precise, perfectly testing the module as written, but they wouldn't be accurate, because they wouldn't be testing the module's intended operation.


Adding and removing test cases based on your white-box knowledge is really just a further refinement of the equivalence partitions, done with inside information. Your original black-box test cases might have assumed an internal ASCII table that would make cases such as "a123" and "z123" different and important. After examining the software, you could find that instead of an ASCII table, the programmer simply checked for numbers, - and + signs, and blanks. With that information, you might decide to remove one of these cases because both of them are in the same equivalence partition.

With close inspection of the code, you could discover that the handling of the + and - signs looks a little suspicious. You might not even understand how it works. In that situation, you could add a few more test cases with embedded + and - signs, just to be sure.


