Interpreting Test Results


Perl has a wealth of good testing modules that interoperate smoothly through a common protocol (the Test Anything Protocol, or TAP) and common libraries (Test::Builder). You'll probably never have to write your own testing protocol, but understanding TAP will help you interpret your test results and write better tests.


Note: All of the test modules in this book produce TAP output. Test::Harness interprets that output. Think of TAP as a minilanguage about test successes and failures.

How do I do that?

Save the following program as sample_output.pl:

    #!perl

    print <<END_HERE;
    1..9
    ok 1
    not ok 2
    #     Failed test (t/sample_output.t at line 10)
    #          got: '2'
    #     expected: '4'
    ok 3
    ok 4 - this is test 4
    not ok 5 - test 5 should look good too
    not ok 6 # TODO fix test 6
    # I haven't had time to add the feature for test 6
    ok 7 # skip these tests never pass in examples
    ok 8 # skip these tests never pass in examples
    ok 9 # skip these tests never pass in examples
END_HERE


Note: Using Windows and seeing an error about END_HERE? Add a newline to the end of sample_output.pl, then read perldoc perlfaq8.

Now run it through prove (see "Running Tests," earlier in this chapter):

    $ prove sample_output.pl
    sample_output....FAILED tests 2, 5
            Failed 2/9 tests, 77.78% okay (less 3 skipped tests: 4 okay, 44.44%)
    Failed Test      Stat Wstat Total Fail  Failed  List of Failed
    ------------------------------------------------------------------------
    sample_output.pl                9    2  22.22%  2 5
    3 subtests skipped.
    Failed 1/1 test scripts, 0.00% okay. 2/9 subtests failed, 77.79% okay.

What just happened?

prove interpreted the output of the script as it would the output of a real test. In fact, there's no effective difference: a real test might produce that exact output.

The lines of the test correspond closely to the results. The first line of the output is the test plan. In this case, it tells the harness to plan to run 9 tests. The second line of the report shows that 9 tests ran, but two failed: tests 2 and 5, both of which start with not ok .
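As a sketch of how those lines are normally produced, a Test::More script emits the plan line from its tests argument and one ok/not ok line per assertion (the test names below are invented for illustration):

```perl
use Test::More tests => 2;    # emits the plan line "1..2" first

# Each assertion prints one TAP line, "ok N" or "not ok N",
# with the optional test name after a dash.
ok( 1 + 1 == 2,     'addition works'  );  # prints: ok 1 - addition works
ok( 'TAP' eq 'TAP', 'strings compare' );  # prints: ok 2 - strings compare
```

Run through prove, these two passing assertions report as 2/2 subtests passed.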

The report also mentions three skipped tests. These are tests 7 through 9, all of which contain the text # skip . They count as successes, not failures. (See "Skipping Tests" in Chapter 2 to learn why.)

That leaves one curious line, test 6. It starts with not ok , but it does not count as a failure because of the text # TODO . The test author expected this test to fail but left it in and marked it appropriately. (See "Marking Tests as TODO" in Chapter 2.)
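Both directives usually come from Test::More's SKIP and TODO blocks rather than from hand-written TAP; a minimal sketch, with invented test names and reasons:

```perl
use Test::More tests => 4;

pass( 'a normal test' );    # ok 1 - a normal test

SKIP: {
    # skip() marks the next two tests with "# skip" in the TAP
    # output; the harness counts them as successes
    skip( 'these tests never pass in examples', 2 );
    pass( 'skipped test'         );
    pass( 'another skipped test' );
}

TODO: {
    # Tests inside a TODO block appear as "not ok N # TODO ..."
    # and do not count as failures
    local $TODO = 'fix test 4';
    fail( 'an expected failure' );
}
```

Chapter 2 covers both constructs in detail; this is only meant to show where the # skip and # TODO text in the sample output comes from.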

The test harness ignored all of the rest of the output, which consists of developer diagnostics. When developing, it's often useful to look at the test output in its entirety, whether by using prove -v or running the tests directly through perl (see "Running Tests," earlier in this chapter). This prevents the harness from suppressing the diagnostic output, as found with the second test in the sample output.

What about...

Q:

What happens when the actual number of tests differs from the number planned?

A:

Running the wrong number of tests counts as a failure. Save the following test as too_few_tests.t :

    use Test::More tests => 3;

    pass( 'one test'  );
    pass( 'two tests' );

Run it with prove :

    $ prove too_few_tests.t
    too_few_tests....ok 2/3# Looks like you planned 3 tests but only ran 2.
    too_few_tests....dubious
            Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 3
            Failed 1/3 tests, 66.67% okay
    Failed Test     Stat Wstat Total Fail  Failed  List of Failed
    ------------------------------------------------------------------------
    too_few_tests.t    1   256     3    2  66.67%  3
    Failed 1/1 test scripts, 0.00% okay. 1/3 subtests failed, 66.67% okay.

Test::More complained about the mismatch between the test plan and the number of tests that actually ran. The same goes for running too many tests. Save the following code as too_many_tests.t :

    use Test::More tests => 2;

    pass( 'one test'    );
    pass( 'two tests'   );
    pass( 'three tests' );

Run it with prove :

    $ prove too_many_tests.t
    too_many_tests....ok 3/2# Looks like you planned 2 tests but ran 1 extra.
    too_many_tests....dubious
            Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 3
            Failed 1/2 tests, 50.00% okay
    Failed Test      Stat Wstat Total Fail  Failed  List of Failed
    ------------------------------------------------------------------------
    too_many_tests.t    1   256     2    1  50.00%  3
    Failed 1/1 test scripts, 0.00% okay. -1/2 subtests failed, 150.00% okay.

This time, the harness interpreted the presence of the third test as a failure and reported it as such. Again, Test::More warned about the mismatch.
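As an aside, newer versions of Test::More (0.88 and later, released well after these examples) let you sidestep plan mismatches entirely by declaring the plan after the fact with done_testing(); a minimal sketch, assuming such a version is installed:

```perl
use Test::More;

pass( 'one test'  );
pass( 'two tests' );

# done_testing() emits the plan line ("1..2" here) based on how
# many tests actually ran, so the plan and the count always agree.
done_testing();
```

A fixed plan still catches the case where the script dies before running every intended test, so declaring tests => N up front remains the stricter choice.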



Perl Testing: A Developer's Notebook
ISBN: 0596100922
Year: 2003
Pages: 107
