Automating Test Runs


Improving code quality is the primary benefit of writing a large test suite, but there are several other benefits, such as encouraging more careful coding and better design. Well-written tests provide feedback on the state of the project. At any point, anyone can run the tests to find out what works and what has broken.

This is valuable enough that, besides encouraging developers to run the test suite at every opportunity while developing, many projects automate their test suites to run unattended at regular intervals, reporting any failures. This smoketesting is highly valuable, as it can catch accidental mistakes as they happen, even if developers forget to run the tests on their machines or check in all of the necessary changes.

How do I do that?

Save the following code as run_smoketest.pl:

 #!perl

 use strict;
 use warnings;

 use constant SENDER    => 'testers@example.com';
 use constant RECIPIENT => 'smoketester@example.com';
 use constant MAILHOST  => 'smtp.example.com';

 use Cwd;
 use SVN::Client;
 use Email::Send;
 use Test::Harness::Straps;

 my $path     = shift || die "Usage:\n$0 <repository_path>\n";
 my $revision = update_repos( $path );
 my $failures = run_tests( $path );

 send_report( $path, $revision, $failures );

 sub update_repos
 {
     my $path = shift;
     my $ctx  = SVN::Client->new( );
     return $ctx->update( $path, 'HEAD', 1 );
 }

 sub run_tests
 {
     my $path  = shift;
     my $strap = Test::Harness::Straps->new( );
     my $cwd   = cwd( );
     chdir( $path );

     my @failures;

     for my $test (<t/*.t>)
     {
         my %results = $strap->analyze_file( $test );
         next if $results{passing};

         push @failures,
         {
             file => $test,
             ok   => $results{ok},
             max  => $results{max},
         };
     }

     chdir( $cwd );
     return \@failures;
 }

 sub send_report
 {
     my ($path, $revision, $failures) = @_;
     return unless @$failures;

     my $message = sprintf( <<END_HEADER, RECIPIENT, SENDER, $path, $revision );
 To: %s
 From: %s
 Subject: Failed Smoketest For %s at Revision %d
 END_HEADER

     for my $failure (@$failures)
     {
         $message .= sprintf( "%s:\n\tExpected: %d\n\tPassed: %d\n",
             @$failure{qw( file max ok )} );
     }

     send( 'SMTP', $message, MAILHOST );
 }


Note: By default, SVN::Client uses cached credentials to log in to the Subversion repository. See its documentation to change this.

Note: The chdir( ) calls exist to set up the testing environment just as if you'd run make test or perl Build test on your own.

Be sure to install recent versions of Test::Harness and Email::Send, as well as Subversion with its Perl bindings. Modify the three constants at the top of the file to reflect your network setup.
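
From the CPAN shell, installing the two modules looks something like the following (the Subversion Perl bindings usually come with your Subversion distribution rather than from CPAN):

```shell
$ perl -MCPAN -e 'install Test::Harness'
$ perl -MCPAN -e 'install Email::Send'
```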

Run the program, passing it the path to a working copy checked out from a Subversion repository. For example:


Note: If you receive svn_path_join errors, remove the trailing slash from the working directory path.
 $  perl run_smoketest.pl ~/dev/repos/Text-WikiFormat/trunk  

If any of the tests fail, you'll receive an email report about the failures:

 To: smoketest@example.com
 From: smoketest_bot@example.com
 Subject: Failed Smoketest at Revision 19

 t/fail.t:
     Expected: 3
     Passed: 2

What just happened?

run_smoketest.pl is three programs at once, with a little bit of glue. First, it's a very simple Subversion client, thanks to the SVN::Client module. Second, it's a test harness, thanks to Test::Harness::Straps (see "Writing a Testing Harness," earlier in this chapter). Third, it's an email reporter, using Email::Send .

The program starts by taking the path to an existing Subversion working copy from the command line. It then calls update_repos( ), which creates a new SVN::Client object and updates the working copy with the absolute freshest code (denoted by the symbolic HEAD revision in both CVS and Subversion), recursively updating all directories beneath it. update_repos( ) returns the number of this revision.


Note: Many other revision control systems have Perl bindings, but you can also use their command-line tools directly from your programs .
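
For example, here's a sketch of an equivalent update_repos( ) that shells out to the svn command-line client instead of using the bindings. It assumes the svn binary is in your path and that the regular expression matching the client's usual "Updated to revision N." or "At revision N." report is an acceptable way to recover the revision number:

```perl
#!perl

use strict;
use warnings;

# A sketch of update_repos() built on the svn command-line client
# instead of SVN::Client.  Assumes `svn` is installed and the working
# copy uses cached credentials.
sub update_repos
{
    my $path   = shift;
    my $output = qx( svn update "$path" );
    die "svn update failed: $?\n" if $?;

    # The client prints "Updated to revision N." after changes, or
    # "At revision N." when the working copy was already current.
    my ($revision) = $output =~ /(?:Updated to|At) revision (\d+)\./;
    return $revision;
}
```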

Next, run_tests( ) cycles through each file with the .t extension in the repository's t/ directory. It collects the results of only the failed tests, those for which the passing key is false, and returns a reference to an array of them.
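
To make that filtering concrete, here's a small standalone sketch using hand-built hashes in place of what analyze_file( ) returns. The passing, ok, and max keys are the ones the program relies on; the file names and counts here are invented sample values:

```perl
#!perl

use strict;
use warnings;

# Sample per-test results in the shape run_tests() consumes.
my @results = (
    { file => 't/pass.t', passing => 1, ok => 5, max => 5 },
    { file => 't/fail.t', passing => 0, ok => 2, max => 3 },
);

my @failures;

for my $results (@results)
{
    # Skip tests that passed entirely; record the rest.
    next if $results->{passing};

    push @failures,
    {
        file => $results->{file},
        ok   => $results->{ok},
        max  => $results->{max},
    };
}

# Only t/fail.t remains, with its expected and passed counts.
print "$_->{file}: expected $_->{max}, passed $_->{ok}\n" for @failures;
```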

The program then calls send_report( ) to notify the recipient address about the failures. If there are none, the function returns. Otherwise, it builds up a simple email, reporting each failed test with its name and the number of expected and passing tests. Finally, it sends the message to the specified address, where developers and testers can pore over the results and fix the failures.

What about...

Q:

How do you run only specific tests? What if you have benchmarks and other long-running tests in a different directory?


Note: The Aegis software configuration management system (http://aegis.sourceforge.net/) takes this idea further, requiring all checkins to include tests that fail before the modifications and that pass after them.
A:

Customize the glob pattern in the loop in run_tests( ) to focus on as many or as few tests as you like.
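
For example, assuming your long-running tests live in a separate directory such as t/bench/ (the directory names here are invented), you might write:

```perl
#!perl

use strict;
use warnings;

# In run_tests(), replace <t/*.t> with whatever file list you need.

# Run the normal tests plus a hypothetical t/unit/ directory:
my @tests = ( glob( 't/*.t' ), glob( 't/unit/*.t' ) );

# ...or skip anything kept under a hypothetical t/bench/ directory:
my @fast_tests = grep { $_ !~ m{^t/bench/} } @tests;
```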

Q:

Is it possible to automate the smoketest?

A:

Because run_smoketest.pl takes the repository path on the command line, it can run easily from cron. Beware, though, that Test::Harness::Straps 2.46 and earlier spit out diagnostic information to STDERR . You may need to redirect this to /dev/null or the equivalent to avoid sending messages to yourself.
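
A crontab entry for an hourly smoketest might look like this (the paths are invented examples; the redirection discards the harness's diagnostic chatter on STDERR):

```
0 * * * *  perl /home/tester/bin/run_smoketest.pl /home/tester/repos/trunk 2>/dev/null
```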

Q:

Could the report include other details, such as the diagnostics of each failed test?

A:

The limitation here is in what Test::Harness::Straps provides. Keep watching future releases for more information.

Q:

CVS and Subversion both provide ways to run programs when a developer checks in a change. Can this smoketest run then?

A:

Absolutely! This is an excellent way to ensure that no one can make changes that break the main branch.
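
For Subversion, a minimal post-commit hook sketch might look like the following. Subversion runs hooks/post-commit after every successful commit, passing the repository path and the new revision number as arguments; the file paths here are invented examples:

```
#!/bin/sh
# hooks/post-commit -- run the smoketest against a standing working
# copy after each commit.  REPOS and REV come from Subversion itself.

REPOS="$1"
REV="$2"

# Run in the background so the committing client isn't kept waiting.
perl /home/tester/bin/run_smoketest.pl /home/tester/checkouts/trunk 2>/dev/null &
```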



Perl Testing: A Developer's Notebook
ISBN: 0596100922