Automating Customer Tests


Although many tools can be used to automate tests, we will focus our attention on FIT, which is well suited to the kind of tests we need to perform.

FIT Overview

FIT uses HTML tables to structure customer tests. For example, the following table, taken from the FIT Web site, is a sample customer test for dividing numbers:

| eg.Division |             |            |
| numerator   | denominator | quotient() |
| 1000        | 10          | 100.0000   |
| -1000       | 10          | -100.0000  |
| 1000        | 7           | 142.85715  |
| 1000        | .00001      | 100000000  |
| 4195835     | 3145729     | 1.3338196  |

The way to interpret this table (let's ignore the first two rows for a moment) is to read each row as a separate test. The first two columns are input to the software, and the third column is the expected result. The first data row verifies the equation 1000 / 10 = 100.0000. So, even without understanding how FIT does what it does, you can read and understand the tests. You also do not need to know how FIT runs the test. When FIT runs the script, you get the following result:

| eg.Division |             |            |
| numerator   | denominator | quotient() |
| 1000        | 10          | 100.0000   |
| -1000       | 10          | -100.0000  |
| 1000        | 7           | 142.85715  |
| 1000        | .00001      | 100000000  |
| 4195835     | 3145729     | 1.3338196  |

As part of running the script, the code is exercised and the verification is performed. In this case, all the tests pass, so all the cells in the quotient() column are shaded to indicate success. If there is a failure, the table cell is colored red and the actual value is displayed along with the expected value, which allows the customer to look through the tests at a glance and see which ones pass and which ones don't. (If this were a four-color book, you'd see the green shade of a cell for success and red for failure in the tables on this page.) Also, because this is HTML, you can surround the tests with additional content (text or graphics) that explains what is being tested. This content turns these HTML documents into an executable specification.

Connecting FIT to the Implementation

To be able to specify your tests in this manner, you need a bridge between the FIT framework and your software. In the previous table, the first row specifies a class, eg.Division, which is used as the bridge between FIT and the actual implementation. The second row specifies the methods on the eg.Division class that FIT calls during the execution of the test.
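To make the connection concrete, here is a sketch of what a fixture like eg.Division looks like, following the classic FIT division example (the original on the FIT Web site is Java; the C# rendering here is ours). The public fields bind to the input columns, and the method with the matching name computes the checked column:

public class Division : ColumnFixture
{
   // FIT binds these public fields to the "numerator" and
   // "denominator" input columns.
   public double numerator;
   public double denominator;

   // FIT calls this method for each row and compares the result
   // against the value in the "quotient()" column.
   public double quotient()
   {
      return numerator / denominator;
   }
}

In the next section, we describe in detail how you can take the manual script from the previous section and convert it into an automated script run by FIT using our Web service.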

Automation with FIT

The first step in the script is to verify that we can retrieve a recording given its id. Following is the translation of that manual step into a FIT table.

| fit.ActionFixture |                              |      |
| start             | CustomerTests.CatalogAdapter |      |
| enter             | FindByRecordingId            | 4    |
| check             | Found                        | true |

This test uses a class contained in the FIT framework named ActionFixture, which is a simple controller that parses each row and passes the values off to the appropriate handler. The header of the table defines the controller used to run the test. Each subsequent row in the table defines a step in the script. This test script uses the start, enter, and check commands that are part of ActionFixture.

Start Command

The start step has a special meaning: it initializes a class that adapts the application classes to run inside the FIT framework. In this example, the name of this special class is CustomerTests.CatalogAdapter, which has to be written to support running the FIT tests and should not be considered a part of the application itself. Because it exists strictly for FIT integration, it should not contain any application-specific logic beyond what is needed to test.
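Conceptually, start resolves the named class and instantiates it; the enter and check rows that follow are then dispatched to that instance. A rough sketch of the idea (this is not FIT's actual source, just an illustration of the mechanism):

public class StartCommandSketch
{
   public static object Start(string typeName)
   {
      // Resolve the fixture type by name and create an instance;
      // subsequent enter/check rows are dispatched to this instance
      // via reflection.
      System.Type fixtureType = System.Type.GetType(typeName);
      return System.Activator.CreateInstance(fixtureType);
   }
}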

Enter Action

The next row in the table begins with the enter action. This step is a simple action to be taken against the application via the adapter. In this case, the script specifies a call to the FindByRecordingId method on the CatalogAdapter to retrieve the recording with a recording id equal to 4. This method changes the state of the CatalogAdapter so that the retrieved recording is available to further steps.

Check Action

The last row in the table begins with the check action, which verifies the correctness of the application's response to the previous input. There are two parts to each check operation:

  • What to check: the name of the method defined on the adapter that is used to verify the application's response. In our example, we want to verify that the recording is correctly retrieved; our CatalogAdapter class defines a Found method that can be called to verify that the recording was successfully retrieved.

  • What value the user expects to receive from the application: in our example, we expect the Found method to return true, indicating that the recording with the given recording id was found.

CatalogAdapter

The test script shown previously specified a class named CatalogAdapter. Its job is to adapt the calls that FIT makes into the actual calls to the Web service.

namespace CustomerTests
{
   public class CatalogAdapter : Fixture
   {
      private CatalogGateway gateway = new CatalogGateway();
      private static RecordingDto recording;

      // Invoked by the "enter" row: retrieves the recording and keeps it
      // for the verification steps that follow.
      public void FindByRecordingId(long id)
      {
         recording = gateway.FindByRecordingId(id);
      }

      // Invoked by the "check" row: reports whether the retrieval succeeded.
      public bool Found()
      {
         return recording != null;
      }
   }
}

This class seems to fit the requirement in that it simply adapts our existing software to the FIT framework. Notice that the names of the methods have to match the names specified in the test script. Otherwise, the test results will show exceptions for these rows.

Running the Script

We use the FileRunner class in FIT to run the test script. Figure 7-2 shows the sequence of messages exchanged between FIT, the CatalogAdapter, and the Web service during the execution of the test script:

Figure 7-2: Message exchange
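FileRunner reads the test script, executes every fixture table it finds, and writes the marked-up results to an output file. A minimal sketch of a driver, assuming the .NET port mirrors the Java FileRunner API (which takes the input file and the output file as its two arguments):

public class RunCatalogScript
{
   public static void Main(string[] args)
   {
      // Assumption: the .NET port keeps the Java-style API, where the
      // first argument is the test script (CatalogTest.html) and the
      // second is the result page (CatalogResult.html).
      new fit.FileRunner().run(args);
   }
}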

Running this test yields the following result:

| fit.ActionFixture |                              |      |
| start             | CustomerTests.CatalogAdapter |      |
| enter             | FindByRecordingId            | 4    |
| check             | Found                        | true |

In addition, FIT provides a test summary section that gives statistics related to the execution of the test script:

| fit.Summary      |                                           |
| run elapsed time | 0:00.94                                   |
| run date         | 12/30/2003 12:15:39 PM                    |
| output file      | CatalogResult.html                        |
| input update     | 12/30/2003 11:53:30 AM                    |
| input file       | CatalogTest.html                          |
| fixture path     | .;build;build\bin;obj;bin;bin\Debug;bin\Release;C:\book-example\v2\fit.runner\bin\Debug |
| counts           | 1 right, 0 wrong, 0 ignored, 0 exceptions |

Continuing with the Automation

Now that we can retrieve recordings from the database using FIT scripts, we need to augment the script to also check the content of the recording.

| fit.ActionFixture |                              |                   |
| start             | CustomerTests.CatalogAdapter |                   |
| enter             | FindByRecordingId            | 4                 |
| check             | Title                        | The Rising        |
| check             | ArtistName                   | Bruce Springsteen |
| check             | ReleaseDate                  | 7/30/2002         |
| check             | LabelName                    | Sony              |
| check             | Duration                     | 72:51             |

This script is very much like the previous one. To get this script to work, we added the following methods to the CatalogAdapter: Title, ArtistName, ReleaseDate, LabelName, and Duration. These methods simply return the appropriate value from the RecordingDto object that is retrieved by the enter action. The modifications result in the following changes to the CatalogAdapter:

namespace CustomerTests
{
   public class CatalogAdapter : Fixture
   {
      private CatalogGateway gateway = new CatalogGateway();
      private static RecordingDto recording;

      public void FindByRecordingId(long id)
      {
         recording = gateway.FindByRecordingId(id);
      }

      public bool Found()
      {
         return recording != null;
      }

      public string Title()
      {
         return recording.title;
      }

      public string ArtistName()
      {
         return recording.artistName;
      }

      public string ReleaseDate()
      {
         return recording.releaseDate;
      }

      public string LabelName()
      {
         return recording.labelName;
      }

      // Note the conversion: totalRunTime is numeric, but the script
      // expects a formatted duration string.
      public string Duration()
      {
         return recording.totalRunTime.ToString();
      }
   }
}

The adapter maintains its rather mundane existence. The only thing of interest, or concern, is that the Duration method implies a data conversion from an integer to a string. This seems odd, and in fact when we run the test we get the following result:

| fit.ActionFixture |                              |                              |
| start             | CustomerTests.CatalogAdapter |                              |
| enter             | FindByRecordingId            | 4                            |
| check             | Title                        | The Rising                   |
| check             | ArtistName                   | Bruce Springsteen            |
| check             | ReleaseDate                  | 7/30/2002                    |
| check             | LabelName                    | Sony                         |
| check             | Duration                     | 72:51 expected / 4371 actual |

There is a failure. The Duration specified in the test script was 72:51, but the number we get back from the adapter is 4371. This is clearly a problem that will need to be addressed. For now, though, let's continue with the rest of the conversion process, and we promise to come back later and correct this problem. There is no way we can forget, because every time we run the test it fails.
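As a quick sanity check, 72 minutes and 51 seconds is 72 * 60 + 51 = 4371 seconds, exactly the value the adapter returned, so the underlying data is right and only the formatting is missing. A minimal sketch of the conversion the adapter will eventually need, assuming totalRunTime is a count of seconds:

// Formats a duration given in seconds as minutes:seconds,
// for example 4371 -> "72:51".
public static string FormatDuration(int totalSeconds)
{
   return string.Format("{0}:{1:00}", totalSeconds / 60, totalSeconds % 60);
}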

Verifying Track Information

The next part of the manual script verifies track information. It turns out that FIT has a different type of fixture, a RowFixture, that works best for this type of test. From the FIT Web site: "A RowFixture compares rows in the test data to objects in the system under test. Methods are invoked on the objects and returned values compared to those in the table. An algorithm matches rows with objects based on one or more keys. Objects may be missing or in surplus and are so noted." This type of fixture fits very well with looking up the track information associated with a particular recording. The track tests from the manual test script, expressed as a RowFixture, look like this:

| CustomerTests.TrackDisplay |                   |             |            |
| Title()                    | ArtistName()      | GenreName() | Duration() |
| Lonesome Day               | Bruce Springsteen | Rock        | 4:08       |
| Into The Fire              | Bruce Springsteen | Rock        | 5:04       |
| Waitin On A Sunny Day      | Bruce Springsteen | Rock        | 4:18       |
| The Nothing Man            | Bruce Springsteen | Rock        | 4:23       |
| Let's Be Friends           | Bruce Springsteen | Rock        | 4:21       |
| The Fuse                   | Bruce Springsteen | Rock        | 5:37       |
| Further On (Up The Road)   | Bruce Springsteen | Rock        | 3:52       |
| Mary's Place               | Bruce Springsteen | Rock        | 6:03       |
| The Rising                 | Bruce Springsteen | Rock        | 4:50       |
| My City Of Ruins           | Bruce Springsteen | Rock        | 5:00       |
| Empty Sky                  | Bruce Springsteen | Rock        | 3:34       |
| Worlds Apart               | Bruce Springsteen | Rock        | 6:07       |
| Paradise                   | Bruce Springsteen | Rock        | 5:39       |
| You're Missing             | Bruce Springsteen | Rock        | 5:11       |
| Countin On A Miracle       | Bruce Springsteen | Rock        | 4:24       |

This takes a bit of explanation. We need to implement another adapter, one that inherits from RowFixture, to retrieve track information from the recording. Unfortunately, this script does not stand alone; it requires that a recording was retrieved in a previous step. In this case, the script will be run after the previous one. The TrackDisplay class is shown as follows:

public class TrackDisplay : RowFixture
{
   // Tells RowFixture which type's methods correspond to the columns
   // of the table.
   protected override Type getTargetClass()
   {
      return typeof(CustomerTests.TrackDisplayAdapter);
   }

   // Returns the objects that RowFixture compares against the rows
   // of the table.
   public override object[] query()
   {
      TrackDto[] dtoTracks = CatalogAdapter.Tracks();
      TrackDisplayAdapter[] adapters =
         new TrackDisplayAdapter[dtoTracks.Length];
      for (int index = 0; index < dtoTracks.Length; index++)
      {
         adapters[index] =
            new TrackDisplayAdapter(dtoTracks[index]);
      }
      return adapters;
   }
}

The query method returns an array of objects; it takes no parameters but must somehow return the track information associated with a recording. In this case, it makes a call to the CatalogAdapter (which retrieved the recording in the previous test) to get the tracks associated with the recording. The query method also converts the TrackDto objects into TrackDisplayAdapter objects, just as the CatalogAdapter adapts the RecordingDto.

public class TrackDisplayAdapter
{
   private TrackDto dto;

   public TrackDisplayAdapter(TrackDto trackDto)
   {
      dto = trackDto;
   }

   public string Title()
   {
      return dto.title;
   }

   public string Duration()
   {
      return dto.duration.ToString();
   }

   public string GenreName()
   {
      return dto.genreName;
   }

   public string ArtistName()
   {
      return dto.artistName;
   }
}

When we run the tests, we get the following result:

| CustomerTests.TrackDisplay |                   |             |                            |
| Title()                    | ArtistName()      | GenreName() | Duration()                 |
| Lonesome Day               | Bruce Springsteen | Rock        | 4:08 expected / 248 actual |
| Into The Fire              | Bruce Springsteen | Rock        | 5:04 expected / 304 actual |
| Waitin On A Sunny Day      | Bruce Springsteen | Rock        | 4:18 expected / 258 actual |
| The Nothing Man            | Bruce Springsteen | Rock        | 4:23 expected / 263 actual |
| Let's Be Friends           | Bruce Springsteen | Rock        | 4:21 expected / 261 actual |
| The Fuse                   | Bruce Springsteen | Rock        | 5:37 expected / 337 actual |
| Further On (Up The Road)   | Bruce Springsteen | Rock        | 3:52 expected / 232 actual |
| Mary's Place               | Bruce Springsteen | Rock        | 6:03 expected / 363 actual |
| The Rising                 | Bruce Springsteen | Rock        | 4:50 expected / 290 actual |
| My City Of Ruins           | Bruce Springsteen | Rock        | 5:00 expected / 300 actual |
| Empty Sky                  | Bruce Springsteen | Rock        | 3:34 expected / 214 actual |
| Worlds Apart               | Bruce Springsteen | Rock        | 6:07 expected / 367 actual |
| Paradise                   | Bruce Springsteen | Rock        | 5:39 expected / 339 actual |
| You're Missing             | Bruce Springsteen | Rock        | 5:11 expected / 311 actual |
| Countin On A Miracle       | Bruce Springsteen | Rock        | 4:24 expected / 284 actual |

All the duration checks failed, which looks like the previous test, in which the check of the recording's overall duration failed. Just as before, we make a note of it and will return to it as soon as the script is fully automated.

Verifying Review Information

The review information is similar to the track information. In fact, we created a new RowFixture called ReviewDisplay and an associated adapter named ReviewDisplayAdapter. (Their implementations are left as an exercise for the reader.) The problem this test exposes is related to the variability of the data. The recording and track information in the previous tests does not change, but the review information does: it is updated by reviewers all the time. Clearly, we do not want to write customer tests that need to be updated at random times. So what should we do? One option is to create a recording in the database as part of the script and remove it after the test is complete. Another option is to use a separate database that is a snapshot of the production database and is not constantly modified by users. The last option is to create a fake recording in the production database; this recording has well-known content and can't be changed by the users.

We choose to insert a well-known recording in the database. It is the simplest alternative at the moment, and it allows us to use the actual deployment environment to run our tests. This solution is not free of risk, however: someone could delete or change the fake recording, and even though we believe its id will never be assigned, it could someday overlap with a real recording, which would make our tests fail. There does not seem to be a perfect answer to this problem. If we chose a separate database, we would have to employ some process to keep the schemas synchronized, and so on. Given our solution, the script for verifying review information is as follows:

| fit.ActionFixture |                              |        |
| start             | CustomerTests.CatalogAdapter |        |
| enter             | FindByRecordingId            | 100001 |
| check             | Found                        | True   |
| check             | AverageRating                | 2      |

| CustomerTests.ReviewDisplay |                        |          |
| ReviewerName()              | Content()              | Rating() |
| Sample Reviewer             | Inspiration was low    | 1        |
| Example Reviewer            | Could be better        | 3        |
| Test Reviewer               | I thought it was great | 4        |

When we run the tests for reviews, they all pass.
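For this approach to work, the well-known recording (id 100001) has to be seeded once, outside the FIT scripts. A hedged sketch of such a one-time setup step using ADO.NET follows; the connection string, table, and column names are hypothetical, because the actual schema is not shown here:

using System.Data.SqlClient;

public class SeedWellKnownRecording
{
   public static void Main()
   {
      // Hypothetical schema and connection string; adjust to the real
      // catalog database. Inserts the fake recording that the review
      // tests depend on.
      using (SqlConnection connection = new SqlConnection(
         "server=(local);database=catalog;Integrated Security=SSPI"))
      {
         connection.Open();
         SqlCommand command = connection.CreateCommand();
         command.CommandText =
            "INSERT INTO Recording (Id, Title) VALUES (100001, 'Fake Recording')";
         command.ExecuteNonQuery();
      }
   }
}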

Invalid ID Script

The last part of the script that needs to be automated is the one that attempts to retrieve a recording that is not present in the database. Just as in the review script, the id chosen does not exist in the database and should never be assigned to a valid recording. If this assumption remains true, our test never has to change. The following is the test script:

| fit.ActionFixture |                              |        |
| start             | CustomerTests.CatalogAdapter |        |
| enter             | FindByRecordingId            | 100002 |
| check             | Found                        | False  |

This script does not require any changes to the CatalogAdapter because it uses existing methods. When we run the test, it passes as well.

Automation Summary

We began with a manual test script, and we ended up with a set of automated FIT scripts that are much less time-consuming to execute and less error-prone. In fact, they can be run every time the software is built. They have also identified a few problems: there is an issue with the durations of the tracks as well as the overall duration of the recording, which points to a mismatch in expectations between the programmers and the customer. In the next section, we will reconcile those differences.



