Testing


Once you've completed the implementation phase, the next phase to consider is testing. Generally, testing involves a quality assurance (QA) group that runs test cases to verify that the application behaves as expected and to catch any bugs. This testing phase is iterative: when QA returns a list of bugs, the development team must fix the issues. However, when fixing bugs it's possible to introduce new ones. If you have architected the application well, favoring composition over inheritance to build flexible structures, the risk of introducing new bugs during this phase is minimized. Even so, it's almost inevitable that some new bugs will be introduced during bug fixing and that old, fixed bugs will re-emerge. Because of this possible introduction and re-introduction of bugs, testing generally involves something called regression testing, which means that all tests that previously passed must be run again to ensure that recent changes didn't cause any of those tests to suddenly fail.

As you might imagine, the introduction and re-introduction of bugs can be quite expensive if they go uncaught until the build is regression tested by the QA team. If a bug isn't caught until QA runs a regression test, the development team must fix the bug again and send yet another build to QA for another round of regression testing.

If possible, it's always best for developers to find new bugs and regressions before sending the build to QA. The difficulty with that strategy is that it requires the development team to be responsible for testing the application. If developers could handle all testing in addition to development and bug fixes, there wouldn't be a need for a QA team in the first place, so it might seem ridiculous to suggest that developers should have to test an application. However, if developers can run automated tests that verify that an application continues to work correctly from a programmatic standpoint, that doesn't require a great deal more work on their part, and it enables them to quickly identify errors before sending a build to QA. These programmatic tests can be formalized into what is called a unit test.

Unit testing allows the developer to create programmatic tests that ensure that parts of the application behave in an expected way. For example, if you have a method that's supposed to convert a parameter value from radians to degrees and return that value, you want to make sure that if you pass it a value of Math.PI it returns 180 every time. Using this basic concept you can create a series of tests that verify the results of operations are as expected (e.g., Math.PI radians is always converted to exactly 180 degrees).

You can create unit tests without a formal unit test framework. However, using a formal framework for unit testing has several advantages. Specifically:

  • When you use an existing framework you don't have to reinvent the wheel, which saves you time.

  • An existing framework is likely to be well tested itself, so bugs in the unit testing framework won't cause your tests to fail to work (which would negate the value of running unit tests in the first place).
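As noted, you can write tests without a formal framework. The following is a minimal sketch of that idea; the helper name verifyEquals(), the toDegrees() function, and the trace-based reporting are purely illustrative and not part of any framework.

```actionscript
// Hypothetical framework-free assertion helper (the name and
// reporting style are illustrative only). It throws an Error
// when the actual value doesn't match the expected value.
function verifyEquals(expected:Number, actual:Number, label:String):void {
    if (expected != actual) {
        throw new Error("FAILED " + label + ": expected " + expected + ", got " + actual);
    }
    trace("PASSED " + label);
}

// The operation under test: a simple radians-to-degrees conversion.
function toDegrees(radians:Number):Number {
    return (radians / Math.PI) * 180;
}

// Ad-hoc tests.
verifyEquals(0, toDegrees(0), "toDegrees(0)");
verifyEquals(180, toDegrees(Math.PI), "toDegrees(Math.PI)");
```

Even this small sketch shows why a framework helps: you would soon need to collect results, continue past failures, and report totals, all of which an existing framework already provides.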

Although additional unit testing frameworks for ActionScript 3.0 may appear subsequent to the writing of this book, the one existing unit testing framework we know of at this point is called FlexUnit. As the name implies, you can use FlexUnit for unit testing Flex applications. However, that doesn't mean FlexUnit is limited to unit testing applications that use the Flex framework; even if you are working on a purely ActionScript 3.0 project you can use FlexUnit.

At the time of this writing FlexUnit is available for download at http://labs.adobe.com/wiki/index.php/ActionScript_3:resources:apis:libraries. If that URL changes you may no longer find the downloads there. In that case, look to www.rightactionscript.com/aas3wdp for an updated URL.

Once you've located the correct URL, download the archive containing the .swc file with the FlexUnit framework libraries. Extract the .swc file from the archive and make sure it is included in the library path of any project in which you want to run unit tests.

If you want to write custom unit tests that don't rely on FlexUnit, you are welcome to do so. However, for the remainder of this section on unit testing we give specific instructions for running unit tests with FlexUnit.

Creating Basic Unit Tests

In FlexUnit basic unit tests require the following elements:

  • Classes you want to test. These are the classes that comprise your application.

  • Test cases. Test cases are special classes that you write just for the purposes of unit testing.

  • Test runner. A test runner is a class (or MXML file) that actually runs all the test cases and reports the results.

The first category of elements isn't specific to unit testing; it is simply composed of the classes you've already written. They are part of unit testing because you are testing that they actually work the way you expect. For the basic test cases we'll test the following class.

package example {
    public class SimpleConverter {
        public function SimpleConverter() {}

        public function convertToRadians(degrees:Number):Number {
            return (degrees / 180) * Math.PI;
        }

        public function convertToDegrees(radians:Number):Number {
            return (radians / Math.PI) * 180;
        }
    }
}


Test cases and test runners, on the other hand, are unique to unit testing. Since test cases and test runners are likely new to you we'll look at how to create them in the next sections.

Writing Test Cases

A FlexUnit test case is an instance of a class that extends flexunit.framework.TestCase. The test case class constructor should always accept a string parameter and then call the super constructor, passing it the parameter value.

package tests {
    import flexunit.framework.TestCase;

    public class SimpleTest extends TestCase {
        public function SimpleTest(method:String) {
            super(method);
        }
    }
}


The class should then define one or more methods that each run a test. Each test should result in an assertion, which is what actually determines the success of the test. You can run an assertion using any of the assert methods inherited from the Assert class, which is the superclass of TestCase:

  • assertEquals(): Tests whether all the parameters are equal (equivalent to an == comparison)

  • assertStrictlyEquals(): Tests whether all the parameters are strictly equal (equivalent to an === comparison)

  • assertTrue(): Tests whether the parameter is true

  • assertFalse(): Tests whether the parameter is false

  • assertUndefined(): Tests whether the parameter is undefined

  • assertNull(): Tests whether the parameter is null

  • assertNotNull(): Tests whether the parameter is not null

  • fail(): Though technically not an assertion, the fail() method explicitly causes the test to fail, which can be useful when you need to test for a failure
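To illustrate how these methods are called in practice, the following sketch shows several of the assertions used from a single test method. The class name AssertionExamplesTest and the test data are invented for illustration; the assert methods are the inherited FlexUnit methods listed above.

```actionscript
package tests {
    import flexunit.framework.TestCase;

    public class AssertionExamplesTest extends TestCase {
        public function AssertionExamplesTest(method:String) {
            super(method);
        }

        // A single test method may make several assertions; the
        // test fails as soon as any one assertion fails.
        public function testAssertions():void {
            var values:Array = [1, 2, 3];
            assertEquals(3, values.length);       // == comparison
            assertStrictlyEquals(values, values); // === comparison
            assertTrue(values.length > 0);
            assertFalse(values.length == 0);
            assertNull(null);
            assertNotNull(values);
        }
    }
}
```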

The following update to SimpleTest defines two test methods to test the conversions to and from degrees and radians.

package tests {
    import flexunit.framework.TestCase;
    import example.SimpleConverter;

    public class SimpleTest extends TestCase {
        public function SimpleTest(method:String) {
            super(method);
        }

        public function testConvert0ToDegrees():void {
            var simple:SimpleConverter = new SimpleConverter();
            var degrees:Number = simple.convertToDegrees(0);
            assertEquals(degrees, 0);
        }

        public function testConvertPIToDegrees():void {
            var simple:SimpleConverter = new SimpleConverter();
            var degrees:Number = simple.convertToDegrees(Math.PI);
            assertEquals(degrees, 180);
        }

        public function testConvert0ToRadians():void {
            var simple:SimpleConverter = new SimpleConverter();
            var radians:Number = simple.convertToRadians(0);
            assertEquals(radians, 0);
        }

        public function testConvert180ToRadians():void {
            var simple:SimpleConverter = new SimpleConverter();
            var radians:Number = simple.convertToRadians(180);
            assertEquals(radians, Math.PI);
        }
    }
}


Once you've created one or more test cases, you next need to create a test runner to run the tests and view the results.

Writing a Test Runner

Assuming you're using Flex, you can use the FlexUnit test runner to run a suite of unit tests. First, you must create a runnable MXML document that does the following:

  • Add the flexunit.flexui.* namespace

  • Add an instance of TestRunnerBase, an MXML component

  • Create a flexunit.framework.TestSuite instance, and add all the test cases to it.

  • Assign the TestSuite instance to the test property of the TestRunnerBase instance.

  • Call the startTest() method of the TestRunnerBase instance.

The following example MXML document runs all the tests from SimpleTest.


<?xml version="1.0" encoding="utf-8"?>
<!-- Notice that the Application tag adds the flexui namespace prefix
     and maps it to flexunit.flexui.*. Also notice that it registers
     initializeHandler() as an event handler for the initialize event. -->
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                xmlns:flexui="flexunit.flexui.*"
                initialize="initializeHandler(event)">
    <mx:Script>
        <![CDATA[
            import flexunit.framework.TestSuite;
            import tests.SimpleTest;

            private function initializeHandler(event:Event):void {
                // Create a new TestSuite object.
                var suite:TestSuite = new TestSuite();
                // Use the addTest() method to add each of
                // the four test cases to the suite.
                suite.addTest(new SimpleTest("testConvert0ToDegrees"));
                suite.addTest(new SimpleTest("testConvertPIToDegrees"));
                suite.addTest(new SimpleTest("testConvert0ToRadians"));
                suite.addTest(new SimpleTest("testConvert180ToRadians"));
                testRunner.test = suite;
                testRunner.startTest();
            }
        ]]>
    </mx:Script>
    <flexui:TestRunnerBase id="testRunner" width="100%" height="100%" />
</mx:Application>


Notice that each test case is an instance of SimpleTest with one of the test method names passed to the constructor. When you run the preceding test runner it should show all the tests as passing. If you make the following change to SimpleConverter you'll see that one of the tests fails.

package example {
    public class SimpleConverter {
        public function SimpleConverter() {}

        public function convertToRadians(degrees:Number):Number {
            return (degrees / 180) * Math.PI;
        }

        public function convertToDegrees(radians:Number):Number {
            return 0;
        }
    }
}


Note that since convertToDegrees() always returns 0, the testConvertPIToDegrees test will fail. Because that specific test fails, you immediately know where the error is occurring, and you can fix the bug.

Another useful practice when creating test cases is to add a static method to each TestCase subclass that returns a TestSuite of all the tests for that class. This allows you to simplify the test runner. The following is an example of such a method you could add to SimpleTest.

public static function suite():TestSuite {
    var suite:TestSuite = new TestSuite();
    suite.addTest(new SimpleTest("testConvert0ToDegrees"));
    suite.addTest(new SimpleTest("testConvertPIToDegrees"));
    suite.addTest(new SimpleTest("testConvert0ToRadians"));
    suite.addTest(new SimpleTest("testConvert180ToRadians"));
    return suite;
}


The test runner's initializeHandler() method would then simplify to the following:

private function initializeHandler(event:Event):void {
    testRunner.test = SimpleTest.suite();
    testRunner.startTest();
}


Creating Asynchronous Unit Tests

Many unit tests are synchronous, meaning that you can immediately determine whether a test has passed or failed. For example, the SimpleTest tests in the preceding section passed or failed immediately. However, some tests may depend on asynchronous operations. For example, a class may need to make a request and wait for a response from a service method before a test can be verified properly. In such cases it's important to be able to run tests asynchronously. As an example, consider the following class, which loads data from a text file when the getData() method is called.

package example {
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import flash.net.URLLoader;
    import flash.net.URLRequest;

    public class AsynchronousExample extends EventDispatcher {
        private var _loader:URLLoader;

        public function get data():String {
            return _loader.data;
        }

        public function AsynchronousExample() {
            _loader = new URLLoader();
            _loader.addEventListener(Event.COMPLETE, onData);
        }

        public function getData():void {
            _loader.load(new URLRequest("data.txt"));
        }

        private function onData(event:Event):void {
            dispatchEvent(new Event(Event.COMPLETE));
        }
    }
}


With a few simple changes it's possible to run FlexUnit tests asynchronously so you can test operations like getData(). Asynchronous operations should use events to notify listeners when the operation has completed. Typically, when you register a listener for a particular event you use the addEventListener() method and pass it a reference to the listener method. When writing test cases for asynchronous operations you should still register a listener method to handle the event that signals a completed operation. However, rather than registering the listener directly, you should wrap it with an inherited TestCase method called addAsync(). The addAsync() method lets you specify a listener method along with a timeout in milliseconds; if the event doesn't occur within the timeout window, the test fails. The event listener method should run the assertion. The following example uses these techniques. You'll see that the class extends TestCase just like a basic unit test, and its constructor likewise accepts a method name that it passes to the super constructor. What differs is that the test method registers a listener using addAsync() and defers the assertion to onData(). This example times out after 2000 milliseconds: if the data loads within 2000 milliseconds, the assertion runs; if not, the test case assumes a failure occurred and the test fails.

package tests {
    import flash.events.Event;
    import flexunit.framework.TestCase;
    import flexunit.framework.TestSuite;
    import example.AsynchronousExample;

    public class AsynchronousTest extends TestCase {
        public function AsynchronousTest(method:String) {
            super(method);
        }

        public function testGetData():void {
            var asynchronous:AsynchronousExample = new AsynchronousExample();
            asynchronous.addEventListener(Event.COMPLETE, addAsync(onData, 2000));
            asynchronous.getData();
        }

        private function onData(event:Event):void {
            assertNotNull(event.target.data);
        }

        public static function suite():TestSuite {
            var suite:TestSuite = new TestSuite();
            suite.addTest(new AsynchronousTest("testGetData"));
            return suite;
        }
    }
}


The following test runner will run both the simple tests and the asynchronous test.


<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                xmlns:flexui="flexunit.flexui.*"
                initialize="initializeHandler(event)">
    <mx:Script>
        <![CDATA[
            import flexunit.framework.TestSuite;
            import tests.SimpleTest;
            import tests.AsynchronousTest;

            private function initializeHandler(event:Event):void {
                var suite:TestSuite = new TestSuite();
                suite.addTest(SimpleTest.suite());
                suite.addTest(AsynchronousTest.suite());
                testRunner.test = suite;
                testRunner.startTest();
            }
        ]]>
    </mx:Script>
    <flexui:TestRunnerBase id="testRunner" width="100%" height="100%" />
</mx:Application>





Advanced ActionScript 3 with Design Patterns
ISBN: 0321426568