Tester Performance


You can implement some other measurements to encourage testers to find defects and give them a sense of pride in their skills. One of them is the "Star Chart." This chart is posted in the testing area and shows the accomplishments of each tester according to how many defects they find of each severity. Tester names are listed down one side of the chart and each defect is indicated by a stick-on star. The star's color indicates the defect's severity. For example, you can use blue for severity 1, red for 2, yellow for 3, and silver for 4. Points can also be assigned to each severity (for example, severity 1 = 10, 2 = 5, 3 = 3, 4 = 1), and a "Testing Star" can be declared at the end of the project based on who has the most points. In my experience, this chart has led to a friendly sense of competition among testers, increased their determination to find defects, promoted tester ownership of defects, and caused testers to pay more attention to the severity assigned to the defects they find. This approach turns testing into a game for the testers to play while they're testing games. Did you follow that? Figure 9.10 shows what a Star Chart looks like prior to adding the testers' stars.


Figure 9.10: Empty Star Chart.
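If you want to tally the points automatically, here is a minimal Python sketch, assuming the example point values above. The tester names and per-severity defect counts are hypothetical placeholders, not data from this chapter.

    # Star Chart points tally, using the example values above:
    # severity 1 = 10, 2 = 5, 3 = 3, 4 = 1.
    SEVERITY_POINTS = {1: 10, 2: 5, 3: 3, 4: 1}

    # Hypothetical data: defects_by_severity[tester][severity] = count.
    defects_by_severity = {
        "B": {1: 2, 2: 3, 3: 4, 4: 0},
        "C": {1: 1, 2: 2, 3: 3, 4: 1},
    }

    def star_points(counts):
        """Total Star Chart points for one tester."""
        return sum(SEVERITY_POINTS[sev] * n for sev, n in counts.items())

    scores = {name: star_points(counts)
              for name, counts in defects_by_severity.items()}
    print(scores)                       # {'B': 47, 'C': 30}
    print(max(scores, key=scores.get))  # B would be the Testing Star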

If you're worried about testers getting into battles over defects and not finishing their assigned tests fast enough, you can create a composite measure of each tester's contribution to test execution and defects found. Add up the total number of defects found and calculate a percentage for each tester based on how many they found divided by the project total. Then do the same for tests run. Add these two percentages together for each tester. Whoever has the highest total is the "Best Tester" for the project. This may or may not turn out to be the same person who becomes the Testing Star.

Here's how this works for testers B, C, D, K, and Z for the Dev1 release (a short script after the list reproduces these numbers):

  • Tester B executed 151 of the team's 570 Dev1 tests. This comes out to 26.5%. B has also found 9 of the 34 Dev1 defects, which is also 26.5%. B's composite rating is 53.

  • Tester C ran 71 of the 570 tests, which is 12.5%. C found 7 out of the 34 total defects in Dev1, which is 20.5%. C's rating is 33.

  • Tester D ran 79 tests, which is approximately 14% of the total. D also found 6 defects, which is about 17.5% of the total. D gets a rating of 31.5.

  • Tester K ran 100 tests and found 3 defects. These represent 17.5% of the test total and about 9% of the defect total. K has a 26.5 rating.

  • Tester Z ran 169 tests, which is about 29.5% of the 570 total. Z found 9 defects, which is 26.5% of that total. Z's total rating is 56.

  • Tester Z has earned the title of "Best Tester."
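The same calculation can be scripted. The following Python sketch reproduces the Dev1 ratings above; rounding to the nearest half point matches the hand-computed figures, and the function and variable names are my own.

    # Composite "Best Tester" rating: percent of tests run plus percent
    # of defects found, using the Dev1 numbers from the list above.
    tests_run = {"B": 151, "C": 71, "D": 79, "K": 100, "Z": 169}
    defects_found = {"B": 9, "C": 7, "D": 6, "K": 3, "Z": 9}

    total_tests = sum(tests_run.values())        # 570
    total_defects = sum(defects_found.values())  # 34

    def percent(part, whole):
        """Percentage rounded to the nearest half point, as in the text."""
        return round(part / whole * 100 * 2) / 2

    ratings = {t: percent(tests_run[t], total_tests)
                  + percent(defects_found[t], total_defects)
               for t in tests_run}
    print(ratings)  # {'B': 53.0, 'C': 33.0, 'D': 31.5, 'K': 26.5, 'Z': 56.0}
    print(max(ratings, key=ratings.get))  # Z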

Note

When you have someone on your team who keeps winning these awards, take her to lunch and find out what she is doing so you can win some too!

Be careful to use this system for good and not for evil. Running more tests or claiming credit for new defects should not come at the expense of other people or of the overall project. You could also add weighting factors that give more credit to higher-severity defects, to discourage testers from spending all their time chasing and reporting low-severity defects that won't contribute as much to the game as a few very important high-severity defects.
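One way to do that is to replace the raw defect count in the composite rating with a severity-weighted score. The sketch below shows one possible weighting, not a formula from this chapter; it reuses the example Star Chart point values, and the defect counts are made up.

    # Weight each tester's defects by severity before computing the
    # composite rating. Weights reuse the example Star Chart values.
    SEVERITY_WEIGHTS = {1: 10, 2: 5, 3: 3, 4: 1}

    defects_by_severity = {
        "B": {1: 1, 2: 2, 3: 5, 4: 1},   # 9 defects total
        "Z": {1: 0, 2: 1, 3: 3, 4: 5},   # 9 defects total
    }

    def weighted_defects(counts):
        return sum(SEVERITY_WEIGHTS[sev] * n for sev, n in counts.items())

    # B's nine defects outscore Z's nine because more of them are
    # high severity.
    print(weighted_defects(defects_by_severity["B"]))  # 36
    print(weighted_defects(defects_by_severity["Z"]))  # 19

Each tester's weighted score would then be divided by the project's total weighted score, in place of the raw defect percentage.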

Use this system to encourage and exhibit positive test behaviors. Remind your team (and yourself!) that some time spent automating tests could have a lot of payback in terms of test execution. Likewise, spending a little time up front to design your tests before you run off and start banging on the game controller will probably lead you to more defects. You will learn more about these strategies and techniques as you proceed to Parts IV and V of this book.



