Chapter 14: What Our Analysis Tells Us and What's in Store in the Near Future

Overview

"So, what did your analysis tell you?" my manager asked.

"That we need to do more testing than we thought," the tester said.

"I expected as much," my manager said, smiling.

Table 14.1 shows the total MITs established through the path and data analysis covered in Chapters 12 and 13. The actual number is larger than the estimate, as I predicted, but not only for the reasons I expected.

Table 14.1: The Total Tests Identified for the Effort through MITs Analysis

(The path test columns are Menu Option Paths, Exception Paths, and Program Paths, with MIPs as their subtotal; the data test columns are Data Sets and System, with MIDs as their subtotal.)

| Tester's Paradise (Release 2.0) | Menu Option Paths | Exception Paths | Program Paths | MIPs | Data Sets | System | MIDs | MINs | Existing Tests |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Project Information | 0 | 0 | 0 | 0 | | | | | |
| Fix for Error #123 | 0 | 0 | 3 | 3 | | | | | 7 |
| Fix for Error #124 | 0 | 0 | 0 | 0 | | | | | 5 |
| Tester's Paradise Main Menu | 5 | 5 | 0 | 10 | | | | | |
| Our Best System Simulator | 0 | 0 | 0 | 0 | | | | | 65 |
| Message Data Flow Checker | 0 | 0 | 0 | 0 | | | | | 61 |
| Screen Comparison - Pixel Viewer | 0 | 0 | 0 | 0 | | | | | 76 |
| Portable System Monitor (New Function) | 5 | 5 | 0 | 10 | | | | 3 | |
| Specifics and Options | 3 | 3 | 0 | 6 | | | | | |
| Add-on Platform Adapters | 3 | 3 | 0 | 6 | | | | | |
| View Portable System Monitor | 2 | 2 | 0 | 4 | | | | | |
| Display Portable System Monitor | 0 | 0 | 4 | 4 | | | | 3 | |
| Order Form | 3 | 3 | 6 | 12 | | | | 3 | |
| Method of Payment limited to 2 credit cards (data sets) | | | | 0 | 12 | | | | |
| Purchase Option: Not Available in some states (data) | | | | 0 | 50 | | | | |
| Minimum Order must be $30.00 (data) | | | | 0 | 3 | | | | |
| Arrange Payment | 2 | 2 | 5 | 9 | | | | | |
| Order Confirmation | 1 | 1 | 0 | 2 | | | | | |
| Support Packages | 3 | 3 | 0 | 6 | | | | | |
| Return to Main Menu | 1 | 1 | 0 | 2 | | | | | |
| Cancel | 1 | 1 | 0 | 2 | | | | | |
| Installation is automatic at logon | 0 | 0 | 0 | 0 | | 1 | | | |
| Totals | 29 | 29 | 18 | 76 | 65 | 1 | 66 | 9 | 214 |

Total all tests (MIPs + MIDs + MINs) = 151
Existing tests = 214
Total tests = 365

        

The MITs path and data analysis yielded 38 more tests than we had originally estimated. The real surprise was that there were more existing tests than we thought, and after the past performance of the first version, no one was sure which tests we could safely drop.
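If you want to double-check the tallies, a few lines of Python will do it. This sketch is ours, not the book's worksheet: the row labels are abbreviated, and a single data column merges the table's Data Sets and System columns. Only the numbers come from Table 14.1.

```python
# Row tuples: (menu_paths, exception_paths, program_paths,
#              data_tests, mins, existing_tests)
rows = {
    "Project Information":              (0, 0, 0, 0, 0, 0),
    "Fix for Error #123":               (0, 0, 3, 0, 0, 7),
    "Fix for Error #124":               (0, 0, 0, 0, 0, 5),
    "Tester's Paradise Main Menu":      (5, 5, 0, 0, 0, 0),
    "Our Best System Simulator":        (0, 0, 0, 0, 0, 65),
    "Message Data Flow Checker":        (0, 0, 0, 0, 0, 61),
    "Screen Comparison - Pixel Viewer": (0, 0, 0, 0, 0, 76),
    "Portable System Monitor":          (5, 5, 0, 0, 3, 0),
    "Specifics and Options":            (3, 3, 0, 0, 0, 0),
    "Add-on Platform Adapters":         (3, 3, 0, 0, 0, 0),
    "View Portable System Monitor":     (2, 2, 0, 0, 0, 0),
    "Display Portable System Monitor":  (0, 0, 4, 0, 3, 0),
    "Order Form":                       (3, 3, 6, 0, 3, 0),
    "Method of Payment (2 cards)":      (0, 0, 0, 12, 0, 0),
    "Purchase Option (state rules)":    (0, 0, 0, 50, 0, 0),
    "Minimum Order $30.00":             (0, 0, 0, 3, 0, 0),
    "Arrange Payment":                  (2, 2, 5, 0, 0, 0),
    "Order Confirmation":               (1, 1, 0, 0, 0, 0),
    "Support Packages":                 (3, 3, 0, 0, 0, 0),
    "Return to Main Menu":              (1, 1, 0, 0, 0, 0),
    "Cancel":                           (1, 1, 0, 0, 0, 0),
    "Install at logon (system data)":   (0, 0, 0, 1, 0, 0),
}

mips = sum(m + e + p for m, e, p, *_ in rows.values())   # 76
mids = sum(d for _, _, _, d, _, _ in rows.values())      # 66 (65 data sets + 1 system)
mins = sum(n for *_, n, _ in rows.values())              # 9
existing = sum(x for *_, x in rows.values())             # 214

new_tests = mips + mids + mins                           # 151
print(f"MIPs={mips} MIDs={mids} MINs={mins} new={new_tests}")
print(f"existing={existing} total={new_tests + existing}")  # total = 365
```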

So how good was the estimate? Actually, it was pretty good. Table 14.2 shows the raw worksheet with the MITs tests added in. At first glance, the new numbers in the MITs column make it look like the test effort doubled, but we were able to contain the effort within the originally estimated timeline. Take a look at Table 14.2, and I will explain.

Table 14.2: The Tester's Paradise Release 2.0 Sizing Worksheet with MITs

| Tester's Paradise (Release 2.0) | Estimates | MITs |
| --- | --- | --- |
| Total tests for 100% coverage (T), from the MITs Totals row on the Test Calc. sheet | 315 | |
| MITs recommended number of scripts | 232.00 | 365.00 |
| MITs minimum number of scripts, from the MITs Totals sheet | 208.00 | |
| MITs estimate for recommended coverage, all code | 74% | |
| MITs estimate for minimum required coverage, all code | 66% | |
| Number of existing tests from Version 1 | 131.00 | 214.00 |
| Total new tests identified | 113 | 151 |
| Number of tests to be created | 101.00 | 151 |
| Average number of keystrokes in a test script | 50 | 40 |
| Est. script creation time (manual script entry, 20 min. each): total new tests × 20/60 = person-hours total | 32.58 | 50.33 |
| Est. automated replay time, total MITs (including validation), 4/60 hr./script = replay hr./cycle (for each test environment) | 15.47 | 24.33 |
| Est. manual replay time for MITs tests (including validation), × 20/60 = hours/cycle (for each test environment) | 77.33 | 121.67 |
| LOC: approx. 10,000 C, 2,000 ASM | 12,000 lines | |
| Est. number of errors (3 errors/100 LOC) | 400 errors | |
| Number of code turnovers expected | 4 | |
| Number of complete test cycles est. | 5 | |
| Number of test environments | 6 | |
| Total number of tests run against each environment, 4 complete automated cycles = total MITs × 4 | 928 | 1,460 |
| Total tests, all environments: 5 cycles × total MITs × 6 environments | 6,960 | 10,950 |
| Pre-turnover: analysis, planning, and design | 80 hrs. | |
| Post-turnover: | | |
| Script creation and 1st test cycle (manual build plus rerun of old suites), hours | 41.31 | 64.60 |
| 4 automated test cycles (time per cycle × 4), running concurrently on 6 environments, hours | 61.87 | 97.33 |
| Total script run time with automation, running concurrently on 6 environments (1 manual + 4 automated cycles), weeks | 7.22 | 11.35 |
| Total script run time, all manual (5 manual cycles), weeks for 6 environments (the best recommendation for automating testing!) | 58 | 91.25 |
| Error logs, status, etc. (est. 1 day in 5 for each environment), weeks | 1.73 | 2.72 |
| Total unadjusted effort: total run time + bug reporting, weeks | 8.95 | 14.07 |
| Factor-of-safety adjustment, 50%: total adjusted effort, weeks | 13.43 | 21.11 |
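For readers who want to trace the MITs column, the worksheet rows reduce to a handful of multiplications. The following sketch is our own restatement of that arithmetic, assuming a 40-hour work week and the replay rates stated in the worksheet; the variable names are invented for illustration.

```python
# Re-derivation of the worksheet's MITs column from its stated inputs.
total_mits = 365          # MITs recommended number of scripts
new_tests = 151           # tests to be created
cycles = 5                # complete test cycles (1 manual build + 4 automated)
environments = 6
create_min = 20           # minutes to hand-enter one script
auto_replay_min = 4       # automated replay minutes/script, incl. validation
manual_replay_min = 20    # manual replay minutes/script, incl. validation
hours_per_week = 40
safety = 1.5              # 50% factor-of-safety adjustment

create_hours = new_tests * create_min / 60               # 50.33 person-hours
auto_cycle_hours = total_mits * auto_replay_min / 60     # 24.33 hr./cycle/env
manual_cycle_hours = total_mits * manual_replay_min / 60 # 121.67 hr./cycle/env

tests_per_env = total_mits * (cycles - 1)                # 1,460 automated runs
tests_all_envs = total_mits * cycles * environments      # 10,950 total runs

auto_4_cycles = auto_cycle_hours * 4                     # 97.33 hours elapsed
all_manual_weeks = (manual_cycle_hours * cycles * environments
                    / hours_per_week)                    # 91.25 weeks
adjusted_weeks = 14.07 * safety                          # ~21.11 weeks adjusted

print(round(create_hours, 2), round(auto_4_cycles, 2))
print(tests_per_env, tests_all_envs, round(all_manual_weeks, 2))
```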

We added six test machines so we could run the test suites in half the time. We also decided to run the Release 1 tests only twice: once at code complete, and once after any bug fixes had been integrated, just before the code shipped to production. The strategy worked very well, and we were able to add the extra 38 tests for the new code and still fit the test effort into the original 14-week estimate. A rough sanity check of the savings appears below.
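This back-of-the-envelope check is ours, not the worksheet's; it assumes automated replay at the worksheet's 4-minutes-per-script rate and a perfect halving of elapsed time from doubling the machines.

```python
# What the two containment tactics buy, roughly.
existing = 214            # Release 1 regression scripts
skipped_cycles = 3        # run them in 2 of the 5 cycles instead of all 5
replay_hr = 4 / 60        # automated replay hours per script

saved = existing * skipped_cycles * replay_hr
print(f"Skipped regression cycles save ~{saved:.1f} elapsed hours")  # ~42.8

# Doubling the machines per environment roughly halves suite elapsed time,
# so one full MITs replay cycle drops from ~24.33 to ~12.17 hours.
full_cycle = 365 * replay_hr
print(f"One MITs cycle: {full_cycle:.2f} -> ~{full_cycle / 2:.2f} hours")
```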

The bugs we found in this application were not in the field processors of the user interface. They were embedded in the interactions of the system, and that leads me to my next topic: what testers will need to test in the next generation of software.

You are not going to be testing trivial field processors, and no, you won't be able to rerun every test from the last release. Most development shops are trying to be Agile in order to compete while they keep just enough of the trappings of the traditional effort so that they can claim their products are commercially hardened, reliable, viable, and whatever other "ables" marketing requires. If the test effort can't demonstrate its value, then it is likely to be cut.

Software development is still being treated as a commodity, driven by entrepreneurial forces in the market. Until we raise our expectations about safety and reliability, we will continue to build software that is not prepared to survive the events that will probably happen to it.


