Results of the Voice Survey

The voice survey was conducted during the workshop; students responded with a show of hands, so the results reported here are general and approximate.

1. How long have you been testing?

United States-The vast majority were new to testing or had been testing for fewer than two years.

United Kingdom-More than half of the respondents had been testing for two to four years.

2. How many have a Bachelor of Science degree or a Computer Science degree?

United States-Only one or two persons in 50 had a science or engineering degree.

United Kingdom-Typically 50 percent to 70 percent of all students had science degrees.

3. Does your organization track the bugs you find?

Everyone counted bugs.

4. Do you rank the bugs by severity?

Ranking schemes were commonly used to identify the severity of each bug. They varied from two categories, such as "Must fix" and "Would like to fix," to five or six categories ranging from "critical" to "design issue."
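For illustration, a five-category scheme of the kind respondents described could be represented as an ordered enumeration. This is a minimal sketch: only the endpoints ("critical," "design issue") and the two-category pair come from the responses above; the intermediate names and the comments are assumptions.

from enum import IntEnum

class Severity(IntEnum):
    # Lower numbers mean more urgent bugs (an assumed convention,
    # not one reported by the respondents).
    CRITICAL = 1           # endpoint reported in the survey
    MUST_FIX = 2           # also used alone in two-category schemes
    SHOULD_FIX = 3         # assumed intermediate category
    WOULD_LIKE_TO_FIX = 4  # also used alone in two-category schemes
    DESIGN_ISSUE = 5       # endpoint reported in the survey

# Because the values are ordered integers, bugs can be triaged by rank:
# sorted(bug_list, key=lambda bug: bug.severity)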

5. How do you track these bugs?

Some organizations tracked bugs manually, on paper. Most respondents reported using some sort of database. Most were looking for a better tool.

6. Do you measure bug-find rate and bug-fix rate of the test effort?

Between 25 percent and 30 percent said "yes." Many students expressed concern that such analysis would be used negatively by management.
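To make the two metrics concrete, here is a minimal sketch of how they might be computed from a bug log. The record layout, the sample dates, and the per-day unit are all assumptions for illustration, not a tool or format mentioned by the respondents.

from datetime import date

# Invented bug records: when each bug was found and, if resolved,
# when it was fixed.
bugs = [
    {"found": date(2005, 3, 1), "fixed": date(2005, 3, 4)},
    {"found": date(2005, 3, 2), "fixed": None},  # still open
    {"found": date(2005, 3, 3), "fixed": date(2005, 3, 5)},
]

test_period_days = 5  # length of the measured test effort

# Bug-find rate: bugs discovered per day of testing.
find_rate = len(bugs) / test_period_days

# Bug-fix rate: fixes completed per day of testing.
fix_rate = sum(1 for b in bugs if b["fixed"] is not None) / test_period_days

print(f"find rate: {find_rate:.2f} bugs/day; fix rate: {fix_rate:.2f} fixes/day")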

7. Do you analyze fault or defect density or error distribution? If so, do you look at the bug densities by module or by development group to find out where the bugs are?

Between 25 percent and 30 percent said "yes" to the fault analysis question. When respondents were questioned further, it became clear that this analysis is generally done by gut feel, not by counting the number of bugs or faults discovered. Many students expressed concern that such analysis would be used negatively by management.
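Replacing gut feel with counting is straightforward once bugs are attributed to modules. The sketch below uses invented bug counts and module sizes, and expresses density as bugs per thousand lines of code (KLOC), one common measure, though not necessarily the one any respondent had in mind.

# Invented inputs: bug counts per module from a tracking database,
# and module sizes in thousands of lines of code (KLOC).
bug_counts = {"parser": 42, "ui": 17, "reports": 8}
module_kloc = {"parser": 12.0, "ui": 30.0, "reports": 9.5}

# Defect density per module: bugs found per KLOC. Sorting the result
# shows at a glance where the bugs are concentrated.
density = {name: bug_counts[name] / module_kloc[name] for name in bug_counts}

for name, d in sorted(density.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {d:.1f} bugs/KLOC")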

8. Do you measure the effectiveness, efficiency, or performance of the test effort?

Only about 1 person in 100 answered "yes" to this question. Of those, efficiency, or cost per unit of work, was generally cited as the metric used.
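Cost per unit of work reduces to a single division once a unit of work is chosen. The figures below are invented, and "tests executed" is an assumed unit; respondents may have divided cost by test hours, test cases designed, or something else entirely.

# Invented figures for one test cycle.
total_test_cost = 48_000.00  # money spent on the test effort
tests_executed = 1_200       # the assumed unit of work

# Efficiency as cost per unit of work.
cost_per_test = total_test_cost / tests_executed
print(f"cost per test executed: {cost_per_test:.2f}")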



