The director asked the tester, "So, you tested it?"
The tester responded, "Yes, I tested it. It's ready to go."
The director asked, "Well, what did you test?"
The tester responded, "I tested it."
In this conversation, I was the tester. It was 1987, and I had just completed my first test assignment on a commercial software system. I had spent six months working with and learning from some very good testers. They were very good at what they did. The experience made me resolve never again to be caught with such a poor answer. I could not always count on being lucky enough to have management that would accept it. All my training as a structural engineer had prepared me to give my management a much better answer than that.
Suppose the supervisor on a building project asked
if I had tested the steel superstructure on Floor 34 and needed to know
if it was safe to build Floor 35. If I said "yes" and if the
supervisor then asked, "What did you test?" I would have a whole
checklist of answers on the clipboard in my hand. I would have a
list with every bolt connection, the patterns of those connections,
the specified torque wrench loading used to test the bolts, and the
results from every bolt I had touched. I would know exactly which
bolts I had touched because each would be marked on my checklist.
Why should software testing be any different? Imagine if the tester's answer had gone something like this:
As per our agreement, we have tested 67 percent of the test inventory. The tests we ran represent the most important tests in the inventory as determined by our joint risk analysis. The bug find rates and the severity composition of the bugs we found were within the expected range. Our bug fix rate is 85 percent.
It has been three weeks since we found a Severity 1 issue. There are currently no known Severity 1 issues open. Fixes for the last Severity 2 issues were regression-tested and approved a week ago. The testers have conducted some additional testing in a couple of the newer modules. Overall, the system seems to be stable.
The load testing has been concluded. The system failed at 90 percent of the design load. The system engineers believe they understand the problem, but they say they will need three months to implement the fix. Projections say the peak load should only be at 75 percent by then. If the actual loading goes above 90 percent, the system will fail.
Our recommendation is to ship on schedule, with the understanding that we have an exposure if the system utilization exceeds the projections before we have a chance to install the previously noted fix.
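The figures in a report like this come from simple arithmetic over the test and bug records. As a rough sketch, the two headline percentages (inventory coverage and bug fix rate) might be computed as follows; the function names and the example counts (670 of 1,000 inventory items run, 170 of 200 bugs fixed) are illustrative assumptions chosen to reproduce the 67 percent and 85 percent in the report, not figures from any actual project.

```python
def inventory_coverage(tests_run: int, tests_in_inventory: int) -> float:
    """Percent of the agreed test inventory that has been executed."""
    return 100.0 * tests_run / tests_in_inventory


def bug_fix_rate(bugs_fixed: int, bugs_found: int) -> float:
    """Percent of reported bugs that have been fixed and verified."""
    return 100.0 * bugs_fixed / bugs_found


# Hypothetical figures matching the sample report:
# 670 of 1,000 inventory items run, 170 of 200 reported bugs fixed.
print(f"Inventory coverage: {inventory_coverage(670, 1000):.0f}%")  # 67%
print(f"Bug fix rate:       {bug_fix_rate(170, 200):.0f}%")         # 85%
```

The point is not the arithmetic itself but that each number in the report is traceable to a record the tester can produce on demand, just like the bolt checklist.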
The thing that I find most amazing is that answers
like these are not widely used in the industry today. I regularly
hear testers and developers admit to using few, if any,
metrics. Throughout the 1990s I gave out a survey every time I
taught a testing course. Probably 60 percent of the students taking
these courses were new to testing, with less than one year as a
tester. About 20 percent had from one to five years of experience.
The only metrics used regularly have
to do with counting bugs and ranking them by severity. Only a small
percentage of respondents measure the bug find rate or the bug fix
rate. No other metrics are widely used in development or testing,
even among the best-trained testers.
The majority of testers taking the survey (76 percent) had had some experience with automated test tools. Today an even greater percentage of testers report that they have used automated test tools, but test automation is also rated the most difficult test technique to implement and maintain in the test effort.
The respondents who were not actively testing
provided the most accurate definitions of the testing terms. The
people actually performing the testing supplied the poorest definitions of those same terms.
Overall, I believe that the quality of testers'
level of awareness is improving, and, certainly, software testing
practices have improved in the commercial software development
sector. Today there are more resources available to testers than ever before.
How did we get to this point?
 Commercial software is software that is commercially available and can be purchased by the public. This distinguishes it from safety-critical, proprietary, or military software.