Appendix B Testing Survey

Overview

"It's better to know some of the questions than all of the answers."

— James Thurber

The following survey was designed for use at a testing conference to measure industry trends in test and evaluation process use. We have also found it a useful tool for baselining current practices within an organization.

Additionally, a gap analysis can be used to identify and prioritize process improvement opportunities. A large difference between an activity's perceived value and its actual usage indicates a process that, if improved, could yield a large return on investment (ROI).
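The gap score itself is simple arithmetic: subtract each activity's usage score from its value score and rank the results. The sketch below illustrates this with invented scores (the activity IDs match the tables that follow, but the numbers are not real survey data):

```python
# Hypothetical gap-analysis sketch: each activity gets a Usage and a
# Value score on the 0-3 scales defined below. Sorting by
# (value - usage) surfaces the activities whose improvement promises
# the largest return on investment.

responses = {
    # activity id: (usage, value) -- illustrative numbers only
    "M1": (1, 3),   # overall test plan: rarely done, seen as critical
    "M6": (2, 2),   # defect records: common and valued
    "P11": (0, 3),  # code coverage: unused but seen as critical
    "T4": (1, 2),   # unit-test coverage tools: infrequent, recommended
}

# Largest gap first: these are the strongest improvement candidates.
gaps = sorted(
    ((value - usage, activity) for activity, (usage, value) in responses.items()),
    reverse=True,
)

for gap, activity in gaps:
    print(f"{activity}: gap = {gap}")
```

With these sample scores, P11 (code coverage) tops the list with a gap of 3: highly valued, never used.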



Test and Evaluation Practices Survey

Tables B-1 through B-4 list test and evaluation activities that might be employed in your software work. For each activity, indicate its degree of use in your division/area and how valuable you consider it toward producing good software. If you do not know the usage, or have no opinion on the value, leave the entry blank. Table B-5 contains questions about trends and perspectives in testing.

 

Usage scale:

  0  No Usage – Not used.
  1  Infrequent Use – Used some of the time.
  2  Common Use – Used most of the time.
  3  Standard Use – Used all of the time.

Value scale:

  0  Unimportant – Not needed or a waste of time.
  1  Limited Value – Would be nice.
  2  Significant Value – Recommended practice.
  3  Critical – Should be a standard practice for everyone.

Table B-1: Management and Measures
(For each activity, enter 0, 1, 2, or 3 for Usage and Value.)

M1    An overall quality and/or test and evaluation plan is produced    Usage: _____   Value: _____
M2    A person is responsible for the testing and evaluation process    Usage: _____   Value: _____
M3    A capital budget is provided each year for the testing and evaluation process    Usage: _____   Value: _____
M4    A record of the time spent on testing and evaluation is produced    Usage: _____   Value: _____
M5    The cost of testing and reviews is measured and reported    Usage: _____   Value: _____
M6    A record of faults and defects found in each review or test stage is produced    Usage: _____   Value: _____
M7    A record of what is missed in each review or test stage is produced    Usage: _____   Value: _____
M8    Test and review effectiveness/efficiency is measured and reported    Usage: _____   Value: _____
M9    The cost of debugging is separated from testing    Usage: _____   Value: _____
M10   Defect density (defects per thousand lines of code) is measured    Usage: _____   Value: _____
M11   A person or department is responsible for managing the test environment and tools    Usage: _____   Value: _____
M12   The pattern of faults and defects found is regularly analyzed    Usage: _____   Value: _____
M13   Full-time testers are used for high-level testing (system and/or acceptance)    Usage: _____   Value: _____
M14   Full-time testers are used for low-level testing (unit and object)    Usage: _____   Value: _____
M15   Full-time reviewers are used in formal reviews and inspections    Usage: _____   Value: _____
M16   Compliance/adherence to the test and evaluation process is monitored    Usage: _____   Value: _____

 


Table B-2: Evaluation Process
(For each activity, enter 0, 1, 2, or 3 for Usage and Value, using the scales defined before Table B-1.)

E1    Review and inspection points are well-defined and documented    Usage: _____   Value: _____
E2    Specialized training is provided for specific roles (moderator, recorder, reader)    Usage: _____   Value: _____
E3    Requirements documents are formally reviewed and inspected    Usage: _____   Value: _____
E4    Design documents are formally reviewed and inspected    Usage: _____   Value: _____
E5    Code is formally reviewed and inspected    Usage: _____   Value: _____
E6    Changes are formally reviewed and inspected    Usage: _____   Value: _____
E7    Testing plans and documents are formally reviewed    Usage: _____   Value: _____
E8    Guidelines are used to control review length and review item size    Usage: _____   Value: _____
E9    A standard set of outcomes is used for formal reviews and inspections    Usage: _____   Value: _____
E10   Statistics are kept for time spent by reviewer and review effectiveness    Usage: _____   Value: _____
E11   Standard review reports are used for recording issues and summarizing results    Usage: _____   Value: _____
E12   Defects and review issues missed are measured and tracked    Usage: _____   Value: _____
E13   Risk analysis is formally performed    Usage: _____   Value: _____
E14   Safety/hazard analysis is formally performed    Usage: _____   Value: _____
E15   Specialized evaluation/analysis training is provided    Usage: _____   Value: _____
E16   Defects are analyzed as to phase introduced and root cause    Usage: _____   Value: _____
E17   Review process adherence/compliance is monitored and tracked    Usage: _____   Value: _____

 


Table B-3: Testing Process and Activities
(For each activity, enter 0, 1, 2, or 3 for Usage and Value, using the scales defined before Table B-1.)

P1    Unit testing plans and specifications are documented    Usage: _____   Value: _____
P2    Unit testing defects are tracked and analyzed    Usage: _____   Value: _____
P3    Unit test summary reports are tracked and analyzed    Usage: _____   Value: _____
P4    System-level test plans and specifications are documented    Usage: _____   Value: _____
P5    System-level defects are tracked and analyzed    Usage: _____   Value: _____
P6    System-level reports are produced    Usage: _____   Value: _____
P7    Test objectives are systematically inventoried and analyzed    Usage: _____   Value: _____
P8    Risk is acknowledged and used to design, organize, and execute tests    Usage: _____   Value: _____
P9    Requirements test coverage is tracked and measured    Usage: _____   Value: _____
P10   Design test coverage is tracked and measured (traced)    Usage: _____   Value: _____
P11   Code coverage is analyzed or traced    Usage: _____   Value: _____
P12   Tests are rerun when software changes    Usage: _____   Value: _____
P13   Unit-level test sets are saved and maintained    Usage: _____   Value: _____
P14   System-level test sets are saved and maintained    Usage: _____   Value: _____
P15   Test cases and procedures are assigned unique names    Usage: _____   Value: _____
P16   Tests are specified before the technical design of the software    Usage: _____   Value: _____
P17   Test cases and procedures are formally documented    Usage: _____   Value: _____
P18   Test documents and test programs are reviewed like software    Usage: _____   Value: _____
P19   Defects found are analyzed as to phase introduced and root cause    Usage: _____   Value: _____
P20   Test process adherence/compliance is monitored and measured    Usage: _____   Value: _____
P21   Testware is considered an asset and assigned a value    Usage: _____   Value: _____

 


Table B-4: Test and Evaluation Tools
(For each activity, enter 0, 1, 2, or 3 for Usage and Value, using the scales defined before Table B-1.)

T1    Comparator (file output) is used to support testing    Usage: _____   Value: _____
T2    Simulators (hardware, software, or communications) are part of our test environment    Usage: _____   Value: _____
T3    Capture/playback tools are used for retesting    Usage: _____   Value: _____
T4    Coverage measurement tools are used in unit testing    Usage: _____   Value: _____
T5    Coverage measurement tools are used in system testing    Usage: _____   Value: _____
T6    Data or file generator (parameter or code-driven) tools are available    Usage: _____   Value: _____
T7    Data analyzer tools are used to profile test sets and files    Usage: _____   Value: _____
T8    A test database (bed of tests which simulate the test environment) is available    Usage: _____   Value: _____
T9    Test case or procedure generator (parameter or code-driven) tools are available    Usage: _____   Value: _____
T10   Static code analyzers are used to analyze risk and change    Usage: _____   Value: _____
T11   Test management tools to track and record execution results are used    Usage: _____   Value: _____
T12   Tools are used to estimate test and evaluation effort and/or schedule    Usage: _____   Value: _____

Identify any major commercial tools that you or your division/area use regularly:

Tool: _______________________   Vendor: _______________________

Tool: _______________________   Vendor: _______________________

Table B-5: Trends and Perspectives

Compared with several years ago… (answer each with: Worse / About Same / A Little Better / A Lot Better / Don't Know)

Our overall software effort and quality is: _______________

The effectiveness of our reviews and inspections program is: _______________

The effectiveness of our unit-level testing is: _______________

The effectiveness of our build/integration-level testing is: _______________

The effectiveness of our system-level testing is: _______________

The effectiveness of our acceptance-level testing is: _______________

The use of automation/tools to support test and evaluation is: _______________

Our choice of what to measure and track is: _______________

Your estimate of the percentage of the total time spent in your division/area on software development and maintenance that is spent on… (for each, give a Low%, Best Guess, and High%, or check Don't Know)

Quality management activities:
  Low% _____   Best Guess _____   High% _____   Don't Know _____

Reviews and inspections (requirements, design, code):
  Low% _____   Best Guess _____   High% _____   Don't Know _____

Low-level testing (unit and integration):
  Low% _____   Best Guess _____   High% _____   Don't Know _____

High-level testing (system and acceptance):
  Low% _____   Best Guess _____   High% _____   Don't Know _____

The one thing I wish my division/area would do or change regarding our test and evaluation effort is:

_____________________________________________

_____________________________________________

_____________________________________________

_____________________________________________

_____________________________________________





Source: Systematic Software Testing (Artech House Computer Library). ISBN 1580535089, 2002, 114 pages.
