Appendix A Glossary of Terms

"The words I use are everyday words and yet are not the same."

— Paul Claudel

Acceptance Testing

A level of test conducted from the viewpoint of the user or customer, used to establish criteria for acceptance of a system. Typically based upon the requirements of the system.

Ad Hoc Testing

Testing conducted without written or formal plans or test cases.

Alpha Test

An acceptance test conducted at the development site.


Approach

A description of how testing will be conducted. Includes any issues that affect the effectiveness or efficiency of testing.

See also Strategy.


Assumption

A presumed activity or state. If the assumption is false, it's a planning risk.

See also Planning Risk.


Attribute

A characteristic of the system that spans the breadth of the system (e.g., performance, usability).


Baselining

A measurement of where your processes are at any given point in time. Used to compare the processes of one group at a given time to the same group at another point in time.


Benchmarking

A measurement of where your processes are compared directly to other companies or to a static model such as the CMM.

Beta Test

An acceptance test conducted at a customer site.

Black-Box Testing

A type of testing where the internal workings of the system are unknown or ignored (i.e., functional or behavioral testing). Testing to see if the system does what it's supposed to do.

Boundary Value Analysis

Testing at or near the boundaries of a system or subsystem.
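For illustration (this sketch is not from the book; the function name and the 18-65 age range are invented), boundary-value inputs for an integer range are typically the values just below, at, and just above each boundary:

```python
def boundary_values(low, high):
    """Return classic boundary-value test inputs for an integer
    range [low, high]: just below, at, and just above each bound."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Example: a field that accepts ages 18 through 65
tests = boundary_values(18, 65)
# tests == [17, 18, 19, 64, 65, 66]
```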


Brainstorming

A group problem-solving technique that involves the spontaneous contribution of ideas from all members of the group.

Buddy Testing

A technique where two programmers work together to develop and test their code. Preventive techniques are used (i.e., the test cases are written prior to the code).


Bug

A flaw in the software with potential to cause a failure.

See also Defect.


Calibration

The measurement of coverage of test cases against an inventory of requirements and design attributes.

Capability Maturity Model (CMM)

A framework used for evaluating the maturity of an organization's software engineering process. Developed by the Software Engineering Institute (SEI) at Carnegie Mellon University.


Certification

Any of a number of programs that lead to formal recognition by an institution that an individual has demonstrated proficiency within and comprehension of a specified body of knowledge.


Champion

An influence leader who's willing to serve as the on-site oracle for a new process.

Change Control Board (CCB)

A board typically composed of developers, testers, users, customers, and others tasked with prioritizing defects and enhancements. Also called Configuration Control Board (CCB).

Code Freeze

A time when changes to the system (requirements, design, code, and documentation) are halted or closely managed.

Configuration Control Board (CCB)

See Change Control Board (CCB).

Confirmation Testing

Rerunning tests that revealed a bug to ensure that the bug was fully and actually fixed (derived from Rex Black).

Cohabiting Software

Applications that reside on the same platform as the software being tested.

Coincidental Correctness

A situation where the expected result of a test case is realized in spite of incorrect processing of the data.


Contingency

An activity undertaken to eliminate or mitigate a planning risk.


Coverage

A metric that describes how much of a system has been (or will be) invoked by a test set. Coverage is typically based upon the code, design, requirements, or inventories.

Cut Line (in software risk analysis)

The dividing line between features to be tested and features not to be tested.

Cyclomatic Complexity

A technique using mathematical graph theory to describe the complexity of a software module.
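As a worked example (not from the book), McCabe's cyclomatic complexity of a control-flow graph is V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity V(G) = E - N + 2P
    for a control-flow graph."""
    return edges - nodes + 2 * components

# A module whose flow graph has 9 edges and 7 nodes
# (one connected component): V(G) = 9 - 7 + 2 = 4
v = cyclomatic_complexity(edges=9, nodes=7)
```

A value of 4 suggests four linearly independent paths through the module, and hence at least four test cases for basis-path coverage.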


Debugging

The isolation and removal or correction of a bug.

Decision Tables

Tables that list all possible conditions (inputs) and all possible actions (outputs).
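A decision table can be represented directly in code; the example below is an invented illustration (a hypothetical login check), where each rule maps a combination of conditions to an action:

```python
# Decision table for a hypothetical login check: each rule maps a
# (valid_user, valid_password) condition pair to an action.
RULES = {
    (True,  True):  "grant access",
    (True,  False): "reject password",
    (False, True):  "reject user",
    (False, False): "reject user",
}

def decide(valid_user, valid_password):
    """Look up the action for a given combination of conditions."""
    return RULES[(valid_user, valid_password)]
```

Because every condition combination appears as a rule, the table doubles as a checklist of test cases.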


Defect

A flaw in the software with potential to cause a failure.

See also Bug.

Defect Age

A measurement that describes the period of time from the introduction of a defect until its discovery.

Defect Density

A metric that compares the number of defects to a measure of size (e.g., defects per KLOC). Often used as a measure of software quality.

Defect Discovery Rate

A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form.

Defect Removal Efficiency (DRE)

A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness.
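The arithmetic behind DRE can be sketched as follows (an invented example; the numbers are hypothetical):

```python
def defect_removal_efficiency(found_in_activity, found_later):
    """DRE (%) = defects found by an activity, divided by that number
    plus the defects that escaped the activity and were found later."""
    return 100.0 * found_in_activity / (found_in_activity + found_later)

# System testing found 90 defects; 10 more surfaced after release.
dre = defect_removal_efficiency(90, 10)  # 90.0 (percent)
```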

Defect Seeding

The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding.
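One common estimation formula used with seeding (a sketch, not taken from the book) assumes testing finds roughly the same fraction of real defects as of seeded ones:

```python
def estimate_remaining(seeded, seeded_found, real_found):
    """If testing found seeded_found of the seeded defects, assume the
    same detection ratio applies to real defects:
    estimated total real defects ~= real_found * seeded / seeded_found."""
    estimated_total = real_found * seeded / seeded_found
    return estimated_total - real_found  # estimated real defects remaining

# Seed 20 defects; testing finds 16 of them plus 40 real defects,
# suggesting about 50 real defects in total, i.e., ~10 remaining.
remaining = estimate_remaining(seeded=20, seeded_found=16, real_found=40)
```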

Desktop Procedures

Simple instructions that describe all of the routine tasks that must be accomplished by a manager on a daily or weekly basis.


Drivers

Modules that simulate high-level components.

Dry Run

Executing test cases designed for a current release of software on a previous version.


Effective Hours

Number of uninterrupted hours versus number of body-present hours.

Entry Criteria

Metrics specifying the conditions that must be met in order to begin testing at the next stage or level.

Environment (Test)

The collection of hardware, software, data, and personnel that comprise a level of test.

Equivalence Partitioning

A technique that divides the inputs into sets (partitions) that are treated the same by the system, so that a single value from each set can represent the whole set in testing.
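As an invented illustration (the field and ranges are hypothetical), an age field might have three partitions, each represented by a single test value:

```python
# Hypothetical age field: three equivalence partitions.
partitions = {
    "below range": range(0, 18),    # all rejected the same way
    "valid":       range(18, 66),   # all accepted the same way
    "above range": range(66, 130),  # all rejected the same way
}

def representatives(parts):
    """Pick one test value per partition - any member of a
    partition stands in for the entire set."""
    return {name: next(iter(r)) for name, r in parts.items()}

# e.g. {'below range': 0, 'valid': 18, 'above range': 66}
```

In practice, equivalence partitioning is usually combined with boundary value analysis, so the representatives are chosen at the partition edges.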


Escape

A defect that is undetected by an evaluation activity and is therefore passed on to the next level or stage.


Evaluation

All processes used to measure the quality of a system. In the STEP methodology, these processes consist of testing, analysis, and reviews.

Exit Criteria

Metrics specifying the conditions that must be met in order to promote a software product to the next stage or level.

Exploratory Testing

A testing technique where the test design and execution are conducted concurrently.


Failure

Any deviation of a system that prevents it from accomplishing its mission or operating within specification. The manifestation of a defect.


Feature

A functional characteristic of a system.


Freshness

A measure of how quickly test data becomes outdated.

Glass-Box Testing

See White-Box Testing (also known as Glass-Box, or Translucent-Box).

Global Code Coverage

The percentage of code executed during the testing of an entire application.

Hawthorne Effect

The observed phenomenon that showing concern for employees improves their productivity.


IEEE

The Institute of Electrical and Electronics Engineers, Inc. Publisher of engineering standards.

Immersion Time

The amount of time it takes a person to become productive after an interruption.


Impact

The effect of a failure.


Incident

Any unusual result of executing a test (or actual operation).

Independent Testing

An organizational strategy where the testing team and leadership is separate from the development team and leadership.

Independent Verification and Validation (IV&V)

Verification and validation performed by an organization that's technically, managerially, and financially independent of the development organization (derived from IEEE Glossary of Terms).

Influence Leader

A person whose influence is derived from experience, character, or reputation, rather than by organizational charter.


Inspection

A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems (definition from IEEE Glossary of Terms).

Integrated Test Team

An organizational strategy where testers and developers both report to the same line manager.

Integration Testing

A level of test undertaken to validate the interface between internal components of a system. Typically based upon the system architecture.

Interface Testing

Testing to see if data and control are passed correctly between systems. Also called Systems Integration Testing.

International Organization for Standardization (ISO)

The body that publishes the ISO 9000 family of quality standards, used to help organizations assess their processes using a rigorous auditing model.


Inventory

A list of things to test.

Inventory Tracking Matrix

A matrix that relates test cases to requirements and/or design attributes. It's used as a measure of coverage and to maintain test sets.

Latent Defect

An existing defect that has not yet caused a failure because the exact set of conditions has not been met.


Level (Test)

A testing activity defined by a particular test environment.


Lifecycle

The period of time from the conception of a system until its retirement.


Likelihood

The chance that an event will occur.

Masked Defect

An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed.

Master Test Planning

An activity undertaken to orchestrate the testing effort across levels and organizations.

Maturity Level

A term coined by Watts Humphrey to denote the level of process use in software organizations, based on a five-tiered static model that he developed.


Measure

A quantified observation about any aspect of software (derived from Dr. Bill Hetzel).


Mentoring

Using an experienced person (tester) to help introduce a newer staff member to the processes, culture, and politics of an organization.


Meta-Measure

A measure of a measure. Usually used to measure the effectiveness of a measure, e.g., number of defects discovered per inspector hour (derived from Dr. Bill Hetzel).


Meter

A metric that acts as a trigger or threshold. That is, if some threshold is met, then an action is warranted, e.g., exit criteria (derived from Dr. Bill Hetzel).

Methodology (Test)

A description of how testing will be conducted in an organization. Describes the tasks, product, and roles.


Metric

A measurement used to compare two or more products, processes, or projects (derived from Dr. Bill Hetzel).


Milestone

A major checkpoint or sub-goal identified on the project or testing schedule.


Mitigation

An activity undertaken to reduce risk.

Model Office

An (acceptance) test environment created to closely mirror the production environment, including the use of real data.


Morale

An individual or group's state of mind.


Motivation

The influences that affect behavior.

Mutation Analysis

Purposely altering a program from its intended version in order to evaluate the ability of the test cases to detect the alteration.
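A minimal sketch of the idea (invented example; the function and its mutant are hypothetical): a relational operator is mutated, and the test set "kills" the mutant if at least one test case distinguishes it from the original:

```python
def price_ok(price):          # intended version
    return 0 < price <= 100

def price_ok_mutant(price):   # mutated: '<=' changed to '<'
    return 0 < price < 100

def run_suite(fn):
    """Return True if fn passes every test case in the suite."""
    cases = {0: False, 50: True, 100: True, 101: False}
    return all(fn(x) == expected for x, expected in cases.items())

# The suite passes on the original but fails on the mutant (at 100),
# so this test set is strong enough to kill the mutation.
```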

Negative Test

Testing invalid input.


Objective (Test)

A broad category of things to test. An objective is to testing what a requirement is to software.

Orthogonal Arrays

A technique used to choose test cases by employing arrays of integers.

Parallel Implementation

Installing and using a new system (or a newer version of an existing system) at the same time the old system (or a previous version) is installed and running.

Parallel Testing

A type of testing where the test results of a new system (or a newer version of a previous system) are compared to those from an old or previous version of the system.

Pareto Principle

80% of the contribution comes from 20% of the contributors.

Phased Implementation

Shipping a product to the entire customer base in increments.


Pilot

A production system installed at a single client site or a small number of client sites.

Planning Risk

A risk that jeopardizes the (testing) software development schedule.


Politics

The methods or tactics involved in managing an organization.

Positive Test

Testing valid input.

Preventive Testing

Building test cases based upon the requirements specification prior to the creation of the code, with the express purpose of validating the requirements.


Prototype

An original and usually working model of a new product or new version of an existing product, which serves as a basis or standard for later models.


QA

Quality assurance. The QA group is responsible for checking whether the software or processes conform to established standards.


Quality

Conformance to requirements.

Quiet Time

A period of time set aside, free from meetings and other interruptions, in order to improve productivity.

Random Testing

Testing using data that is in the format of real data, but with all of the fields generated randomly.

Regression Testing

Retesting previously tested features to ensure that a change or bug fix has not affected them.


Release

A particular version of software that is made available to a group or organization (e.g., a customer or the test group).

Requirements Traceability

Demonstrating that all requirements are covered by one or more test cases.

Resumption Criteria

Metrics that describe when testing will resume after it has been completely or partially halted.


Review

Any type of group activity undertaken to verify an activity, process, or artifact (e.g., walkthrough, inspection, buddy check).


Risk

The chance of injury, damage, or loss; a dangerous chance or hazard.

Risk Management

The science of risk analysis, avoidance, and control.

Safety Critical (System)

A system that could cause loss of life or limb if a failure occurred.

Scaffolding Code

Code that simulates the function of non-existent components (e.g., stubs and drivers).


Script

An automated test procedure.

Semi-Random Testing

Testing using data that's in the format of real data, but with the fields generated with minimally defined parameters.

Smoke Test

A test run to demonstrate that the basic functionality of a system exists and that a certain level of stability has been achieved. Frequently used as part of the entrance criteria to a level of test.


Software

The requirements, design, code, and associated documentation of an application.

Software Configuration Management

A discipline of managing the components of a system. Includes library management and the process of determining and prioritizing changes.

Software Risk Analysis

An analysis undertaken to identify and prioritize features and attributes for testing.

Software Under Test (SUT)

The entire product to be tested, including software and associated documentation.

Span of Control

The number of people directly reporting to a manager.


Spoilage

(1) A metric that uses defect age and distribution to measure the effectiveness of testing. (2) According to Grady and Caswell, at Hitachi, spoilage means "the cost to fix post-release bugs."


Sponsor

Usually a senior manager who can help obtain resources and get buy-in.


State

The condition in which a system exists at a particular instance in time (e.g., the elevator is on the bottom floor).

State-Transition Diagram

A diagram that describes the way systems change from one state to another.
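A state-transition diagram can be encoded as a table of (state, event) pairs, where each entry is a candidate test case. This is an invented illustration (a simple door), not an example from the book:

```python
# A door modeled as a state-transition table:
# (current state, event) -> next state.
TRANSITIONS = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    """Apply an event; an illegal event raises KeyError,
    which is itself worth a negative test case."""
    return TRANSITIONS[(state, event)]
```

Covering every entry in the table exercises each valid transition once; attempting events absent from the table checks how the system handles invalid transitions.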

STEP (Systematic Test and Evaluation Process)

Software Quality Engineering's copyrighted testing methodology.


Strategy

A description of how testing will be conducted. Includes any issues that affect the effectiveness or efficiency of testing.

See also Approach.

Stress Testing

Testing to evaluate a system at or beyond the limits of its requirements.


Stubs

Modules that simulate low-level components.

Suspension Criteria

Metrics that describe a situation in which testing will be completely or partially halted (temporarily).


SWAT Team

A reserve group of expert testers who can be rapidly called in during an emergency.

System Testing

A (relatively) comprehensive test undertaken to validate an entire system and its characteristics. Typically based upon the requirements and design of the system.

Systems Integration Testing

See Interface Testing.

TBD (To Be Determined)

A placeholder in a document.

Test Automation

Using testing tools to execute tests with little or no human intervention.

Test Bed

See Environment (Test).

Test Case

Describes a particular condition to be tested. Defined by an input and an expected result.

Test Coordinator

A person charged with organizing a testing effort, including the people, infrastructure, and/or methodologies; often used for a one-time or limited-duration testing effort. Also, an organizational style that relies on a test coordinator.

Test Data

Data (including inputs, required results, and actual results) developed or used in test cases and test procedures.

Test Deliverable

Any document, procedure, or other artifact created during the course of testing that's intended to be used and maintained.

Test Design Specification

A document describing a group of test cases used to test a feature(s).

Test Effectiveness

A measure of the quality of the testing effort (e.g., How well was the testing done?).

Test Implementation

The process of acquiring test data, developing test procedures, preparing the test environment, and selecting and implementing the tools that will be used to facilitate this process.

Test Incident Report

A description of an incident.

Test Item

A software product that is an object of testing (e.g., a program, a requirements specification, or a version of an application).

Test Log

A chronological record of relevant details about the execution of test cases.

Test Procedure

A description of the steps necessary to execute a test case or group of test cases.

Test Process Improvement (TPI)

A method for baselining testing processes and identifying process improvement opportunities, using a static model developed by Martin Pol and Tim Koomen.

Test Set

A group of test cases.

Test Suite

According to Linda Hayes, a test suite is a set of individual tests that are executed as a package in a particular sequence. Test suites are usually related by the area of the application that they exercise, by their priority, or by content.

Test Summary Report

A report that summarizes all of the testing activities that have taken place at a particular level of test (or the entire testing process in the case of a master test plan).


Testing

Concurrent lifecycle process of engineering, using, and maintaining testware in order to measure and improve the quality of the software being tested.

Testing Tool

A hardware or software product that replaces or enhances some aspect of human activity involved in testing.


Testware

Any document or product created as part of the testing effort.

Testware Configuration Management

The discipline of managing the test components of a system. Includes library management and the process of determining and prioritizing changes.

Turnover Files

Examples of reports, meeting minutes, contact lists, and other documents that, along with desktop procedures, facilitate a smooth transition from one manager to another.


Unit

A piece of code that performs a function, typically written by a single programmer. A module.

Unit Testing

A level of test undertaken to validate a single unit of code. Typically conducted by the programmer who wrote the code.

Usability Laboratory

A specially equipped laboratory designed to allow potential users of a system to "try out" a prototype of a system prior to its completion.


Use Case

A use-case describes a sequence of interactions between an external "actor" and a system, which results in the actor accomplishing a task that provides a benefit to someone.


Validation

Any of a number of activities undertaken to demonstrate conformance to requirements, stated and implied (i.e., building the right product). Often done through the execution of tests or reviews that include a comparison to the requirements.


Verification

Any of a number of activities undertaken to demonstrate that the results of one stage are consistent with the previous stage (e.g., the design is verified against the requirements specification). Typically done using reviews (i.e., doing the thing right).


Walkthrough

A peer review of a software product that is conducted by sequentially "walking through" the product. A type of verification.

Waterfall Model

A model of software development based upon distinct, sequential phases.

White-Box Testing (also known as Glass-Box, or Translucent-Box)

Testing based upon knowledge of the internal (structure) of the system. Testing not only what the system does, but also how it does it (i.e., Structural Testing).

Systematic Software Testing (Artech House Computer Library)
ISBN: 1580535089
Year: 2002