Survey research studies large and small populations by selecting and studying samples chosen from the populations of interest in order to discover the relative incidence, distribution and interrelations of the selected variables. This type of survey is called a sample survey.
Surveys are not new. They have been around since the eighteenth century (Campbell and Katona 1953, Chap. 1). Surveys in the scientific sense, however, are a twentieth-century development. Survey research is considered a branch of social scientific research, which distinguishes it from the status survey: survey research is generally a systematic study of the relationships among items belonging to one or more domains, with the intent to follow up with another survey at a later time, whereas a status survey studies an item with no plan for repetition. The procedures and methods of survey research have been developed mostly by psychologists, sociologists, anthropologists, economists, political scientists and statisticians, all of whom have had a tremendous influence on how we perceive the social sciences.
In the quality profession, on the other hand, even though we use survey instruments, the surveying process itself is not well understood. To be sure, a survey, by definition, links populations and samples; the experimenter is therefore interested in an accurate assessment of the characteristics of whole populations. For example, a typical survey study may investigate how many suppliers of type A material qualify for the approved supplier list, given characteristics such as delivery, price, quality, capability and so on. Another use of a survey by the quality professional is in identifying and/or measuring the current culture, attitude and general perception of quality in a given organization.
In using a survey, it must be understood that the experimenter uses samples (very rarely, if ever, whole populations) from which the characteristics of the defined universe are inferred. To be successful, such an undertaking must depend on random samples and unbiased questions. (For the mechanics of constructing a questionnaire, see Stamatis 1996 and Kerlinger 1973.)
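As a minimal illustration of this sampling logic, the following Python sketch draws a simple random sample from the supplier-qualification example discussed above and infers the population proportion using a normal-approximation confidence interval. The population size, qualification flags and sample size are invented assumptions for illustration only.

```python
import math
import random

# Hypothetical population: 500 suppliers of type A material, each flagged
# (for illustration only) as qualifying or not for the approved supplier list.
random.seed(7)
population = [{"id": i, "qualifies": random.random() < 0.62} for i in range(500)]

# Simple random sample: the survey studies the sample, not the population.
n = 80
sample = random.sample(population, n)

# Point estimate of the proportion of qualifying suppliers.
p_hat = sum(s["qualifies"] for s in sample) / n

# 95% confidence interval (normal approximation), the basis for inferring
# the characteristic of the whole population from the sample.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Estimated qualifying proportion: {p_hat:.2f} +/- {margin:.2f}")
```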
Surveys can be classified by the following methods of obtaining information:
Interviews. Although they are very expensive, they do provide for individualized responses to questions as well as the opportunity to learn the reasons for doing, or even believing, something.
Panel technique. A sample of respondents is selected and interviewed, and then reinterviewed and studied at later times. This technique is used primarily when one wants to study changes in behavior and/or attitudes over time.
Telephone survey. At least from a quality application standpoint, telephone surveys have little to offer other than speed and low cost.
Mail questionnaire. These are quite popular in the quality field and are used, for example, to self-evaluate an organization's quality system, culture and so on. Their drawbacks, unless they are used in conjunction with other techniques, are low response rates, inappropriate questions (e.g., leading, biased or ambiguous) and the inability to verify the responses given.
It is beyond the scope of this book to address the details of survey methodology (for details see Kerlinger 1973, and his references). However, the most fundamental steps are:
Specify and clarify the problem. To do this, the experimenter should not simply expect to ask direct questions and be done. It is imperative that the experimenter also have specific questions to ask, aimed at the various facets of the problem. In other words, plan the survey.
Determine the sample plan. Remember, the sample must be representative of the population in order to be effective. For a good source on how to do this, see Kish (1965).
Determine the schedule and, if appropriate and applicable, what other measuring instruments are to be used. This is a very difficult task and must not be taken lightly. Here the experimenter translates the research question into an interview instrument and into any other instruments constructed for the survey. For example, one problem in a quality survey may be how permissive and restrictive attitudes toward the organization's new quality vision relate to the perceptions of both employees and management. An example of a question written to assess permissive and restrictive attitudes would be, "How do you feel about quality?"
Determine the data collection method. The experimenter's concern is how the information is going to be gathered. What method is to be used, and to what extent should appropriate precautions be designed into the survey to help eliminate spurious answers and responses?
Conduct analysis. In this step of the survey, two issues concern the experimenter: coding, the translation of question responses into specific categories for purposes of analysis, and tabulation, the recording of the number of responses of each type in the appropriate categories, after which statistical analysis (i.e., percentages, averages, rational indices and appropriate tests of significance) follows. The data are then studied, collated, assimilated and interpreted. The experimenter should never present the results without the rationale, assumptions and interpretation of the data and results. (A small coding-and-tabulation sketch follows these steps.)
Report the results. At this step, the results of the survey are reported to all concerned parties.
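As referenced in the analysis step above, the Python sketch below shows one way to code Likert-style responses into categories, tabulate them by respondent group and compute a chi-square test of independence by hand. The question, response labels and counts are invented for illustration; in practice a statistical package would normally be used.

```python
# Coding: translate raw question responses into analysis categories.
CODE = {"strongly agree": "favorable", "agree": "favorable",
        "neutral": "neutral",
        "disagree": "unfavorable", "strongly disagree": "unfavorable"}

# Invented raw responses to one attitude question, split by respondent group.
raw = {
    "employees":  ["agree"] * 34 + ["neutral"] * 21 + ["disagree"] * 25,
    "management": ["strongly agree"] * 18 + ["neutral"] * 6 + ["disagree"] * 6,
}
categories = ["favorable", "neutral", "unfavorable"]

# Tabulation: count coded responses per group and report percentages.
table = {}
for group, answers in raw.items():
    coded = [CODE[a] for a in answers]
    table[group] = [coded.count(c) for c in categories]
    pct = ", ".join(f"{c}: {100 * k / len(coded):.0f}%"
                    for c, k in zip(categories, table[group]))
    print(f"{group:<11} {pct}")

# Chi-square test of independence: 2 groups x 3 categories, so df = 2.
groups = list(table)
col_tot = [sum(table[g][j] for g in groups) for j in range(len(categories))]
row_tot = {g: sum(table[g]) for g in groups}
grand = sum(col_tot)
chi2 = sum((table[g][j] - row_tot[g] * col_tot[j] / grand) ** 2
           / (row_tot[g] * col_tot[j] / grand)
           for g in groups for j in range(len(categories)))
print(f"chi-square = {chi2:.2f} (critical value 5.99 at alpha = 0.05, df = 2)")
```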
In this section, our intent is to give the reader a useful overview of the audit, rather than an exhaustive discussion. The reader is strongly encouraged to read Mills (1989); Arter (1994); Keeney (1995, 1995a); Russell (1995); Stamatis (1996a); and Parsowith (1995) for more detailed information.
Whereas a survey is a methodology for projecting the results of a sample to a population, an audit is a methodology that uses samples to verify the existence of whatever is defined as a characteristic of importance. That characteristic may be defined in terms of quality, finance and so on.
An audit from a quality perspective, based on ISO 8402 and ISO 9000:2000, is a systematic and independent examination to determine whether quality activities and related results comply with planned arrangements, and whether these arrangements are implemented effectively and are suitable to achieve objectives (ANSI/ISO/ASQC A8402-1994 and/or ISO 9000:2000). This definition implies, in no uncertain terms, that an audit is a human evaluation process that determines the degree of adherence to prescribed norms and results in a judgment. The norms, of course, are always predefined in terms of criteria, standards, or both.
The norms are defined by the management of the organization about to be audited. It is the responsibility of the auditor conducting the audit, on the other hand, to evaluate the organization's compliance with those norms. For this reason, the audit is usually performed as a pass/fail evaluation rather than a point-system evaluation.
Any quality audit may be internal or external and take one of three forms:
First party audit. This audit is conducted by an organization on itself and may be carried out on the entire organization or just a part. It is usually called an internal audit.
Second party audit. This audit is conducted by one organization on another. It is usually an audit on a supplier by a customer, and it is considered an external audit.
Third party audit. This audit is conducted by an independent organization (the third party) on another organization. It can be performed only at the request, or on the initiative, of the organization seeking an impartial evaluation of the effectiveness of its own programs. This audit is mandatory for those organizations that seek ISO 9001, ISO 14000, ISO/TS 16949 and/or QS-9000 certification. It is always an external audit.
The requirements to start any audit fall into three categories. They are:
Starting. Before starting the physical audit, an auditor must gain some knowledge about the organization and the audit environment.
Documenting. Because documentation relies on the auditor to write the information down, he or she has the responsibility to be prepared, fair, impartial, inquisitive, honest and observant.
Evaluating. An auditor's ultimate responsibility is to evaluate the quality system of a given organization against the organization's appropriate standards and documentation (e.g., its quality manual, procedures and instructions). (If it is a registration or a surveillance audit, the responsibility is to evaluate the compliance of the quality system with the standards and documentation. If it is an internal audit, the responsibility is to find the gaps and nonconformities of the quality system against the standards and documentation, and to improve the quality system based on the findings. In either case, both auditing activities are evaluative measures aimed at improvement and compliance.)
On the other hand, the actual audit process follows a sequential path that has four phases. They are:
Prepare (pre-audit). This phase includes selecting the team, planning the audit and gathering pertinent information.
Perform (on-site visit). This phase begins with the opening meeting and finishes with the actual audit.
Report (post-audit). This phase includes the exit meeting and the audit report.
Close. This phase includes the actions resulting from the report and the documentation of the related information.
A process audit looks at the inputs, the energy transformation, the outputs and their interfaces. It is much more time-consuming than a traditional audit, but it adds much more value. In the old way of conducting an audit, the questions took the form of "do you...?" or "can you show me...?" Now the style has changed to "how do you...?" or "why is...?" or "is the process effective?" and so on. A typical process audit is shown in Figure 6.4.
Figure 6.4: A typical process audit
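One way to make these process-style questions operational is to capture the elements the audit examines (inputs, transformation, outputs and interfaces) in a simple checklist structure from which the "how" and "why" questions are generated. The Python sketch below is a hypothetical illustration only; the class name, fields and the sample process are assumptions, not a prescribed audit form.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessAuditChecklist:
    """Minimal, hypothetical checklist mirroring the process-audit view:
    inputs, energy transformation, outputs and their interfaces."""
    process: str
    inputs: list = field(default_factory=list)
    transformation: str = ""
    outputs: list = field(default_factory=list)
    interfaces: list = field(default_factory=list)

    def open_questions(self):
        # Process-style questions ("how/why"), not the old "do you...?" style.
        yield f"How do you control the inputs to '{self.process}' ({', '.join(self.inputs)})?"
        yield f"Why is the transformation step '{self.transformation}' performed this way?"
        yield f"How do you verify the outputs ({', '.join(self.outputs)}) meet requirements?"
        yield f"Is the interface with {', '.join(self.interfaces)} effective, and how is that measured?"

# Example use with an invented process.
checklist = ProcessAuditChecklist(
    process="incoming material inspection",
    inputs=["supplier certificates", "sampling plan"],
    transformation="inspect and disposition incoming lots",
    outputs=["accepted lots", "nonconformance reports"],
    interfaces=["purchasing", "production scheduling"],
)
for question in checklist.open_questions():
    print(question)
```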
Just as in the DMAIC model, there are many individual tools that may be used to pursue the design for six sigma (DFSS) methodology. Here, however, we present the tools in a table format, along with the individual stages of the DCOV model. There is not much discussion of the specific tools, since that information is quite abundant and may be found in many textbooks, including Michalski (2003), training materials, and so on. Some of the most common tools used in the process of implementing DFSS, in addition to the ones already mentioned in chapter 4, are shown in Table 6.4.
Tool/method | Define | Characterize | Optimize | Verify |
---|---|---|---|---|
Automated data acquisition | X | X | ||
Automation controls | X | |||
Calibration certifications | X | |||
Certification | X | |||
Customer documents | X | |||
Customer surveys | X | |||
Defect data | X | X | ||
Design simplification | X | |||
Discrete data | X | |||
Engineering changes | X | X | ||
Engineering specifications | X | X | ||
Expert judgment | X | X | X | X |
Graphing package | X | X | X | X |
Inspection history files | X | |||
Industry standards | X | X | X | |
Inspection and test procedure | X | |||
Literature reviews | X | X | X | |
Operator certification | X | |||
Operator training | X | |||
Poka-yoke methods | X | |||
Process audits | X | |||
Process simplification | X | |||
Qualitative interpretation | X | X | X | X |
Quality function deployment | X | X | ||
Quality program documents | X | |||
Test certification | X | |||
Vendor feedback surveys | X | X | X | |
Worst case analysis | X | X | X | |
2 level: fractional factorial design | X | X | ||
2 level: full factorial designs | X | X | ||
3 level: fractional factorial designs | X | X | ||
3 level: full factorial design | X | X | ||
Advanced control charts | X | |||
Analysis of covariance | X | |||
Analysis of variance | X | X | ||
Auto-correlation | X | X | ||
Chi square methods | X | X | ||
Correlation studies | X | X | X | |
Cross tabulation methods | X | X | ||
Cube plots | X | X | ||
Defect probability | X | X | ||
Distribution goodness-of-fit | X | X | X | |
EVOP designs | X | |||
F tests | X | X | ||
Finite element analysis | X | X | ||
Fractional factorials with outer arrays | X | X | ||
Full factorials with outer arrays | X | X | ||
GR&R (gauge repeatability and reproducibility) control chart method | X | X | X | |
GR&R statistical DOE | X | |||
Interaction plots | X | X | X | |
Mathematical models | X | X | ||
Mixture designs | X | |||
Monte Carlo simulation | X | |||
Multi-level full factorial designs | X | X | ||
Non-parametric tests | X | X | ||
OR programming methods | X | X | ||
Physical models | X | |||
Random sampling | X | X | X | |
Random strategy designs | X | X | X | |
Realistic tolerancing | X | X | X | |
Regression | X | X | X | |
Response surface designs | X | X | ||
Taguchi designs | X | X | ||
Time series analysis | X |