LOSS FUNCTION AND SIGNAL-TO-NOISE


This section discusses:

  1. The Taguchi loss function and its cost-oriented approach to product design

  2. A comparison of the loss function and the traditional approach to calculating loss

  3. The use of the loss function in evaluating alternative actions

  4. A comparison of the loss function and Cpk and the appropriate use of each

  5. The relationship of the loss function and the signal-to-noise (S/N) calculation that Dr. Taguchi uses in design of experiments

LOSS FUNCTION AND THE TRADITIONAL APPROACH

In the traditional approach to considering company loss (see Figure 9.12), parts produced within the spec limits perform equally well, and parts outside the spec limits are equally bad. The fallacy in this approach is that it assumes that parts produced at the target and parts just inside the spec limit perform the same, while parts just inside and just outside the spec limits perform differently.

Figure 9.12: Traditional approach.

Statistical Process Control (SPC) and process capability calculations (Cpk) have brought to the manufacturing floor an awareness of the importance of reducing process variability and centering around the target. However, the question still remains, "How can this thought process carry over into product and process decisions?"

The loss function provides a way of considering customer satisfaction in a quantitative manner during the development of a product and its manufacturing process. The loss function is the cornerstone of the Taguchi philosophy. Its basic premise is that there is a particular target value for each critical characteristic that will best satisfy all customer requirements. Parts or systems produced farther from the target satisfy the customer less well; the level of satisfaction decreases as the distance from the target increases. The loss function approximates the total cost to society, including customer dissatisfaction, of producing a part at a particular characteristic value.

Taken for a whole production run, the total cost to society is based on the variability of the process and the distance of the distribution mean to the target. Decisions that affect process variability and centering or the range over which the customer will be satisfied can be evaluated using the common measurement of loss to society.

The loss function can be used when considering the expenditure of resources. Customer dissatisfaction is very difficult to quantify and is often ignored in the traditional approach. Its inclusion in the decision process via the loss function highlights a gold mine in customer-perceived quality and repeat purchases that would otherwise be hidden. This gold mine is often available at a relatively minor expense applied to improving the product or process.

Note  

Use of the loss function implies a total system that starts with the determination of targets that reflect the greatest level of customer satisfaction. Calculation of losses using nominals that were set using other methods may yield erroneous results.

CALCULATION OF THE LOSS FUNCTION

Dr. Taguchi uses a quadratic equation to describe the loss function. A quadratic form was chosen because:

  1. It is the simplest equation that fulfills the requirement of increasing as it moves away from the target.

  2. Taguchi believes that, historically, costs behave in this fashion.

  3. The quadratic form allows direct conversion to and from the signal-to-noise ratios and decompositions used in the analysis of experimental results.

The general form for the loss function is:

L(x) = k(x - m)²

where L(x) is the loss associated with producing a part at "x" value; k is a unique constant determined for each situation; x is the measured value of the characteristic; and m is the target of the characteristic.

When the general form is extended to a production of "n" items, the average loss is:

L = k[(1/n) Σ (xi - m)²]

This can be simplified to:

L = k[σ² + (μ - m)²]

where σ² is the population piece-to-piece variance; μ is the population mean; and (μ - m) is the offset of the population mean from the target.
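The simplification above is an exact algebraic identity when σ² is the population variance, and it can be checked numerically. The sketch below uses hypothetical measurements and function names of my own choosing:

```python
import statistics

def average_loss(data, k, m):
    """Average quadratic loss (1/n) * sum of k*(x - m)^2 over a sample."""
    return k * sum((x - m) ** 2 for x in data) / len(data)

def decomposed_loss(data, k, m):
    """Equivalent form k * [sigma^2 + (mu - m)^2] using population variance."""
    mu = statistics.fmean(data)
    sigma2 = statistics.pvariance(data)  # population (piece-to-piece) variance
    return k * (sigma2 + (mu - m) ** 2)

# Hypothetical measurements around a target of m = 10
data = [9.2, 10.1, 10.8, 9.7, 10.4]
print(round(average_loss(data, k=1.5, m=10.0), 3))     # 0.462
print(round(decomposed_loss(data, k=1.5, m=10.0), 3))  # 0.462 (same value)
```

Both forms give the same loss; the decomposed form makes it clear that loss can be reduced either by shrinking the variance or by re-centering the mean on the target.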

In the Nominal-the-Best (NTB) situation shown in Figure 9.13, A is the cost incurred in the field by the customer or warranty when a part is produced Δ away from the target. Δ is the deviation at which 50% of the customers would have the part repaired or replaced. A and Δ define the shape of the loss function and the value of "k."

Figure 9.13: Nominal the best.

The loss resulting from producing a part at m - Δ is:

A = kΔ²

so

k = A / Δ²

In general, the loss per piece is:

L(x) = (A/Δ²)(x - m)²

The loss for the population is:

L = (A/Δ²)[σ² + (μ - m)²]

start example

A particular component is manufactured at an internal supplier, shipped to an assembly plant, and assembled into a vehicle. If this component deviates from its target of 300 units by 10 units or more, the average customer will complain, and the estimated warranty cost will be $150.00. In this case,

k = $150.00 / (10 units)² = $1.50 per unit²

SPC records indicate that the process average is 295 units and the process standard deviation is 8 units. The present total loss is:

L = $1.50[(8)² + (295 - 300)²] = $1.50[64 + 25] = $133.50 per part

Fifty thousand parts are produced per year, so the total yearly loss (and opportunity for improvement) is $133.50 × 50,000 ≈ $6.7 million.

Situation 1

It is estimated that a redesign of the system would make the system more robust, and the average customer would complain if the component deviated by 15 units or more from 300. In this case:

k = $150 / (15 units)² = $0.67 per unit²

The total loss would be:

L = $0.67[64 + 25] = $59.63 per part

The net yearly improvement due to redesigning the system would be:

Improvement = ($133.50 - $59.63) × 50,000 = $3,693,500

This cost should be balanced against the cost of the redesign.

Situation 2

It is estimated that a new machine at the component manufacturing plant would improve the mean of the distribution to 297 units and reduce the process standard deviation to 6 units. In this case, the total loss would be:

L = $1.50[(6)² + (297 - 300)²] = $1.50[36 + 9] = $67.50 per part

The net yearly improvement due to using the new machine would be:

Improvement = ($133.50 - $67.50) × 50,000 = $3,300,000

This cost should be balanced against the cost of the new machine.

end example
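The arithmetic in the example can be checked with a short script. This is a sketch (the function name and print layout are mine); the figures come from the text:

```python
def loss_per_part(k, sigma, mean, target):
    """Average loss per part: L = k * (sigma^2 + (mean - target)^2)."""
    return k * (sigma ** 2 + (mean - target) ** 2)

volume = 50_000                                # parts per year

# Baseline: customers complain at a 10-unit deviation; warranty cost $150
k_base = 150.00 / 10 ** 2                      # $1.50 per unit^2
baseline = loss_per_part(k_base, sigma=8, mean=295, target=300)   # $133.50

# Situation 1: redesign widens the customer tolerance to 15 units
k_redesign = 150.00 / 15 ** 2                  # $0.666... per unit^2
redesign = loss_per_part(k_redesign, sigma=8, mean=295, target=300)

# Situation 2: new machine -> mean 297 units, standard deviation 6 units
machine = loss_per_part(k_base, sigma=6, mean=297, target=300)    # $67.50

print(baseline * volume)               # yearly loss: 6675000.0 (~$6.7M)
print((baseline - machine) * volume)   # new-machine improvement: 3300000.0
print((baseline - redesign) * volume)  # redesign improvement (unrounded k)
```

Note that the text rounds k to $0.67 per unit², which gives $59.63 per part and a $3,693,500 improvement; carrying k unrounded gives $59.33 per part and roughly $3,708,333.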
 

From these situations, it is apparent that the quality of decisions using the loss function is heavily dependent upon the quality of the data that goes into the loss function. The loss function emphasizes making a decision based on quantitative total cost data. In the traditional approach, decisions are difficult because of the unknowns and differing subjective interpretations. The loss function approach requires investigation to remove some of the unknowns. Subjective interpretations become numeric assumptions and analyses, which are easier to discuss and can be shown to be based on facts.

In the smaller-the-better (STB) situation illustrated in Figure 9.14, the loss function reduces to:

Figure 9.14: Smaller the better.

L = k[(1/n) Σ xi²]

For the larger-the-better (LTB) situation illustrated in Figure 9.15, the loss function reduces to:


Figure 9.15: Larger the better.

L = k[(1/n) Σ (1/xi²)]
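The two reduced forms can be sketched as simple helpers; the data sets and k values below are hypothetical, chosen only to show the direction of each loss:

```python
def stb_loss(data, k):
    """Smaller-the-better: L = k * (1/n) * sum(x^2); the ideal value is zero."""
    return k * sum(x ** 2 for x in data) / len(data)

def ltb_loss(data, k):
    """Larger-the-better: L = k * (1/n) * sum(1/x^2); loss falls as x grows."""
    return k * sum(1.0 / x ** 2 for x in data) / len(data)

# Hypothetical measurements: surface roughness (STB), weld strength (LTB)
roughness = [0.2, 0.3, 0.25, 0.15]
strength = [400.0, 380.0, 420.0, 410.0]
print(round(stb_loss(roughness, k=100.0), 3))   # 5.375
print(round(ltb_loss(strength, k=2.0e6), 3))
```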

COMPARISON OF THE LOSS FUNCTION AND Cpk

The loss function can be used to evaluate process performance. It emphasizes both reducing variability and centering the process, since those actions reduce the value of the loss function. Process performance is normally evaluated using Cpk, which is calculated using the following equation:

Cpk = min(USL - x̄, x̄ - LSL) / (3s)

where x̄ is the average of the process, USL and LSL are the upper and lower specification limits, and s is the process standard deviation.
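The calculation can be sketched directly (the process numbers below are hypothetical):

```python
def cpk(mean, s, lsl, usl):
    """Process capability: Cpk = min(USL - mean, mean - LSL) / (3 * s)."""
    return min(usl - mean, mean - lsl) / (3.0 * s)

# Hypothetical process: spec 20 +/- 4, running at mean 21 with s = 1
print(cpk(mean=21.0, s=1.0, lsl=16.0, usl=24.0))  # 1.0
```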

Both the loss function and Cpk emphasize minimizing the variability and centering the process on the target. The relative benefits of the two can be summarized as follows:

Loss function

  • Provides more emphasis on the target

  • Relates to customer costs

  • Can be used to prioritize the effect of different processes

Cpk

  • Is easier to understand and use

  • Is based only on data from the process and specifications

  • Is normalized for all processes

The loss function represents the type of thinking that must go into making strategic management decisions regarding the product and process for critical characteristics. Cpk is an easy-to-use tool for monitoring actual production processes.

Figure 9.16 shows Cpk and the value of the loss function for five different cases. In each of these cases, the specification is 20 ± 4 and the value of k in the loss function is $2 per unit².

Figure 9.16: A comparison of Cpk and the loss function.

Both Cpk and the loss function emphasize reducing the part-to-part variability and centering the process on target. The use of Cpk is recommended in production areas to monitor process performance, because it is easy to understand and relates clearly to the other SPC tools. Management decisions regarding the location of distributions with small variability within a large specification tolerance should be based on a loss function approach. (See cases 2 and 5 in Figure 9.16.)
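The contrast can be made concrete with two hypothetical cases (my own, not the actual cases of Figure 9.16) using the same specification of 20 ± 4 and k = $2 per unit²: a centered low-variability process and an equally tight process pushed toward a spec limit:

```python
def cpk(mean, s, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * s)."""
    return min(usl - mean, mean - lsl) / (3.0 * s)

def loss(mean, s, k, target):
    """Taguchi loss per part: L = k * (s^2 + (mean - target)^2)."""
    return k * (s ** 2 + (mean - target) ** 2)

# Spec 20 +/- 4, k = $2 per unit^2; both processes have s = 0.5
for mean in (20.0, 23.0):
    print(mean,
          round(cpk(mean, 0.5, 16.0, 24.0), 2),   # 2.67 centered, 0.67 offset
          loss(mean, 0.5, 2.0, 20.0))             # $0.50 vs $18.50 per part
```

Both processes produce essentially no out-of-spec parts, yet the loss function shows the off-center process costing 37 times more per part, which is exactly the distinction Cpk alone understates.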

The loss function approach should be used to determine the target value and to evaluate the relative merits of two or more courses of action because of the emphasis on cost and on including customer satisfaction as a factor in making basic product and process decisions. These questions also lend themselves to the use of design of experiments. The relationship of the loss function to the signal-to-noise DOE calculations used by Dr. Taguchi will now be discussed.

SIGNAL-TO-NOISE (S/N)

Signal-to-Noise is a calculated value that Dr. Taguchi recommends for analyzing DOE results. It incorporates both the average response and the variability of the data: S/N measures the strength of the signal relative to the strength of the noise (variability). The goal is always to maximize the S/N. S/N ratios are constructed so that if the average response is far from the target, re-centering the response has a greater effect on the S/N than reducing the variability; when the average response is close to the target, reducing the variability has a greater effect. There are three basic formulas used for calculating S/N, as shown in Table 9.16.

Table 9.16: Formulas for Calculating S/N

Smaller the better (STB):
  S/N = -10 log10[(1/n) Σ xi²]
  L = k[(1/n) Σ xi²]

Larger the better (LTB):
  S/N = -10 log10[(1/n) Σ (1/xi²)]
  L = k[(1/n) Σ (1/xi²)]

Nominal the best (NTB):
  S/N = 10 log10[(1/n)(Sm - Ve)/Ve], where Sm = (Σ xi)²/n and Ve = s²
  L = k[σ² + (μ - m)²]

where n is the number of data points at a test condition, xi are the individual responses, and s is their sample standard deviation. S/N for a particular testing condition is calculated by considering all the data that were run at that particular condition across all noise factors. Actual analysis techniques will be covered later.
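The three formulas can be sketched as Python helpers (the function names are mine; the NTB form uses Sm = (Σxi)²/n and Ve = s², which reproduces the values in Table 9.17):

```python
import math
import statistics

def sn_stb(data):
    """Smaller-the-better: S/N = -10 * log10((1/n) * sum(x_i^2))."""
    return -10.0 * math.log10(sum(x ** 2 for x in data) / len(data))

def sn_ltb(data):
    """Larger-the-better: S/N = -10 * log10((1/n) * sum(1/x_i^2))."""
    return -10.0 * math.log10(sum(1.0 / x ** 2 for x in data) / len(data))

def sn_ntb(data):
    """Nominal-the-best: S/N = 10 * log10((1/n) * (Sm - Ve) / Ve)."""
    n = len(data)
    sm = sum(data) ** 2 / n          # Sm = (sum of x_i)^2 / n
    ve = statistics.variance(data)   # Ve = sample variance s^2
    return 10.0 * math.log10((sm - ve) / (n * ve))

# Four repetitions from one test condition (test A of Table 9.17)
print(round(sn_ntb([1, 2, 4, 5]), 2))  # 3.89
```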

The relationships between S/N and loss function are obvious for STB and LTB. The expressions contained in brackets are the same. When S/N is maximized, the loss function will be minimized. For the NTB situation, the total analysis procedure of looking at both the raw data for location effects and S/N data for dispersion effects parallels the loss function approach. Examples of these analysis techniques are given in the next section. S/N is used in DOE rather than the loss function because it is more understandable from an engineering standpoint and because it is not necessary to compute the value of k when comparing two alternate courses of action.

S/N calculations are also used in DOE to search for "robust" factor values. These are values around which production variability has the least effect on the response.

MISCELLANEOUS THOUGHTS

Many statisticians disagree with the use of the previously defined S/N ratios to analyze DOE data. They recognize the need to analyze both location effects and dispersion (variance) effects but prefer other measures. Dr. George Box's 1987 report is recommended to the reader who wishes to learn more about this disagreement and some of the other methods that are available.

In brief, Dr. Box disagrees with the STB and LTB S/N calculations and finds the NTB S/N to be inefficient. The approach that he supports is to calculate the log (or ln) of the standard deviation of the data, log(s), at each inner array setup in place of the S/N ratio. The log is used because the standard deviation tends to be log-normally distributed. The raw data should be analyzed (with appropriate transformations) to determine which factors control the average of the response, and the log(s) should be analyzed to determine which factors control the variance of the response. From these two analyses, the experimenter can choose the combination of factors that gives the response that best fills the requirements.

The data in Table 9.17 illustrate some of the concerns with the NTB S/N ratio. The first three tests (A through C) have the same standard deviation but very different S/N, while the last three tests (C through E) have the same S/N but very different standard deviations. The NTB S/N ratio places emphasis on getting a higher response value. This approach might lead to difficulties in tuning the response to a specific target.

Table 9.17: Concerns with NTB S/N Ratio

Test   Raw Data (4 Reps.)      Standard Deviation   NTB S/N
A      1, 2, 4, 5              1.83                  3.89
B      15, 11, 12, 14          1.83                 17.03
C      18, 21, 19, 22          1.83                 20.78
D      24, 24, 28.12, 28.12    2.38                 20.78
E      42.55, 42.8, 50, 50     4.23                 20.78
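The table can be reproduced with a short script (a sketch using the NTB S/N formula of Table 9.16):

```python
import math
import statistics

def ntb_sn(data):
    """NTB S/N = 10 * log10((1/n) * (Sm - Ve) / Ve), Sm = (sum x)^2 / n."""
    n = len(data)
    sm = sum(data) ** 2 / n
    ve = statistics.variance(data)   # sample variance s^2
    return 10.0 * math.log10((sm - ve) / (n * ve))

table = {
    "A": [1, 2, 4, 5],
    "B": [15, 11, 12, 14],
    "C": [18, 21, 19, 22],
    "D": [24, 24, 28.12, 28.12],
    "E": [42.55, 42.8, 50, 50],
}
for test, reps in table.items():
    print(test, round(statistics.stdev(reps), 2), round(ntb_sn(reps), 2))
```

Tests A through C print the same standard deviation (1.83) with S/N values of 3.89, 17.03, and 20.78, while tests C through E print the same S/N (20.78) with standard deviations of 1.83, 2.38, and 4.23.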

It should be noted that Taguchi does discuss other S/N measures in some of his works that have not been widely available in English. An alternate NTB S/N ratio is available in the computer program ANOVA-TM, which is distributed by Advanced Systems and Designs, Inc. (ASD) of Farmington Hills, Michigan and is based on Taguchi's approach. This S/N ratio is:

NTB S/N = -10 log(s²) = -20 log(s)

Maximizing this S/N is equivalent to minimizing log(s). Examples using this S/N ratio will be developed later.
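This alternate ratio can be sketched directly (the data below are hypothetical, reusing test A from Table 9.17; the function name is mine):

```python
import math
import statistics

def ntb_sn_alt(data):
    """Alternate NTB S/N: -10 * log10(s^2) = -20 * log10(s)."""
    s = statistics.stdev(data)
    return -20.0 * math.log10(s)

# Depends only on the variability, not on where the mean sits
print(round(ntb_sn_alt([1, 2, 4, 5]), 2))  # -5.23
```

Because this ratio ignores the mean entirely, location effects must be tuned separately from the raw-data analysis, which is exactly the two-step approach Box recommends.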




Six Sigma and Beyond: Design for Six Sigma, Volume VI
ISBN: 1574443151
Year: 2003