METHODS OF TESTING FOR UNDERLYING DISTRIBUTIONS

There are many ways to test for underlying distributions and especially for normality. Here, we identify 14 such methods but discuss only the most common ones; some of the more advanced tests were discussed in Volume III of this series. The remaining tests are not covered because they (1) are not common and (2) are considered too technical for typical applications. However, the reader may pursue further reading in Cochran (1952), Duncan (1986), Geary (1947), Shapiro (1980), Stephens (1974), Wilk et al. (1962), Williams (1950), and other statistical books found in the bibliography at the end of this chapter.

Common goodness of fit tests are the following:

  • Pearson's method of moments

  • Kolmogorov-Smirnov

  • Kuiper V

  • Pyke's C

  • Brunk's B

  • Durbin's B

  • Durbin's M

  • Watson's U²

  • Fisher's

  • Hartley-Pfaffenberger S²

  • Cramér-von Mises W²

  • Geary's Z

  • Shapiro-Wilk W

  • D'Agostino's D

PROBABILITY PLOTTING

Although not an objective test per se, probability plotting can be used to generally evaluate distributional assumptions based on sample data. Probability paper is widely available for normal, exponential, Weibull, lognormal, and extreme value distributions.

The basic concept underlying this method is that each type of paper is scaled so that the cumulative distribution of the data will plot as a straight line if the assumed model applies. Deviations from absolute linearity will exist but diminish as sample sizes increase. If the distribution is not applicable, however, the plotted values will fluctuate in a nonlinear but systematic fashion. A judgment related to the approximate "degree of linearity" of the points is used, therefore, to assess the applicability of the assumed distribution.

In determining the degree of linearity for the plotted values, three factors should be considered:

  1. The plotted, or observed, values represent continuous or discrete random variables and will therefore never all fall on a straight line.

  2. The values have been rank ordered and are therefore not statistically independent. The implication of this factor is that the observed values will not fall randomly around the line; one should anticipate runs of observed data around the line of best fit.

  3. Based on the concept of statistical regression, the variances of the plotted points at the extremes of the distribution will be greater than those near the center (or near the mean) for the normal distribution; for some other distribution models this pattern is reversed. When fitting the line of best fit, therefore, appropriate weight should be given to the central (observed) points as compared with those at the extreme ends of the distribution.
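As a practical illustration, a plot of this kind can be produced with standard statistical software. The following minimal sketch (the data are simulated for illustration only and are not from the text) uses SciPy's probplot, which pairs the ordered sample with theoretical normal quantiles and reports the correlation of the fitted line as a rough measure of linearity:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    # Illustrative sample only (not the text's data): 135 values from a normal model.
    rng = np.random.default_rng(1)
    sample = rng.normal(loc=0.9536, scale=0.002, size=135)

    # probplot orders the sample, pairs it with theoretical normal quantiles,
    # fits a least-squares line, and returns its slope, intercept, and correlation r.
    (osm, osr), (slope, intercept, r) = stats.probplot(sample, dist="norm", plot=plt)
    print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, r = {r:.4f}")
    plt.show()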

The major disadvantage of probability plotting techniques for testing for normality (or the appropriateness of any other distribution) is the lack of objectivity that results from judging relative linearity. Two individuals might look at the same data and come to different conclusions regarding the appropriateness of the model. Indeed, two individuals rarely fit lines of best fit to the same data with exactly the same slope and may therefore arrive at different conclusions from the same plot.

As a result, there are numerous other objective methods for the determination of the best fit for an underlying distribution or model. The remainder of this unit is devoted to a review of these procedures.

REGRESSION TESTS

The use of regression tests for underlying model assumptions is actually an extension of the probability plotting techniques. Specifically, these tests statistically determine the degree to which the observed data fit the expected values from the assumed distribution, as represented by the line of best fit. This is done by comparing s² (the variance of the observed data) with the slope of the simple regression line (line of best fit); the degree to which s² is accounted for by the squared slope of the line (referred to as the scaling parameter) is an indication of the appropriateness of the tested model.

The Shapiro-Wilk W Test for Normality

Performed on an ordered set of data, this test requires one table of constant values (the tabled coefficients, designated as a or α values) and one table of critical values (W_cr) for W. The value of W is computed as follows:
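In its usual form, with x_(1) ≤ x_(2) ≤ ... ≤ x_(n) denoting the ordered sample and a the tabled coefficients, the statistic is

    \[ W \;=\; \frac{b^2}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}, \qquad b \;=\; \sum_{i=1}^{k} a_{n-i+1}\left(x_{(n-i+1)} - x_{(i)}\right) \]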

where
    k   = n/2 if n is even, or (n - 1)/2 if n is odd
    b   = slope of the regression line
    x_i = individual observation/value

Unlike the case with a number of other hypothesis tests, if the computed value of W is less than the selected critical value W_α, the hypothesis of normality is rejected.
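Most statistical packages automate this test. A minimal sketch in Python (SciPy reports W together with a p value, so the table of W_cr values is not needed; the data used here are the ten Watch Dog Timer observations analyzed later in this chapter):

    from scipy import stats

    # Ten Watch Dog Timer observations (used again in the distance-test example below).
    data = [19.6541, 19.6589, 19.6606, 19.6575, 19.6490,
            19.6634, 19.6555, 19.6539, 19.6523, 19.6538]

    w_stat, p_value = stats.shapiro(data)
    print(f"W = {w_stat:.4f}, p = {p_value:.4f}")

    # A small p value (rather than W falling below a tabled W_cr) leads to rejection.
    if p_value < 0.10:
        print("Reject the hypothesis of normality at alpha = 0.10")
    else:
        print("Insufficient evidence to reject normality")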

The Shapiro-Wilk W(E) Test for Exponentiality

One of the distributions that occurs frequently, in the event of the rejection of normality, is the exponential distribution. This distribution has the following density function:
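In its usual single-parameter form, the density is

    \[ f(x) \;=\; \frac{1}{\mu}\,e^{-x/\mu}, \qquad x \ge 0 \]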

where μ = the population/process mean. The distribution appears as in Figure 5.7: approximately 63% of the distribution falls below the mean and 37% above it (unlike the normal distribution, you will note, which is symmetrical).


Figure 5.7: Negative exponential distribution.

The Shapiro-Wilk W(E) test statistic is calculated as previously:

W(E) = b²/s²

where b and s² are computed from the ordered sample.
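A reconstruction consistent with the worked example that follows, treating b as x-bar − x_(1) (the sample mean minus the smallest ordered observation) and s² as the sum of squared deviations about the mean, is

    \[ W(E) \;=\; \frac{\left(\bar{x} - x_{(1)}\right)^2}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2} \]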

If the calculated value falls outside the minimum and maximum W(E) critical values at the selected α level (this is a two-tailed test), the hypothesis of exponentiality (i.e., that the process distribution may be approximated by the exponential model) is rejected.

Both the Shapiro-Wilk W and W(E) tests reflect good power efficiency, and both may be used with relatively small samples. For the tables of critical values, the reader may see Shapiro and Wilk (1965) and Shapiro, Wilk, and Chen (1968). If computer software is used, there is no need for the tables, as the output of the analysis will provide the significance probability (p value).

SHAPIRO-WILK W(E) TEST FOR EXPONENTIALITY
start example

An engineer, conducting a part approval study at a supplier's plant, gathers the following preliminary data based on 100 consecutive parts. (The measures presented represent deviations from a specified location for leads that were to be soldered to ceramic substrate pads. The deviation values were extracted as the distance of deviation from a datum point on the part.)

Mean = .9529558

STD DEV. = .1364507E-02

Variance = .1861881E-05

SUMS = 95.29550

Median = .9525000

Mode = .9515000

Maximum = .956000

Minimum = .9515000

STD. ERR. of MEAN = .1364507E-03

Skewness = .6930661

COEF. of Variation = .1431870

Performing a standard capability analysis (in which ±3 sigma and a normal model applies) over these data would be of little use because a moment analysis would reject the hypothesis of normality. If we were to carry this analysis forward, however, it would suggest that 99.73% of the parts would be expected to fall between .9570 and .9489! Although .9570 as an expected upper value might not be considered unreasonable at first glance, note that the minimum observed value in the distribution was .9515. According to this analysis, a value of .9515 has a z score of -1.07, and 14.46% of the scores in the distribution would be expected to fall below this value.

Plotting the data as a histogram (Figure 5.8) makes the inappropriateness of these normal-based projections evident:

Figure 5.8: Histogram of deviations.

Obviously, (1) the roughly 15% of the distribution predicted to fall below .9515 did not, and will not, occur in this (controlled) process; and (2) the use of ±3 sigma for assessing capability is inappropriate (as shown by the skewness index).

A visual inspection of the histogram would lead us to suspect that an exponential model might be appropriate for this process. Applying the W(E) test, the obtained values are as follows:

s² = .000185

b = .001455

Then W(E) = b²/s² = (.001455)²/.000185 = .011443

Using an alpha level of .10, the critical values for W(E) and n = 100 are .0074 and .0139, which indicate that there is insufficient statistical evidence to reject the hypothesis of exponentiality. Any capability analysis performed over these data would, therefore, use the exponential distribution function, not the normal.

end example
 

CHI-SQUARE GOODNESS-OF-FIT TEST

The Chi-square (χ²) goodness-of-fit test is probably the oldest and most widespread procedure employed for the analysis of underlying distributions. Although it does not possess sufficient power to justify its use with small samples (n < 125), it is of considerable use in the following instances:

  1. Samples are large, and power distinctions with comparative analyses are small.

  2. Data are discrete.

  3. Only tabulated (e.g., grouped frequency distributions) data are available and the original measurements are unknown.

In essence, the χ² goodness-of-fit test is a discrepancy analysis; it allows us to compare (quantitatively) the observed distribution of the data with the distribution expected had any particular model been appropriate. Two factors are important to note in conjunction with this analysis:

  1. The greater the number of class intervals used in the goodness-of-fit test, the more sensitive the analysis.

  2. The goodness-of-fit test may be used to test any underlying distribution or model, as it is a true nonparametric test statistic.

The degrees of freedom (ν) employed with this analysis are K - 3 (where K is the number of class intervals), and there should be no fewer than five expected frequencies in any class interval. If an interval has fewer, adjacent intervals should be collapsed to obtain the necessary (expected) frequency.

Standard Chi-Square Goodness-of-Fit Analysis

The general form of the χ² test statistic is as follows:
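In standard notation, the statistic is

    \[ \chi^2 \;=\; \sum_{i=1}^{k} \frac{\left(f_{o_i} - f_{e_i}\right)^2}{f_{e_i}} \]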

where
    k   = total number of class intervals
    f_o = observed frequency in each class interval
    f_e = expected frequency in each class interval, based on the model tested

Duncan (1986) provides some guidelines to follow in performing this analysis, for testing a hypothesis of normality:

  1. Order the data in a grouped frequency distribution (column 1, Table 5.2).

    Table 5.2: Goodness-of-Fit Test: Basic Tabulations

    (1) Upper Limit     (2) z     (3) Cumulative Relative   (4) Relative f_e   (5) f_e      (6) f_o
    of Class Interval             Frequency (-∞ to z)       per Interval       (Expected)   (Actual/Observed)
    .8215               -1.68     .0465                     .0465              6.8          9
    .8245               -1.17     .1210                     .0745              10.8         5
    .8275               -.66      .2546                     .1336              19.4         14
    .8305               -.15      .4405                     .1858              26.9         21
    .8335               .36       .6406                     .2002              29.0         55
    .8365               .86       .8051                     .1645              23.9         23
    .8395               1.37      .9147                     .1096              15.9         7
    .8425               1.88      .9700                     .0553              8.0          6
    ∞                   ∞         1.000                     .0300              4.3          5
    Total                                                                      145.0        145

  2. Calculate the z scores associated with the upper limit of each class interval (column 2, Table 5.2).

  3. Using a table of density (probability) values for the normal distribution (because that is the model we are testing), identify the cumulative relative frequency of the distribution from -∞ to the z value of each upper limit (column 3, Table 5.2).

  4. Calculate the relative frequency of each class interval (column 4, Table 5.2).

  5. Multiplying the relative interval frequency of each interval by the sample size ( n ), calculate the expected frequency for each cell (column 5, Table 5.2). Up to this point, the data appear in Table 5.2.

  6. For each class interval, calculate and add the χ² component values derived from the difference between the observed frequencies ( f_o ; column 6, Table 5.2) and the expected frequencies ( f_e ; column 5, Table 5.2).

    Compare the calculated value of χ² to a critical value of χ² at an appropriate level of significance (e.g., α = .10). If the calculated value exceeds the critical value, the hypothesis (of normality, in this case) is rejected.

The hypothesis of normality is, therefore, rejected.
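A minimal computational sketch of step 6, using the f_o and f_e values as tabulated in Table 5.2 (SciPy supplies the critical value at ν = k - 3 = 6):

    from scipy import stats

    fo = [9, 5, 14, 21, 55, 23, 7, 6, 5]                       # observed (column 6)
    fe = [6.8, 10.8, 19.4, 26.9, 29.0, 23.9, 15.9, 8.0, 4.3]   # expected (column 5)

    # Chi-square discrepancy between observed and expected frequencies.
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(fo, fe))
    critical = stats.chi2.ppf(1 - 0.10, df=len(fo) - 3)

    print(f"chi-square = {chi_sq:.2f}, critical value (alpha = .10, df = 6) = {critical:.2f}")
    print("Reject normality" if chi_sq > critical else "Do not reject normality")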

This same procedure could be employed for the analysis of any other underlying model; the only change would be in calculating the relative expected frequency for each class interval. For other models, z scores would not be used, and other density tables would have to be available for the probability functions.

Cochran's Procedure for the Goodness-of-Fit Test

The major weakness of the standard goodness-of-fit test, as previously described in the section "Standard Chi-Square Goodness-of-Fit Analysis," is the problem of assuring adequate f_e values in each cell (interval) while concurrently using a grouped frequency distribution with enough intervals to provide adequate test sensitivity at a given sample size. It is important to recall that as the number of cells in a goodness-of-fit test decreases, so does the sensitivity of the test.

In Duncan's example, the χ² value was calculated at 6 df; many conservative statisticians would have recommended collapsing the last two intervals (UCLs of .8425 and ∞) to achieve a minimum f_e of 5 in every cell. Duncan chose not to take this step because of concern over the loss of sensitivity in the analysis.

An alternative procedure that may be employed when the data are available in the form of individual measures or observations is Cochran's procedure. The first step in this procedure is to employ Williams' method for determining the optimum number of cells for a goodness-of-fit test:

k = 4[0.75(n - 1)²]^{1/5}

where
    n = sample size
    k = optimum number of cells

Should n/k equal a value less than 5, this procedure should not be used, as it is employed to ensure that the f e for any cell (interval) is at least 5. In contrast to Duncan's procedure in the section "Standard Chi-Square Goodness-of-Fit Analysis," employing Cochran's procedure would have resulted in:

k = 4[0.75(n - 1)²]^{1/5}
  = 4[0.75(145 - 1)²]^{1/5}
  ≈ 28 cells

rather than in the 9 cells used, with one of the 9 reflecting an inadequate f_e value. This increased sensitivity is accomplished by calculating the cell boundaries such that the probability that a randomly selected value or observation will fall in any given cell is equal. This will, of course, result in intervals of unequal width in terms of score boundaries.

Testing for Normality

As an example of how this procedure would be employed to test the appropriateness of a normal distribution, we will use the data in Table 5.3 and Figure 5.9; these data were gathered from the lead placement discrepancy analysis for attachment to soldering pads on a ceramic substrate. The 135 (n) measures were recorded as distances from a datum point. There is one variable with 135 observations.

Table 5.3: Data from the Lead Placement Analysis for Attachment to Soldering Pads on a Ceramic Substrate

Mean = .9536319

Std dev = .197421E-02

Maximum = .9590000

Std err of mean = .1699741E-03

Median = .9529000

Variance = .3900312E-05

Minimum = .9515000

Skewness = 1.1244521

Mode = .9517000

Sums = 128.7483

Coeff of var = .2070947

 
Figure 5.9: The range and the histogram data from the lead placement analysis for attachment to soldering pads on a ceramic substrate.
  • Step 1: Calculate the optimum number of cells.

    k = 4[0.75(135 - 1)²]^{1/5} = 26 cells, at an f_e of 5.19

  • Step 2 : Calculate the upper cell limit (UCL) for each interval.

  • If the probability of an observation falling into any cell is to be 1/k, the limits are calculated as:
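Consistent with the worked values below, with z_{i/k} denoting the standard normal deviate whose cumulative probability is i/k, the limits take the form

    \[ \mathrm{UCL}_i \;=\; \bar{x} \;+\; z_{i/k}\, s, \qquad i = 1, 2, \ldots, k-1 \]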

where
    x-bar = estimated mean of the process
    s     = estimated standard deviation of the process
    i     = interval case (number)

  • In our example, x-bar = .9536 and s = .0020. The upper limit of the first interval ( i = 1), therefore, would be calculated as follows:

    UCL_1 = .9536 + z_{1/26}(.002) = .9536 + z_{.0385}(.002)
          = .9536 + (-1.77)(.002)
          = .9501

  • Following this procedure for the remainder of the data, the frequency distribution with an observed frequency breakdown would appear as in Table 5.4.

    Table 5.4: The Cell, the UCL, and the Observed Frequency

    Cell   UCL     f_o      Cell   UCL     f_o
    1      .9501   0        14     .9538   3
    2      .9507   0        15     .9540   4
    3      .9512   0        16     .9542   4
    4      .9516   10       17     .9544   3
    5      .9519   16       18     .9546   3
    6      .9521   10       19     .9548   2
    7      .9524   14       20     .9551   4
    8      .9526   5        21     .9553   2
    9      .9528   7        22     .9556   3
    10     .9530   9        23     .9560   3
    11     .9532   5        24     .9565   5
    12     .9534   2        25     .9571   2
    13     .9536   5        26     ∞       12

  • Step 3: Calculate the test statistic.

  • Given that the f_e for each cell is n/k, the test statistic for the discrepancy between observed and expected frequencies is as follows:
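Because every expected frequency equals n/k, the general χ² formula reduces (consistent with the value computed in Step 4) to

    \[ \chi^2 \;=\; \frac{k}{n}\sum_{i=1}^{k} f_{o_i}^{\,2} \;-\; n \]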

where
    k      = number of cells
    n      = sample size
    Σ f_o² = sum of the squared observed frequencies

For this example:
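Using the observed frequencies in Table 5.4, Σ f_o² = 1,135, so a sketch of the arithmetic is

    \[ \chi^2 \;=\; \frac{26}{135}(1135) \;-\; 135 \;\approx\; 83.6 \]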

  • Step 4 : Assess for statistical significance.

  • Comparing the calculated χ² value (83.59) with the critical χ² value for ν (df) = 23 at an α level of 0.10, we find that our calculated value exceeds the maximum value we would expect (32.0) if the discrepancy between observed and expected frequencies were due to chance alone. We therefore have sufficient evidence to reject the hypothesis of normality.
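The whole procedure is easy to automate. The following minimal Python sketch (not the text's own program; the simulated sample merely stands in for the 135 lead placement measurements) applies Williams' formula, builds equal-probability cells under the normal model, and computes the statistic:

    import numpy as np
    from scipy import stats

    def cochran_normality_chi_square(x, alpha=0.10):
        n = len(x)
        k = int(4 * (0.75 * (n - 1) ** 2) ** 0.2)        # Williams' optimum number of cells
        xbar, s = np.mean(x), np.std(x, ddof=1)

        # Upper cell limits at equal cumulative probabilities i/k under the normal model.
        ucl = xbar + stats.norm.ppf(np.arange(1, k) / k) * s

        # Observed frequency in each of the k equal-probability cells.
        fo = np.bincount(np.searchsorted(ucl, x), minlength=k)

        chi_sq = (k / n) * np.sum(fo ** 2) - n            # shortcut when every fe = n/k
        critical = stats.chi2.ppf(1 - alpha, df=k - 3)
        return k, chi_sq, critical

    # Simulated stand-in data with a shape similar to the lead placement measurements.
    rng = np.random.default_rng(0)
    x = rng.exponential(scale=0.0021, size=135) + 0.9515

    k, chi_sq, critical = cochran_normality_chi_square(x)
    print(f"k = {k}, chi-square = {chi_sq:.2f}, critical = {critical:.2f}")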

Testing for Exponentiality

To show how this procedure would be used to test for other underlying distributions, the data tested for normality will now be tested for a hypothesis of exponentiality.

Assuming that the origin parameter of the distribution is unknown or not equal to 0, the upper cell limit (UCL) values are calculated as follows:
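In a form consistent with the worked computation below, with the origin estimated by x_min and the scale by λ, the limits are

    \[ \mathrm{UCL}_i \;=\; x_{\min} \;-\; \lambda \,\ln\!\left(1 - \frac{i}{k}\right), \qquad i = 1, 2, \ldots, k-1 \]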

where
    k     = number of cells
    i     = each case (i = 1, 2, ..., k - 1)
    ln(z) = natural log of z
    λ     = the scale parameter of the exponential distribution

and where the origin parameter is estimated by x_min (the minimum observed value)

For our example, the scale parameter may be taken as λ = x-bar - x_min = .9536 - .9515 ≈ .0021 (the value used in the computation below).

The first UCL would be calculated from the formula above, where the k value (26) was obtained from Williams' formula, as previously used. For our data, this UCL would be:

UCL_1 = (0.0021)(0.0392) + 0.9515 = 0.95158

Repeating this procedure for the remainder of the cells, we generate Table 5.5. The test statistic value for the observed frequencies (recall that the f e for each cell is n/k ) in this case is as follows:

Table 5.5: The Points for UCL and the f_o

Cell   UCL         f_o      Cell   UCL         f_o
1      .9515836    5        14     .9531483    6
2      .9516706    5        15     .9533338    3
3      .9517614    6        16     .9535378    5
4      .9518561    5        17     .9537616    5
5      .9519553    5        18     .9540127    5
6      .9520593    4        19     .9542974    4
7      .9521687    6        20     .9546260    6
8      .9522839    5        21     .9550147    5
9      .9524058    9        22     .9554904    4
10     .9525650    1        23     .9561037    5
11     .9526726    4        24     .9569681    6
12     .9528197    7        25     .9584458    8
13     .9529777    6        26     ∞           5
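Recomputing from the frequencies in Table 5.5 (Σ f_o² = 759) gives, as a sketch,

    \[ \chi^2 \;=\; \frac{26}{135}(759) \;-\; 135 \;\approx\; 11.2, \]

which falls well below the critical value of 32.0 used earlier (ν = 23, α = .10); there is therefore insufficient statistical evidence to reject the hypothesis of exponentiality.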

DISTANCE TESTS

A number of tests for underlying distributional analysis are based on the concept that by using a probability integral transformation, sample data may be converted and compared with a hypothesized distribution. The concept of distance is drawn from the procedure whereby the selected test statistic evaluates the degree of discrepancy between the transformed sample and the uniform distribution.

The most well-known distance test for distributional analysis is the Kolmogorov-Smirnov (D) test. Shapiro (1980) points out, however, that this test is of little use in any area other than theoretical studies because of the following factors:

  • In most cases, the actual parameters of the distribution are unknown, and D depends on these parameters for calculation purposes.

  • The power of this test is significantly lower than that for similar procedures.

Stephens (1974) found that of all the distance tests, the Anderson-Darling (A²*) test statistic generally possesses the highest relative power efficiency in testing for normality when the distribution parameters are unknown. An additional advantage of this statistic is that it may be used with relatively small samples (i.e., on the order of n = 25). One minor disadvantage of this test is that a table of critical values for A²* is required (see Shapiro, 1980).

The Anderson-Darling test for normality would be conducted according to the following procedure. As an applications example, the following data were collected in association with the Watch Dog Timer Test for a set of ten EEC IV modules:

19.6541

19.6589

19.6606

19.6575

19.6490

19.6634

19.6555

19.6539

19.6523

19.6538

To test a hypothesis of normality, the following steps would be taken:

  • Step 1: Order the data and calculate X-bar and s.

    19.6490    19.6555
    19.6523    19.6575
    19.6538    19.6589
    19.6539    19.6606
    19.6541    19.6634

    X-bar = 19.6559

    s = .0042

  • Step 2: Calculate the standardized variates and cumulative probabilities for each observation.

    Z_i = (x_i - X-bar)/s, for i = 1, 2, ..., n

where
    x_i = the ith ordered observation/value
    z_i = the standardized variate for each observation

  • For example, z_1 for our data would be calculated as

    Z_1 = (19.6490 - 19.6559)/.0042 = -1.6249

  • Converting a z score of -1.6249 to a cumulative normal probability value, we find that P_c(z ≤ -1.6249) = .0526. Repeating this procedure for the remainder of the data set generates the values in Table 5.6.

    Table 5.6: z and P_c Values

    X          z       P_c
    19.6490    -1.62   .0526
    19.6523    -.85    .1977
    19.6538    -.50    .3085
    19.6539    -.47    .3192
    19.6541    -.42    .3372
    19.6555    -.09    .4641
    19.6575    .38     .6480
    19.6589    .71     .7611
    19.6606    1.11    .8665
    19.6634    1.77    .9616

  • Step 3: Calculate the A² value for the data set:
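In the usual form, with p_(i) denoting the ordered cumulative probabilities from Table 5.6, the statistic is

    \[ A^2 \;=\; -\,\frac{1}{n}\sum_{i=1}^{n}(2i-1)\!\left[\ln p_{(i)} \;+\; \ln\!\left(1 - p_{(n+1-i)}\right)\right] \;-\; n \]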

where
    ln(p_c) = natural log of the cumulative proportion

  • For our data, this is equivalent to:

    A² = -[(-102.0342)/10] - 10 = 0.2034

  • Step 4: Calculate the Anderson-Darling statistic (A²*).

    A²* = A²(1 + 0.75/n + 2.25/n²)

  • For our example data, then, this value is equivalent to:

    A²* = 0.2034(1 + 0.75/10 + 2.25/100) = 0.2232

  • Step 5: Assess the significance of the A²* value.

  • This test is a one-tailed analysis for the hypothesis associated with testing for an underlying distribution. If the computed value of A²* exceeds the critical value at the selected α level, then the hypothesis (in this case, of normality) is rejected. The critical values for this test are reported in Table 5.7.

    Table 5.7: Critical Values for A²*

    α       A²* Critical
    .25     .472
    .20     .509
    .15     .561
    .10     .631
    .05     .752
    .025    .873
    .01     1.035
    .005    1.159

Noting that our calculated value of 0.2232 does not exceed the critical value of 0.631 at a significance level of 0.10, there is insufficient evidence to reject the hypothesis of normality.
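A quick cross-check can be made with SciPy's Anderson-Darling routine (a sketch; note that SciPy returns the A² statistic together with sample-size-adjusted critical values, so its normalization may differ slightly from the A²* correction used above):

    from scipy import stats

    # The ten ordered Watch Dog Timer observations from Step 1.
    data = [19.6490, 19.6523, 19.6538, 19.6539, 19.6541,
            19.6555, 19.6575, 19.6589, 19.6606, 19.6634]

    result = stats.anderson(data, dist="norm")
    print(f"A^2 = {result.statistic:.4f}")
    for cv, sig in zip(result.critical_values, result.significance_level):
        print(f"critical value {cv} at {sig}% significance")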

There is a variation of the Anderson-Darling test for exponential distribution analysis (B²*), but it is limited to those cases in which the origin parameter is known. Given that this is not typically the case in industrial applications, this test will not be reviewed here. For information concerning its use, refer to Shapiro (1980) or Stephens (1974).

MOMENT TESTS FOR THE NORMAL DISTRIBUTION

The theoretical normal distribution has a third standardized moment as follows:
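In standard notation, with μ3 the third central moment and σ the standard deviation,

    \[ \gamma_1 \;=\; \frac{\mu_3}{\sigma^3} \;=\; \frac{E\!\left[(X-\mu)^3\right]}{\sigma^3} \]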

which provides a measure of the skewness of a distribution. For a normal distribution, γ1 will be equal to 0; positive values indicate that the distribution has a greater tendency to tail to the right (positively skewed). Negative values indicate a tailed distribution to the left (negatively skewed).

As with all other tests, the skewness analysis is based on a hypothesis test. Specifically, it assesses whether the sample data are so skewed that it is improbable they were randomly drawn from a distribution that can be approximated by a normal model. Although a number of formulas exist for this purpose, a common calculational formula is as follows:

g1 = k3/s³

where
    g1 = skewness index for the sample data
    s³ = the standard deviation of the sample, raised to the third power
    k3 = the third k-statistic of the sample, computed from the sum of the cubed deviations about the mean

The fourth standardized moment is a measure of the peakedness, or kurtosis, of the distribution. The normal distribution is said to be mesokurtic, or intermediate, in nature. Distributions that are narrow with high peaks are said to be leptokurtic. Flattened, broad distributions with high "shoulders" are said to be platykurtic. The standardized fourth moment has the following theoretical form:
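In the convention used here (an index of 0 for the normal distribution),

    \[ \gamma_2 \;=\; \frac{\mu_4}{\sigma^4} \;-\; 3 \]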

An important note should be raised at this point related to the kurtosis index. Many statisticians use calculation formulas derived from the formula above, which results in a calculated index of 0 for normal distributions. In other cases, however, calculation formulas are derived from the form:
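That alternative form, in the same notation, is

    \[ \beta_2 \;=\; \frac{\mu_4}{\sigma^4} \]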

which results in a kurtosis index of 3 for the theoretical normal distribution. Neither approach is more or less correct, or more powerful, than the other. The only practical advantage of the first approach (which is used in this section) is that a normal distribution then shows an index of 0 for kurtosis as well as for skewness. Whichever convention is used, it is essential that the basis for the formula in a computer package or text reference be known in order to properly evaluate the results of the kurtosis analysis.

A common calculational formula for kurtosis (where E(γ2) = 0) is

g2 = k4/s⁴

where
    g2 = kurtosis index for the sample data
    s⁴ = the standard deviation of the sample, raised to the fourth power
    k4 = the fourth k-statistic of the sample, computed from the sums of the squared and fourth-power deviations about the mean

Testing the hypothesis of normality using the moment test is made relatively simple, given that most computer statistical packages provide a measure of the skewness (g1) and kurtosis (g2) of the sample data.

Using the lead placement data from the section "Chi-Square Goodness-of-Fit Test" in this chapter, the Frequencies subprogram from SPSS was used to evaluate the data set. The statistics generated were as follows:

  • Mean = 0.954

  • Mode = 0.952

  • Median = 0.953

  • Variance = 0.000

  • Range = 0.008

  • Skewness = 1.137

  • Kurtosis = 0.416

  • Minimum = 0.951

  • Maximum = 0.959

  • Standard error = 0.000

  • Standard deviation = 0.002

  • Valid cases = 135 Missing cases = 0

The next step is a determination of the significance of the calculated skewness (g1 = 1.137) and kurtosis (g2 = 0.416) indices. Skewness, being a symmetric characteristic, has a single set of critical values that applies to both positive and negative indices. Kurtosis, which is not a symmetric departure from normality, has one table of critical values for those cases in which g2 is positive and a second table for cases in which g2 is negative.

The tables available for this test indicate that a rejection of the hypothesis of normality is appropriate if g1 exceeds the critical value in either direction (in absolute value), if g2 falls above the positive critical value, or if g2 falls below the negative critical value. (If a computer software package is used, the actual p value is generated and the decision is made based on that probability.)
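These indices are reported by most packages. A minimal sketch in Python (SciPy's bias-corrected options correspond to the k-statistic-based g1 and g2 used here, and the package's own large-sample tests can replace the table lookup; the simulated sample is illustrative only):

    import numpy as np
    from scipy import stats

    # Illustrative stand-in for the lead placement data.
    rng = np.random.default_rng(2)
    x = rng.exponential(scale=0.0021, size=135) + 0.9515

    g1 = stats.skew(x, bias=False)                     # sample skewness index
    g2 = stats.kurtosis(x, fisher=True, bias=False)    # kurtosis index, 0 for a normal model
    print(f"g1 = {g1:.3f}, g2 = {g2:.3f}")

    # SciPy's large-sample significance tests for each index.
    print(stats.skewtest(x))
    print(stats.kurtosistest(x))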

For our sample data, the relevant critical values at a significance level of 0.05 (0.10 was not available) are shown in Table 5.8.

Table 5.8: Relevant Critical Values

n      γ1      γ2
125    .350    .71
150    .321    .65

As shown by these data, we may reject the hypothesis of normality on the basis of the skewness index (i.e., 1.137 > 0.350). This was, of course, not totally unexpected, because we previously concluded that the distribution could be approximated by an exponential model. Note also that, given the critical values available, we do not have sufficient statistical evidence to reject the hypothesis of normality on the basis of the calculated kurtosis at the 0.05 level of significance. Critical values at the 0.10 level were not available, however; had we tested at that level, the smaller critical value might have led to rejection.



