SIMULATED SAMPLING


The sampling method known generally as Monte Carlo is a simulation procedure of considerable value.

Let us assume that a product is being assembled by a two-station assembly line, with one operator at each of the two stations. Operation A is the first of the two operations. The operator completes approximately the first half of the assembly and then sets the half-completed assembly on a section of conveyor, where it rolls down to operation B. It takes a constant time of 0.10 minute for the part to roll down the conveyor section and become available to operator B. Operator B then completes the assembly. The average time for operation A is 0.52 minute per assembly and the average time for operation B is 0.48 minute per assembly. We wish to determine the average inventory of assemblies we may expect (the average length of the waiting line of assemblies) and the average output of the assembly line. This may be done by simulated sampling as follows:

  1. The distributions of assembly time for operations A and B must be known or procured.

    Usually this is done from historical data, sometimes with surrogate data. A study was made of both operations, and two frequency distributions were constructed (not shown here). In the case of operation A, the value 0.25 minute occurred three times, 0.30 occurred twice, and so on. For operation A the mean was 0.52 minute with N = 167, and for operation B the mean was 0.48 minute with N = 115. The two distributions do not necessarily fit standard mathematical distributions, but this is not important.

  2. Convert the frequency distributions to cumulative probability distributions.

    This is done by summing, for each performance time, the frequencies of all observations at or below that time and plotting the sums. The cumulative frequencies are then converted to percents by assigning the number 100 to the maximum value. The cumulative frequency distribution (not shown here) for operation A began at the lowest time, 0.25 minute, where there were three observations; three is therefore plotted on the cumulative chart for the time 0.25 minute. For the performance time 0.30 minute there were two observations, but there were five observations that measured 0.30 minute or less, so the value five is plotted for 0.30 minute. For the performance time 0.35 minute there were 10 observations, but there were 15 observations that measured 0.35 minute or less. When the cumulative frequency distribution was completed, a cumulative percent scale was constructed on the right by assigning the number 100 to the maximum value, 167 in this case, and dividing the resulting scale into equal parts. This results in a cumulative probability distribution. We can use this distribution to say, for example, that 100 percent of the time values were 0.85 minute or less, 55.1 percent were 0.50 minute or less, and so on.

  3. Sample at random from the cumulative distributions to determine specific performance times to use in simulating the operation of the assembly line.

    We do this by selecting numbers between 0 and 100 at random (representing probabilities or percents). The random numbers could be selected by any random process, such as drawing numbered chips from a box, using a random number table, or using computer-generated random numbers. For small studies, the easiest way is to use a table of random numbers.

    The random numbers are used to enter the cumulative distributions in order to obtain time values. In our example, we start with the random number 10. A horizontal line is projected from that value until it intersects the distribution curve; a vertical line dropped from that point to the horizontal axis gives the associated time value, which happens to be 0.40 minute for the random number 10. Now we can see the purpose behind the conversion of the original distribution to a cumulative distribution: only one time value can be associated with a given random number. In the original bell-shaped distribution, a horizontal line could intersect the curve at two different time values.

    Sampling from the cumulative distribution in this way gives time values in random order, which occur in proportion to the original distribution, just as if assemblies were actually being produced. Table 4.1 gives a sample of 20 time values determined in this way from the two distributions; a short code sketch of the procedure follows the table.

    Table 4.1: Simulated Samples of 20 Performance Time Values for Operations A and B

               Operation A                            Operation B
    Random    Performance Time from        Random    Performance Time from
    Number    Cumulative Distribution     Number    Cumulative Distribution
              for Operation A                        for Operation B
    ------    -----------------------     ------    -----------------------
        10            0.40                    79            0.60
        22            0.40                    69            0.50
        24            0.45                    33            0.40
        42            0.50                    52            0.45
        37            0.45                    13            0.35
        77            0.60                    16            0.35
        99            0.85                    19            0.35
        96            0.75                     4            0.30
        89            0.65                    14            0.35
        85            0.65                     6            0.30
        28            0.45                    30            0.40
        63            0.55                    25            0.35
         9            0.40                    38            0.40
        10            0.40                                  0.25
         7            0.35                    92            0.70
        51            0.50                    82            0.60
         2            0.30                    20            0.35
         1            0.25                    40            0.40
        52            0.50                    44            0.45
         7            0.35                    25            0.35
    Totals            9.75                                  8.20
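
    Steps 2 and 3 amount to a table lookup: build the cumulative percent scale, draw a random number between 0 and 100, and read back the performance time. The Python sketch below shows one way to do this. It is only an illustration: the function names are ours, and because the original frequency distributions are not reproduced here, the counts in freq_a are hypothetical, chosen merely to agree with the few figures quoted above (3 observations at 0.25, 2 at 0.30, 10 at 0.35, N = 167, a mean of about 0.52 minute, and 55.1 percent at 0.50 minute or less).

        import random
        from bisect import bisect_left

        def cumulative_distribution(freq):
            """Step 2: turn a frequency table {performance time: count} into a
            cumulative percent distribution (the percents climb to 100 at the
            largest observed time)."""
            times = sorted(freq)
            total = sum(freq.values())
            cum_pct, running = [], 0
            for t in times:
                running += freq[t]
                cum_pct.append(100.0 * running / total)
            return times, cum_pct

        def sample_time(times, cum_pct, rng=random):
            """Step 3: enter the cumulative distribution with a random number
            between 0 and 100 and read off the associated performance time."""
            r = rng.uniform(0, 100)
            return times[bisect_left(cum_pct, r)]

        # Hypothetical frequency counts for operation A (illustration only).
        freq_a = {0.25: 3, 0.30: 2, 0.35: 10, 0.40: 22, 0.45: 30, 0.50: 25,
                  0.55: 25, 0.60: 20, 0.65: 14, 0.70: 8, 0.75: 5, 0.80: 2, 0.85: 1}

        times_axis_a, cum_pct_a = cumulative_distribution(freq_a)
        sample_a = [sample_time(times_axis_a, cum_pct_a) for _ in range(20)]
        print(sample_a)   # twenty simulated performance times for operation A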

  4. Simulate the actual operation of the assembly line.

    This is done in Table 4.2; the procedure is very similar to that used for waiting line (queuing) problems. The time values for operation A (Table 4.1) are first used to determine when the half-completed assemblies would be available to operation B. The first assembly is completed by operator A in 0.40 minute. It takes 0.10 minute to roll down to operator B, so this point in time is selected as zero. The next assembly is available 0.40 minute later, and so on. For the first assembly, operation B begins at time zero. From the simulated sample, the first assembly requires 0.60 minute for B. At this point there is no idle time for B and no inventory. At time 0.40 the second assembly becomes available, but B is still working on the first, so the assembly must wait 0.20 minute; operator B begins work on it at 0.60. From Table 4.1, the second assembly requires 0.50 minute for B. We continue the simulated operation of the line in this way (a code sketch of this bookkeeping follows the discussion of Table 4.2).

    Table 4.2: Simulated Operation of the Two-Station Assembly Line when Operation A Precedes Operation B

    Assemblies       Operation B    Operation B    Idle Time in   Waiting Time    Number of Parts in Line,
    Available for    Begins at      Ends at        Operation B    of Assemblies   Excluding Assembly Being
    Operation B at                                                                Processed in Operation B
    ---------------  -------------  -------------  -------------  --------------  ------------------------
      0.00             0.00           0.60
      0.40             0.60           1.10                            0.20                   1
      0.85             1.10           1.50                            0.25                   1
      1.35             1.50           1.95                            0.15                   1
      1.80             1.95           2.30                            0.15                   1
      2.40             2.40           2.75            0.10
      3.25             3.25           3.60            0.50
      4.00             4.00           4.30            0.40
      4.65             4.65           5.00            0.35
      5.30             5.30           5.60            0.30
      5.75             5.75           6.15            0.15
      6.30             6.30           6.65            0.15
      6.70             6.70           7.10            0.05
      7.10             7.10           7.35
      7.45             7.45           8.15            0.10
      7.95             8.15           8.75                            0.20                   1
      8.25             8.75           9.10                            0.50                   1
      8.50             9.10           9.50                            0.60                   2
      9.00             9.50           9.95                            0.50                   2
      9.35             9.95          10.30                            0.60                   2

    Idle time in operation B                         = 2.10 minutes
    Waiting time of parts                            = 3.15 minutes
    Average inventory of assemblies between A and B  = 3.15/9.35 = 0.34 assemblies
    Average production rate of A                     = (20 × 60)/9.75 = 123 pieces/hour
    Average production rate of B (while working)     = (20 × 60)/8.20 = 146 pieces/hour
    Average production rate of A and B together      = (20 × 60)/10.30 = 116.5 pieces/hour

    Note: In the above computations, 20 is the total number of completed assemblies; 9.75 is the total work time of operation A for the 20 assemblies from Table 4.1; 8.20 is the total work time, exclusive of idle time, of operation B for the 20 assemblies from Table 4.1.

    The sixth assembly becomes available to B at time 2.40, but B was ready for it at time 2.30. Operator B was therefore forced to remain idle for 0.10 minute for lack of work. The complete sample of 20 assemblies is progressively worked out in this way; see Table 4.2.
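
    The bookkeeping of Table 4.2 is easy to mechanize. The sketch below, again in Python with function and variable names of our own choosing, replays the 20 sampled times of Table 4.1 through the line and reproduces the summary figures. Time zero is taken as the moment the first assembly reaches the second station, so the 0.10-minute conveyor transfer and the first station's time on the first assembly drop out of the arithmetic.

        def simulate_line(first_times, second_times):
            """Simulate a two-station line in which the first list of times feeds
            the second station; returns arrival, start, and end times plus the
            idle time of the second operator and the waiting time of parts."""
            # Arrival of assembly k at the second station = arrival of assembly
            # k-1 plus the first station's work time on assembly k.
            available = [0.0]
            for work in first_times[1:]:
                available.append(available[-1] + work)

            start, end = [], []
            idle = waiting = 0.0
            prev_end = 0.0
            for arrive, work in zip(available, second_times):
                begin = max(arrive, prev_end)
                idle += max(0.0, arrive - prev_end)   # second operator starved for work
                waiting += begin - arrive             # assembly sits in the waiting line
                start.append(begin)
                end.append(begin + work)
                prev_end = end[-1]
            return available, start, end, idle, waiting

        # The 20 sampled performance times of Table 4.1.
        times_a = [0.40, 0.40, 0.45, 0.50, 0.45, 0.60, 0.85, 0.75, 0.65, 0.65,
                   0.45, 0.55, 0.40, 0.40, 0.35, 0.50, 0.30, 0.25, 0.50, 0.35]
        times_b = [0.60, 0.50, 0.40, 0.45, 0.35, 0.35, 0.35, 0.30, 0.35, 0.30,
                   0.40, 0.35, 0.40, 0.25, 0.70, 0.60, 0.35, 0.40, 0.45, 0.35]

        avail, start, end, idle_b, waiting = simulate_line(times_a, times_b)
        print(round(idle_b, 2))               # 2.10 minutes idle in operation B
        print(round(waiting, 2))              # 3.15 minutes of waiting time of parts
        print(round(waiting / avail[-1], 2))  # 0.34 assemblies of average inventory
        print(round(20 * 60 / end[-1], 1))    # 116.5 pieces/hour for the line as a whole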

The summary at the bottom of Table 4.2 shows the result in terms of the idle time in operation B, the waiting time of the parts, the average inventory between the two operations, and the resulting production rates. From the average times given by the original distributions, we might have guessed that A would limit the output of the line, since it is the slower of the two operations. Actually, however, the line production rate is less than that dictated by A (116.5 pieces per hour compared with 123 pieces per hour for A as an individual operation). The reason is that the interplay of performance times for A and B does not always match up well, and sometimes B has to wait for work. B's enforced idle time plus B's total work time actually determine the maximum production rate of the line.

A little thought should convince us that, if possible, it would have been better to redistribute the assembly work so that A is the faster of the two operations. Then the probability that B will run out of work is reduced. This is demonstrated by Table 4.3, which assumes a simple reversal of the sequence of A and B; the same sample times have been used, and the simulated operation of the line has been developed as before (a brief code illustration of the reversal follows the table). With the faster of the two operations first in the sequence, the output rate of the line increases and approaches the rate of the limiting operation, and the average inventory between the two operations increases. With the higher average inventory, the second operation in the sequence is almost never idle owing to lack of work. Actually, this conclusion is a fairly general one with regard to the balance of assembly lines; that is, the best labor balance will be achieved when each succeeding operation in the sequence is slightly slower than the one before it. This minimizes the idle time created when operators run out of work because of the variable performance times of the various operations. In practical situations, it is common to find safety banks of assemblies between operations to absorb these fluctuations in performance.

Table 4.3: Simulated Operation of the Two-Station Assembly Line when Operation B Precedes Operation A

Assemblies       Operation A    Operation A    Idle Time in   Waiting Time    Number of Parts in Line,
Available for    Begins at      Ends at        Operation A    of Assemblies   Excluding Assembly Being
Operation A at                                                                Processed in Operation A
---------------  -------------  -------------  -------------  --------------  ------------------------
  0.00             0.00           0.40
  0.50             0.50           0.90            0.10
  0.90             0.90           1.35
  1.35             1.35           1.85
  1.70             1.85           2.30                            0.15                   1
  2.05             2.30           2.90                            0.25                   1
  2.40             2.90           3.75                            0.40                   1
  2.70             3.75           4.50                            1.05                   2
  3.05             4.50           5.15                            1.45                   2
  3.35             5.15           5.80                            1.80                   3
  3.75             5.80           6.25                            2.05                   3
  4.10             6.25           6.80                            2.15                   4
  4.50             6.80           7.20                            2.30                   4
  4.75             7.20           7.60                            2.45                   5
  5.45             7.60           7.95                            2.15                   5
  6.05             7.95           8.45                            1.90                   5
  6.40             8.45           8.75                            2.05                   5
  6.80             8.75           9.00                            1.95                   6
  7.25             9.00           9.50                            1.75                   5
  7.60             9.50           9.85                            1.90                   6

Idle time in operation A                         = 0.10 minute
Waiting time of parts                            = 25.75 minutes
Average inventory of assemblies between A and B  = 25.75/7.60 = 3.4 assemblies
Average production rate of A (while working)     = (20 × 60)/9.75 = 123 pieces/hour
Average production rate of B                     = (20 × 60)/8.20 = 146 pieces/hour
Average production rate of A and B together      = (20 × 60)/9.85 = 122 pieces/hour
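
With the simulate_line sketch given earlier, the reversed arrangement of Table 4.3 is simply the same call with the two lists of sampled times swapped:

    # Operation B now feeds operation A (the arrangement of Table 4.3).
    avail, start, end, idle_a, waiting = simulate_line(times_b, times_a)
    print(round(idle_a, 2))               # 0.10 minute idle at the second station
    print(round(waiting, 2))              # total waiting time of parts in the line
    print(round(waiting / avail[-1], 1))  # about 3.4 assemblies of average inventory
    print(round(20 * 60 / end[-1]))       # about 122 pieces/hour for the line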

We may have wanted to build a more sophisticated model of the assembly line. Our simple model assumed that the performance times were independent of other events in the process. Perhaps in the actual situation, the second operation in the sequence would tend to speed up when the inventory began to build up. This effect could have been included if we had knowledge of how inventory affected performance time.

If we have followed this simulation example through carefully, we may be convinced that it would work but that it would be very tedious for problems of practical size. Even for our limited example, we would probably wish to have a larger run on which to base conclusions, and there would probably be other alternatives to test. For example, there may be several alternative ways to distribute the total assembly task between the two stations, or more than two stations could be considered. Which of the several alternatives would yield the smallest incremental cost of labor, inventory, and so on? To cope with the tedium and the excessive person-hours needed to develop a solution by hand, a computer may be used. If a computer were programmed to simulate the operation of the assembly line, we would place the two cumulative distributions in the memory of the computer. Through the program, the computer would select a performance time value at random from the cumulative distribution for A in much the same fashion as we did by hand. Then it would select at random a time value from the cumulative distribution for B, make the necessary computations, and hold the data in memory. The cycle would repeat, selecting new time values at random, adding and subtracting to obtain the record that we produced by hand. A large run could be made with no more effort than a small run, and various alternatives could be evaluated quickly and easily in the same manner.
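
As a rough sketch of what such a program might look like, the fragment below combines the earlier pieces: it draws a large sample from each cumulative distribution and pushes it through simulate_line for both orderings of the stations. The frequency table freq_b is hypothetical, invented only so that N = 115 and the mean is about 0.48 minute as stated earlier, so the printed results are illustrative rather than the actual study's.

    import random

    # Hypothetical frequency counts for operation B (illustration only).
    freq_b = {0.25: 3, 0.30: 6, 0.35: 10, 0.40: 18, 0.45: 20, 0.50: 18,
              0.55: 14, 0.60: 13, 0.65: 8, 0.70: 5}
    times_axis_b, cum_pct_b = cumulative_distribution(freq_b)

    random.seed(1)   # any seed; fixed only so that a rerun repeats itself
    N = 1000         # a run far longer than the 20 assemblies worked by hand
    draw_a = [sample_time(times_axis_a, cum_pct_a) for _ in range(N)]
    draw_b = [sample_time(times_axis_b, cum_pct_b) for _ in range(N)]

    for label, first, second in (("A feeds B", draw_a, draw_b),
                                 ("B feeds A", draw_b, draw_a)):
        avail, start, end, idle, waiting = simulate_line(first, second)
        print(label,
              "| line rate:", round(N * 60 / end[-1], 1), "pieces/hour",
              "| average inventory:", round(waiting / avail[-1], 2), "assemblies",
              "| idle at second station:", round(idle, 2), "minutes")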



