
2.4 CAD Tools for Simplification

The algorithm we presented in the previous section for extracting essential prime implicants from a K-map could form the basis for computer-based tools. In this section, we examine computer-based algorithms for two-level simplification in more detail. We begin with the Quine-McCluskey method, the first complete algorithm for simplification. We complete this section by covering the methods used in espresso, a popular tool for two-level minimization. While not guaranteed to find the best two-level expression, espresso uses several tricks to find a good solution as fast as possible.

2.4.1 Quine-McCluskey Method

Except in special cases or for particularly sparse truth tables, the K-map method simply breaks down beyond six variables. We need a more systematic algorithm. The Quine-McCluskey method, developed in the mid-1950s, finds the minimized representation of any Boolean expression. It provides a systematic procedure for generating all prime implicants and then extracting a minimum set of primes covering the on-set.

The method finds the prime implicants by repeatedly applying the uniting theorem, just as we did earlier in this section. The contribution of Quine-McCluskey is to provide a tabular method that ensures that all prime implicants are found. To understand how it works, let's use the same example as in Figure 2.50: F = Σm(4,5,6,8,9,10,13) + d(0,7,15).

Finding Prime Implicants The first step is to list all elements of the on-set and don't-care set in terms of their min-term indices, represented as a binary number. The elements are grouped according to the number of 1's in the binary representation.

Table 2.1 shows the structure of a Quine-McCluskey implicant table. The first column contains the min-terms of the on-set, that is, single points in the Boolean space. In the example, each of these represents a four-variable product term (a min-term). As a result of applying the method, the second column will contain implicants that form edges in the Boolean space: three-variable product terms. After another iteration of the method, the third column will contain larger implicants that represent planes in the Boolean space: two-variable terms. A third iteration will find implicant cubes in the space: one-variable terms.

We begin the method by filling in the first column of the table as follows. Each group of the min-term indices of the on-set and don't care set is separated by a blank line. The first group has no 1's in the indices, the second has one 1 in each index, the third has two 1's in each index, and so on.

To apply the uniting theorem systematically, compare the elements in the first group against each element in the second. If they differ by a single bit, the min-terms the numbers represent are adjacent in the n-dimensional Boolean space. For example, 0000 (A'B'C'D') and 0100 (A'BC'D') can be combined into the implicant A'C'D' according to the uniting theorem. The latter term is represented symbolically by 0-00. Every time a new implicant is formed, it is placed in the next column. Since adjacent groups differ in their 1's count by exactly one, it is sufficient to restrict comparisons to adjacent groups to detect when the uniting theorem can be applied.
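This bit-by-bit comparison is easy to mechanize. Here is a short Python sketch, representing each implicant as a string over 0, 1, and "-" just as in the table (the function name is ours, not part of the method's description):

```python
def combine(a, b):
    """Apply the uniting theorem to two implicants written as strings
    over '0', '1', and '-'.  Returns the merged implicant when the two
    differ in exactly one bit position, and None otherwise."""
    diffs = [i for i in range(len(a)) if a[i] != b[i]]
    # A legal combination differs in exactly one position, and that
    # position must be a real bit in both terms, not a '-'.
    if len(diffs) == 1 and a[diffs[0]] != '-' and b[diffs[0]] != '-':
        i = diffs[0]
        return a[:i] + '-' + a[i + 1:]
    return None

print(combine('0000', '0100'))   # -> 0-00
print(combine('0101', '1010'))   # -> None (the terms differ in more than one bit)
```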

Let's apply the Quine-McCluskey algorithm to the whole first column. We begin with the first group (no 1's) against the second group (one 1). 0000 is compared against 0100 and 1000, yielding terms for the second column of 0-00 and -000 respectively. Every time a term contributes to a new implicant, it receives a check mark. This means that the implicant is not prime: it can be combined with some other element to form a larger implicant.

We repeat for the second group against the third group. 0100 combines with 0101 and 0110, giving 010- and 01-0 in the second column. 1000 combines with 1001 and 1010, resulting in 100- and 10-0.

Now we try the third group against the fourth group. 0101 combines with 0111 and 1101 to give 01-1 and -101. 0110 combines with 0111 to yield 011-. 1001 combines with 1101 to give 1-01. 1010 does not combine with any element of the fourth group.

When we compare the fourth to the fifth group, two additional terms are added: -111 and 11-1.

The procedure is repeated for the groups in column II. For the elements to be combined, they must differ by a single bit and must have their "-" in the same bit position. The elements of the first group do not combine with any elements of the second group. We mark these with asterisks because they are prime implicants: we have expanded them as much as possible.

In the second and third groups, 010- can be combined with 011-, yielding 01-- for the third column. 01-0 and 01-1 are combined to derive the same condensed term. 100- and 10-0 cannot be combined further and are prime implicants.

Between the third and fourth groups only the additional term -1-1 is added to the third column, derived from the combinations of -101 and -111, and 01-1 and 11-1.

The elements of the third column cannot be reduced further. Both are marked as prime implicants. Since no new implicants are added, we have found all prime implicants, and the first phase of the algorithm can now terminate.
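The entire first phase can be expressed compactly in Python. This sketch compares all pairs of terms rather than only adjacent groups (grouping by 1's count is a hand-method optimization; a merge can only succeed between adjacent groups anyway) and treats the don't cares as 1's, as described above:

```python
from itertools import combinations

def prime_implicants(on_set, dc_set, nbits):
    """Quine-McCluskey phase 1: repeatedly apply the uniting theorem;
    any term that never combines is a prime implicant."""
    def merge(a, b):
        diffs = [i for i in range(nbits) if a[i] != b[i]]
        if len(diffs) == 1 and '-' not in (a[diffs[0]], b[diffs[0]]):
            i = diffs[0]
            return a[:i] + '-' + a[i + 1:]
        return None

    column = {format(m, '0{}b'.format(nbits)) for m in on_set + dc_set}
    primes = set()
    while column:
        combined, next_column = set(), set()
        for a, b in combinations(sorted(column), 2):
            m = merge(a, b)
            if m is not None:
                combined.update((a, b))   # check-marked: not prime
                next_column.add(m)
        primes |= column - combined       # starred: prime implicants
        column = next_column
    return primes

# F = sum of m(4,5,6,8,9,10,13) + d(0,7,15):
print(sorted(prime_implicants([4, 5, 6, 8, 9, 10, 13], [0, 7, 15], 4)))
# -> ['-000', '-1-1', '0-00', '01--', '1-01', '10-0', '100-']
```

Running it on the example reproduces exactly the seven prime implicants derived by hand above.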

The algorithm has found the following prime implicants:

0-00 = A'C'D'    -000 = B'C'D'    100- = AB'C'    10-0 = AB'D'
1-01 = AC'D    01-- = A'B    -1-1 = BD
These are shown circled in the K-map of Figure 2.53.

They are exactly the same as the prime implicants we found in the previous section. Note that in this portion of the procedure, the don't cares have been treated as though they are 1's.

Finding the Minimum Cover The second step of the method is to find the smallest collection of prime implicants that cover the complete on-set of the function. This is accomplished through the prime implicant chart (as opposed to the implication chart used in the first phase), as shown in Figure 2.54.

The prime implicant chart is organized as follows. The columns are labeled with the min-term indices of the on-set. Note that don't cares are not included in this stage. The rows are labeled with the min-terms covered by a given prime implicant. This is done by taking the indices of the prime implicant representation and replacing each "-" by all possible combinations of 1's and 0's. For example, -1-1 becomes 0101, 0111, 1101, 1111, which are the indices of the min-terms m5, m7, m13, and m15. An X is placed in the (row, column) location if the min-term represented by the column is covered by the prime implicant associated with the row. The initial configuration is given in Figure 2.54(a).
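The "-" expansion described here is a one-liner in Python, using the same 0/1/- string notation as the tables (the function name is ours):

```python
from itertools import product

def covered_minterms(implicant):
    """Min-term indices covered by an implicant such as '-1-1',
    obtained by substituting both 0 and 1 for every '-'."""
    choices = ['01' if b == '-' else b for b in implicant]
    return sorted(int(''.join(bits), 2) for bits in product(*choices))

print(covered_minterms('-1-1'))   # -> [5, 7, 13, 15], i.e., m5, m7, m13, m15
```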

Next, we look for essential prime implicants. These are immediately apparent whenever there is a single X in any column. This means that there is a min-term that is covered by one and only one prime implicant. The essential prime implicants must participate in the final cover. We place a line through the column and row in which the essential prime implicant has been found and place a box around the prime. This is shown in Figure 2.54(b).

The essential prime implicants usually cover additional min-terms. We cross out any columns that have an X in a row associated with an essential prime. These min-terms are already covered by the essential primes. This is shown in Figure 2.54(c).

In the example, two min-terms are still uncovered, represented by the columns 9 and 13. The final step is to find as few primes as possible that cover the remaining min-terms. In our example, the single prime implicant 1-01 covers both of these. Adding it to the two essential prime implicants already found completes the cover. This is shown in Figure 2.54(d). The solution found here is identical to the one we found earlier in Figure 2.50.
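The chart manipulations of this second phase can also be sketched in Python. Essential primes appear as columns with a single X; for whatever remains we use a greedy pass here, which is one common heuristic for the covering step and finds a good cover, though not necessarily a minimum one:

```python
from itertools import product

def minterms_of(imp):
    """Min-term indices covered by an implicant string such as '-1-1'."""
    choices = ['01' if b == '-' else b for b in imp]
    return {int(''.join(bits), 2) for bits in product(*choices)}

def select_cover(primes, on_set):
    """Prime implicant chart as code: pick essential primes first,
    then cover remaining min-terms greedily.  Don't cares are simply
    left out of on_set, as in the chart."""
    covers = {p: minterms_of(p) & set(on_set) for p in primes}
    # A column with a single X marks an essential prime implicant.
    chosen = {next(p for p in primes if m in covers[p])
              for m in on_set
              if sum(m in covers[p] for p in primes) == 1}
    remaining = set(on_set) - set().union(*(covers[p] for p in chosen))
    while remaining:   # greedy: take the prime covering the most leftovers
        best = max(primes, key=lambda p: len(covers[p] & remaining))
        chosen.add(best)
        remaining -= covers[best]
    return chosen

cover = select_cover({'0-00', '-000', '100-', '10-0', '1-01', '01--', '-1-1'},
                     [4, 5, 6, 8, 9, 10, 13])
print(sorted(cover))   # -> ['01--', '1-01', '10-0']
```

On the example, the essential primes 01-- and 10-0 are found first, and the greedy pass then picks 1-01 to cover min-terms 9 and 13, matching the solution of Figure 2.54(d).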

2.4.2 Espresso Method

Unfortunately, the number of prime implicants grows very quickly as the number of inputs increases. It can be shown that the upper bound on the number of prime implicants is 3^n/n. Finding a minimum set cover is also known to be a very difficult problem, a so-called NP-complete problem. This means that there are not likely to be any efficient algorithms for solving it.
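A quick calculation makes the severity of this bound concrete (here rounded down to an integer):

```python
# Upper bound on the number of prime implicants, 3**n / n
for n in (4, 8, 16, 24):
    print(n, 3**n // n)
```

Already at 16 inputs the bound exceeds two million prime implicants, so generating all of them, as Quine-McCluskey does, quickly becomes impractical.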

Thus, much of the work in logic minimization has concentrated on heuristic methods to perform these two steps more quickly, finding a minimal solution rapidly rather than guaranteeing a minimum solution will be found. The primary technique avoids generating all prime implicants by judiciously computing a subset of primes that still cover the on-set. In this subsection, we will examine the algorithms and techniques used in espresso.

Algorithms Used in Espresso Espresso is a program for two-level Boolean function minimization, readily available from the University of California, Berkeley. It combines many of the best heuristic techniques developed in earlier programs, such as mini and presto. Although a detailed explanation of the operation of espresso is beyond the scope of this book (see Brayton et al. in the chapter references), the basic ideas employed by the program are not difficult to understand. They are as follows:

    1. Rather than start by generating all implicants and then finding those that are prime, espresso expands implicants into their maximum size. Implicants that are covered by an expanded implicant are removed from further consideration. This process is called EXPAND. How well this works depends critically on the order and direction in which implicants are expanded. Much of the power of espresso lies in its methods for directing and ordering the expansion.
    2. An irredundant cover (that is, one for which no proper subset is also a cover) is extracted from the expanded implicants. The method is essentially the same as the Quine-McCluskey prime implicant chart method. This step is called IRREDUNDANT COVER.
    3. At this point, the solution is usually pretty good, but under certain conditions it can still be improved. There might be another cover with fewer terms or the same number of terms but fewer literals. To try to find a better cover, espresso first shrinks the prime implicants to the smallest size that still covers the logic function. This process is called REDUCE.
    4. Since reduction yields a cover that is typically no longer prime, the sequence of steps REDUCE, EXPAND, and IRREDUNDANT COVER are repeated in such a fashion that alternative prime implicants are derived. Espresso will continue repeating these steps as long as it generates a cover that improves on the last solution found.
    5. A number of other strategies are used to improve the result or to compute it more quickly. These include (a) early identification and extraction of essential prime implicants, so they need not be revisited during step 4; (b) using the function's complement to check efficiently whether an EXPAND step actually increases the coverage of the function (the min-terms covered by an expanded implicant may already be covered by another expanded implicant, so the newly expanded implicant should not be placed in the cover); and (c) a special "last gasp" step which guarantees that no single prime implicant can be added to the cover in such a way that two primes can then be eliminated.
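The interaction of steps 1 through 4 can be summarized in pseudocode. EXPAND, REDUCE, IRREDUNDANT_COVER, and cost stand in for espresso's internal heuristics; they are placeholders, not the program's actual interfaces:

```
F = EXPAND(F, dont_cares)            # step 1: grow implicants into primes
F = IRREDUNDANT_COVER(F)             # step 2: drop redundant implicants
best = F
repeat:
    F = REDUCE(F, dont_cares)        # step 3: shrink primes within the cover
    F = EXPAND(F, dont_cares)        # step 4: re-expand toward new primes
    F = IRREDUNDANT_COVER(F)
    if cost(F) < cost(best): best = F
    else: return best                # stop when no improvement is found
```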
Espresso's input is an encoded truth table. For the example of Figure 2.50, the input is shown in Figure 2.55.

The first two lines describe the number of inputs and outputs of the function. The next two describe the input symbol names and the name of the output function. The next line gives the number of product terms used in specifying the function. In this case, there are 10 terms: seven 1's and three don't cares. Each term is listed by its min-term index, encoded in binary, followed by a 1 or a "-." The latter indicates a don't care. The last line indicates the end of the list of product terms.
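Putting this description together, the contents of Figure 2.55 would look something like the following, in espresso's PLA input format (the signal names a, b, c, d, and f are illustrative):

```
.i 4
.o 1
.ilb a b c d
.ob f
.p 10
0000 -
0100 1
0101 1
0110 1
0111 -
1000 1
1001 1
1010 1
1101 1
1111 -
.e
```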

The output minimized truth table is shown in Figure 2.56.

The first four lines have the same meaning as before. The encoding for the minimum cover is identical to that used in the Quine-McCluskey method: 1-01 corresponds to AC'D, 10-0 to AB'D', and 01-- to A'B.
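Based on this description, the minimized output file of Figure 2.56 would have roughly this form (again with illustrative signal names):

```
.i 4
.o 1
.ilb a b c d
.ob f
.p 3
1-01 1
10-0 1
01-- 1
.e
```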

To see how the iteration of REDUCE, IRREDUNDANT COVER, and EXPAND can improve the cover, consider the four-variable K-map of Figure 2.57.

Figure 2.57(a) shows the primes as found by espresso after executing steps 1 and 2 for the first time. It has four prime implicants and is an irredundant cover, but this is not the minimum cover possible.

The results of the REDUCE step are shown in Figure 2.57(b). Two of the prime implicants have been shrunk into smaller implicants that are no longer prime. The particular choice of reductions depends on heuristics and espresso's order of execution. The result of the second iteration of EXPAND is shown in Figure 2.57(c). Espresso retains the last irredundant cover, and its expansion algorithms guarantee that it never generates the same cover twice. The IRREDUNDANT COVER extracted by espresso is given in Figure 2.57(d). The three-product-term cover is indeed an improvement on the original result.


This file last updated on 06/23/96 at 19:47:39.
randy@cs.Berkeley.edu;

