
1.3 Multiple Representations of a Digital Design

A digital designer must confront the many alternative ways of thinking about digital logic. In this section, we examine some of the commonly encountered representations of a digital design. We examine them bottom-up, starting with the switch representation and proceeding through truth tables, Boolean algebra, gates, waveforms, blocks, and behaviors (see Figure 1.13).

Behaviors are the best representation for understanding a design at its highest level of abstraction, perhaps during the earliest stages of the design. On the other hand, switches may be the most appropriate representation if you are implementing the design directly as an integrated circuit. We will cover each of these representations in considerably more detail in subsequent chapters.

1.3.1 Switches

Switches are the simplest representation of a hardware system we will examine. While providing a convenient way to think about logic statements, switches also correspond directly to transistors, the simple electronic devices from which all hardware systems are constructed.

Basic Concept A switch consists of two connection terminals and a control point. The connection terminals are denoted by small circles; the control point is represented by an arrow with a line through it.

Switches provide a rather idealized abstraction of how transistors actually behave. If you apply true (a logic 1 voltage) to the control point, the switch is closed, tying together the two connection terminals. Setting false (a logic 0 voltage) on the switch control causes the switch to open, cutting the connection between the terminals. We call such switches normally open: the switch is open until the control point becomes true.

There are actually two different kinds of transistors. So just as there are normally open switches, there are also normally closed switches. The notation is shown in Figure 1.14. A normally closed switch is closed until its control point becomes true.

Switches can represent logic clauses by associating the logic clause with the control point of the switch. A normally open switch is closed when the clause associated with the control point is true. We say that the clause is asserted. The switch is open, and the connection path is broken, when the clause is false. In this case, we say that the clause is unasserted. Normally closed switches work in a complementary fashion.

Switch Representation of Logic Statements Switch circuits suggest a possible implementation of the switching networks presented in the previous section. The network computes a logical function by routing "true" through the switches, controlled by logic clauses.

Let's illustrate the concept with the example of the car in the garage. Figure 1.15 gives a switching network for determining whether it is safe to back the car out. The three conditions, car in garage, garage door open, and car running, control the switching elements. The input to the network is "true"; the output is true only when all three conditions are true. In this case, a closed switch path exists from the true input to the output.

But what happens if the car is not in the garage (or the garage door isn't open or the car isn't running)? Then the path between true and the output is broken. What is the output value in this case? Actually, it has no value; it is said to be floating.

Floating outputs are not desirable in logic networks. We really should have a way to force the output to false when the car can't be backed out. Every switch output should have a path to true and a complementary path to false.

This is shown in Figure 1.16 for the implementations of AND and OR operations. Note how placing normally open switches in series yields an AND. When A and B are true, the switches close and a path is established from true to the output. If either A or B is false, the path to true is broken, and one of the parallel normally closed switches establishes a connection from false to the output.

In the OR implementation, the parallel and series connections of the switches are reversed. If either A or B is true, one or both of the normally open switches will close, making a connection between true and the output. If both A and B are false, this path is broken, and a new path is made between false and the output.
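To make the series and parallel structure concrete, the following small Python sketch models a switch network as a pair of path conditions: one path to true and a complementary path to false. The function names and the "floating" marker are inventions of this sketch, not notation from the text; when neither path conducts, the output floats, exactly the situation described for the car-in-the-garage network above.

def switch_output(path_to_true, path_to_false):
    # A switch network output is 1 if a closed path ties it to true,
    # 0 if a closed path ties it to false, and floating otherwise.
    if path_to_true:
        return 1
    if path_to_false:
        return 0
    return "floating"

def and_network(a, b):
    # Series normally open switches to true; parallel normally closed switches to false.
    return switch_output(a and b, (not a) or (not b))

def or_network(a, b):
    # Parallel normally open switches to true; series normally closed switches to false.
    return switch_output(a or b, (not a) and (not b))

def backup_out_network(in_garage, door_open, running):
    # The Figure 1.15 network has a path to true only, so its output can float.
    return switch_output(in_garage and door_open and running, False)

print(and_network(True, True))                # 1
print(or_network(False, False))               # 0
print(backup_out_network(True, True, False))  # floating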

The capabilities of switching networks go well beyond modeling such simple logic connectives. They can be used to implement many unusual digital functions, as we shall see in Chapter 4. In particular, there is a class of functions that can best be visualized in terms of directing inputs to outputs through a maze of switches. These are excellent candidates for direct implementation by transistor switches. For some examples, see Exercise 1.8.

1.3.2 Truth Tables

The switch representation is close to how logic functions are actually implemented as transistors. But this is not always the simplest way to describe a Boolean function. An alternative, more abstract representation tabulates all possible input combinations with their associated output values. We have already seen the concept of a truth table in Figure 1.6.

Let's consider a simple circuit with two binary inputs that generates sum and carry outputs (there is no carry-in input). This circuit is called a half adder. Its truth table is shown in Figure 1.17. The network has two inputs, A and B, and two outputs, Sum and Carry. Thus the table has four columns, one for each input and output, and four rows, one for each of the 2² = 4 unique binary combinations of the inputs.

Figure 1.18 shows the truth table for the full adder, a circuit with two data inputs A and B, a carry-in input Cin, and two outputs, Sum and Cout. The three inputs have 2³ = 8 unique binary combinations, leading to a truth table with eight rows.
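Both tables can be generated mechanically by enumerating every input combination. The following Python fragment is only a sketch that restates the two tables described above; the names half_adder and full_adder are our own labels, not anything defined in the text.

from itertools import product

def half_adder(a, b):
    # Returns (Sum, Carry) for one-bit inputs a and b.
    return a ^ b, a & b

def full_adder(a, b, cin):
    # Returns (Sum, Cout) for one-bit inputs a, b and carry-in cin.
    total = a + b + cin
    return total % 2, total // 2

# 2^2 = 4 rows for the half adder
for a, b in product((0, 1), repeat=2):
    s, c = half_adder(a, b)
    print(f"A={a} B={b}  Sum={s} Carry={c}")

# 2^3 = 8 rows for the full adder
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    print(f"A={a} B={b} Cin={cin}  Sum={s} Cout={cout}")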

Truth tables are fine for describing functions with a modest number of inputs. But for large numbers of inputs, the truth table grows too large. An alternative representation writes the function as an expression over logic operations and inputs. We look at this next.

1.3.3 Boolean Algebra

Boolean algebra is the mathematical foundation of digital systems. We will see that an algebraic expression provides a convenient shorthand notation for the truth table of a function.

Basic Concept The operations of a Boolean algebra must adhere to certain properties, called laws or axioms. One of these axioms is that the Boolean operations are commutative: you can reverse the order in which the variables are written without changing the meaning of the Boolean expression. For example, OR is commutative: X OR Y is identical to Y OR X, where X and Y are Boolean variables.

The axioms can be used to prove more general laws about Boolean expressions. You can use them to simplify expressions in the algebra. For example, it can be shown that X AND (Y OR NOT Y) is the same as X, since Y OR NOT Y is always true. The procedures you will learn for optimizing combinational and sequential networks are based on the principles of Boolean algebra, and thus Boolean expressions are often used as input to computer-aided design tools.

Boolean Operations Most designers find it a little cumbersome to keep writing Boolean expressions with AND, OR, and NOT operations, so they have developed a shorthand for the operators. If we use X and Y as the Boolean variables, then we write the complement (inversion, negation) of X as one of X', !X, /X, or \X. The OR operation is written as X + Y, X # Y, or X | Y. X AND Y is written as X & Y, X · Y, or more simply XY. Although there are certain analogies between OR and PLUS and between AND and MULTIPLY, the logic operations are not the same as the arithmetic operations.

Complement is always applied first, followed by AND, followed by OR. We say that complement has the highest priority, followed by AND and then OR. Parentheses can be used to change the default order of evaluation. The default grouping of operations is illustrated by the following examples:
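For example, A B' + C is grouped as (A (B')) + C, and A + B C' is grouped as A + (B (C')): the complement applies only to the variable it marks, the AND terms are formed next, and the OR is applied last. Parentheses must be written explicitly to force a different grouping, such as A (B' + C) or (A + B) C'.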

Equivalence of Boolean Expressions and Truth Tables A Boolean expression can be readily derived from a truth table and vice versa. In fact, Boolean expressions and truth tables convey exactly the same information.

Let's consider the structure of a truth table, with one column for each input variable and a column for the expression's output. Each row in which the output column is a 1 contributes a single ANDed term of the input variables to the Boolean expression. This is called a product term, because of the analogy between AND and MULTIPLY. Looking at the row, we see that if the column associated with variable X has a 0 in it, the complemented form X' is part of the ANDed term. Otherwise X is part of the term. Variables in their asserted (X) or complemented (X') forms are called literals.

There is one product term for each row with a 1 in the output column. All such product terms are ORed together to complete the expression. A Boolean expression written in this form is called a sum of products.
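As a sketch of this recipe in code, the following Python function assembles a sum-of-products expression from any single-output truth table. The list-of-rows encoding of the table is just one convenient choice for this sketch, not a standard format.

def sum_of_products(var_names, rows):
    # rows: list of (inputs, output) pairs, where inputs is a tuple of 0/1 values.
    # Each row whose output is 1 contributes one product term; a 0 input
    # appears as a complemented literal (written here with a trailing ').
    terms = []
    for inputs, output in rows:
        if output == 1:
            literals = [name if bit == 1 else name + "'"
                        for name, bit in zip(var_names, inputs)]
            terms.append(" ".join(literals))
    return " + ".join(terms) if terms else "0"

# The Sum column of the half adder truth table (Figure 1.17):
sum_rows = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(sum_of_products(["A", "B"], sum_rows))   # A' B + A B'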

Example Deriving Expressions from Truth Tables Let's go back to Figure 1.17 and Figure 1.18, the truth tables for the half adder and the full adder, respectively. Each output column leads to a new Boolean expression, but defined over the same variables associated with the input columns. The Boolean expressions for the half adder's Sum and Carry outputs can be written as:

Sum = A B' + A' B
Carry = A B

The half adder Sum is 1 in two rows: A = 1, B = 0 and A = 0, B = 1. The half adder Carry is 1 in only one row: A = 1, B = 1.

The truth table for the full adder is considerably more complex. Both Sum and Cout have four rows with 1's in the output columns. The two functions are written as:

Sum = A' B' Cin + A' B Cin' + A B' Cin' + A B Cin
Cout = A' B Cin + A B' Cin + A B Cin' + A B Cin

As we shall see in Chapter 2, we can exploit Boolean algebra to simplify Boolean expressions. By applying some of the simplification theorems of Boolean algebra, we can reduce the expression for the full adder's Cout output to the following:

Cout = A Cin + B Cin + A B

Such simplified forms reduce the number of gates, transistors, wires, and so on, needed to implement the expression. Simplification is an extremely valuable tool.

You can use a truth table to verify that the simplified expression just obtained is equivalent to the original. Start with a truth table with filled-in input columns but empty output columns. Then find all rows of the truth table for which the product terms are true, and enter a 1 in the associated output column. For example, the term A Cin is true wherever A = 1 and Cin = 1, independent of the value of B. A Cin spans two truth table rows: A = 1, B = 0, Cin = 1 and A = 1, B = 1, Cin = 1.

Figure 1.19 shows the filled-in truth table and indicates the rows spanned by each of the terms. Since the resulting truth table is the same as that of the original expression (see Figure 1.18), the two expressions must have the same behavior.
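This row-by-row comparison is easy to mechanize. The short Python check below is a sketch that evaluates the original four-term Cout expression and the simplified three-term form over all 2³ = 8 input combinations; it simply automates the Figure 1.19 exercise.

from itertools import product

def cout_original(a, b, cin):
    # A' B Cin + A B' Cin + A B Cin' + A B Cin
    return (((not a) and b and cin) or (a and (not b) and cin)
            or (a and b and (not cin)) or (a and b and cin))

def cout_simplified(a, b, cin):
    # A Cin + B Cin + A B
    return (a and cin) or (b and cin) or (a and b)

for a, b, cin in product((0, 1), repeat=3):
    assert bool(cout_original(a, b, cin)) == bool(cout_simplified(a, b, cin))
print("The two Cout expressions agree on all eight rows.")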

1.3.4 Gates

For logic designers, the most widely used primitive building block is the logic gate. We will see that every Boolean expression has an equivalent gate description, and vice versa.

Correspondence Between Boolean Operations and Logic Gates Each of the logic operators has a corresponding logic gate. We have already met the inverter (NOT), AND, and OR functions, whose corresponding gate-level representations are given in Figure 1.20. Logic gates are formed internally from transistor switches (see Exercise 1.12).

Correspondence Between Boolean Expressions and Gate Networks Every Boolean expression has a corresponding implementation in terms of interconnected gates. This is called a schematic. Schematics are one of the ways that we capture the structural view of a design, that is, how the design is constructed from the composition of primitive components. In this case, the composition is accomplished by physically wiring the gates together.

This leads us to some new terminology. A collection of wires that always carry the same electrical signal, because they are physically connected, is called a net. The tabulation of gate inputs and outputs and the nets to which they are connected is called the netlist.

Example Schematics for the Half and Full Adder A schematic for the half adder is shown in Figure 1.21. The inputs are labeled at the left of the figure as A and B. The outputs are at the right, labeled Sum and Carry. Sum is the OR of two ANDed terms, and this maps directly onto the three gates shown in the schematic. Similarly, Carry is described in terms of a single AND operation and thus maps onto a single AND gate. The figure also shows two different nets, labeled Net 1 and Net 2. The former corresponds to the set of wires carrying the input signal A; the latter to the wires carrying signal B.

The full adder's schematic is given in Figure 1.22(a). Each of the outputs is the OR of four 3-variable product terms. The OR operator is mapped into a four-input OR gate, while each product term becomes a three-input AND gate. Each connection in the schematic represents a physical wire among the components. Therefore, there can be a real implementation advantage, in terms of the resources and the time it takes to implement the network, in reducing the number of gates and using gates with fewer inputs.

Figure 1.22(b) shows an alternative implementation of the Cout function using fewer gates and wires. The design procedures in the next chapter will help you find the simplest, easiest-to-implement gate-level representation of a Boolean function.

Schematics Rules of Composition The fan-in of a logic gate is its number of inputs. The schematics examples in Figures 1.21 and 1.22 show gates with fan-ins of 1, 2, 3, and 4. The fan-out of a gate is the number of inputs to which the gate's output is connected. The schematics examples have many gates with fan-outs of 1, that is, gates whose output connects to exactly one input of another gate. However, some signals have much greater fan-out. For example, signal A in the full adder schematic connects to six different gate inputs (one NOT and five AND gates).

The schematic representation places no limits on fan-in and fan-out. Nevertheless, fan-in and fan-out are restricted by the underlying technology from which the logic gates are constructed. It is difficult to find OR gates with more than 8 inputs, or technologies that allow a gate to fan out to more than 10 or 20 other gates. A particular technology will impose a set of rules of composition on fan-in and fan-out to ensure proper electrical behavior of gates. In Chapter 3 you will learn how to compute the maximum allowable fan-out of a gate.
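As a small illustration of such a rule check, fan-out limits are easy to verify once a design has been captured as a netlist. The netlist encoding and the gate names below are made up for this Python sketch (the first entry mirrors the six gate inputs driven by signal A in the full adder schematic); they are not a standard file format.

# Hypothetical netlist: each net maps to the list of gate inputs it drives.
netlist = {
    "A":    ["inv1.in", "and1.in1", "and2.in1", "and3.in1", "and4.in1", "and5.in1"],
    "B":    ["inv2.in", "and1.in2", "and6.in1"],
    "net1": ["or1.in1"],
}

MAX_FANOUT = 10   # an assumed technology limit, for illustration only

for net, sinks in netlist.items():
    fanout = len(sinks)
    status = "ok" if fanout <= MAX_FANOUT else "exceeds fan-out limit"
    print(f"{net}: fan-out {fanout} ({status})")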

1.3.5 Waveforms

Waveforms describe the time-varying behavior of hardware systems, while switches, Boolean expressions, and gates tell us only about the static or "steady-state" system behavior. We examine what waveforms can tell us in this subsection.

Static Versus Dynamic Behavior The representations we have examined so far are static in nature. They give us no intuition about how a circuit behaves over time. For the beginning digital designer, such as yourself, understanding the dynamic behavior of digital systems is one of the hardest skills to develop. Most problems in digital implementations are related to timing.

Gates require a certain propagation time to recognize a change in their inputs and to produce a new output. While settling into this long-term state, a digital system's outputs may look quite different from their final, so-called steady-state values. For example, using the truth table representation in Figure 1.17, if we knew that the inputs to the half adder circuit were 1 and 0, we would expect the Sum output to be 1 and the Carry output to be 0; the truth table tells us nothing, however, about how the outputs behave in the interval just after the inputs change.

Waveform Representation of Dynamic Behavior The waveform representation provides a way to capture the dynamic behavior of a circuit. It is similar to an oscilloscope trace: the X axis displays time, and the Y axis shows the logic values (0 or 1) of one or more signal traces.

Figure 1.23 shows a waveform produced for the half adder circuit. There is a trace for each of the inputs, A and B, and the outputs, Sum and Carry, arranged along the time axis. Each time division represents 10 abstract time units, and we have assumed that all gates experience a 10-time-unit delay.

From the figure, you can see that the inputs step through four possible input combinations:
Time        Inputs
0-50        A is 0, B is 0
51-100      A is 0, B is 1
101-150     A is 1, B is 0
151-200     A is 1, B is 1

Let's look closely at the Sum and Carry waveforms. You can see that, in general, they follow the pattern expected from the truth table, but are offset in time. The Sum output steps through the values 0, 1, 1, and 0. Note that something strange happens during the second transition; we will examine this later when we discuss glitches. The Carry stays 0 until the last set of changes to the inputs, at which point it changes to 1. Let's look at an example signal trace through the half adder in more detail.

Example Tracing Propagation Delays in the Half Adder Looking in particular at time step 51, the B input changes from 0 to 1. Twenty time units later, the Sum output changes from 0 to 1. Why does this happen?

Figure 1.24 shows a logic circuit instrumented with probes on every one of the circuit's nets. The probes display the current logic value on a particular net. The nets with changing values are highlighted in the figure.

Let's examine the sequence of events and the detailed propagation of signals more closely, using Figure 1.24. Figure 1.24(a) shows the initial input conditions (A and B are both 0) and the values of all internal nets.

The situation immediately after B changes from 0 to 1 is shown in Figure 1.24(b). The set of wires attached to the input switch for B now carries a 1. At this point, the topmost AND gate has both of its inputs set to 1. One gate delay (10 time units) later, the output of this gate will change from 0 to 1. This is shown in Figure 1.24(c).

Now one of the inputs to the OR gate is 1. After another 10-time-unit gate delay, the Sum output changes from 0 to 1 (see Figure 1.24(d)). Since the propagated signal passes through two gates, the logic incurs two gate delays, or a delay of 20 time units, as shown in the waveform of Figure 1.23. The propagation path that determines the delay through the circuit is called the critical path.

Glitches Propagation delays are highly pattern sensitive. Their duration depends on the specific input combinations and how these cause different propagation paths to become critical within the network. It takes 30 time units (three gate delays) for Sum to change after B changes at time step 151. Did you notice the strange behavior in the Sum waveform between time steps 120 and 130? The input combination is 1 and 0, but for a short time Sum actually generates a zero output! This behavior is called a glitch: a short period during which an output passes through a logic value that is not its steady-state value. Glitches can lead to anomalous behavior if you are not careful. You will learn how to design circuits to avoid them in Chapter 3.
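The delay and glitch behavior can be reproduced with a very simple discrete-time model: step time in 10-unit ticks and compute every gate output from the values its inputs held one tick earlier (the uniform 10-time-unit gate delay assumed throughout this section). The Python sketch below does this for the half adder; the internal net names s1 through s4 and the exact tick boundaries (inputs change at 50, 100, 150 rather than 51, 101, 151) are artifacts of the sketch.

# Inputs step through 00, 01, 10, 11 every 50 time units.
PATTERNS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def inputs_at(t):
    return PATTERNS[min(t // 50, 3)]

# Internal nets: s1 = NOT A, s2 = NOT B, s3 = B AND s1, s4 = A AND s2,
# Sum = s3 OR s4, Carry = A AND B.  Start in the steady state for A = B = 0.
state = {"s1": 1, "s2": 1, "s3": 0, "s4": 0, "Sum": 0, "Carry": 0}

for t in range(0, 201, 10):
    a, b = inputs_at(t)
    print(f"t={t:3d}  A={a} B={b}  Sum={state['Sum']} Carry={state['Carry']}")
    # Every gate output one tick from now depends on its inputs now.
    state = {
        "s1": 1 - a,
        "s2": 1 - b,
        "s3": b & state["s1"],
        "s4": a & state["s2"],
        "Sum": state["s3"] | state["s4"],
        "Carry": a & b,
    }
# The trace shows Sum rising 20 units after B rises at t = 50, dipping to 0
# briefly around t = 120 (the glitch), and taking 30 units to fall after the
# input change at t = 150.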

1.3.6 Blocks

The block diagram representation helps us understand how major pieces of the hardware design are connected together. We examine it in this subsection.

Basic Concept A block represents a design component that performs some well-defined, reasonably high-level function. For example, it would not make much sense to associate a block with a single logic gate, but several interconnected logic gates could be associated with a block. The full adder is a good example of a block-sized function.

Each block clearly identifies its inputs and outputs. A block can be directly realized by some number of gates, as in Figure 1.22, or it can be constructed from a composition of gates and more primitive blocks.

Example Full Adder Block Diagram Figure 1.25 shows how two instances of the same building block, the half adder, can be used to implement a full adder. As you can see from the output waveforms of Figure 1.26, the composed circuit behaves like the full adder constructed directly from logic gates.

Compare the waveforms to the truth table of Figure 1.18. For the most part, the truth table is mirrored in the waveforms, with slight timing variations. The Sum waveform also includes some glitches, highlighted in the figure.

The block diagram representation reduces the complexity of a design to more manageable units. Rather than dealing with the full complexity of the full adder schematic and its 13 gates, you need only understand the much simpler half adder (just six gates) and its simple composition to implement the full adder.
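The composition is easy to state in the same sketch style: the first half adder adds A and B, the second adds that partial sum to Cin, and a single OR gate merges the two carries (accounting for the 13 gates: two 6-gate half adders plus one OR). The Python below is only a behavioral sketch of this composition, not a timing-accurate model; it checks that the composed block matches the full adder's truth table.

from itertools import product

def half_adder(a, b):
    return a ^ b, a & b                    # (Sum, Carry)

def full_adder_from_half_adders(a, b, cin):
    s1, c1 = half_adder(a, b)              # first half adder: A plus B
    total, c2 = half_adder(s1, cin)        # second half adder: partial sum plus Cin
    return total, c1 | c2                  # Cout is the OR of the two carries

for a, b, cin in product((0, 1), repeat=3):
    expected = ((a + b + cin) % 2, (a + b + cin) // 2)
    assert full_adder_from_half_adders(a, b, cin) == expected
print("The composed block matches the full adder truth table of Figure 1.18.")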

Waveforms capture the timing relationships between input and output signals. But they are not always the best way to understand the abstract function being implemented. This is where behavioral representations play an important role. We examine them next.

1.3.7 Behaviors

The behavioral description focuses on how the block behaves. Waveforms are a primitive form of behavioral description. Other, more sophisticated behavioral representations depend on specifications written in hardware description languages.

Basic Concept The behavioral description focuses on block behavior. In a way, the waveform representation provides a primitive behavioral description, but the cause-and-effect relationships between inputs and outputs are not obvious from the waveforms. Waveforms are sometimes augmented with timing relationships to place specific performance constraints on a block. For example, a timing constraint might be "the Sum output must change no later than 30 time units after an input changes," and this may have a significant influence on the choice of the block's implementation.

Hardware Description Languages The most common method of specifying behavioral descriptions is through a hardware description language. These languages look much like conventional high-level computer programming languages. However, conventional programming languages force you to think of executing a single statement of the program at a time. This is unsuitable for hardware, which is inherently parallel: all gates are constantly sampling their inputs and computing new outputs. The high degree of parallelism is one of the things that makes hardware design difficult.

By way of example, we will introduce two different hardware description languages, ABEL and VHDL. ABEL is commonly used as a specification language for programmable logic components. It is a simple language, suitable for digital systems of moderate complexity. VHDL is more complete, and it is capable of describing systems of considerable complexity.

ABEL ABEL can specify the relationship between circuit inputs and outputs in several alternative ways, including truth tables and equations. The basic language unit is the module, which corresponds to a block in our terminology. Pins on integrated circuit packages are connections to the outside world. Part of the ABEL specification associates input and output variables with specific pins on the programmable logic component chosen to implement the function.

The truth table form of the half adder looks like this:

MODULE half_adder;
A, B, Sum, Carry PIN 1, 2, 3, 4;
TRUTH_TABLE ([A, B] -> [Sum, Carry])
   [0, 0] -> [0, 0];
   [0, 1] -> [1, 0];
   [1, 0] -> [1, 0];
   [1, 1] -> [0, 1];
END half_adder;
The specification is nothing more than a tabulation of conditions on inputs A and B and the corresponding output values that should be associated with Sum and Carry.

Another way to describe the relationship between inputs and outputs is through Boolean equations. The module definition would be written as follows:

MODULE half_adder;
A, B, Sum, Carry PIN 1, 2, 3, 4;
EQUATIONS
   Sum = (A & !B) # (!A & B);
   Carry = A & B;
END half_adder;
Here, & is the AND operator, ! is negation, and # is OR.

VHDL VHDL is a widely used language for hardware description, based on the programming language Ada. It is used as the input description for a number of commercially available computer-aided design systems. Although we are not concerned with the details of the VHDL syntax, it is worthwhile to look at a sample description of the half adder, just to see the kinds of capabilities VHDL provides. A VHDL "model" for the half adder follows:

-- ***** inverter gate model *****
-- external ports
ENTITY inverter_gate IS
   PORT (A: IN BIT;  Z: OUT BIT);
END inverter_gate;

-- internal behavior
ARCHITECTURE behavioral OF inverter_gate IS
BEGIN
   Z <= NOT A AFTER 10 ns;
END behavioral;

-- ***** and gate model *****
-- external ports
ENTITY and_gate IS
   PORT (A, B: IN BIT;  Z: OUT BIT);
END and_gate;

-- internal behavior
ARCHITECTURE behavioral OF and_gate IS
BEGIN
   Z <= A AND B AFTER 10 ns;
END behavioral;

-- ***** or gate model *****
-- external ports
ENTITY or_gate IS
   PORT (A, B: IN BIT;  Z: OUT BIT);
END or_gate;

-- internal behavior
ARCHITECTURE behavioral OF or_gate IS
BEGIN
   Z <= A OR B AFTER 10 ns;
END behavioral;

-- ***** half adder model *****
-- external ports
ENTITY half_adder IS
   PORT (A, B: IN BIT;  Sum, Carry: OUT BIT);
END half_adder;

-- internal structure
ARCHITECTURE structural OF half_adder IS
   -- component types to use
   COMPONENT inverter_gate
      PORT (A: IN BIT;  Z: OUT BIT);
   END COMPONENT;
   COMPONENT and_gate
      PORT (A, B: IN BIT;  Z: OUT BIT);
   END COMPONENT;
   COMPONENT or_gate
      PORT (A, B: IN BIT;  Z: OUT BIT);
   END COMPONENT;
   -- internal signal wires
   SIGNAL s1, s2, s3, s4: BIT;
BEGIN
   -- one line for each gate, describing its type and its connections
   i1: inverter_gate PORT MAP (A, s1);
   i2: inverter_gate PORT MAP (B, s2);
   a1: and_gate PORT MAP (B, s1, s3);
   a2: and_gate PORT MAP (A, s2, s4);
   a3: and_gate PORT MAP (A, B, Carry);
   o1: or_gate PORT MAP (s3, s4, Sum);
END structural;
The entity declarations declare the block diagram characteristics of a component, that is, the way it looks to the outside world. The architecture declarations define the internal operation. Usually these are written in a form that looks much like programming statements. The inverter, AND, and OR behaviors are so simple that they do not do justice to VHDL's capability of expressing hardware behavior. This definition of the half adder is entirely structural: it is merely a textual form of specifying the schematic, shown in Figure 1.27.
