Electronic Design


Computers are thought fearsome because they are based on electrical circuits. Electricity can be dangerous, as anyone struck by lightning will attest. But inside the computer, the danger is low. At its worst, the voltage measures 12 volts, which makes the inside of a computer as safe as playing with an electric train. Nothing that's readily accessible inside the computer will shock you, straighten your hair, or shorten your life.

Personal computers could not exist, at least in their current, wildly successful form, were it not for two concepts: binary logic and digital circuitry. The binary approach reduces data to its most minimalist form, essentially an information quantum. A binary data bit simply indicates whether something is or is not. Binary logic provides rules for manipulating those bits to allow them to represent and act like real-world information we care about, such as numbers, names, and images. The binary approach involves both digitization (using binary data to represent information) and Boolean algebra (the rules for carrying out the manipulations of binary data).
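
To make the idea concrete, here is a minimal sketch in Python (purely illustrative; nothing in a computer actually runs program code at this level) of the three basic Boolean operations and of digitizing a number as a group of bits:

    # Illustrative Python only: the three basic Boolean operations.
    def AND(a, b):      # 1 only when both inputs are 1
        return a & b

    def OR(a, b):       # 1 when either input is 1
        return a | b

    def NOT(a):         # inversion: 1 becomes 0, 0 becomes 1
        return 1 - a

    # Digitization: a group of bits can stand for a number...
    print(bin(13))                       # 0b1101
    # ...and Boolean algebra supplies the rules for manipulating the bits.
    print(AND(1, 0), OR(1, 0), NOT(1))   # 0 1 0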

Electrical circuits mimic this logic. Binary logic involves two states, which a computer's digital logic circuitry represents with two voltage levels.

The logic isn't only what the computer works with; it's also what controls the computer. The same voltages used to represent values in binary logic act as signals that control the circuits. That means the signals flowing through the computer can control the computer, and the computer can control the signals. In other words, the computer can control itself. This design gives the computer its power.

Digital Logic Circuitry

The essence of the digital logic that underlies the operation of the microprocessor and motherboard is the ability to use one electrical signal to control another.

Certainly there are a myriad of ways of using one electrical signal to control another, as any student of Rube Goldberg can attest. As interesting and amusing as interspersing cats, bellows, and bowling balls in the flow of control may be, most engineers have opted for a more direct system based on time-proven electrical technologies.

In modern digital logic circuitry, the basis of this control is amplification, the process of using a small current to control a larger current (or a small voltage to control a larger voltage). The larger current (or voltage) exactly mimics the controlling current (or voltage) but is stronger, or amplified. Because every change in the large signal is exactly analogous to a change in the small signal, devices that amplify in this way are called analog. The intensity of the control signal can represent continuously variable information, for example, a sound level in stereo equipment. The electrical signal in this kind of equipment is therefore an analogy to the sound that it represents.

In the early years of the evolution of electronic technology, improving this analog amplification process was the primary goal of engineers. After all, without amplification, signals eventually deteriorated into nothingness. The advent of digital information and the earliest computers put the power of amplification to a different use.

The limiting case of amplification occurs when the control signal causes the larger signal to go from its lowest value, typically zero, to its highest value. In other words, the large signal switches off and on under control of the smaller signal. The two states of the output signal (on and off) can be used as part of a binary code that represents information. For example, the switch could be used to produce a series of seven pulses to represent the number 7. Because information can be coded as groups of such numbers (digits), electrical devices that use this switching technology are described as digital. Note that this switching directly corresponds to other, more direct control of on/off information, such as pounding on a telegraph key, a concept we'll return to in later chapters.
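
As a rough illustration (again in illustrative Python; real hardware does this with voltages, not program code), compare the telegraph-style pulse train for the number 7 with the more compact binary pattern of on/off states:

    def unary_pulses(n):
        # telegraph-style coding: n pulses (on, then off) for the number n
        return [1, 0] * n

    def binary_states(n, width=4):
        # digital coding: a fixed-width pattern of on/off states
        return [(n >> i) & 1 for i in reversed(range(width))]

    print(unary_pulses(7))    # fourteen signal states for one digit
    print(binary_states(7))   # [0, 1, 1, 1] -- the same digit in four states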

Strictly speaking, an electronic digital system works with signals called high and low, corresponding to a digital one and zero. In formal logic systems, these same values are often termed true and false. In general, a digital one or logical true corresponds to an electronic high. Sometimes, however, special digital codes reverse this relationship.

In practical electrical circuits, the high and low signals only roughly correspond to on and off. Standard digital logic systems define both the high and low signals as ranges of voltages. High is a voltage range near the maximum voltage accepted by the system, and low is a voltage range near (but not necessarily exactly at or including) zero. A wide range of undefined voltages spreads between the two, lower than the lowest edge of high but higher than the highest edge of low. The digital system ignores the voltages in the undefined range. Figure 4.1 shows how the ranges of voltages interrelate.

Figure 4.1. Significance of TTL voltage levels.


Perhaps the most widely known standard is called TTL (for transistor-transistor logic). In the TTL system, which is still common inside computer equipment, a logical low is any voltage below 0.8 volts. A logical high is any level above 2.0 volts. The range between 0.8 and 2.0 volts is undefined.
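
A minimal sketch of how a TTL input sorts voltages into these three ranges (the 0.8-volt and 2.0-volt thresholds come from the standard described above; the code itself is only an illustration):

    def ttl_level(volts):
        # TTL input thresholds: below 0.8 V is low, above 2.0 V is high
        if volts < 0.8:
            return "low (logical 0)"
        if volts > 2.0:
            return "high (logical 1)"
        return "undefined (ignored by the digital system)"

    for v in (0.2, 1.4, 3.3):
        print(v, "V ->", ttl_level(v))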

As modern computers shift to lower voltages, the top level of the logical high shifts downward, for example, from the old standard of 5.0 volts to the 3.3 volts of the latest computer equipment (and even lower voltages of new power-conserving microprocessors), but the relationship between the high and low voltages, along with the undefined range in between, remains the same. Modern systems usually retain the same low and undefined ranges; they just lop the top off the figure showing the TTL voltage levels.

Electronics

Modern electronics are filled with mysterious acronyms and even stranger-sounding names. Computers are filled with circuits built from these things, described by unfamiliar terms such as CMOS, NMOS, semiconductors, and integrated circuits. A bit of historical perspective will show you what the names mean and where (and how) the technologies they describe originated.

Over the years, electrical engineers have developed a number of ways one signal can control another. The first approach to the electrical control of electrical flow evolved from the rattling telegraph key. When a telegrapher jammed down on his key to make dots and dashes, he actually closed an electrical circuit, which sent a voltage down the telegraph line. At the far end of the connection, this signal powered an electromagnet that snapped against a piece of iron to create the dot and dash sound.

In 1835 Joseph Henry saw that the electromagnet could be adapted to operate a switch rather than just pull on a hunk of iron. The electromagnetically actuated switch allowed a feeble current to switch on a more powerful current. The resulting device, the electrical relay, was key to the development of the long-distance telegraph. When telegraph signals got too weak from traveling a great distance, a relay could revitalize them.

In operation, a relay is just a switch that's controlled by an electromagnet. Activating the electromagnet with a small current moves a set of contacts that switch the flow of a larger current. The relay doesn't care whether the control current originates in the same box as the relay or a continent away. As simple in concept as the relay is, its ability to use one signal to control another proved very powerful, powerful enough that relays served as the foundation of some of the first computers (or electrical calculators), such as Bell Labs' 1946 Model V computer. Relays are still used in modern electrical equipment.

Vacuum Tubes

The vacuum tube improved on the relay design for computers by eliminating the mechanical part of the remote-action switch. Using electronics only, a tube could switch and perform logic operations thousands of times faster than relays.

Vacuum tubes developed out of Thomas Edison's 1879 invention of the incandescent light bulb. After the public demonstration of the bulb, Edison continued to work with and improve it. Along the way, in 1883, he discovered what has come to be called the Edison Effect (which some historians credit as Edison's sole original contribution to pure scientific research). Edison noted he could make a current flow through the vacuum in the bulb from the filament to a metal plate he had introduced inside.

The Edison Effect remained a curiosity until 1904 when John Ambrose Fleming created the diode vacuum tube. Fleming found that electrons would flow from the negatively charged hot filament of the light bulb to a positively charged cold collector plate, but not in the other direction. Fleming made an electrical one-way street that could operate as a rectifier to change alternating current into direct current or as a detector that pulled modulation from carrier waves (by stripping off the carrier wave's alternating cycles).

In 1907 Lee De Forest created the Audion, now known as the triode tube. De Forest introduced an element of control to the bulb-cum-diode. He interposed a control grid between the hot filament (the cathode) and the cold plate (the anode). De Forest found that he could control the electron flow by varying the voltage he applied to the control grid.

The grid allowed the Audion to harness the power of the attraction of unlike electrical charges and the repulsion of like charges, enabling a small charge to control the flow of electrons through the vacuum inside the tube. In the Audion, as with the relay, a small voltage could control a much larger voltage. De Forest created the first electronic amplifier, the basis of all modern electronics.

The advantage of the vacuum tube over the relay in controlling signals is speed. The relay operates at mechanical rates, perhaps a few thousand operations per second. The vacuum tube can switch millions of times per second. The first recognizable computers (such as ENIAC) were built from thousands of tubes, each configured as a digital logic gate.

Semiconductors

Using tube-based electronics in computers is fraught with problems. First is the space-heater effect: Tubes have to glow like light bulbs to work, and they generate heat along the way, enough to smelt rather than process data. And, like light bulbs, tubes burn out. Large tube-based computers required daily shutdown and maintenance as well as several technicians on the payroll. ENIAC was reported to have a mean time between failures of 5.6 hours.

In addition, tube circuits are big. ENIAC filled a room, yet the house-sized computers of 1950s vintage science fiction would easily be outclassed in computing power by today's desktop machines. In the typical tube-based computer design, one logic gate required one tube that took up considerably more space than a single microprocessor with tens of millions of logic gates. Moreover, physical size isn't only a matter of housing. The bigger the computer, the longer it takes its thoughts to travel through its circuits, even at the speed of light, and the more slowly it thinks.

Making today's practical computers took another true breakthrough in electronics: the transistor, first created at Bell Laboratories in 1947 and announced in 1948 by the team of John Bardeen, Walter Brattain, and William Shockley. A tiny fleck of germanium (later, silicon) formed into three layers, the transistor was endowed with the capability to let one electrical current applied to one layer alter the flow of another, larger current between the other two layers. Unlike the vacuum tube, the transistor needed no hot electrons because the current flowed entirely through a solid material (the germanium or silicon); hence, the common name for tubeless technology, solid-state electronics.

Germanium and silicon are special materials called semiconductors (strictly speaking, metalloids rather than true metals). The term describes how these materials resist the flow of electrical currents. A true electrical conductor (such as the copper in wires) hardly resists the flow of electricity, whereas a non-conductor (or insulator, such as the plastic wrapped around the wires) almost totally prevents the flow of electricity. Semiconductors allow some, but not much, electricity to flow.

By itself, being a poor but not awful electrical conductor is as remarkable as lukewarm water. However, infusing atoms of impurities into the semiconductor's microscopic lattice structure dramatically alters the electrical characteristics of the material and makes solid-state electronics possible.

This process of adding impurities is called doping. Some impurities add extra electrons (carriers of negative charges) to the crystal. A semiconductor doped to be rich in electrons is called an N-type semiconductor. Other impurities in the lattice leave holes where electrons would ordinarily be, and these holes act as positive charge carriers. A semiconductor doped to be rich in holes is called a P-type semiconductor.

Electricity easily flows across the junction between the two materials when an N-type semiconductor on one side passes electrons to the holes in a P-type semiconductor on the other side. The empty holes willingly accept the electrons; they just fit right in. But electricity doesn't flow well in the opposite direction. If the P-type semiconductor's holes ferry electrons to the N-type material at the junction, the N-type semiconductor will refuse delivery. It already has all the electrons it needs and no place to put any more. In other words, electricity flows only in one direction through the semiconductor junction, just as it flows only one way through a vacuum-tube diode.
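
An idealized model of this one-way behavior might look like the following sketch (real junctions exhibit a forward voltage drop and a little reverse leakage that this toy version ignores):

    def junction_current(volts):
        # Idealized P-N junction: conducts when forward biased (P side
        # positive), blocks when reverse biased. The conductance figure
        # is arbitrary.
        if volts > 0:
            return volts / 10.0   # forward: current flows
        return 0.0                # reverse: delivery refused

    print(junction_current(+5))   # 0.5 -- flows one way...
    print(junction_current(-5))   # 0.0 -- ...but not the other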

The original transistor incorporated three layers with two junctions between dissimilar materials, stacked in layers as N-P-N or P-N-P. Each layer has its own name: The top is the emitter, the middle of the sandwich is the base, and the bottom is the collector. (The structure of a transistor isn't usually a three-layer cake with top and bottom, but that's effectively how it works.)

Ordinarily no electricity could pass through such a stack from emitter to collector because the two junctions in the middle are oriented in opposite directions. One blocks electrical flow one way, and the second blocks the flow in the other direction.

The neat trick that makes a transistor work is changing the voltage on the middle layer. Say you have a P-N-P transistor. Your wire dumps electrons into the holes in the P-layer, and they travel to the junction with the N-layer. The N-layer is full of electrons, so it won't let the current flow further. But if you drain off some of those electrons through the base, current can flow through the junction, and it can keep flowing through the next junction as well. It takes only a small current draining electrons from the base to permit a large current to flow from emitter to collector.
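
A toy model of the transistor acting as a current-controlled switch (the gain and supply figures here are invented for illustration; real transistor behavior is far less tidy):

    def collector_current(base_current, gain=100, supply_limit=0.1):
        # A small base current permits a much larger emitter-to-collector
        # current, up to whatever the supply can deliver (all in amperes).
        return min(base_current * gain, supply_limit)

    print(collector_current(0.0))      # 0.0  -- no base current, no flow
    print(collector_current(0.0005))   # 0.05 -- 0.5 mA controls 50 mA
    print(collector_current(0.01))     # 0.1  -- saturated: fully "on"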

The design of transistor circuits is more complicated than our simple example. Electrical flow through the transistor depends on the complex relationships between voltages on its junctions, and the electricity doesn't necessarily have to flow from emitter to collector. In fact, the junction transistor design, although essential to the first transistors, is rarely used in computer circuits today. But the junction transistor best illustrates the core principles of all solid-state electronics.

Modern computer circuits mostly rely on a kind of transistor in which the electrical current flowing through a narrow channel of semiconductor material is controlled by the voltage on a gate, a conductor separated from the channel by a thin insulating oxide layer (hence the name metal oxide semiconductor). The most common variety of these transistors is made from N-type material and results in a technology called NMOS, an acronym for N-channel Metal Oxide Semiconductor. A related technology combines both N-channel and P-channel devices and is called CMOS (Complementary Metal Oxide Semiconductor) because the N- and P-type materials are complements (opposites) of one another. These names, CMOS particularly, pop up occasionally in discussions of electronic circuits.

The typical microprocessor once was built from NMOS technology. Although NMOS designs are distinguished by their simplicity and small size (even on a microchip level), they have a severe shortcoming: They constantly use electricity whenever their gates are turned on. Because about half of the tens or hundreds of thousands of gates in a microprocessor are switched on at any given time, an NMOS chip can draw a lot of current. This current flow creates heat and wastes power, making NMOS unsuitable for miniaturized computers (which can be difficult to cool) and battery-operated equipment, such as notebook computers.

Most contemporary microprocessors use CMOS designs. CMOS is inherently more complex than NMOS because each gate requires more transistors, at least a pair per gate. But this complexity brings a benefit: When one transistor in a CMOS gate is switched on, its complementary partner is switched off, thus minimizing the current flow through the complementary pair that makes up the circuit. When a CMOS gate is idle, just maintaining its state, it requires almost no power. During a state change, the current flow is large but brief. Consequently, the faster a CMOS gate changes state, the more current flows through it and the more heat it generates. In other words, the faster a CMOS circuit operates, the hotter it becomes. This speed-induced temperature rise is one of the limits on the operating speed of many microprocessors.
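
This behavior is commonly summarized by the standard approximation for CMOS dynamic power, P = a x C x V^2 x f, where a is the fraction of gates switching, C the switched capacitance, V the supply voltage, and f the clock frequency. A back-of-the-envelope sketch with invented component values:

    def cmos_dynamic_power(activity, capacitance, volts, hertz):
        # Classic CMOS switching-power estimate: P = a * C * V^2 * f
        return activity * capacitance * volts ** 2 * hertz

    # Invented values: 10% of gates switching, 50 nF of total switched
    # capacitance, a 3.3-volt supply.
    for f in (100e6, 200e6):
        watts = cmos_dynamic_power(0.1, 50e-9, 3.3, f)
        print(f"{f / 1e6:.0f} MHz -> {watts:.1f} W")
    # 100 MHz -> 5.4 W; 200 MHz -> 10.9 W: doubling the clock
    # doubles the power, and the heat.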

CMOS technology can duplicate every logic function made with NMOS but with a substantial saving of electricity. On the other hand, CMOS chips cost somewhat more to manufacture because of the added circuit complexity.

Integrated Circuits

The transistor overcomes several of the problems with using tubes to make a computer. Transistors are smaller than tubes and give off less heat because they don't need to glow to work. But every logic gate still requires one or more transistors (as well as several other electronic components) to build. If you allocated a mere square inch to every logic gate, the number of logic gates in a personal computer microprocessor such as the Pentium 4 (about fifty million) would require a circuit board covering roughly 350,000 square feet, about eight acres.
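
The arithmetic behind that figure (a quick check; the gate count is the book's round number):

    gates = 50_000_000             # the round figure for a Pentium 4
    square_inches = gates * 1.0    # one square inch per logic gate
    square_feet = square_inches / 144
    acres = square_feet / 43_560
    print(f"{square_feet:,.0f} sq ft (~{acres:.0f} acres)")
    # 347,222 sq ft (~8 acres)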

At the very end of the 1950s, Robert N. Noyce at Fairchild Semiconductor and Jack S. Kilby of Texas Instruments independently came up with the same brilliant idea of putting multiple semiconductor devices into a single package. Transistors are fabricated on thin slices of silicon, called wafers, cut from a larger crystal. Typically, thousands of transistors are grown at the same time on the same wafer. Instead of carving the wafer into separate transistors, the engineer could link them together (integrate them) to create a complete electronic circuit all on one wafer. Kilby linked the devices with micro-wires; Noyce envisioned fabricating the interconnecting circuits between devices on the silicon itself. The resulting electronic device, for which Noyce applied for a patent on July 30, 1959, became known as the integrated circuit, or IC. Such devices now are often called chips because of their construction from a single small piece of silicon: a chip off the old crystal. Integrated circuit technology has been adapted to both analog and digital circuitry. Its grandest development, however, is the microprocessor.

Partly because of the Noyce invention, Fairchild flourished as a semiconductor manufacturer throughout the 1960s. The company was acquired by Schlumberger Ltd. in 1979, which sold it to National Semiconductor in 1987. In 1996, National spun off Fairchild as an independent manufacturer once again, and the developer of the integrated circuit continues to operate as an independent business, Fairchild Semiconductor Corporation, based in South Portland, Maine.

The IC has several advantages over circuits built from individual (or discrete) transistors, most resulting from miniaturization. Most importantly, integration reduces the amount of packaging. Instead of one metal or plastic transistor case per logic gate, multiple gates (even millions of them) can be combined into one chip package.

Because the current inside the chip need not interact with external circuits, the currents can be made arbitrarily small, enabling the circuits to be made smaller, too. In fact, today the limit on the size of elements inside an integrated circuit is mostly determined by fabrication technology; internal circuitry is as small as today's manufacturing equipment can affordably make it. The latest Intel microprocessors, which use integrated circuit technology, incorporate the equivalent of over 50 million transistors using interconnections that measure less than 0.13 micron (a micron is a millionth of a meter) across.

In the past, a hierarchy of names was given to ICs depending on the size of circuit elements. Ordinary ICs were the coarsest in construction. Large-scale integration (LSI) put between 500 and 20,000 circuit elements together; very large-scale integration (VLSI) put more than 20,000 circuit elements onto a single chip. All microprocessors use VLSI technology, although the most recent products have become so complex (Intel's Pentium 4, for example, has the equivalent of about 50 million transistors inside) that a new term has been coined for them, ultra large-scale integration (ULSI).

Moore's Law

The development of the microprocessor often is summed up by quoting Moore's Law, which sounds authoritative and predicts an exponential acceleration in computer power. At least, that's how Moore's Law is most often described when you stumble over it in books and magazines. The most common interpretation these days holds that computer power doubles every 18 months.

In truth, you'll never find the law concisely quoted anywhere. That's because it's a slippery thing. It doesn't say what most people think, and as originally formulated it doesn't even apply to microprocessors: Gordon E. Moore actually created his "law" well before the microprocessor was invented. In its original form, Moore's Law describes only how quickly the number of transistors in integrated circuits was growing.

What became known as Moore's Law was first published in the industry journal Electronics on April 19, 1965, in a paper titled "Cramming More Components into Integrated Circuits," written when Moore was director of the research and development laboratories at Fairchild Semiconductor. His observation, and what Moore's Law really says, was that the number of transistors in the most complex circuits tended to double every year. Moore's conclusion was that by 1975, ten years after he was writing, economics would force semiconductor makers to squeeze as many as 65,000 transistors onto a single silicon chip.

Although Moore's fundamental premise, that integrated circuit complexity increases exponentially, was accurate (although not entirely obvious at the time), the actual numbers given in Moore's predictions in the paper missed the mark. He was quite optimistic. For example, over the 31-year history of the microprocessor from the 4004 to the Pentium 4, the transistor count increased by a factor of 18,667 (from 2250 to 42 million). That's approximately a doubling every two years.

That's more of a difference than you might think. At Moore's original rate of a doubling every year, a microprocessor today would have over four trillion transistors inside. Even the 18-month doubling rate so often quoted would put nearly 4 billion transistors in every chip. In either case, you're talking real computer power.
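
Those figures are easy to check (the starting transistor count and the 31-year span come from the text; only the arithmetic is added here):

    start = 2_250    # transistors in the Intel 4004 (1971)
    years = 31       # from the 4004 to the Pentium 4

    for label, period in (("1 year", 1.0), ("18 months", 1.5), ("2 years", 2.0)):
        count = start * 2 ** (years / period)
        print(f"doubling every {label}: {count:,.0f} transistors")
    # 1 year    -> about 4.8 trillion
    # 18 months -> about 3.7 billion
    # 2 years   -> about 104 million; the actual 42 million works out
    #              to a doubling roughly every 26 months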

Over the years, people have gradually adapted Moore's Law to better suit the facts. In other words, Moore's Law doesn't predict how many transistors will be in future circuits. It only describes how circuit complexity has increased.

The future of the "law" is cloudier still. A constant doubling of transistor count requires ever-accelerating spending on circuit design research, which was possible during the 20 years of the personal computer boom but may suffer as the rapid growth of the industry falters. Moreover, the law (or even a linear increase in the number of transistors on a chip) ultimately bumps into fundamental limits imposed by the laws of physics. At some point the increase in complexity of integrated circuits, taken to extreme, will require circuit elements smaller than atoms or quarks, which most physicists believe is not possible.

Printed Circuits

An integrated circuit is like a gear in a complex machine. By itself it does nothing. It must be connected to the rest of the mechanism to perform its appointed task. The integrated circuit needs a means to acquire and send out the logic and electrical signals it manipulates. In other words, each integrated circuit in a computer must be logically and electrically connected to the others; essentially, that means linked by wires.

In early electrical devices, wires in fact provided the necessary link. Each wire carried one signal from one point to another, creating a technology called point-to-point wiring. Because people routinely soldered together these point-to-point connections by hand using a soldering iron, they were sometimes called hand-wired. This was a workable, if not particularly cost-effective, technology in the days of tubes, when even a simple circuit spanned a few inches of physical space. Today, point-to-point wiring is virtually inconceivable because a computer crams the equivalent of half a million tube circuits into a few square inches of space. Connecting them with old-fashioned wiring would take a careful hand and some very fine wire. The time required to cut, strip, and solder each wire in place would make building a single computer a lifetime endeavor.

Long before the introduction of the first computer, engineers found a better way of wiring together electrical devices: the printed circuit board. The term is sometimes confusingly shortened to computer board, even when the board is part of some other, non-computer device. Today, printed circuit boards are the standard from which nearly all electronic devices are made. The "board" in the name "motherboard" results from the assembly being a printed circuit board.

Fabrication

Printed circuit board technology allows all the wiring for an entire circuit assembly to be fabricated together in a quick process that can be entirely mechanized. The wires themselves are reduced to copper traces, a pattern of copper foil bonded to the substrate that makes up the support structure of the printed circuit board. In computers, this substrate is usually a green composite material called glass-epoxy, because it has a woven glass-fiber base that's filled and reinforced with an epoxy plastic. Less-critical (read "cheap") electronic devices substitute a simple brownish substrate of phenolic plastic for the glass-epoxy.

The simplest printed circuit boards start life as a sheet of thin copper foil bonded to a substrate. The copper is coated with a compound called photo-resist, a light-sensitive material. When exposed to light, the photo-resist becomes resistant to the effects of compounds, such as nitric acid, that strongly react with copper. A negative image of the desired final circuit pattern is placed over the photo-resist-covered copper and exposed to a strong light source. This process is akin to making a print of a photograph. The exposed board is then immersed in an etchant, one of those nasty compounds, which etches or eats away the copper that is not protected by the light-exposed photo-resist. The result is a pattern of copper on the substrate corresponding to the photographic original. The copper traces can then be used to connect the various electronic components that will make up the final circuit. All the wiring on a circuit board is thus fabricated in a single step.

When the electronic design on a printed circuit board is too complex to be successfully fabricated on one side of the substrate, engineers can switch to a slightly more complex technology to make two-sided boards. The traces on each side are separately exposed but etched during the same bath in etchant. In general, the circuit traces on one side of the board run parallel in one direction, and the traces on the other side run generally perpendicular. The two sides are connected together by components inserted through the board or by plated-through holes, holes drilled through the board and plated with copper (and often filled with solder) to provide an electrical connection between the sides.

To accommodate even more complex designs, engineers have developed multilayer circuit boards. These are essentially two or more thin double-sided boards tightly glued together into a single assembly. Most computer system boards use multilayer technology, both to accommodate complex designs and to improve signal characteristics. Sometimes a layer is left nearly covered with copper to shield the signals in the adjacent layers from interacting with one another. These shielding layers are typically held at ground potential and are consequently called ground planes.

One of the biggest problems with the multilayer design (besides the difficulty in fabrication) is the difficulty in repair. Abnormally flexing a multilayer board can break one of the traces hidden in the center of the board. No reasonable amount of work can repair such damage.

Pin-in-Hole Technology

Two technologies are in wide use for attaching components to the printed circuit board. The older technology is called pin-in-hole . Electric drills bore holes in the circuit board at the points where the electronic components are to attach. Machines (usually) push the leads (wires that come out of the electronic components) into and through the circuit board holes and bend them slightly so that they hold firmly in place. The components are then permanently fixed in place with solder, which forms both a physical and electrical connection. Figure 4.2 shows the installation of a pin-in-hole electronic component.

Figure 4.2. Pin-in-hole component technology.


Most mass-produced pin-in-hole boards use wave-soldering to attach pin-in-hole components. In wave-soldering, a conveyer belt slides the entire board over a pool of molten solder (a tin and lead alloy), and a wave on the solder pool extends up to the board, coating the leads and the circuit traces. When cool, the solder holds all the components firmly in place.

Workers can also push pin-in-hole components into circuit boards and solder them individually in place by hand. Although hand fabrication is time consuming and expensive, it can be effective when a manufacturer requires only a small number of boards. Automatic machinery cuts labor costs and speeds production on long runs; assembly workers typically make prototypes and small production runs or provide the sole means of assembly for tiny companies that can afford neither automatic machinery nor farming out their circuit board work.

Surface Mount Technology

The newer method of attaching components, called surface-mount technology , promises greater miniaturization and lower costs than pin-in-hole. Instead of holes to secure them, surface-mount components are glued to circuit boards using solder flux or paste, which temporarily holds them in place. After all the components are affixed to a circuit board in their proper places, the entire board assembly runs through a temperature-controlled oven, which melts the solder paste and firmly solders each component to the board. Figure 4.3 illustrates surface-mount construction.

Figure 4.3. Surface mount circuit board construction.


Surface-mount components are smaller than their pin-in-hole kin because they don't need long leads to pass through the board. Manufacturing is simpler because there's no need to drill holes in the circuit boards. And because the packages can be smaller, more components fit in a given space with surface-mount technology.

On the downside, surface mount fabrication doesn't lend itself to small production runs or prototyping. It can also be a headache for repair workers. They have to squint and peer at components that are often too small to be handled without tweezers and a lot of luck. Moreover, many surface-mount boards also incorporate some pin-in-hole components, so they still need drilling and wave-soldering.


