Solid-State Memory


To remember a single bit, whether alone or as part of a nibble, byte, word, or double-word, computer memory needs only to preserve a single state (that is, whether something is true or false, positive or negative, or a binary one or zero). Almost anything can suffice to remember a single state: whether a marble is in one pile or another, whether a dab of marzipan is eaten or molding on the shelf, whether an electrical charge is present or absent. The only requirement is that the memory unit have two possible states and that it maintain itself in one of them once it is put there. Should a memory element change on its own, randomly, it would be useless because it would not preserve the information it's supposed to keep.

Although the possibilities of what can be used for remembering a single state are nearly endless, how the bits are to be used makes some forms of memory more practical than others. The two states must be both readily changeable and readily recognizable by whatever mechanism is to use them. For example, a string tied around your finger will help you remember a bit state, but it would be an inconvenient way to store information for a machine. Whatever the machine, it would need a mechanical hand to tie the knot and some means of detecting its presence on your finger: a video camera, a precision radar set, or even a gas chromatography system.

True old-timers who ratcheted themselves down to personal computers from the mainframes that ran rampant in business and industry in the 1950s, '60s, and '70s sometimes use the term core when speaking of a computer's memory system. The term doesn't derive from the centrality of memory to the operation of the computer but rather from one of the first memory technologies used by ancient computers: a fabric of wires with a ferrite doughnut, called a core, woven (literally) into each intersection of the warp and woof. Although today the term core is but a memory, all current memory technologies share with it one important characteristic: Electricity is able to alter their contents. After all, today's computers think with electricity.

In digital computers, it is helpful to store a state electrically so the machine doesn't need eyes or hands to check for the string, marble, or marzipan. Possible candidates for electrical state-saving systems include those that depend on whether an electrical charge is present or whether a current will flow. Both of these techniques are used in computer memories for primary storage systems.

The analog of electricity, magnetism, can also be readily manipulated by electrical circuits and computers. Core memory, in fact, used the magnetic fields of the ferrite cores to store bits. Today, however, magnetic storage is mostly reserved for secondary storage because magnetism is one step removed from electricity. Storage devices have to convert electricity to magnetism to store bits and convert magnetic fields to electrical pulses to read them. The conversion process takes time, energy, and effort, all of which pay off for long-term storage, at which magnetism excels, but are unnecessary for the many uses inside the computer.

The vast majority of memory used in computers is based on storing electrical charges rather than magnetic fields. Because all the other signals inside a computer are normally electronic, the use of electronic memory is only natural. It can operate at electronic speed without the need to convert technologies. Chipmakers can fabricate electronic memory components exactly as they do other circuits, even on the same assembly lines. Best of all, electronic memory is cheap. In fact, it's the most affordable of all direct-access technologies.

Modern memory circuits are made from the same silicon semiconductors as other electronic circuits such as microprocessors. In other words, most primary storage in today's computers uses solid-state semiconductor technology.

Read-Write Memory

In this post-modern age, we want, we expect, everything to be recyclable. We want whatever we use to be used again rather than used once and thrown away. It's one of the most ecologically sound policies (although using less would be even better).

Computers prefer to treat their memory the same way and for a similar reason. They don't worry about physical waste. They don't worry at all. But they can run out of resources. If a computer could only use memory once, it would quickly use up all its addresses. As it has new things to store, it would slide each bit into a memory address until it ran out of places to put them. At that point, it would no longer be able to do anything new, at least until you added more memory.

In computer terms, people don't talk about using memory, nor do they use the mechanical terms of recording and playing back. Rather, they use terms of authorship. When a computer remembers something, it writes it to memory. To recall something is to read it from memory. Reusable memory that can be written to and read from whenever the computer wants or needs to is called read-write memory.

As straightforward as it sounds, the term read-write memory is almost never used. Instead, engineers have called the read-write memory of most computers RAM, the familiar abbreviation for random access memory. They use the fast random-access capability of primary storage to distinguish it from slower secondary storage, which, confusingly, is also mostly random access.

They also use RAM to distinguish read-write memory from read-only memory (which we will talk about next), even though ROM is also random access. About the best that can be said is that when someone says "RAM," what he usually means is read-write memory.

The electronic circuits that make random-access read-write memory possible take two forms: dynamic and static.

Dynamic Memory

The most common design that brings memory to life inside computers uses minute electrical charges to remember memory states. This form of memory stores electrical charges in small capacitors.

The archetypical capacitor comprises two metal plates separated by a small distance that's filled with an electrical insulator. A positive charge can be applied to one plate and, because opposite charges attract, it draws a negative charge to the other nearby plate. The insulator separating the plates prevents the charges from mingling and neutralizing each other. It's called a capacitor because it has the capacity to store a given amount of electricity, measured as its capacitance.

The capacitor can function as memory because a computer can control whether the charge is applied to or removed from one of the capacitor plates. The charge on the plates can thus store a single state and a single bit of digital information.

In a perfect world, the charges on the two plates of a capacitor would hold themselves in place forever. One of the imperfections of the real world is that no insulator is perfect. There's always some possibility that a charge will sneak through any material (although better insulators lower the likelihood, they cannot eliminate it entirely). Think of a perfect capacitor as being like a glass, holding whatever you put inside it (for example, water). A real-world capacitor inevitably has a tiny leak through which the water (or electrical charge) drains out. The leaky nature of capacitors themselves is made worse by the circuitry that charges and discharges the capacitor because it, too, allows some of the charge to leak off.

This system seems to violate the primary principle of memory: it won't reliably retain information for very long. Fortunately, this capacitor-based system can remember long enough to be useful, a few or a few dozen milliseconds, before the disappearing charges make the memory unreliable. Those few milliseconds are sufficient for practical circuits to be designed that periodically recharge the capacitor and refresh the memory.
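To make the timing concrete, here is a minimal sketch in Python that models a dynamic memory cell as a leaking capacitor. The numbers (a normalized full charge, a read threshold of half charge, and a 90-millisecond leak time constant) are illustrative assumptions, not figures from any real chip; the point is only that the charge stays readable for tens of milliseconds, which is plenty of time for a refresh circuit to top it back up.

```python
import math

# Illustrative, assumed values -- not taken from any real DRAM datasheet.
FULL_CHARGE = 1.0           # normalized charge representing a stored "1"
READ_THRESHOLD = 0.5        # below this, sensing can no longer tell 1 from 0
LEAK_TIME_CONSTANT_MS = 90  # assumed time constant of the leaky cell, in ms

def charge_after(ms_since_refresh):
    """Exponential decay of the stored charge, like water draining from a leaky glass."""
    return FULL_CHARGE * math.exp(-ms_since_refresh / LEAK_TIME_CONSTANT_MS)

# How long until the charge decays past the point of reliable reading?
retention_ms = -LEAK_TIME_CONSTANT_MS * math.log(READ_THRESHOLD / FULL_CHARGE)
print(f"Charge falls below threshold after about {retention_ms:.0f} ms")

# Refreshing well before that deadline restores the full charge, so the bit
# survives indefinitely as long as the refresh cycle keeps running.
for t in (0, 16, 32, 48, 64):
    print(f"{t:3d} ms after refresh: charge = {charge_after(t):.2f}")
```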

Refreshing memory is akin to pouring extra water into a glass from which it is leaking out. Of course, you have to be quick to pour the water while there's a little left so you know which glass needs to be refilled and which is supposed to be empty.

To ensure the integrity of their memory, computers periodically refresh memory automatically. During the refresh period, the memory is not available for normal operation. Accessing memory also refreshes the memory cell. Depending on how a chip-maker has designed its products, accessing a single cell also may refresh the entire row or column containing the accessed memory cell.
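A rough bit of arithmetic, using assumed round numbers rather than the timing of any particular part, shows how a refresh schedule works out: a controller that must refresh every row within the retention window simply divides that window by the number of rows.

```python
# Assumed, illustrative figures: a chip with 8192 rows that must all be
# refreshed within a 64 ms retention window.
ROWS = 8192
RETENTION_MS = 64.0

# Refresh one row at a time, evenly spread across the retention window.
per_row_interval_us = RETENTION_MS * 1000.0 / ROWS
print(f"Refresh one row every {per_row_interval_us:.1f} microseconds")  # about 7.8 us

# During each of those brief refresh slots the memory is busy and unavailable
# for normal reads and writes -- the small housekeeping cost of dynamic memory.
```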

Because of the changing nature of this form of capacitor-based memory and its need to be actively maintained by refreshing, it is termed dynamic memory. Integrated circuits that provide this kind of memory are termed dynamic RAM (DRAM) chips.

In personal computer memories, special semiconductor circuits that act like capacitors are used instead of actual capacitors with metal plates. A large number of these circuits are combined together to make a dynamic memory integrated circuit, or chip. As with true capacitors, however, dynamic memory of this type must be periodically refreshed.

Static Memory

Whereas dynamic memory tries to trap evanescent electricity and hold it in place, static memory allows the current flow to continue on its way. It alters the path taken by the power, using one of two possible courses of travel to mark the state being remembered. Static memory operates as a switch that potentially allows or halts the flow of electricity.

A simple mechanical switch will, in fact, suffice as a form of static memory. It, alas, has the handicap that it must be manually toggled from one position to another by a human or robotic hand.

A switch that can itself be controlled by electricity is called a relay, and this technology was one of the first used for computer memory. The typical relay circuit provided a latch. Applying a voltage to the relay energizes it, causing it to snap from not permitting electricity to flow to permitting it. Part of the electrical flow could be used to keep the relay itself energized, which would, in turn, maintain the electrical flow. Like a door latch, this kind of relay circuit stays locked until some force or signal causes it to change, thus opening the door or the circuit.

Transistors, which can behave as switches, can also be wired to act as latches. In electronics, a circuit that acts as a latch is sometimes called a flip-flop because its state (which stores a bit of data) switches like a political candidate who flip-flops between supporting and opposing views on sensitive topics. A large number of these transistor flip-flop circuits, when miniaturized and properly arranged, together make a static memory chip. Static RAM is often shortened to SRAM by computer professionals. Note that the principal operational difference between static and dynamic memory is that static RAM does not need to be periodically refreshed.
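The cross-coupled feedback that lets a latch hold its state can be sketched in a few lines of Python. This toy model of a NOR-based set-reset latch (one common latch arrangement, used here purely as an illustration of the principle) shows that once the bit is written, it holds its value with no refreshing at all:

```python
def nor(a, b):
    """A NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

class SRLatch:
    """Toy model of a cross-coupled NOR set-reset latch."""
    def __init__(self):
        self.q = 0       # the stored bit
        self.q_bar = 1   # its complement

    def update(self, set_input, reset_input):
        # Iterate the two cross-coupled gates until the feedback loop settles.
        for _ in range(4):
            self.q = nor(reset_input, self.q_bar)
            self.q_bar = nor(set_input, self.q)
        return self.q

latch = SRLatch()
latch.update(set_input=1, reset_input=0)   # write a 1
print(latch.update(0, 0))                  # inputs released: still remembers 1
latch.update(set_input=0, reset_input=1)   # write a 0
print(latch.update(0, 0))                  # still remembers 0
```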

Read-Only Memory

Not all memory must be endowed with the ability to be changed. Just as there are many memories you would like to retain (your first love, the names of all the constellations in the zodiac, the answers to the chemistry exam), a computer is better off when it can remember some particularly important things without regard to the vagaries of the power line. Perhaps the most important of these more permanent rememberings is the program code that tells a microprocessor that it's actually part of a computer and how it should carry out its duties.

You can render a simple memory system, such as a light switch, unchangeable by carefully applying a hammer. With enough assurance and impact, you could guarantee that the system would never forget. In the world of solid-state electronics, the principle is the same, but the programming instrument is somewhat different. All that you need is switches that don't switch, or, more accurately, that switch once and jam. This permanent kind of memory is so valuable in computers that a whole family of devices called read-only memory (ROM) chips has been developed to implement it. These devices are called read-only because the computer that they are installed in cannot store new code in them. Only what is already there can be read from the memory.
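The "switch once and jam" idea can be sketched as follows, purely as an illustration rather than a description of how any particular ROM part is built: each bit behaves like a fuse that can be blown exactly once at programming time and only read thereafter.

```python
class FuseROM:
    """Toy model of one-time-programmable memory: every bit is a fuse that
    starts out as 1 and can be blown to 0, but never restored."""
    def __init__(self, size):
        self.bits = [1] * size          # unprogrammed fuses all read as 1

    def blow(self, address):
        """Programming step, done once with special equipment: jam the switch."""
        self.bits[address] = 0          # there is no operation to set it back to 1

    def read(self, address):
        """Normal operation: reading never changes the contents."""
        return self.bits[address]

rom = FuseROM(8)
rom.blow(3)                              # program one bit
print([rom.read(i) for i in range(8)])   # [1, 1, 1, 0, 1, 1, 1, 1]
```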

Just as RAM isn't really RAM, most ROM isn't really ROM. The only true ROM is called mask ROM because its storage contents get permanently set in place when the chip is fabricated. When the circuits of the chip are laid out using a mask, they already reflect the information stored in the chip.

Most kinds of ROM chips can have their contents altered. They differ from RAM chips in that they cannot be written to in the normal operation of the computer. That is, nothing the computer normally does can change the contents of these almost-ROM chips. Their contents can be changed only using special equipment or special programming routines inside the computer.


