Defining Memory


The stuff that most people call "computer memory" functions, in a more specific sense, as your computer's primary storage. That is, the contents of the storage system are in a form that your computer's microprocessor can immediately access, ready to be used. It can be accessed using only electricity, at the speed of electricity (which can be nearly the speed of light). It is the memory your computer's microprocessor uses to hold the data and program code needed during the active execution of programs, the microprocessor's main job. For this reason, primary storage is sometimes called working memory.

The immediacy of primary memory requires that your microprocessor be able to find any given value without poring through huge blocks of data. The microprocessor must be able to access any value at random. Consequently, most people refer to the working memory in their computers as random access memory, or RAM, although RAM has a more specific definition when applied to memory technologies, as you'll see later on.

No matter the name you use for it, primary storage is in effect the short-term memory of your computer. It's easy to get at but tends to be limited in capacity, at least compared to other kinds of storage.

The alternate kind of storage is termed secondary storage. In most computers, disks and tape systems serve as the secondary storage system. They function as the machine's long-term memory. Not only does disk and tape memory maintain information that must be kept for a long time, but it also holds the bulk of the information that the computer deals with. Secondary storage may be tens, hundreds, or thousands of times larger than primary storage. Secondary storage is often termed mass storage because of its voluminous capacity: It stores a huge mass of data.

Secondary storage is one extra step away from your computer's microprocessor. Your computer must transfer the information in secondary storage into its primary storage system in order to work on it. Secondary storage also adds a complication to the hardware. Most secondary storage is electromechanical. In addition to moving electrical signals, it also involves physically moving a disk or tape to provide access to information. Because mechanical things generally move slower than electrical signals (except in science fiction), secondary storage is slower than primary storage, typically by a factor of a thousand or more.

In other words, the most important aspect of the primary storage system in your computer is access speed, although you want to have as much of it as possible. The most important aspect of secondary storage is capacity, although you want it to be as fast as possible.

Why does your computer need two kinds of memory? The role of secondary storage is obvious. It's the only place that the computer can keep things without worrying about a power glitch making its memory disappear. It's the only kind of storage that allows you to freely exchange blocks of data, such as when distributing programs.

The need for primary storage may not seem as obvious. The reason is purely speed. As a microprocessor operates, it needs a constant stream of instructions (the program that it executes as it operates) and data that the instructions tell it to manipulate. That stuff has to come from somewhere. If it were on disk, the microprocessor might have to wait for each byte to be found before it could carry out its operation. On average, that would take about nine milliseconds per instruction. Today's microprocessors can run through instructions and data about twenty million times faster than that. Looking stuff up on disk would clearly slow things down.

Electronic memory bridges the gap. Your computer, under the direction of its operating system, copies the instructions and data recorded on your disk to solid-state memory. Your microprocessor can then operate at full speed (or nearly so), millions of times faster than if it had to move bytes between disk and its registers.

Your computer never knows where its next byte is coming from. The microprocessor might need to read or write any byte in the program or the mass of data it is working with. If you're running several programs and shift between them, the need for bytes can be far-ranging indeed. To prevent the million-fold slowdown of the microprocessor, modern computer systems are designed to keep all (or most) of each program and its data in solid-state primary storage. That's why you need dozens or hundreds of megabytes of primary storage in a modern computer equipped with Windows.

Volatility

In our all-too-human memories, one characteristic separates short-term and long-term memories: The former are fleeting. If a given fact or observation doesn't make it into your long-term memory, you'll quickly forget whatever it was; for example, the name that went with the face so quickly introduced to you at a party.

A computer's primary storage is similar. The contents can be fleeting. With computers, however, technology rather than attention determines what gets remembered and what is forgotten. For computers, the reaction to an interruption in electrical supply defines the difference between short- and long-term remembering capabilities. The technical term used to describe the difference is memory volatility. Computer memory is classed either as nonvolatile or volatile.

Volatile memory is, like worldly glory, transitory. It lasts not the three score years and ten of human existence, or even the fifteen minutes of fame. It survives only as long as its source of power does. Remove power from volatile memory, and its contents evaporate in microseconds. The main memory system in nearly every computer is volatile.

Nonvolatile memory is exactly what you expect memory to be: permanent. Once you store something in nonvolatile memory, it stays there until you change it. Neither rain, nor sleet, nor dark of night, nor a power failure affects nonvolatile memory. Types of nonvolatile memory include magnetic storage (tape and disk drives) and special forms of memory chips (read-only memory and Flash memory).

Nonvolatile memory can be simulated by providing backup power to volatile memory systems (a technology commonly used in the CMOS configuration memory of most computers), but this memory remains vulnerable to the vagaries of the battery. Should the battery die or slip from its connection even momentarily, the contents of this simulated nonvolatile memory may be lost.

Given the choice, you'd of course want the memory of your computer to be nonvolatile. The problem is that nearly all memory systems based solely on electricity and electronic storage are volatile. Those all-electric memory systems that are nonvolatile are cursed with a number of drawbacks. Most are substantially slower than volatile memory (rewriting a block of Flash memory can take seconds, compared to the microseconds required by most kinds of volatile memory). The common nonvolatile memory systems also have limited lives. Flash memory typically can be rewritten only a few hundred thousand times; volatile memory might get rewritten hundreds of thousands of times in less than a second.

Measurement

The basic form of computer memory is the cell, which holds a single bit of data. The term bit is a contraction of binary digit. A bit is the smallest possible piece of information. It doesn't hold much intelligence; it only indicates whether something is or isn't (on or off), is up or down, is something (one) or nothing (zero). It's like the legal system: Everything is in black and white, and there are no shades of gray (at least when the gavel comes down).

When enough bits are taken collectively, they can code meaningful information. A pattern of bits can encode more complex information. In their most elementary form, for example, five bits used as simple tallies could store the number 5. Making the position of each bit in the code significant increases the amount of information a pattern with a given number of bits can identify. (The increase follows the exponential growth of powers of two: For n bits, 2^n unique patterns can be identified.) By storing many bit-patterns in multiple memory units, a storage system can retain any amount of information.
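
To make the pattern arithmetic concrete, the short C sketch below (an illustration of my own, not from the original text) lists every pattern a three-bit code can form and counts the patterns available for larger codes:

#include <stdio.h>

int main(void)
{
    /* List every pattern a 3-bit code can form: 000 through 111. */
    for (unsigned value = 0; value < (1u << 3); value++)
        printf("pattern %u%u%u identifies the value %u\n",
               (value >> 2) & 1, (value >> 1) & 1, value & 1, value);

    /* With n bits, 2 to the nth power distinct patterns are possible. */
    for (unsigned n = 1; n <= 8; n++)
        printf("%u bits -> %u patterns\n", n, 1u << n);
    return 0;
}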

Measuring Units

People don't remember the same way computers do. For us human beings, remembering a complex symbol can be as easy as storing a single bit. Although two choices may be enough for a machine, we prefer a multitude of selections. Our selection of symbols is as broad as our imaginations. Fortunately for typewriter-makers, however, we've reserved just a few characters as the symbol set for our language: 26 uppercase letters, a matching number of lowercase letters, ten numerals, and enough punctuation marks to keep grammar teachers preoccupied for entire careers.

Representing these characters in binary form makes computers wonderfully useful, so computer engineers tried to develop the most efficient bit-patterns for storing the diversity of symbols we finicky humans prefer. If you add together all those letters, numbers, and punctuation marks, you'll find that the lowest power of two that could code them all is two to the seventh power, or 128. Computer engineers went one better: By using an eight-bit code, yielding a capacity of 256 symbols, they found that all the odd diacritical marks of foreign languages could be represented by the same code. The usefulness of this eight-bit code has made eight bits the standard unit of computer storage, a unit called the byte.
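
The seven-bit character code the engineers settled on survives today as ASCII, and C exposes those codes directly. Here is a minimal sketch (the sample characters are my own choice):

#include <stdio.h>

int main(void)
{
    /* Each character is stored as a small number that fits in one byte. */
    const char samples[] = { 'A', 'a', '9', '?' };

    for (unsigned i = 0; i < sizeof samples; i++)
        printf("'%c' is stored as the byte value %d\n",
               samples[i], samples[i]);
    return 0;
}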

Half a byte (a four-bit storage unit) is called a nibble because, at least in the beginning of the personal computer revolution, engineers had senses of humor. Four bits can encode 16 symbols, enough for ten numerals and six operators (addition, subtraction, multiplication, division, exponents, and square roots), making the unit useful for numbers-only devices such as handheld calculators.

The generalized term for a package of bits is the digital word, which can comprise any number of bits that a computer might use as a group. In the world of Intel microprocessors, however, the term word has developed a more specific meaning: two bytes of data, or 16 bits.

In the Intel scheme, a double-word comprises two words, or 32 bits; a quad-word is four words, eight bytes, or 64 bits.
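
These Intel-style units map directly onto the fixed-width integer types of modern C, which makes the sizes easy to verify; a quick sketch:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Intel naming: word = 16 bits, double-word = 32 bits, quad-word = 64 bits. */
    printf("byte         %zu bytes\n", sizeof(uint8_t));
    printf("word         %zu bytes\n", sizeof(uint16_t));
    printf("double-word  %zu bytes\n", sizeof(uint32_t));
    printf("quad-word    %zu bytes\n", sizeof(uint64_t));
    return 0;
}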

The most recent Intel microprocessors are designed to handle data in larger gulps. To improve performance, they feature wider internal buses between their integral caches and processing circuitry. In the case of the current Intel microprocessors, this bus is 128 bits wide. Intel calls a single bus-width gulp a line of memory.

Because the designations word and double-word sometimes vary with the register width of microprocessors, the Institute of Electrical and Electronics Engineers (IEEE) developed a nomenclature system that's unambiguous for multiple-byte widths: doublet for two bytes, quadlet for four, and octlet for eight. Table 15.1 summarizes the common names and the IEEE standard designations for the sizes of primary storage units.

Table 15.1. Primary Intel Memory Storage Unit Designations
Unit          IEEE Standard Notation   Bits   Bytes
Bit           (none)                   1      0.125
Nibble        Nibble                   4      0.5
Byte          Byte                     8      1
Word          Doublet                  16     2
Double-word   Quadlet                  32     4
Quad-word     Octlet                   64     8
Line          (none)                   128    16

The Multimedia Extensions (MMX) used by all the latest computer microprocessors introduced four additional data types into computer parlance. These repackage groups of smaller data units into the 64-bit registers used by the new microprocessors. The new units are all termed packed because they fit (or pack) as many smaller units as possible into the larger registers. These new units are named after the smaller units comprising them. For example, when eight bytes are bunched together into one 64-bit block to fit an MMX microprocessor register, the data is in packed byte form. Table 15.2 lists the names of these new data types, and a short sketch of the packing arithmetic follows the table.

Table 15.2. New 64-Bit MMX Storage Designations
Name                 Basic Unit              Number of Units
Packed byte          Byte (8 bits)           8
Packed word          Word (16 bits)          4
Packed double-word   Double-word (32 bits)   2
Quad-word            64 bits                 1
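
The packed formats amount to nothing more than several small values laid side by side in one 64-bit unit. The C sketch below illustrates the idea with ordinary shifts and masks; it does not use the actual MMX instructions or intrinsics:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t bytes[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    uint64_t packed = 0;

    /* Pack eight separate bytes into one 64-bit "packed byte" value. */
    for (int i = 0; i < 8; i++)
        packed |= (uint64_t)bytes[i] << (8 * i);

    printf("packed byte value: 0x%016llx\n", (unsigned long long)packed);

    /* Unpack them again to show that each element keeps its identity. */
    for (int i = 0; i < 8; i++)
        printf("element %d = %u\n", i, (unsigned)((packed >> (8 * i)) & 0xFF));
    return 0;
}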

Today's applications demand thousands and millions of bytes of memory. The basic measuring units for memory are consequently large multiples of the byte. Although they wear Greek prefixes shared with the units of the metric system, the computer world has adopted a slightly different measuring system. The Greek prefix kilo means thousand, but computer people assign a value of 1024 to it, the closest round number in binary, 2^10 (two to the tenth power). Larger units increase by a similar factor, so a megabyte is actually 2^20 bytes and a gigabyte is 2^30 bytes. Table 15.3 summarizes the names and values of these larger measuring units, and a brief sketch of the arithmetic follows the table.

Table 15.3. Names and Abbreviations of Large Storage Units
Unit        Abbreviation   Size in Units     Size in Bytes
Kilobyte    KB or K        1024 bytes        1,024
Megabyte    MB or M        1024 kilobytes    1,048,576
Gigabyte    GB             1024 megabytes    1,073,741,824
Terabyte    TB             1024 gigabytes    1,099,511,627,776
Petabyte    PB             1024 terabytes    1,125,899,906,842,624
Exabyte     EB             1024 petabytes    1,152,921,504,606,846,976
Zettabyte   ZB             1024 exabytes     1,180,591,620,717,411,303,424
Yottabyte   YB             1024 zettabytes   1,208,925,819,614,629,174,706,176
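
Each of these units is 1024 times the one before it, so the byte counts in Table 15.3 are simply successive powers of two. A brief C sketch reproduces the first six rows (the two largest units overflow a 64-bit integer, so the sketch stops at the exabyte):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const char *names[] = { "kilobyte", "megabyte", "gigabyte",
                            "terabyte", "petabyte", "exabyte" };
    uint64_t bytes = 1;

    /* Each unit is 1024 (2^10) times larger than the previous one. */
    for (int i = 0; i < 6; i++) {
        bytes *= 1024;
        printf("%-9s = 2^%-2d = %llu bytes\n",
               names[i], 10 * (i + 1), (unsigned long long)bytes);
    }
    return 0;
}
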
Access

Memory works like an elaborate set of pigeonholes used by post office workers to sort local mail. Each piece of information to be stored is assigned a memory location identified by an address. Each address corresponds to one pigeonhole, unambiguously identifying the location of each unit of storage. The address is a label, not the storage location itself (which is actually one of those tiny electronic capacitors, latches, or fuses).

Direct Access

Because the address is most often in binary code, the number of bits available in the code determines how many such unambiguous addresses can be directly accessed in a memory system. As noted before, an eight-bit address code permits 256 distinct memory locations (2^8 = 256). A 16-bit address code can unambiguously define 65,536 locations (2^16 = 65,536). The available address codes generally correspond to the number of address lines of the microprocessor in the computer, although strictly speaking they need not.
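
A small C sketch of this pigeonhole model (sizes and addresses chosen arbitrarily for illustration): a 16-bit address code indexes an array of 65,536 byte-sized locations, and any one of them can be reached directly.

#include <stdio.h>
#include <stdint.h>

#define ADDRESS_BITS 16

int main(void)
{
    /* A 16-bit address code defines 2^16 = 65,536 pigeonholes, each one byte here. */
    static uint8_t memory[1u << ADDRESS_BITS];

    uint16_t address = 0x1234;   /* any location can be reached directly */
    memory[address] = 0x5A;      /* store a byte at that location */

    printf("location 0x%04X holds 0x%02X (one of %u locations)\n",
           (unsigned)address, (unsigned)memory[address], 1u << ADDRESS_BITS);
    return 0;
}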

The amount of data stored at each memory location depends on the basic storage unit, which varies with the design of the computer system. Generally, each location contains the same number of bits that the computer processes at one time. Although today's Pentium-class microprocessors have 32-bit registers and 64-bit data buses, the smallest unit of memory they can individually address is actually four double-words (16 bytes). Smaller memory units cannot be individually retrieved because the four least-significant address lines are absent from these microprocessors. Because the chips prefer to deal with data one line at a time, greater precision in addressing is unnecessary.

In writing to memory, where a microprocessor might need to change an individual byte but can only address a full line, the chip uses a technique termed masking. The mask preserves all the memory locations in the line that are not to be changed. Although the chip addresses the byte only by the chunk it lies within, the mask prevents overwriting the bytes of memory that do not need to change.
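
The C sketch below models the masking idea in software (it imitates the logic, not the actual bus signals): the smallest writable chunk is a 16-byte line, and a byte-enable mask selects which bytes within the line actually change.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 16

/* Write new_data into line, but only at byte positions whose bit is
 * set in byte_enable; every other byte of the line is preserved. */
static void masked_write(uint8_t line[LINE_BYTES],
                         const uint8_t new_data[LINE_BYTES],
                         uint16_t byte_enable)
{
    for (int i = 0; i < LINE_BYTES; i++)
        if (byte_enable & (1u << i))
            line[i] = new_data[i];
}

int main(void)
{
    uint8_t line[LINE_BYTES], incoming[LINE_BYTES];

    memset(line, 0xAA, sizeof line);         /* existing contents of the line */
    memset(incoming, 0x55, sizeof incoming); /* data the processor wants written */

    masked_write(line, incoming, 1u << 3);   /* change only byte 3 of the line */

    for (int i = 0; i < LINE_BYTES; i++)
        printf("byte %2d: 0x%02X\n", i, (unsigned)line[i]);
    return 0;
}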

Memory chips do not connect directly to the microprocessor's address lines. Instead, special circuits that comprise the memory controller translate the binary data sent to the memory address register into the form necessary to identify the memory location requested and retrieve the data there. The memory controller can be as simple as address-decoding logic circuitry or an elaborate application-specific integrated circuit that combines several memory-enhancing functions.

To read memory, the microprocessor activates the address lines corresponding to the address code of the wanted memory unit during one clock cycle. This action acts as a request to the memory controller to find the needed data. During the next clock cycle, the memory controller puts the bits of code contained in the desired storage unit on the microprocessor's data bus. This operation takes two cycles because the memory controller can't be sure that the address code is valid until the end of a clock cycle. Likewise, the microprocessor cannot be sure the data is valid until the end of the next clock cycle. Consequently, all memory operations take at least two clock cycles.

Writing to memory works similarly: The microprocessor first sends off the address to write to, the memory controller finds the proper pigeonhole, and then the microprocessor sends out the data to be written. Again, the minimum time required is two cycles of the microprocessor clock.
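
A toy model of that two-step sequence, written in C purely to show the counting (real bus protocols involve far more signals and states than this sketch suggests):

#include <stdio.h>
#include <stdint.h>

static uint8_t memory[65536];
static unsigned clock_cycles = 0;

/* Cycle 1: the address is placed on the bus and latched.
 * Cycle 2: the data for that address becomes valid. */
static uint8_t controller_read(uint16_t address)
{
    clock_cycles++;              /* address valid at end of this cycle */
    clock_cycles++;              /* data valid at end of this cycle */
    return memory[address];
}

static void controller_write(uint16_t address, uint8_t value)
{
    clock_cycles++;              /* address latched */
    clock_cycles++;              /* data latched and stored */
    memory[address] = value;
}

int main(void)
{
    controller_write(0x0100, 0x42);
    uint8_t value = controller_read(0x0100);
    printf("read 0x%02X; clock cycles consumed: %u\n", (unsigned)value, clock_cycles);
    return 0;
}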

Reading or writing can take substantially longer than two cycles, however, because microprocessor technology has pushed into performance territory far beyond the capabilities of today's affordable DRAM chips. Slower system memory can make the system microprocessor (and the rest of the computer) stop while memory catches up, extending the memory read/write time by one or more clock cycles.

Bank-Switched Memory

In some applications, a computer or piece of electronic gear has more memory than addresses available for reaching it. (Such address shortfalls were admittedly more common in the past than today, when microprocessors can address terabytes of memory.) For example, the telephone company long ago (back when there was but one telephone company) faced its own addressing shortage when the number of phones in use exceeded the ten million distinct seven-digit telephone numbers. The telephone company's solution was to break the nation (and the world) into separate ranges, what we know now as area codes. Each area code has the full ten million phone numbers available to it, expanding the range of available telephone numbers by a factor equal to the number of area codes.

When computers were limited to a few dozen kilobytes by the addressing range of their microprocessors, clever engineers developed their own version of the area code. They divided memory into banks, each of which individually fit into the address range of the microprocessor. Using the computer equivalent of a giant channel-selector knob, engineers enabled the computer to switch any one of the banks into the addressing range of the microprocessor while removing the rest from the chip's control. The maximum addressable memory of the system becomes the product of the addressing range of the chip and the number of banks available. Because the various banks get switched in and out of the system's limited addressing range, this address-extension technique is usually called bank-switching.
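
A minimal bank-switching sketch in C (the sizes and bank count are arbitrary illustrations): a processor that can address only 64KB at a time reaches 256KB of total memory by selecting which bank currently occupies that range.

#include <stdio.h>
#include <stdint.h>

#define BANK_SIZE  0x10000u   /* 64KB: all the processor can address at once */
#define BANK_COUNT 4          /* total memory: 4 banks x 64KB = 256KB */

static uint8_t physical_memory[BANK_COUNT][BANK_SIZE];
static unsigned current_bank = 0;    /* the "channel selector knob" */

static void select_bank(unsigned bank)  { current_bank = bank % BANK_COUNT; }

/* Every access goes through the 16-bit window of the currently selected bank. */
static uint8_t read_byte(uint16_t address)              { return physical_memory[current_bank][address]; }
static void    write_byte(uint16_t address, uint8_t v)  { physical_memory[current_bank][address] = v; }

int main(void)
{
    select_bank(0);
    write_byte(0x0000, 0x11);   /* the same 16-bit address ...          */

    select_bank(3);
    write_byte(0x0000, 0x33);   /* ... lands in different physical RAM  */

    select_bank(0);
    printf("bank 0, address 0x0000: 0x%02X\n", (unsigned)read_byte(0x0000));
    select_bank(3);
    printf("bank 3, address 0x0000: 0x%02X\n", (unsigned)read_byte(0x0000));
    return 0;
}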

The bank-switching technique was once used to extend the addressing range of personal computers, creating what was called expanded memory . Bank-switching is still sometimes used in video display systems, especially in older display modes.

The memory banks on the motherboards of most modern computers have nothing to do with bank switching. In this context, a bank of memory is any size block of memory that is arranged with its bits matching the number of data connections to your microprocessor. That is, a bank of memory for a Pentium 4 is a block of memory arranged 64 bits wide.

True bank-switching requires a special memory board that incorporates data registers that serve as the actual bank switches. In addition, programmers must specifically write their applications to use bank-switched memory. The program (or operating system that it runs under) must know how to track every switch and turn off memory to be sure that the correct banks are in use at any given moment.


