By now you have probably figured out that the processor (the CPU) is the physical heart and soul of your computer. Everything else in the system exists to support the CPU by supplying it with power, commands, or data, or by carrying out its instructions. This chapter takes a closer look at what happens inside the CPU and explains how to read and understand the published specifications for a CPU chip.
But before looking at today's CPUs, consider how the CPUs used in personal computers have evolved.
Before the days of integrated circuits, the earliest computers used electric relays and vacuum tubes in their processors. They were huge, expensive machines that cost a fortune to operate. The ENIAC, the ancestor of all modern computers, was built at the University of Pennsylvania in 1945. It weighed around 30 tons, and its 17,468 tubes consumed 150 kW of power. Between replacing burned-out tubes, the people who operated it (they were known as computers) had to set thousands of switches and plug in hundreds of cables to run a single program.
By 1952, computers were somewhat smaller and more efficient, but by today's standards they were still big and complicated, and they still used tubes. In one early demonstration, CBS News borrowed a pair of UNIVACs from Remington Rand to predict the outcome of that year's presidential election. They set up one in their studio on election night as a visual prop (with plenty of flashing lights and spinning tapes), while a second machine in Philadelphia actually analyzed the results. An hour after the polls closed in the east, UNIVAC predicted that General Eisenhower would win the election by a landslide, but the human analysts were expecting a closer race, so they didn't report the computer's estimates. By midnight, it was obvious that UNIVAC had it right. "The trouble with machines," reporter Edward R. Murrow told his TV audience, "is people."
At the same time that mathematicians at several universities and corporate laboratories were building and using those early computers, another group of scientists at Bell Telephone Laboratories invented the transistor, which would eventually make vacuum tubes obsolete because transistors could perform the same work in less space and at lower cost. Transistor circuits were used in a computer for the first time in 1949.
Bell Labs started to license transistor technology to outside users in 1952, and transistors began to appear in hearing aids and portable radios within the next couple of years. By 1958, both Remington Rand and Philco had introduced all-transistor computers, which were much smaller, faster, and less expensive than the older tube monsters.
The new transistor technology led quickly to the next step in creating even smaller electronic circuits. G. W. A. Dummer, an English engineer, wrote in 1952, "With the advent of the transistor and the work in semiconductors generally, it seems now possible to envisage electronic equipment in a solid block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying, and amplifying materials, the electrical functions being connected directly by cutting out areas of the various layers."
It took another seven years to move from Dummer's vision to the first true integrated circuit (known as a chip). By 1959, Jack Kilby had built an integrated circuit at Texas Instruments, and Robert Noyce at Fairchild Semiconductor had developed a design for integrated circuits that could be mass-produced. In 1968, Noyce and his partner Gordon Moore established NM Electronics, which soon became Intel.
One of Intel's customers was Busicom, a Japanese calculator company that wanted to build a programmable electronic calculator. Rather than building a new chip that was limited to Busicom's specific requirements, Ted Hoff, the Intel engineer on the project, designed a more flexible microprocessor chip that could be used for many purposes, depending on its programming. That first microprocessor, the Intel 4004, became available in 1971.
By today's standards, the 4004 was pretty primitive. It contained 2,300 transistors, processed data just 4 bits at a time, and performed about 60,000 operations per second. But combined with three companion chips (a chipset) that provided memory support and I/O control, it was a fully functional, if limited, computer that fit on a single tiny circuit board.
Since that time, Intel has designed and produced a progression of increasingly complex and powerful CPU chips, as shown in Table 6.1. When IBM chose the Intel 8088 for its first PC, that design became the foundation for most of the personal computers that followed. Intel's competitors have designed their own CPU families, but most of those chips are compatible with operating systems written for Intel products. The major exception was Apple, which used Motorola (and later PowerPC) processors until 2006, when it, too, adopted Intel chips.
Table 6.1: Intel CPU chips, 1971–2004

| Name | Date Released | Number of Transistors | Maximum Speed |
|---|---|---|---|
| 4004 | 1971 | 2,300 | 108 kHz |
| 8008 | 1972 | 3,500 | 200 kHz |
| 8080 | 1974 | 6,000 | 2 MHz |
| 8086 | 1978 | 29,000 | 5 MHz |
| 8088 | 1979 | 29,000 | 8 MHz |
| 80286 | 1982 | 134,000 | 12 MHz |
| Intel386 | 1985 | 275,000 | 16 MHz |
| Intel 80386 | 1987 | 1.16 million | 33 MHz |
| Intel486 DX | 1989 | 1.2 million | 50 MHz |
| Intel Pentium | 1993 | 4.5 million | 233 MHz |
| Intel Pentium Pro | 1995 | 5.5 million | 200 MHz |
| Intel Pentium II | 1997 | 7.5 million | 450 MHz |
| Intel Pentium III | 1999 | 28 million | 1 GHz |
| Intel P4 Extreme | 2000 | 178 million | 3.8 GHz |
| Pentium Xeon MP | 2002 | 286 million+ | 4 GHz+ |
| Intel Itanium 2 (9 MB cache) | 2004 | 592 million | 1.6 GHz |
Intel's major competitor is AMD (Advanced Micro Devices), whose line of CPUs has evolved at much the same pace as Intel's.
In 1965, Gordon Moore predicted that the maximum number of transistors on an integrated circuit would double every two years or less. This prediction has become known as Moore's Law. In microprocessors, the growing transistor count is also reflected in processing power, often measured in millions of instructions per second (MIPS).
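To see what that doubling rate implies, here is a minimal Python sketch (not part of the original text) that projects transistor counts forward from the 4004's 2,300 transistors in 1971, assuming a fixed two-year doubling period, and compares the projections with a few rows of Table 6.1:

```python
# A minimal sketch of Moore's Law: transistor counts doubling every
# two years, using the Intel 4004 (2,300 transistors, 1971) from
# Table 6.1 as the baseline. The two-year doubling period is the
# commonly quoted figure, not an exact fit to any one product line.

BASE_YEAR = 1971          # Intel 4004 introduced
BASE_TRANSISTORS = 2_300  # transistors in the 4004
DOUBLING_PERIOD = 2.0     # years per doubling (assumed)

def projected_transistors(year: int) -> float:
    """Project the transistor count for a given year under Moore's Law."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD
    return BASE_TRANSISTORS * 2 ** doublings

# Compare the projection against a few actual entries from Table 6.1.
actual = {1974: 6_000, 1989: 1_200_000, 1999: 28_000_000}
for year, count in actual.items():
    print(f"{year}: projected {projected_transistors(year):>13,.0f}, "
          f"actual {count:>13,}")
```

With these numbers, the projection lands within about 10 percent of the actual counts for 1974 and 1989, then overshoots by 1999, a reminder that Moore's Law describes a long-term trend rather than a precise schedule.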