13.1 Why Implementations Change

A new architecture can be novel only at its introduction. After the initial launch of a computer architecture, every subsequent implementation needs some fresh rationale, such as better performance, lower cost, or desirable features. Here, we try to avoid marketing rhetoric and instead elucidate the underlying requirements and scientific principles.

13.1.1 Demands and Opportunities

Processing early twentieth-century census data stimulated improvements in tabulating machines that worked with punch cards. The difficulty of decrypting enemy operational messages during World War II stimulated the building of one of the earliest electronic computers. The fact that a space launch cannot proceed if consensus is lost among the redundant on-board computers underscores the need to understand and prove the reliability of systems and of their individual components.

Toward the end of the twentieth century, successive "killer applications" stimulated the development of new computer architectures and better implementations of existing architectures. We will cite just a few. Document preparation was possible with specialized electronic typewriters, but became universally accessible when small personal computers became truly affordable. The spreadsheet program was functional on character-cell displays, but became much more popular with scalable and scrollable graphical displays. The World Wide Web broadened the appeal of computers among the general public and substantially raised the minimum level of system performance deemed suitable for, and acceptable to, an individual computer owner.

Traditional applications in scientific research and within large corporations can usually put greater computing power to productive use, but a widespread reluctance to pay more for a new computer than for its predecessor underlies the industry's continual effort to demonstrate favorable "price/performance" metrics. Sometimes, as with traditional supercomputers, the disjunction between the desire for the latest or best and the stark realities of implementation cost remains largely unresolved and results in a very narrow market.

13.1.2 Implications of Moore's Law

While the laws of physics such as Heisenberg's uncertainty principle, the atomic granularity of all materials, and the third law of thermodynamics set ultimate limits on computer implementations, the untapped possibilities remaining to be explored and exploited have kept the computer industry on a technological trajectory of feasibility first articulated by Gordon Moore, a cofounder of Intel Corporation.

About every two years, the technology for manufacturing computer chips moves to a new "process" that makes possible a 30% decrease in the linear "feature size" of conducting paths and transistors. Since a chip is two-dimensional, any given circuitry then takes only (0.7)² ≈ 0.5 as much area on the chip as before. Thus every couple of years a new process enables a doubling of the number of transistors that will fit within roughly the same space.
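As a back-of-the-envelope illustration (a sketch, not part of the original text; the starting transistor count and the exact two-year cadence are assumptions made only for the example), the short C program below models this scaling: each new process shrinks linear features to about 70% of their previous size, so the area of a given circuit roughly halves and the transistor budget of a fixed die area roughly doubles.

    #include <stdio.h>

    /* Rough model of the scaling described above.
     * Assumptions for illustration only: a 10-million-transistor
     * starting budget and a new process exactly every two years.
     */
    int main(void)
    {
        double feature = 1.00;        /* relative linear feature size      */
        double transistors = 10e6;    /* assumed starting transistor count */
        int year = 0;

        for (int generation = 0; generation <= 5; generation++) {
            printf("year %2d: feature %.2f, relative circuit area %.2f, ~%.0fM transistors\n",
                   year, feature, feature * feature, transistors / 1e6);
            feature *= 0.7;           /* 30% linear shrink per new process    */
            transistors *= 2.0;       /* same die area holds about 2x devices */
            year += 2;                /* new process about every two years    */
        }
        return 0;
    }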

For many years, the growing numbers of transistors made possible only the design of better processors. By the end of the twentieth century, on-chip cache became feasible and began to occupy more chip area than the actual processor logic.

Device speeds tend to double every three years. Unfortunately, the amount of heat dissipated by the fast switching activity of enormous numbers of transistors also increases inexorably, roughly doubling every three years in spite of lowered operating voltages. Ingenious engineering has thus far developed ways to conduct the heat away from the chip.

Like a product name that has slipped away from its trademark holder into the public domain, Moore's law has entered the vernacular and is often applied in ways that Moore had never intended (see Tuomi).

13.1.3 Anticipating a Long Lifetime for an Architecture

The designers of the Alpha architecture explicitly considered how critical their initial design decisions would be in ensuring that implementations of the new architecture could remain attractive in the marketplace for a couple of decades. Thus they emphasized a clean 64-bit design free from the perceived limitations of existing 32-bit and RISC designs. They also adhered to the IEEE standards for floating-point representations.

More challenging was the recognition that a long lifetime implied a need to anticipate an improvement in overall computing performance by some three orders of magnitude. No single factor would be able to produce that degree of improvement alone, but the following factors were identified as important at the time:

  • Clock speeds should increase by a multiple of 10 over 25 years, but not by a factor of 100 to 1000.

  • Multiple issue (superscalar designs) might increase the amount of work per clock cycle by a multiple of 10, if coordinated improvements in compiler technology could also be achieved.

  • Some form of multiprocessor design would probably be needed to account for a remaining multiple of 10.

By implication, nothing in the initial design should put up roadblocks to taking advantage of these principal factors. Furthermore, the design should lend itself to the incorporation of other initially unproven factors that might become technologically accessible and yield additional smaller multiples toward the overall improvement.
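Because independent improvements multiply rather than add, the rough arithmetic behind the three-orders-of-magnitude target is simply 10 × 10 × 10 = 1000. The tiny C sketch below (illustrative only; each factor is the rough target listed above, not a measured result) makes the multiplicative combination explicit.

    #include <stdio.h>

    /* Illustrative only: each factor is the rough target cited above. */
    int main(void)
    {
        double clock_gain     = 10.0;   /* faster clocks over ~25 years      */
        double issue_gain     = 10.0;   /* multiple issue plus compiler work */
        double multiproc_gain = 10.0;   /* some form of multiprocessing      */

        /* Independent factors multiply, giving roughly 1000x overall. */
        printf("overall improvement: about %.0fx\n",
               clock_gain * issue_gain * multiproc_gain);
        return 0;
    }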


