2.3 Moore's Law

A fundamental underpinning of the software industry is Moore's law, which observes a relentless and dramatic improvement in the performance per unit cost of the material information technologies and predicts similar future improvements. Moore's law is one of the central features of our industrial age, since it is a significant driver for economic growth and is behind the relentless changes wrought by IT. To understand the software industry, it is important to appreciate Moore's law and its ramifications. It is also instructive to delve into its underlying causes, particularly in extrapolating it into the future.

2.3.1 Metrics of Performance

Hardware performance metrics affect the software and ultimately the user in various ways. Three major classes of performance metrics are summarized in table 2.1. Capacity applies to a specific resource, such as a processor or memory, and is where the characteristics of individual material technologies are most directly relevant. Throughput and delay apply to the entire system, software and hardware, taking into account all resource constraints, in the context of a specific application. Delay is the metric most evident to the user of an application. Throughput is usually of greatest concern to the operator of an application, because it relates to the aggregate level of activity that is supported across all users and ultimately to the number of users.

Table 2.1: Performance Metrics Relevant to a Software Application

Capacity: The ideal throughput of a specific resource that is completely utilized, such as a single processor, storage medium, or communication link.
Example: The number of transactions per second for a processor, assuming it is performing only these transactions and a new transaction is presented just as the previous transaction is completed.

Throughput: What is actually accomplished per unit time by a total system, typically consisting of multiple resources and a mixture of processing, storage, and communication. Greater throughput is desirable because it supports a higher level of activity.
Example: The number of transactions per second on an e-commerce site that farms these transactions out to multiple processors depends on the rate at which the network can accommodate the requests and the processors can complete them.

Delay: Time elapsed while waiting for something expected to happen, taking into account all the resources involved. Less delay is desirable because it allows an application to respond more quickly to user actions or requests.
Example: The delay between clicking a hyperlink and viewing the resulting page depends on the network and the remote Web server; the delay between a live sporting event and its video representation on the screen depends on the network.

While the equipment and application software can be largely functionally separated by appropriate intermediate infrastructure software, as shown in figure 2.4, the performance of application software inevitably depends on the performance characteristics of the hardware.

Example The user cannot perform a given task on a Web server more quickly than the execution time of the software that realizes that task (that execution time depends in turn on the speed of the processor), plus any time required to transfer the request over the network and get the results back (which depends on the performance of the network).

Throughput and delay are interdependent. At low throughput, delay is directly determined by the capacity constraints of individual resources, such as the time to store a file on disk or the time to transfer a message on a communication link. As throughput increases, congestion occurs. Congestion arises because requests arrive irregularly, so that at times the rate of requests temporarily exceeds the capacity constraints of individual resources; those requests must wait for resources to become available, introducing a congestion delay.

Example In a Web server, during periods of congestion (high utilization), requests to view Web pages may temporarily exceed the ability of the server to satisfy those requests. The excess requests are queued up waiting until the completion of earlier-arriving requests. That waiting time is a congestion-induced delay.

Congestion thus causes excess delay at high throughput. The severity of congestion can be predicted from the utilization u ≤ 1, a property of an individual resource (particularly a processor or communication link) defined as the actual average throughput as a fraction of the capacity. (As an example, a processor that is working half the time and idle half the time on average has a utilization of 0.5.) According to one simple statistical model,[10] the average excess congestion-induced delay Dc is related to the utilization u by
Dc = Ds × u / (1 − u)

where Ds is the average service time, defined as the average time to complete the incoming tasks. Thus, for small utilization, the congestion delay approaches zero, but as u approaches 1, the congestion delay gets arbitrarily large. The total (service plus congestion) average delay is the sum
D = Ds + Dc = Ds / (1 − u)

Congestion plays a role in the economics of resource sharing; it can be controlled through pricing and other mechanisms (see chapter 9).
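As an illustration, the congestion model of note 10 can be expressed in a few lines of Python (the function names are ours, not from the text):

```python
def congestion_delay(service_time: float, utilization: float) -> float:
    """Average congestion-induced waiting time Dc = Ds * u / (1 - u)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must satisfy 0 <= u < 1")
    return service_time * utilization / (1 - utilization)

def total_delay(service_time: float, utilization: float) -> float:
    """Average total (service plus congestion) delay D = Ds / (1 - u)."""
    return service_time + congestion_delay(service_time, utilization)

# At u = 0.5 the congestion delay equals the service time, as noted in the text;
# as u approaches 1, the delay grows without bound.
for u in (0.1, 0.5, 0.9):
    print(u, round(total_delay(1.0, u), 3))
```

Running the loop shows the characteristic knee of the curve: delay is nearly the bare service time at low utilization and grows rapidly past u ≈ 0.8.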

The delay and throughput performance parameters relate directly to the cost of the hardware required for a given application. To maintain acceptable delay, the utilization must be kept within bounds. This means that hardware resources cannot be fully utilized on average, in order to accommodate variation in load. If greater throughput is required (e.g., because the number of users has increased), the utilization can be increased (that is, no hardware resources are added), but only at the expense of added delay. Alternatively, capital expenditures on additional hardware can increase capacity, keeping utilization and delay constant.[11]
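This trade-off can be sketched numerically using the congestion model of note 10; the figures and function names below are hypothetical, and the sketch assumes load spreads evenly across identical servers:

```python
import math

def max_utilization(service_time: float, target_delay: float) -> float:
    """Highest utilization keeping the average delay Ds / (1 - u)
    within a delay budget: u <= 1 - Ds / D_target."""
    return 1 - service_time / target_delay

def servers_needed(arrival_rate: float, capacity: float, u_max: float) -> int:
    """Servers required so each server's utilization stays below u_max
    (assumes requests are spread evenly across identical servers)."""
    return math.ceil(arrival_rate / (capacity * u_max))

# Example: a 20 ms service time and a 100 ms delay budget allow u up to 0.8,
# so 400 requests/second at 50 requests/second per server needs 10 servers.
u = max_utilization(0.020, 0.100)
print(servers_needed(400, 50, u))
```

A tighter delay budget forces a lower utilization ceiling, which in turn requires more hardware for the same load, which is exactly the cost relation the text describes.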

2.3.2 Statement of Moore's Law

Economic historians will undoubtedly view the rapid advance in information technologies as a seminal and remarkable defining characteristic of the information age. This advance can be captured succinctly by Moore's law, which in its simplest form states that "the performance per unit cost of material information technologies increases exponentially with time." It applies to all three areas of IT, at least for the time being, although the law as originally stated by Gordon Moore (1965) applied specifically to integrated circuits. (Amusingly, Moore used the inelegant term cramming to describe the progress in getting more devices on each chip.) Engineers do not usually associate the appellation "Moore's law" with storage and communication, although the exponential improvement in both those technologies can be traced to underlying phenomena similar to those in electronics (specifically scaling) and to the fact that both storage and communication incorporate electronics as well as other material technologies (such as magnetic or optical storage media or fiber optics).

Economists are quite familiar with exponential advances. For example, the compounding of reinvested interest results in an exponential increase in principal, and similarly if the world economy grows at a fixed percentage rate, then its total size increases exponentially. However, an exponential improvement in the cost-effective performance of technology (in contrast to the diffusion of technology, which is related to the growth in the overall economy) was unknown to science and engineering prior to the information age, and is not an observed characteristic of earlier technologies such as transportation and electrification.

Example An exponential increase in railroad tracks was observed during the first few decades of the railroad industry in the United States (Schaller 1997). However, this relates to the diffusion of the technology through the compounding of investments, not the performance of the technology. For trains, performance would be measured by speed, fuel consumption per unit distance, or maximum load capacity. As Time magazine (January 3, 1983) wrote, "If the automobile business had developed like the computer business, a Rolls Royce would now cost $2.75 and run 3 million miles on a gallon of gas." Even those figures have been dwarfed in the two decades since that was written.

If one waits long enough, the cumulative effect of an exponential advance is dramatic. We have seen approximately three to four decades of the operation of Moore's law thus far, and it is expected to pertain some years into the future.

2.3.3 Incrementing Moore's Law

An exponential improvement requires a parameter to describe the speed of advance. Economists are accustomed to using the rate r, where the exponential advance in time t is described quantitatively as (1 + r)^t, and r can be interpreted as the fraction (usually expressed as a percentage) of increase in unit time (Δt = 1). Scientists and engineers, on the other hand, are accustomed to a different (albeit equally descriptive) parameter, the time d required for a doubling.

Example Radioactive decay is typically characterized by its half-life, defined as the time for the intensity of the radioactive decay to decrease by a factor of 2.

These parameters are related by (1 + r)^d = 2 and shown in table 2.2 for the three areas of IT (based on published estimates). It is also useful to compare the accumulated performance improvement over an extended period of time; one decade is shown in table 2.2. Note that processing is improving more slowly than electronics because of implementation inefficiencies. Storage capacity (total bits that can be stored per unit cost) is improving as fast as electronics, while storage throughput (rate at which bits can be written or read) is improving more slowly. Communication throughput for fiber optics is improving dramatically, but this is a recent and relatively short-term phenomenon.[12] Longer-term, fiber optics should follow a trend similar to electronics. Generally, these capacity improvements flow directly to the achievable throughput for a given application, if the application and its supporting hardware infrastructure are well designed.
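The relation (1 + r)^d = 2 makes it easy to convert between the two parameters. A short sketch (function names ours, with the widely cited eighteen-month doubling time as the example input):

```python
def rate_from_doubling(months: float) -> float:
    """Annual improvement rate r implied by a doubling time d (in months),
    from (1 + r)^(d/12) = 2."""
    return 2 ** (12 / months) - 1

def multiplier(months: float, years: float = 10) -> float:
    """Accumulated performance multiple over a period, e.g., one decade."""
    return 2 ** (12 * years / months)

# An 18-month doubling time corresponds to roughly 59 percent per year,
# and about a hundredfold improvement per decade.
print(round(rate_from_doubling(18), 3), round(multiplier(18), 1))
```

This is why seemingly small differences in doubling time compound into very different decade multipliers in table 2.2.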

Table 2.2: Estimated Improvement in Performance per Unit Cost for the Information Technologies
(Columns: Doubling Time (months), Rate (%/year), Multiplier (per 10 years))

Electronics throughput (CMOS)[a]: data processed per unit time

Processing throughput (commercial computers): data processed per unit time

Storage capacity (magnetic disk): bits stored

Storage throughput (magnetic disk): bits written or read per unit time

Communication throughput (fiber optics): bits transported per unit time
Sources: Borkar (1999; 2000); Gray and Shenoy (2000).

[a]CMOS stands for complementary metal oxide semiconductor.

2.3.4 Effect of Moore's Law on Software

The implications of Moore's law for the software industry are profound and expected to continue for some time. The role that electronics and photonics play in the software industry is analogous to the role of the aircraft industry from the perspective of the travel industry. However, unlike aircraft, electronics and photonics performance characteristics per unit cost are improving exponentially with time, adding a dramatic "tail wind."

Historically the computer industry has evolved through four phases, listed in table 2.3. They are overlapping because none has disappeared; for example, the mainframe is still widely used for crucial enterprise applications. The evolution through these phases has been largely driven by the declining cost and improved performance of computing as well as (in the fourth phase) the declining cost and improved performance of communication. Computing power once affordable only to large organizations is now affordable to individual consumers, opening up entirely new business models for the software industry. The dramatic effect of the networked computing phase has just begun and is an important issue in this book.

Table 2.3: The Four Historical Overlapping Phases of Computing

Mainframe computer
Description: All computing is done in batch mode on a large, expensive centralized computer.
Effect on software: Computers are affordable only to large organizations, and essentially all software applications serve back-office business needs or the computational needs of science and engineering.

Time-sharing computer
Description: Multiple users can work interactively on the same (relatively large and expensive) centralized computer.
Effect on software: Software applications begin to focus on the needs of individuals as well as large organizations, enhancing their effectiveness and productivity.

Personal computer
Description: Individual users have dedicated desktop computers. Individuals can acquire new applications directly from software vendors and install them.
Effect on software: A consumer market for software develops, with distinctly different sales and distribution mechanisms.

Networked computer
Description: All computers can communicate with one another. Applications can be distributed across different computers. For example, client-server computing is a mixture of the time-sharing and personal computer modes, in which shared applications on a server are presented to individual users on their personal computers.
Effect on software: Software applications become deeply ingrained in enterprise business processes, which are inherently distributed. Software applications become a basis for communication among individuals and groups, and for distributed information access. New challenges of interoperability and portability arise.

Source: Messerschmitt (1999c).

Another result of the operation of Moore's law is to free developers to concentrate less on performance and more on features that enhance usability (e.g., graphical user interfaces and real-time video), reduced time to market, or added functionality. In earlier years, software developers spent much time and effort enhancing performance to achieve adequate interactive delays; this remains important but secondary to functionality. The operation of Moore's law has allowed advances in processing power for user interface enhancements through graphics and for incorporating new media like audio and video into applications. The sale of a sequence of upgrades of the same application is an important economic underpinning of the software industry, but this would be less feasible without the advances in processing power. A major technical challenge to software developers today is dealing with the inherent complexity of applications rather than achieving adequate performance. Many of the techniques for dealing with this complexity require added processing power (see chapter 4).

Advances in storage are also having a profound effect. It is estimated that the total information generated worldwide is growing at about 50 percent per year (Lyman and Varian 2000). Since this rate is comparable to Moore's law for storage, the cost of storing this information is not growing very fast, if at all. The pace of generation of information is profoundly influenced by IT, and the declining cost per unit storage is one enabler.
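The arithmetic behind this observation can be sketched; the 100 percent per year capacity improvement used here is illustrative (a twelve-month doubling), not a figure from the text:

```python
def annual_cost_ratio(data_growth: float, tech_improvement: float) -> float:
    """Year-over-year change in the cost of storing all information generated:
    the volume of data grows by data_growth per year while storage capacity
    per unit cost grows by tech_improvement per year."""
    return (1 + data_growth) / (1 + tech_improvement)

# With 50 percent annual data growth against a doubling of storage capacity
# per dollar each year, the cost of storing everything actually declines.
print(round(annual_cost_ratio(0.5, 1.0), 2))
```

So long as the technology improvement rate matches or exceeds the information growth rate, the aggregate storage bill stays flat or falls, as the text asserts.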

A final effect of Moore's law is its influence on the boundary between hardware and software. For any given application, with the declining cost of electronics, a software implementation becomes more attractive over time.

Example At one time, dedicated hardware was required to process speech. Later, it became attractive to process speech using specialized programmable digital signal processing. Today most speech processing can be performed on a standard personal computer with the addition of software, eliminating the need for specialized hardware.

Software solutions leverage the greater manufacturing volumes and resulting economies of scale for programmable hardware devices, allow a given application to readily benefit from future technology advances, and allow a reduced time to market and greater flexibility to change and upgrade a design (often after manufacture and shipment). Software solutions benefit from the sharing of a single programmable device by multiple applications. Of course, customized hardware solutions are required for applications that stress the performance capabilities of today's electronics. A key predictor of the economic viability of a software solution is the gap between the intrinsic capabilities of electronics on the one hand and the application needs on the other—as this gap grows, software implementation on standard programmable devices becomes increasingly attractive.

2.3.5 System Bottlenecks

An IT system is a mixture of various hardware and software elements working together to achieve a single purpose. Practical systems mix all the technologies we have discussed. While they directly benefit from the operation of Moore's law, they also suffer from bottlenecks. A bottleneck is a capacity limit in one element that constrains overall system performance; increasing other aspects of the system's capacity would not help. Moore's law translates into improved system performance only with skillful system design, in part to avoid or bypass bottlenecks. This is considered further in chapter 4.

Some important system bottlenecks are imposed by both technology and economics. The deployment of broadband communication technologies to individual residences is constrained by the high capital investments and risks of deploying fiber optics technology. Mobile applications require wireless communication technologies, which are inherently less capable than fiber optics.[13] The characteristics of battery technology also limit the amount of processing that can be performed within portable terminal devices.

One physical property that is not described by Moore's law is the speed of light, which remains fixed and ultimately becomes a bottleneck. Because of the finite speed of light, physically small systems are likely to outperform physically large ones.
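A rough calculation illustrates this bottleneck; the velocity factor for light in glass fiber is approximate (roughly two-thirds of c):

```python
C = 299_792_458       # speed of light in vacuum, meters per second
FIBER_FACTOR = 0.67   # approximate fraction of c for light in glass fiber

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over fiber, ignoring every other
    source of delay (processing, queueing, congestion)."""
    return distance_km * 1000 / (C * FIBER_FACTOR) * 1000

# A 4,000 km cross-country link costs about 20 ms one way,
# no matter how fast the electronics at either end become.
print(round(propagation_delay_ms(4000), 1))
```

This floor on delay does not shrink with any generation of technology, which is why physically small systems are likely to outperform physically large ones.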

2.3.6 Why Moore's Law?

The origins of Moore's law lie in observations of three material technology inventions at the beginning of the information age. The first was the transistor in the late 1940s, followed by the integrated circuit in the late 1950s. The second was magnetic storage using a rotating disk as a medium in the 1950s, which has similarly benefited from decades of incremental improvement. The third was communication by light using fiber optics as the medium in the 1970s, also followed by decades of incremental improvement.

Moore's law is an empirical observation, but what factors underlie it (Schaller 1997)? There seem to be three contributing factors: the physical law of scaling, the economics of internal investment, and the self-fulfillment of expectations. First, consider physical laws. Like other technologies, electronics and photonics have fundamental limits on performance defined by physical laws. Fortunately today's technologies are far from these physical limits, suggesting that the rate of improvement predicted by Moore's law may continue for some time (Keyes 2001).

Since previous technologies (like transportation or electrification) did not advance according to Moore's law, there must be something distinctive about the material information technologies. That is the physical law of scaling: as the physical dimensions of electronic and optical devices are miniaturized, all their important performance characteristics improve. This is a remarkable observation to engineers, who are accustomed to making trade-offs: as one performance measure is improved, others get worse.[14] The law of scaling can be stated as follows. Let s < 1 be a factor by which all the feature sizes of a given technology are scaled; that is, all physical dimensions of transistors are scaled by s, as are the widths of the wires connecting the transistors. Then physical laws predict the performance implications listed in table 2.4.

Table 2.4: Scaling Laws for MOS[a] Integrated Circuits with a Fixed Electric Field Intensity

Number of devices per unit area of a chip (or total number of devices on a chip with the same total area).
Numerical change: increases by 1/s².
Reason: The area of each device is proportional to the square of its dimensions.

Rate at which operations can be performed by each device.
Numerical change: increases by 1/s.
Reason: The velocity of electrons does not change, but the physical distance they must move is reduced by s.

Power consumption per unit area (or total power consumption for a chip with the same total area).
Numerical change: unchanged.
Reason: Both the supply voltage and the distance traveled by electrons drop, reducing ohmic losses per device. This is offset by the larger number of devices.

Source: Firesmith and Henderson-Sellers (2001).

Notes: All these scaling predictions assume that the power supply voltage is reduced in proportion to s (so-called constant electric field scaling). In practice this voltage reduction is difficult because systems often mix technologies and use a single supply voltage. For this reason, these scaling laws are somewhat optimistic, but they do capture the trends.

[a]MOS stands for metal oxide semiconductor.

With material IT everything improves with scaling: device speed improves, the complexity (number of devices) can increase, and the power consumption per unit of complexity or speed is reduced. Technological improvements focus on scaling, making devices smaller and smaller and packing more devices on a chip. Similar phenomena occur in storage and communication.[15] There are other advances as well, such as the increasing total area of chips and improvements in materials.

How does scaling result in an exponential improvement over time? This seems a likely outcome of scaling (in the direction of miniaturization, not gigantism), using the following logic. Suppose the feature sizes for each generation of technology are scaled by the same scaling factor s. Then the feature size decreases as s, s², s³, and so forth. Assuming these generations are equally spaced in time, the improvements in table 2.4 occur geometrically with time in accordance with Moore's law. Thus, with a couple of (perhaps bold) assumptions, we can predict Moore's law as a direct consequence of the law of scaling.
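This compounding can be sketched numerically. The per-generation scaling factor s ≈ 0.7 used below is a commonly cited industry figure, not one taken from the text:

```python
def scaled_metrics(s: float, generations: int) -> dict:
    """Cumulative effect of the table 2.4 scaling laws after n generations,
    each shrinking all feature sizes by the factor s < 1."""
    f = s ** generations
    return {
        "feature_size": f,           # shrinks as s^n
        "device_density": 1 / f**2,  # devices per unit area grow as 1/s^(2n)
        "device_speed": 1 / f,       # operation rate grows as 1/s^n
    }

# With s ~ 0.7, each generation roughly doubles device density
# (1 / 0.7^2 is about 2); three generations shrink features to about a third.
m = scaled_metrics(0.7, 3)
print(round(m["device_density"], 1), round(m["device_speed"], 2))
```

Because each generation applies the same factor, the metrics grow geometrically with the generation count, which is the exponential advance Moore's law describes.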

While the law of scaling predicts the exponential improvement described by Moore's law, it provides no explanation of the observed rate of improvement. For this, we turn to the economics of internal investment. The doubling time is determined, to a large degree at least, by the investments that industrial firms and the government make in research and development (R&D) directed at incremental technology improvements (principally scaling). At every generation of technology, new obstacles are faced that require innovation, and new equipment and technologies have to be designed. Since the size of the improvements and the time required to make those improvements depends on the size of the investments, economics must intervene.

Consider industrial firms investing in a new generation of technology: they must make three dependent investment decisions. First, they determine the rate of investment in R&D. Second, they choose a scaling factor s to aim at with that investment. Third, they decide the time interval over which that scaling is to be achieved. Qualitatively, for a given rate of R&D investment, a larger s (a more modest scaling and a smaller performance payoff) can be achieved in a shorter time, therefore increasing the present value of that payoff,[16] and with lower risk.[17] The overall rate of investment in R&D depends on current revenues and profitability expectations, where revenues depend in turn on the outcomes of previous R&D investments and the success in defining new products based on previous generations of technology. Overall, firms must balance a large number of factors to achieve the ultimate goal of profitability, now and in the future. The point is that the rate of technological improvement depends in a complex way on a financial and economic model of R&D investment, and the law of scaling provides no clue in and of itself.
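The present-value reasoning of note 16 can be made concrete with a small sketch; all payoff figures and the 15 percent discount rate here are hypothetical:

```python
def present_value(payoff: float, years: float, discount_rate: float) -> float:
    """PV = R / (1 + r)^n, the time value of money from note 16."""
    return payoff / (1 + discount_rate) ** years

# A modest scaling payoff delivered in two years versus a larger payoff
# delivered in four years, discounted at 15 percent per year.
early = present_value(100, 2, 0.15)
late = present_value(130, 4, 0.15)
print(round(early, 1), round(late, 1))
```

Even though the later payoff is 30 percent larger, its present value is smaller, illustrating why a firm might prefer a more modest scaling target achieved sooner.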

The situation is further complicated (in a delightful way) by the observation that scaling results in an ever-expanding suite of applications and products at lower cost, and therefore an increase in unit volumes. This expanding market results in a virtuous positive feedback cycle: scaling expands the market, and the market expansion creates increasing revenues for suppliers, which in turn allows ever-expanding investments in R&D to fuel the next cycle of scaling (as well as capital equipment investments in the required new fabrication plants). This is fortunate, because empirically each cycle of scaling requires ever-larger R&D and capital investments.

The third factor behind Moore's law is the self-fulfillment of expectations, and this has to do with how the industry is organized and coordinated. The electronics industry as a whole must coordinate three complementary businesses: the supply of equipment required to fabricate integrated circuit chips, the construction and operation of factories to fabricate those chips, and the manufacture and assembly of equipment that incorporate those chips.[18] (Of course, a major point of this book is that the IT industry has spawned a fourth business, software. Industrial organization is discussed in greater depth in chapter 7.) These complementary investments must be coordinated and precede the need. Technology thus advances in discrete generations, where fabrication equipment suppliers supply each new generation of equipment, chip manufacturers construct factories to accept this equipment and produce salable chip designs to fabricate, and equipment manufacturers design new chips into their new generations of equipment. Coordination is considerably enhanced by a predictable road map that all can follow. The world's semiconductor manufacturers cooperate on just such a road map (International Technology Roadmap 2002). If this road map happens to be based on Moore's law, and this law is feasible in terms of the pace of innovation and internal investment, then the law becomes self-fulfilling. Firms that don't follow the road map lose out to competition.

See chapter 10 for discussion of the applicability of Moore's law in the future.

[10]The details are less important than the overall trend this model predicts. Nonetheless, we present them here (see Messerschmitt 1999c). Dc is the average waiting time for a Poisson arrival process of requests with independent, exponentially distributed service times of average Ds. For example, u = 0.5 results in a congestion-induced delay equal to the average service time. Of course, this specific model would not be an accurate reflection of reality in all instances of congestion.

[11]The ability to improve capacity through the addition of hardware ("scaling out") depends on the proper architecture of an application and its underlying infrastructure. This is discussed further in chapter 4.

[12]Up to the mid-1990s communication through fiber was limited by improvements in the electronics to transmit and receive the bits, and thus doubled about every eighteen months. The innovation that circumvented this bottleneck was wavelength-division multiplexing (WDM), which allowed multiple streams of bits to be transmitted simultaneously on different wavelengths. This has increased the rate of improvement dramatically because it bypasses the electronics bottleneck.

[13]This is a fundamental consequence of the observation that the radio spectrum has vastly less bandwidth than the optical spectrum, even as the latter is limited by the material properties of the glass that makes up fiber.

[14]For example, increasing the maximum load capacity of a train likely decreases both fuel economy and maximum velocity.

[15]The advance of magnetic and optical storage focuses on decreasing the area required for storage of each bit, allowing more bits to be stored on a given disk and also, along with advances in electronics, enabling faster transfer of bits in and out. Advances in fiber optics focus on increasing the transfer rate of bits through the fiber, which is equivalent to reducing the length of fiber occupied by each bit, because the bits always travel at the same velocity (a fraction of the speed of light).

[16]The time value of money dictates that a return R occurring n years in the future has present value R/(1 + r)n for prevailing interest rate r. The discount factor (1 + r)n is smaller for earlier returns (smaller n), increasing the present value of the return.

[17]The specific risks to be concerned about are outright failure to achieve the goal, a need to increase the investment to achieve the goal, or a delay in achieving the goal.

[18]These three businesses were once largely captive to large vertically integrated firms. Industry maturity and market forces have favored specialization, so now they are largely separate firms (with some notable exceptions). Many of the largest equipment manufacturers, like IBM, Nortel, Lucent, Siemens, Philips, NEC, Sony, and many others, retain internal integrated circuit manufacturing. However, they purchase many chips from specialized integrated circuit manufacturers like Intel and Texas Instruments. Meanwhile, most of the newer equipment manufacturers (like Cisco) outsource their integrated circuit manufacturing, even chips they design themselves.

Software Ecosystem: Understanding an Indispensable Technology and Industry
ISBN: 0262633310
Year: 2005