Responding to the Crisis


By 1992, the Internet was growing at rates that strained its address space and its physical and logical support infrastructures. Concern was running high among the techies behind the scenes at the IETF, but nobody else really seemed to grasp the problem. In November 1992, RFC 1380 was released. This RFC was strictly informational in nature, but it caused a sensation the likes of which the network engineering community has seldom experienced!

Certain parties within the IETF had already calculated the projected date by which the remaining supply of Class B address spaces would be exhausted. After that point, the other address classes would also come under increased pressure, and failures would mount exponentially. The bottom line was that everything would start crashing sometime around March 1994. This Date of Doom became the rallying cry for mobilizing engineering resources to resolve all sorts of problems, big and small, short-term and long-term, for the sake of maintaining the Internet's viability.

The Problems

The problems dated back to the beginning but remained latent. During the Internet's early days, it grew very slowly. That slow rate of growth was caused by two primary factors:

  • These were the halcyon days before client/server computing architectures, when relatively few devices needed their own network addresses.

  • The Internet was still not a commercial vehicle; the only entities connected to it were academic, research, and military organizations. They all used it to facilitate their mutual research.

These two factors led to the mistaken belief that the 32-bit address space, the class-based address architecture, and all the Internet's physical and logical support infrastructure would service the Internet far into the future. Not much time passed before that was proven untrue. The symptoms of commercialization, and their projected impacts, appeared likely to cause numerous problems. The most immediate problem areas foreseen by the IETF included the following:

  • Impending exhaustion of the Class B address space

  • Subsequent exhaustion of the other areas of the IPv4 address space

  • Routing performance problems caused by an explosion in the size of the Internet's routing tables

  • The human impact of having to support an ever-larger set of routes in an ever-growing internetwork

Let's take a look at these two problem areas, address exhaustion and routing table growth, because they are highly correlated.

Address Space Exhaustion

The class-based IPv4 address architecture was flawed from the start. Only the very slow initial growth of the Internet masked its deficiencies. The slow growth vis-à-vis the sheer size of the address space (remember, an 8-bit address space with just 256 addresses was upgraded to a 32-bit address space with more than 4 billion addresses) created a false sense of security. This resulted in wasteful address assignment practices, not the least of which were the following:

  • Hoarding of address blocks: Early adopters of the Internet tended to register far more blocks of addresses than they needed. Many large, early Internet users grabbed Class A address spaces just because they could. In retrospect, it seems the only criterion in handing out IP address blocks was the size of the organization, not the size of its needs. This was justified as "planning" for unknown and unspecified future requirements.

  • Inefficient assignments: Because IP addresses were so plentiful, many organizations assigned their numbers in a wasteful manner. This, too, could be attributed to "planning."

  • Inefficient architecture: As you saw in Chapter 2, "Classical IP: The Way It Was," the class-based architecture contained inordinately large gaps between the classes. These gaps wasted a considerable number of addresses. Under strict classical rules, a medium-sized company that required 300 IP addresses would have been justified in requesting a Class B address. In effect, more than 65,000 addresses would have been wasted due to this architectural inefficiency, as the sketch after this list illustrates.
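
To put numbers on that last point, here is a minimal Python sketch of the classful arithmetic. It is my own illustration, not something from the classful specifications; the host capacities are simply the standard 2^n - 2 usable-host counts for each class, and the 300-host company is the example just described.

    # Minimal sketch of classful address waste (illustrative only).
    # Usable host counts assume the all-zeros and all-ones host IDs are reserved.

    CLASS_HOST_CAPACITY = {
        "A": 2**24 - 2,   # 16,777,214 usable hosts per Class A network
        "B": 2**16 - 2,   # 65,534 usable hosts per Class B network
        "C": 2**8 - 2,    # 254 usable hosts per Class C network
    }

    def smallest_class_for(hosts_needed):
        """Return the smallest class whose host capacity covers the request."""
        for cls in ("C", "B", "A"):
            if CLASS_HOST_CAPACITY[cls] >= hosts_needed:
                return cls
        raise ValueError("request exceeds even a Class A network")

    needed = 300
    cls = smallest_class_for(needed)
    wasted = CLASS_HOST_CAPACITY[cls] - needed
    print(f"{needed} hosts -> Class {cls}, {wasted:,} addresses wasted")
    # Output: 300 hosts -> Class B, 65,234 addresses wasted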

The inadequacy of the original address space and architecture was being felt most acutely in the Class B range. It was believed, based on an examination of address-space assignments, that this range would be exhausted first, and that continued explosive growth would rapidly exhaust the remaining address classes in a chain reaction.

Routing Table Explosion

A slightly more subtle implication of the rapid consumption of IP address blocks was the commensurate increase in the size of the Internet's routing table. A routing table, in simple terms, is a router's list of all known destinations, correlated with which interface should be used to forward datagrams to those destinations. Logically, then, the greater the number of network addresses, the larger a routing table becomes. When a router receives a packet to be forwarded to a destination, it must first examine that packet's destination IP address and then do a table lookup to find out which port it should use to forward that packet. The larger the table, the greater the amount of time and effort required to find any given piece of data. This means that the time it takes to figure out where to send each datagram increases with the size of the routing table. Even worse, the demands on physical resources also increase.

That's a bit of an oversimplification, but it serves to illustrate my point: The Internet was growing rapidly, and this was directly increasing the size of the routing tables needed by the Internet's routers. This, in turn, caused everything to slow down. It was also straining the physical capabilities of the Internet's routers, sometimes right to the outer edges of what they could handle. In other words, simply plugging in more memory or upgrading to the next-larger router or CPU wouldn't help for long.
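
To make the scaling concern concrete, here is a deliberately naive Python sketch of a routing table as a flat list that is scanned for every packet. This is my own toy model, not how production routers actually perform lookups (they use longest-prefix matching over specialized data structures and hardware), but it shows why per-packet work and memory grow with the number of routes.

    import ipaddress
    import random

    def build_table(num_routes):
        """Build a toy routing table: (network, outgoing interface) pairs."""
        table = []
        for i in range(num_routes):
            net = ipaddress.ip_network(
                f"{random.randint(1, 223)}.{random.randint(0, 255)}."
                f"{random.randint(0, 255)}.0/24"
            )
            table.append((net, f"eth{i % 4}"))
        return table

    def lookup(table, destination):
        """Naive linear scan: return the interface of the first matching network."""
        addr = ipaddress.ip_address(destination)
        for network, interface in table:
            if addr in network:
                return interface
        return None  # no route known; a real router would use a default route or drop

    small = build_table(1_000)
    large = build_table(100_000)
    # Every packet pays the cost of searching the table; with this naive scan,
    # a table 100 times larger means roughly 100 times the worst-case work per packet.
    print(lookup(small, "10.1.2.3"), lookup(large, "10.1.2.3"))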

The bloating of the Internet's routing tables had the potential to become a vicious, self-sustaining cycle: The larger the tables became, the more time was required to process any given packet. The more time was required to process a packet, the more memory and CPU time were consumed. And the more CPU cycles each packet consumed, the more packets became backlogged. This was a problem that could, if left unchecked, cause the Internet to collapse, especially since the Internet continued to grow at a phenomenal rate.

Clearly, something had to be done.

The Long-Term Solution

Being able to identify a problem's root cause is always the necessary first step in solving it. Thus, you could argue that the IETF was perfectly positioned to rectify the problems inherent in the IPv4 address space: It understood precisely the sources of its impending problems. But the IETF faced a catch-22. This was the type of problem that couldn't be solved quickly, yet time was a luxury the IETF could not afford! They had to act, and act quickly, to ensure the Internet's continued operability.

The IETF realized that the right answer, long term, was to completely reinvent the Internet Protocol's address space and mechanisms. But that would require a tremendous amount of time and effort. Something had to be done in the short term. They had to find ways to buy the time needed to come up with a new IP addressing system. This new protocol, the preferred long-term solution, became known as IPv6. We'll examine IPv6 in Chapter 15, "IPv6: The Future of IP Addressing."

For now, let's focus on the emergency measures deployed to shore up the ailing IPv4. The vast majority of these measures were simple quick fixes. Although none of these, individually, would solve the larger problem, each would help forestall the impending Date of Doom in some small way. We'll call these quick fixes interim solutions and look at them in the next section.



