By 1992, the Internet was growing at rates that were straining it, its address space, and its physical and logical support infrastructures. Concern was running high among the techies behind the scenes at the IETF, but nobody else really seemed to grasp the problem. In November 1992, RFC 1380 was released. This RFC was strictly informational in nature, but it caused a sensation the likes of which the network engineering community has seldom experienced! Certain parties within the IETF had already calculated the projected date by which the remaining supply of Class B address spaces would be exhausted. After that point, the other address classes would also come under increased pressure, and failures would multiply exponentially. The bottom line was that everything would start crashing sometime around March 1994. This Date of Doom became the rallying cry for mobilizing engineering resources to resolve all sorts of problems, big and small, short-term and long-term, for the sake of maintaining the Internet's viability.

The Problems

The problems dated back to the beginning but remained latent. During the Internet's early days, it grew very slowly. That slow rate of growth was caused by two primary factors:
These two factors led to the mistaken belief that the 32-bit address space, the class-based address architecture, and all the Internet's physical and logical support infrastructure would service the Internet far into the future. Not much time passed before that was proven untrue. The symptoms of commercialization, and their projected impacts, appeared likely to cause numerous problems. The most immediate problem areas foreseen by the IETF included the following:
Let's take a look at these two points, because they are highly correlated.

Address Space Exhaustion

The class-based IPv4 address architecture was flawed from the start. Only the very slow initial growth of the Internet masked its deficiencies. The slow growth vis-à-vis the sheer size of the address space (remember, an 8-bit address space with just 256 addresses had been upgraded to a 32-bit address space with more than 4 billion addresses) created a false sense of security, which in turn resulted in wasteful address assignment practices. Not the least of these were the following:
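To make the class-based architecture concrete, here is a minimal Python sketch (the function name and structure are illustrative, not from the original text) that classifies an IPv4 address by its leading bits. The comments show why the Class B range was the pressure point: only 16,384 such networks existed, yet each was large enough (65,534 hosts) to tempt mid-sized organizations into claiming one.

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address under the original class-based
    (pre-CIDR) architecture, based on its leading bits.
    Illustrative sketch only."""
    first_octet = int(ip.split(".")[0])
    if first_octet < 128:        # leading bit 0
        return "A"               # 126 usable networks, ~16.7M hosts each
    elif first_octet < 192:      # leading bits 10
        return "B"               # 16,384 networks, 65,534 hosts each
    elif first_octet < 224:      # leading bits 110
        return "C"               # ~2.1M networks, 254 hosts each
    elif first_octet < 240:      # leading bits 1110
        return "D"               # multicast
    else:
        return "E"               # reserved

print(address_class("10.1.2.3"))     # A
print(address_class("172.16.0.1"))   # B
print(address_class("192.168.1.1"))  # C
```

The scarcity is visible in the numbers: a Class C (254 hosts) was too small for many organizations, a Class A absurdly large, so demand concentrated on the mere 16,384 Class B blocks.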
The inadequacy of the original address space and architecture was being felt most acutely in the Class B range. It was believed, based on an examination of address-space assignments, that this range would be exhausted first, and that continued explosive growth would then rapidly exhaust the remaining address classes in a chain reaction.

Routing Table Explosion

A slightly more subtle implication of the rapid consumption of IP address blocks was the commensurate increase in the size of the Internet's routing table. A routing table, in simple terms, is a router's list of all known destinations, correlated with the interface that should be used to forward datagrams to each destination. Logically, then, the greater the number of network addresses, the larger a routing table becomes. When a router receives a packet to be forwarded, it must first examine that packet's destination IP address and then do a table lookup to find out which port it should use to forward that packet. The larger the table, the greater the time and effort required to find any given entry. This means that the time it takes to figure out where to send each datagram increases with the size of the routing table. Even worse, the demands on physical resources also increase. That's a bit of an oversimplification, but it serves to illustrate my point: The Internet was growing rapidly, and this was directly increasing the size of the routing tables needed by the Internet's routers. This, in turn, caused everything to slow down. It was also straining the physical capabilities of the Internet's routers, sometimes right to their outer limits. Growth was so rapid that simply plugging in more memory or upgrading to the next-larger router or CPU couldn't keep pace. The bloating of the Internet's routing tables had the potential to become a vicious, self-sustaining cycle: The larger the tables became, the more time was required to process any given packet.
The more time was required to process a packet, the more memory and CPU time were consumed. The more CPU cycles were consumed per packet, the more packets became backlogged. This was a problem that could, if left unchecked, cause the Internet to collapse, especially since the Internet continued to grow at a phenomenal rate. Clearly, something had to be done.

The Long-Term Solution

Being able to identify a problem's root cause is always the necessary first step in solving it. Thus, you could argue that the IETF was perfectly positioned to rectify the problems inherent in the IPv4 address space: It understood precisely the sources of its impending problems. But the IETF faced a catch-22. This was the type of problem that couldn't be solved quickly, yet time was not a luxury the IETF could afford! It had to act, and act quickly, to ensure the Internet's continued operability. The IETF realized that the right answer, long term, was to completely reinvent the Internet Protocol's address space and mechanisms. But that would require a tremendous amount of time and effort. Something had to be done in the short term. The IETF had to find ways to buy the time needed to come up with a new IP addressing system. This new protocol, and preferred long-term solution, became known as IPv6. We'll examine IPv6 in Chapter 15, "IPv6: The Future of IP Addressing." For now, let's focus on the emergency measures deployed to shore up the ailing IPv4. The vast majority of these measures were simple quick fixes. Although none of these, individually, would solve the larger problem, each would help forestall the impending Date of Doom in some small way. We'll call these quick fixes interim solutions and look at them in the next section.
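As an aside, the table-lookup cost described under Routing Table Explosion can be sketched in a few lines of Python. This is a deliberately naive, hypothetical model (the table entries and interface names are invented, and it uses the standard ipaddress module); real routers use far more sophisticated data structures, but the point stands: with a flat table, every packet pays a cost proportional to the table's size.

```python
import ipaddress

# Hypothetical routing table: (destination network, outgoing interface).
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("172.16.0.0/16"), "eth1"),
    (ipaddress.ip_network("192.168.1.0/24"), "eth2"),
]

def lookup(dst: str):
    """Naive linear scan over the table. Per-packet cost grows with
    the number of entries -- the scaling problem described above."""
    addr = ipaddress.ip_address(dst)
    for network, interface in routing_table:
        if addr in network:
            return interface
    return None  # no route known: drop, or hand to a default gateway

print(lookup("172.16.5.9"))   # eth1
```

A linear scan is O(n) per packet, so as the Internet's table grew from hundreds to tens of thousands of entries, both lookup time and the memory holding the table grew with it, which is exactly the vicious cycle the text describes.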