CIDR was defined in RFCs 1517 through 1520, published in September 1993. As you peruse these documents, you can't help but notice the urgency with which the IETF was grasping for a solution to its impending address crisis. The Internet's commercialization wrought a dramatic and rapid evolution in its size, scope, and user base. Once populated by just a few academic and research bodies collaborating on DARPA projects, the Internet quickly grew to global proportions. Its user community also ran the gamut, including an ever-increasing number of users trying desperately to figure out how to make money off the Net. The effect was rapidly mounting pressure on all the Internet's internal support mechanisms, including its address space, that threatened to collapse the Internet itself. Specific areas of concern for the IESG included the following:
With these concerns as their motivation, the IESG studied the problems plaguing the IP address space extensively. They documented their recommendations in RFC 1380, published in November 1992. This document posited CIDR as the solution to the first two problems. CIDR remained to be developed, but enough of its desired functionality was spelled out in RFC 1380 that the Internet community could appreciate the impending changes. At this point, nobody within the IETF was certain exactly how to solve the looming address crisis. Consequently, they attacked in all feasible directions. Specifying a classless IP architecture was but one of the myriad tactics deployed to buy enough time for IPv6 to be developed. Implicit in this statement is the fact that, like the various stopgap technologies we examined in Chapter 5, nobody really expected CIDR to be more than just another stepping-stone to IPv6.

CIDR: An Architectural Overview

The IETF witnessed the successful grassroots innovation of VLSM and appreciated the significance of moving beyond octet boundaries in the IP addressing architecture. With subnets, this was a trivial endeavor: A subnet mask, regardless of whether it is of fixed or variable length, is of local significance only. In other words, routers do not use subnet addresses to decide on optimal paths through a network. However, embracing a bit-level definition of network addresses would have a tremendous impact on routers and how they calculate routes. The previous method of operation was the class-based addressing architecture that we examined in Chapter 2, "Classical IP: The Way It Was." Classical IP used the leftmost bits of the leftmost octet to determine an address's class. It was possible to identify the class of any IP address simply by examining its first octet in binary form. By identifying the bit position in which a 0 first appeared, you could determine whether the address was Class A, B, C, or D.
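The class-determination rule just described can be sketched in a few lines of Python. This is purely illustrative; the function name `classify` and the sample addresses are chosen here, not taken from the text.

```python
def classify(address: str) -> str:
    """Return the old class (A-E) of an IPv4 address by finding the
    bit position of the first 0 in its leftmost octet."""
    first_octet = int(address.split(".")[0])
    if first_octet & 0b10000000 == 0:    # 0xxxxxxx
        return "A"
    if first_octet & 0b01000000 == 0:    # 10xxxxxx
        return "B"
    if first_octet & 0b00100000 == 0:    # 110xxxxx
        return "C"
    if first_octet & 0b00010000 == 0:    # 1110xxxx
        return "D"
    return "E"                           # 1111xxxx

print(classify("10.0.0.1"))     # first octet 00001010 -> A
print(classify("172.16.0.1"))   # first octet 10101100 -> B
print(classify("192.168.0.1"))  # first octet 11000000 -> C
```

Note that a router applying this rule needed no mask at all: the class alone told it how many leading bits formed the network address.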
After establishing an address's class, a router knew precisely how many bits of the 32-bit address it should use to make routing decisions. Abandoning the class-based approach enables more efficient use of the address space via more finely tunable address allocation. An even more important change was that the mathematical boundaries of the old address classes were done away with. Thus, the once-threatening problem of the depletion of Class B addresses was solved. The solution came in two forms:
- The demand for Class B network addresses was sharply reduced simply by letting network addresses be created on bit boundaries, as opposed to octet boundaries. Thus, if a Class C (24-bit) network were too small, you could get a 23-bit, 22-bit, or 21-bit network (and so on) instead of jumping straight to a 16-bit network block.
- The potential supply of 16-bit-sized networks was greatly increased by eliminating the mathematical boundaries of the old address classes. Under the CIDR rules, a network block of any size could be created from anywhere in the 32-bit address space.

An additional benefit of CIDR was that smaller networks could still be carved from larger network blocks. In the class-based architecture, this was known as subnetting. Subnets, as you have seen in previous chapters, violated the class-based architecture. Thus, they couldn't be routed across the Internet but were invaluable for creating local subnetworks. In a CIDRized environment, abandoning the rigid hierarchy of class-based addresses in favor of bit-level boundaries created a huge problem for routers: How do you figure out how many bits are significant for routing? The answer was to expand the use of subnet masks and to make them routable instead of merely locally significant. In this fashion, the boundary between subnet masks and network blocks became thoroughly blurred. Today, the distinction is almost semantic. It depends on how aggregatable your address distribution scheme is and how your routers are configured. If the word aggregatable leaves you scratching your head, fear not! It's not as complex as it sounds. We will look at aggregatability later in this chapter. For now, let's take a closer look at CIDR notation.

CIDR Notation

It is imperative that you understand CIDR notation, including what it is and what it isn't. CIDR notation has become the predominant paradigm for conveying the size of a network's prefix. But it is just a human-friendly form of shorthand.
When you configure routers, you must still use the dotted-quad style of IP mask. For example, 255.255.255.248 is the equivalent of a /29. It should be obvious which one is easier to use. Table 6-1 shows the valid range of CIDR block sizes, the corresponding bitmask, and how many mathematically possible addresses each block contains. If this table looks familiar, it's because of the similarities between CIDR and VLSM. You've seen some of this data before, in Table 4-1. CIDR notation was included with that table simply to make it easier for you to go back and compare VLSM with CIDR.
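A slice of these prefix/mask/size relationships can be generated with Python's standard-library `ipaddress` module. This is a sketch, not a replacement for Table 6-1; the block 203.0.113.0 is a documentation address chosen here for illustration.

```python
import ipaddress

# For each prefix length, show the dotted-quad mask a router would be
# configured with and the number of mathematically possible addresses.
for prefix in range(24, 30):
    net = ipaddress.ip_network(f"203.0.113.0/{prefix}")
    print(f"/{prefix}  {net.netmask}  {net.num_addresses} addresses")

# The example from the text: a /29 is the dotted-quad 255.255.255.248.
print(ipaddress.ip_network("203.0.113.0/29").netmask)  # 255.255.255.248
```

Each one-bit extension of the prefix halves the block: a /29 contains 2^(32-29) = 8 addresses, while a /24 contains 256.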
This table omits network masks /1 through /4 because they are invalid. The largest network permitted under current CIDR rules is a /5. A /5 is sometimes called a superblock because it is equivalent to eight Class A address blocks. Such an enormous address block is not made available to end-user organizations on the Internet. Instead, a /5 might be allocated to a regional registry for use in providing IP address block assignments to service providers and/or large end-user organizations in the registry's home region.

Backward Compatibility with Classical IP

Backward compatibility is always a critical issue whenever a technological advancement is proposed. CIDR was no exception. You know, based on your reading thus far in the book, that Classical IP has become obsolete. However, CIDR represented more of an extension of Classical IP than a complete rewrite. The backward compatibility was almost complete: The entire 32-bit address space was preserved, as was support for all previously assigned IP addresses. The notion of splitting an address into host and network address subfields was also retained. As you'll see throughout this chapter, CIDR represented nothing more than a complete legitimization of the classical form of VLSM. The degree of backward compatibility with Classical IP ensured CIDR's success. The classes themselves were abolished, yet the address space survived.
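The retained network/host split can be demonstrated directly: under CIDR, the dividing line between the two subfields is set by the prefix length rather than by the address's class. A minimal sketch, assuming an arbitrary illustrative address and prefix:

```python
import ipaddress

# Split 172.16.5.9/23 into its network and host subfields. Under
# Classical IP, 172.16.5.9 would have been locked to a 16-bit (Class B)
# network part; the /23 prefix moves the boundary to bit 23 instead.
iface = ipaddress.ip_interface("172.16.5.9/23")
net = iface.network
host_bits = 32 - net.prefixlen
host_part = int(iface.ip) & ((1 << host_bits) - 1)
print(net.network_address)  # 172.16.4.0
print(host_part)            # 265 (the low 9 bits, as an integer)
```

Everything else about the address is unchanged, which is the essence of CIDR's backward compatibility: the same 32 bits, just a movable boundary.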