Interim Solutions


The old saw still cuts true: Every little bit helps. In that spirit, the IETF attacked in all directions as it tried to stave off the impending, projected Date of Doom. Some of the measures, as you've seen so far in this chapter, were fairly dramatic, big-ticket items. Others were remarkably simple and easy, yet effective. In this section, I'll show you some of the simple items that helped the Internet cope with its addressing crisis. These efforts included some measures to increase the pool of available address space. Others were intended to set the stage for abolishing class-based network architectures in favor of a classless architecture.

Here are some of the quick fixes employed by the IETF:

  • Emergency reallocation of classful address spaces

  • Emergency rationing of the remaining address spaces

  • Urging holders of over-sized class-based addresses to return any unused or underused blocks

  • Reserving address blocks for use in private networks but not across the Internet

  • Changing the criteria for obtaining address spaces to make it significantly more difficult for enterprises and organizations to get their own address space

Each of these tactics helped, in some small way, to stave off the Date of Doom. Cumulatively, they revolutionized how the Internet is accessed and used. Internet users are still experiencing the impacts of these changes today, sometimes for the first time. Throughout the remainder of this chapter, we'll examine each of these tactics and their impacts on IP network users so that you can better appreciate some of the subtle challenges inherent in obtaining and managing an IP address space in today's environment.

One of the other, more-significant proposals that emanated from the IETF's emergency effort was to more fully embrace the concepts proven in VLSM. Although VLSM was just a grassroots innovation, there was tremendous benefit in being able to flexibly devise subnet masks. Logic fairly dictates that there would be even more benefit if network masks (as opposed to just subnet masks) were equally flexible. Imagine being able to define network addresses at any bit boundary! Unfortunately, that would require an almost complete rewrite of the IPv4 address space and its supporting protocols. That's nearly as big a change as IPv6. This proposal ultimately came to be known as Classless Interdomain Routing (CIDR). CIDR would abandon class-based addresses in favor of bit-boundary network definitions. We'll look at CIDR and its architecture, symmetry, benefits, and uses in Chapter 6. For now, let's continue to focus on the IETF's quick fixes.

Emergency Reallocation

One of the simpler devices in the IETF's bag of tricks to buy time was documented in RFC 1466. This document basically called for the as-yet still-unused Class C network address space to be reallocated into larger, equal-sized blocks. These blocks could then be allocated to regional registries to satisfy the growing and increasingly global demand for Internet access.

Table 5-1 shows the new allocation scheme, including the specific address blocks that were reallocated.

Table 5-1. RFC 1466 Renumbering

CIDR Block Size    Base Address    Terminal Address
/7                 192.0.0.0       193.255.255.255
/7                 194.0.0.0       195.255.255.255
/7                 196.0.0.0       197.255.255.255
/7                 198.0.0.0       199.255.255.255
/7                 200.0.0.0       201.255.255.255
/7                 202.0.0.0       203.255.255.255
/7                 204.0.0.0       205.255.255.255
/7                 206.0.0.0       207.255.255.255


These blocks are huge. In fact, each of them represents the equivalent of two Class A network blocks. You might question how serious the impending address shortage could have been given that so many addresses were still available. That's a fair question, but you have to remember that this was a proactive initiative to forestall the impending depletion of the Class B address space. The IETF was using all the resources at its disposal to ensure that a failure did not occur.
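To put these numbers in perspective, here is a minimal sketch in Python (the language and its standard-library ipaddress module are my choice for illustration, not something from the original text) that works out the size of one of the Table 5-1 blocks and compares it to a single Class A network:

import ipaddress

# Each reallocated block in Table 5-1 spans two former Class C /8 ranges,
# which is a single /7 in CIDR notation.
block = ipaddress.ip_network("192.0.0.0/7")
class_a = ipaddress.ip_network("10.0.0.0/8")   # any Class A network is a /8

print(block.num_addresses)        # 33554432 addresses in one /7
print(class_a.num_addresses)      # 16777216 addresses in one Class A
print(block.num_addresses // class_a.num_addresses)   # 2 Class A equivalents

# Eight such /7 blocks (192.0.0.0 through 207.255.255.255) were set aside,
# so the reallocation covered the equivalent of 16 Class A networks in total.
print(8 * block.num_addresses)    # 268435456 addresses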

The important thing to note about this reallocation was that it set the stage for two things:

  • Because the large reallocated blocks were to be handed out to regional registries, route aggregation could be done much more effectively as the Internet scaled to global proportions.

  • Migration away from a class-based address architecture toward a more flexible bit-level definition. (This approach came to be known as Classless Interdomain Routing [CIDR]. It is thoroughly examined in Chapter 6.) Removing so many addresses from the pool of available Class C networks was a powerful statement of the IETF's intention to evolve away from the class-based addressing system.

Each of these aspects warrants closer inspection.

Improving Aggregatability

Reallocation of unassigned Class C network blocks enabled a dramatic improvement in route aggregatability in comparison to the previous size-based classful architecture. Aggregatability is a mouthful of a word that simply means the ability to be aggregated, or lumped together. Stop and think about that: This inventory of Class C blocks was previously handed out directly to end-user organizations solely on the basis of size. This meant there was little, if any, correlation between numeric contiguity (which is an absolute prerequisite to route aggregation) and geographic location. There was, however, a near-perfect correlation between numeric contiguity and the size of end-user networks. Unfortunately, for routing purposes, that's a useless correlation!

RFC 1466 sought to correct this deficiency inherent in the original class-based IPv4 architecture by creating an inventory of "superblocks" that could be distributed to regional Internet registries. In Chapter 1, "Developing the Internet's Technologies," you learned that registries are entities responsible for parsing address blocks within global regions. Examples that might jog your memory include APNIC, RIPE, and ARIN.

Creating very large blocks of addresses for those registries to distribute within their region automatically meant that, over time, route aggregation around the world would improve. This was deemed necessary to bring the Internet's address space into alignment with its evolving needs. Specifically, commercialization meant globalization. Globalization meant scales previously unheard of for a single internetwork. It also meant that distributing addresses in a manner that precluded any meaningful aggregation could no longer be afforded.

Introduction to a Classless Environment

The other aspect of RFC 1466's impact was to set the stage for introducing a classless address system. This aspect is very subtle and requires some thought to appreciate. The reallocation of Class C-sized networks into superblocks meant that all demands for address spaces within geographic regions would be satisfied via those superblocks, regardless of the size of those demands. Having examined the mathematics of the class-based IPv4 architecture in Chapter 2, you know how early routers determined how many bits of an IP address were used for the network portion of the address. They examined the 32-bit address (in binary) and looked to see where the first binary 0 appeared in the leftmost octet of that address. RFC 1466 undermined this old approach, because all the superblocks (and, consequently, all the network addresses created from them) were from the old Class C range. Thus, another mechanism would be absolutely essential in determining how many bits of an IP address from this range were used for the network address.
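To make the old rule concrete, the following sketch (illustrative Python of my own, not anything defined in the RFCs) mimics how a purely classful device would infer the prefix length from the leading bits of the first octet. Because every RFC 1466 superblock begins with the bit pattern 110 (first octets 192 through 207), this rule always answers /24, no matter how large a network the registry actually carved out, which is exactly why an explicit prefix-length mechanism became essential:

def classful_prefix_length(first_octet):
    """Infer the network prefix the way pre-CIDR routers did, by looking
    for the first 0 bit in the leftmost octet. Illustrative sketch only."""
    if first_octet & 0b10000000 == 0:            # 0xxxxxxx -> Class A
        return 8
    if first_octet & 0b11000000 == 0b10000000:   # 10xxxxxx -> Class B
        return 16
    if first_octet & 0b11100000 == 0b11000000:   # 110xxxxx -> Class C
        return 24
    raise ValueError("Class D/E space was not used for ordinary networks")

print(classful_prefix_length(192))   # 24
print(classful_prefix_length(207))   # 24 -- every RFC 1466 superblock looks
                                     # like a Class C, regardless of its real size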

This is where CIDR comes in. It is, unarguably, the most significant of the IPv4 enhancements to come out of the Date of Doom mobilization, and it is complex enough to warrant its own chapter. For now, suffice it to say that RFC 1466 set the stage for deploying CIDR globally. The basis for CIDR predated RFC 1466, but the version that became standard came later, in RFCs 1517 through 1520. Thus, the chronology fits. The IETF knew how it wanted to evolve the address space, so it used RFC 1466 to create a pool of addresses to support this evolution. Doing so allowed it to finalize the details of classless addressing.

The Net 39 Experiment

An initiative closely related to RFC 1466 was a series of experiments designed to demonstrate how the huge Class A address space could be used more efficiently. Dubbed the Net 39 Experiment for its use of address space 39.0.0.0, these tests validated the concept of routing to variable-length network masks. This was important because the IPv4 address space itself wasn't in danger of depletion. Indeed, plenty of Class A and C network blocks were left! Only Class B was coming under pressure. But using addresses from different numeric ranges meant completely rewriting IP's address architecture and supporting protocols. You wouldn't want to undertake such an endeavor without first testing your concepts extensively.

The Class A address space was particularly important to test because it represented fully half the total IPv4 addresses. Unfortunately, each Class A block was so huge as to be impractical for end-user organizations. Consequently, many of those blocks sat idle while the Class B space was endangered. The experiments to understand Class A subnetting were documented in RFCs 1797 and 1879. Knowing how much address space was locked away in impractically large blocks, the IETF was very motivated to find a way to use them. There were more than enough addresses there to stave off the impending addressing crisis almost indefinitely.

NOTE

RFC 1897, published in January 1996, presents an interesting juxtaposition that helps you better appreciate the stopgap nature of many of the technologies presented in this chapter. That RFC, published concurrently with many of the other RFCs mentioned in this chapter, allocated addresses for use in testing IPv6.


The output of this trial was used to figure out what would happen if an address block from one class were used in a way that was inconsistent with its original intent. For example, a Class A address (network 39.0.0.0 was used) could be assigned to an ISP. That ISP could then carve it into smaller networks for assignment to end users. Thus, a customer could be given a Class C-sized network created within 39.0.0.0. For the sake of continuing this example, let's say an ISP customer was given 39.1.1.0/24. One of the more obvious problems encountered was that routers would tend to look at 39.1.1.0/24 and treat it as 39.0.0.0/8. That was always true if a classful interior routing protocol were used. In retrospect, that shouldn't be so surprising.
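That behavior is easy to demonstrate. The short sketch below (my own Python illustration; the /24 assignment is the hypothetical one from the example above) shows how the classful view of the address collapses the customer's 39.1.1.0/24 back into the single network 39.0.0.0/8:

import ipaddress

customer_block = ipaddress.ip_network("39.1.1.0/24")   # what the ISP assigned

# A classful router ignores the /24 and keys only on the first octet:
# 39 begins with a 0 bit, so the address "must" belong to Class A network 39.0.0.0/8.
classful_view = ipaddress.ip_network("39.0.0.0/8")

print(classful_view.supernet_of(customer_block))   # True -- the /24 disappears
# Every other customer carved from 39.0.0.0 is swallowed by the same /8 route,
# so a classful IGP cannot distinguish one customer's network from another's.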

From this research came the conclusion that the Class A address space could be carved up to shore up the rapidly depleting IPv4 address spaces. However, some constraints were necessary to ensure that routing across the Internet was not adversely affected.

One of the constraints deemed necessary for the Internet to continue functioning properly was a limit on how convoluted its topology could become. The IETF recommended against end-user organizations connecting to more than one ISP, because this meant that the network block of such organizations would have to be supported by each of their ISPs. That would directly increase the number of routes that the Internet would have to support. Perhaps the most intriguing aspect of the Net 39 Experiment RFCs was that, at this stage of the IPv4 address space's development, the IETF was thinking in terms of both classful and classless (CIDR) addressing and routing. However, it recognized that you couldn't ensure problem-free internetworking if you needed to support both simultaneously. Thus, all classful interior routing protocols (classful members of the family known as Interior Gateway Protocols [IGPs]) were deemed historic. The message was clear: The tests were successful, and that opened the door for a classless address architecture, CIDR.

Emergency Call for Unused Addresses

Not wanting to overlook even the most obvious of opportunities in their quest to prop up the failing IPv4 address space, the IETF issued RFC 1917 in February 1996. Although it doesn't stipulate any technology and, consequently, could be termed an informational RFC, this document has achieved the status of an Internet Best Current Practice. It remains in effect as BCP #4.

This RFC was unarguably the simplest of the stopgap measures: It called for people to voluntarily surrender any unused or underutilized address spaces. In theory, this would result in a temporary increase in IANA's inventory of available and assignable address space and would let that inventory be parsed out in aggregatable blocks to Internet service providers (ISPs). Two primary target audiences were identified in this RFC:

  • Holders of IP address blocks that were too large for their requirements

  • Holders of IP address blocks who needed IP addresses to support IP-based applications but whose networks were isolated from the Internet

For different reasons, each of these communities represented an opportunity to reclaim addresses that were currently being wasted.

Oversized Block Assignments

The more pragmatic side of this request was an acknowledgment that many existing holders of IP address blocks were given blocks larger than they needed due to the inefficiency of the class-based architecture. In a classless environment, network prefixes can be established on any bit boundary. Thus, it is possible to more precisely tailor a network block to an Internet user community's actual needs. For example, if an organization needed 450 IP addresses, in a classful environment it would have been given a Class B address block. However, in a classless environment, it could be given a network with 512 addresses instead of 65,536. You could logically expect that a large number of the holders of Class B space would be able to return at least half of the space they were originally allocated. The point is that many very usable and routable blocks could be harvested from organizations that grabbed a Class B network space but needed only hundreds of addresses. These blocks could be freed up for reallocation to other organizations if and only if a mechanism were developed that enabled the definition of network blocks on bit boundaries instead of octet boundaries.
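The arithmetic behind that example is simply a matter of rounding up to the next power of two. A small sketch (again my own Python, which deliberately ignores the handful of host IDs reserved for the network and broadcast addresses) makes the comparison explicit:

import math

def smallest_prefix(address_count):
    """Return the longest IPv4 prefix whose block holds at least
    address_count addresses. Illustrative only; reserved host IDs ignored."""
    host_bits = math.ceil(math.log2(address_count))
    return 32 - host_bits

prefix = smallest_prefix(450)
print(prefix)               # 23 -- a /23 is the tightest fit
print(2 ** (32 - prefix))   # 512 addresses in a /23
print(2 ** 16)              # 65536 addresses in a Class B-sized /16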

Isolation from the Internet

Many companies required IP and, consequently, IP addresses to support a base of networked applications without really requiring access to the Internet. Others chose to embrace IP for their internal communication requirements but could not connect to the Internet for fear of compromising the security of their networked computing assets. In theory, obtaining legitimate IP addresses was the "right" thing to do. I can remember very specific instances in which two network engineers would be on opposite sides of this issue. One would argue that, because the network in question would never be connected to the Internet, there was no reason to obtain "legal" IP addresses. The other would invariably argue from the purists' perspective. You can't predict the future, and your requirements might change, so do it right the first time! With that supporting logic, they would advocate obtaining real IP addresses.

There was no good way to settle this argument. Neither position was clearly right or wrong. It was more a matter of what you believed. Even the existence of RFC 1597, with its private address spaces, didn't really settle the argument. Those addresses were reserved for private networks, but nothing mandated their use. With RFC 1917, the correct answer was that in the best interests of the Internet community, you shouldn't waste "real" IP addresses for an isolated network. Instead, the IETF urged you to save those addresses for use in routing across the Internet. Private IP networks, regardless of size, should use the addresses stipulated in RFC 1918 (which were previously reserved in RFC 1597, but as class-based address blocks).

The great debate had finally been settled: Don't use real IP addresses unless you need to route over the Internet. This gave the caretakers of the Internet's address space (IANA and its myriad delegate organizations) a clear directive as well as a means of enforcement.

Although that was all very well and good, and it would certainly help slow down the rate of address consumption, something still needed to be done about all the past decisions that had already been made. Quite simply, the purists of the world were sitting on a substantial number of address blocks that were being used on isolated networks! This represented a tremendous potential pool of addresses that could greatly mitigate the address crisis being experienced. The IETF and IANA would embark on an aggressive campaign to identify and reclaim unused and underused address spaces. This effort was largely successful, but, as you'll see in Chapter 13, "Planning and Managing an Address Space," it did induce some very unexpected behaviors in terms of evolving address-space management tactics!

Preserving Address Block Integrity

One of the more subtle problems plaguing the Internet during the early to mid-1990s was a side effect of address space depletion. This side effect was the rate at which the Internet's routing tables were growing. As you saw earlier in this chapter, the size of a network's routing tables is crucial to that network's end-to-end performance. Routing tables can expand for numerous reasons. Beyond the exponential growth the Internet was already experiencing, its tables were expanding for three primary reasons:

  • Class-based addresses were still being assigned to new customers, which perpetuated the legacy architecture's effects on routing tables. RFC 1466, as you just saw, was a huge step toward minimizing such legacy effects. By converting so many class-based network addresses into classless address space, the IETF sought to make as clean a break as possible from the old, obviously inefficient, class-based IPv4 addressing.

  • Customers leaving their ISPs did not return their address blocks. This forced the new ISP to support routing for those specific blocks of addresses.

  • End users of the Internet were applying for their own address blocks instead of using blocks assigned to them by an ISP.

The last two items are interrelated. Rather than trying to explore them separately, it makes more sense to my twisted way of thinking to examine them from the perspective of their practical impacts on the Internet. Essentially, how well or how poorly you manage an address space is evident in how well the routes for that space aggregate or fragment. Aggregation is rolling smaller network addresses into larger network addresses without affecting routes to those networks. For example, instead of listing 10.10.1.0/24, 10.10.2.0/24, 10.10.3.0/24, and so on up to 10.10.255.0/24, you could aggregate all those network blocks into just 10.10.0.0/16. This larger block tells the rest of the world how to access all the smaller network blocks that may be defined from that /16 block.
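A quick sketch shows the arithmetic at work (Python's standard-library ipaddress module is my choice of illustration here; note that the collapse is only complete when every /24 beneath the /16, including 10.10.0.0/24, is present):

import ipaddress

# All 256 /24 networks that sit underneath 10.10.0.0/16.
small_blocks = [ipaddress.ip_network("10.10.%d.0/24" % i) for i in range(256)]

aggregated = list(ipaddress.collapse_addresses(small_blocks))
print(aggregated)                                 # [IPv4Network('10.10.0.0/16')]
print(len(small_blocks), "->", len(aggregated))   # 256 routes -> 1 route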

Fragmentation is the opposite of aggregation: Network addresses are so numerically dissimilar and discontiguous as to defy aggregation. If aggregation isn't possible, the routers in the internetwork must remember a discrete route to each network. This is inefficient and can hurt the internetwork's performance.

Aggregation and fragmentation are best thought of as extremes, with infinite room for compromise in between. The politics of Internet governance have basically pitted Internet end-user organizations against ISPs in an ongoing struggle between these two idealistic extremes.

These impacts include route aggregation (and how it was damaged by rapid growth using Classical IP addressing rules) and the effects of directly registered customer address blocks. Examining these two impacts from a practical perspective will help you better appreciate how the Internet's address-space policies and rules have evolved over time and why things are the way they are.

Aggregation Versus Fragmentation

The addition of lots of new networks to the Internet would not necessarily contribute to the bloat of the Internet's routing tables. In theory, new networks (and their addresses) could be added to the Internet without increasing its routing tables if those network addresses were correlated regionally and/or by service provider. In other words, a clever approach to managing address assignment could accommodate tremendous growth with little to no impact on the Internet's routing tables simply by keeping numerically similar network addresses clumped together in a region. You could route from around the world to that region using only the highest-order bits of those related network addresses.

Although this might sound like a Utopian ideal, it is actually quite practical and realistic. To better demonstrate how such a scheme would work, consider the following example. An ISP is granted a relatively large block (let's say a Class B-sized block). It then carves this block into smaller blocks that it can assign to its customers. As far as the rest of the Internet is concerned, only a single routing table entry for that Class B-sized (/16) network address is required. This concept is known as route aggregation.

To see how this works, refer to Figures 5-1 and 5-2. Figure 5-1 demonstrates four ISP customers buying access to the Internet but using their own IP address space. For the sake of this example, I have used the addresses reserved in RFC 1918 instead of real addresses. The last time I used a real host address in a book, I caught an earful from the administrator of that box! Getting back to the point, even though these addresses are fictitious, they are too far apart numerically to be reliably aggregated. Thus, the ISP would have to advertise each one individually to the rest of the Internet.

Figure 5-1. ISP Supporting Directly-Registered Customer Address Blocks


Figure 5-2. Route Aggregation of an ISP Using ISP-Provided Address Blocks


In real life, a service provider would probably have hundreds of customers, but that would make for a very cluttered illustration! Figure 5-2 shows how those customers could each use a smaller block carved from the ISP's 16-bit network address.

Route aggregation, in the simplistic interpretation shown in Figure 5-2, directly contributes to a reduction in the size of the Internet's routing tables. If each of the ISP's customers had its own unique network addresses, each would have to be announced to the Internet. More to the point, each network address would require its own entry in every routing table of every router in the Internet. Instead, all four customer networks can be satisfied with a single routing table entry throughout the Internet, because the route through the Internet is the same for each of the four customer networks. The routes start to differ only within the network of their mutual service provider. This is the only part of the Internet that needs to track routes to the four distinct customer network addresses. Thus, the entire world uses just the first two octets of the service provider's network address to reach all its customer networks. Within that service provider's network, the third octet becomes significant for routing packets to their destination.
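Because the figures themselves can't show the arithmetic, here is a compact sketch of the Figure 5-2 arrangement. The /16 used is just an RFC 1918 placeholder (in the same spirit as the figures), and the four customer assignments are hypothetical:

import ipaddress

isp_block = ipaddress.ip_network("172.16.0.0/16")    # placeholder ISP block

# Four customers, each handed a /24 carved from the ISP's /16.
customers = list(isp_block.subnets(new_prefix=24))[:4]
print(customers)
# [IPv4Network('172.16.0.0/24'), IPv4Network('172.16.1.0/24'),
#  IPv4Network('172.16.2.0/24'), IPv4Network('172.16.3.0/24')]

# The rest of the Internet needs only the single /16 advertisement;
# every customer route is covered by it.
print(all(c.subnet_of(isp_block) for c in customers))   # True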

When viewed from this perspective, it seems perfectly logical that the right way to provide Internet addressing is via a service provider's larger address blocks. But some problems are inherent with this approach. Implementing an IP address scheme represents a potentially huge investment in planning and effort. From an Internet user's perspective, changing IP addresses is undesirable, because it forces you to make the same investment in time and effort without having anything to show for it.

Thus, although using service provider addresses does minimize routing table bloat, and makes sense for the Internet, you have to realize that the benefits of this approach are asymmetrically distributed. That's a euphemistic way of saying that it is better for the Internet and ISPs than it is for the owners/operators of networks that connect to the Internet. For those entities, this approach can actually be harmful!

The danger of obtaining and using IP addresses from a service provider is that the service provider "owns" them. In effect, you are leasing the addresses for the duration of your service contract with that provider. If you wanted to change service providers (maybe you found a better deal, or maybe your provider doesn't meet your performance requirements), you would have to relinquish the "leased" addresses and obtain a new range from your new provider. Changing service providers therefore would necessitate renumbering all your IP endpoints! Renumbering becomes a very effective barrier to changing service providers.

Directly-Registered Address Spaces

It's not difficult to see why ISP customers are motivated to avoid changing their IP addresses. Renumbering endpoints is that onerous and risky a proposition! The surest way to avoid having to renumber is to "own" your IP addresses. Of course, you know that nobody really "owns" IP addresses except IANA, but ownership in this sense means that an organization would have IP addresses registered directly in its name. As soon as an address block is registered directly to you, it is yours forever, provided, of course, that you don't outgrow it or forget to pay the annual fee.

In the early days of the Internet, it was relatively easy to obtain directly registered address spaces. Such spaces were said to be portable, because they were independent of service provider address blocks. Thus, the holder of a directly registered address block enjoyed the unparalleled freedom to change service providers at will without having to worry about changing address blocks at the same time.

The drawback of this approach is, of course, the impact on the Internet's routing tables. Portability, more than any other factor, fragments large, contiguous (and therefore aggregatable) address blocks. Unfortunately for those end users, the Internet's routing tables were outpacing technology in their growth. They were becoming bigger at a faster pace than CPUs were increasing in speed. Consequently, Internet performance was deteriorating quickly, and the trend didn't look good. This was one of the facets of the impending Date of Doom that the IETF sought to obviate. One of the easy scapegoats was the portability of network addresses.

Preventing Further Fragmentation

End-user organizations highly prize portable address spaces. But they are the bane of the Internet. The IETF sought to protect the Internet by more specifically constraining address assignment practices to prevent any further address space fragmentation. This effort was documented in RFC 2050. RFC 2050 is still in effect and is also Internet Best Current Practice #12. Specifically, this document stipulated the rules and regulations regarding subassignments of IP address spaces by ISPs.

The way it works is simple. An ISP's customer, if it didn't already have directly registered address blocks of its own, could obtain addresses from its service provider. However, to preserve the integrity of large service provider address blocks, those addresses had to be surrendered when the contract for service ended. In other words, these addresses were nonportable and remained with the service provider with which they were registered. That way, the ISP could advertise just a single, large network address to the Internet that encompassed all its customer networks created from that large block.

But if a customer wanted to move to a different service provider to obtain a lower monthly recurring cost, it found itself in the uncomfortable position of having to quickly renumber its entire network and all its addressed endpoints. Adding insult to injury, the range it had to migrate to was another nonportable address range supplied by the new service provider. Each time the customer wanted to change providers, it had to go through the same expensive, risky, painful process of renumbering endpoints.

RFC 2050/BCP 12 doesn't do anything to mitigate this pain for end users. Rather, it compels service providers to treat all their assignable address space as nonportable, regardless of whether any given subset of it may be globally routable. If an ISP chooses to disregard RFC 2050 and let ex-customers keep their assigned space, it will eventually exhaust its address space. That ISP will find it impossible to convince its regional Registry to entrust it with more address space. An ISP without an inventory of available network addresses cannot service any new customers. RFC 2050 seeks to prevent further fragmentation of the Internet's address space (which directly increases the size of the Internet's routing tables) by giving ISPs the incentive to preserve the integrity of their existing blocks.

Rationing Directly Registered Addresses

RFC 2050, in the absence of any other policy changes, was a tweak to the nose of the Internet's end-user community. It wasn't intended to be painful. Rather, it was designed as an emergency effort to ensure the Internet's continued operability. Such austerity measures are often palatable provided that the pain is shared somewhat equitably. Because the end-user community bore the brunt of the inconvenience caused by that RFC, they had every reason to be upset. The only mitigating factor was that those organizations could shield themselves from any pain just by having their own directly registered address spaces; that is, they could until IANA started rationing new directly registered address blocks.

This is where things started getting ugly. IANA exacerbated the routability versus portability conflict when it tightened its policy on directly registered address spaces. Organizations that wanted their "own" address spaces would have to meet very stringent requirements before that privilege would be granted. The cumbersome and bureaucratic application process alone was enough to deter most would-be applicants. Those persistent enough to successfully complete an application with their Registry quickly discovered that doing so did not guarantee their request would be granted.

Although this policy shift was a necessity caused by the impending address-space crisis, it meant that it was now almost impossible for an end-user organization to obtain its own directly registered address space. When you view this fact in conjunction with the bias against end-user organizations in RFC 2050, it becomes clear that end-user organizations bore the full brunt of the policy changes necessary to prevent the Internet's address space from collapsing.

Ostensibly, this policy shift was designed to immediately curtail the outflow of the finite supply of the remaining IPv4 addresses. This, all by itself, would buy a lot of time and forestall the crisis. However, it also forced Internet users to seriously consider alternative options for their IP addressing, such as the following:

  • Using ISP-provided addresses

  • Using nonunique addresses

  • Using a translation function between your network and the Internet

We've already looked at why using ISP-provided addressing isn't very attractive to end-user organizations. We'll look at the other two options much more closely in the next chapter. For now, just realize that none of these were very palatable, nor as convenient as directly registered address space. The fact that these unattractive options were forced on end-user organizations created the potential for a tremendous backlash that is still being experienced today.

End-User Backlash

In some cases, the backlash has been extremely aggressive and creative. The goal is simple, and it can be summed up in just one word: portability. The extent to which end-user organizations pursue this grail is remarkable. It is characterized by aggressive, if not desperate, attempts. The stakes are that high!

Part of the problem, or maybe I should say part of the game, is that there is tremendous confusion about what constitutes portability. Internet Registries, for example, interpret portability to mean global routability.

Determining global routability from a Registry's perspective is simple: Is the address block large enough to be worth routing on the Internet? That criterion has absolutely nothing to do with block ownership. As you have seen, service providers are bound by a different criterion: compliance with RFC 2050/BCP 12. Consequently, they have a different interpretation of portability. Service providers have an obligation to interpret portability to mean both global routability and directly registered to the end-user organization. They might provide a range of addresses to a customer that is large enough for routing over the Internet, but they cannot consider it portable, because it is theirs.

End-user organizations try desperately to exploit this chained ambiguity of definitions by insisting their service-provider blocks meet the criterion of global routability. Therefore, their argument continues, those addresses should be handed over when they change providers! Sometimes this argument works, and sometimes it doesn't. If it doesn't, the game doesn't end there for some end-user organizations. I've seen quite a few ex-customers simply refuse to return their address blocks! Although I won't pretend to understand the rule of law well enough to pass judgment on the legality of such hijacking of address space, I can tell you it certainly violates Internet BCP 12 and does nothing to help the Internet's performance.

Sadly, these unintended consequences caused by RFC 2050 and the emergency rationing of directly registered address spaces continue to haunt the Internet. We've spent quite a bit of time looking at them for the simple reason that they form an unavoidable constraint in today's IP network environment. The more you know about these and the other constraints, the better you'll be able to deal with them on the job.

Hijacked Address Spaces

Since its inception, RFC 2050 has done wonders for minimizing the expansion of Internet routing tables. However, it has created potentially serious tension between ISPs and their customers, or should I say their ex-customers? One of the less-heralded side effects of RFC 2050, in combination with IANA's crackdown on granting directly registered address spaces, has been the hijacking of ISP address blocks.

When a customer leaves an ISP, it is required to return its IP address space to that ISP. That ISP can, in theory, put those addresses back in its inventory of unused addresses with which it satisfies new customer requests for addresses. However, it is not uncommon for a customer to terminate service with one ISP, start service with a new ISP, and then insist that new ISP support its existing blocks. By refusing to stop using an address space, an end-user organization can effectively convert a nonportable address space to a de facto portable space! Even though this is a violation of an Internet Best Current Practice, some ISPs are reluctant to do anything to upset a paying customer. Money talks!

An ISP whose addresses have been hijacked in this manner faces few good options. ARIN and IANA don't offer any active help, but they still insist that you reclaim your blocks. I've found that contacting the ex-customer's new service provider and demanding that it immediately cease supporting the hijack of addresses meets with only mixed success. Some ISPs are much more reputable than others and will jump at the opportunity to help you reclaim your lost property. Others require a bit of coercion. Letting them know that you will be advertising the hijacked addresses as null routes and that they should expect a trouble call from an unhappy and out-of-service customer usually does the trick.




