1.1 The Growth of the Internet


In the last several years, the Internet has grown exponentially. At the close of 1993, there were approximately 623 sites on the Internet, compared with approximately 36,276,652 sites [1] at the end of 2001. Figure 1-1 shows a graphical representation of these numbers.

[1] See Robert H. Zakon, "Hobbes' Internet Timeline," 1993–2002, at www.zakon.org/robert/internet/timeline.

Figure 1-1. Exponential Growth of the Internet


To put these numbers in context, you must first look at the history of the Internet as a whole. Providing a complete history of the Internet is beyond the scope of this text, but a very detailed one can be found in Robert H. Zakon's "Hobbes' Internet Timeline." The following paragraphs discuss some of the major events in this evolution.

From the late 1950s through the end of the 1960s, significant research was done on the concept of distributed-computing networks. This work led to the activation of the ARPANET in 1969. The early 1970s brought further research into the first host-to-host protocols, the precursors of what we know today as Transmission Control Protocol over Internet Protocol (TCP/IP). These first host-based networks were interconnected via 56Kbps point-to-point circuits. The National Science Foundation's NSFNET followed in 1986. In 1988, the top backbone speed was upgraded to T1 (1.544Mbps), and again in 1991, to DS3 (44.7Mbps). During this evolution, several internetwork connection points were established for peering and connectivity through the various provider backbones.

In 1995, the NSFNET reverted to a research-only network, and the remaining networks evolved into what we now know as the Internet. The Internet now provides global connectivity to backbone or subscriber points virtually anywhere in the world.

1.1.1 Use of the Internet: Commercial and Noncommercial

As recently as 1993, the Internet was becoming a household buzzword, yet most people had no idea what it really was. All they knew, from their limited perspective, was that they could send a note to a colleague around the world and get a reply in less than 24 hours. This new method of rapid communication was quickly adopted as a business enabler. We can now send electronic greeting cards and trade stocks through an Internet connection, which has given way to a new lifestyle.

The Internet has become a symbol of the change in our busy lives and hurried schedules. Although there have been significant changes in how business is done on the Internet, it remains one of the highest-growth business sectors. Advocates of the Internet have gone so far as to say that if your business does not have a presence on the Internet, you are losing a significant amount of exposure and business potential.

This progress does not come without a price. The networked business sector has proven more volatile than almost any other in history. We read about the failed Internet-based companies, or "dot bombs," of yesterday and see the correction in how today's businesses are venture-capitalized. Today's companies have to show solid business plans to get the capital they need, and even then, they might not survive in an industry this tough. These players all need to compete, and they all need a solid infrastructure to compete on.

Noncommercial use of the Internet has evolved as well. In the early 1990s, it was considered a luxury to have a dedicated Internet connection in the home. Most people connected only through analog modems attached to their phone lines. At the time, connection speeds were as low as 14.4Kbps and gradually increased from there. As any technology matures, however, it typically becomes more available, more affordable, and more robust. Thus, the analog modems we used for connectivity in 1994 are being replaced by broadband xDSL and cable connections in 2002.

Our current Internet infrastructure is the best it has ever been. However, we still want to go faster and do more, all at lower prices! Users also demand constant uptime: They want quality service that is always available, like television. Service quality is rather predictable when conditions are constant, but with the introduction of variables (e.g., users, applications), quality becomes harder to guarantee. Juniper Networks' market advantage became apparent as the increasing demand for Internet bandwidth and service quality became a high priority in the networking industry.

1.1.2 Internet and Router Architectures

From its humble beginnings, the Internet has grown exponentially, and the routers supporting this growth have progressed in turn. This section examines that evolution: first by explaining the Integrated Services (IntServ) model and the techniques proposed to assist in the delivery of these services; then by discussing past architectures and how the IntServ model struggled to become a reality on those platforms; and finally by looking at how next-generation routers are being developed to solve IntServ and other service-related issues.

1.1.2.1 Past Architectures

In the early 1990s, before the major growth phase of the Internet, backbone transmission lines had reached DS3 (44.7Mbps) speeds. Up until this point, the demand for bandwidth had been modest. Routers of this period were based on a single central processing unit (CPU) and a monolithic software design. However, as traffic volumes began to increase and contention and congestion issues became more prevalent, we began to see a change in how these routers handled the load.

In the single-CPU architecture, there are a finite number of CPU cycles per second that the operating system can devote to routine or ad hoc tasks. Early attempts at traffic engineering were nothing more than additional processes for the CPU to handle. The CPU was already dealing with processes such as packet forwarding, routing table lookups, access control lists (ACLs), and protocol processing, in addition to normal system-management functions. As in any situation where resources are finite, you must optimize them. The problem is that there is always a point where optimization ends and best effort begins. Prior to any end-to-end traffic engineering, packet delivery was always done on a best-effort basis. When best effort no longer provided the necessary level of service, other methods were developed.

In addition to the single-CPU architecture, routers had monolithic operating systems. A router operating system typically performs four tasks:

  1. Process Management

  2. Memory Management

  3. File System Management

  4. I/O Device Management

These functions can be performed in one of two ways:

  1. At the user level

  2. At the system level

In a monolithic structure, any management function or module can call any other function or module because they are all running essentially at the system level. When operating systems are designed in a monolithic fashion, they typically have very large kernels, and the larger the kernel, the greater the potential for instability.

In addition to kernel issues, the router might also have stability problems with individual processes. For instance, in a monolithic structure, if a routing process stops working, it may cause other processes, such as forwarding, to lock up as well, which can snowball into a systemwide problem. Such lockups usually require a router reboot.

In the early 1990s, when the networking industry really began to evolve, computing platforms were just beginning to gain momentum. So, as the need for functionality grew, upward scalability hit a ceiling. In 1993, a 66-MHz Intel 486 processor was typical, and routing platforms were being developed to take advantage of similar processor architectures and speeds. Meanwhile, distributed-computing networks were becoming commonplace, and enterprise networks were growing fairly large, in some cases spanning the globe. With this increase in disparate networks, the amount of wide-area network (WAN) traffic increased as well.

When it came to computing power and software design, centralized processing (a single CPU) and monolithic software became the dominant architecture. This type of platform scaled poorly. As long as the routing architecture was very stable and packet-forwarding rates were low, there was not much of a problem. But as soon as new traffic-engineering methods and larger demands started hitting the marketplace, routers began to experience problems due to these limitations. One of the major limiters was the need to provide IntServ end-to-end while still maintaining a certain level of quality. These technologies, as the next section explains, require performance guarantees based on the particular service being used.

1.1.2.2 IntServ

IntServ consists of voice, video, and data, real-time or not, over a single communications infrastructure: the Internet. The Internet is the place where you can find virtually all application technologies at work: voice communications, video conferencing solutions, streaming services, other multimedia applications, and typical data traffic. The goal of the network provider is to enable these services with end-to-end quality of service (QoS). The old best-effort delivery model still works for many services, such as e-mail and File Transfer Protocol (FTP), but voice and video are less tolerant of that type of delivery and require newer delivery models using QoS. The IntServ model was defined in Request for Comments (RFC) 1633, "Integrated Services in the Internet Architecture: An Overview." The following excerpt from RFC 1633 provides a solid definition of Integrated Services: "The [IntServ] model we are proposing includes two sorts of service targeted towards real-time traffic: guaranteed and predictive service." The phrase "guaranteed and predictive service" says quite a bit in itself, as this is what everyone is marketing in one form or another. On review, you cannot help but see how this model can solve quite a number of our problems. It also motivates companies like Juniper Networks to create purpose-built products.

To achieve end-to-end guaranteed quality, several technologies have been introduced, such as Resource Reservation Protocol (RSVP) and Differentiated Services (DiffServ). Again, we are looking at a model and not necessarily at a single method. These technologies can be used to assist in the end-to-end QoS function, and they complement one another rather than compete.

End-to-end QoS can be achieved through a variety of methods. RSVP, for instance, is used as a signaling protocol to reserve resources through a network for a given flow. DiffServ, on the other hand, is typically applied hop by hop: classes of service are defined, and forwarding decisions are made on a per-hop basis. These services can be used in conjunction with one another or independently. So, you can see that there are mechanisms in the IntServ model that can be implemented to overcome the typical best-effort delivery model for network traffic.

RSVP is used to reserve a path within one or more networks for end-to-end communications. In other words, it is used to establish a flow from one endpoint to another. When the flow is complete, RSVP can then tear down the reservation and free up resources for other pending reservations. It provides the signaling required to set up these reservations based on IntServ-defined service classes. Ultimately, these service classes need to be defined with some equivalency between multiple networks. As with any QoS implementation, interdomain QoS is a concern: the administrators of one autonomous system (AS) are unlikely to allow an outside administration automated or manual control of their QoS mechanisms. So, commonality can be predicted only within a single AS.
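As an illustration, the following is a minimal JUNOS configuration sketch for an RSVP-signaled label-switched path with a bandwidth reservation. The interface name, LSP name, and addresses are hypothetical, and a production configuration would include more than is shown here:

    [edit protocols]
    rsvp {
        interface so-0/0/0.0;        # enable RSVP signaling on the core-facing interface
    }
    mpls {
        label-switched-path example-lsp {
            to 192.168.24.1;         # egress router of the reserved path
            bandwidth 10m;           # signal a 10Mbps reservation along the path
        }
        interface so-0/0/0.0;
    }

Each router along the signaled path admits or rejects the reservation based on its available resources, which is exactly the admission-control behavior the IntServ model calls for.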

DiffServ uses a concept known as the differentiated services code point (DSCP), which redefines the type-of-service (ToS) bits in the Internet Protocol version 4 (IPv4) header and the corresponding traffic-class bits in the Internet Protocol version 6 (IPv6) header. Service-class parameters are set through the DSCP bits, and these bits can be given meaning on a per-hop basis.
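The DSCP occupies the six high-order bits of the IPv4 ToS byte; the well-known Expedited Forwarding (EF) code point, for example, is binary 101110, or decimal 46. The following is a minimal sketch of how a JUNOS class-of-service classifier might bind the EF code point to a forwarding class; the classifier name and interface are hypothetical:

    [edit class-of-service]
    classifiers {
        dscp example-classifier {
            forwarding-class expedited-forwarding {
                loss-priority low code-points ef;   # EF = binary 101110, decimal 46
            }
        }
    }
    interfaces {
        so-0/0/0 {
            unit 0 {
                classifiers {
                    dscp example-classifier;        # classify inbound packets by DSCP
                }
            }
        }
    }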

1.1.2.3 Next-Generation Platforms and Architectures

There are several approaches to the delivery of end-to-end quality. Traffic-engineering concepts are being developed through the use of technologies such as DiffServ and Multiprotocol Label Switching (MPLS). One such method is bandwidth brokering, which allows a single network administration to dynamically manage the bandwidth requirements for a given flow of traffic. The model allows a single manager to control the flows within a single routing domain. If the flow for a given destination needs to traverse multiple domains, the local bandwidth broker requests the resources from the next domain, and so on, until the traffic reaches its destination. Bandwidth brokering takes advantage of the DSCP. For more information on bandwidth brokering, see "Bandwidth Brokers" by Lance Freedman, Steve Ni, Jason Pinkett, and Laura Welsh of the Virginia Polytechnic Institute and State University Department of Engineering, at www.ee.vt.edu/~ldasilva/6504/BB_rep.doc.

MPLS traffic engineering also allows us to build network topologies designed to optimize delivery for services such as voice and video. Through such traffic-engineering techniques, successful end-to-end QoS is achievable, as sketched below.
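For example, a label-switched path can be pinned to an explicitly engineered route so that delay-sensitive traffic avoids congested links. A minimal JUNOS sketch follows; the LSP name, path name, and addresses are hypothetical:

    [edit protocols mpls]
    label-switched-path video-lsp {
        to 192.168.24.1;             # egress router for the engineered traffic
        primary via-pop3;            # prefer the explicitly routed path
    }
    path via-pop3 {
        10.10.10.3 loose;            # steer the LSP through this transit point
    }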

Before the Internet boom of the mid-1990s, traffic volume and quality were not very significant issues in the Internet. However, once physical platforms began to reach line-rate forwarding and available bandwidth was depleted, end-to-end service began to be a problem. Certainly, no single technology can solve all the congestion and contention issues associated with exponential growth. However, two distinct developments helped eliminate some of the bottleneck problems and got packets moving through the network somewhat better.

With Internet traffic jams becoming a reality, relief was necessary. One method was the simple introduction of "fatter pipes," or larger-bandwidth circuits. Internet Protocol over Asynchronous Transfer Mode (IP/ATM) networks were also gaining popularity. The larger data pipes and the use of IP/ATM overlay networks resolved a great number of the congestion and contention issues.

During this same period, MPLS was being developed. The concept was that with labels only 4 bytes in length, lookup and forwarding could occur at much faster rates. (However, as you will see later in this book, MPLS is no longer used in the way it was originally planned, and IP/ATM overlay networks are quickly being replaced by high-speed Synchronous Optical Network (SONET) transmission systems that create even larger bandwidth pipes.) At the same time, other advances in technology were under development. Although some of this technology was not fully exploited, a true leader emerged in Juniper Networks.
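The 4-byte MPLS header carries a 20-bit label value, 3 experimental bits, a bottom-of-stack bit, and an 8-bit time-to-live field, so a transit router needs only an exact match on the label rather than a longest-prefix match on the destination IP address. On a JUNOS router, the resulting label table can be inspected with the show route table mpls.0 command; the output below is illustrative only:

    user@router> show route table mpls.0

    mpls.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
    + = Active Route, - = Last Active, * = Both

    100004             *[RSVP/7] 00:05:12, metric 1
                        > to 10.10.10.2 via so-0/0/0.0, label-switched-path example-lsp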

When Juniper introduced its first product, the M40, it reinvented the wheel, so to speak; the wheel was now more efficient, however. Instead of using a monolithic operating system, Juniper took a microkernel approach. Instead of relying on a single CPU for processing power, it moved to a distributed-processing architecture. This move was made possible by the advent of high-performance processors and high-performance application-specific integrated circuits (ASICs). Now, instead of routing and forwarding running on the same processor, separate processors and ASICs handle each task. Software no longer runs within the confines of one huge module; it is controlled by a kernel making calls to the dedicated processors. This efficiency becomes apparent when a single function has a problem: you can restart that process without having to reboot the entire router. This is one of the real strengths of the Juniper Networks operating system (JUNOS): no more rebooting the entire router when a single process gets stuck.
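For example, if the routing protocol process (rpd) misbehaves, it can be restarted from the JUNOS command line while the forwarding hardware continues to pass packets. The command is real; the output shown is illustrative and varies by release:

    user@router> restart routing
    Routing protocol daemon started, pid 2981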

As they have in the past, future Internet architectures will rely heavily on the underlying physical technology. As advances are made and incorporated into our industry, network service providers will be in a better position to provide the necessary levels of service.


