3.2. Design Principles of the Internet


The predecessor to today's Internet, called ARPANET (Advanced Research Projects Agency Network), was born in the late 1960s when computers were not present in every home and office.[2] Rather, they existed in universities and research institutions, and were used by experienced and knowledgeable staff for scientific calculations. Computer security was viewed as purely host security, not network security, as most hosts were not networked yet. As those calculations became more advanced and computers started gaining a significant presence in research activities, people realized that interconnecting research networks and enabling them to talk to each other in a common language would advance scientific progress. As it turned out, the Internet advanced more than just the field of science: it transformed and revolutionized all aspects of human life, and introduced a few new problems along the way.

[2] For a historical overview of the ARPANET, see the University of Texas timeline at http://www.cs.utexas.edu/users/chris/think/ARPANET/Timeline/

3.2.1. Packet-Switched Networks

The key idea in the design of the Internet was the idea of the packet-switched network [Kle61, Bar64]. The birth of the Internet happened in the middle of the Cold War, when the threat of global war was hanging over the world. Network communications were already crucial for many critical operations in the national defense, and were performed over dedicated lines through a circuit-switched network. This was an expensive and vulnerable design. Government agencies had to pay to set up the network infrastructure everywhere they had critical nodes, then set up communication channels for every pair of nodes that wanted to talk to each other. During this communication, the route from the sender to the receiver was fixed. The intermediate network reserved resources for the communication that could not be shared by information from other source-destination pairs. These resources could only be freed once the communication ended. Thus, if the sender and the receiver talked infrequently but in high-volume bursts, they would tie down a considerable amount of resources that would lie unutilized most of the time.

The bigger problem was the vulnerability of the communication to intermediate node failures. The dedicated communication line was as reliable as the weakest of the intermediate nodes comprising it. A single node or link failure led to the tear-down of the whole communication line. If another line was available, a new channel between the sender and the receiver had to be set up from the start. Nodes in circuit-switched networks had only a few lines available, which were high-quality leased lines dedicated to point-to-point connectivity between computers. These were not only very expensive, but made the network topology extremely vulnerable to physical node and link failures and consequently could not provide reliable communications in case of targeted physical attack. A report [Bar64] not only discussed this in detail, but also offered simulation results showing that adding more nodes and links to a circuit-switched network only marginally improved the situation.

The packet-switched network emerged as a new paradigm of network design. This network consists of numerous low-cost, unreliable nodes and links that connect senders and receivers. The low cost of network resources facilitates building a much larger and more tightly connected network than in the circuit-switched case, providing redundancy of paths. The reliability of communication over the unreliable fabric is achieved through link and node redundancy and robust transport protocols. Instead of having dedicated channels between the sender and the receiver, network resources are shared among many communication pairs. The senders and receivers communicate through packets, with each packet carrying the origin and the destination address, and some control information in its header. Intermediate nodes queue and interleave packets from many communications and send them out as fast as possible on the best available path to their destination. If the current path becomes unavailable due to node or link failure, a new route is quickly discovered by the intermediate nodes and subsequent packets are sent on this new route. To use the communication resource more efficiently, links may have variable data rates. To compensate for the occasional discrepancy in the incoming and outgoing link rates, and to accommodate traffic bursts, intermediate nodes use store-and-forward switching: they store packets in a buffer until the link leading to the destination becomes available. Experiments have shown that store-and-forward switching can achieve a significant advantage with very little storage at the intermediate nodes [Bar64].
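
To make the store-and-forward idea concrete, here is a minimal Python sketch (illustrative only; names such as StoreAndForwardNode and link_rate_pps are invented for this example and do not come from the text). It models an intermediate node that buffers packets from many communications in one shared queue and drains that queue at the rate of its outgoing link, absorbing short bursts in the buffer.

    # Minimal sketch of store-and-forward switching: a node buffers incoming
    # packets and drains them onto the outgoing link as capacity permits,
    # interleaving packets from many source-destination pairs.
    from collections import deque

    class StoreAndForwardNode:
        def __init__(self, link_rate_pps):
            self.buffer = deque()               # shared queue for all communications
            self.link_rate_pps = link_rate_pps  # packets the outgoing link carries per tick
        def receive(self, packet):
            # Packets arrive from many senders and are queued in arrival order.
            self.buffer.append(packet)
        def forward(self):
            # Each tick, send as many buffered packets as the outgoing link allows.
            sent = []
            for _ in range(min(self.link_rate_pps, len(self.buffer))):
                sent.append(self.buffer.popleft())
            return sent

    node = StoreAndForwardNode(link_rate_pps=2)
    for pkt in [("A", "B", "data1"), ("C", "D", "data2"), ("A", "B", "data3")]:
        node.receive(pkt)   # each packet's header carries (source, destination, payload)
    print(node.forward())   # the burst is absorbed by the buffer and drained at link rate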

The packet-switched network paradigm revolutionized communication. All of its design principles greatly improved transmission speed and reliability and decreased communication cost, leading to the Internet we know today: cheap, fast, and extremely reliable. However, they also created a fertile ground for misuse by malicious participants. Let's take a closer look at the design principles of the packet-switched network.

  • There are no dedicated resources between the sender and the receiver. This idea allowed a manifold increase in network throughput by multiplexing packets from many different communications. Instead of providing a dedicated channel with the peak bandwidth for each communication, packet-switched links can support numerous communications by taking advantage of the fact that peaks do not occur in all of them at once. A downside of this design is that there is no resource guarantee, and this is exactly why aggressive DDoS attack traffic can take over resources from legitimate users. Much research has been done on fair resource sharing and resource reservation at the intermediate nodes, to offer service guarantees to legitimate traffic in the presence of malicious users. While resource reservation and fair sharing ensure balanced use among legitimate users, they do not solve the DDoS problem. Resource reservation protocols are sparsely deployed and thus cannot make a large impact on DDoS traffic. Resource-sharing approaches assign a fair share of a resource to each user (e.g., bandwidth, CPU time, disk space). In the Internet context, the problem of establishing user identity is exacerbated by IP spoofing. An attacker can thus fake as many identities as he wants and monopolize resources in spite of the fair sharing mechanism. Even if these problems were solved, an attacker could effectively slow service to legitimate users to an unacceptable rate by compromising enough machines and using their "fair shares" of the resources.

  • Packets can travel on any route between the sender and the receiver. Packet-switched network design facilitates the development of dynamic routing algorithms that quickly discover an alternative route if the primary one fails. This greatly enhances communication robustness as packets in flight can take a different route to the receiver from the one that was valid when they were sent. The route change and selection process are performed by the intermediate nodes directly affected by the route failure, and are transparent to the other participants, including the sender and the receiver. This facilitates fast packet forwarding and low routing message overhead, but has a side effect that no single network node knows the complete route between the packet origin and its destination.

    To illustrate this, let us observe the network in Figure 3.1. Assume that the path from A to B leads over one node, C, and that a node D is somewhere on the other side of the Internet. If D ever sees a packet from A to B, it cannot infer whether the source address (A) is fake, or whether C somehow failed and the packet is trying to follow an alternative path. Therefore, D will happily forward the packet to B. This example, sketched in code after this list, illustrates why it is difficult to detect and filter spoofed packets.[3] IP spoofing is one of the centerpieces of the DDoS problem. It is not necessary for many DDoS attacks, but it significantly aggravates the problem and challenges many DDoS defense approaches.

    [3] Packets that have a fake source IP address

    Figure 3.1. Routing in a packet-switched network


  • Different links have different data rates. This is a logical design principle, as some links will naturally be more heavily used than others. The Internet's usage patterns caused it to evolve a specific topology, resembling a spider with many legs. Nodes in the Internet core (the body of the spider) are heavily interconnected, while edge nodes (on the spider's legs) usually have one or two paths connecting them to the core. Core links provide sufficient bandwidth to accommodate heavy traffic from many different sources to many destinations, while the links closer to the edges need only support the end network traffic and need less bandwidth. A side effect of this design is that the traffic from the high-bandwidth core link can overwhelm the low-bandwidth edge link if many sources attempt to talk to the same destination simultaneously, which is exactly what happens during DDoS attacks.
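
To make the Figure 3.1 argument concrete, the following short Python sketch (purely illustrative; the table contents and names are invented) shows why a router such as D treats a spoofed packet exactly like a genuine one: the forwarding decision consults only the destination address, never the source.

    # Hypothetical forwarding table at node D: the next hop is chosen per
    # destination only, so the source field plays no role in the decision.
    forwarding_table_at_D = {
        "B": "link_to_B",
        "A": "link_to_C",
    }

    def forward(packet):
        next_hop = forwarding_table_at_D.get(packet["dst"], "default_route")
        # The source address is never consulted: D cannot tell whether the
        # packet really originated at A or carries a forged source address.
        return next_hop

    genuine = {"src": "A", "dst": "B"}
    spoofed = {"src": "A", "dst": "B"}   # injected by an attacker elsewhere in the network
    print(forward(genuine), forward(spoofed))   # identical treatment for both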

3.2.2. Best-Effort Service Model and End-to-End Paradigm

The main purpose of the Internet is to move packets from source to destination quickly and at a low cost. In this endeavor, all packets are treated as equal and no service guarantees are given. This is the best-effort service model, one of the key principles of the Internet design. Since routers need only focus on traffic forwarding, their design is simple and highly specialized for this function.

It was understood early on that the Internet would likely be used for a variety of services, some of which were unpredictable. In order to keep the interconnection network scalable and to support all the services a user may need now and in the future, the Internet creators decided to make it simple. The end-to-end paradigm states that application-specific requirements, such as reliable delivery (i.e., assurance that no packet loss occurred), packet reorder detection, error detection and correction, quality-of-service requirements, encryption, and similar services, should not be supported by the interconnection network but by the higher-level transport protocols deployed at the end hosts: the sender and the receiver. Thus, when a new application emerges, only the interested end hosts need to deploy the necessary services, while the interconnection network remains simple and invariant. The Internet Protocol (IP) manages basic packet manipulation, and is supported by all routers and end hosts in the Internet. End hosts additionally deploy a myriad of other higher-level protocols to get specific service guarantees: Transmission Control Protocol (TCP) for reliable packet delivery; User Datagram Protocol (UDP) for simple traffic streaming; Real Time Protocol (RTP), Real Time Control Protocol (RTCP), and Real Time Streaming Protocol (RTSP) for streaming media traffic; and Internet Control Message Protocol (ICMP) for control messages. Even higher-level services are built on these, such as file transfer, Web browsing, instant messaging, e-mail, and videoconferencing.
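
This division of labor is visible in any end host's socket interface: the same best-effort IP forwarding underneath supports very different service guarantees, chosen entirely at the ends. A minimal Python sketch follows (the address in the comment is a placeholder, not a real endpoint):

    # The network only forwards IP packets; the end host picks the transport
    # protocol that supplies the guarantees its application needs.
    import socket

    # Reliable, ordered byte stream: TCP implements retransmission, reordering,
    # and congestion control entirely in the communicating hosts' stacks.
    tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Bare datagram service: UDP adds almost nothing to IP, leaving reliability
    # and ordering, if needed, to the application itself.
    udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # e.g., tcp_sock.connect(("192.0.2.10", 80)) would engage TCP's handshake and
    # reliability machinery with no cooperation from intermediate routers.
    tcp_sock.close()
    udp_sock.close()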

The end-to-end argument is frequently understood as justification not to add new functionalities to the interconnection network. Following this reasoning, DDoS defenses located in the interconnection network would not be acceptable. However, the end-to-end argument, as originally stated, did not claim that the interior of the network should never again be augmented with any functionality, nor that all future changes in network behavior had to be implemented only on the ends. The crux of this argument was that only services that were required for all or most traffic belonged in the center of the network. Services that were specific to particular kinds of traffic were better placed at the edge of the network.

Another component of the end-to-end argument was that security functions (which include access control and response functions to mitigate attacks) were the responsibility of the edge devices (i.e., end hosts) and not something the network should do. This argument assumes that owners of end hosts:

  • Have the resources, including time, skills, and tools, to ensure the security of every end host

  • Will be able to notice malicious activity themselves and take response actions quickly

It also assumes that compromised hosts will themselves not become a threat to the availability of the network to other hosts.

These assumptions have increasingly proven to be incorrect, and network stability became a serious problem in 2003 and 2004 due to rampant worms and bot networks. (The mstream DDoS program, for example, caused routers to crash as a result of the way it spoofed source addresses, as did the Slammer worm.)

Taking that into account, there is a good case for putting DDoS defense mechanisms in the core of the network, since DDoS attacks can leverage any sort of packet whatsoever, and pure flooding attacks cannot be handled at an edge once they achieve a volume greater than the edge connection's bandwidth. DDoS defense mechanisms that add general defenses against attacks using any kind of traffic are not out of bounds by the definitions of the end-to-end argument, and should be considered, provided they can be demonstrated to be safe, effective, and cheap, the latter especially for ordinary traffic when no attack is going on. It is not clear that any of the currently proposed DDoS defense mechanisms requiring core deployment meet those requirements yet, and obviously any serious candidate for such deployment must do so before it should even be considered for actual insertion into the routers making up the core of the Internet. However, both proponents and critics of core DDoS defenses should remember that the authors of the original end-to-end argument put it forward as a useful design principle, not absolute truth.

Critiques of DDoS defense solutions based solely on violation of the end-to-end argument miss the point. On the other hand, critiques of particular components of DDoS defense solutions on the basis that they could be performed as well or better on the edges are proper uses of the end-to-end argument.

The two ideas discussed above, the best-effort service model and the end-to-end paradigm, essentially define the same design principle: The core network should be kept simple; all the complexity should be pushed to the edge nodes. Thanks to this simplicity and division of functionalities between the core and the edges, the Internet easily met the challenges of scale, the introduction of new applications and protocols, and a manifold increase in traffic while remaining a robust and cheap medium with ever-increasing bandwidth and speed. A downside of this simple design becomes evident when one of the parties in the end-to-end model is malicious and acts to damage the other party. Since the interconnection network is simple, intermediate nodes do not have the functionality necessary to step in and police the violator's traffic.

This is exactly what happens in DDoS attacks, IP spoofing, and congestion incidents. The problem first became evident in October 1986 when the Internet suffered a series of congestion collapses [Nag84]. End hosts were simply sending more traffic than could be supported by the interconnection network. The problem was quickly addressed by the design and deployment of several TCP congestion control mechanisms [Flo00]. These mechanisms augment end-host TCP implementations to detect packet drops as a sign of congestion and respond to them by rapidly reducing the sending rate. However, it soon became clear that end-to-end flow management cannot ensure a fair allocation of resources in the presence of aggressive flows. In other words, those users who would not deploy congestion control were able to easily steal bandwidth from well-behaved congestion-responsive flows. As congestion builds up and congestion-responsive flows reduce their sending rate, more bandwidth becomes available for the aggressive flows that keep on pounding.
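
To see why this end-host fix helps only when senders cooperate, consider a minimal sketch (a deliberate simplification, not the precise TCP algorithm) of the additive-increase, multiplicative-decrease rule that congestion-responsive flows follow; a sender that simply skips the decrease step keeps its full rate while well-behaved flows back off.

    # Simplified additive-increase, multiplicative-decrease (AIMD) update:
    # grow the congestion window slowly while all is well, cut it sharply
    # when a packet loss signals congestion.
    def aimd_update(cwnd, loss_detected, increase=1.0, decrease_factor=0.5, min_cwnd=1.0):
        if loss_detected:
            # Loss is treated as a congestion signal: back off multiplicatively.
            return max(min_cwnd, cwnd * decrease_factor)
        # Otherwise probe for spare bandwidth by growing the window additively.
        return cwnd + increase

    cwnd = 1.0
    for loss in [False, False, False, True, False, False]:
        cwnd = aimd_update(cwnd, loss)
        print(round(cwnd, 2))
    # An aggressive sender that ignores the loss signal never reduces its rate,
    # which is exactly how it steals bandwidth from congestion-responsive flows.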

This problem was finally handled by violating the end-to-end paradigm and enlisting the help of intermediate routers to monitor and police bandwidth allocation among flows to ensure fairness. There are two major mechanisms deployed in today's routers for congestion avoidance purposes: active queue management and fair scheduling algorithms [BCC+98]. A similar approach may be needed to completely address the DDoS problem. We discuss this further in Section 5.5.3.
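
As a rough illustration of the active queue management idea (in the spirit of RED, heavily simplified; the thresholds and drop probability below are arbitrary example values, not recommended settings), a router can begin dropping arriving packets probabilistically as its average queue grows, rather than waiting for the buffer to overflow:

    import random

    def red_style_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
        # Decide whether to drop an arriving packet based on the average queue length.
        if avg_queue < min_th:
            return False      # queue is short: accept everything
        if avg_queue >= max_th:
            return True       # queue is long: drop to protect the outgoing link
        # In between, drop with a probability that rises as the queue grows.
        drop_prob = max_p * (avg_queue - min_th) / (max_th - min_th)
        return random.random() < drop_prob

    for q in [2, 8, 12, 20]:
        print(q, red_style_drop(q))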

3.2.3. Internet Evolution

The Internet has experienced immense growth in size and popularity since its creation. The number of Internet hosts has been growing exponentially and currently (in 2004) there are over 170 million computers online. Thanks to its cheap and fast message delivery, the Internet has become extremely popular and its use has spread from scientific institutions into companies, government, public works, schools, homes, banks, and many other places.

This immense growth has also brought on several issues that affect Internet security.

  • Scale. In the early days of the ARPANET, there was a maximum of 64 hosts allowed in the network, and if a new host needed to be added to the network, another had to leave. In 1971, there were 23 hosts and 15 connection nodes (nowadays called routers). In August 1981, when these restrictions no longer applied (NCP, with 6-bit address fields allowing only 64 hosts, was being phased out; it was being replaced by TCP, which was specified in 1974 and initially deployed in 1975), there were only 213 hosts online. By 1983, there were more than 1,000, by 1987 more than 10,000, and by 1989 (when the ARPANET was shut down) more than 100,000 hosts. In January 2003, there were more than 170 million Internet hosts. It is quite feasible to manage several hundred hosts, but it is impossible to manage 170 million of them. Poorly managed machines tend to be easy to compromise. Consequently, in spite of continuing efforts to secure online machines, the pool of vulnerable (easily compromised) hosts does not get any smaller. This means that attackers can easily enlist hundreds or thousands of agents for DDoS attacks, and will be able to obtain even more in the future.

  • User profile. A common ARPANET user in the 1970s was a scientist who had a fair knowledge of computers and accessed a small, fairly constant set of machines at remote sites that typically ran a well-known and static set of software. A large number of today's Internet users are home users who need Internet access for Web browsing, game downloads, e-mail, and chat. Those users usually lack the knowledge to properly secure and administer their machines. Moreover, they commonly download binary files (e.g., games) from unknown Internet sites or receive them in e-mail. A very effective way for the attacker to spread his malicious code is to disguise it to look like a useful application (a Trojan horse), and post it online or send it in an e-mail. The unwitting user executes the code and his machine gets compromised and recruited into the agent army. An ever-growing percentage of Internet users are home users whose machines are constantly online and poorly secured, representing an easy recruiting pool for an attacker assembling a DDoS agent army.

  • Popularity. Today, Internet use is no longer limited to universities and research institutions, but permeates many aspects of everyday life. Since connectivity plays an important role in many businesses and infrastructure functions, it is an attractive target for the attackers. Internet attacks inflict great financial damage and affect many daily activities.

The evolution of the Internet from a wide-area research network into the global communication backbone exposed security flaws inherent in the Internet design and made the task of correcting them both pressing and extremely challenging.

3.2.4. Internet Management

The way the Internet is managed creates additional challenges for DDoS defense. The Internet is not a hierarchy but a community of numerous networks, interconnected to provide global access to their customers. As early as the days of NSFnet,[4] there existed little islands of self-managed networks as part of the noncommercial network. Each of the Internet networks is managed locally and run according to policies defined by its owners. There is no central authority. Thanks to this management approach, the Internet has remained a free medium where any opinion can be heard. On the other hand, there is no way to enforce global deployment of any particular security mechanism or policy. Many promising DDoS defense solutions need to be deployed at numerous points in the Internet to be effective, as illustrated in Chapter 5. IP spoofing is another problem that will likely need a distributed solution. The distributed nature of these threats will make it very difficult for single-node solutions to counteract them. However, the impossibility of enforcing global deployment makes highly distributed solutions unattractive. See Chapter 5 for a detailed discussion of defense solutions and their deployment patterns.

[4] NSFnet, founded in 1986, was the National Science Foundation's offshoot of the ARPANET, meant to connect educational and research institutions.

Due to privacy and business concerns, network service providers typically do not wish to provide information on cross-network traffic behavior and may be reluctant to cooperate in tracing attacks (see [Lip02] for further discussion of Internet tracing challenges and possible solutions, as well as the legal issues in Chapter 8). Furthermore, there is no automated support for tracing attacks across several networks. Each request needs to be authorized and put into effect by a human at each network. This introduces large delays. Since many DDoS attacks are shorter than a few hours, they will likely end before agent machines can be located.


