6.3. Preparing to Handle a DDoS Attack


As in any risk management activity, preparation is crucial. Understanding how your network is organized and how it works will help you identify weak spots that may be a target of the attack. Fortifying those weak spots and organizing your network to be robust and self-contained will hinder most simple attacks and minimize the damage that can be inflicted. Finally, preparing emergency procedures, knowing your contacts, and having multiple ways to reach them (including out-of-band, in terms of your network), will enable you to respond quickly to an ongoing attack and improve your chances of weathering it.

6.3.1. Understanding Your Network

The DoS effect usually manifests itself through large network delays and loss in connectivity. Depending on the targeted resource, your whole network may experience a DoS effect, or only specific services, hosts, or subnetworks may be unavailable. Understanding how your network functions will aid in risk assessment efforts by establishing:

  • How important network connectivity is in your daily business model

  • How much it would cost to lose it

  • Which services are more important than others

  • The costs of added latency, or complete loss of connectivity, to your key services

Most businesses today rely on the public Internet for daily activities, such as e-mail, ordering supplies online, contacting customers, videoconferencing, providing Web content, and voice-over-IP services. Some of those activities may be critical for the company's business, and may have no backup solutions. For instance, if supplies are ordered daily and must be ordered through online forms, or your company uses voice-over-IP exclusively for all business telephone calls, losing network connectivity may mean stalling the business for a few days. Other activities may have alternatives that do not require Internet access: e-mails can be replaced by telephone calls, videoconferencing by conference calls or live meetings; some activities can even be postponed for a few days. In this case, Internet access increases effectiveness but is not critical for business continuity.

Some companies make their profit by conducting business over the Internet. Take, for example, a company that sells cat food through online orders. Certain products or services are at a higher risk of loss due to even short-duration DDoS attacks. These include:

  • Products with a short shelf-life that must be sold quickly, such as flowers or specialized holiday foods

  • Commodities that could easily be obtained from many sources, so customers would simply leave and go somewhere else if they cannot get immediate access, such as pornography

  • Time-critical transactions, such as betting on sports events, stock trading, mortgage applications, news delivery and major media events, and event or transportation ticket sales

  • Low-margin, high-volume purchases that require a constant transaction rate to maintain viability of the business, such as major online booksellers and airline ticket services

  • Businesses that offer free services supported by advertising, such as search engines and portals

Network connectivity is a crucial asset in these business models, and losing connectivity means losing daily revenue (possibly a lot of it). Additionally, if a company is well known, the fact that it was out of business for even a few hours can make headline news and damage its reputation, a fact that may lose them more business than a few hours of network outage.

The first step in risk assessment is making a list of business-related activities that depend on constant Internet access. Each item on the list should be evaluated for:

  • Alternative solutions that do not require Internet access

  • Frequency of the activity

  • Estimated cost if the activity cannot be performed

In addition to costs relating directly to loss of connectivity, there may be hidden costs of a DDoS attack from handling extreme traffic loads, or diverting staff attention to mitigate the problem. In some cases, diverting attention and/or causing disruption of logging is a prime motivation of some DDoS attacks, overwhelming firewall or IDS logging to blind the victim to some other attack, or allowing a blatant action to go unnoticed. An attacker who wants to slip in "under the radar" can do so better if the radar screen is filled with moving dots.

For example, a DDoS attack may fill your logs. Logging traffic may amplify the DoS effect, clogging your internal network with warning messages. Understanding how your logging is set up will help you identify hot spots ahead of time and fortify them, for instance, by providing for more log space or sending log messages out of band.
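One way to keep attack traffic from drowning your logs is to suppress repeated messages once they exceed a threshold. The sketch below illustrates the idea; the class and its parameters are hypothetical, not the API of any particular logging product.

```python
import time

class RateLimitedLogger:
    """Suppress repeated log messages so an attack cannot flood the log store.

    After `limit` copies of the same message within `window` seconds,
    further copies are dropped. (Illustrative sketch only; real systems
    would also record a count of suppressed messages.)
    """
    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.seen = {}          # message -> (window_start, count)

    def log(self, message, now=None):
        now = time.time() if now is None else now
        start, count = self.seen.get(message, (now, 0))
        if now - start > self.window:       # window expired: start fresh
            start, count = now, 0
        count += 1
        self.seen[message] = (start, count)
        if count <= self.limit:
            return message                  # would be written to the log
        return None                         # suppressed

logger = RateLimitedLogger(limit=2, window=60)
out = [logger.log("SYN flood from 10.0.0.1", now=t) for t in range(5)]
# the first two copies pass; the remaining three are suppressed
```

Syslog daemons and many IDS products offer similar rate-limiting or message-coalescing options; enabling them ahead of time is far easier than retrofitting them during an attack.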

Sophisticated DDoS attacks may manifest themselves not as abrupt service loss but as a persistent increase in incoming traffic load. This may lead you to believe that your customer traffic has increased and purchase more assets for handling additional load. Imagine your (or your stockholders') disappointment when the truth finally becomes clear. You may be paying for something that is not even necessary due to the way your ISP charges you. If you pay per byte of incoming bandwidth, this cost will skyrocket in such a subtle, slowly increasing attack. Some ISPs will be willing to waive this cost, but others will not. Whether they do may depend on how long the situation existed before you noticed it. Understanding how conditions of your service agreement apply to the DDoS attack case ahead of time will enable you to negotiate the contract or change the provider. Other hidden costs include increased insurance premiums and legal costs for prosecuting the attacker. If your assets are misused to launch the attack on someone else, you may even face civil liability.

Once critical services are identified, it is important to understand how they depend on other services. For instance, e-mail service needs DNS to function properly. If e-mail is deemed critical, then DNS service is also critical and must be protected.

6.3.2. Securing End Hosts on Your Network

Preparation for dealing with both phases of DDoS attacks starts with addressing end-host security issues. These include reducing vulnerabilities that could result in compromise of systems, and tuning systems for high-performance and high-resilience against attack.

Reducing Vulnerabilities on End Hosts

While the most common strategy for creating the DoS effect is to generate excess traffic, if the target host has a software vulnerability, misusing it may take only a few packets that shut down the host and effectively deny service with much less effort and exposure to the attacker. There are many attacks that function this way. Additionally, all techniques for acquiring agent machines are based on exploiting some vulnerability to gain access. Fixing vulnerabilities on your systems not only improves your security toward many threats (worms, viruses, intrusions, denial of service), it also makes you a good network citizen who will not participate in attacks on other sites.

It is not uncommon today for applications and operating systems that run on end hosts and routers to have bugs that require regular patching and upgrading. Many vendors have an automatic update system to which you can subscribe. This system will inform you when new vulnerabilities have been discovered, and it will usually deliver patches and updates to your hosts, ready to be installed. For example, Microsoft maintains a Windows update Web site (http://www.windowsupdate.com) where users can have their machines scanned for vulnerabilities and obtain relevant patches and updates. Users subscribing to automatic Windows updates would have them delivered directly to their computer. Red Hat, a commercial Linux distributor, maintains a Web site with security alerts and bug fixes for current products at http://www.redhat.com/apps/support/errata/. Users can subscribe for automatic updates at http://www.redhat.com/software/rhn/update/. Other desktop systems, such as MacOS, also offer software update services.

Virus detection software needs to be frequently updated with new virus signatures. In many cases this can help detect and thwart intrusion attempts on your hosts and keep your network secure. Each major virus detection product comes with an option to enable automatic updates. If you enable this option, new virus signatures will be automatically downloaded to your machine. However, any kind of automatic action may inflict accidental damage to your computer because of incompatibility between the update and other installed software, or may be subverted by an attacker to compromise your computer. Automatic features should be carefully scrutinized and supported by a form of authentication and by backups and extra monitoring to quickly detect and react to failures.

Some protocols are asymmetric; they make one party commit more resources than the other party. These protocols are fertile ground for DDoS attacks, as they enable the attacker to create a heavy load at the target with only a few packets. The TCP SYN attack, discussed in Chapter 4, is one example, based on filling the victim's connection table.

A modification of the TCP protocol, the TCP syncookie [Ber] patch (discussed in more detail in Chapter 5), successfully handles this attack by changing connection establishment steps so that server resources are allocated later in the protocol. This patch is compatible with the original protocol: Only the server side has to be updated. Linux, FreeBSD, and other Unix-based operating systems have deployed a TCP syncookie mechanism that can be enabled by the user (it is disabled by default). For instance, to enable TCP syncookies in Linux you should include "echo 1 > /proc/sys/net/ipv4/tcp_syncookies" in one of the system startup scripts. Windows NT protects from SYN flood attacks by detecting the high number of half-open connections and modifying the retransmission, buffer allocation, and timing behavior of the TCP protocol. This option can be enabled by setting the parameter HKLM\System\CurrentControlSet\Services\Tcpip\SynAttackProtect in the System Registry to 1 or 2. FreeBSD implements syncookie protection by default, but if you wanted to control this setting, you would use "sysctl -w net.inet.tcp.syncookies=1" to enable it.
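The core idea behind syncookies can be sketched in a few lines: the server derives its SYN-ACK sequence number from the connection 4-tuple, a coarse timestamp, and a secret, so it keeps no state until the client's final ACK proves the handshake is genuine. The sketch below is a simplification (real syncookies also encode the client's maximum segment size in the cookie), and the addresses and secret are made up for illustration.

```python
import hashlib
import time

SECRET = b"server-secret"   # assumption: in practice a per-boot random secret

def make_cookie(src_ip, src_port, dst_ip, dst_port, now=None):
    """Derive the SYN-ACK sequence number from the connection 4-tuple,
    a 64-second timestamp slot, and a secret, instead of storing state."""
    t = int((time.time() if now is None else now) // 64)
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{t}".encode()
    digest = hashlib.sha256(SECRET + msg).digest()
    return int.from_bytes(digest[:4], "big")

def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port, now=None):
    """Accept the final ACK only if the acked sequence number matches a
    cookie generated for this 4-tuple in the current or previous slot."""
    t = time.time() if now is None else now
    return any(
        make_cookie(src_ip, src_port, dst_ip, dst_port, now=t - 64 * k) == cookie
        for k in (0, 1)
    )
```

A spoofed SYN now costs the server nothing but the hash computation for the SYN-ACK; the connection table entry is created only after a valid ACK arrives.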

An authentication protocol is another example of an asymmetric protocol. It takes a fairly long time to verify an authentication request, but bogus requests can be generated easily. Potential solutions for this problem involve generating a challenge to the client requesting authentication, and forcing him to spend resources to reply to the challenge before verifying his authentication request. Unfortunately, this requires clients to understand the challenge/response protocol, which might not always be possible. Other alternatives are to consider whether you really need to authenticate your clients or to perform a stepwise authentication by first deploying weak authentication protocols that provide for cheap verification and then deploying strong ones when you are further along in your interactions with the client. If you have a fixed client pool, a reasonable alternative is to accept authentication requests only from approved IP source addresses. In any case, strengthening your authentication service by providing ample resources and deploying several authentication servers behind a load balancer is a wise decision. Understanding how the protocols deployed on your network function, and keeping them as symmetric as possible, reduces the number of vulnerabilities that can be misused for DDoS attack.
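The challenge/response idea above is often implemented as a client puzzle: the server issues a cheap random challenge, the client must burn CPU finding a nonce whose hash meets a difficulty target, and the server verifies the answer with a single hash. The sketch below uses SHA-256 and made-up parameters to show the asymmetry reversal; it is not any specific protocol's puzzle format.

```python
import hashlib
import itertools
import os

def make_challenge(difficulty_bits=16):
    """Server side: issue a random challenge -- cheap to create."""
    return os.urandom(8), difficulty_bits

def solve(challenge, difficulty_bits):
    """Client side: find a nonce so that SHA-256(challenge || nonce) falls
    below the target -- roughly 2^difficulty_bits hashes on average."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        h = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def verify(challenge, difficulty_bits, nonce):
    """Server side: one hash to check, regardless of client effort."""
    h = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))

challenge, bits = make_challenge(difficulty_bits=12)  # small for the demo
nonce = solve(challenge, bits)
```

Now the client must commit far more resources per request than the server spends verifying it, which is exactly the property the bogus-authentication-request flood exploits in reverse.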

Many operating system installations enable default services that your organization may never use. Those services listen on open ports for incoming service requests and may provide a backdoor to your machine. It is prudent to disable all but necessary services by closing unneeded ports and filtering incoming traffic for nonexistent services. For instance, if a host does not act as a DNS server, there is no need to allow incoming DNS requests to reach it. This filtering should be done as close to the end host as possible. For example, ideally you would filter at the host itself, using host-based firewall features (such as iptables or pf on Unix systems or personal firewall products on Windows or MacOS). This is the easiest place, resource-wise, to do the filtering but has implications if you do not control the end hosts on your network. Filtering at the local area network router (i.e., using a "screened subnet" style of firewalling) would be a good backup and is one way of implementing a robust layered security model. If your network has a perimeter firewall (which is not always the case), traffic can be blocked there as well. Trying to block at the core of your network, or at your border routers, may not be possible if your network design does not provide for sufficient router processing overhead (and this also violates the spirit of the end-to-end network design paradigm). To discover services that are currently active on your hosts, you can look at a list of processes or a list of open ports, or do a port scan using, for instance, the nmap tool, an open source scanner freely downloadable from http://www.insecure.org/nmap/index.html. If you opt for port scanning, be advised that some types of scans may harm your machines. Inform yourself thoroughly about the features your port scan tool of choice supports and use only the most benign ones that do not violate protocols, and scan at moderate speed.
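The most benign scan type mentioned above is a plain TCP connect() scan, which completes the three-way handshake and violates no protocol. A minimal sketch of such a scan, for auditing hosts you control, might look like this (the function name and parameters are illustrative, not nmap's interface):

```python
import socket
from contextlib import closing

def scan_ports(host, ports, timeout=0.25):
    """Plain TCP connect() scan: try a full handshake on each port and
    report the ones that accept. Use only against hosts you administer."""
    open_ports = []
    for port in ports:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connection succeeded
                open_ports.append(port)
    return open_ports
```

Comparing the scan's output against your list of intentionally offered services quickly reveals default services that should be disabled.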

Many DDoS attacks employ IP spoofing, i.e., faking the source address in the IP header to hide the identity of actual attacking agents. This avoids detection of the agent and creates more work for the DDoS defense system. It is a good security measure to deploy both ingress and egress filtering [FS00] at your network perimeter to remove such spoofed packets. (Because "ingress/egress filtering" can be a confusing concept, please refer to the descriptions in Chapter 4 and Appendix A.) As seen by an edge network, antispoofing egress filters remove those outgoing packets that bear source addresses that do not belong to your network. The equivalent ingress filters similarly remove those incoming packets that bear source addresses that belong to your network. Both of those packet address categories are clearly fake, and many firewalls and routers can be easily configured to perform ingress/egress filtering (as well as other types of filtering). Advanced techniques for elimination of spoofed addresses include detecting source addresses from those network prefixes that are reserved and filtering them from your input stream, or using a list of live internal network addresses to allow only their traffic to get to the outside world.
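The ingress/egress decision itself is a simple prefix test, as the sketch below shows using Python's ipaddress module. The site prefix is an assumption for illustration; in practice these rules live in your router or firewall configuration, not in application code.

```python
import ipaddress

# assumption: your site owns this prefix
OUR_PREFIX = ipaddress.ip_network("198.51.100.0/24")

def egress_ok(packet_src):
    """Outgoing traffic may leave only if its source address is ours;
    anything else is a spoofed packet escaping your network."""
    return ipaddress.ip_address(packet_src) in OUR_PREFIX

def ingress_ok(packet_src):
    """Incoming traffic claiming one of our own addresses as its source
    is spoofed and should be dropped at the perimeter."""
    return ipaddress.ip_address(packet_src) not in OUR_PREFIX
```

Both checks are cheap enough to run at line rate on most edge routers, which is why ingress/egress filtering is considered basic network hygiene.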

In general, fixing vulnerabilities raises the bar for the attacker, making him work harder to deny service. It protects your network from specific, low-traffic-rate attacks. In the absence of vulnerabilities that could quickly disable network services, the attacker instead must generate a high traffic rate to overwhelm your resources.

Tuning System Parameters

Beyond just fixing vulnerabilities, one of the first things that should be done in cases in which a DDoS attack is affecting a service or end system but is not large enough to cause noticeable network disruption is to ensure that the target of the attack is adequately provisioned and tuned. Even in cases in which DDoS mitigation tools are in place at the perimeter of your network, and are functioning as they should to scrub incoming packets and ensure that only "legitimate" requests are coming in, the server may still be overwhelmed and cease to function.

Examples of things to look for include:

  • Processor utilization. Programs like top, ps, and vmstat are useful for checking processor utilization. The uptime program also shows processor load averages. If those programs indicate a single application that is consuming an unusually high amount of CPU time (e.g., 90%), this may be a vulnerable application targeted by a DoS attack. You will need to keep a model of normal CPU utilization to quickly spot applications gone wild due to vulnerability exploits or specifically crafted attacks.

  • Disk I/O performance. Disk I/O performance can be determined using programs like vmstat, iostat, and top. If you are using NFS, the nfsstat program should be used. Tuning of IDE drives can be accomplished on Linux systems using hdparm. If disk-monitoring programs, such as iostat, indicate unusually high disk activity, this may be a vulnerable application under attack. Again, you will need a model of normal disk activity to be able to spot anomalies.

  • Network I/O performance. If the network interface is saturated or there are other problems on the Local Area Network (LAN) segment, you may see dropped packets or a high rate of collisions. You can see these using the statistics option of netstat. Network socket state information can also be seen with netstat. If netstat indicates that you are experiencing a significant number of dropped packets and collisions when no attack is under way, then you are already running close to capacity. An attacker can tip you over into an overload situation by adding a rather small quantity of traffic. You need to improve the bandwidth on the highly utilized links if you want to avoid falling easy prey to DDoS. This observation is most important for those links that are publicly accessible. A fairly congested link on a purely internal LAN that does not directly accept traffic from the outside network is less of a risk for a DDoS attack than the same situation on the main link between your subnetwork and your ISP.

  • Memory utilization. If memory is low, use ps and top to determine which programs have the largest Resident State Sizes (RSS). You may need to kill some of them off to free up memory. Memory is very cheap these days, so make sure that you have as much as your motherboard and budget can afford. If you cannot increase memory, yet still have problems with memory utilization, consider upgrading your hardware.

  • Swapping/paging activity. Paging is a normal activity that occurs when portions of a program that are necessary for execution must be brought in from disk. Swapping occurs when physical memory is low and least-recently used pages of programs are written out to disk temporarily. Because disk speed is far slower than memory, any reading or writing of the disk can have significant performance impacts. You can check on paging and swapping activity and swap utilization using vmstat, top, or iostat.

  • Number of server processes. Web servers typically have one process responsible for listening for new connections, and several child processes to handle actual HTTP requests. If you have a very high rate of incoming requests, you may need to increase the number of server processes to ensure that you are not overloading the existing processes and delaying new requests. Use top, ps, and netstat to check for overloading of server processes.
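Several of the bullets above call for a model of "normal" resource usage against which anomalies can be spotted. A minimal statistical baseline, assuming you already collect periodic samples of a metric (CPU utilization, disk operations, dropped packets), might look like this:

```python
import statistics

class BaselineMonitor:
    """Keep a simple model of 'normal' for one metric and flag samples
    that deviate by more than k standard deviations from the mean.
    (Illustrative sketch; production monitors would also track trends
    and time-of-day effects.)"""
    def __init__(self, history, k=3.0):
        self.mean = statistics.fmean(history)
        self.stdev = statistics.stdev(history)
        self.k = k

    def is_anomalous(self, sample):
        return abs(sample - self.mean) > self.k * max(self.stdev, 1e-9)

# baseline built from hypothetical 5-minute CPU utilization samples (%)
cpu = BaselineMonitor([12, 15, 11, 14, 13, 16, 12, 15], k=3.0)
```

Feeding each new sample from vmstat, iostat, or netstat through such a check turns raw numbers into an early-warning signal for the attacks described above.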

Tuning system parameters will help protect your network from small- to moderate-rate attacks, and in-depth monitoring of resource usage should ensure better detection of attacks that consume a large amount of resources.

6.3.3. Fortifying Your Network

In addition to securing the hosts on your network, there are also things you should do to improve the security of the network as a whole. These include how the network is provisioned and how it is designed and implemented.

Provisioning the Network

A straightforward approach to handle high traffic loads is to buy more resources, a tactic known as overprovisioning. This tactic is discussed in detail in Chapter 5. There are many ways in which overprovisioning can be used to mitigate the effects of a DDoS attack.

The problem here, from a cost perspective, is the asymmetry of resources. There is very little or no cost for an attacker to acquire more agent machines to overwhelm any additional resources that a victim may employ, while the victim must invest money and time to acquire these added resources. Still, this technique raises the bar for the attacker and will also accommodate increased network usage for legitimate reasons.[2]

[2] For instance, overprovisioning can help in the case of "flash crowds," when some popular event motivates many clients to access your network simultaneously.

One form of overprovisioning has to do with available network bandwidth for normal traffic. This approach is taken by many large companies that conduct business on the Internet. The attacker looking to overwhelm your network with a high traffic load will either target the network bandwidth by generating numerous large packets of any possible type and contents, or he will target a specific service by generating a high number of seemingly legitimate requests. Acquiring more network bandwidth than needed is sometimes affordable, and makes it less likely that bandwidth will be exhausted by the attack. A smartly configured network-level defense system sitting on this highly provisioned link will then have a chance to sift through the packets and keep only those that seem legitimate.

Another form of overprovisioning is to have highly resourced servers. This can be (1) keeping hosts updated with the fastest processors, network interfaces, disk drives, and memory; (2) purchasing as much main memory as possible; or (3) using multiple network interface cards to prevent network I/O bottlenecks. Further, duplicating critical servers and placing them in a server pool behind a load balancer multiplies your ability to handle seemingly legitimate requests. This may also prove useful for legitimate users in the long run: as your business grows, your network will have to be expanded anyway. Performing this expansion sooner than you may otherwise plan also helps withstand moderate DDoS attacks.
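The server-pool idea can be illustrated with the simplest balancing policy, round-robin: each incoming request goes to the next replica in turn, so no single server absorbs the whole load. The class and addresses below are made up for illustration; real load balancers also health-check pool members and drain failed ones.

```python
import itertools

class RoundRobinBalancer:
    """Spread incoming requests across a pool of replicated servers
    by cycling through the pool in order. (Sketch only; production
    balancers add health checks, weighting, and session affinity.)"""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

pool = RoundRobinBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
chosen = [pool.pick() for _ in range(6)]
```

With n identical replicas behind the balancer, an attacker must generate roughly n times the request rate to produce the same per-server load.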

These two forms of overprovisioning can be combined in a holistic manner, ensuring fewer potential bottlenecks in the entire system. In the electric sector, for example, sites are required to have nominal utilization rates for the entire system that are no larger than 25% of capacity, to provide an adequate margin of unused processing capacity in the event of an emergency. This minimizes the chances that an emergency situation causes monitoring to fail, escalating the potential damage to the system as a whole.

If your network management and communication to the outside world is done in-band over the same network as potential DDoS targets (e.g., if you use voice-over-IP for your phone line), the DDoS attack may take out not only your services, but all means of communicating with your upstream provider, law enforcement, vendors, and customers; in short, everyone you need to communicate with in a crisis. The added cost of purchasing extra bandwidth can be viewed as a cheaper form of insurance.

Designing for Survivability

Since some DoS attacks send a high amount of seemingly legitimate requests for service, techniques that improve scalability and fault tolerance of network services directly increase your ability to weather DDoS attacks. Designing for survivability means organizing your network in such a way that it can sustain and survive attacks, failures, and accidents (see [EFL+99, LF00]). Sometimes, adding survivability provisions has surprising benefits in protecting against unexpected events [Die01].

Survivability design techniques include:

  • Separation of critical from noncritical services. Separating those services that are critical for your daily business from those that are not facilitates deployment of different defense techniques for each resource group and keeps your network simple. Separation can occur at many levels: you can provide different physical networks for different services, connect to different service providers, or use subnetting to logically separate networks. A good approach is to separate public from private services, and to split an n-tier architecture into host groups containing Web servers, application servers, database servers, etc. Once resources are separated into groups, communication between resource groups can be clearly defined and policed, for instance, by placing a firewall between each pair of groups.

  • Segregating services within hosts. Rather than assigning several services to a single host, it is better to use single-purpose servers with well-defined functionality.

    As an example of segregated services, having a dedicated mail server means that all but a few ports can be closed to incoming traffic and only a few well-defined packet types should be exchanged between the server and the rest of the network. It also means that monitoring server operation and detecting anomalies will be simplified. Having several services assigned to a single host not only makes monitoring and policing difficult, but finding and exploiting a vulnerability in any of these services may effectively deny all of them.

    An advanced alternative for segmenting services that addresses just the issue of process separation is to use highly secure operating systems such as SELinux or TrustedBSD which implement Mandatory Access Control (MAC) for processes.[3] This will not help very much for DDoS mitigation, however, unless these systems are also configured not just to use MAC, but to also limit particular services to maximum usage of particular system resources. If both a DNS service and a Web server are run on such a machine and the Web service receives many seemingly legitimate requests that take a long time to process, MAC will state that all of them should get service. After all, the Web server must be permitted to do such things in its legitimate operations. Only enforcement of QoS guarantees for the DNS server by the OS can prevent the Web requests from eating up all the machine's resources and starving the DNS server.

    [3] Such systems are not yet commonly used by businesses, as their policy management aspects and debugging are quite complicated, so they will not be covered here. Still, you should watch for them to start making inroads in commercial and open-source operating system offerings in the next few years.

  • Compartmentalizing the network. Identifying bottlenecks in the risk assessment step should provide guidelines for reorganizing the network to avoid single points of failure. Dividing the network into self-contained compartments that communicate with the outside only when necessary enables your business to continue operation even when some portions of the network are down. For example, having a database server distributed across several physically disjoint networks that connect to the Internet via different ISPs may require a lot of management, but may enable you to continue serving database requests even if several servers are targeted by the attack or their incoming links are overwhelmed. Having a mail server for each department may mean that when the finance department is under attack, employees from the planning division can still send and receive e-mails. As always, decisions about which services to replicate and to what extent demand detailed cost/benefit analysis.

  • Reducing outside visibility. If attackers are able to learn your network organization, they can also assess the risks and identify bottlenecks that can be targeted by the attack. There are numerous techniques that aid in hiding the internals of your network. Blocking ICMP replies to outside hosts prevents attackers from learning your routing topology. Using a split DNS effectively creates two separate DNS zones for your network with the same DNS names. One zone will contain a list of externally visible services that can be accessed through the firewall. Your external DNS server will serve this zone. Your external customers will be able to access only the external DNS server and will see only this list of services. Another zone will contain a list of internally accessible services that are available to your employees. This zone will be served by your internal DNS server. Your employees' machines will be configured with the address of the internal DNS server and will access these services through your internal network. Separating this information minimizes the data you leak about your internal network organization. It also provides separate access paths for external and internal clients, enabling you to enforce different policies.

Network Address Translation (NAT) hides the internals of the network by providing a single address to outside clients: that of the firewall. All outside clients send their requests for service to the firewall, which rewrites packets with the address of the internal server and forwards the request. Replies are similarly rewritten with the firewall's address. This technique creates a burden on the firewall and may make it a single point of failure, but this problem can be addressed by distributing firewall service among several routers. Generally, if attackers can only see a large, opaque network, they must either attack the ingress points (which are probably the most capable and highly provisioned spots, and thus the easiest to defend) or use e-mail or Web-based attacks to gain control of a host inside the perimeter and tunnel back out.
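The rewriting the firewall performs can be sketched as a small translation table: inbound requests addressed to the firewall are redirected to the internal server for that service, and outbound replies have their internal source address replaced. All addresses and mappings below are hypothetical, and packets are modeled as plain dictionaries for clarity.

```python
FIREWALL_ADDR = "203.0.113.1"   # the single externally visible address (assumed)
INTERNAL_SERVER = {80: "10.0.0.5", 25: "10.0.0.9"}   # assumed service mapping

def rewrite_inbound(packet):
    """Redirect a request arriving at the firewall to the internal
    server that actually provides the requested service."""
    if packet["dst"] == FIREWALL_ADDR and packet["dport"] in INTERNAL_SERVER:
        packet = dict(packet, dst=INTERNAL_SERVER[packet["dport"]])
    return packet

def rewrite_outbound(packet):
    """Rewrite the source of a reply so internal addresses never leak
    outside the perimeter."""
    if packet["src"] in INTERNAL_SERVER.values():
        packet = dict(packet, src=FIREWALL_ADDR)
    return packet
```

Because every translation passes through this one table, the firewall sees, and can police, all traffic crossing the perimeter; this is both the strength and the bottleneck the paragraph above describes.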

6.3.4. Preparing to Respond to the Attack

Having a ready incident response plan and defense measures that you can quickly deploy before the attack hits will make it easier to respond to the attack. Further, knowing who to contact for help will reduce your damages and downtime.

Responding to the attack requires fast detection and accurate characterization of the attack streams so that they can be filtered or rate limited. Devising detailed monitoring techniques will help you determine "normal" traffic behavior at various points in your network and easily spot anomalies. Detection, characterization, and response can be accomplished either by designing a custom DDoS defense system or by buying one of the available commercial products.

You should choose wisely how you configure and use a DDoS defense mechanism. For example, if you choose to make its actions fully automated, an attacker may be able to exploit the unauthenticated nature of most Internet traffic (at least traffic that does not use IPSec Authentication or Encapsulation features) to trick the DDoS defense system into acting incorrectly and producing a DoS effect that way. Assume an attacker knows that your site uses a DDoS defense mechanism that does not keep state for UDP flows, and that you have it configured to automatically block inbound UDP packet floods. She also knows that many of your users frequent Web sites that rely on some company's distributed DNS caching service. The cache provider has thousands of DNS servers spread around the Internet, which the attacker can determine by having her bots all do DNS lookups for the same sites and storing the results. DNS typically uses UDP for simple requests. Putting these facts together, the attacker could forge bogus UDP packets with the source address of all of the DNS caching service's servers, sending them all to random IP addresses at your site. Depending on how the DDoS defense system is programmed, it might assume that this is a distributed UDP flood attack and start filtering out all UDP traffic from these thousands of "agents" (really the caching DNS servers), preventing your users from getting to any Web sites served by this DNS cache provider, because their DNS replies are blocked as they come back.

Fully manual operation has its own problems. The reaction time can be significantly slowed, and in some cases slow defense may cause more problems than it solves. Both the network operations and security operations staff should thoroughly understand the capabilities and features of the DDoS mitigation system and how best to deploy and control it. Appendix B offers a detailed overview of some currently available commercial products. This should serve only as an introduction to a wide variety of security products that are currently available. The reader is advised to investigate the market thoroughly before making any buying decisions.

Creating your own DDoS defense system generally makes sense only for large organizations with sophisticated network security professionals on staff. If your organization has those characteristics, a careful analysis of your system's needs and capacities might allow you to handcraft a better DDoS solution than that offered by a commercial vendor. Building a good defense solution from scratch will be an expensive and somewhat risky venture, however, so you should not make this choice lightly. If you do go in this direction, make sure you have a thorough understanding of all aspects of your network and systems and a deep understanding of DDoS attack tools and methods, as well as the pros and cons of various defensive approaches.

It is also important to balance the requirements of the network operations and security operations staff. Many sites separate these groups, often with staffing and resources weighted toward building and supporting the complex and expensive network infrastructure rather than toward handling security incidents and host-level mitigation. Having these groups work closely together and share common goals improves the situation and reduces internal friction.

Many sites with large networks and high bandwidth costs are now purchasing or implementing traffic analysis tools, but often more with an eye toward cost containment and billing than toward security. When an attack occurs, such sites sometimes discover they have no visibility below the level of overall bandwidth utilization and router/switch availability statistics. There may be no flow- or packet-content monitoring tools in place, nor any ability to preserve this data at the time of an attack to facilitate investigation or prosecution. Network traffic analysis is a critical component of response. As the saying goes, one person's DoS attack is another person's physics experiment dataset.
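The kind of visibility the paragraph above describes comes from aggregating raw packets into flows. A minimal sketch, with made-up field names standing in for what a real flow collector would record:

```python
# Sketch: grouping packets by 5-tuple gives per-flow counts, the visibility
# "below overall bandwidth utilization" that many sites lack during an attack.
# The packet dictionaries are illustrative stand-ins for captured headers.

from collections import defaultdict

def aggregate_flows(packets):
    """Group packets by (src, dst, proto, sport, dport); sum packets and bytes."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["proto"], p["sport"], p["dport"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["len"]
    return dict(flows)

# Three packets from one source, each to a different destination port:
packets = [
    {"src": "10.0.0.5", "dst": "192.0.2.1", "proto": "udp",
     "sport": 53, "dport": 40000 + i, "len": 512}
    for i in range(3)
]
flows = aggregate_flows(packets)
# A bandwidth graph would show one blip; the flow table shows three
# distinct flows, because the destination port varies per packet.
assert len(flows) == 3
```

Even this crude grouping distinguishes one heavy flow from many small ones, which is often the difference between diagnosing a flood and staring at a utilization spike.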

There may also be a lack of tools, such as network taps, systems capable of logging packets at the network borders without dropping a high percentage of them, or traffic analysis programs. These tools, and the skills to use them efficiently, are necessary to dig down to the packet level and identify features of the attack and the hosts involved. Many high-profile attacks reported by the press have been accompanied by conflicting reports of the DDoS tool involved, the type of attack, or even whether it was a single attack or multiple concurrent attacks. In some cases, sites have believed they were experiencing "Smurf" (ICMP Echo Reply flood) attacks inbound, because they saw a spike in ICMP packets. In fact, some host within their network was participating in an attack with an outbound SYN or UDP flood that was well within the normal bandwidth utilization of the site, and the ICMP spike consisted of replies (such as ICMP Port Unreachable messages) coming back from the target's closed ports. In other cases "attacks" have turned out to be misconfigured Web servers, failed routers, or even a site's own buggy client-side Web application that was flooding their own servers![4]
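The Smurf-versus-backscatter confusion above amounts to a direction check. The rule below is a deliberately simplified illustration (the counter names and the comparison are assumptions, not a real detector): an ICMP spike that is matched by heavy outbound SYN/UDP traffic points at an internal host attacking someone else, not at an inbound flood.

```python
# Sketch: distinguishing an inbound ICMP flood from backscatter caused by a
# host on your own network flooding an external target. The thresholds and
# the single-comparison rule are illustrative, not a production heuristic.

def classify_icmp_spike(inbound_icmp_pps, outbound_syn_udp_pps):
    """If the ICMP spike is dwarfed-or-matched by outbound SYN/UDP traffic,
    the likely cause is an internal host attacking outward."""
    if outbound_syn_udp_pps > inbound_icmp_pps:
        return "internal host flooding outward (ICMP is backscatter)"
    return "possible inbound ICMP flood"

# Heavy outbound SYN/UDP alongside the ICMP spike: look inward first.
verdict_a = classify_icmp_spike(inbound_icmp_pps=5000, outbound_syn_udp_pps=20000)
assert verdict_a == "internal host flooding outward (ICMP is backscatter)"

# ICMP spike with quiet outbound traffic: treat as a possible inbound flood.
verdict_b = classify_icmp_spike(inbound_icmp_pps=5000, outbound_syn_udp_pps=100)
assert verdict_b == "possible inbound ICMP flood"
```

The real work is collecting both directions of traffic at the border in the first place; without that data, no such comparison is possible.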

[4] The situation of self-inflicted DoS arises more often than one would think. Some victims of "DDoS attacks" have suffered costly downtime and customer complaints for hours. Their ISPs often confirm the "attack" from packet counts and flow direction information obtained from simple network-monitoring tools, but nobody gathers even a short sample of network traffic to analyze. When expensive consultants are brought in and manage to get the provider to capture network traffic, or make an on-site call to the victim's server location, it is quickly determined that the problem is a software bug, and the "attack" is then halted in a matter of minutes. Had the victim, or its ISP, been prepared to analyze network traffic, the event would have lasted only a brief time and cost little.

Investing in backup servers and load balancers may provide you with assets that you can turn on in case of an emergency. Backup servers can take the load off the current server pool or can supplement hosts that are crashing. Of course, there is a cost to having spare equipment lying around, and an attack may be over by the time someone is able to reconfigure a server room and add in the spare hardware.

If you provide static services, such as hosting Web pages or a read-only database, it may be possible to replicate your services on demand and thus distribute the load. Service replication can even help if you deliver dynamic content. Contracting with a service that will replicate your content at multiple network locations and transparently direct users to the nearest replica provides high resiliency in case of a DDoS attack. Since these services are themselves highly provisioned and distributed, even if they are successfully attacked at one location, the other replicas will still offer service to some of your clients. The downside to such highly provisioned and distributed services is an equally high cost; this option may be financially viable only for very large sites. Very careful risk analysis and cost/benefit calculations are necessary, as well as investigation of alternatives for a disruption of business operations, such as insurance coverage.[5] Another consideration is that while replication services are highly provisioned, they may themselves be taken down by a sufficiently large attack, so this approach is not a panacea.

[5] Without recommending specific companies, search the Web for "DDoS insurance coverage" to find some options. Credit card companies, as well as some insurance companies, are now offering such protection.

Another alternative is to consider gracefully degrading your services to avoid complete denial. For instance, when a high traffic load is detected, your server can go into a special mode of selectively processing incoming requests. Thus, some customers will still be able to receive service.
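The "special mode of selectively processing incoming requests" can be sketched as a simple admission-control check. The load threshold and the priority scheme below are assumptions for illustration; a real site would define "important" according to its own business (existing customers, paid tiers, checkout transactions, and so on):

```python
# Sketch: graceful degradation via selective admission. Under normal load,
# everything is served; above a (hypothetical) load threshold, only requests
# the site marks high priority are admitted and the rest are shed.

def admit(request, current_load, high_load_threshold=0.8):
    """Return True if the request should be processed right now."""
    if current_load < high_load_threshold:
        return True                      # normal operation: serve everyone
    return request.get("priority") == "high"  # overload: shed low priority

reqs = [{"id": 1, "priority": "high"}, {"id": 2, "priority": "low"}]

# Normal load: all requests are admitted.
normal = [admit(r, current_load=0.3) for r in reqs]
assert normal == [True, True]

# Overload: only the high-priority request gets through, so some
# customers still receive service instead of everyone timing out.
overload = [admit(r, current_load=0.95) for r in reqs]
assert overload == [True, False]
```

The design choice is deliberate: serving a subset of customers well beats serving all of them so slowly that every request effectively fails.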

If the DDoS attack targets network bandwidth, no edge network defense will ameliorate the situation. Regardless of the sophistication of the solutions you install, legitimate users' packets will be lost because your upstream network link is saturated and packets are dropped before they ever reach your defense system. In this situation, you need all the help you can get from your upstream ISP, and perhaps also their peers, to handle the problem.

Cultivating a good relationship with your ISP ahead of time and locating telephone numbers and names of people to reach in case of a DDoS attack will speed up the response when you need it. Many ISPs will gladly put specific filters on your incoming traffic or apply rate limiting. Be aware of the information you need to provide in order to get maximum benefit from the filtering. Sometimes it may be necessary to trace the attack traffic and have filters placed further upstream to maximize the amount of legitimate traffic you are receiving.[6] This usually involves contacting other ISPs that will generally be unwilling to help you, given that you are not their customer. If you have a good relationship with your ISP, they will be in a far better position to negotiate tracing with their upstream peers. Specifying the responsibilities and services your ISP is willing to offer in case of DDoS attack in a service agreement will also simplify things once an attack occurs.
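The rate limiting an ISP might apply to your inbound traffic is commonly modeled as a token bucket: traffic is admitted at a sustained rate with room for short bursts, and excess packets are dropped. This is a generic sketch of that mechanism with illustrative rates, not any particular vendor's implementation:

```python
# Sketch: token-bucket rate limiting of the kind an upstream provider might
# apply to a flooded customer link. Rates and burst size are illustrative.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens (packets) replenished per second
        self.capacity = burst   # maximum burst size
        self.tokens = burst     # bucket starts full
        self.last = 0.0         # timestamp of the previous check

    def allow(self, now, cost=1):
        """Refill tokens for elapsed time, then spend one token per packet."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True         # packet forwarded
        return False            # packet dropped (over the limit)

tb = TokenBucket(rate=10, burst=5)   # 10 pkt/s sustained, bursts of 5

# Eight packets arriving at the same instant: the burst allowance admits
# the first five, and the remaining three are dropped.
results = [tb.allow(now=0.0) for _ in range(8)]
assert results == [True] * 5 + [False] * 3
```

Note the trade-off the footnote above also describes: during a flood, a limiter like this drops attack and legitimate packets alike, which is why the filter's placement and specificity matter.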

[6] Filtering is usually imprecise and inflicts some collateral damage. If the route that attack traffic takes can be identified and filtering placed as much upstream as possible, this minimizes the amount of legitimate traffic passing the filter and thus minimizes the collateral damage from the response.



Internet Denial of Service: Attack and Defense Mechanisms
ISBN: 0131475738
Year: 2003
Pages: 126