8.8 Additional Router Firewall Features



Whether used as a stand-alone firewall or in combination with dedicated hardware, a router is capable of providing security features beyond that of simple traffic filtering. This section discusses some of the other security features that can be included on routers to protect both your perimeter and the Internet as a whole.

8.8.1 Limiting Denial-of-Service Attacks

When an attacker is frustrated by your overall network security and has no other recourse, a denial-of-service (DoS) attack is often the avenue of last resort. While DoS attacks generally do not damage your information, they certainly affect the availability of that information to authorized parties. DoS attacks can take a number of forms, but are generally sorted into two categories:

  1. Bandwidth-based attacks. These attacks simply try to send more packets over your network than your network can handle. Because the access link is generally the lowest-speed link in your network, this is where the attack effects are felt. The goal is to send enough traffic into your network over this link so that the routers start discarding packets that build up in the interface service queues. When the upstream router begins discarding packets, legitimate traffic is also discarded. Likewise, packets that do make it to the remote side of the network will generally create an error condition of some sort, requiring return traffic to be sent back over the link, and congesting your outbound access-router queues as well.

  2. Operating system attacks. Instead of flooding the network with more packets than it can handle, a more efficient approach from the attacker's point of view is to send a single packet that simply crashes or restarts the server. This is just as effective in rendering the target unavailable as a bandwidth-based attack. A common historical example is the Ping of Death: many programmers built their networking stacks to accept only normally encountered packets, and a purposefully malformed packet would cause these systems to collapse.

There are two common ways to reduce the risk of a DoS attack. We will see that neither is foolproof, yet each contributes to the overall defense in depth that assures the availability of our network resources.

The most straightforward way of dealing with these attacks is to filter or throttle the offending traffic upstream. Let us assume that your HTTP server is the subject of a bandwidth-based DoS attack from a single source. The easiest solution is to contact your ISP and have it configure an access-list preventing traffic from that source from being forwarded on to your network. The most difficult part of this solution is actually reaching the correct person at the ISP who can configure the access-list in a reasonable timeframe. The actual creation of the list is trivial.

The problem, however, becomes more complicated when the attacker launches an attack from multiple sources or modifies the source address of the attack point to make it look like entire networks of random sources are attacking your network. How do you know which requests from the Internet are parts of the DoS and which requests are from legitimate users? There is no easy way to create a blanket access-list covering all possible sources of attacks without affecting legitimate users of the networked resources. The two techniques discussed below are an attempt to respond to DoS attacks while still providing the maximum availability for legitimate users or other networked services.

8.8.1.1 Committed Access Rate.

Bandwidth-based attacks are successful when the attacker has more bandwidth available than you do. While this was difficult to accomplish when the majority of Internet access was over dial-up links, such attacks have been made much easier by distributed denial-of-service (DDoS) clients that enlist thousands of cable, DSL, and dial-up connections to attack sites with high-bandwidth access links. Instead of needing a government or university link to launch a bandwidth-based DoS attack against another organization, millions of residential users can unwittingly participate, saturating even the highest-bandwidth links. A common method of protecting your network against such attacks is a technology normally applied to quality-of-service issues: committed access rate (CAR). CAR attempts to slow certain types of traffic down to a given bandwidth by selectively processing traffic. Rate limits can be "hard" or "soft." Hard limits discard certain types of traffic over a given threshold. Soft limits allow rate-limited traffic to burst above its committed rate, giving more flexibility in controlling competing traffic types. When applied to reducing the impact of DoS attacks, we are interested in the hard limits imposed by CAR.
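A hard rate limit behaves much like a token bucket: conforming packets spend tokens that refill at the committed rate, and packets that arrive when the bucket is empty are discarded. The sketch below is a minimal illustration of that idea; the 512 Kbps rate matches the example that follows, but the burst depth and packet sizes are hypothetical, not vendor defaults.

```python
class TokenBucket:
    """Hard rate limit: tokens refill at the committed rate; no tokens, no forwarding."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes       # maximum bucket depth (the allowed burst)
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, size_bytes, now):
        # Refill tokens for the elapsed interval, capped at the burst depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes     # conforming packet: transmit
            return True
        return False                      # exceeding packet: discard (hard limit)

# Offer 1,500-byte packets every millisecond (12 Mbps) against a 512 Kbps limit:
bucket = TokenBucket(rate_bps=512_000, burst_bytes=8_000)
sent = sum(bucket.allow(1_500, now=t * 0.001) for t in range(1_000))
```

Over that simulated second, only a few dozen of the thousand offered packets conform; the rest are dropped at the rate limiter instead of congesting the access link.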

To explain the operation of CAR, let us use an example. A company has an HTTP server, an SMTP/POP server, and a DNS server hosted at its site via a T-1 connection at 1.544 Mbps. The HTTP server becomes the target of a DoS attack. In response to the attack, the network administrator of the company asks its upstream ISP to configure CAR for HTTP traffic at 512 Kbps. Noticing the attack in progress and agreeing, the ISP makes the requested configuration and suddenly any HTTP traffic over 512 Kbps destined for the customer HTTP server is dropped. The result for the customer is that DNS and SMTP traffic can still pass from the ISP to the customer over the T-1. The customer still suffers from an HTTP DoS but the other network applications that they rely upon are still available for legitimate use.

This solution is predicated on an important assumption. You must have an ISP that is willing to work with you in this manner. It has been difficult to either convince ISPs to configure this or find someone qualified to do so. This is changing as the practice gains recognition and support in the networking community. CAR is commonly configured as part of a network quality-of-service plan; when it is used to mitigate DoS attacks, it is generally applied in a temporary manner. This means that users interested only in security will not always have CAR configured, while those that take quality of service seriously generally will.

While you are welcome to configure your own CAR on your access router for traffic flowing from the ISP, the best effect of this technique is for the ISP to police traffic heading to your network. To understand this, we must take a slight detour into the realm of quality-of-service (QoS).

Most QoS mechanisms, no matter the name, are simply attempts to manage an output queue on a device. You have two packets in a queue for the same interface: how do you determine which one goes first? The device must be configured with some way of classifying the packets, just as it must be for an access-list that filters traffic on our routers. Congestion, and thus a denial of service, is simply more traffic entering an outbound queue than the router can process.

Note the emphasis on the outbound queue. Many people will incorrectly observe that their "link is congested." This is not entirely accurate. A "link" can never be congested. A T-1 line will always carry 1.544 million bits per second; it never tries to cram 1.6 million bits per second onto the wire. The congestion actually occurs on the interface that sends information over that link. The router interface may have 1.6 million bits to process in a given second, but the T-1 will only carry 1.544 million bits in that same timeframe. Thus, the router is forced to store the extra information in a queue, hoping that the next time increment will provide space for the extra bits; but if information is entering the router faster than 1.544 Mbps, eventually the queue fills up and the router has no choice but to drop the excess traffic. This is the principle behind a bandwidth-based DoS attack.
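The arithmetic above can be sketched as a toy drop-tail queue: with 1.6 Mbps offered to a 1.544 Mbps link, the 56 Kbps surplus accumulates each second until the queue is full, after which every excess bit is discarded. The queue depth below is a made-up figure for illustration.

```python
# Toy drop-tail model of an outbound interface queue feeding a T-1.
ARRIVAL_BITS = 1_600_000      # offered load per one-second tick
LINK_BITS = 1_544_000         # T-1 serialization rate per one-second tick
QUEUE_LIMIT = 500_000         # interface queue depth in bits (hypothetical)

queued, dropped = 0, 0
for second in range(20):
    queued += ARRIVAL_BITS            # traffic enters the outbound queue
    queued -= min(queued, LINK_BITS)  # the link drains at most 1.544 Mb per tick
    if queued > QUEUE_LIMIT:          # queue overflow: the router must discard
        dropped += queued - QUEUE_LIMIT
        queued = QUEUE_LIMIT
```

In this model the queue absorbs the 56,000-bit-per-second excess for about nine seconds, then sits full while the router discards 620,000 bits over the 20 simulated seconds, legitimate traffic included.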

Based on this information, we can see that configuring CAR at your own router, to affect traffic it receives from the ISP, has limited effect. The damage has already been done as the high-bandwidth backbone links of the ISP are suddenly throttled into the 1.544 Mbps of your T-1. The policing of output queues that CAR provides needs to be configured where the congestion actually occurs: at the ISP end of the access link.

8.8.1.2 TCP Interception.

While the Ping of Death and other attacks based on operating system flaws are characterized by programming that does not take into account all possible invalid inputs, some attacks take advantage of the normal operation of a protocol. The best example of this is the TCP SYN attack. Knowing that a TCP session is initiated with a three-way handshake, an attacker making a TCP SYN attack simply sends a connection request to a server. The server dutifully sends the second packet, a SYN-ACK, as an acknowledgment of the request. At this point the connection is "half-open": the server keeps the connection information in memory, waiting for the third and final packet from the client to fully establish the TCP session. The client, of course, never sends the last packet, but instead sends a barrage of additional TCP connection requests to the same server. Within a short period of time, the available memory on the TCP server is consumed by half-open connections and legitimate connection requests have to be refused. The attack works because the TCP server is programmed to wait a period of seconds for the handshake to finish. In the early days of the Internet, it was not uncommon for a packet to be delayed up to 120 seconds before reaching its final destination; thus, TCP stacks were configured to wait out this period in hopes that the final packet would eventually show up.

The first way of reducing the effect of one of these attacks is to employ load-balancing hardware to distribute the load from a single server to multiple servers. This countermeasure is effective but expensive, and it simply raises the stakes. The hope is that your servers will have more memory available for connections than the attacker has capacity to generate connection requests. The danger with this reasoning is that the war of resources simply escalates. As with distributed bandwidth-based DoS attacks, the potential is there to employ thousands of hosts making TCP SYN attacks, forcing you to respond with more servers and more load balancing.

The effects of this type of attack can be mitigated with the help of a router or firewall through a TCP intercept configuration. This configures the router to act as a proxy for all incoming TCP requests to the servers it protects. When the router detects a new connection attempt to an internal server, it responds to the request as if it were the server. Internal servers can be defined individually, allowing a single router to protect multiple servers or only those that are most likely to be subject to an attack, such as an HTTP server. If the request is legitimate and the final packet in the connection request is received, then the router completes the connection request to the TCP server on behalf of the remote client. If the connection is an attempted TCP SYN attack, then the router waits a much shorter timeout period than the server and discards the connection information, saving the server from having to spend its resources maintaining illegitimate connection requests.

When a full-scale TCP SYN attack is launched on a site, with thousands of TCP requests per second, the router will then enter an aggressive mode where TCP connections are discarded after just a few seconds if the connection is not finalized by the remote client, thus further reducing the impact of TCP SYN attacks on the local TCP server. When traffic has returned to more normal levels (that you define), the aggressive mode behavior will end and TCP interception will operate as normal.
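The bookkeeping a TCP intercept device performs can be sketched roughly as follows. The timeout values and the attack threshold below are hypothetical illustrations, not vendor defaults:

```python
NORMAL_TIMEOUT = 30.0      # seconds to wait for the client's final ACK (assumed)
AGGRESSIVE_TIMEOUT = 3.0   # shortened timeout once an attack is suspected (assumed)
ATTACK_THRESHOLD = 1_000   # half-open count that triggers aggressive mode (assumed)

half_open = {}             # client id -> time the router sent the SYN-ACK

def on_syn(client, now):
    # The router answers the SYN itself, on the server's behalf, and records it.
    half_open[client] = now

def on_ack(client, now):
    # Final handshake packet arrived: complete the connection to the real server.
    if client in half_open:
        del half_open[client]
        return True
    return False

def expire(now):
    # Use the aggressive timeout when the half-open table looks like an attack.
    timeout = (AGGRESSIVE_TIMEOUT if len(half_open) > ATTACK_THRESHOLD
               else NORMAL_TIMEOUT)
    for client, started in list(half_open.items()):
        if now - started > timeout:
            del half_open[client]   # discard the stale embryonic connection
```

The key point is that the half-open state lives on the router, not the server, and under load it is aged out in seconds rather than the generous timeout a server's TCP stack would allow.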

You have no doubt surmised that the TCP intercept feature of routers is not bullet-proof protection against TCP SYN attacks. Eventually, enough requests will overrun the available bandwidth and turn the SYN attack into a bandwidth-based DoS. Nevertheless, as a countermeasure to increase the security of your network, it is difficult to argue against a feature that is built into most routers for sites that have average hosting needs. For very high volume sites, load balancing hardware, redundant servers, and high-bandwidth connections will remain the primary protection mechanism for the near future.

8.8.2 Reverse Path Forwarding

This section addresses good citizenship. The Internet would be much more secure for all of us if every network administrator took the time to properly implement good information security practices. The fact that network security has recently gained momentum and consideration among vendors, management, and government is a good sign that it is being taken more seriously. Reverse path forwarding (RPF) is a feature that attempts to prevent spoofed traffic from originating from your network.

Assume that your internal LAN uses the network block 200.1.1.0/28. If this were the case, it would be a simple matter to create an access-list on your access router that only allowed traffic from this network block to exit to the Internet. It becomes more complicated, however, when you are responsible for 15 network blocks spread over multiple far-flung subnets. If you use an access-list, the list must be updated to reflect any changes in the network topology; otherwise, legitimate traffic may be blocked from leaving the network.

A much simpler and more robust solution is to examine the router's own routing table. Referring back to our 15 hypothetical network blocks, if the source network of a packet entering the LAN interface of a router is not listed in the routing table as being located on that same LAN interface, the router assumes that the packet has been spoofed and drops it.

We see in Exhibit 10 that our access router has a routing table showing three networks on the LAN side of the router. When a packet from a host enters the LAN interface of the access router, it is checked against the routing table. If, from the router's point of view, the packet's source address does not match the interface on which it arrived, the packet is dropped. RPF is superior to access-lists in preventing spoofed traffic from leaving the LAN for two reasons. First, as noted above, it is easier to configure and maintain. Instead of relying on the network administrator to get the configuration of the access-list correct, [6] we let the routing protocols already in use on the router maintain the proper network filtering.
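The check can be sketched as a reverse lookup against the forwarding table: forward the packet only if its source network is reachable through the interface the packet arrived on. The networks and interface names below are hypothetical, with three LAN-side blocks standing in for the exhibit:

```python
from ipaddress import ip_address, ip_network

# Hypothetical forwarding table: network -> interface it is reachable through.
FIB = {
    ip_network("200.1.1.0/28"): "lan0",
    ip_network("200.1.2.0/28"): "lan0",
    ip_network("200.1.3.0/28"): "lan0",
    ip_network("0.0.0.0/0"): "wan0",   # default route toward the ISP
}

def rpf_pass(src_ip, in_iface):
    """Strict RPF: longest-prefix match on the SOURCE address must point back
    at the interface the packet actually arrived on; otherwise it is spoofed."""
    src = ip_address(src_ip)
    best = max((net for net in FIB if src in net),
               key=lambda net: net.prefixlen)
    return FIB[best] == in_iface
```

A packet sourced from 200.1.1.5 arriving on lan0 passes; the same source arriving from the WAN side, or a 10.0.0.0/8 source arriving from the LAN, fails the check and is dropped.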

Exhibit 10: Source Routing Operation


The second advantage is that RPF performs better than access-lists, because RPF matches packets against a router cache table rather than an access-list. Explaining this requires a little knowledge of how routers forward information. It used to be that a router consulted its routing table for each packet it forwarded, a software process that was inherently slow. To enable routers to forward packets at astounding rates, the forwarding logic has been moved into the hardware on the router interfaces. Routers now build a forwarding table from the routing table, with each interface knowing only which networks are reachable through it. RPF takes advantage of this information, which already exists in the cache, and, because it is implemented in the hardware itself, is able to operate much more quickly.

Despite the advantages, RPF is not ideal for all situations. Some older routers, for example, do not have the ability to create express forwarding tables on the interface. These routers then would be unable to support the RPF function.

Furthermore, RPF is not suitable for routers in the core of a network, because it is not uncommon for IP packets to take one route to a destination and a different route on the return path. With complex network topologies, the local forwarding table may be inconsistent with the packets the router is actually receiving, and RPF would incorrectly discard legitimate traffic. Thus, RPF is best configured only on access links for traffic leaving the LAN, because traffic flows on these routers are generally tightly constrained.

8.8.3 Null Interface Routing

As an additional courtesy to the Internet as a whole, packets that clearly have invalid destinations should be dropped. The use of a null interface instead of access-lists is a minor trick that is sometimes useful when filtering traffic. The idea is to route invalid packet destinations to the null interface.

A null interface is simply a virtual trash bin on a router. Although the router treats a null interface as a normal interface, it does not really exist, and any packets to be forwarded over that interface are discarded. If you were to enter the equivalent of this command on a router, "Forward all packets destined for the 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 networks to the null interface," then those packets would effectively be discarded. The destinations do not need to be private networks; any network destination that you wish to block can be routed this way.

Although this accomplishes the same thing as an access-list, the process is much more efficient for the router, which does not need to take the time to consult an access-list but instead follows its normal, high-speed routing procedures.
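Null routing is nothing more than an ordinary forwarding lookup whose result happens to be the bit bucket. A minimal sketch, with hypothetical interface names:

```python
from ipaddress import ip_address, ip_network

# Hypothetical routing table: the private blocks are routed to the null
# interface, so the normal longest-prefix lookup discards them with no
# access-list processing at all.
ROUTES = {
    ip_network("10.0.0.0/8"): "Null0",
    ip_network("172.16.0.0/12"): "Null0",
    ip_network("192.168.0.0/16"): "Null0",
    ip_network("0.0.0.0/0"): "Serial0",   # everything else toward the ISP
}

def forward(dst_ip):
    """Longest-prefix match on the destination; Null0 means silently discard."""
    dst = ip_address(dst_ip)
    best = max((net for net in ROUTES if dst in net),
               key=lambda net: net.prefixlen)
    out_iface = ROUTES[best]
    return None if out_iface == "Null0" else out_iface
```

A packet for 8.8.8.8 is forwarded normally, while anything destined for the private blocks falls into Null0 and vanishes as a side effect of routing itself.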

8.8.4 Source Routing

A technique sometimes used in conjunction with routing to the null interface is the use of source routing. Most routing decisions are based on the destination of a packet only. As the name implies, source routing also allows you to create custom forwarding tables based on the source of a packet. Instead of forwarding packets with a destination IP address of a private network as we did above, we could also create forwarding tables that route all packets with sources of the private address space (or any other network block you wish to discard) to the null interface.

Once created, access-lists need to be applied to an interface to be effective (see Exhibit 11). Packets can be checked relative to the interface itself: a packet can be checked as it enters an interface, or as it exits one. Note that this is per interface, not per firewall. Unless there is a specific need otherwise, packets should generally be checked as they enter an interface.

To understand why, refer to Exhibit 11. In the first example (egress filtering), the firewall device accepts the packet, routes it by determining which interface the packet should be forwarded out of, and then filters it as it exits that interface. Assume that the rule matching the packet causes it to be dropped: we have wasted queue and processor resources to drop a packet. In the second example (ingress filtering), we filter the same packet as it enters the interface of the firewall. Because our matching rule drops the packet, it is discarded without consuming resources on the outbound interface or processor time spent routing it.

There may be exceptions to this rule, but they are generally minor and exist to meet the requirements of specific implementations. For example, if a firewall has more than two interfaces, such as a DMZ or another LAN or WAN segment, then outbound rules on an interface may be required to ensure secure operation. In general, the most efficient practice is to filter and drop packets as they enter a firewall interface whenever possible.
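The resource argument can be made concrete with a toy cost model. The unit costs below are invented purely for illustration: an ingress filter pays only the filtering cost for a dropped packet, while an egress filter pays for routing and queueing before the packet is ever examined.

```python
# Invented, relative unit costs for the three stages a packet may pass through.
ROUTE_COST, QUEUE_COST, FILTER_COST = 2, 1, 1

def ingress_cost(dropped):
    # Filter first; routing and queueing happen only for accepted packets.
    return FILTER_COST if dropped else FILTER_COST + ROUTE_COST + QUEUE_COST

def egress_cost(dropped):
    # Route and queue first; the filter runs only as the packet exits,
    # so the full cost is paid whether or not the packet is dropped.
    return ROUTE_COST + QUEUE_COST + FILTER_COST
```

For accepted packets the two placements cost the same; for dropped packets, ingress filtering is strictly cheaper, which is the entire argument for checking packets as they enter an interface.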

Exhibit 11: Where to Place Firewall Filters?


[6]It is easy to argue that many security lapses in networks are the result of misconfiguration on the part of a human being. Thus, the more we let the computer do, the better the chance of a correct and secure implementation.




Network Perimeter Security: Building Defense In-Depth. Cliff Riggs. ISBN 0849316286, 2004.