Chapter 8: Firewalls


While strictly an extension of the network elements section, firewalls are divergent enough to warrant their own discussion. The first thing to establish is that a firewall is more of an idea than it is a single device. Many network administrators, when asked to display their firewall, will proudly point to a box of some sort with a bunch of network interfaces on it and say, "That there is our firewall." It would be more accurate to say, "That there is a box that is part of our firewall." This is more than a semantic issue: a firewall is the sum total of the devices used to protect an inside network from an outside network. Most companies have at least two pieces of hardware that serve as their firewall: an access router and a hardened bastion host that filters data in some way. A company could also include a proxy server or mail relay/attachment scanning station as part of its firewall. In the end, the firewall is everything that a company uses to protect the "inside" from the "outside." This distinction drives the configuration options of most firewalls, as rules can be configured independently for traffic passing from the inside to the outside and vice versa.

8.1 Types of Firewalls

While entire books have been written about the different types of firewalls, components that can be used in creating firewalls, and their configuration options, they can be broadly summarized as follows.

8.1.1 Packet Filters

Packet filters are often referred to as "first-generation" firewalls in that they are straightforward and somewhat unsophisticated compared to more-modern firewall technologies. This is not to say that they are obsolete, only that they employ a fairly simple logic that is fast, easy to configure, found in any firewall application, and reasonably simple for a determined attacker to get around.

Packet filtering firewalls make forwarding decisions based on the contents of the IP header or the TCP header. Applied to interfaces, either "inbound" or "outbound" packet filters use a list of rules to examine packets as they enter or leave the network interface. In most packet filtering implementations, these rule sets can be applied differently to each interface on a firewall device.

A set of common packet filtering rules may read something like the following:

  • Allow any inbound traffic as long as it is part of an established TCP connection.

  • Allow inbound connections from any source as long as they are destined for our Web server application on this host.

  • Allow return DNS traffic in reply to UDP DNS queries.

  • Allow zone transfers for DNS zone files only between our primary internal DNS server and a defined remote DNS server.

  • Allow inbound e-mail traffic to the mail server.

  • Allow remote POP3 clients to check their e-mail on the POP3 server.

Of course, the actual rules will vary, depending on the security policy chosen by the company, but the above represent a pretty typical rule set for a company that hosts its own Web site, DNS servers, and mail services. The above is also only an inbound list. An equivalent set of rules must be established to control outbound traffic as well. We will discuss the creation of firewall rules in accordance with our security policy a bit later. For now, we are interested in examining how a packet filter would act upon the above rules.

To enforce the above rule set, a packet filter firewall inspects information that is in the packet headers. The IP header provides information about the source and destination IP addresses and the fragmentation status of the packet. The TCP header contains information about the connection status of a TCP circuit and the source and destination ports of the TCP segment. This information is used to determine which application the data is to be forwarded to on the receiving host. Our rule set will have a series of statements in it that will:

  1. Look for the connection status of TCP segments. If the connection status indicates that the packet is part of an established connection, let it through. Otherwise, check the next rule.

  2. Look at the destination IP address of the packet. If its destination is the company Web server IP address, then look at the port of the packet. If the TCP destination port is port 80, allow the packet through. Otherwise, check the next rule.

  3. Look at the source port of a UDP packet. If that source port is equal to 53 (the DNS port), allow the UDP packet into the network. Otherwise, check the next rule.

  4. Look at the source IP address and destination IP address of the packet. If the source is the remote secondary DNS server and the destination of the packet is the local primary DNS server, then check the TCP port. If the port number is 53, used for DNS zone transfers, then let the packet through. Otherwise, check the next rule.

  5. Check the destination of the IP packet; if the destination of the IP packet is that of our mail server and the TCP port is equal to the one that our mail server runs on (port 25 by default), then allow the packet through. Otherwise, check the next rule.

  6. Check the destination of the IP packet; if the destination of the IP packet is that of our POP3 server and the TCP port is equal to the default POP3 port of 110, then allow that packet through.

  7. Deny any packet, regardless of the information in the packet.

The last rule is usually not configured, but it is important to know that firewalls frequently add this rule. The default "deny all" at the end of the rule set means that any traffic that is not specifically permitted in the preceding rules is dropped. When discussing firewall theory in an academic situation, there are two generally accepted modes of configuring network-based access controls. The first method is to block only that traffic which you do not want to enter or leave your network. Given the ease with which most applications can change ports, however, this is a futile gesture. Of the 65,536 ports available in TCP and UDP, blocking ten ports because they are used for prohibited applications is like putting a steel pole in the middle of a soccer field and hoping that the intruders happen to hit the pole as they run across the field. So, while it is useful in discussing packet filtering academically, practically speaking, the converse rule is more useful and is normally applied: that which is not specifically allowed is denied. The default deny all rule at the end of every firewall rule set enforces this concept.
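To make the first-match logic concrete, the following sketch walks a packet through a rule list in order and falls through to a default deny. It is a minimal illustration in Python rather than any vendor's configuration syntax, and the rule names, addresses, and ports are hypothetical.

  # Minimal sketch of first-match packet filtering with an implicit "deny all".
  # The rule set, addresses, and ports below are hypothetical examples.

  WEB_SERVER = "200.200.1.10"
  MAIL_SERVER = "200.200.1.20"
  POP3_SERVER = "200.200.1.20"

  def matches(rule, pkt):
      """Return True if every field named in the rule matches the packet."""
      return all(pkt.get(field) == value for field, value in rule["match"].items())

  # Rules are evaluated top to bottom; the first match decides the packet's fate.
  RULES = [
      {"name": "established TCP", "match": {"proto": "tcp", "ack": True}, "action": "permit"},
      {"name": "inbound web", "match": {"proto": "tcp", "dst": WEB_SERVER, "dport": 80}, "action": "permit"},
      {"name": "DNS replies", "match": {"proto": "udp", "sport": 53}, "action": "permit"},
      {"name": "inbound mail", "match": {"proto": "tcp", "dst": MAIL_SERVER, "dport": 25}, "action": "permit"},
      {"name": "POP3 clients", "match": {"proto": "tcp", "dst": POP3_SERVER, "dport": 110}, "action": "permit"},
  ]

  def filter_packet(pkt):
      for rule in RULES:
          if matches(rule, pkt):
              return rule["action"], rule["name"]
      # Implicit final rule: anything not specifically permitted is denied.
      return "deny", "default deny all"

  # A web request is permitted; an unsolicited telnet attempt falls through to the default deny.
  print(filter_packet({"proto": "tcp", "dst": WEB_SERVER, "dport": 80, "ack": False}))
  print(filter_packet({"proto": "tcp", "dst": WEB_SERVER, "dport": 23, "ack": False}))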

You will notice a great emphasis on mentioning that if a particular rule is matched, then the packet is processed. If there is no match, then the packet is checked against the next rule. This is one of the most fundamental issues of firewall design: the order of the rules is very important. As an example, let us add another rule. We do not want to accept any packets from the outside that carry a source address belonging to our own network's IP address range, a practice known as spoofing. Clearly, if a packet coming from the Internet has the source IP address of our internal network, something is afoot. Creating a filter to protect against this is known as configuring anti-spoofing rules.

Where we place that rule in the above rule set is vitally important if we want the security policy to be enforced. If the rule is applied at the end of the policy, then there are a number of opportunities to get around it. For example, Rule 5 only checks the destination IP address and matches it against that of the internal mail server. If someone on the Internet were to send a packet to that IP address with a spoofed source, the packet filter would allow it through the firewall. In this case, the rule needs to be placed at the top of the list. In this way, any packet is first checked against the anti-spoofing rule and, if it matches, discarded before any other rules are processed.
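The effect of rule ordering can be shown with an equally small sketch. It assumes the same first-match evaluation used above; the addresses and the two rule orderings are hypothetical.

  # Sketch: the same spoofed packet evaluated against two orderings of the same rules.
  # Addresses are hypothetical; "first match wins" is the only logic modeled here.

  INSIDE_NET = "200.200.1."          # our internal address range (toy prefix check)
  MAIL_SERVER = "200.200.1.20"

  def anti_spoof(pkt):
      # Deny anything arriving from the outside that claims an inside source address.
      return "deny" if pkt["src"].startswith(INSIDE_NET) else None

  def allow_mail(pkt):
      return "permit" if pkt["dst"] == MAIL_SERVER and pkt["dport"] == 25 else None

  def evaluate(rules, pkt):
      for rule in rules:
          verdict = rule(pkt)
          if verdict is not None:
              return verdict
      return "deny"                   # default deny all

  spoofed = {"src": "200.200.1.99", "dst": MAIL_SERVER, "dport": 25}

  print(evaluate([allow_mail, anti_spoof], spoofed))   # permit -- mail rule matches first
  print(evaluate([anti_spoof, allow_mail], spoofed))   # deny   -- anti-spoofing rule is checked first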

The primary advantages of packet filters are that they are fast and fairly straightforward to configure. They are fast because, comparatively speaking, packet filters do not have much work to do on each packet. They simply examine them and make a forward or drop decision. No extra effort is spent interpreting what the information in the packet means and no "state" or memory is kept in the firewall device to associate one group of packets with another. Each vendor does have its own particular syntax to use in creating the rules, but generally the pattern is very similar and is based on defining protocols and IP addresses. Packet filters also enjoy broad support due to their low overhead. Indeed, it would be difficult to find a general-purpose firewall device that did not support at least packet filtering.

For all of these advantages, packet filters are not ideal firewalls on their own, although they play an important part in a more complete firewall configuration. For example, Rule 1 above examines inbound packets to see if they are part of existing connections. This is done by examining the TCP header and looking for the ACK bit, which indicates a response to a packet that has already been sent. The logic is that if the ACK bit is set, then something on the inside must have sent out a packet in the first place. This simple logic is easy to defeat by those with malicious intent. It is a trivial exercise to set the ACK bit on any TCP packet and then just send it through the firewall. A straightforward packet generator will allow any attacker access through the firewall by fabricating a packet complying with Rule 1. Granted, the attacker could not establish a new connection, but there is plenty of harm that an attacker could do just by getting packets to the device in the first place. At the very least, it allows an attacker to map out the internal network by launching a network scan.

Setting the TCP ACK bit works because the packet filter does not have any context to work with inasmuch as it does not keep any state information about the packets that it forwards. The packet filtering firewall, for example, does not know that for the past two hours there has been no TCP traffic headed to the internal host at 200.200.1.10. When a packet arrives from the outside destined for this host, as long as the ACK bit is set, the firewall assumes that it is legitimate traffic.

This problem is compounded when trying to allow UDP applications through the firewall. Unlike TCP, UDP has no method of maintaining the state of the connection between two hosts. Indeed, the entire concept of a "connection" is foreign to UDP. In this case, the network administrator has a tough decision to make: either allow UDP traffic based only on port information, in which case traffic can be sent at any time, or disallow UDP traffic altogether. The latter will increase the security of the network, but will not make the users who depend on those UDP applications to get their jobs done very happy.

Because many popular applications use UDP traffic, network administrators are often forced to allow the protocol through the packet filtering firewall. To further complicate things, many of the applications that depend on UDP traffic, such as video, voice, and real-time media, tend to use a wide range of UDP ports. The network administrator can do little to narrow the range of ports that these applications use and thus must open wide ranges of UDP port space on the packet filtering firewall. Because the packet filter keeps no state, no memory of what has been sent and what should be arriving back, these ports end up accepting traffic at all times.

While packet filters were a great security concept when first introduced, and they still serve an important function in an overall firewall configuration, they are only the first step. What we are really interested in is more functionality. Enter the stateful packet filter.

8.1.2 Stateful Packet Filters

When we give a packet filter "memory" and some understanding of normal protocol behavior, we have created a stateful packet filter. This is a firewall device that understands that it should not receive an ACK packet inbound to a given host, because the host in question has not recently established a connection to the source of the packet. The net result is to make the network more secure and easier to configure. As you may recall, one of the characteristics of a good countermeasure is ease of configuration, so this is not an insignificant consideration.

To understand how the stateful packet filter operates, let us look at Rule 1 in our sample inbound firewall rule set:

"1. Allow any inbound traffic as long is it is part of an established TCP connection."

A stateful packet filter would alter this rule just a bit to say:

"1. Monitor any new outbound connections from the inside network. Only allow return traffic back into the network if it matches the IP addresses and port numbers used in a recorded outbound connection."

Thus far, all we have done is add overhead to the packet filter. Now it must maintain connection information between hosts based on the traffic it monitors. The effect, however, is to increase the security on our network. No longer will tricks such as setting the ACK bit work. Furthermore, the concept of an "established" connection is unique to TCP. Other protocols, such as UDP and IP, do not have the established concept to work from. With a stateful packet filter, however, the state mechanism can be configured to logically associate connections. For example, if a DNS query is sent from the inside network to an outside server, the stateful packet filter notes the source and destination of the IP packet and the source and destination UDP port numbers. If a UDP packet returns with source and destination values reversed, then the stateful packet filter can safely assume that the return packet is associated with the packet that was just sent. To prevent misuse of this feature, most stateful packet filters allow a network administrator to configure a timeout value for connectionless protocols. Therefore, the stateful packet filter can keep the UDP connection information active for 60 seconds. If no packets are observed for a period of 60 seconds, the connection information is removed.
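A rough sketch of the state mechanism is shown below: outbound flows are recorded in a table, return traffic is admitted only if it matches a recorded flow with addresses and ports reversed, and idle UDP entries expire after a configurable timeout. The table layout, addresses, and the 60-second value are illustrative assumptions, not any particular product's implementation.

  import time

  # Sketch of a stateful packet filter's connection table. Outbound packets create
  # entries; inbound packets are admitted only if they match a recorded entry with
  # source/destination reversed. Idle UDP entries expire after UDP_TIMEOUT seconds.

  UDP_TIMEOUT = 60
  state_table = {}   # key: (proto, src, sport, dst, dport) -> last time traffic was seen

  def note_outbound(proto, src, sport, dst, dport):
      state_table[(proto, src, sport, dst, dport)] = time.time()

  def allow_inbound(proto, src, sport, dst, dport):
      # Return traffic arrives with the addresses and ports reversed.
      key = (proto, dst, dport, src, sport)
      seen = state_table.get(key)
      if seen is None:
          return False
      if proto == "udp" and time.time() - seen > UDP_TIMEOUT:
          del state_table[key]        # idle too long; remove the stale entry
          return False
      state_table[key] = time.time()  # refresh the entry on matching traffic
      return True

  # An inside host sends a DNS query; the matching reply is allowed back in.
  note_outbound("udp", "200.200.1.15", 40123, "198.51.100.53", 53)
  print(allow_inbound("udp", "198.51.100.53", 53, "200.200.1.15", 40123))   # True

  # An unsolicited UDP packet to the same host has no matching state and is dropped.
  print(allow_inbound("udp", "203.0.113.7", 53, "200.200.1.15", 40999))     # False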

We see that this allows us to shorten even our simplified rule set by at least one entry. We no longer need to specifically allow return DNS traffic back into the network. If other UDP protocols need to be used by users of the network, then this also does not require additional configuration to the filter list. In a more complex environment, the rule set would most likely be able to eliminate even more distinct rule set entries. Shorter rules are less complex and easier to manage. Shorter rules are less likely to be misconfigured because they are easier to follow. Shorter rules have less overhead for the firewall device. Shorter rule sets are better!

For all of their advantages over traditional packet filters, stateful packet filters are not without their shortcomings. First, they do have additional overhead associated with the process of keeping track of connections. While this was more of a problem in the past, computing power has so outpaced the speed of the average company Internet access line that this is not really a pressing issue in more recent configurations. This means that even a fairly low-powered and very inexpensive hardware platform would be sufficient to keep up with the average T-1 access rates that most companies use for WAN access.

The real problem with stateful packet filters is that they still only have limited intelligence when working with packets. The problem might best be explained through the use of a couple of examples. The default installation of IIS 5.0 on Windows 2000, a popular Web server for Microsoft Windows networks, has a security flaw that allows an attacker to send a specially constructed URL, such as

www.proteris.com/scripts/..%c1%c1../winnt/system32/cmd.exe?/c+copy+c:\winnt\system32\cmd.exe+c:\winnt\system32\root.exe

that allows remote users to move between directories on the IIS server and change files to suit their goals. This means that a Web user can enter a specially constructed URL similar to that above and read the contents of almost any drive. Depending on the permissions given to the Web server, the user is also allowed to create, delete, and otherwise manipulate server files. This clearly can be considered a serious vulnerability.

When the above attack is in progress, the stateful packet filter, following our rule set, would allow traffic to the Web server and keep track of the connection information. After all, to the stateful packet filter, the packet simply looks like an IP packet with source and destination IP addresses that match our rules and a destination TCP port that is allowed as well. In this case, however, the packet itself is legitimate, but the instructions contained in the packet constitute the real risk.

The concept works the same in the reverse direction, when users connect to the Internet. Like many companies, we will assume that our outbound filters have a rule similar to the following:

  1. Allow users internal to our network to view normal and secure Web pages.

This would translate into two rules that would monitor outgoing IP packets for the following information:

  1. Allow packets with a source IP address of our network and any destination IP address that have a TCP destination port of 80 for normal Web traffic. Otherwise, check the next rule.

  2. Allow packets with a source IP address of our network and any destination IP address that have a TCP destination port of 443 for secure Web traffic. Otherwise, check the next rule.

The stateful packet filter will duly note the establishment of a new connection that matches the above rules and allow the return traffic back into the network. This is where the functionality of the stateful packet filter stops, however. The traffic is in no way monitored to see what the user is accessing via this connection, be it work-related information, stock quotes, pornography, or viruses downloaded via HTTP. Even the URL itself may be dangerous to the health of the connecting computer, as recent operating system security reports have shown us. The stateful packet filter can no more prevent this activity than the traditional packet filter can.

What is required is some way to be able to interpret the user data and make forwarding decisions based on the information inside the packet. The simple way to do this is through a proxy server.

8.1.3 Proxies

Over the course of several years, I have had reason to designate my wife as a proxy in certain circumstances. Generally, this is due to the need to travel on business during important personal legal exchanges. When acting as a proxy, my wife is authorized to do things on my behalf. In the networking world, proxies operate on the same concept: a proxy will act on behalf of another host in the network. While not strictly a firewall technology, proxy servers are often used as part of an overall firewall solution. An example is in order to explain the proxy concept.

A host internal to the network wants to retrieve a Web page in the outside world. The Web browser has been configured with the IP address or host name of the proxy server (see Exhibit 1). Instead of sending the request to the Web page itself, the host browser application sends the request to the proxy server. The proxy will then make a decision based on its configuration to either deny the request or make a connection to the requested Web page on behalf of the internal host. The host itself does not make outside connections; the proxy does. The primary advantage of this approach in a corporate setting is that the proxy can be configured to allow or deny access to certain types of resources. For example, a list of allowed or prohibited Web sites can be configured into the proxy. When a user request is made, the proxy checks its lists and forwards or drops traffic according to its configuration. A proxy can also be configured to make this decision based upon the username or group permissions used to access the proxy itself.
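The decision the proxy makes can be sketched in a few lines of Python. This is a conceptual model rather than a working proxy server: the blocked-site list and the example URL are hypothetical, and the outbound fetch simply uses the standard urllib module to stand in for the proxy's own connection.

  from urllib.parse import urlparse
  from urllib.request import urlopen

  # Sketch of a proxy's decision step: check the requested site against a configured
  # list and either refuse the request or fetch the page on the client's behalf.
  # The blocked hosts and example URL are hypothetical.

  BLOCKED_HOSTS = {"games.example.com", "streaming.example.net"}

  def proxy_request(url):
      host = urlparse(url).hostname
      if host in BLOCKED_HOSTS:
          return 403, b"Blocked by proxy policy"
      # The proxy, not the internal host, makes the outside connection.
      with urlopen(url, timeout=10) as response:
          return response.status, response.read()

  status, body = proxy_request("http://www.example.com/")
  print(status, len(body), "bytes returned to the internal client")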

Exhibit 1: TCP Proxy Example


Proxy servers have two main classifications. The first is that of the application proxy. The application proxy is, as the name suggests, specific to a single application. You may have a server on your network that will operate as an HTTP or FTP proxy. To further confuse the issue, proxy servers are readily available that combine a number of application proxy servers. This means a single server may operate as both an HTTP and FTP proxy. The second class of proxies is known as a circuit-level proxy. This proxy differs from the application-level proxy in that it will proxy all TCP/IP traffic. Instead of operating only for a single application layer protocol, circuit-level proxies act as intermediaries for all transport layer traffic.

For most application layer proxies, there is a secondary performance benefit that makes the use of a proxy advantageous to network users as well. If a second host inside the company wished to access the same Web page as the earlier host, the proxy could be configured to cache, or store, the Web page locally. For the second and subsequent requests, the proxy need not make a connection to the remote site but can serve the page locally. This improves performance for the network users and reduces bandwidth consumption on the WAN link.
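The caching behavior can be added to the same idea with a small sketch: repeat requests are answered from a local store, and only cache misses generate an outbound connection. The in-memory dictionary here is an illustrative simplification; a real proxy also honors expiration and freshness rules.

  from urllib.request import urlopen

  # Sketch of proxy caching: the first request for a URL is fetched from the origin
  # and stored; later requests for the same URL are answered from the local copy.
  # An in-memory dict stands in for the proxy's cache; real proxies also honor
  # expiry headers, which this sketch ignores.

  cache = {}

  def cached_fetch(url):
      if url in cache:
          return "HIT", cache[url]          # served locally, no WAN traffic
      with urlopen(url, timeout=10) as response:
          body = response.read()
      cache[url] = body
      return "MISS", body

  print(cached_fetch("http://www.example.com/")[0])   # MISS: fetched from the origin
  print(cached_fetch("http://www.example.com/")[0])   # HIT: served from the cache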

Proxies can also serve an important role in the auditing of Web traffic patterns. Most proxy applications will also record the Web sites that users access, allowing a convenient way to track individual usage patterns on company time.

The placement of a proxy is an important consideration as to the amount of security and monitoring of network traffic it can provide (see Exhibit 2). If the network configuration is not designed so that egress traffic must use the proxy, an enterprising user can simply choose to bypass it. This is because most proxy configuration is done as part of the configuration for the application itself. For example, if I were to configure the Mozilla Web browser to use a proxy server, I would enter the appropriate IP address or proxy name in the "Proxy" preferences dialog box. Each HTTP request sent from the Mozilla Web browser would then be directed to the configured proxy. If I open up the Internet Explorer Web browser without a proxy configuration, my requests are sent directly to the remote Web site, bypassing the proxy. From this, it becomes clear that the protections offered by an application proxy may require a willing public. There are several ways around this issue. The first, if your operating system allows it, is to configure the user profiles so that users cannot change the settings on the Web browser. This may be effective for those who access the Web through a host on your network, but does little to control those who connect via other programs or rogue hosts. The second option is to configure what is known as a transparent or in-line proxy server. In Exhibit 2, the configuration on the left is a typical proxy configuration where users are allowed to connect to the Internet either via the proxy or without it. The configuration on the right, on the other hand, has the proxy server inline with outbound requests. The only option for Internet connections in this case is through the proxy server.

Exhibit 2: Proxy Implementation Considerations


There are other options to prevent users from directly accessing the Internet without the mediation of a proxy. One mechanism is to configure any firewalls to only allow outbound application traffic sourced from the proxy server. In this way, users who do not use the proxy are blocked by the firewall, but proxy connections are allowed through. Another option is that some firewall products will transparently redirect traffic to a specified proxy server. In this case, all traffic to be proxied is redirected, regardless of the wishes of the user.
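The first mechanism amounts to a single outbound rule, sketched below: web traffic may leave the network only if its source address is the proxy server itself. The proxy address and port numbers are hypothetical.

  # Sketch: an outbound filter that only lets web traffic leave if it comes from
  # the proxy server itself, so hosts that bypass the proxy are simply dropped.
  # The proxy address and port numbers are hypothetical.

  PROXY_ADDR = "200.200.1.5"
  WEB_PORTS = {80, 443}

  def allow_outbound(pkt):
      if pkt["dport"] in WEB_PORTS:
          return pkt["src"] == PROXY_ADDR   # only the proxy may talk to web ports
      return False                          # everything else is denied outbound

  print(allow_outbound({"src": PROXY_ADDR, "dport": 80}))      # True: via the proxy
  print(allow_outbound({"src": "200.200.1.44", "dport": 80}))  # False: direct attempt blocked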

The final option is to configure a circuit-level gateway that proxies all incoming connections. The advantage of this approach is that it can intercept all connections without end users being aware of it or having to configure their hosts at all. If you travel frequently and stay in hotel rooms equipped with high-speed Internet service, you may have noticed that some service providers allow you to connect to the Internet without changing any IP information on your computer. Assuming that people with all sorts of host configurations are connecting to the hotel network, how is that possible? By placing a circuit-level gateway into the network, the hotel is able to accept connections from any IP configuration and proxy them to a consistent configuration, in much the same manner as network address translation.

One disadvantage of application-based proxy servers is that they require client software that is aware of, and can interact with, the proxy server itself. For the most common applications, such as HTTP, HTTPS (SSL), and FTP, this is not a problem, and most Web browsers and other client software are preconfigured to support proxy servers. Newer or less commonly used protocols, however, may not have proxy support built into them. When a new application is developed, there is commonly a lag between the time that the protocol becomes popular and the time that proxies are developed to support secure and controlled usage of the protocol. This can leave network administrators in a difficult spot in the meantime.

By necessity, proxy servers can also slow network response times. Each time user traffic is redirected to the proxy server, there is a short delay as the proxy checks its cache for a local copy of the resource the user is requesting. If it is in the cache, then user performance is greatly improved. If it is not in the cache, however, the proxy must then initiate the outbound connection. While the delay is not great, depending on the size of the proxy, the speed of the hard drives, and the number of requests per second, the proxy can increase delays in a busy network. For the sake of comparison, depending on the usage patterns of users, most proxy administrators feel that they are getting high marks if their proxy can locally service 30 percent of network requests. If the caching feature of proxy servers is not utilized, then the delay that is introduced is minimal; however, there still may be overhead associated with maintaining the logical network connections for a great number of hosts. It should be noted that for the 30 percent of network requests that the proxy can serve locally, the performance for those users is excellent and bandwidth utilization on the company access link is likewise reduced.

While proxies can be an improvement in the amount of control over network traffic, what is really required is an application that can dig into the user data and make judgments as to what should be in the data itself. These devices are known as application layer firewalls.

8.1.4 Application Layer Firewalls

While packet filters are primarily interested in the IP, TCP, and UDP information that is in a packet's headers, application layer firewalls are interested in the application layer data that those headers encapsulate.

For the sake of example, let us recall the protection that a packet filter would provide our internal Web server from the outside world. Data entering the network would be controlled through the examination of the packet IP address: the destination IP address of traffic entering the network must match that of our Web server. Furthermore, the TCP port must match that of the process on the Web server, typically port 80. Once those two elements have matched the access list, the packet is forwarded to the Web server, regardless of the data that the packet actually contains.

In this specific example, there is really only a fairly small set of valid requests that the client application should be sending to the Web server. An application layer firewall could check the data of the packet against a database of attack patterns and a database of allowed and normal page requests. If the packet is found to be valid, it would then continue to the Web server. If not, it would be rejected.

The application layer firewall has an understanding of the normal structure of data used by a particular application. It can check the data of a received packet against a database of what is "good" or "bad" and make independent decisions based upon the packet.
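A simplified sketch of this inspection step follows: the request line carried in the packet is checked against a small set of attack patterns and, optionally, against a list of the pages the server is expected to serve. The patterns, paths, and matching logic are illustrative assumptions, not a real signature database.

  import re

  # Sketch of application layer inspection: look inside the HTTP request itself,
  # reject anything matching a known attack pattern, and (optionally) require the
  # request to be one of the pages the server is actually expected to serve.
  # The patterns and the allowed-path list are illustrative, not a real signature set.

  ATTACK_PATTERNS = [
      re.compile(r"\.\./"),                 # directory traversal
      re.compile(r"%c0%af|%c1%1c", re.I),   # encoded traversal variants
      re.compile(r"cmd\.exe", re.I),        # attempts to reach a command shell
  ]

  ALLOWED_PATHS = {"/", "/index.html", "/products.html", "/contact.html"}

  def inspect_http_request(request_line):
      method, path, _version = request_line.split()
      if any(p.search(path) for p in ATTACK_PATTERNS):
          return "reject"                   # matches a known attack signature
      if path not in ALLOWED_PATHS:
          return "reject"                   # not a normal, expected page request
      return "forward"

  print(inspect_http_request("GET /index.html HTTP/1.1"))
  print(inspect_http_request("GET /scripts/..%c1%1c../winnt/system32/cmd.exe?/c+dir HTTP/1.1"))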

Application layer firewalls offer a great deal of control and protection for our network traffic. As usual, however, we cannot have it all. Based on their operation, application layer firewalls are very specific to a particular type of application. While the most popular application services are supported, they may not all be supported on one device. For example, you would need a separate program to check your e-mail for attachments and viruses, a different one to monitor user Web traffic for downloads and harmful applets, and yet another to protect your own Web server against attacks from the Internet. Nor are all applications supported. As with proxy servers, there is a market lag between the development of a protocol and the availability of application layer firewalls that understand it.

8.1.5 Network Address Translation

While we commonly refer to a firewall as a device on a network by pointing to it with a stubby finger and saying, "That is our firewall," the reality of it is that a firewall is a service more than a device. Likewise, a firewall is the entirety of protections that we put into place on our networks to protect the inside from the outside, and vice versa. Based on this, our definition of a firewall must include network address translation (NAT). The advantages and disadvantages of network address translation were discussed in an earlier chapter, but it is fair to include NAT as part of an overall firewall solution, regardless of any IP addressing issues that may exist.
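As a reminder of how the translation itself works, the sketch below shows port address translation in miniature: outbound packets have their private source address and port rewritten to a single public address, and the mapping is remembered so that replies can be translated back to the inside host. The addresses and port numbers are hypothetical.

  import itertools

  # Miniature sketch of port address translation (NAT overload): private source
  # addresses are rewritten to one public address, and the mapping is kept so that
  # return traffic can be translated back to the original inside host.
  # All addresses and ports below are hypothetical.

  PUBLIC_ADDR = "203.0.113.2"
  _next_port = itertools.count(20000)     # public-side ports handed out in order
  nat_table = {}                          # public_port -> (inside_addr, inside_port)

  def translate_outbound(src, sport):
      public_port = next(_next_port)
      nat_table[public_port] = (src, sport)
      return PUBLIC_ADDR, public_port     # what the outside world sees

  def translate_inbound(dport):
      # Replies addressed to the public port are rewritten back to the inside host.
      return nat_table.get(dport)         # None means no mapping: drop the packet

  print(translate_outbound("192.168.1.10", 51515))   # ('203.0.113.2', 20000)
  print(translate_inbound(20000))                     # ('192.168.1.10', 51515)
  print(translate_inbound(20001))                     # None: unsolicited, dropped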



