Types of Firewall Technologies

Traditionally, firewall technologies can be categorized into a small number of classes, each with its own set of advantages and disadvantages. It is important to understand these core capabilities when making a purchasing decision.

Proxy-Based Firewalls

Also known as application-level or circuit-level gateway firewalls, proxy-based firewalls have been in development since the early 1990s and are still touted as one of the most secure firewall technologies today. This is in large part due to the amount of validation that these firewalls perform on network, transport, and application layer protocols.

Traditional proxy-based firewalls work by fully terminating both ends of a session on the firewall system itself (sometimes using the native operating system's TCP/IP stack, sometimes using a custom network stack developed by the firewall vendor). By terminating, we mean that the firewall impersonates the destination server, resulting in a full TCP connection between the client application and the firewall. A second, new connection is then created from the firewall to the real destination server. Depending on the firewall, this can occur with or without the client application's knowledge. Some proxy-based firewalls require that client applications be "proxy aware" and know they are talking to an intermediary proxy server. Today, however, most proxy-based firewalls perform this termination and forwarding transparently, so the client needs no such knowledge.

Once these connections have been established to the firewall, the native operating system stack on the firewall (or a stack provided by the vendor) then passes all application-level data to a "proxy" program for validation and policy enforcement. Proxy programs are often user-land programs running on the firewall system, like any standard application. In some cases, however, these proxies can be kernel resident for added performance. In a proxy-based firewall, one proxy program exists for each protocol that the firewall supports: for example, one for SMTP, one for DNS, and one for HTTP.
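
To illustrate this flow, the following is a minimal sketch of a transparent TCP proxy: it accepts the client connection, opens a second connection to the real server, and hands the application data to a protocol-specific validation routine before forwarding it. The validate_http function and the addresses are hypothetical placeholders, not any particular vendor's implementation.

    import socket

    LISTEN_ADDR = ("0.0.0.0", 8080)    # where the firewall accepts client connections
    SERVER_ADDR = ("192.0.2.10", 80)   # the real destination server (example address)

    def validate_http(data: bytes) -> bool:
        """Hypothetical per-protocol proxy check: forward only well-formed HTTP requests."""
        return data.startswith((b"GET ", b"POST ", b"HEAD "))

    def serve_once() -> None:
        with socket.socket() as listener:
            listener.bind(LISTEN_ADDR)
            listener.listen(1)
            client, _ = listener.accept()            # connection 1: client <-> firewall
            with client, socket.create_connection(SERVER_ADDR) as server:   # connection 2: firewall <-> server
                request = client.recv(65535)
                if not validate_http(request):       # policy enforcement on application data
                    return                           # drop the session on a policy violation
                server.sendall(request)
                client.sendall(server.recv(65535))   # relay the first chunk of the response

    if __name__ == "__main__":
        serve_once()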

Proxy-based firewalls have historically provided a number of security benefits over other firewall technologies:

  • All application-level data flows through the proxy, allowing full analysis of application-level protocol options, fields, and their content.

  • They rely on a full protocol stack to perform IP fragmentation reassembly, as well as TCP segment reordering. This makes the firewall resilient to attackers manipulating these protocol traits, which have been used in the past to attempt to bypass other firewall technologies.

  • This full analysis and reliable availability of application-level data facilitates easier content filtering, as well as the incorporation of anti-virus and malicious code scanning into proxy-based firewalls.

Given their ability to analyze application layer data, proxy-based firewalls have historically provided the most complete visibility into network traffic.

While proxy-based firewalls have many security benefits, they have also had one significant drawback that has made other technologies more prominent in the market: performance. Passing all application layer data up through a protocol stack and into a user-land process for analysis is expensive. While this works quite well in low-bandwidth networks, it falls short in a high-speed network with a substantial number of parallel connections. Today, most proxy-based firewall vendors provide a hybrid approach, using other technologies in high-bandwidth scenarios.

Some organizations requiring a high level of security, such as financial institutions and the military, have standards requiring two firewalls in series at the perimeter. In such cases they may deploy both a proxy firewall and a stateful packet-filtering firewall in succession, providing "defense in depth."

Stateless Packet Filtering

Stateless packet-filtering firewalls provide filtering based on specific protocol header values. Each packet is examined independently of the others and is passed or dropped according to a rudimentary set of rules. Stateless packet filters look at a fixed offset within a packet for a specific value and either pass or drop the packet based on that value. Common values that are examined include the following (a short sketch of reading these fields at fixed offsets appears after the list):

  • Network layer (IP header):

    • IP source address

    • IP destination address

    • Transport layer protocol carried by this packet

    • Presence of IP options, such as loose or strict source routing, and record route

    • Whether fragmentation is present, and the fragment size

  • Transport layer (TCP and UDP headers):

    • TCP or UDP source port

    • TCP or UDP destination port

  • Transport layer (ICMP header):

    • ICMP type value

    • ICMP code value
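
As a minimal sketch of how such fixed-offset checks might be implemented, the following assumes a raw IPv4 packet with a standard 20-byte header, no options, and an unfragmented TCP or UDP payload; the rule values are hypothetical.

    import struct

    def parse_headers(packet: bytes) -> dict:
        """Read selected fields from fixed offsets in an IPv4 packet (20-byte header, no options)."""
        proto = packet[9]                                          # transport protocol number
        src_ip, dst_ip = packet[12:16], packet[16:20]              # source / destination addresses
        src_port, dst_port = struct.unpack("!HH", packet[20:24])   # first four bytes of the TCP/UDP header
        return {
            "src_ip": ".".join(map(str, src_ip)),
            "dst_ip": ".".join(map(str, dst_ip)),
            "proto": proto,
            "src_port": src_port,
            "dst_port": dst_port,
        }

    def permitted(fields: dict) -> bool:
        """Hypothetical rule: allow only TCP (protocol 6) destined for port 80."""
        return fields["proto"] == 6 and fields["dst_port"] == 80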

Stateless packet filters, as their name implies, do not retain knowledge of session state (that is, whether a TCP session has been established, or whether an outgoing DNS query has been seen). As a result, their usefulness in enforcing stringent perimeter security is limited. This weakness is offset somewhat by the high performance they can achieve, so they are by no means useless.

Most routers support some level of stateless packet filtering, and for many organizations this serves as the first line of defense for filtering out unwanted activity at the network and transport layers. Many organizations implement ingress and egress address filtering here, both to block unwanted visitors from entering their network and to prevent packets with spoofed source addresses from leaving it. The latter is a common scenario in distributed denial-of-service attacks. Many tools used to launch such attacks choose random source IP addresses that are not registered to the organization from which the attack originates, resulting in IP packets with forged source addresses passing out through the organization's gateway. By adding explicit rules that permit only IP packets with addresses registered to the organization to leave the network, this scenario can be avoided. RFC 2267 discusses the benefits of implementing network ingress filtering to prevent these attacks.
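
A minimal sketch of such an egress rule: an outbound packet is permitted only if its source address falls within the organization's registered prefix. The prefix used here is a documentation example, not a real allocation.

    import ipaddress

    # Hypothetical prefix registered to the organization (documentation range).
    REGISTERED_PREFIX = ipaddress.ip_network("198.51.100.0/24")

    def egress_permitted(source_ip: str) -> bool:
        """Drop outbound packets whose source address is not registered to the organization."""
        return ipaddress.ip_address(source_ip) in REGISTERED_PREFIX

    # A packet forged by a denial-of-service tool with a random source address is dropped:
    assert egress_permitted("198.51.100.25") is True
    assert egress_permitted("203.0.113.7") is False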

While stateless packet filters excel in performance, due in large part to their simplicity, they have a number of drawbacks as a result of the limited intelligence they possess. They are not effective at enforcing policy at any layer beyond the transport layer (TCP, UDP, and ICMP).

The inability to effectively police beyond these layers is a result of fragmentation, which is supported at the network (IP) layer, as well as the dynamic nature of application layer protocols. IP fragmentation was created to handle the scenario where a given packet is larger than the MTU of the next network segment. The following scenario illustrates how fragmentation occurs in normal network operation:

After the first packet passes through a router and is too large for the next network segment, it is fragmented into multiple smaller packets. The result is two (or more) IP packets that are each small enough to meet the MTU requirement of the subsequent network segment. The IP protocol implementation on the receiving system is ultimately responsible for queuing these fragments and reassembling them into the original packet before passing it up to the next layer (TCP) for further processing.
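
The arithmetic is simple: each fragment carries an offset, expressed in 8-byte units, telling the receiver where its payload belongs in the original datagram. The sketch below, assuming a fixed 20-byte IP header with no options, splits a payload to fit a smaller MTU.

    IP_HEADER_LEN = 20  # assume a 20-byte IP header with no options

    def fragment(payload: bytes, mtu: int):
        """Split a payload into (offset_in_8_byte_units, chunk) pairs that fit within the MTU."""
        max_data = (mtu - IP_HEADER_LEN) // 8 * 8   # fragment data length must be a multiple of 8
        return [(offset // 8, payload[offset:offset + max_data])
                for offset in range(0, len(payload), max_data)]

    # A 4000-byte payload crossing a link with a 1500-byte MTU becomes three fragments:
    for offset, chunk in fragment(b"x" * 4000, 1500):
        print(offset, len(chunk))    # prints offsets 0, 185, 370 and lengths 1480, 1480, 1040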

In practice, modern TCP/IP stacks attempt to avoid fragmentation using mechanisms such as path-MTU discovery. That said, IP fragmentation is required and supported by all network stacks, and as such, network security devices must process and handle fragmented packets appropriately. Also, it is not the common scenarios that we are concerned about, but rather an attacker who is able to craft his own fragmented packets to exploit the limitations of stateless packet filters.

Since stateless packet filters look at fixed offsets within packets, and since fragmentation can result in application-level data being present at almost any offset, they are not well suited for filtering beyond the network and transport layers.

Historically, fragmentation has also been used to evade packet filters at the transport layer. In the mid-1990s many firewalls had a shortcoming whereby an attacker could cause the TCP header to be fragmented across several packets. This caused portions of the TCP header that were inspected for security purposes to appear in a second packet, rather than at a fixed offset in the first packet. RFC 1858, Security Considerations for IP Fragment Filtering, discusses these concerns and the resulting solutions in depth. Modern packet filters are no longer prone to this attack and simply drop any attempt to fragment the TCP header, as no network should ever have an MTU so small as to require fragmentation at this level!
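
A sketch of the RFC 1858-style countermeasures: drop an initial TCP fragment too short to contain the fields that are inspected, and drop any fragment at offset 1, which could otherwise overwrite them. The minimum length used here (16 bytes, enough to cover the TCP flags) is an illustrative choice.

    TMIN = 16  # bytes of TCP header required in the first fragment (enough to include the flags)

    def drop_fragment(protocol: int, frag_offset: int, transport_bytes: int) -> bool:
        """Return True if a fragment should be dropped. frag_offset is in 8-byte units;
        transport_bytes is the number of transport-layer bytes carried by this fragment."""
        if protocol != 6:                                 # only TCP fragments are of concern here
            return False
        if frag_offset == 0 and transport_bytes < TMIN:   # tiny initial fragment
            return True
        if frag_offset == 1:                              # could overwrite the inspected TCP header fields
            return True
        return False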

Stateful Packet Filtering

Stateful packet filters address the shortcomings of looking at fixed offsets within a packet and maintaining the context of only a single packet at a time. This breed of firewall maintains full state for the protocols that are traveling through it. This applies to both session-oriented and non-session-oriented protocols. The state that is maintained by these firewalls varies depending on the protocol that is being examined. Stateful packet filters maintain state for two primary reasons:

  • To prevent unsolicited requests from passing from the external network to the internal network. Only packets responding to requests that originated from the internal network are allowed to pass in from outside.

  • To overcome the shortcomings of stateless packet filtering firewalls, such as the lack of IP fragmentation tracking, or the tracking of data across multiple packets. Stateful firewalls can track IP fragments, as well as out-of-order TCP segments, in order to correctly inspect data above the network and transport layers.

Stateful packet filters use more system resources in order to maintain state. Whereas stateless packet filters simply look at offsets within a packet, stateful packet filters retain some level of knowledge of previous packets. The creation and deletion of state structures consumes both memory and CPU cycles on the firewall system. These state tables must be managed intelligently in order to avoid denial-of-service attacks against the firewall itself. By intentionally sending the appropriate packet sequences, it is not difficult for an attacker to quickly create an overwhelming number of new state entries on the firewall, consuming both memory and processor time. Some firewalls allow adjustment of the maximum time that a state entry can exist. This is relevant primarily to TCP, where a half-open TCP connection can consume resources unnecessarily.

As a result of the required state management, stateful packet filters are slower than stateless packet filters, but still significantly faster than proxy-based firewalls. The following are several examples of the types of states that are maintained by a stateful packet-filtering firewall.

TCP Protocol State

TCP, being a reliable transport protocol, is also one of the most complex to track. Each endpoint of a TCP connection can be in one of eleven different states at any given time. These states exist in order to establish a reliable connection using a three-way handshake, to handle error conditions, and to disconnect an established session.

When tracking TCP session state, we are primarily interested in keeping track of the following TCP variables:

  • IP source address

  • IP destination address

  • TCP source port

  • TCP destination port

  • TCP sequence number

  • TCP acknowledgment number

  • TCP window size

  • Current connection state for each endpoint

It should be noted that the current state of a TCP connection is not sent as a part of the TCP packet, but is determined by examining the flags present in the TCP flags field, as well as the sequence and acknowledgment numbers. Stateful firewalls must have robust TCP state tracking implementations in order to resist attacks targeted specifically at the firewall itself, whether to evade inspection or to starve the firewall system of resources.
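
The sketch below shows how a state table keyed on the address and port pairs might follow a connection through the three-way handshake by inspecting the TCP flags. It is a deliberately simplified illustration, tracking a composite connection state rather than the full set of eleven per-endpoint states.

    from dataclasses import dataclass

    SYN, ACK = 0x02, 0x10   # TCP flag bits

    @dataclass
    class Entry:
        state: str

    table = {}   # keyed on (src_ip, src_port, dst_ip, dst_port)

    def track(src_ip, src_port, dst_ip, dst_port, flags):
        """Advance a connection's (simplified) state based on the observed TCP flags."""
        key = (src_ip, src_port, dst_ip, dst_port)
        reverse = (dst_ip, dst_port, src_ip, src_port)
        if flags == SYN and key not in table:
            table[key] = Entry("SYN_SEEN")                           # outbound connection attempt
        elif flags == SYN | ACK and reverse in table:
            table[reverse].state = "SYN_ACK_SEEN"                    # server answered the SYN
        elif flags & ACK and table.get(key, Entry("")).state == "SYN_ACK_SEEN":
            table[key].state = "ESTABLISHED"                         # handshake complete

    track("10.0.0.5", 40000, "203.0.113.7", 80, SYN)
    track("203.0.113.7", 80, "10.0.0.5", 40000, SYN | ACK)
    track("10.0.0.5", 40000, "203.0.113.7", 80, ACK)
    print(table[("10.0.0.5", 40000, "203.0.113.7", 80)].state)       # ESTABLISHED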

UDP Protocol State

UDP is not a reliable transport protocol and does not guarantee the delivery of data. As such, it does not possess the connection-oriented characteristics of TCP, and less effort is required to track it. For most UDP-based protocols, state is maintained using only the following variables:

  • IP source address

  • IP destination address

  • UDP source port

  • UDP destination port

In most cases, the primary goal when maintaining state for UDP is to create a state entry for a request that is traveling out of the network through the firewall, and to allow only responses to that request to pass back in. Normally, there is a limited period of time in which a response can appear; if none arrives within that window, the state entry is removed and responses are no longer allowed through. Some firewalls may also choose to keep additional state by examining application layer protocol characteristics.

A common example of where this occurs is when stateful firewalls handle the Domain Name System (DNS) protocol. Outbound DNS query requests result in a state entry being created, ensuring that only inbound DNS responses from the server are allowed back in. Once a response has been seen, this state entry is removed.
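
The following sketch captures that behavior: an outbound query creates an entry with an expiry time, and only a reply matching the reversed address/port tuple, seen before the timeout, is allowed back in and the entry then removed. The 30-second timeout is an arbitrary illustrative value.

    import time

    UDP_TIMEOUT = 30.0   # seconds an entry may wait for a response (illustrative value)
    pending = {}         # (src_ip, src_port, dst_ip, dst_port) -> expiry timestamp

    def outbound_query(src_ip, src_port, dst_ip, dst_port):
        """Record an outbound UDP request (for example, a DNS query to port 53)."""
        pending[(src_ip, src_port, dst_ip, dst_port)] = time.time() + UDP_TIMEOUT

    def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
        """Allow an inbound packet only if it answers a recorded request and arrives in time."""
        key = (dst_ip, dst_port, src_ip, src_port)            # reverse of the original outbound tuple
        expiry = pending.pop(key, None)                       # state is removed once consulted
        return expiry is not None and time.time() <= expiry

    outbound_query("10.0.0.5", 53124, "198.51.100.53", 53)
    print(inbound_allowed("198.51.100.53", 53, "10.0.0.5", 53124))   # True: response to our query
    print(inbound_allowed("198.51.100.53", 53, "10.0.0.5", 53124))   # False: state already removed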

ICMP Protocol State

Like UDP, ICMP is also not a reliable transport protocol. State is maintained primarily to ensure that only responses to legitimate queries are passed back in from the external network. State is maintained by examining the following variables in the IP and ICMP packet headers:

  • IP source address

  • IP destination address

  • ICMP type (for example, ICMP_DEST_UNREACH)

  • ICMP code (for example, ICMP_HOST_UNREACH)

A common scenario where ICMP is tracked occurs when an outbound ICMP ECHO request is observed, resulting in the creation of an ICMP state entry. Only an ICMP ECHO REPLY from the destination host, seen within the configured time window, will be permitted back in. ICMP ECHO requests originating from the external network are therefore not permitted in, thwarting anyone who may attempt to perform network reconnaissance and host discovery.
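
A compact sketch of that rule, keyed on the address pair and the standard ICMP type values (8 for an echo request, 0 for an echo reply); the expiry timer is omitted for brevity.

    ECHO_REQUEST, ECHO_REPLY = 8, 0   # standard ICMP type values
    awaiting_reply = set()            # (local_ip, remote_ip) pairs with an outstanding echo request

    def outbound_icmp(src_ip, dst_ip, icmp_type):
        """Create a state entry when an internal host sends an echo request."""
        if icmp_type == ECHO_REQUEST:
            awaiting_reply.add((src_ip, dst_ip))

    def inbound_icmp_allowed(src_ip, dst_ip, icmp_type):
        """Permit an echo reply only if the protected host pinged that source first."""
        if icmp_type == ECHO_REPLY and (dst_ip, src_ip) in awaiting_reply:
            awaiting_reply.discard((dst_ip, src_ip))
            return True
        return False   # unsolicited echo requests and replies are dropped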

Application-Level State

In the last three examples we discussed how stateful packet filters maintain the state of transport layer protocols (TCP, UDP, and ICMP). Most stateful packet filters also maintain the state of application layer protocols. For some protocols this is required for them to even function correctly, while for others it is an added security benefit.

A common example where it is necessary to keep state at the application layer is in the FTP protocol when using ACTIVE mode FTP. In this mode, the FTP protocol uses two independent sessions in order to transmit files:

  • A control channel: This is a connection that originates from an ephemeral TCP port (an arbitrary port assigned by the client's IP stack) and connects to TCP port 21 on the FTP server. This connection is used to authenticate to the FTP server. It is also the connection over which all commands are transmitted.

  • A data channel: This is a new connection that is created, over which the actual data being retrieved is transmitted. This is used to transmit the file itself or a listing of directory contents when listing directories. When using ACTIVE mode FTP, this connection originates from TCP port 20 on the FTP server and connects back to a new ephemeral port on the FTP client. The client tells the FTP server which port to connect to via the FTP PORT command.

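A representative PORT command as it might appear on the control channel (the client address shown is a documentation example); the last two numbers encode the client's data port:

    PORT 192,0,2,10,16,153        (data port = 16 * 256 + 153 = 4249)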

The actual port that the FTP server must connect back to on the client system is sent in the PORT command, shown above. The client has allocated a new port (in this case 4249) to receive data from the FTP server. This presents a problem from a firewall standpoint, since the client-side port is not a fixed port and is dynamically allocated by the client. Since most firewalls are configured to block incoming connections on ephemeral ports, the data connection would normally be blocked. Stateful firewalls, however, track the FTP control channel for the PORT command and, if present, will temporarily allow a connection from the FTP data port on the server to the ephemeral port on the FTP client. Without this capability, FTP would not function when used in ACTIVE mode.

It should be noted that FTP also supports a PASSIVE mode, whereby the data connection originates from the client to an ephemeral port on the FTP server. Since the data connection is now outbound from the client network, rather than inbound from the server, the client side's firewall need not track the data port negotiation. Now the problem has moved to the server side! If the server is behind a firewall, the same scenario now exists on the server's network, and the server's firewall must track the PASV exchange (in which the server tells the client which port to connect to) in order to allow the incoming data connection from the client. FTP could not even function correctly if both the client and server systems were behind firewalls that did not track these negotiations!

Since stateful packet filters provide some visibility into application layer data, they have been the firewall of choice for many organizations.

Deep Packet Inspection

The newest breeds of firewalls are those that perform what is known as deep packet inspection. Deep packet inspection involves performing even more validation of application layer data and more thorough application layer state tracking, as well as introducing intrusion detection and intrusion prevention capabilities into the firewall. The incorporation of these technologies into the firewall is a natural evolution as the concept of passive intrusion detection did not provide the value that the security industry had preached.

The ability for firewalls to examine application-level data has become increasingly important as today's networking protocols continue to become more and more complex. In addition, the tendency for application developers to encapsulate one protocol within another further drives this. Many applications today use the HTTP protocol in order to tunnel application-level communications in and out of an organization. In fact, many application developers intentionally chose HTTP as their communication mechanism due to the tendency for organizations to permit it through their firewall. The development of web services and SOAP-based applications further complicates policy enforcement.

To address these trends, firewalls must have increasingly detailed knowledge of application-level protocols. As they acquire this knowledge, their capabilities move closer to those of the traditional proxy-based firewall, providing similar security benefits while still retaining the performance benefits of stateful packet filters.

Deep packet inspection firewalls continue to evolve, and as such, the definition differs from vendor to vendor. For some, simply running traditional intrusion detection signatures on the application payload is considered sufficient, while others perform thorough analysis and validation of individual protocols.
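
As a simple illustration of the signature-matching end of that spectrum, the sketch below runs a couple of hypothetical patterns over a reassembled application payload; real products use far larger rule sets and per-protocol decoders.

    import re

    # Hypothetical signatures applied to reassembled application-layer data.
    SIGNATURES = [
        re.compile(rb"\.\./\.\./"),             # directory traversal attempt
        re.compile(rb"(?i)union\s+select"),     # SQL injection attempt inside an HTTP request
    ]

    def payload_malicious(payload: bytes) -> bool:
        """Return True if any signature matches the application payload."""
        return any(sig.search(payload) for sig in SIGNATURES)

    print(payload_malicious(b"GET /index.html HTTP/1.1"))                       # False
    print(payload_malicious(b"GET /page?id=1 UNION SELECT password HTTP/1.1"))  # True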

Deep packet inspection firewalls hold much promise, as they have many of the benefits of the previously mentioned firewall technologies and few of the shortcomings.

Web/XML Firewalls

Web firewalls are another new breed of firewall, developed specifically to protect an organization's web server infrastructure from the growing number of web application threats. As opposed to protecting an entire network, these firewalls are placed directly in front of one or more web servers. As HTTP has become the protocol of choice, the growing sophistication and complexity of web-based applications has led to a corresponding increase in HTTP-based vulnerabilities. The current market for web firewalls is somewhat scattered, with a varying degree of functionality among vendors. Some provide visibility into the HTTP protocol, while others dig deeper into the XML/SOAP web services layer in order to identify and thwart attacks.

Traditional firewalls (primarily proxy-based firewalls) have performed rudimentary validation and consistency checking of HTTP traffic for many years. Web firewalls take this one step further and possess an in-depth knowledge of both the HTTP protocol and the applications running on top of it.

The majority of web firewalls do not, however, provide much enforcement of security policy at layers beneath the application layer, or even other protocols besides HTTP.

Passive Behavioral Profiling

Due to the variety of protocols and the dynamic nature of web content, defining the acceptable behavior to be allowed through a web firewall is next to impossible if done manually. One prominent feature that many web firewall vendors have incorporated is a "learning" or "profiling" mode used to sample and analyze how an organization's web sites are used daily by visitors. The web firewall is placed in this mode for a sufficient period of time in order to profile web traffic as it passes through the device. In doing so, a profile is built that includes many attributes that can later be used to apply a security policy, and to thwart attackers attempting to manipulate applications on the web server. Some of the common variables that are profiled include

  • Cookies that are used by the web site and their format

  • Valid web pages that are served by the web server

  • Common parameters passed when invoking these web pages

  • Form fields used in transactions, along with their values and valid ranges

  • Applications that are accessed on the web server and their parameters

Active Profiling

Since passive profiling may observe only a fraction of a web server's content, some vendors also provide an active profiling capability, which involves crawling a web site's entire content space to seed the profile database.
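
A minimal sketch of the profiling idea: requests observed during the learning period seed a profile of known pages and their parameters, and enforcement later rejects anything outside that profile. The page and parameter names are hypothetical.

    from collections import defaultdict

    profile = defaultdict(set)   # page -> set of parameter names observed during learning

    def learn(page: str, params: dict) -> None:
        """Learning/profiling mode: record the parameters seen for each page."""
        profile[page].update(params)

    def conforms(page: str, params: dict) -> bool:
        """Enforcement mode: a request must use a known page and only known parameters."""
        return page in profile and set(params) <= profile[page]

    # The learning period observes normal visitor traffic (hypothetical page and fields):
    learn("/checkout", {"item_id": "1042", "quantity": "2"})

    print(conforms("/checkout", {"item_id": "977", "quantity": "1"}))   # True
    print(conforms("/checkout", {"item_id": "977", "price": "0.01"}))   # False: injected field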

The combination of these two mechanisms can prove very effective in building a knowledge base robust enough to later thwart attackers. Web firewalls excel in preventing certain varieties of attacks:

  • SQL injection attacks, where attackers manipulate form values in an attempt to inject database commands into an HTTP GET or POST request. In a worst-case scenario, this can lead to full access to a web site's database. In other cases, it can lead to the execution of arbitrary database commands or the dropping of tables.

  • The outbound transmission of social security and credit card numbers.

  • Cross-site scripting, where attackers provide links, via e-mail or a web site, that contain malicious embedded script code. When followed, such a link leads the user to the organization's web site, where the script code then runs in that site's context, potentially allowing the theft of cookies and other information.

  • Form field manipulation, where attackers modify visible or hidden form fields in an attempt to bypass authentication or to manipulate applications. A common example is the manipulation of an item's price in a poorly written shopping cart application in order to acquire it at no cost.

  • Web services/SOAP layer attacks such as manipulation of application variables, in an attempt to subvert applications. By ensuring that all web services requests conform to the XML/SOAP schema or Document Type Definitions (DTD) for the web service, these attacks can be avoided.


