Dealing with Layer 7 Traffic


The process of dealing with Layer 7 traffic inspection, be it for server load balancing, firewall load balancing, Web cache redirection, or any other application, is inherently different from that of Layer 4. We've seen in our coverage of Layer 4 protocols, such as TCP, and Layer 7 protocols, such as HTTP, that certain information is only available at certain times during the session. Figure 6-1 shows a simplified HTTP session, demonstrating that any useful Layer 7 information that may be needed for a server load balancing decision does not appear until at least the fourth packet of the session.

Figure 6-1. An example of one issue when dealing with Layer 7 inspection in content switching.

graphics/06fig01.gif

The consequence of this behavior has a major impact on the performance of many content switches. In general terms, processing traffic at Layer 4 is computationally easier than at Layer 7. If the decision-making information is not available until the fourth packet of a TCP session, the content switch must be able to buffer and store these packets until the relevant information arrives. This process is commonly referred to as delayed binding.

Immediate vs. Delayed Binding of Sessions

All of the examples we saw in Chapter 5, Basic Server Load Balancing, are what we will refer to as immediate bindings. What we mean by this is that the very first packet in the session contains sufficient information to make a load balancing decision. When using a standard IP hashing distribution, for example, the TCP SYN packet from the client that initiates the session will contain the VIP, the destination TCP port (which identifies the service being requested), and the client IP and TCP details needed to make a hash calculation. Figure 6-2 shows a simple traffic flow in an immediate binding session.

Figure 6-2. Traffic flow example in an immediate binding.

graphics/06fig02.gif
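The immediate binding decision described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual algorithm: the server pool, the field names, and the use of MD5 as the hash are all assumptions made for the example. The key point is that every input to the hash is already present in the client's first SYN packet.

```python
# Hypothetical sketch of an immediate-binding decision: every input to the
# hash is present in the very first packet (the client's TCP SYN), so a
# real server can be chosen without buffering anything. The pool, field
# names, and hash choice are illustrative, not from any particular vendor.
import hashlib

REAL_SERVERS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]  # hypothetical pool

def pick_server(client_ip: str, client_port: int, vip: str, vport: int) -> str:
    """Hash the Layer 3/4 details from the SYN to select a real server."""
    key = f"{client_ip}:{client_port}->{vip}:{vport}".encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(REAL_SERVERS)
    return REAL_SERVERS[index]

# The same client 4-tuple always maps to the same server (flow affinity),
# so subsequent packets of the session need no further inspection.
server = pick_server("192.0.2.7", 51234, "203.0.113.1", 80)
```

Because the mapping is deterministic, subsequent packets of the same session hash to the same real server with no per-session state required.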

For Layer 7 services, delayed binding of sessions needs to be implemented. In order for the content switch to parse the required information, it must perform the following tasks:

  1. Terminate the TCP connection from the client by completing the three-way TCP handshake.

  2. Buffer the incoming packets containing the user data. This might not be as simple as buffering the first frame because, within HTTP for example, the required information may not be contained in the first packet.

  3. Parse these packets for the required Layer 7 information, such as the URL, HTTP headers, FTP control commands, or DNS requests.

  4. Make a load balancing decision based on the information found in the user request.

  5. Open a new TCP connection from the content switch to the appropriate real server.

  6. Forward the buffered request packets on to the real server.

  7. For all subsequent packets in the session, the content switch must "splice" these two separate TCP connections together by altering information such as TCP ports and sequence numbers.
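The buffer-parse-decide portion of the steps above can be sketched as follows. This is a minimal model under stated assumptions: the mapping of URL suffixes to server groups is invented for the example, and the sketch deliberately omits steps 1, 5, 6, and 7 (handshake termination, the server-side connection, and sequence-number splicing), which real content switches often perform in hardware.

```python
# A minimal sketch of the delayed-binding decision, assuming a simple rule
# that maps URL suffixes to server groups. Group names are illustrative.
# Only steps 2-4 are modeled: buffer the user data, parse it, decide.

SERVER_GROUPS = {".html": "10.0.1.0-group", ".asp": "10.0.2.0-group"}  # hypothetical

def delayed_binding_decision(packets):
    """Buffer client packets until the HTTP request line is complete,
    then parse the URL and return the chosen server group."""
    buffer = b""
    for pkt in packets:                       # step 2: buffer user data
        buffer += pkt
        if b"\r\n" in buffer:                 # request line now complete
            break
    request_line = buffer.split(b"\r\n", 1)[0].decode()   # step 3: parse
    method, url, version = request_line.split()
    for suffix, group in SERVER_GROUPS.items():           # step 4: decide
        if url.endswith(suffix):
            return url, group
    return url, "default-group"

# As in Figure 6-1, the URL arrives split across two packets, so the
# switch cannot decide until the second data packet has been buffered.
url, group = delayed_binding_decision([b"GET /index.h", b"tml HTTP/1.1\r\n\r\n"])
```

Note that the decision cannot be made after the first data packet alone; the switch must hold client data in memory until the request line is complete, which is precisely the buffering cost that distinguishes Layer 7 from Layer 4 processing.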

Figure 6-3 shows a traffic flow example for a delayed binding session.

Figure 6-3. Traffic flow example in a delayed binding.

graphics/06fig03.gif

It's easy to see from Figure 6-3 that the amount of work required within the content switch to implement Layer 7 server load balancing is considerable when compared to simple Layer 4 processing.

Using Delayed Binding as a Security Mechanism

Delayed binding has a secondary use when implemented in content switches: as a security mechanism to help prevent denial-of-service (DoS) attacks and, in particular, SYN flooding attacks. SYN flooding is an easily instigated attack that aims to fill the backlog buffer of an operating system.

The SYN Flood Attack

All operating systems maintain what is known as a backlog buffer. This is an area of memory reserved to handle new TCP sessions that are not yet fully established. When a client sends a TCP SYN to an object server, the server operating system will create an entry in the backlog buffer as a record that the connection is being established. Once this buffer entry has been created, the server will send back a SYN-ACK to the client, which, typically, would be responded to with a final ACK, thus completing the session establishment.

During a SYN flooding attack, however, the attacking client or clients will send the SYN packet but never the final ACK of the handshake, thus leaving the backlog buffer with an entry that will never be completed. Such an entry has to be allowed to time out based on criteria specific to the operating system. If the attacking client can send enough SYN-only packets, convincing the operating system that each is part of a valid new session, the backlog buffer can be filled easily, preventing other, valid client connections from being established. To a valid client during this period, the server would appear to be unavailable because none of the client-side SYN packets would ever be acknowledged.
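The mechanics of filling a backlog buffer can be modeled in a few lines. This is an illustrative sketch only; the backlog size, state names, and addresses are invented, and real operating systems add timers and other refinements not shown here.

```python
# An illustrative model of a backlog buffer filling with half-open entries.
# The size, state name, and addresses are hypothetical; real stacks also
# run retransmission and reclamation timers not modeled here.

class Backlog:
    def __init__(self, size=5):
        self.size = size
        self.half_open = {}          # flow tuple -> connection state

    def on_syn(self, client):
        """Record a half-open entry; refuse the SYN if the buffer is full."""
        if len(self.half_open) >= self.size:
            return False             # buffer full: no SYN-ACK is ever sent
        self.half_open[client] = "SYN_RECEIVED"
        return True

    def on_ack(self, client):
        """Final ACK of the handshake: the entry leaves the backlog."""
        self.half_open.pop(client, None)

backlog = Backlog()
# The attacker sends SYNs from spoofed sources and never sends the ACK.
for i in range(10):
    backlog.on_syn((f"198.51.100.{i}", 40000 + i))
accepted = backlog.on_syn(("203.0.113.9", 55555))  # a valid client is refused
```

Once the fixed-size buffer holds only half-open entries, the valid client's SYN is refused, which is exactly the unavailability described above.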

The process of generating a SYN flooding attack is trivial, and many tools exist to perpetrate such attacks with minimal resources. Most content switches are capable of providing a layer of protection against SYN flooding attacks given that they are typically one of the first devices in the topology that are session aware. Inherently, the content switch must be capable of protecting both itself (as it too has a session table that may be susceptible to such attacks) and the object servers it is load balancing.

Solution 1: SYN Cookies

The first solution is for the content switch to implement a mechanism known as SYN cookies. The concept of SYN cookies uses two ideas: first, not creating a session entry in the content switch until the third packet (the ACK) has been received; and second, using information contained in the initial client-side SYN to generate the sequence number for the resulting SYN-ACK. By taking information from the client-side SYN packet, hashing information from the IP and TCP layers, and combining this with some form of time-dependent information to form the initial server-side TCP sequence number, the content switch can delay making a session entry until a valid third packet in the handshake arrives.

This approach means that only sessions originating from valid Internet clients will complete the TCP three-way handshake and consequently occupy session table space on the content switch.
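A minimal sketch of the SYN cookie idea follows, assuming an HMAC over the connection 4-tuple plus a coarse time counter. The secret key and field layout are assumptions made for the example; well-known implementations (such as the scheme in Linux) also encode details like the MSS in the low bits of the cookie, which this sketch omits.

```python
# A minimal sketch of SYN cookies, assuming an HMAC over the connection
# 4-tuple plus a coarse time counter. The key and layout are hypothetical;
# real schemes also encode the MSS and rotate the time counter.
import hmac
import hashlib

SECRET = b"per-boot-secret"  # hypothetical per-boot key

def syn_cookie(src, sport, dst, dport, t):
    """Derive the initial sequence number for the SYN-ACK statelessly."""
    msg = f"{src}:{sport}:{dst}:{dport}:{t}".encode()
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

def ack_valid(src, sport, dst, dport, t, ack_num):
    """The client's ACK acknowledges our ISN + 1; recompute, don't look up."""
    return ack_num == (syn_cookie(src, sport, dst, dport, t) + 1) % 2**32

# No session entry exists between these two calls: the cookie itself
# carries the state, so only a completed handshake consumes table space.
isn = syn_cookie("192.0.2.7", 51234, "203.0.113.1", 80, t=42)
ok = ack_valid("192.0.2.7", 51234, "203.0.113.1", 80, 42, (isn + 1) % 2**32)
```

The essential property is that no per-connection state exists between sending the SYN-ACK and receiving the ACK; a spoofed SYN that is never acknowledged therefore costs the switch nothing.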

Solution 2: Session Table Management

The second approach for dealing with this DoS threat is in how the content switch manages entries in its session table. A content switch will typically have a session table much larger than the backlog buffer on an object server, which will offer some resistance in the first instance. Many content switches will also implement an aging process for dealing with sessions that are either idle or have terminated ungracefully, and this process can be extended to deal with "half-open" TCP sessions where the handshake has not completed correctly; for example, during a SYN flood attack. Most implementations of this type of mechanism will offer configurable timers for aging out the half-open sessions.
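The aging process for half-open sessions can be sketched as a sweep over the session table with per-state timeouts. The timer values, state names, and table layout here are illustrative assumptions standing in for the configurable timers most content switches expose.

```python
# A sketch of aging half-open entries out of a session table. The timer
# values and state names are hypothetical stand-ins for the configurable
# timeouts most content switches expose.

HALF_OPEN_TIMEOUT = 4.0      # seconds; aggressive timer for SYN floods
ESTABLISHED_TIMEOUT = 300.0  # seconds; generous timer for idle sessions

def age_sessions(sessions, now):
    """Drop sessions whose state-specific timer has expired.
    `sessions` maps a flow tuple to (state, last_seen_timestamp)."""
    timeouts = {"SYN_RECEIVED": HALF_OPEN_TIMEOUT,
                "ESTABLISHED": ESTABLISHED_TIMEOUT}
    return {flow: (state, seen)
            for flow, (state, seen) in sessions.items()
            if now - seen < timeouts[state]}

table = {
    ("198.51.100.1", 40001): ("SYN_RECEIVED", 100.0),  # stale half-open entry
    ("203.0.113.9", 55555): ("ESTABLISHED", 100.0),    # healthy session
}
table = age_sessions(table, now=110.0)  # only the half-open entry is aged out
```

Keeping the half-open timer far shorter than the established-session timer lets the switch reclaim space consumed by a flood without disturbing legitimate, long-lived connections.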

No content switch manufacturer would position their platform as a single security point against unauthorized entry or attack prevention. However, when used as part of a multilayered security approach, techniques such as those described previously can be very effective in providing extra network security.

Layer 7 Parsing and the Connection: Keep-Alive Header

From our description in Chapter 2, Understanding Layer 2, 3, and 4 Protocols , you'll remember that the client-to-server connection does not necessarily follow the model of one TCP request per object being retrieved. Indeed, it is most common in modern browsers for the underlying TCP connection to remain alive across the retrieval of numerous HTTP objects, and this has a knock-on effect in Layer 7 traffic handling from the content switch's point of view. Imagine that the content switch has gone through the process described previously and offered the client a delayed binding in order to parse the Layer 7 information that may have arrived several frames later. Once the server-side connection for the session has been made, there is nothing to stop the browser sending a request for a different content type, not suited to the selected server, across the existing TCP session.

What the content switch must be able to do in this situation is maintain and manage the tear-up and tear-down of multiple backend, server-side connections, while parsing the client-side connection for incoming HTTP requests. Take the example shown in Figure 6-4: on the client side, a single HTTP/1.1 connection will present two GET requests for two different content types. On the server side, the content switch must bring up a connection to the first object server to service the request for "index.html," and then tear down that connection and establish a new connection to the second server group for the request for "other.asp."
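The client-side parsing the switch must perform can be sketched as follows, assuming HTTP/1.1 requests arriving back to back on one persistent connection. The routing rule and group names are invented for the example, and the sketch records rebinding decisions rather than modeling the actual TCP teardown and setup.

```python
# A sketch of the keep-alive problem from the content switch's side,
# assuming back-to-back HTTP/1.1 requests on one persistent client
# connection. The routing rule and group names are illustrative.

def backend_for(url):
    """Hypothetical content rule: static HTML and dynamic ASP are split."""
    return "html-group" if url.endswith(".html") else "asp-group"

def route_persistent_connection(stream: bytes):
    """Parse each GET request on the persistent connection and record which
    backend connection the switch must hold open to service it."""
    decisions = []
    current_backend = None
    for line in stream.split(b"\r\n"):
        if line.startswith(b"GET "):
            url = line.split()[1].decode()
            backend = backend_for(url)
            if backend != current_backend:   # tear down old, bind to new
                current_backend = backend
            decisions.append((url, backend))
    return decisions

# One client-side connection, two requests, two different server groups,
# as in Figure 6-4.
stream = (b"GET /index.html HTTP/1.1\r\nHost: a\r\n\r\n"
          b"GET /other.asp HTTP/1.1\r\nHost: a\r\n\r\n")
decisions = route_persistent_connection(stream)
```

The single client-side connection thus drives two distinct server-side bindings, which is why the switch must keep parsing the client stream for the lifetime of the session rather than deciding once at establishment.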

Figure 6-4. If the client uses HTTP/1.1 or the Connection: Keep-Alive HTTP header, the content switch must manage the tear-up and tear-down of TCP connections to different servers in the back end.

graphics/06fig04.gif



Optimizing Network Performance with Content Switching: Server, Firewall and Cache Load Balancing
ISBN: 0131014684
Year: 2003
Pages: 85
