Emerging Network Services and Appliances


Over the past several years, enterprise networks have evolved significantly to handle Web traffic. Enterprise customers are realizing the benefits, embracing intelligent IP-based services in addition to traditional stateless Layer 2 and Layer 3 services at the data center edge. Services such as SLB, Web caching, SSL acceleration, NAT, QoS, and firewalls are now common at every data center edge. These devices are either deployed adjacent to network switches or integrated as an added service inside the network switch, and a multitude of vendors can implement any particular set of functions. The following sections describe some of the key IP services you can use in crafting high-quality network designs.

Server Load Balancing

Network SLB is essentially the distribution of load across a pool of servers. Incoming client requests destined for a specific IP address and port are redirected to a pool of servers, and the SLB algorithm determines the actual target server. The first form of server load balancing was DNS round-robin, where a Domain Name Service (DNS) resource record allowed multiple IP addresses to be mapped to a single domain name and the DNS server returned one of the IP addresses in rotation. Round-robin provides only a crude way to distribute load across servers. Its limitations include the need for a service provider to register each IP address, and because some Web farms now grow to hundreds of front-end servers and every client might inject a different load, the result is an uneven distribution of load. Modern SLB, where one virtual IP address maps to a pool of real servers, was introduced in the mid-1990s. One of the early successes was the Cisco LocalDirector, with which it became apparent that server load balancing was an ideal solution for increasing not only the availability but also the aggregate service capacity for HTTP-based Web requests.

FIGURE 4-2 describes a high-level model of server load balancing.

Figure 4-2. High-Level Model of Server Load Balancing


In FIGURE 4-2, the incoming load is λ, spread out evenly across N servers, each having a service rate µ. How does the SLB device determine where to forward the client request? The answer depends on the algorithm.
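Before comparing algorithms, one sanity check on this model is worth stating (a standard queueing-theory fact, added here as context rather than taken from the figure): no balancing scheme can keep the queues bounded unless the offered load is below the aggregate service capacity,

\rho = \frac{\lambda}{N\mu} < 1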

One of the challenges faced by network architects is choosing the right SLB algorithm from the plethora of SLB algorithms and techniques available. The following sections explore the more important SLB derivatives, as well as which technique is best for which problem.

Hash

The hash algorithm pulls certain key fields from the incoming client request packet, usually the source/destination IP addresses and TCP/UDP port numbers, and uses their values as an index into a table that maps to the target server and port. This is a highly efficient operation because the network processor can execute it in very few clock cycles, performing expensive read operations only for the index table lookup. However, the network architect needs to be careful about the following pitfalls (a short sketch of the technique follows the list):

  • Megaproxy architectures, such as those used by some ISPs, remap the dial-in client's source IP address to that of the megaproxy, because the client's actual dynamically allocated IP address might not be routable. A single client's successive requests can then emerge from different proxy addresses, so be careful not to assume stickiness properties for the hash algorithm.

  • Hashing bases its assumption of even load distribution on heuristics, which require careful monitoring. It is entirely possible that the mathematics of the hash will skew the load distribution, resulting in worse performance than round-robin.
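To make the index-table idea concrete, here is a minimal sketch of hash-based selection in Python. The field choices, the modulo indexing, and the table layout are illustrative assumptions, not any vendor's actual implementation.

import hashlib

def hash_select(src_ip, src_port, dst_ip, dst_port, servers):
    # Hash the flow's key fields into a fixed-size index table.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    index = int.from_bytes(hashlib.md5(key).digest()[:4], "big") % len(servers)
    return servers[index]

# The same fields always yield the same server, but a megaproxy can
# present different source addresses for one client, defeating stickiness.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
print(hash_select("192.191.3.89", 34567, "120.141.0.19", 80, servers))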

Round-Robin

Round-robin (RR) and weighted round-robin (WRR) are the most widely used SLB algorithms because they are simple to implement efficiently. The RR/WRR algorithm looks at the incoming packet and remaps the destination IP address/port combination to a target IP/port taken from a fixed table and a moving pointer. (The Least Connections algorithm, by contrast, requires at least one additional process to continually monitor the requests sent to and received from each server, estimating each queue's occupancy; from that information the target IP/port for an incoming packet can be determined.) The major flaw of RR/WRR is that it assumes the servers are evenly loaded; otherwise the resulting architecture is unstable, because requests can build up on one server and eventually overload it. A sketch of the technique follows.
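A minimal sketch of the moving-pointer idea behind RR/WRR. The naive weight expansion shown here is an assumption for clarity; commercial implementations typically smooth the rotation.

import itertools

def weighted_round_robin(weights):
    # Expand each server according to its weight, then rotate a fixed
    # pointer through the table -- no feedback from the servers is used.
    table = [s for s, w in weights.items() for _ in range(w)]
    return itertools.cycle(table)

picker = weighted_round_robin({"10.0.0.1": 3, "10.0.0.2": 1})
print([next(picker) for _ in range(8)])
# ['10.0.0.1', '10.0.0.1', '10.0.0.1', '10.0.0.2', '10.0.0.1', ...]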

Smallest Queue First/Least Connections

Smallest Queue First (SQF) is one of the best SLB algorithms because it is self-adapting. This method considers the actual capabilities of each server and knows which server can best absorb the next request. It also provides the least average delay and, above all, is stable. In commercial switches, this is close to what is referred to as Least Connections; however, commercial implementations take some cost-reduction shortcuts that only approximate SQF. FIGURE 4-3 provides a high-level model of the SQF algorithm.

Figure 4-3. High-Level Model of the Smallest Queue First Technique


Data centers often have servers that all perform the same function but vary in processing speed. Even when the servers have identical hardware and software, the actual client requests may exercise different code paths on the servers, injecting different loads on each server and resulting in an uneven distribution of load. The SQF algorithm determines where to send the incoming load by looking at the queue occupancies. If server i becomes more loaded than the others, its queue Qi begins to build up, and the SQF algorithm automatically adjusts itself and stops forwarding requests to server i. Because the other SLB variations do not have this crucial property, SQF is the best SLB algorithm. Further analysis shows that SQF has another, more important property: stability, which describes the long-term behavior of the system. A sketch of SQF selection follows.
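A minimal sketch of SQF selection. The occupancy counts would come from the connection-monitoring process; here they are assumed to be a plain dictionary.

def sqf_select(queue_occupancy):
    # Forward to the server with the smallest queue. An overloaded
    # server's queue grows, so it automatically stops receiving new
    # requests until it drains -- the self-adapting property above.
    return min(queue_occupancy, key=queue_occupancy.get)

print(sqf_select({"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}))  # 10.0.0.2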

Figure 4-4. Round-Robin and Weighted Round-Robin


Finding the Best SLB Algorithm

Recently, savvy customers have begun to ask network architects to substantiate why one SLB algorithm is better than another. Although this section requires significant technical background, it outlines a proof of why SQF is the best algorithm in terms of system stability.

The SLB system, which is composed of client requests and the servers, can be abstracted for the purposes of analysis as shown in FIGURE 4-5. Initial client Web requests (that is, when the client first fetches the home page, excluding correlated subsequent requests) can be modeled as a Poisson process with rate λ. The Poisson process, whose interarrival times are exponentially distributed, is a reasonably accurate model in telecommunication network theory as well as for Internet session-initiation traffic. The Web servers or application servers can be modeled as M/M/1 queues: there are N of them, independent and of potentially different capacities, so you can model each one with its own range of service times and corresponding average. This model is reasonable because it captures the fact that client requests can invoke software code paths that vary, as well as hardware configuration differences. The SLB shown is subjected to an aggregate load from many clients, each with its own Poisson request process. Because one fundamental property of the Poisson process is that the sum of independent Poisson processes is also a Poisson process, we can simplify the complete client side and model it as one large Poisson process of rate λ. The SLB device forwards each initial client request to the least-occupied queue. There are N queues, each with a Poisson arrival process and an exponential service time; hence we can model all the servers as N M/M/1 queues.

Figure 4-5. Server Load Balanced System Modeled as N - M/M/1 Queues


To prove that this system is stable, we must show that under all admissible time and injected-load conditions the queues never grow without bound. There are two approaches we can take:

  • Model the state of the queues as a stochastic process, determine the Markov chain, and then solve for the long-term equilibrium distribution π.

  • Craft a Lyapunov function L(t) that accurately models the growth of the queues, and then show that over the long term (that is, after the system has had time to warm up and reach a steady state) and beyond a certain threshold, the rate of change of queue size is negative and remains negative for large enough L(t). This is a common and proven technique found in many network analysis research papers.

    We will show that:

    dL/dt < 0 for all values of L(t) greater than some threshold. It turns out that the expected value of the single-step drift is equivalent but much easier to calculate, and that is the technique we will use.
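    In symbols, with a quadratic Lyapunov function (one common choice; this sketch is an assumption on our part, and Appendix A may use a different form):

    L(t) = \sum_{i=1}^{N} Q_i(t)^2, \qquad \mathbb{E}\bigl[\, L(t+1) - L(t) \mid \mathbf{Q}(t) \,\bigr] \le -\epsilon \quad \text{whenever } L(t) > B

    for some constants ε > 0 and threshold B; a persistently negative expected drift above B implies the queues remain bounded over the long run.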

We will perform this analysis by first obtaining the discrete-time model of one particular queue and then generalizing the result to all N queues, as shown in the system model. Taking the discrete-time view, the state of one of the queues can be modeled as shown in FIGURE 4-6.

Figure 4-6. System Model of One Queue


The queue occupancy at time t+1 equals the queue occupancy at time t, plus the number of arrivals in slot t+1, minus the number of departures (requests serviced) in slot t+1:

Q(t+1) = Q(t) + A(t+1) - D(t+1)

Because the state of the queue depends only on the previous state, this is a valid Markov process, for which there are known, proven methods of analysis to find the steady-state distribution. However, because we have N queues, the direct mathematics is very complex. The Lyapunov function is an extremely powerful and accurate method for obtaining the same results, and it is far simpler. See Appendix A for more information about the Lyapunov analysis. A small simulation of the recursion follows.
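The recursion above is easy to simulate. The following sketch (Python, with illustrative arrival and service rates that are our assumptions, not the book's) applies the same update to N queues and compares round-robin dispatch with SQF:

import math
import random

def poisson(lam, rng):
    # Knuth's method for drawing a Poisson-distributed arrival count.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate(dispatch, steps=50000, n=4, lam=3.2, mu=0.9, seed=1):
    # Q_i(t+1) = Q_i(t) + A_i(t+1) - D_i(t+1), with Q_i kept nonnegative.
    rng = random.Random(seed)
    q = [0] * n
    total, pointer = 0, 0
    for _ in range(steps):
        for _ in range(poisson(lam, rng)):          # A(t+1): dispatch arrivals
            if dispatch == "rr":
                i, pointer = pointer, (pointer + 1) % n
            else:                                   # "sqf": smallest queue first
                i = min(range(n), key=q.__getitem__)
            q[i] += 1
        for i in range(n):                          # D(t+1): departures
            if q[i] > 0 and rng.random() < mu:
                q[i] -= 1
        total += sum(q)
    return total / steps

print("RR  average total occupancy:", simulate("rr"))
print("SQF average total occupancy:", simulate("sqf"))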

How the Proxy Mode Works

The SQF algorithm is only one component in understanding how best to deploy SLB in network architectures; there are several deployment scenarios available for creating solutions. In Proxy Mode, the client points to the server load balancing device, and the server load balancer remaps the destination IP address and port to the target server selected by the SLB algorithm. Additionally, the source IP/port is changed so that the server returns the response to the server load balancer and not to the client directly. The server load balancer keeps state information so that it can return each packet to the correct client.

FIGURE 4-7 illustrates how the packet is modified from client to SLB to server, back to SLB, and finally back to the client. The following numbered list correlates with the numbers in FIGURE 4-7; a small sketch of the header rewrites follows the list.

  1. The client submits an initial service request targeted to the virtual IP (VIP) address of 120.141.0.19 on port 80. This VIP address is configured as the IP address of the SLB appliance.

  2. The SLB receives this packet from the client and recognizes that this incoming packet must be forwarded to a server selected by the SLB algorithm.

  3. The SLB algorithm identifies server 10.0.0.1 at port 80 to receive this client request and modifies the packet's destination accordingly. The source IP and port are also modified so that the server sends its response to the SLB and not to the client.

  4. The server receives the client request.

  5. Perceiving that the request has come from the SLB, the server returns the requested Web page back to the SLB device.

  6. The SLB receives this packet from the server. Based on the state information, it knows that this packet must be sent back to client 192.191.3.89.

  7. The SLB device rewrites the packet and sends it out the appropriate egress port.

  8. The client receives the response packet.
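A minimal sketch of the rewrites in steps 1 through 8. The Packet tuple, the ephemeral port 40001, and the state-table layout are illustrative assumptions rather than a real SLB's data structures:

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

VIP, SLB_IP, SERVER = "120.141.0.19", "20.20.0.50", "10.0.0.1"
state = {}  # server-side flow -> original client (ip, port)

def to_server(pkt):
    # Steps 2-3: rewrite the destination to the chosen server and the
    # source to the SLB, recording state for the return path.
    out = replace(pkt, src_ip=SLB_IP, src_port=40001, dst_ip=SERVER, dst_port=80)
    state[(out.dst_ip, out.dst_port, out.src_port)] = (pkt.src_ip, pkt.src_port)
    return out

def to_client(pkt):
    # Steps 6-7: look up the recorded client and undo both rewrites.
    client_ip, client_port = state[(pkt.src_ip, pkt.src_port, pkt.dst_port)]
    return replace(pkt, src_ip=VIP, src_port=80, dst_ip=client_ip, dst_port=client_port)

request = Packet("192.191.3.89", 34567, VIP, 80)      # step 1
print(to_server(request))                             # steps 2-4
print(to_client(Packet(SERVER, 80, SLB_IP, 40001)))   # steps 5-8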

Figure 4-7. Server Load Balance Packet Flow: Proxy Mode


Advantages of Using Proxy Mode
  • Increases security and flexibility by decoupling the client from the backend servers

  • Increases switch manageability because servers can be added and removed dynamically without any modifications to the SLB device configuration after it is initially configured

  • Increases server manageability because any IP address can be used

Disadvantages of Using Proxy Mode
  • Limits throughput because the SLB must process packets on ingress as well as return traffic from server to client

  • Increases client delays because each packet requires more processing

How Direct Server Return Works

One of the main limitations of Proxy Mode is performance: it requires double work, in the sense that the SLB must intercept and process incoming traffic from client to servers as well as return traffic from servers to clients. Direct Server Return (DSR) addresses this limitation by requiring that only incoming traffic be processed by the SLB, thereby increasing performance considerably. To better understand how this works, see FIGURE 4-8. In DSR mode, the client points to the SLB device, which remaps only the destination MAC address. This is accomplished by leveraging the loopback interface of Sun Solaris servers and other servers that support loopback. Every server has a regular unique IP address and a loopback IP address, which is the same as the external VIP address of the SLB. When the SLB forwards a packet to a particular server, the server looks at the MAC address to determine whether this packet should be forwarded up to the IP stack. The IP stack recognizes that the destination IP address of this packet is not the same as the physical interface's, but that it is identical to the loopback IP address; hence, the stack forwards the packet to the listening port. (A sketch of this loopback configuration follows the packet-flow list below.)

Figure 4-8. Direct Server Return Packet Flow


FIGURE 4-8 shows the DSR packet flow process. The following numbered list correlates with the numbers in FIGURE 4-8.

  1. The client submits an initial service request targeted to the VIP address of 120.141.0.19 port 80. This VIP address is configured as the IP address of the SLB appliance.

  2. The SLB receives this packet from the client and forwards this incoming packet to a server selected by the SLB algorithm.

  3. The SLB algorithm identifies server 10.0.0.1 port 80 to receive this client request and modifies the packet by changing only the destination MAC address to 0:8:3e:4:4c:84, which is the MAC address of the real server.

    Note

    Step 3 implies that the SLB and the servers must be on the same Layer 2 VLAN. Hence, DSR is less secure than the Proxy Mode approach.


  4. The server receives the client request and processes the incoming packet.

  5. The server returns the response directly to the client by swapping the source/destination IP address and TCP port pairs of the incoming packet.

  6. The source IP address of the response is the VIP configured on the loopback, so the reply appears to come from the SLB even though it is sent directly back to the client.
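For step 6 to work, each real server must answer for the VIP on its loopback interface. On Solaris this is typically done with a logical loopback interface; the following is a sketch using the VIP from the example, and the exact syntax varies by OS release, so treat it as an assumption to verify:

# On each Solaris real server (run as root); VIP from the example above
ifconfig lo0:1 plumb
ifconfig lo0:1 120.141.0.19 netmask 255.255.255.255 up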

Advantages of Direct Server Return
  • Increases security and flexibility by decoupling the client from the back-end servers.

  • Increases switch manageability because servers can be added and removed dynamically without any modifications to the SLB device configuration after it is initially configured.

  • Increases performance and scalability. The server load-balancing work is reduced by roughly half because only incoming traffic passes through the SLB; return traffic goes directly from server to client. Thus, more cycles are free to process more incoming traffic.

Disadvantages of Direct Server Return
  • The SLB must be on the same Layer 2 network as the servers, because they share the same IP network number and differ only by MAC address.

  • All the servers must be configured with the same loopback address as the SLB VIP. This might be an issue for securing critical servers.

Server Monitoring

All SLB algorithms, except the family of fixed round-robin algorithms, require knowledge of the state of the servers. SLB implementations vary enormously from vendor to vendor. Some poor implementations simply monitor link state on the port to which the real server is attached; some monitor with ping requests at Layer 3. Port-based health checks are superior because the actual target application is verified for availability and response time. In some cases, the Layer 2 state might be fine while the actual application has failed, and the SLB device mistakenly continues forwarding requests to that failed real server. The features and capabilities of switches change rapidly, often through simple firmware updates, and you must be aware of the limitations. The sketch below shows the idea behind a port-based health check.
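A minimal sketch of a port-based health check, in the spirit of the "HEAD /" probes configured in CODE EXAMPLE 4-1 below; the timeout and the status-code policy are illustrative assumptions:

import http.client

def http_head_healthy(ip, port=80, timeout=2.0):
    # Unlike a link-state or ping check, this fails when the application
    # itself is down even though Layers 2 and 3 still look fine.
    try:
        conn = http.client.HTTPConnection(ip, port, timeout=timeout)
        conn.request("HEAD", "/")
        healthy = conn.getresponse().status < 500
        conn.close()
        return healthy
    except OSError:
        return False

print(http_head_healthy("10.0.0.1"))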

Persistence

Often when a client is initially load-balanced to a specific server, it is crucial that subsequent requests be forwarded to the same server within the pool. There are several approaches to accomplishing this (a sketch of the cookie-based approach follows the list):

  • Allow the server to insert a cookie in its HTTP response; the client then presents that cookie in subsequent requests.

  • Configure the SLB to look for a cookie pattern and make a forwarding decision based on the cookie. The client's first request has no cookie, so the SLB forwards it to the best server according to the algorithm. The server installs a cookie, which is a name-value pair. On the packet's return, the SLB reads the cookie value and records the client-server pair. Subsequent requests from the same client carry the cookie, which triggers the SLB to forward based on the recorded cookie information rather than on the SLB algorithm.

  • Hash, based on the client's source IP address. This is risky if the client request comes from a megaproxy.
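A minimal sketch of the cookie-based approach from the second bullet; the cookie name SLBID and the table layout are illustrative assumptions:

persistence = {}  # cookie value -> real server

def pick_server(cookies, algorithm_choice):
    # First request: no cookie, so use the SLB algorithm's choice.
    # Later requests: the recorded cookie overrides the algorithm.
    return persistence.get(cookies.get("SLBID"), algorithm_choice)

def record_response(cookie_value, server):
    # Called on the return path when the server installs the cookie.
    persistence[cookie_value] = server

record_response("abc123", "10.0.0.1")
print(pick_server({"SLBID": "abc123"}, "10.0.0.2"))  # 10.0.0.1 (pinned)
print(pick_server({}, "10.0.0.2"))                   # 10.0.0.2 (algorithm)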

It is best to avoid persistence because HTTP was designed to be stateless, and trying to maintain state across many stateless transactions causes serious problems when failures occur. In many cases, the application software can maintain state; for example, when a servlet receives a request, it can identify the client from its own cookie value and retrieve state information from the database. However, switch persistence might still be required. If so, you should look at the exact capabilities of each vendor and decide which features are most critical.

Commercial Server Load Balancing Solutions

Many commercial SLB implementations are available, both hardware and software:

  • Resonate provides a Solaris software offering, in which a STREAMS module/driver installed on a server accepts all traffic, inspects each ingress packet, and forwards it to the server that actually services the request. As the cost of hardware devices falls and their performance increases, the Resonate product has become less popular.

  • Various companies, such as Cisco, F5, and Foundry (with the ServerIron), sell hardware appliances that perform only server load balancing. One important factor to examine carefully is the method used to implement the server load-balancing function.

  • The F5 appliance is limited because it is an Intel PC running BSD UNIX® with two or more network interface cards.

Wirespeed performance can be limited because these general-purpose, computer-based appliances are not optimized for packet forwarding. When a packet arrives at a NIC, an interrupt must first be generated and serviced by the CPU; then the PCI bus arbitration process grants access to traverse the bus; finally, the packet is copied into memory. These events cumulatively contribute to significant delays. In some newer implementations, wirespeed SLB forwarding can be achieved: data plane Layer 2/Layer 3 forwarding tables are integrated with the server load-balancing updates, so as soon as a packet is received, a packet classifier immediately performs an SLB lookup in data plane hardware, using tables populated and maintained by the SLB process that resides in the control plane, which also monitors the health of the servers.

Foundry ServerIron XL Direct Server Return Mode

CODE EXAMPLE 4-1 shows the configuration file for the setup of a simple server load balancer; refer to the Foundry ServerIron XL user guide for detailed explanations of the configuration parameters. It shows the level of complexity involved in configuring a typical SLB device. The device is assigned a VIP address of 172.0.0.11, which is the IP address exposed to the outside world. On the internal LAN, the SLB device is assigned an IP address of 20.20.0.50, which can be used as the source IP address sent to the servers when proxy mode is in use. However, this device is configured in DSR mode, where the SLB forwards to the servers, which then respond directly to the client. Notice that the servers are on the same VLAN as the SLB device, on the internal LAN side of the 20.0.0.0 network.

Code example 4-1. Configuration for a Simple Server Load Balancer
!
ver 07.3.05T12
global-protocol-vlan
!
server source-ip 20.20.0.50 255.255.255.0 172.0.0.10
!
server real s1 20.20.0.1
 port http
 port http url "HEAD /"
!
server real s2 20.20.0.2
 port http
 port http url "HEAD /"
!
server virtual vip1 172.0.0.11
 port http
 port http dsr
 bind http s1 http s2 http
!
vlan 1 name DEFAULT-VLAN by port
 no spanning-tree
!
hostname SLB0
ip address 172.0.0.111 255.255.255.0
ip default-gateway 172.0.0.10
web-management allow-no-password
banner motd ^C
Reference Architecture -- Enterprise Engineering^C
Server Load Balancer -- SLB0 129.146.138.12/24^C
!

Extreme Networks BlackDiamond 6800 Integrated SLB Proxy Mode

CODE EXAMPLE 4-2 shows an excerpt of the SLB configuration for a large chassis-based Layer 2/Layer 3 switch with integrated SLB capabilities. Various VLANs and IP addresses are configured on this switch in addition to the SLB, and pools of servers with real IP addresses are configured. The difference is that this switch is configured in the more secure proxy (translation) mode instead of the DSR mode shown in the previous example.

Code example 4-2. SLB Configuration for a Chassis-based Switch
#
# MSM64 Configuration generated Thu Dec 6 21:27:26 2001
# Software Version 6.1.9 (Build 11)   By Release_Master on 08/30/01 11:34:27
..
# Config information for VLAN app.
config vlan "app" tag 40     # VLAN-ID=0x28  Global Tag 8
config vlan "app" protocol "ANY"
config vlan "app" qosprofile "QP1"
config vlan "app" ipaddress 10.40.0.1 255.255.255.0
configure vlan "app" add port 4:1 untagged
..
#
# Config information for VLAN dns.
..
configure vlan "dns" add port 5:3 untagged
configure vlan "dns" add port 5:4 untagged
configure vlan "dns" add port 5:5 untagged
..
configure vlan "dns" add port 8:8 untagged
config vlan "dns" add port 6:1 tagged
#
# Config information for VLAN super.
config vlan "super" tag 1111     # VLAN-ID=0x457  Global Tag 10
config vlan "super" protocol "ANY"
config vlan "super" qosprofile "QP1"
# No IP address is configured for VLAN super.
config vlan "super" add port 1:1 tagged
config vlan "super" add port 1:2 tagged
config vlan "super" add port 1:3 tagged
config vlan "super" add port 1:4 tagged
config vlan "super" add port 1:5 tagged
config vlan "super" add port 1:6 tagged
config vlan "super" add port 1:7 tagged
config vlan "super" add port 1:8 tagged
..
config vlan "super" add port 6:4 tagged
config vlan "super" add port 6:5 tagged
config vlan "super" add port 6:6 tagged
config vlan "super" add port 6:7 tagged
config vlan "super" add port 6:8 tagged
..
enable web access-profile none port 80
configure snmp access-profile readonly None
configure snmp access-profile readwrite None
enable snmp access
disable snmp dot1dTpFdbTable
enable snmp trap
configure snmp community readwrite encrypted "r~`|kug"
configure snmp community readonly encrypted "rykfcb"
configure snmp sysName "MLS1"
configure snmp sysLocation ""
configure snmp sysContact "Deepak Kakadia, Enterprise Engineering"
..
# ESRP Interface Configuration
config vlan "edge" esrp priority 0
config vlan "edge" esrp group 0
config vlan "edge" esrp timer 2
config vlan "edge" esrp esrp-election ports-track-priority-mac
..
# SLB Configuration
enable slb
config slb global ping-check frequency 1 timeout 2
config vlan "dns" slb-type server
config vlan "app" slb-type server
config vlan "db" slb-type server
config vlan "ds" slb-type server
config vlan "web" slb-type server
config vlan "edge" slb-type client
create slb pool webpool lb-method round-robin
config slb pool webpool add 10.10.0.10 : 0
config slb pool webpool add 10.10.0.11 : 0
create slb pool dspool lb-method least-connection
config slb pool dspool add 10.20.0.20 : 0
config slb pool dspool add 10.20.0.21 : 0
create slb pool dbpool lb-method least-connection
config slb pool dbpool add 10.30.0.30 : 0
config slb pool dbpool add 10.30.0.31 : 0
create slb pool apppool lb-method least-connection
config slb pool apppool add 10.40.0.40 : 0
config slb pool apppool add 10.40.0.41 : 0
create slb pool dnspool lb-method least-connection
config slb pool dnspool add 10.50.0.50 : 0
config slb pool dnspool add 10.50.0.51 : 0
create slb vip webvip pool webpool mode translation 10.10.0.200 : 0 unit 1
create slb vip dsvip pool dspool mode translation 10.20.0.200 : 0 unit 1
create slb vip dbvip pool dbpool mode translation 10.30.0.200 : 0 unit 1
create slb vip appvip pool apppool mode translation 10.40.0.200 : 0 unit 1
create slb vip dnsvip pool dnspool mode translation 10.50.0.200 : 0 unit 1
..
