Chapter 14: The Load Balancer


In this chapter, we take a closer look at what happens when the Director receives a packet destined for a real server (a cluster node). This leads us into a discussion of LVS persistence, the technique LVS uses to keep sending a particular client computer to the same real server. We'll also describe how packets can be selected for LVS processing with the iptables utility, using a technique called packet marking.

LVS and Netfilter

In Figure 14-1, the five Netfilter hooks that were introduced in Chapter 2 are shown. Superimposed on these hooks is a series of small black boxes representing packets passing through the kernel. The kernel places each packet it receives into a memory structure called a socket buffer, or sk_buff for short. Each of the little black boxes in Figure 14-1 thus represents an sk_buff inside the kernel, but in this discussion we'll continue to call them packets. The gray arrows in the figure represent the path that all incoming LVS packets (packets from client computers) take as they pass through the Director on their way to a real server (cluster node).

Figure 14-1: Incoming packets inside the Director

Notice in Figure 14-1 that incoming LVS packets hit only three of the five Netfilter hooks: PRE_ROUTING, LOCAL_IN, and POST_ROUTING.[1] Later in this chapter, we'll discuss the significance of these Netfilter hooks as they relate to your ability to control the fate of a packet on the Director. For the moment, we want to focus on the path that incoming packets take as they pass through the Director on their way to a cluster node, as represented by the two gray arrows in Figure 14-1.

The first gray arrow in Figure 14-1 represents a packet passing from the PRE_ROUTING hook to the LOCAL_IN hook inside the kernel. Every packet received by the Director that is destined for a cluster service, regardless of the LVS forwarding method you've chosen,[2] must pass from the PRE_ROUTING hook to the LOCAL_IN hook inside the kernel on the Director. In other words, the packet routing rules inside the kernel on the Director must send all packets for cluster services to the LOCAL_IN hook. This is easy to accomplish when you build a Director, because all packets for cluster services that arrive on the Director will have a destination address of the virtual IP (VIP) address. Recall from our discussion of the LVS forwarding methods in the last three chapters that the VIP is an IP alias or secondary IP address owned by the Director. Because the VIP is a local IP address owned by the Director, the routing table inside the kernel on the Director will always try to deliver packets addressed to it locally. Packets received by the Director that have the VIP as a destination address will therefore always hit the LOCAL_IN hook.
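You can see this local delivery behavior for yourself on any Linux system. The following commands are a minimal sketch; the VIP 192.168.1.100 and the interface eth0 are placeholder values chosen for illustration, not values from this chapter. Once the VIP is added as a secondary address, the routing subsystem reports that packets for it are delivered locally:

    # Add the (placeholder) VIP as a secondary (alias) IP address on eth0.
    ip addr add 192.168.1.100/32 dev eth0

    # Ask the kernel how it would route a packet sent to the VIP.
    # Because the VIP is now a local address, the output begins with
    # the word "local", meaning the packet would be delivered locally
    # (that is, it would hit the LOCAL_IN hook).
    ip route get 192.168.1.100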

The second gray arrow in Figure 14-1 represents the path incoming packets take after the kernel has recognized that a packet is a request for a virtual service. When a packet hits the LOCAL_IN hook, the LVS software running inside the kernel knows the packet is a request for a cluster service (called a virtual service in LVS terminology) because the packet is destined for a VIP address. When you build your cluster, you use the ipvsadm utility to add virtual service VIP addresses to the kernel so that LVS can recognize, at the LOCAL_IN hook, incoming packets sent to the VIP. If you had not already added the VIP to the LVS virtual service table (also known as the IPVS table), packets destined for the VIP address would be delivered to the locally running daemons on the Director. But because LVS knows the VIP address,[3] it can check each packet as it hits the LOCAL_IN hook to decide whether the packet is a request for a cluster service. LVS can then alter the fate of the packet before it reaches the locally running daemons on the Director: it tells the kernel not to deliver a packet destined for the VIP address locally, but to send the packet to a cluster node instead. This causes the packet to be sent to the POST_ROUTING hook, as depicted by the second gray arrow in Figure 14-1, and out the NIC connected to the D/RIP network.[4]
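To make this concrete, here is a minimal ipvsadm sketch. The VIP 192.168.1.100, the real server IP 10.1.1.2, port 80, and the round-robin scheduler are placeholder choices for illustration, not values from this chapter:

    # Add a virtual service to the IPVS table: TCP port 80 on the VIP,
    # using the round-robin (rr) scheduling method.
    ipvsadm -A -t 192.168.1.100:80 -s rr

    # Add a real server (cluster node) to that virtual service.
    # -m selects the LVS-NAT (masquerading) forwarding method;
    # -g would select LVS-DR (direct routing) and -i LVS-TUN (tunneling).
    ipvsadm -a -t 192.168.1.100:80 -r 10.1.1.2:80 -m

    # Display the resulting IPVS table, with numeric addresses.
    ipvsadm -L -n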

The power of the LVS Director to alter the fate of packets is therefore made possible by the Netfilter hooks and the IPVS table you construct with the ipvsadm utility. The Director's ability to alter the fate of network packets makes it possible to distribute incoming requests for cluster services across multiple real servers (cluster nodes). To do this, however, the Director must keep track of which real server has been assigned to each client computer so that the client computer will always talk to the same real server.[5] The Director does this by maintaining a table in memory called the connection tracking table.
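You can inspect this connection tracking table on a running Director with ipvsadm; a minimal sketch:

    # List the LVS connection tracking table with numeric addresses.
    # Each entry shows the protocol, an expiration timer, the connection
    # state, the client (source), the VIP (virtual), and the real server
    # (destination) assigned to that client's connection.
    ipvsadm -L -c -n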

Note 

As we'll see later in this chapter, there is a difference between making sure all requests for services (all new connections) go to the same cluster node and making sure all packets for an established connection return to the same cluster node. The former is called persistence and the latter is handled by connection tracking.
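Persistence is configured per virtual service with ipvsadm's -p option. The following is a minimal sketch, reusing the placeholder virtual service from the earlier example; the 300-second timeout is an illustrative value:

    # Edit the (placeholder) virtual service to enable persistence.
    # While the persistence timeout is active, new connections from a
    # given client are assigned to the same real server that handled
    # that client's earlier connections.
    ipvsadm -E -t 192.168.1.100:80 -s rr -p 300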

[1] LVS also uses the FORWARD hook to recognize LVS-NAT reply packets (packets sent from the cluster nodes to client computers).

[2] LVS forwarding methods were introduced in Chapter 11.

[3] Or VIP addresses.

[4] The network that connects the real servers to the Director.

[5] Throughout the IP network conversation or session for a particular cluster service.


