Building an LVS-NAT Web Cluster


This recipe describes how to build an LVS-NAT web cluster consisting of a Director and a real server, using the Apache web server, as shown in Figure 12-7.

Figure 12-7: LVS-NAT web cluster

The LVS-NAT web cluster we'll build can be connected to the Internet, as shown in Figure 12-7. Client computers connected to the internal network (connected to the network switch shown in Figure 12-7) can also access the LVS-NAT cluster. Recall from our previous discussion, however, that the client computers must be outside the cluster (and must access the cluster services on the VIP).

Note 

Figure 12-7 also shows a mini hub connecting the Director and the real server, but a network switch and a separate VLAN would work just as well.

Before we build our first LVS-NAT cluster, we need to decide on an IP address range to use for our cluster network. Because these IP addresses do not need to be known by any computers outside the cluster network, the numbers you use are unimportant, but they should conform to RFC 1918.[5] We'll use this IP addressing scheme:

10.1.1.0          LVS-NAT cluster network (the 10.1.1 network)
10.1.1.255        LVS-NAT cluster broadcast address
255.255.255.0     LVS-NAT cluster subnet mask

Assign the VIP address by picking a free IP address on your network. We'll use a fictitious VIP address of 209.100.100.3 throughout this recipe.

Let's continue with our recipe metaphor.

Recipe for LVS-NAT

List of ingredients:

  • 2 servers running Linux[6]

  • 1 client computer running an OS capable of supporting a web browser

  • 3 network interface cards (2 for the Director and 1 for the real server)

  • 1 mini hub with 2 twisted-pair cables (or 1 crossover cable)[7]

Step 1: Install the Operating System

When you install Linux on the two servers, be sure to configure the systems as web servers without any iptables (firewall or security) rules. The normal Red Hat installation process, for example, will automatically load Apache and create the /etc/httpd directory containing the Apache configuration files when you tell it that you would like your server to be a web server. Also, you do not need to load any X applications or a display manager for this recipe.
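If you are not sure whether any packet-filtering rules survived the installation, you can check the Director and the real server with commands like the following (a sketch; on a Red Hat system you could also disable the firewall boot script with chkconfig, and the service name may differ on other distributions):

 #/sbin/iptables -L -n
 #chkconfig iptables off

The first command should list empty INPUT, FORWARD, and OUTPUT chains (normally with a default policy of ACCEPT).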

Step 2: Configure and Start Apache on the Real Server

In this step you have to select which of the two servers will become the real server in your cluster. On the machine you select, make sure that the Apache daemon starts each time the system boots (see Chapter 1 for a description of the system boot process). You may have to use the chkconfig command to cause the httpd boot script to run at your default system runlevel. (See Chapter 1 for complete instructions.)
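For example, on a Red Hat system you might enable the httpd boot script at the usual multi-user runlevels with a command like this (a sketch; adjust the runlevels to match your system's default):

 #chkconfig --level 2345 httpd on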

Next, you should modify the Apache configuration file on this system so it knows how to display the web content you will use to test your LVS-NAT cluster. (See Appendix F for a detailed discussion.)
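As a rough sketch (the exact directives depend on your Apache version; see Appendix F for the full discussion), the httpd.conf settings that matter for this recipe might look something like this:

 Listen 80
 DocumentRoot "/www/htdocs"
 ErrorLog /var/log/httpd/error_log
 CustomLog /var/log/httpd/access_log combined

These paths match the DocumentRoot and log files used in the commands that follow.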

After you've saved your changes to the httpd.conf file, make sure the error log and access log files you specified are available by creating empty files with these commands:

 #touch /var/log/httpd/error_log
 #touch /var/log/httpd/access_log

Note 

The touch command creates an empty file.

Also make sure your DocumentRoot exists and contains an index.html file—this is the default file the web server will display when you call up the URL. Use these commands:

 #mkdir -p /www/htdocs
 #echo "This is a test (from the real server)" > /www/htdocs/index.html

Start Apache on the Real Server

You are now ready to start Apache using one of these commands:

 #/etc/init.d/httpd start 

or

 #service httpd start 

If the HTTPd daemon was already running, you will first need to enter one of these commands to stop it:

 /etc/init.d/httpd stop 

or

 service httpd stop 

The script should respond with OK. If it doesn't, you have probably made an error in your configuration file (see Appendix F).

If it worked, confirm that Apache is running with this command:

 #ps -elf | grep httpd 

You should see several lines of output indicating that several HTTPd daemons are running. If not, check for errors with these commands:

 #tail /var/log/messages
 #cat /var/log/httpd/*

If the HTTPd daemons are running, see if you can display your web page locally by using the Lynx command-line web browser:[8]

 #lynx -dump 10.1.1.2 

If everything is set up correctly, you should see the contents of your index.html file dumped to the screen by the Lynx web browser:

 This is a test (from the real server) 

Step 3: Set the Default Route on the Real Server

Real servers in an LVS-NAT cluster need to send all of their replies to client computers back through the Director. To accomplish this, we must set the default route for the real servers to the DIP. We can do so on Red Hat Linux by setting the GATEWAY variable in the /etc/sysconfig/network file.

Open the file in the vi text editor with this command:

 #vi /etc/sysconfig/network 

Add, or set, the GATEWAY variable to the Director's IP (DIP) address with an entry like this:

 GATEWAY=10.1.1.1 

We can enable this default route by rebooting (or by re-running the /etc/init.d/network script), or by manually entering it into the routing table with the following command:

 #/sbin/route add default gw 10.1.1.1 

This command might complain with the following error:

 SIOCADDRT: File exists

If it does, it means that a default route has already been defined in the routing table. You should either reboot your system or remove the conflicting default gateway with the route del command.
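For example, you could remove the existing default route and then add the one that points at the DIP (a sketch; adapt it to your situation):

 #/sbin/route del default
 #/sbin/route add default gw 10.1.1.1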

Step 4: Install the LVS Software on the Director

In this step you will make changes to the second server—the Director. (For the moment we are done making changes on the server you selected to be the real server.) The changes described here do not require you to reload your Linux distribution, but you will need to install a new kernel.

The Linux kernel is included on the CD-ROM that accompanies this book. This kernel contains the LVS software, but you'll need to compile and install this kernel with the LVS options enabled. You'll also need to install the ipvsadm utility to configure the Director. Older versions of the stock Linux kernel do not contain the LVS code, so you'll have to download the LVS patch and apply it to the kernel if you must use a kernel older than 2.4.23.

Copy the kernel source files included on the CD-ROM with this book (or download a kernel from http://www.kernel.org) and run the make menuconfig utility to enable the proper LVS and network options. Then use the instructions in Chapter 3 to compile and install this kernel on your system.
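As a rough guide, the IPVS options appear under the Networking options section of make menuconfig. A sketch of the resulting kernel .config settings is shown below; option names can vary slightly between kernel versions, and you only need the scheduler modules you plan to use:

 CONFIG_IP_VS=m
 CONFIG_IP_VS_TAB_BITS=12
 CONFIG_IP_VS_RR=m
 CONFIG_IP_VS_WRR=m
 CONFIG_IP_VS_LC=m
 CONFIG_IP_VS_WLC=m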

Note 

You can avoid compiling the kernel by downloading a kernel that already contains the LVS patches from a distribution vendor. The Ultra Monkey project at http://www.ultramonkey.org also has patched versions of the Red Hat Linux kernel that can be downloaded as RPM files.

Once you have rebooted your system on the new kernel,[9] you are ready to install the ipvsadm utility, which is also included on the CD-ROM. With the CD mounted, type these commands:

 #cp /mnt/cdrom/chapter12/ipvsadm* /tmp
 #cd /tmp
 #tar xvf ipvsadm*
 #cd ipvsadm-*
 #make
 #make install

If these commands complete without errors, you should now have the ipvsadm utility installed in the /sbin directory. If the build fails, make sure the /usr/src/linux directory (or symbolic link) contains the source code for the kernel with the LVS patches applied to it.

Step 5: Configure LVS on the Director

Now we need to tell the Director how to forward packets to the cluster node (the real server) using the ipvsadm utility we compiled and installed in the previous step.

One way to do this is to use the configure script included with the LVS distribution. (See the LVS HOWTO at http://www.linuxvirtualserver.org for a description of how to use this method to configure an LVS cluster.) In this chapter, however, we will use our own custom script to create our LVS cluster so that we can learn more about the ipvsadm utility.

Note 

We will abandon this method in Chapter 15 when we use the ldirectord daemon to create an LVS Director configuration—ldirectord will enter the ipvsadm commands to create the IPVS table automatically.

Create an /etc/init.d/lvs script that looks like this (this script is included in the chapter12 subdirectory on the CD-ROM that accompanies this book):

 #!/bin/bash
 #
 # LVS script
 #
 # chkconfig: 2345 99 90
 # description: LVS sample script
 #
 case "$1" in
 start)
            # Bring up the VIP (Normally this should be under Heartbeat's control.)
            /sbin/ifconfig eth0:1 209.100.100.3 netmask 255.255.255.0 up
            # Since this is the Director we must be
            # able to forward packets.[10]
            echo 1 > /proc/sys/net/ipv4/ip_forward
            # Clear all iptables rules.
            /sbin/iptables -F
            # Reset iptables counters.
            /sbin/iptables -Z
            # Clear all ipvsadm rules/services.
            /sbin/ipvsadm -C
            # Add an IP virtual service for VIP 209.100.100.3 port 80
            /sbin/ipvsadm -A -t 209.100.100.3:80 -s rr
            # Now direct packets for this VIP
            # to the real server IP (RIP) inside the cluster
            /sbin/ipvsadm -a -t 209.100.100.3:80 -r 10.1.1.2 -m
            ;;
 stop)
            # Stop forwarding packets
            echo 0 > /proc/sys/net/ipv4/ip_forward
            # Reset ipvsadm
            /sbin/ipvsadm -C
            # Bring down the VIP interface
            ifconfig eth0:1 down
            ;;
 *)
            echo "Usage: $0 {start|stop}"
            ;;
 esac
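If you typed the script in by hand rather than copying it from the CD-ROM, remember to make it executable. You can also register it with chkconfig (the chkconfig header is already in the script) if you want it to run at boot, though we will replace this method with ldirectord in Chapter 15:

 #chmod 755 /etc/init.d/lvs
 #chkconfig --add lvs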

Note 

If you are running a kernel older than version 2.4, you will also need to configure masquerading for the reply packets that pass back through the Director. Do this with the ipchains utility by using a command such as the following:

 /sbin/ipchains -A forward -j MASQ -s 10.1.1.0/24 -d 0.0.0.0/0 

Starting with kernel 2.4, however, you do not need to enter this command because LVS does not use the kernel's NAT code. The 2.0 series kernels also needed to use the ipfwadm utility, which you may see mentioned on the LVS website, but this is no longer required to build an LVS cluster.

The two most important lines in the preceding script are the lines that create the IP virtual server:

 /sbin/ipvsadm -A -t 209.100.100.3:80 -s rr
 /sbin/ipvsadm -a -t 209.100.100.3:80 -r 10.1.1.2 -m

The first line specifies the VIP address and the scheduling method (-s rr). The choices for scheduling methods (which were described in the previous chapter) are as follows:

ipvsadm Argument   Scheduling Method
-s rr              Round-robin
-s wrr             Weighted round-robin
-s lc              Least-connection
-s wlc             Weighted least-connection
-s lblc            Locality-based least-connection
-s lblcr           Locality-based least-connection with replication
-s dh              Destination hashing
-s sh              Source hashing
-s sed             Shortest expected delay
-s nq              Never queue

In this recipe, we will use the round-robin scheduling method. In production, however, you should use a weighted, dynamic scheduling method (see Chapter 11 for explanations of the various methods).

The second ipvsadm line listed above associates the real server's RIP (-r 10.1.1.2) with the VIP (or virtual server), and it specifies the forwarding method (-m). Each ipvsadm entry for the same virtual server can use a different forwarding method, but normally only one method is used. The choices are as follows:

ipvsadm Argument   Forwarding Method
-g                 LVS-DR
-i                 LVS-TUN
-m                 LVS-NAT

In this recipe, we are building an LVS-NAT cluster, so we will use the -m option.[11]
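Putting these options together: if you later added a second real server and wanted weighted least-connection scheduling instead of round-robin, the ipvsadm entries might look like the following sketch (the second real server address, 10.1.1.3, is hypothetical, and -w assigns each server a relative weight):

 /sbin/ipvsadm -A -t 209.100.100.3:80 -s wlc
 /sbin/ipvsadm -a -t 209.100.100.3:80 -r 10.1.1.2 -m -w 2
 /sbin/ipvsadm -a -t 209.100.100.3:80 -r 10.1.1.3 -m -w 1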

When you have finished entering this script or modifying the copy on the CD-ROM, run it by typing this command:

 #/etc/init.d/lvs start 

If the VIP address has been added to the Director (which you can check by using the ifconfig command), log on to the real server and try to ping this VIP address. You should also be able to ping the VIP address from a client computer (so long as there are no intervening firewalls that block ICMP packets).
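For example, you can confirm that the VIP alias came up on the Director with:

 #/sbin/ifconfig eth0:1

and then, from the real server, send a few test packets to the VIP (the -c 3 option limits ping to three packets):

 #ping -c 3 209.100.100.3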

Note 

ICMP ping packets sent to the VIP will be handled by the Director (these packets are not forwarded to a real server). However, the Director will try to forward ICMP packets relating to a connection to the relevant real server.

To see the IP Virtual Server table (the table we have just created that tells the Director how to forward incoming requests for services to real servers inside the cluster), enter the following command on the server you have made into a Director:

 #/sbin/ipvsadm -L -n 

This command should show you the IPVS table:

 IP Virtual Server version 1.0.10 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port             Forward Weight ActiveConn InActConn
 TCP  209.100.100.3:80 rr
   -> 10.1.1.2:80                    Masq    1      0          0

This report wraps both the column headers and each line of output onto the next line. We have only one IP virtual server pointing to one real server, so our one (wrapped) line of output is:

 TCP  209.100.100.3:80 rr
   -> 10.1.1.2:80                    Masq    1      0          0

This line says that our IP virtual server is using the TCP protocol for VIP address 209.100.100.3 on port 80. Packets are forwarded (->) to RIP address 10.1.1.2 on port 80, and our forwarding method is masquerading (Masq), which is another name for Network Address Translation, or LVS-NAT. The LVS forwarding method reported in this field will be one of the following:

Report Output   LVS Forwarding Method
Masq            LVS-NAT
Route           LVS-DR
Tunnel          LVS-TUN

Step 6: Test the Cluster Configuration

The next step is to test the cluster configuration. In this step, <DR> will precede the commands that should be entered on the Director, and <RS> will precede the commands that should be entered on the real server.

The first thing to do is make sure the network interface cards are configured properly and are receiving packets. Enter the following command (on the Director):

 <DR>#ifconfig -a 

The important things to look for in the output of this command are whether each interface is UP, and how many packets have been transmitted (TX) and received (RX). Here is a sample of the output:

 eth0      Link encap:Ethernet  HWaddr 00:10:5A:16:99:8A
           inet addr:209.100.100.2  Bcast:209.100.100.255  Mask:255.255.255.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:481 errors:0 dropped:0 overruns:0 frame:0
           TX packets:374 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:100
           Interrupt:5 Base address:0x220

 eth0:1    Link encap:Ethernet  HWaddr 00:10:5A:16:99:8A
           inet addr:209.100.100.3  Bcast:209.100.100.255  Mask:255.255.255.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           Interrupt:5 Base address:0x220

 eth1      Link encap:Ethernet  HWaddr 00:80:5F:0E:AB:AB
           inet addr:10.1.1.1  Bcast:10.1.1.255  Mask:255.255.255.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:210 errors:0 dropped:0 overruns:0 frame:0
           TX packets:208 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:100
           Interrupt:11 Base address:0x1400

If you do not see the word UP displayed in the output of your ifconfig command for all of your network interface cards and network interface aliases (or IP aliases), as shown in this example, then the software driver for the missing network interface card is not configured properly (or the card is not installed properly). See Appendix C for more information on working with network interface cards.

Note 

If you use secondary IP addresses instead of IP aliases, use the ip addr sh command to check the status of the secondary IP addresses instead of the ifconfig -a command. You can then examine its output for the same information (the ip addr sh report presents the same details in a slightly different format).

If the interface is UP but no packets have been transmitted (TX) or received (RX) on the card, you may have network cabling problems.

Once you have a working network configuration, test the communication from the real server to the Director by pinging the Director's DIP and then VIP addresses from the real server. To continue the example, you would enter the following commands on the real server:

 <RS>#ping 10.1.1.1
 <RS>#ping 209.100.100.3

The first of these commands pings the Director's cluster IP address (DIP), and the second pings the Director's virtual IP address (VIP). Both commands should show that packets can travel from the real server across the cluster network to the Director, and that the Director recognizes both its DIP and VIP addresses. Once you are sure that basic IP communication on the cluster network is working properly, use a client computer to ping the VIP address from outside the cluster.

When you have confirmed that all of these tests work, you are ready to test the service from the real server. Use the Lynx program to send an HTTP request to the locally running Apache server on the real server:

 <RS>#lynx -dump 10.1.1.2 

If this does not return the test web page message, check to make sure HTTPd is running by issuing the following commands (on the real server):

 <RS>#ps -elf | grep httpd
 <RS>#netstat -apn | grep httpd

The first command should return several lines of output from the process table, indicating that several HTTPd daemons are running. The second command should show that these daemons are listening on the proper network ports (or at least on port 80). If either of these commands does not produce any output, you need to check your Apache configuration on the real server before continuing.

If the HTTPd daemons are running, you are ready to access the real server's HTTPd server from the Director. You can do so with the following command (on the Director):

 <DR>#lynx -dump 10.1.1.2 

This command, which specifies the real server's IP address (RIP), should display the test web page from the real server.

If all of these commands work properly, use the following command to watch your LVS connection table on the Director:

 <DR>#watch ipvsadm -Ln 

At first, this report should indicate that no active connections have been made through the Director to the real server, as shown here:

 IP Virtual Server version 1.0.10 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port             Forward Weight ActiveConn InActConn
 TCP  209.100.100.3:80 rr
   -> 10.1.1.2:80                    Masq    1      0          0

Leave this command running on the Director (the watch command will automatically update the output every two seconds), and from a client computer use a web browser to display the URL that is the Director's virtual IP address (VIP):

 http://209.100.100.3/ 

If you see the test web page, you have successfully built your first LVS cluster.

If you now look back at the console on the Director, you should see an active connection (ActiveConn) in the LVS connection tracking table:

 IP Virtual Server version 1.0.10 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port             Forward Weight ActiveConn InActConn
 TCP  209.100.100.3:80 rr
   -> 10.1.1.2:80                    Masq    1      1          0

In this report, the number 1 now appears in the ActiveConn column. If you wait a few seconds, and you don't make any further client connections to this VIP address, you should see the connection go from active status (ActiveConn) to inactive status (InActConn):

 IP Virtual Server version 1.0.10 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port             Forward Weight ActiveConn InActConn
 TCP  209.100.100.3:80 rr
   -> 10.1.1.2:80                    Masq    1      0          1

In this report, the number 1 now appears in the InActConn column. A little later, the connection entry drops out of the LVS connection tracking table altogether:

 IP Virtual Server version 1.0.10 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port             Forward Weight ActiveConn InActConn
 TCP  209.100.100.3:80 rr
   -> 10.1.1.2:80                    Masq    1      0          0

The ActiveConn and InActConn columns are now both 0.

[5]RFC 1918 reserves the following IP address blocks for private intranets:

  • 10.0.0.0 through 10.255.255.255

  • 172.16.0.0 through 172.31.255.255

  • 192.168.0.0 through 192.168.255.255

[6]Technically the real server (cluster node) can run any OS capable of displaying a web page.

[7]As mentioned previously, you can use a separate VLAN on your switch instead of a mini hub.

[8]For this to work, you must have selected the option to install the text-based web browser when you loaded the operating system. If you didn't, you'll need to go get the Lynx program and install it.

[9]As the system boots, watch for IPVS messages (if you compiled the IPVS schedulers into the kernel and not as modules) for each of the scheduling methods you compiled for your kernel.

[10]Many distributions also include the sysctl command for modifying the /proc pseudo filesystem. On Red Hat systems, for example, this kernel capability is controlled by sysctl from the /etc/rc.d/init.d/network script, as specified by a variable in the /etc/sysctl.conf file. This setting will override your custom LVS script if the network script runs after your custom LVS script. See Chapter 1 for more information about the boot order of init scripts.

[11]Note that you can't simply change this option to -g to build an LVS-DR cluster.


