Using the Red Hat init Scripts on Cluster Nodes


This section describes the init scripts used on a normal Red Hat system, along with a discussion of how the services started by each init script are used, or can be used, in a Linux Enterprise Cluster. Note that you may not have all of these scripts on your system, depending on the Red Hat release you are using and on which services you have installed.

aep1000

  • Supports the Accelerated Encryption Processing card to speed Secure Sockets Layer (SSL) encryption. Not used to build the cluster in this book.

anacron

  • Used, like cron and at, to start commands at a particular time, even on systems that are not running continuously. On Red Hat, the /etc/anacrontab file lists the normal log rotation cron job that is executed daily. If you disable this, log rotation will no longer run, and your system disk will eventually fill up. In Chapters 18 and 19, we will describe two methods of running cron jobs on all cluster nodes without running the crond daemon on all nodes (the cron daemon runs on a single node and remotely starts the anacron program once a day on each cluster node).
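
    For reference, entries in /etc/anacrontab have four fields: a period in days, a delay in minutes, a job identifier, and the command to run. The stock Red Hat entry for the daily jobs (including log rotation) looks like this:

        # period(days)  delay(min)  job-identifier  command
        1       5       cron.daily      run-parts /etc/cron.daily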

apmd

  • Used for advanced power management. You only need this if you have an uninterruptible power supply (UPS) system connected to your system and want it to automatically shut down in the event of a power failure before the battery in your UPS system runs out. In Chapter 9, we'll see how the Heartbeat program can also control the power supply to a device through the use of a technology called STONITH (for "shoot the other node in the head").

arpwatch

  • Used to keep track of IP address-to-MAC address pairings. Normally, you do not need this daemon. As a side note, the Linux Virtual Server Direct Routing clusters (as described in Chapter 13) must contend with potential Address Resolution Protocol (ARP) problems introduced through the use of the cluster load-balancing technology, but arpwatch does not help with this problem and is normally not used on cluster nodes.

atd

  • Used, like anacron and cron, to schedule jobs for a particular time with the at command. This method of scheduling jobs is used infrequently, if at all. In Chapter 18, we'll describe how to build a no-single-point-of-failure batch scheduling mechanism for the cluster using Heartbeat, the cron daemon, and the clustersh script.
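
    As a quick illustration, a one-time job is handed to atd from the command line (the script name here is hypothetical):

        echo "/usr/local/bin/nightly-report.sh" | at 2:00am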

autofs

  • Used to automatically mount NFS directories from an NFS server. This script only needs to be enabled on an NFS client, and only if you want to automate mounting and unmounting NFS drives. In the cluster configuration described in this book, you will not need to mount NFS file systems on the fly and will therefore not need this service (though using it gives you a powerful and flexible means of mounting several NFS mount points from each cluster node on an as-needed basis; see the sketch below). If possible, avoid the complexity of the autofs mounting scheme and use only one NFS-mounted directory on your cluster nodes.
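
    If you do use autofs, the configuration is driven by a master map and one or more mount maps. A minimal sketch (the host and directory names are illustrative) looks like this:

        # /etc/auto.master: mount point, map file, options
        /data   /etc/auto.data  --timeout=60

        # /etc/auto.data: key, mount options, server:/export
        shared  -rw     nfsserver:/export/shared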

bcm5820

  • Supports the Broadcom Cryptonet BCM5820 chip for speeding SSL communication. Not used in this book.

crond

  • Used like anacron and atd to schedule jobs. On a Red Hat system, the crond daemon starts anacron once a day. In Chapter 18, we'll see how you can build a no-single-point-of-failure batch job scheduling system using cron and the open source Ganglia package (or the clustersh script provided on the CD-ROM included with this book). To control cron job execution in the cluster, you may not want to start the cron daemon on all nodes. Instead, you may want to run the cron daemon on only one node and make it a high-availability service using the techniques described in Part II of this book. (On the cluster nodes described in this book, the crond daemon is not run at system startup by init. The Heartbeat program launches the crond daemon based on an entry in the /etc/ha.d/haresources file, as shown below. Cron jobs can still run on all cluster nodes through the remote shell capabilities provided by SSH.)
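
    For example, a minimal /etc/ha.d/haresources entry that makes crond a Heartbeat-managed resource might look like the following (the node name and IP address are illustrative):

        # primary node          service IP     resource
        primary.mycluster.org   192.168.1.10   crond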

cups

  • The Common UNIX Printing System. In this book, we will use LPRng instead of CUPS as the printing system for the cluster. See the description of the lpd script later in this chapter.

firstboot

  • Used as part of the system configuration process the first time the system boots.

functions

  • Contains library routines used by the other scripts in the /etc/init.d directory. Do not modify this script, or you will risk corrupting all of the init scripts on the system.

gpm

  • Makes it possible to use a mouse to cut and paste at the text-based console. You should not disable this service. Incidentally, cluster nodes will normally be connected to a KVM (keyboard, video, mouse) device that allows several servers to share one keyboard and one mouse. Due to the cost of these KVM devices, you may want to build your cluster nodes without connecting them to a KVM device (so that they boot without a keyboard and mouse connected).[5]

halt

  • Used to shut the system down cleanly.

httpd

  • Used to start the Apache daemon(s) for a web server. You will probably want to run httpd even if you are not building a web cluster, because it can help the cluster load balancer (the ldirectord program) decide when a node should be removed from the cluster. This is the concept of monitoring a parallel service discussed in Chapter 15.
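
    To give a flavor of how this works, a hypothetical ldirectord.cf health-check stanza might ask each real server's httpd for a small test page (all addresses and file names here are illustrative):

        virtual=192.168.1.100:80
                real=192.168.1.2:80 gate
                service=http
                request="/.healthcheck.html"
                receive="OK"
                checktype=negotiate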

identd

  • Used to help identify who is remotely connecting to your system. The theory sounds good, but identd will probably never save you from an attack; in fact, attackers will target the identd daemon itself. Servers inside an LVS-NAT cluster also have problems sending identd requests back to client computers. Disable identd on your cluster nodes if possible.

ipchains or iptables

  • Packet manipulation in the Linux kernel is provided by ipchains (for kernels 2.2 and earlier) or iptables (for kernels 2.4 and later). On a Red Hat system, this script replays the iptables or ipchains commands you entered previously (and saved into a file in the /etc/sysconfig directory). For more information, see Chapter 2.
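
    On Red Hat, the usual sequence is to build the rules interactively and then save them where this init script will find them at boot time (the rule shown is only an illustration):

        # Allow inbound SSH, then save the running rules to /etc/sysconfig/iptables
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        service iptables save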

irda

  • Used for wireless infrared communication. Not used to build the cluster described in this book.

isdn

  • Used for Integrated Services Digital Network (ISDN) communication. Not used in this book.

kdcrotate

  • Used to rotate the list of Kerberos Key Distribution Centers (KDCs) used by the system. See Chapter 19 for a discussion of methods for distributing user account information.

keytable

  • Used to load the keyboard table.

killall

  • Used to help shut down the system.

kudzu

  • Probes your system for new hardware at boot time—very useful when you change the hardware configuration on your system, but increases the amount of time required to boot a cluster node, especially when it is disconnected from the KVM device.

lisa

  • The LAN Information Server service. Normally, users on a Linux machine must know the name or address of a remote host before they can connect to it and exchange information with it. Using lisa, a user can perform a network discovery of other hosts on the network similar to the way Windows clients use the Network Neighborhood. This technology is not used in this book, and in fact, may confuse users if you are building the cluster described in this book (because the LVS-DR cluster nodes will be discovered by this software).

lpd

  • This is the script that starts the LPRng printing system. If you do not see the lpd script, you need to install the LPRng printing package; the latest copy of LPRng can be found at http://www.lprng.com. On a standard Red Hat installation, the LPRng printing system uses the /etc/printcap.local file to create an /etc/printcap file containing the list of printers. Cluster nodes (as described in this book) should run the lpd daemon. Cluster nodes first spool print jobs to their local hard drives and then try (essentially forever) to send the print jobs to a central print spooler that is also running the lpd daemon. See Chapter 19 for a discussion of the LPRng printing system.
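
    For example, a node that forwards everything to a central spooler might use a /etc/printcap.local entry such as the following (the queue and host names are illustrative):

        # Local queue "hplj1" forwards to the same queue on the central spooler
        hplj1:lp=hplj1@printserver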

netfs

  • Cluster nodes, as described in this book, will need to connect to a central file server (or NAS device) for lock arbitration. See Chapter 16 for a more detailed discussion of NFS. Cluster nodes will normally need this script to run at boot time to gain access to data files that are shared with the other cluster nodes.

network

  • Required to bring up the Ethernet interfaces and connect your system to the cluster network and NAS device. The Red Hat network configuration files are stored in the /etc/sysconfig directory. In Chapter 5, we will describe the files you need to look at to ensure a proper network configuration for each cluster node after you have finished cloning. This script uses these configuration files at boot time to configure your network interface cards and the network routing table on each cluster node.[6]
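
    Each interface is configured by a file such as /etc/sysconfig/network-scripts/ifcfg-eth0. A minimal static configuration (the addresses are illustrative) looks like this:

        DEVICE=eth0
        BOOTPROTO=static
        IPADDR=10.1.1.2
        NETMASK=255.255.255.0
        ONBOOT=yes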

nfs

  • Cluster nodes will normally not act as NFS servers and will therefore not need to run this script.[7]

nfslock

  • The Linux kernel starts the proper in-kernel NFS locking mechanism (rpc.lockd runs as a kernel thread) and rpc.statd to ensure proper NFS file locking. However, cluster nodes that are NFS clients can run this script at boot time without harming anything (the kernel will start the lock daemon whenever it needs it, whether or not this script was run at boot time). See Chapter 16 for more information about NFS lock arbitration.

nscd

  • Helps to speed name service lookups (for host names, for example) by caching this information. To build the cluster described in this book, using nscd is not required.

ntpd

  • This script starts the Network Time Protocol (NTP) daemon. When you configure the name or IP address of an NTP server in the /etc/ntp.conf file, you can run this service on all cluster nodes to keep their clocks synchronized. As system clocks drift, cluster nodes begin to disagree on the time; running the ntpd service prevents this problem. Note that the system clock must already be reasonably accurate before the ntpd daemon can begin to slew, or gradually adjust, it to match the time on the NTP server. When using a distributed filesystem such as NFS, the time on the NFS server and the time on each cluster node should be kept synchronized. (See Chapter 16 for more information.) Also note that the hardware clock and the system time may disagree, and problems can occur if they diverge too much; to set the hardware clock on your system, use the hwclock command.
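
    A minimal setup is a single server line in /etc/ntp.conf, plus an occasional hwclock run to keep the hardware clock in step with the system time (the server name is a placeholder):

        # /etc/ntp.conf
        server ntp.mycompany.com

        # Copy the (synchronized) system time to the hardware clock:
        hwclock --systohc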

pcmcia

  • Used to recognize and configure PCMCIA devices, which are normally found only on laptops (not used in this book).

portmap

  • Used by NFS and NIS to manage RPC connections. Required for normal operation of the NFS locking mechanism and covered in detail in Chapter 16.

pppoe

  • For Asymmetric Digital Subscriber Line (ADSL) connections. If you are not using ADSL, disable this script.

pxe

  • Some diskless clusters (or diskless workstations) will boot using the PXE protocol (PXE stands for Preboot eXecution Environment) to locate and run an operating system based on the information provided by a PXE server. Running this service will enable your Linux machine to act as a PXE server; however, building a cluster of diskless nodes is outside the scope of this book.[8]

random

  • Helps your system generate the better-quality random numbers required by many encryption routines. The secure shell (SSH) method of encrypting data in a cluster environment is described in Chapter 4.

rawdevices

  • Used to bind raw character devices to block devices, based on entries in the /etc/sysconfig/rawdevices file.

rhnsd

  • Connects to the Red Hat server to see if new software is available. How you want to upgrade software on your cluster nodes will dictate whether or not you use this script. Some system administrators shudder at the thought of automated software upgrades on production servers. Using a cluster, you have the advantage of upgrading the software on a single cluster node, testing it, and then deciding whether all cluster nodes should be upgraded. If the upgrade on this single node (called a Golden Client in SystemImager terminology) is successful, it can be copied to all of the cluster nodes using the cloning process described in Chapter 5. (See the updateclient command in this chapter for how to update a node after it has been put into production.)

rstatd

  • Starts rpc.rstatd, which allows remote users to use the rup command to monitor system activity. System monitoring as described in this book uses the Ganglia and Mon monitoring packages, which do not require this service.

rusersd

  • Lets other people see who is logged on to this machine. May be useful in a cluster environment, but will not be described in this book.

rwall

  • Lets remote users display messages on this machine with the rwall command. This may or may not be a useful feature in your cluster. Both this daemon and the next should only be started on systems running behind a firewall on a trusted network.

rwhod

  • Lets remote users see who is logged on.

saslauthd

  • Used to provide Simple Authentication and Security Layer (SASL) authentication (normally used with protocols like SMTP and POP). Building services that use SASL authentication is outside the scope of this book. To build the cluster described in this book, this service is not required.

sendmail

  • Will you allow remote users to send email messages directly to cluster nodes? (For example, an email order-processing system requires each cluster node to receive email orders so that the incoming load is balanced across nodes.) If so, you will need to enable and configure this service on all cluster nodes. sendmail configuration for cluster nodes is discussed in Chapter 19.

single

  • Used to administer runlevels (by the init process).

smb

  • Provides a means of offering services such as file sharing and printer sharing to Windows clients using the package called Samba. Configuring a cluster to support PC clients in this manner is outside the scope of this book.

snmpd

  • Used for Simple Network Management Protocol (SNMP) administration. This daemon will be used in conjunction with the Mon monitoring package in Chapter 17 to monitor the health of the cluster nodes. You will almost certainly want to use SNMP on your cluster. If you are building a public web server, however, you may want to disable this daemon for security reasons (running snmpd on a server gives an attacker one more way to try to break into your system). In practice, you may find that running snmpd as a respawn service from the /etc/inittab file provides greater reliability than simply starting the daemon once at system boot time. (See the example earlier in this chapter; if you use this method, you will need to disable this init script so that the system won't try to launch snmpd twice.)
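
    A respawn entry along the following lines is one way to do this (the inittab id field is arbitrary, and the -f flag keeps snmpd in the foreground so that init can supervise it):

        # /etc/inittab
        sn:2345:respawn:/usr/sbin/snmpd -f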

snmptrapd

  • SNMP can be configured on almost all network-connected devices to send traps, or alerts, to an SNMP trap host. The SNMP trap host runs monitoring software that logs these traps and then provides some mechanism to alert a human being to the fact that something has gone wrong. Running this daemon will turn your Linux server into an SNMP trap server, which may be desirable for a system sitting outside the cluster, such as the cluster node manager.[9] A limitation of SNMP traps, however, is that a serious problem may be signaled by a single trap message, and that message may get lost. (The SNMP trap server may miss its one and only opportunity to hear the alert.) The approach taken in this book is to use a server sitting outside the cluster (running the Mon software package) to poll the SNMP information stored on each cluster node and raise an alert when a threshold is violated or when the node does not respond. (See Chapter 17.)

squid

  • Provides a proxy server for caching web requests (among other things). squid is not used in the cluster described in this book.

sshd

  • The OpenSSH daemon. We will use this service to synchronize the files on the servers inside the cluster and for some of the system cloning recipes in this book (though using SSH for system cloning is not required).

syslog

  • The syslog daemon logs error messages from running daemons and programs based on configuration entries in the /etc/syslog.conf file. The log files are kept from growing indefinitely by the logrotate program, which anacron runs daily (see the logrotate man page). Note that you can cause all cluster nodes to send their syslog entries to a single host by creating configuration entries in the /etc/syslog.conf file using the @hostname syntax, as shown below. (See the syslog man page for examples.) This method of error logging may, however, cause the entire cluster to slow down if the network becomes overloaded with error messages. In this book, we will use the default syslog method of sending error log entries to a locally attached disk drive to avoid this potential problem. (We'll proactively monitor for serious problems on the cluster using the Mon monitoring package in Part IV of this book.)[10]
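
    For example, a single line in /etc/syslog.conf forwards messages to a central host ("loghost" is a placeholder name):

        # Send all messages of priority info or higher to a central log host
        *.info          @loghost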

tux

  • Instead of using the Apache httpd daemon, you may choose to run the Tux web server. This web server attempts to introduce performance improvements over the Apache web server daemon. This book will only describe how to install and configure Apache.

winbindd

  • Provides a means of authenticating users using the accounts found on a Windows server. This method of authentication is not used to build the cluster in this book. See Chapter 19 for a description of alternative methods available, and also see the discussion of the ypbind init script later in this chapter.

xfs

  • The X font server. If you are using only the text-based terminal (which is the assumption throughout this book) to administer your server (or telnet/ssh sessions from a remote machine), you will not need this service on your cluster nodes, and you can reduce system boot time and disk space by not installing any X applications on your cluster nodes. See Chapter 20 for an example of how to use X applications running on Thin Clients to access services inside the cluster.

xinetd[11]

  • Starts services such as FTP and telnet upon receipt of a remote connection request. Many daemons are started by xinetd as a result of an incoming TCP or UDP network request. Note that in a cluster configuration you may need to allow an unlimited number of connections for services started by xinetd (so that xinetd won't refuse a client computer's request for a service) by placing the instances = UNLIMITED line in the /etc/xinetd.conf configuration file, as shown below. (You have to restart xinetd or send it a SIGHUP signal with the kill command for it to see the changes you make to its configuration files.)
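
    The setting goes in the defaults section of /etc/xinetd.conf; a minimal sketch follows (remember to restart xinetd or send it a SIGHUP afterward):

        defaults
        {
                instances = UNLIMITED
        }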

ypbind

  • Only used on NIS client machines. If you do not have an NIS server,[12] you do not need this service. One way to distribute cluster password and account information to all cluster nodes is to run an LDAP server and then send account information to all cluster nodes using the NIS system. This is made possible by a licensed commercial program from PADL software (http://www.padl.com) called the NIS/LDAP gateway (the daemon is called ypldapd). Using the NIS/LDAP gateway on your LDAP server allows you to create simple cron scripts that copy all of the user accounts out of the LDAP database and install them into the local /etc/passwd file on each cluster node. You can then use the /etc/nsswitch.conf file to point user authentication programs at the local /etc/passwd file, thus avoiding the need to send passwords over the network and reducing the impact of a failure of the LDAP (or NIS) server on the cluster. See the /etc/nsswitch.conf file for examples and more information. Changes to the /etc/nsswitch.conf file are recognized by the system immediately (they do not require a reboot).

    Note 

    These entries will not rely on the /etc/shadow file but will instead contain the encrypted password in the /etc/passwd file.

    The cron job that creates the local passwd entries only needs to contain a line such as the following:

     ypcat passwd > /etc/passwd 

    This command will overwrite all of the /etc/passwd entries on the system with the NIS (or LDAP) account entries, so you will need to be sure to create all of the normal system accounts (especially the root user account) on the LDAP server.

    An even better method of applying the changes to each server is available through the use of the patch and diff commands. For example, a shell script could do the following:

        ypcat passwd > /tmp/passwd
        diff -e /etc/passwd /tmp/passwd > /tmp/passwd.diff
        patch -be /etc/passwd /tmp/passwd.diff

    These commands use the ed editor (the -e option) to modify only the lines in the passwd file that have changed, which makes the script safe to run in the middle of the day without affecting normal system operation. Additionally, the script should check that the NIS server is operating normally: if the ypcat command returns no entries, the local /etc/passwd file will be erased unless the script checks for this condition and aborts (see the sketch below). (Again, note that this method does not use the /etc/shadow file to store password information.) In addition to the passwd database, similar commands should be used on the group and hosts files.
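
    A minimal sketch of such a script, with the safety check added, might look like this (the paths and the exact check are illustrative, not the book's script):

        #!/bin/sh
        # Pull the NIS passwd map; abort if the server returned nothing,
        # so that a dead NIS server cannot erase the local /etc/passwd file.
        ypcat passwd > /tmp/passwd
        if [ ! -s /tmp/passwd ]; then
            echo "ypcat returned no entries; aborting" >&2
            exit 1
        fi
        # Apply only the changed lines, using an ed-style diff.
        diff -e /etc/passwd /tmp/passwd > /tmp/passwd.diff
        patch -be /etc/passwd /tmp/passwd.diff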

    If you use this method to distribute user accounts, the LDAP server running ypldapd (or the NIS server) can crash, and users will still be able to log on to the cluster. If accounts are added or changed, however, cluster nodes will not see the change until the script that updates the local /etc/passwd, group, and hosts files runs.

    When using this configuration, you will need to leave the ypbind daemon running on all cluster nodes. You will also need to set the domain name for the client (on Red Hat systems, this is done in the /etc/sysconfig/network file).

yppasswd

  • This is required only on an NIS master machine (it is not required on the LDAP server running the NIS/LDAP gateway software described in the discussion of ypbind in the previous list item). Cluster nodes will normally be configured with yppasswd disabled.

ypserv

  • Same as yppasswd.

[5]Most server hardware now supports this.

[6]In the LVS-DR cluster described in Chapter 13, we will modify the routing table on each node to provide a special mechanism for load balancing incoming requests for cluster resources.

[7]An exception to this might be very large clusters that use a few cluster nodes to mount filesystems and then re-export these filesystems to other cluster nodes. Building this type of large cluster (probably for scientific applications) is beyond the scope of this book.

[8]See Chapter 10 for the reasons why. A more common application of the PXE service is to use this protocol when building Linux Terminal Server Project (LTSP) Thin Clients.

[9]See Part IV of this book for more information.

[10]See also the mod_log_spread project at http://www.backhand.org/mod_log_spread.

[11]On older versions of Linux, and some versions of Unix, this is still called inetd.

[12]Usually used to distribute user accounts, user groups, host names, and similar information within a trusted network.


