So far, the bulk of this chapter has focused on aliases, IP addresses, ARP, and various activities and attributes related to cluster alias routing. An important piece is still missing from the CLUA puzzle: how are client requests directed to the appropriate server process? Not just to the appropriate server, but to the appropriate server process on the actual serving member. You should already understand how a request arrives at the appropriate cluster member, but that member can be running many different applications, any of which may be the target of the client request, or the target process may need to be started by inetd. It's like being dropped off in a neighborhood without any clue of how to find the correct house.
If we can forget clusters for a minute and focus on typical network client-server activities, you may recall that the notion of a socket can represent many things in the mind of a network administrator or network programmer. It can represent a system call (see the socket(2) reference page), a data structure, a file, an endpoint in network communication, or the combination of a 32-bit IP address and a 16-bit port number. Consider, for a moment, the power of being able to identify a process anywhere on the Internet using just 48 bits of information (a 32-bit IP address plus a 16-bit port number). Pretty impressive, isn't it?
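As a back-of-the-envelope illustration (the address and port below are invented for the example), the two pieces really do pack into a single 48-bit number:

```shell
# Illustrative only: combine a 32-bit IPv4 address and a 16-bit port
# (invented example values) into one 48-bit endpoint identifier.
ip=192.168.0.69
port=23
set -- $(IFS=.; echo $ip)     # split the dotted quad into four fields
addr=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
endpoint=$(( (addr << 16) + port ))
echo $endpoint
```

Two different processes anywhere on the Internet cannot share the same 48-bit value at the same time, which is what makes the pair a usable identifier.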
The port number is a 16-bit integer assigned to represent an application. Two of the better-known port numbers are 21 for ftp and 23 for telnet. Port numbers can be viewed in the cluster-shared /etc/services file. Note that there are 64K port numbers available for TCP services, and another 64K port numbers available for UDP services.
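The name-to-port mapping in /etc/services is simple enough to parse by hand. The following toy sketch (sample entries inlined so it is self-contained, rather than reading the real file) shows the lookup:

```shell
# Toy illustration of how /etc/services is organized: first field is the
# service name, second is port/protocol. Sample entries are inlined.
services='
ftp     21/tcp
telnet  23/tcp
smtp    25/tcp  mail
'
port=$(printf '%s\n' "$services" | awk '$1 == "telnet" { print $2 }')
echo "$port"
```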
# cat /etc/services
...
#
# Description: The services file lists the sockets and protocols used for
#              Internet services.
#
# Syntax: ServiceName PortNumber/ProtocolName [alias_1,...,alias_n] [#comments]
#
#  ServiceName    official Internet service name
#  PortNumber     the socket port number used for the service
#  ProtocolName   the transport protocol used for the service
#  alias          unofficial service names
#  #comments      text following the comment character (#) is ignored
#
echo            7/tcp
echo            7/udp
discard         9/tcp           sink null
discard         9/udp           sink null
systat          11/tcp          users
daytime         13/tcp
daytime         13/udp
netstat         15/tcp
quote           17/udp          text
chargen         19/tcp          ttytst
chargen         19/udp          ttytst
ftp             21/tcp
telnet          23/tcp
smtp            25/tcp          mail
... (lots more) ...
rdg             10403/tcp
rdg             10403/udp
wnn4            22273/tcp               # Wnn Ver 4.1
AdvFS           30000/tcp       advfs   # AdvFS GUI/daemon communications
gii             616/tcp                 # GateD Interactive Interface
The current use of ports and sockets on a cluster member (or any system) can be displayed using the "netstat -a" command. The local address and foreign address columns in the output below display the port numbers as the number or mnemonic after the final dot (the port number in "molari.dec.com.4035" is 4035).
# netstat -a
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address           Foreign Address         (state)
tcp        0      0  molari.dec.com.4035     dns5.dec.com.domain     TIME_WAIT
tcp        0    217  molari.dec.com.telnet   laptop2.dec.com.1184    ESTABLISHED
tcp        0      0  babylon5.dec.com.domain *.*                     LISTEN
tcp        0      0  molari.dec.com.domain   *.*                     LISTEN
tcp        0      0  localhost.domain        *.*                     LISTEN
tcp        0      0  molari-ics0.domain      *.*                     LISTEN
tcp        0      0  *.596                   *.*                     LISTEN
... (lots more) ...
tcp        0      0  molari.dec.com.1153     *.*                     LISTEN
tcp        0      0  localhost.1153          *.*                     LISTEN
tcp        0      0  molari-ics0.1153        *.*                     LISTEN
tcp        0      0  molari.dec.com.1152     *.*                     LISTEN
tcp        0      0  localhost.1152          *.*                     LISTEN
tcp        0      0  molari-ics0.1152        *.*                     LISTEN
tcp        0      0  *.830                   *.*                     LISTEN
tcp        0      0  *.2049                  *.*                     LISTEN
tcp        0      0  *.1022                  *.*                     LISTEN
tcp        0      0  *.gii                   *.*                     LISTEN
tcp        0      0  *.111                   *.*                     LISTEN
udp        0      0  *.2049                  *.*
udp        0      0  *.time                  *.*
udp        0      0  babylon5.dec.com.domain *.*
udp        0      0  molari.dec.com.domain   *.*
udp        0      0  localhost.domain        *.*
udp        0      0  molari-ics0.domain      *.*
udp        0      0  *.111                   *.*
udp        0      0  *.1139                  *.*
udp        0      0  *.1144                  *.*
udp        0      0  *.1145                  *.*
udp        0      0  *.1146                  *.*
udp        0      0  babylon5.dec.com.ntp    *.*
udp        0      0  molari.dec.com.ntp      *.*
udp        0      0  localhost.ntp           *.*
... (lots more) ...
The raw port numbers can more easily be discerned by adding the "-n" option to the previous netstat command. Once again, the number following the rightmost dot is the port number.
# netstat -an
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address           Foreign Address         (state)
tcp        0      0  192.168.0.69.4034       192.168.0.2.53          TIME_WAIT
tcp        0      0  192.168.0.69.4035       192.168.0.2.53          TIME_WAIT
tcp        0    301  192.168.0.69.23         192.168.0.2.1184        ESTABLISHED
tcp        0      0  192.168.0.70.53         *.*                     LISTEN
tcp        0      0  192.168.0.69.53         *.*                     LISTEN
tcp        0      0  127.0.0.1.53            *.*                     LISTEN
tcp        0      0  10.0.0.2.53             *.*                     LISTEN
tcp        0      0  *.596                   *.*                     LISTEN
... (lots more) ...
tcp        0      0  192.168.0.69.1153       *.*                     LISTEN
tcp        0      0  127.0.0.1.1153          *.*                     LISTEN
tcp        0      0  10.0.0.2.1153           *.*                     LISTEN
tcp        0      0  192.168.0.69.1152       *.*                     LISTEN
tcp        0      0  127.0.0.1.1152          *.*                     LISTEN
tcp        0      0  10.0.0.2.1152           *.*                     LISTEN
tcp        0      0  *.830                   *.*                     LISTEN
tcp        0      0  *.2049                  *.*                     LISTEN
tcp        0      0  *.1022                  *.*                     LISTEN
tcp        0      0  *.616                   *.*                     LISTEN
tcp        0      0  *.111                   *.*                     LISTEN
udp        0      0  *.2049                  *.*
udp        0      0  *.37                    *.*
udp        0      0  192.168.0.70.53         *.*
udp        0      0  192.168.0.69.53         *.*
udp        0      0  127.0.0.1.53            *.*
udp        0      0  10.0.0.2.53             *.*
udp        0      0  *.111                   *.*
udp        0      0  *.1139                  *.*
udp        0      0  *.1144                  *.*
udp        0      0  *.1145                  *.*
udp        0      0  *.1146                  *.*
udp        0      0  192.168.0.70.123        *.*
udp        0      0  192.168.0.69.123        *.*
udp        0      0  127.0.0.1.123           *.*
... (lots more) ...
The port numbers can be associated with an actual application by examining the /etc/services file (see section 16.10.1). Note that not all port numbers are found in this file. For example, examining the /etc/services file for a telnet entry shows that telnet is associated with TCP port number 23. But there are two sides to a connection. What port number is used for the other side of the telnet connection? The server side (running telnetd) will be associated with port number 23. The client side will have been allocated a port from the range reserved for user requests.
These client side ports are dynamically chosen by the networking software in the kernel. Since the clients are usually not cluster members, there tend to be plenty of these "ephemeral" ports for the clients to use. But have we ever said that a cluster cannot be a client?
What happens with respect to these dynamically allocated ports when the cluster member is the client? The dynamic port numbers must be unique to the cluster in order to distinguish an application in a particular cluster member from the same application running in another cluster member, and we know that the cluster can have 8 members maximum (at the present time). How many of these dynamic ports are available for use in typical client/server network activities?
The number of available dynamic ports can be determined by calculating the difference between two inet subsystem attributes: ipport_userreserved (default is 5000) and ipport_userreservedmin (default is 1024). The difference is 3976. Using the default values, this number indicates how many concurrent client activities can be executing within the cluster at any one time. While 3976 ports may be plenty for a single node machine, this is potentially a bottleneck for a multi-member cluster.
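The arithmetic behind these numbers, using the default attribute values, is simply:

```shell
# Default inet subsystem attribute values on Tru64 UNIX.
ipport_userreservedmin=1024
ipport_userreserved=5000

# Dynamic ports available with the defaults, and after raising the
# ceiling to its 16-bit maximum of 65535.
default_ports=$((ipport_userreserved - ipport_userreservedmin))
max_ports=$((65535 - ipport_userreservedmin))
echo "$default_ports $max_ports"
```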
HP recommends that the ipport_userreserved attribute be increased to its maximum (65535) on clusters with more than two members. This will provide approximately 63k ports for concurrent use spread across the cluster members. See section 19.3.7.3 for an example modifying ipport_userreserved.
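A sketch of how the change might be made, following the usual sysconfig(8) and /etc/sysconfigtab conventions (verify the exact attribute handling against the sys_attrs_inet(5) reference page on your system):

```
# Run-time change on a cluster member:
sysconfig -r inet ipport_userreserved=65535

# Persistent change, via an inet stanza in /etc/sysconfigtab:
inet:
        ipport_userreserved = 65535
```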
The server side of network applications such as telnet and ftp (telnetd or ftpd) is not started until the client has requested access (through the telnet or ftp commands). The server daemons are started up (upon client request) by the inetd daemon, which is responsible for handling the startup of any services listed in the /etc/inetd.conf file. Below is a sample of an inetd.conf file.
# cat /etc/inetd.conf
...
######################################################################
#
# Internet server configuration database
#
# Description:  The inetd.conf file is the first file that the inetd
#               daemon reads for information on how to handle Internet
#               service requests.  It contains global services that
#               can be run on all members in a cluster.
#
# Precedence:   Entries in inetd.conf.local will override entries in
#               inetd.conf because it is read after inetd.conf.
#
# Syntax:  ServiceName SocketType ProtocolName Wait/NoWait UserName \
#          ServerPath ServerArgs
#
#  ServiceName   name of an Internet service defined in the /etc/services file
#  SocketType    type of socket used by the service, either stream or dgram
#  ProtocolName  name of an internet protocol defined in the /etc/protocols
#                file
#  Wait/NoWait   determines whether the inetd daemon waits for
#                a datagram server to release the socket before continuing
#                to listen at the socket
#  UserName      the login that inetd should use to start the server
#  ServerPath    full pathname of the server
#  ServerArgs    optional command line arguments that inetd should use to
#                execute the server
#
ftp         stream  tcp     nowait  root    /usr/sbin/ftpd      ftpd
telnet      stream  tcp     nowait  root    /usr/sbin/telnetd   telnetd
shell       stream  tcp     nowait  root    /usr/sbin/rshd      rshd
login       stream  tcp     nowait  root    /usr/sbin/rlogind   rlogind
exec        stream  tcp     nowait  root    /usr/sbin/rexecd    rexecd
# Run as user "uucp" if you don't want uucpd's wtmp entries.
#uucp       stream  tcp     nowait  root    /usr/sbin/uucpd     uucpd
#finger     stream  tcp     nowait  root    /usr/sbin/fingerd   fingerd
#tftp       dgram   udp     wait    root    /usr/sbin/tftpd     tftpd /tmp
comsat      dgram   udp     wait    root    /usr/sbin/comsat    comsat
#talk       dgram   udp     wait    root    /usr/sbin/talkd     talkd
ntalk       dgram   udp     wait    root    /usr/sbin/ntalkd    ntalkd
# Please note that bootp functionality has been subsumed by the
# DHCP daemon (joind).  Please refer to the joind(8) man page
#bootps     dgram   udp     wait    root    /usr/sbin/bootpd    bootpd
#bootps     dgram   udp     wait    root    /usr/sbin/joind     joind
#time       stream  tcp     nowait  root    internal            time
time        dgram   udp     wait    root    internal            time
#daytime    stream  tcp     nowait  root    internal            daytime
...
The /etc/inetd.conf file is not a CDSL and thus is shared by all cluster members. Suppose you need to disallow certain server-side applications from executing on a subset of your cluster members. For instance, suppose you only wanted one cluster member to execute the ftp service (or substitute any network-oriented service where we mention ftp). This is not resolved by cooking up a cluster alias and having only certain members "join" that alias. That would simply limit the cluster routing and/or serving activities for a particular alias. It says nothing about the particular services offered by the members of the cluster alias. It's like buying a ticket in order to watch the lion tamer at the circus, but the ticket gets you a seat where you can watch the clowns, trapeze artists, elephants, or anything you want.
If you make an alias called clu_ftp_den that you would like used to access ftp service delivered by a single cluster member, but a client issues the "telnet clu_ftp_den" command, there is nothing built into the cluster alias that says it should only be used for accessing ftp service. The alias name is just a name. If all clients follow the convention implicit in the name (clu_ftp_den is supposed to be used for ftp only), then the scheme will work. But the scheme relies on the convention being followed by all clients.
The following example shows the creation of the clu_ftp_den alias that is "joined" by one member of the cluster. The alias is then used to ping and telnet to the cluster member proving that the alias has no direct control over the service it is used to activate.
The client uses the non-existent alias in an unsuccessful ping attempt. At this point, the client's /etc/hosts file already contains the alias name and IP address, but the alias does not exist in the cluster.
# hostname climach4
# ping clu_ftp_den
PING clu_ftp_den (192.168.0.171): 56 data bytes

----clu_ftp_den PING Statistics----
3 packets transmitted, 0 packets received, 100% packet loss
# grep den /etc/hosts
192.168.0.171   clu_ftp_den
The cluster member also has the alias name and IP address reflected in the /etc/hosts file.
[molari] # grep den /etc/hosts
192.168.0.171   clu_ftp_den
Create the alias on cluster member molari.
[molari] # /usr/sbin/cluamgr -a alias=clu_ftp_den
[molari] # /usr/sbin/cluamgr -s clu_ftp_den
Status of Cluster Alias: clu_ftp_den
netmask: 0
aliasid: 2
flags: 5<ENABLED,IP_V4>
connections rcvd from net: 0
connections forwarded: 0
connections rcvd within cluster: 0
data packets received from network: 0
data packets forwarded within cluster: 0
datagrams received from network: 0
datagrams forwarded within cluster: 0
datagrams received within cluster: 0
fragments received from network: 0
fragments forwarded within cluster: 0
fragments received within cluster: 0
Member Attributes:
memberid: 2, selw=1, selp=1, rpri=1 flags=10<ENABLED>
The alias has not been joined, and aliasd is not yet aware of the new alias. Let's correct that situation.
[molari] # /usr/sbin/cluamgr -a alias=clu_ftp_den,join
[molari] # /usr/sbin/cluamgr -s clu_ftp_den
Status of Cluster Alias: clu_ftp_den
netmask: 0
aliasid: 2
flags: 5<ENABLED,IP_V4>
connections rcvd from net: 0
connections forwarded: 0
connections rcvd within cluster: 0
data packets received from network: 0
data packets forwarded within cluster: 0
datagrams received from network: 0
datagrams forwarded within cluster: 0
datagrams received within cluster: 0
fragments received from network: 0
fragments forwarded within cluster: 0
fragments received within cluster: 0
Member Attributes:
memberid: 2, selw=1, selp=1, rpri=1 flags=11<JOINED,ENABLED>
[molari] # /usr/sbin/cluamgr -r start
From the client system:
# ping clu_ftp_den
PING clu_ftp_den (192.168.0.171): 56 data bytes
64 bytes from 192.168.0.171: icmp_seq=0 ttl=64 time=1 ms
64 bytes from 192.168.0.171: icmp_seq=1 ttl=64 time=0 ms

----clu_ftp_den PING Statistics----
2 packets transmitted, 2 packets received, 0% packet loss
round-trip (ms)  min/avg/max = 0/0/1 ms
# telnet clu_ftp_den
Trying 192.168.0.171...
Connected to clu_ftp_den.
Escape character is '^]'.

Compaq Tru64 UNIX V5.1A (Rev. 1885) (molari.dec.com) (pts/1)

login: root
...
# exit
Connection closed by foreign host.
Cluster member sheridan is not aware of the clu_ftp_den alias.
[sheridan] # /usr/sbin/cluamgr -s all
Status of Cluster Alias: babylon5.dec.com
netmask: 0
aliasid: 1
flags: 7<ENABLED,DEFAULT,IP_V4>
connections rcvd from net: 5
connections forwarded: 5
connections rcvd within cluster: 0
data packets received from network: 1277
data packets forwarded within cluster: 174
datagrams received from network: 28
datagrams forwarded within cluster: 0
datagrams received within cluster: 28
fragments received from network: 0
fragments forwarded within cluster: 0
fragments received within cluster: 0
Member Attributes:
memberid: 1, selw=3, selp=1, rpri=1 flags=11<JOINED,ENABLED>
memberid: 2, selw=3, selp=1, rpri=1 flags=11<JOINED,ENABLED>
For another example, suppose you make an alias and have only three members of a four-member cluster actually join the alias. When a client requests telnet/ftp/other network services using the cluster alias, there is nothing inherent in the alias itself that would limit its use to a particular subset of the network services. The cluster alias simply determines a list of cluster members that will potentially respond when the alias (which is after all an IP address) is used by a client node on the network.
Figure 16-7 documents some of the decisions made by the cluster alias subsystem. Port accessibility can be manipulated through the use of the /etc/inetd.conf, /etc/inetd.conf.local, /etc/services, or /etc/clua_services files.
Figure 16-7: Cluster Alias Connection Decision Tree
In order to provide a more granular approach to network services offered by the cluster, there is a member-specific version of the /etc/inetd.conf file called /etc/inetd.conf.local. The /etc/inetd.conf.local file provides a mechanism whereby an individual member of the cluster may effectively "turn off" its ability to respond to a request for a particular service. (This can also be thought of in the reverse – that is, you can disable all services cluster wide, but then turn on selected services in appropriate members' /etc/inetd.conf.local file(s).) The member-specific local file is read after the cluster-wide /etc/inetd.conf file. By default, the contents of the /etc/inetd.conf.local file are nothing but comments (see the following example).
# cat /etc/inetd.conf.local
...
#
######################################################################
#
# Internet server configuration database
#
# Description:  The inetd.conf.local file is the second file that the inetd
#               daemon reads for information on how to handle Internet service
#               requests.  It contains local services that can only
#               be run on this member, and not other members, in a cluster.
#               It can also be used to disable a service for this member
#               by using 'disable' in the ServerPath field of a service.
#
# Precedence:   Entries in inetd.conf.local will override entries in
#               inetd.conf because it is read after inetd.conf.
#
# Syntax:  ServiceName SocketType ProtocolName Wait/NoWait UserName \
#          ServerPath ServerArgs
#
#  ServiceName   name of an Internet service defined in the /etc/services file
#  SocketType    type of socket used by the service, either stream or dgram
#  ProtocolName  name of an internet protocol defined in the /etc/protocols
#                file
#  Wait/NoWait   determines whether the inetd daemon waits for
#                a datagram server to release the socket before continuing
#                to listen at the socket
#  UserName      the login that inetd should use to start the server
#  ServerPath    full pathname of the server
#  ServerArgs    optional command line arguments that inetd should use to
#                execute the server
If we add the following to this file, the cluster member owning this file would be disallowed from satisfying a client request for the telnet (or whatever) service.
telnet stream tcp nowait root disable telnetd
The key syntax in the above line is "disable", which inetd reads after it has read the contents of /etc/inetd.conf. The result is that telnet service is enabled on all cluster members except those whose member-specific /etc/inetd.conf.local file contains the "disable" entry.
Note: This feature does not work in V5.0A or V5.1. It does work in V5.1A and beyond, but the V5.1A release notes incorrectly state that it is broken. (The release notes will probably be corrected by the time you read this.) If you are still using V5.0A or V5.1, the way around this problem is to disable the service completely from the cluster-wide /etc/inetd.conf file, and then enable it in the appropriate member-specific /etc/inetd.conf.local file(s).
The key point to remember is that the cluster alias determines which members can service client requests that use the alias, but aliases have no control over which services are available on particular members. The /etc/inetd.conf.local file provides the ability to further determine which services are available from individual cluster members.
This extra level of service granularity does not come for free. There will be extra CLUA effort involved in assuring that the client's request ends up being delivered to a cluster member that is able to fully respond. For instance, suppose the selp attribute indicates that member1 should handle a client request for telnet service. Further suppose that the client used a cluster alias to identify the target as the cluster (or a subset of the cluster). The CLUA subsystem would select member1 to handle the activity based on member1 having the highest selp attribute, but suppose member1 has "telnet stream tcp nowait root disable telnetd" in its /etc/inetd.conf.local file. This would ultimately cause the CLUA subsystem to select the next best member based on selp.
These decisions will cost extra CPU cycles. However, once the TCP connection has been made, it will not be necessary to repeat the CLUA discovery and routing process. Note that for UDP, there will be a need for some repetition of the CLUA decision-making process due to the connectionless nature of UDP.
The following output depicts a normal round robin use of cluster members as network servers based on default selp and selw values.
From the client system:
# rsh clua_den /usr/sbin/cluamgr -s clua_den | grep -E "^Status|^[Mm]ember" | more
Status of Cluster Alias: clua_den
Member Attributes:
memberid: 1, selw=1, selp=1, rpri=1 flags=11<JOINED,ENABLED>
memberid: 2, selw=1, selp=1, rpri=1 flags=31<JOINED,ENABLED,VIRTUAL>
# rsh clua_den hostname
molari.dec.com
# rsh clua_den hostname
sheridan.dec.com
# rsh clua_den hostname
molari.dec.com
# rsh clua_den hostname
sheridan.dec.com
If we change the selp attribute on one of the members (molari), the member with the highest selp value will always be selected to handle the service request from the client as shown in the following example.
[molari] # cluamgr -a alias=clua_den,selp=2
[molari] # cluamgr -s clua_den | grep -E "^Status|^[Mm]ember" | uniq
Status of Cluster Alias: clua_den
Member Attributes:
memberid: 1, selw=1, selp=1, rpri=1 flags=11<JOINED,ENABLED>
memberid: 2, selw=1, selp=2, rpri=1 flags=31<JOINED,ENABLED,VIRTUAL>
On the client, the behavior now favors the member with the highest selp value (molari).
From the client system:
# rsh clua_den hostname
molari.dec.com
# rsh clua_den hostname
molari.dec.com
# rsh clua_den hostname
molari.dec.com
But if the service is disabled in the member's /etc/inetd.conf.local file, the selp dominance is meaningless, as shown by member sheridan getting the request in the following example. The "kill -HUP" command informs inetd that there has been a change to the configuration file.
[molari] # grep rshd /etc/inetd.conf.local
shell       stream  tcp     nowait  root    disable /usr/sbin/rshd  rshd
# kill -HUP 1049621
From the client system:
# rsh clua_den hostname
sheridan.dec.com
# rsh clua_den hostname
sheridan.dec.com
# rsh clua_den hostname
sheridan.dec.com
Starting to feel like you've got a grip on the relationship between CLUA and various network services? Fasten your seatbelt – there's more. Remember the nice and simple /etc/services file presented earlier in the chapter? The truth is that in a cluster, the CLUA subsystem uses the /etc/clua_services file. This file has almost the same look and feel as the /etc/services file, but it allows the use of several new cluster attributes that can be associated with various services. The cluster attributes are described by the comments at the beginning of the file.
# cat /etc/clua_services
...
# Description:  The services file lists the properties of
#               Internet sockets and protocols used with
#               TruCluster Alias.
#
# Syntax: ServiceName PortNumber/ProtocolName [opt_1,...,opt_n] [#comments]
#
#  ServiceName   official Internet service name (informational only)
#  PortNumber    the socket port number used for the service
#  ProtocolName  the transport protocol used for the service
#  opt_n         options, one or more of the following:
#
#       < the following options are mutually exclusive >
#
#       in_multi      may act as a server on multiple nodes
#       in_single     may act as a server on one node, with transparent
#                     failover to an instance of the service on another node
#                     (default for both TCP and UDP)
#       in_noalias    this port will not receive inbound alias messages
#
#       < the following options may be used with any option above >
#
#       out_alias     if this port is used as a destination, the default
#                     cluster alias will be used as the source address
#       in_nolocal    connections to non-alias addresses will not be
#                     honored
#       static        this port may not be assigned as a dynamic port
#                     (e.g. as an ephemeral port or through bindresvport)
#                     Should be set for any ports above 512 used by an
#                     active daemon (e.g. those in inetd.conf)
#
#  #comments     text following the comment character (#) is ignored
#
#
echo        7/tcp       in_multi
echo        7/udp       in_multi
discard     9/tcp       in_multi
discard     9/udp       in_multi
daytime     13/tcp      in_multi
daytime     13/udp      in_multi
netstat     15/tcp      in_single
quote       17/udp      in_multi
chargen     19/tcp      in_multi
chargen     19/udp      in_multi
ftp         21/tcp      in_multi
telnet      23/tcp      in_multi,out_alias
...
bootps      67/udp      in_single,out_alias
tftp        69/udp      in_single
finger      79/tcp      in_noalias
... (lots more)
In the following sections, we will be changing entries in the /etc/clua_services file for various services. The changes will not take effect until the cluster alias subsystem is made aware of them through the "cluamgr -f" command. Furthermore, if the service is started by inetd, the inetd daemon must be informed of the change as well. This can be achieved by issuing the "kill -HUP" command on the inetd child process. See the following output, which uses the clu_alias script supplied at the Digital Press and BRUDEN websites (see Appendix B for the URL).
[sheridan] # ./clu_alias -f
member1: (molari) "/usr/sbin/cluamgr -f" - accessing via molari-ics0"
member2: (sheridan) "/usr/sbin/cluamgr -f"
[sheridan] # kill -HUP $(cat /var/run/inetd.pid)
[sheridan] # kill -HUP $(cat /var/cluster/members/member1/run/inetd.pid)
The same result can be achieved more succinctly by using the following syntax:
# kill -HUP $(cat /var/cluster/members/member[12]/run/inetd.pid)
The kill command is not a cluster-wide command prior to V5.1A. If you are using V5.0A or V5.1, you will need to use the rsh command.
# rsh hostname 'kill -HUP $(cat /var/run/inetd.pid)'
This is shown in the example in section 16.10.7.1.
By default, each service has the in_single attribute associated with it. This attribute indicates that incoming traffic destined for the service should only be delivered on one cluster member at a time. It does not mandate that there be only one active instance of the service running. It indicates that if a second active instance of the service needs to be started, it will be started on the same cluster member as the first instance, if the first instance is still functioning. This usually results in the service running on the member with the oldest inetd running.
If the member on which the first instance was running has gone down, the CLUA software will select another eligible cluster member to serve the requests. But at no time will there be more than one active instance running on more than one cluster member (active means accepting incoming communications). There may be inactive instances of the application in existence on other cluster members, but once again, active instances will be executing on one cluster member at a time.
The following example sets up the cluster to be a tftp server. The attribute associated with tftp is in_single, so we should see all tftpd server instances executing on one cluster member.
[molari] # grep tftp /etc/clua_services
tftp        69/udp      in_single
Make a directory to be used for tftp experiments.
# mkdir /tftp
Make a file with a decent size so that a tftp request takes some time to execute.
# dd if=/dev/zero of=/tftp/big1 bs=64k count=100k
3494+0 records in
3493+0 records out
Allow access to all requesters.
# chmod a+rwx /tftp/big1
# ls -l /tftp/big1
-rwxrwxrwx   1 root     system   228982784 Sep  2 16:29 /tftp/big1
Alter the /etc/inetd.conf entry for tftp such that it allows tftp access to files in the /tftp directory. Currently, it supports access to files in /tmp only.
# grep tftp /etc/inetd.conf
#tftp       dgram   udp     wait    root    /usr/sbin/tftpd     tftpd /tmp
This amusing sed command searches for the commented-out line that starts with "#tftp", removes the comment character, replaces the string "/tmp" with the string "/tftp", and writes the altered file to /etc/inetd.conf.temp. The new file is then used to replace the original.
# sed -e 's/^#tftp/tftp/' -e 's|/tmp|/tftp|' /etc/inetd.conf > /etc/inetd.conf.temp
# mv /etc/inetd.conf.temp /etc/inetd.conf
# grep tftp /etc/inetd.conf
tftp        dgram   udp     wait    root    /usr/sbin/tftpd     tftpd /tftp
# cat /var/run/inetd.pid
1049621
# kill -HUP $(cat /var/run/inetd.pid)
# rsh sheridan-ics0 'kill -HUP $(cat /var/run/inetd.pid)'
or
# kill -HUP $(cat /var/cluster/members/member[12]/run/inetd.pid)
On the client, after issuing two tftp commands for execution in the background, both of the tftpd servers are running on the same cluster member. This would be true no matter how many tftp commands were issued. They would all be served by daemons running on the same cluster member because of the in_single service attribute. Think of in_single as meaning incoming requests are serviced on a single cluster member.
From the client system:
# tftp babylon5 -get /tftp/big1 /dev/null &
[1]     29390
# tftp babylon5 -get /tftp/big1 /dev/null &
[2]     29392
From the cluster members:
[molari] # ps -A | grep tftp | grep -v grep
1049590  ??       S        0:02.01 tftpd /tftp
1049598  ??       S        0:01.25 tftpd /tftp
[sheridan] # ps -A | grep tftp | grep -v grep
<no output>
The most common alternative to in_single is in_multi. This attribute indicates that if a service is already running on one cluster member, and a client requests a second instance, the service may run on a different cluster member if warranted. Thus, the service can potentially be running on multiple cluster members simultaneously. The cluster member(s) chosen to run the server will be determined by the highest selp cluster alias attribute. If there are multiple members with the same selp value, the members will be used selw times before switching to the next cluster member with equal selp. Cluster members with lower selp values will not be used unless all members with higher selp are down or otherwise unavailable.
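The selection rule just described can be sketched in a few lines of shell. This is a hypothetical model, not the real CLUA code; the member names and selp/selw values are invented for the example:

```shell
# Hypothetical sketch of CLUA member selection for in_multi services:
# only members with the highest selp are eligible, and each eligible
# member receives selw consecutive requests in turn.
# Members are "name:selp:selw" triples with invented values.
members="molari:1:2 sheridan:1:1 delenn:0:1"

# Find the highest selp among the members.
top=0
for m in $members; do
    p=${m#*:}; p=${p%%:*}
    if [ "$p" -gt "$top" ]; then top=$p; fi
done

# Build one selection cycle: each eligible member repeated selw times.
cycle=""
for m in $members; do
    name=${m%%:*}
    p=${m#*:}; p=${p%%:*}
    w=${m##*:}
    if [ "$p" -eq "$top" ]; then
        i=0
        while [ "$i" -lt "$w" ]; do
            cycle="$cycle $name"
            i=$((i + 1))
        done
    fi
done

# Serve six requests by walking the cycle repeatedly.
served=""
count=0
while [ "$count" -lt 6 ]; do
    for name in $cycle; do
        if [ "$count" -ge 6 ]; then break; fi
        served="$served $name"
        count=$((count + 1))
    done
done
echo "$served"
```

With these invented values, molari (selw=2) takes two requests, sheridan (selw=1) takes one, and the cycle repeats; delenn never serves because its selp is lower.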
The following example replaces the in_single service attribute with the in_multi service attribute. Thus we can see the tftpd daemons running on both members. The example uses the default cluster alias (babylon5). By default, the selw for the default cluster alias is 3. For this example, it is changed to 1, in order to see the results of the in_multi service attribute more easily.
[molari] # cluamgr -s babylon5 | grep selw
memberid: 1, selw=3, selp=1, rpri=1 flags=11<JOINED,ENABLED>
memberid: 2, selw=3, selp=1, rpri=1 flags=11<JOINED,ENABLED>
[molari] # cluamgr -a alias=babylon5,selw=1
[molari] # cluamgr -s babylon5 | grep selw
memberid: 1, selw=3, selp=1, rpri=1 flags=11<JOINED,ENABLED>
memberid: 2, selw=1, selp=1, rpri=1 flags=11<JOINED,ENABLED>
[sheridan] # cluamgr -a alias=babylon5,selw=1
[sheridan] # cluamgr -s babylon5 | grep selw
memberid: 1, selw=1, selp=1, rpri=1 flags=11<JOINED,ENABLED>
memberid: 2, selw=1, selp=1, rpri=1 flags=11<JOINED,ENABLED>
Next we change the service to in_multi.
[molari] # grep tftp /etc/clua_services
tftp        69/udp      in_single
[molari] # sed '/^tftp/s/in_single/in_multi/' /etc/clua_services > /etc/clua_services.temp
[molari] # mv /etc/clua_services.temp /etc/clua_services
[molari] # grep tftp /etc/clua_services
tftp        69/udp      in_multi
On the client, we run a few tftp commands and see which cluster members do the serving. It turns out that both cluster members are running an instance of the tftpd server. This is allowed because of the in_multi service attribute applied to the tftp service. Think of in_multi as meaning incoming requests can be handled on multiple members.
From the client system:
# rsh molari ps -A | grep tftp | grep -v grep
# rsh sheridan ps -A | grep tftp | grep -v grep
# tftp babylon5 -get /tftp/big1 /dev/null &
[1]     5039
# tftp babylon5 -get /tftp/big1 /dev/null &
[2]     5040
# rsh molari ps -A | grep tftp | grep -v grep
1049590  ??       S        0:02.01 tftpd /tftp
# rsh sheridan ps -A | grep tftp | grep -v grep
525680   ??       S        0:05.13 tftpd /tftp
Another alternative attribute is in_noalias. Setting this attribute on a service indicates that the service is not available to clients if they request the service using the cluster alias. Note that in_single, in_multi, and in_noalias are mutually exclusive attributes. This means that only one of these three attributes will be applied to a particular service. If you happen to set up a service with incompatible attributes, the "cluamgr -f" command will reject the attribute and issue an error message. The following example shows what happens if you make the mistake of trying to apply the incompatible combination of in_noalias and in_multi.
# grep tftp /etc/clua_services
tftp 69/udp in_noalias,in_multi
# cluamgr -f
Illegal option supplied service file entry: tftp 69/udp in_noalias,in_multi
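The same conflict can be spotted mechanically. The following is a hypothetical helper script, not part of the CLUA software ("cluamgr -f" performs its own validation); it simply flags any services file entry that combines more than one of the three mutually exclusive attributes:

```shell
#!/bin/sh
# Flag clua_services entries that combine more than one of the
# mutually exclusive attributes in_single, in_multi, in_noalias.
# (Illustrative only; cluamgr -f does its own checking.)
check_clua_attrs() {
    awk '
    /^[ \t]*#/ || NF < 3 { next }          # skip comments and attribute-less lines
    {
        n = 0
        if ($3 ~ /in_single/)  n++
        if ($3 ~ /in_multi/)   n++
        if ($3 ~ /in_noalias/) n++
        if (n > 1)
            print "conflicting attributes: " $0
    }' "$1"
}

# Example run against a scratch file (sample entries only):
cat > /tmp/clua_services.test <<'EOF'
ftp     21/tcp  in_single
tftp    69/udp  in_noalias,in_multi
login   513/tcp in_multi,out_alias,static
EOF
check_clua_attrs /tmp/clua_services.test
```

Running it against the three-line sample flags only the tftp entry, since the other two lines each carry exactly one of the exclusive attributes.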
The following example shows that applying the in_noalias service attribute to the tftp service prevents the tftpd daemon from running when the service is requested by a client using a cluster alias. The in_noalias attribute has no effect if the client requests the service using an actual member name instead of an alias.
From the client system:
# rsh molari grep tftp /etc/clua_services
tftp 69/udp in_noalias
No current tftp activity on either cluster member.
# rsh molari ps -A | grep tftp | grep -v grep
# rsh sheridan ps -A | grep tftp | grep -v grep
Try to get some tftp activity going using the cluster alias.
# tftp babylon5 -get /tftp/big1 /dev/null &
[1] 5118
# tftp babylon5 -get /tftp/big1 /dev/null &
[2] 5119
No luck because of the in_noalias service attribute.
# rsh molari ps -A | grep tftp | grep -v grep
# rsh sheridan ps -A | grep tftp | grep -v grep
Now try to get some tftp activity going using the actual cluster member names (not an alias).
# tftp molari -get /tftp/big1 /dev/null &
[1] 5139
The tftpd daemon started on molari because we used the actual member name. Our request, therefore, was not affected by the in_noalias attribute.
# rsh molari ps -A | grep tftp | grep -v grep
1049745 ?? S 0:01.70 tftpd /tftp
As you (hopefully) recall, a cluster alias provides an additional IP address through which cluster services may be accessed. You may want to restrict a service so that clients must use a cluster alias to reach it. Attaching the in_nolocal attribute to a service indicates that the service cannot be accessed except through an alias; requests addressed to any of a member's local interface IP addresses (or node names) will not reach a service carrying the in_nolocal attribute.
The following example first shows successful client access to a service using a cluster alias, and then shows an unsuccessful attempt due to the in_nolocal attribute having been applied to the tftp service.
From the client system:
# rsh sheridan grep tftp /etc/clua_services
tftp 69/udp in_multi,in_nolocal
Get some tftp activity going using the cluster alias. Works like a charm.
# tftp babylon5 -get /tftp/big1 /dev/null &
[1] 5182
# rsh molari ps -A | grep tftp | grep -v grep
1049867 ?? S 0:00.00 tftpd /tftp
Try to get some tftp activity going using a member name. No luck because of the in_nolocal attribute.
# tftp sheridan -get /tftp/big1 /dev/null &
[1] 5202
# rsh sheridan ps -A | grep tftp | grep -v grep
When nodes communicate on a network, the sending node identifies itself in the IP header with an IP address that is in use on that node. Normally this is the address of a local interface, not the address of a cluster alias. Various authentication mechanisms may function better if a sending cluster member identifies itself with the cluster alias address rather than a somewhat unpredictable, interface-based address, which in some cases may come from different cluster members at varying points in the communication. It is better for the address identifying the sending node to be consistent and predictable. Using the alias also hides the fact that the response (or request) comes from a cluster. This behavior is achieved by applying the out_alias attribute to a service.
This service attribute is particularly useful for the rlogin, rsh, and rcp services, which allow a trust to be established between accounts on multiple network nodes through the /etc/hosts.equiv or ~/.rhosts files. Rather than placing an entry for each cluster member in these files, the non-cluster host can place a single entry containing the cluster alias.
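For instance, using the host names from this chapter's examples, without out_alias the non-cluster host would need entries for both molari.dec.com and sheridan.dec.com; with out_alias applied to the r-services, its /etc/hosts.equiv (or ~/.rhosts) needs only the single alias line:

```
babylon5.dec.com
```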
The following example shows the login service with the out_alias attribute applied. After the attribute is removed, rlogin activity requires a password; prior to removing the out_alias attribute, the trust between the systems provided a convenient login with no password required. Needless to say, you probably do not want this trust enabled in a highly secure computing environment.
[sheridan] # grep login /etc/clua_services
login 513/tcp in_multi,out_alias,static
Remove out_alias from the login line and issue a "cluamgr -f" command to inform the CLUA software of the change.
[sheridan] # grep login /etc/clua_services
login 513/tcp in_multi,static
# cluamgr -f
Log in using the cluster alias. Notice that no password was requested. This usually indicates that a trust is enabled between the client and server through the /etc/hosts.equiv file or the ~/.rhosts file. See the reference pages for hosts.equiv or rhosts for more information on these files.
From the client system:
# rlogin babylon5
Last login: Tue Sep 3 00:41:34 EDT 2002 from babylon5.dec.com
...
# hostname
sheridan.dec.com
Notice that when the login request is handled by molari, there is a prompt for password information. As you will see below, the information in the /etc/hosts.equiv file and/or the ~/.rhosts file consisted of the cluster alias and not the individual cluster member names. This works fine as long as the out_alias attribute is applied to the login service in the /etc/clua_services file.
From the client system:
# rlogin babylon5
Password:
Last login: Tue Sep 3 00:46:19 EDT 2002 from babylon5.dec.com
...
# hostname
molari.dec.com
In order to achieve the convenient trust that was enabled prior to removing the out_alias attribute from the /etc/clua_services file, we will have to enter a line for each individual cluster member in the /.rhosts file.
[molari] # cat /.rhosts
babylon5.dec.com
sheridan-ics0
molari-ics0
climach4
climach3
[molari] # cat >> /.rhosts
molari
sheridan
[molari] # cat /.rhosts
babylon5.dec.com
sheridan-ics0
molari-ics0
climach4
climach3
molari
sheridan
Now the login works without a password required.
From the client system:
# rlogin babylon5
Last login: Tue Sep 3 00:48:42 EDT 2002 from sheridan.dec.com
You can take the out_alias attribute to mean that outgoing traffic will use the alias IP address to identify the sending node on the network.
The final attribute, static, causes services with port numbers higher than 512 to be statically bound to their assigned ports; those port numbers will never be handed out as dynamic ports. For example, once port 540 is associated with uucp (started by inetd), uucp is the only service that will ever be associated with port 540.
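As an illustration of the entry format only (the exact attribute combination in your own /etc/clua_services file may differ), such an entry might look like:

```
uucp    540/tcp    static
```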
As mentioned earlier, if you change the /etc/clua_services file, you must force the file to be re-read with the "cluamgr -f" command (on each member). If the change involves a service started by inetd, inetd must also be restarted (on each member) using a "kill -HUP" command.
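The full reload sequence can be captured in a small function to run on each member. This is a sketch: it assumes inetd records its PID in /var/run/inetd.pid, which you should verify on your own system (or locate the PID with ps instead).

```shell
# Sketch: re-read /etc/clua_services and restart inetd-managed
# services. Run on EACH cluster member.
# Assumption: inetd writes its PID to /var/run/inetd.pid.
reload_clua_services() {
    cluamgr -f                               # force CLUA to re-read /etc/clua_services
    kill -HUP "$(cat /var/run/inetd.pid)"    # signal inetd to restart its services
}
```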