20.4 NFS in a Cluster

NFS is a heavily used mechanism for providing transparent remote access to file systems. It requires a cooperative effort between the client system (which issues the remote mount request) and the server system (which provides the remote access).

20.4.1 NFS Client

Setting up a cluster member as an NFS client means that any NFS file systems mounted by that member will be accessible to the other members of the cluster through the mounting member. Other members can also be NFS clients, but only one member can mount a particular NFS-exported file system at a time. NFS clients remotely mount file systems (or directories) that actually reside on another system in the network. Given what we know about CFS, it is reasonable to expect that a file system mounted on one member will be visible to all members, and this is indeed true for NFS-mounted file systems as well. One member requests the remote mount, but all other members gain access to the remotely mounted file system through the member that actually performed the mount. The member requesting the NFS mount automatically becomes the CFS server for that file system (see Figure 20-1).

Figure 20-1: NFS Client – CFS Server

Note

If the mounting member fails, the NFS mount disappears, and another member will not automatically become the new NFS client/CFS server unless automount(8) (V5.0A-V5.1A) or autofs(8) (V5.1-V5.1B) is configured.

 # mount -t nfs
 /usr/patches@delenn on /patches type nfs (v3, ro, nosuid, udp, hard, intr)

 # cfsmgr /patches
 Domain or filesystem name = /patches
 Server Name = molari
 Server Status : OK

 # ls -l /patches
 T64V51AB01AS0001-20020116.tar.gz
 T64V51AB02AS0002-20020513.tar.gz

20.4.1.1 AutoFS

Note that if a client member that has mounted a remote file system (and is therefore providing access to it through CFS) fails, all cluster members lose access to that remote file system. AutoFS became available in Tru64 UNIX version 5.1 as a higher-availability alternative to the automount command. HP recommends using AutoFS as an "automount" mechanism whereby remote file systems are automatically mounted upon reference. More importantly, if AutoFS is configured in a cluster, it is configured (using CAA) so that automatic failover of the mounted file systems can be arranged.

 # caa_stat -t autofs
 Name           Type          Target         State     Host
 ------------------------------------------------------------
 autofs         application   ONLINE         ONLINE    molari

Note that this is purely a client-side activity (the NFS mounting is done only on the client). For more information on AutoFS, see the Tru64 UNIX Network Administration Guide: Services (the Network Administration Guide was split into two books in V5.1A) in the documentation set.
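As a rough sketch only (the rc variable names and map arguments below are our assumptions, not taken from the documentation; verify them with nfsconfig(8) or the Network Administration guide before use), AutoFS could be enabled cluster-wide by setting its variables in /etc/rc.config.common and starting the registered CAA resource:

 # rcmgr -c set AUTOFS 1                    # assumed variable name; enables AutoFS cluster-wide
 # rcmgr -c set AUTOFSFLAGS "/net -hosts"   # assumed variable; arguments for the AutoFS maps
 # caa_start autofs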

An interesting alternative to autofs exists. It involves creating a CDSL (which will be cluster-wide) and using the CDSL as the NFS mount point. The advantage is a form of "virtual client failover," since the CFS-serving part of the NFS client responsibilities (don't forget that the member that performs the NFS client mount also functions as the CFS server) can now be handled by any cluster member, because the CDSL is visible cluster-wide. The disadvantage of this technique is that all coherency issues become the responsibility of NFS, which is less sophisticated in this area than CFS. The TruCluster Server Configuration Guide suggests the following three steps (a combined example appears after the list):

  • Create the mount point if one does not already exist.

     # mkdir /mountpoint
  • Use the "mkcdsl -a" command to convert the directory into a CDSL. This will copy an existing directory to a member-specific area on all members.

     # mkcdsl -a /mountpoint
  • Using the same NFS server, mount the NFS file system on each cluster member.

     # mount server:/filesystem /mountpoint
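As a concrete (and purely hypothetical) illustration using an NFS server named delenn and a mount point named /patches, the sequence might look like the following; the mkdir and mkcdsl steps are run once, while the mount is repeated on every member:

 # mkdir /patches
 # mkcdsl -a /patches
 # mount delenn:/usr/patches /patches     # repeat this mount on each cluster member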

20.4.2 NFS Server

The cluster can also be configured as an NFS server. The typical arrangement treats the entire cluster as a single NFS server.

20.4.2.1 Exporting File Systems Containing CDSLs

We suggest that you consider the contents of the file systems that you export. If an exported file system contains CDSLs, you might expect the client to receive access to the member-specific file pointed to by the CDSL. At first glance this seems to be an acceptable arrangement, but consider what happens when a system (any system – even a non-clustered system) accesses a file using a CDSL. Recall that CDSLs exist on a standalone system (such as our NFS client), but there they resolve to /cluster/members/member0 locations (as opposed to /cluster/members/member1, /cluster/members/member2, and so on). Therefore an NFS client system that resolves a file reference to a CDSL on the NFS server actually resolves the CDSL on the NFS client! Oops – definitely not what we had in mind. The more you think about it, the more it makes sense: any time a system references a CDSL, it resolves it to a location on the system that is referencing the CDSL, which in this case is the NFS client. The lesson here is to export file systems that do not contain CDSLs, or at least design the client-side applications so that they do not access the CDSLs without being aware of the repercussions. The following example shows some notable behavior when accessing CDSLs from an NFS client. As odd as this behavior may seem, it is reasonable, but it serves to emphasize that CDSLs can potentially be a problem in an exported file system.

 # file /etc/rc.config
 /etc/rc.config: symbolic link to ../cluster/members/{memb}/etc/rc.config

 # grep "HOSTNAME=" /cluster/members/member?/etc/rc.config /cluster/members/member0/etc/rc.config:HOSTNAME="molari.dec.com" /cluster/members/member1/etc/rc.config:HOSTNAME="molari.dec.com" /cluster/members/member2/etc/rc.config:HOSTNAME="sheridan.dec.com" 

From a client system, mount the cluster's /etc directory.

 # mount babylon5:/etc /mnt
 # grep "HOSTNAME=" /mnt/rc.config
 HOSTNAME="delenn.dec.com"

20.4.2.2 Excluding Members from NFS Server Responsibilities

If desired, some members may be excluded from NFS serving duties by using the sysman command with the -focus option (see the TruCluster Server Cluster Administration Guide for more information). By focusing the configuration utility (sysman or nfsconfig) on a particular member, you override the cluster-wide configuration for that member. So you can configure the cluster as an NFS server but then exclude certain members if you prefer. The following output shows that the NFS serving variables normally appear in the /etc/rc.config.common file, but if a member is reconfigured as a member-specific NFS server, the variables appear in that member's /etc/rc.config file instead.

 # grep -i NFS /etc/rc.config /etc/rc.config.common
 /etc/rc.config.common:NUM_NFSIOD="7"
 /etc/rc.config.common:export NUM_NFSIOD
 /etc/rc.config.common:NFS_CONFIGURED="1"
 /etc/rc.config.common:export NFS_CONFIGURED
 /etc/rc.config.common:NFSSERVING="1"
 /etc/rc.config.common:export NFSSERVING
 /etc/rc.config.common:NFSLOCKING="1"
 /etc/rc.config.common:export NFSLOCKING
 /etc/rc.config.common:PCNFSD="0"
 /etc/rc.config.common:export PCNFSD

 # nfsconfig -ui cui -focus molari

De-configure the member and then reconfigure it as a member-specific NFS server.

 # grep -i NFS /etc/rc.config /etc/rc.config.common
 /etc/rc.config:NFS_CONFIGURED="1"
 /etc/rc.config:export NFS_CONFIGURED
 /etc/rc.config:NFSSERVING="1"
 /etc/rc.config:export NFSSERVING
 /etc/rc.config:NFSLOCKING="1"
 /etc/rc.config:export NFSLOCKING
 /etc/rc.config:PCNFSD="0"
 /etc/rc.config:export PCNFSD
 ...

Alternatively, you could leave the NFS values in rc.config.common so that all members are configured and then set NFS_CONFIGURED to "0" in the rc.config file of the member that you do not want configured. The rc.config file overrides the values in rc.config.common.
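A quick sketch of that alternative, run on the member you want to exclude (on a cluster member, rcmgr without the -c flag reads and writes the member-specific /etc/rc.config):

 # rcmgr set NFS_CONFIGURED 0
 # rcmgr get NFS_CONFIGURED
 0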

20.4.2.3 The exports.aliases File

Note that NFS clients must reference the default cluster alias (or an alias that has been established in the /etc/exports.aliases file – new in V5.1A) in order for the cluster to transparently provide NFS server failover capabilities. The following output shows an attempt to do an NFS mount using a member name rather than the default cluster alias. The successful mount uses the cluster alias to identify the server.

 # showmount -e babylon5
 Exports list on babylon5:
 /etc                 Everyone
 /usr/den             Everyone

 # showmount -e molari
 Can't do Exports rpc: RPC: Program unavailable

 # mount molari:/etc /mnt
 Can't access molari:/etc: Connection refused

 # mount babylon5:/etc /mnt

 # df -t nfs
 Filesystem            512-blocks     Used    Available  Capacity  Mounted on
 babylon5:/etc            1002864   185468       805728       19%  /mnt

Essentially, if the member currently serving out access to the exported file systems fails, another cluster member should pick up the flag and keep on serving.

The NFS client and server daemons can run simultaneously on multiple cluster members. Currently, there is no way to restrict a particular mount point to being exported through a particular alias. However, by using the /etc/exports.aliases file, an alias (joined only by particular cluster members) can be used in a client's mount command, so that only selected members of the cluster (the members of that alias) perform NFS serving for the requesting client.
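As a sketch of how this might be set up (the alias name nfsalias is hypothetical, and the persistent alias configuration details are omitted; see cluamgr(8) and exports.aliases(4) for the exact procedure), you would join the alias only on the members that should serve NFS, add the alias name to /etc/exports.aliases, and have clients mount through that alias:

 # cluamgr -a alias=nfsalias,join         # run only on the members that should serve NFS
 # echo nfsalias >> /etc/exports.aliases  # make the alias usable for NFS exports
 client# mount nfsalias:/usr/den /mnt     # client mounts through the restricted alias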

The following is an important excerpt from the /etc/exports.aliases file:

 # *** You must be very careful to ensure that for each file system
 #     being exported to NFS clients, the CFS server of the file system
 #     is a member of the cluster alias being used by the clients.
 #     Otherwise performance will be severely degraded for NFS over UDP
 #     mounts. This is because an attempt is always made to tunnel NFS
 #     over UDP packets to the CFS server for the file system. If the
 #     server is not a member of the cluster alias being used, then
 #     each packet is randomly assigned to a node that is a member of
 #     the alias by the cluster alias round robin algorithm. Having IO
 #     requests for the same file handled by different CFS clients will
 #     severely degrade performance.

For more information, see the exports.aliases(4) reference page.

NFS can be configured using nfsconfig or sysman. Using sysman without specifying a focus indicates that the configuration should take place cluster-wide; therefore, any configuration information will be placed in /etc/rc.config.common.
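For reference, a small sketch of inspecting and adjusting those cluster-wide values directly (rcmgr's -c flag operates on rc.config.common; the NUM_NFSIOD value shown is an arbitrary example, not a recommendation):

 # rcmgr -c get NFSSERVING
 1
 # rcmgr -c set NUM_NFSIOD 14     # example value only; tune for your workload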

20.4.3 NFS Locking

Your applications may need file locking while working with NFS-mounted file systems. If so, the client copies of the rpc.lockd(8) and rpc.statd(8) daemons must be running on the client members. The server versions of these lock daemons run on one member of the cluster at a time. In your cluster, these daemons run as a highly available application resource (cluster_lockd) managed by CAA.

 # caa_stat -t cluster_lockd
 Name            Type          Target         State      Host
 -----------------------------------------------------------------
 cluster_lockd   application   ONLINE         ONLINE     molari
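If you ever need to move the lock daemons manually (for example, before shutting down the member currently hosting them), a minimal sketch using caa_relocate (the target member name here is ours):

 # caa_relocate cluster_lockd -c sheridan
 # caa_stat -t cluster_lockd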

For more information on CAA, see Chapter 23 and Chapter 24. For more information on NFS locking, see the rpc.lockd(8) and rpc.statd(8) reference pages as well as the Tru64 UNIX Network Administration Guide (V5.0A and V5.1) and the Tru64 UNIX Network Administration: Services Guide (V5.1A and newer).



