In the first release of the TruCluster Server product (V5.0A[1]), when you issued a mount command on any member in the cluster, all members would automatically have access to the file system at the same mount point. While this is generally considered a good thing, there might be reasons why you would not want every member in the cluster to have access to a file system. For example, you may have an application that should only run on one member at a time because it lacks built-in synchronization. If the application were on a file system that every member has access to, then it could be started on more than one member at a time.
While you could set up a Cluster Application Availability (CAA) application resource to automatically start/stop/relocate the application in a cluster, you could not prevent someone from starting the application short of setting the application's permissions (which might actually create other issues). For information on CAA, see chapters 23 and 24.
The TruCluster engineers recognized the need for a mechanism to restrict a file system's scope to a member and added "Partitioned File System" support to the TruCluster Server product in V5.1. As more and more applications become cluster-aware, the usefulness of the Partitioned File System will likely fade to black – but for now, it is very useful, particularly for those customers migrating from a TruCluster Available Server Environment (ASE). For more information on migrating from ASE to TruCluster Server, see chapter 26.
Mounting a file system on only one member is pretty easy. All you need to do is add the "-o server_only" option to the mount(8) command. For example, the /kits mount point in our cluster is currently mounted cluster-wide. We can illustrate this by issuing a df(1) or mount(8) command on each member in the cluster.
[sheridan] # df /kits
Filesystem      512-blocks      Used   Available  Capacity  Mounted on
extra#kits        13054928   4875562     8082544       38%  /kits

[molari] # df /kits
Filesystem      512-blocks      Used   Available  Capacity  Mounted on
extra#kits        13054928   4875562     8082544       38%  /kits
So let's unmount the file system and remount it using the "-o server_only" option. Notice that we only need to issue the umount(8) command on one member to unmount the file system cluster-wide.
[sheridan] # umount /kits
[sheridan] # mount -o server_only extra#kits /kits
If we now reissue the df command on each member we should see /kits mounted only on sheridan. Let's check it out:
[sheridan] # df /kits
Filesystem      512-blocks      Used   Available  Capacity  Mounted on
extra#kits        13054928   4875562     8082544       38%  /kits

[sheridan] # rsh molari-ics0 df /kits
/kits: Permission denied
In the last example, we decided to rsh(1) to the other cluster member so that we would not need to log in. We're just being lazy…efficient. Let's take this one step further and see what happens when we try to interact with the /kits directory on each member.
[sheridan] # cd /kits ; ls
.tags                  local          sbin
V5.1A                  perl           usr
acrobat_v405.tar.gz    quota.group    var
cluster                quota.user

[molari] # ls /kits
ls: /kits: No permission

[molari] # cd /kits
ksh: /kits: permission denied
Maybe you're thinking, "If it is mounted as a partitioned file system, how can I verify that fact, short of getting an error message from the cd, df, and ls commands?" You can use the cfsmgr command, although it will only show that the file system is mounted and which member serves it, not that it is mounted as a partitioned file system. Conversely, the mount command will show you that the file system is mounted as a partitioned file system but does not indicate which member is the CFS server.
[molari] # cfsmgr /kits
Domain or filesystem name = /kits
Server Name = sheridan
Server Status : OK
You can also see what another member in the cluster sees by using the "-h" switch.
[molari] # cfsmgr -h sheridan /kits
Domain or filesystem name = /kits
Server Name = sheridan
Server Status : OK
You can also use the mount command and then grep(1) for "server_only".
[molari] # mount | grep server_only
extra#kits on /kits type advfs (rw, server_only)
Instead of using the cfsmgr and mount commands, we recommend you use the cfs command and look for an "@", which indicates that the file system is mounted as a partitioned file system.
# cfs
CFS Server    Mount Point                 File System          FS Type
----------    -------------------------   -----------------    -------
molari        /cluster/members/member1/   root1_domain#root    AdvFS
              boot_partition
sheridan      /cdrom                      /dev/disk/cdrom1c    CDFS
sheridan      /mnt                        /dev/disk/dsk5a      UFS
sheridan      /                           cluster_root#root    AdvFS
sheridan      /usr                        cluster_usr#usr      AdvFS
sheridan      /var                        cluster_var#var      AdvFS
sheridan    @ /kits                       extra#kits           AdvFS
sheridan      /u1                         home#u1              AdvFS
sheridan      /cluster/members/member2/   root2_domain#root    AdvFS
              boot_partition
sheridan      /fafrak                     tcrhb#fafrak         AdvFS
sheridan      /lola                       tcrhb#lola           AdvFS
In the cfs command's output above, the /kits file system is mounted as a partitioned file system.
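Because the "@" marker is just text in the cfs command's output, a quick pipeline can pull out the partitioned file systems and their servers. The following sketch runs against a saved copy of the output shown above; the awk field positions assume that layout, so adjust them if your cfs output is formatted differently.

```shell
# Extract the CFS server and mount point of each partitioned file system
# from cfs(8)-style output. The sample is condensed from the output above.
cfs_output='molari     /cluster/members/member1/boot_partition  root1_domain#root  AdvFS
sheridan   /cdrom                                   /dev/disk/cdrom1c  CDFS
sheridan @ /kits                                    extra#kits         AdvFS
sheridan   /u1                                      home#u1            AdvFS'

# When the second whitespace-separated field is "@", the line describes a
# partitioned file system: field 1 is the server, field 3 the mount point.
echo "$cfs_output" | awk '$2 == "@" { print $1, $3 }'
```

On a live cluster you would replace the sample variable with `cfs | awk '$2 == "@" { print $1, $3 }'`.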
As with most things in life, certain rules and restrictions apply. Table 13-2 shows the particular rules and restrictions that pertain to the various TruCluster Server versions.
Table 13-2: File System Partitioning Rules and Restrictions

| | V5.1A | V5.1 | V5.0A |
|---|---|---|---|
| File system supported | AdvFS, MFS, UFS | AdvFS | Unsupported |
| Automatic failover via CFS? | No | No | |
| Automatic failover via CAA? | Yes[1] | Yes[1] | |
| Manual relocation (i.e., cfsmgr -a server=member)? | No | No | |
| NFS export? | Yes[2] | No | |
| Mixing cluster-wide and partitioned filesets in the same domain? | No[3] | No[3] | |
| Mount a file system under a partitioned file system? | No | No | |
| Mount updates (i.e., mount -u -o server_only)? | No | No | |
[1]Although automatic failover is not supported by the CFS, you can configure a CAA resource to unmount and mount a file system when the resource is relocated. See chapter 24 for an example.
[2]In V5.1A, the restriction that NFS clients must use the default cluster alias has been lifted, so you can create a new alias, place the alias in the /etc/exports.aliases file, and restrict it to the member with the mounted partitioned file system via a startup script or a CAA resource. See chapter 24 for an example. See chapter 20 for more information on the /etc/exports.aliases file.
[3]The "-o server_only" option to the mount command applies to all filesets in a domain. In other words, if you have a fileset in a domain already mounted cluster-wide, then you cannot mount another fileset in the same domain "-o server_only" – you will receive an error. Conversely, if you have a fileset in a domain mounted "-o server_only", any additional filesets in the domain mounted subsequently will also be mounted "-o server_only".
Let's see what happens when you attempt to circumvent the rules.
We have placed an ISO-9660 formatted CD into /dev/disk/cdrom1c. According to the rules, we cannot mount this file system "-o server_only", so what happens if we try it anyway? The CD-ROM drive is located on sheridan's local bus, so we'll attempt to mount the drive "-o server_only" from sheridan. Note that even though the drive is local to sheridan, we could just as easily mount it from molari. In fact, thanks to the Device Request Dispatcher (DRD), a device located anywhere in the cluster is accessible by any member in the cluster (we will discuss the DRD in chapter 15).
[sheridan] # mount -o server_only /dev/disk/cdrom1c /mnt
[sheridan] # df /mnt
Filesystem          512-blocks      Used  Available  Capacity  Mounted on
/dev/disk/cdrom1c       263948    263948          0      100%  /mnt

Well, we did not get an error mounting it, and as expected the df command succeeds on the member that mounted the device. If the file system were truly mounted "-o server_only", however, the other member should return a "/mnt: Permission denied" error – but it doesn't, as the following example illustrates:

[molari] # df /mnt
Filesystem          512-blocks      Used  Available  Capacity  Mounted on
/dev/disk/cdrom1c       263948    263948          0      100%  /mnt
The moral of this story is that mounting a file system "-o server_only" when the file system is not allowed to be mounted as a partitioned file system yields a cluster-wide mounted file system.
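Since an unsupported file system silently falls back to a cluster-wide mount, it is worth checking the mount flags after the fact rather than trusting the mount command's silence. Here is a minimal sketch; the helper name and the sample data are ours, and the field positions assume the mount(8) output format shown in the examples above.

```shell
# is_server_only: succeed if the given mount point shows the server_only
# flag in mount(8)-style output read from stdin.
is_server_only() {
    awk -v mp="$1" '$3 == mp && /server_only/ { ok = 1 } END { exit !ok }'
}

# Sample mount output copied from the examples above.
mnt_output='extra#kits on /kits type advfs (rw, server_only)
/dev/disk/cdrom1c on /mnt type cdfs (ro)'

echo "$mnt_output" | is_server_only /kits && echo "/kits is partitioned"
echo "$mnt_output" | is_server_only /mnt  || echo "/mnt mounted cluster-wide"
```

On a live cluster member you would pipe the real command output instead: `mount | is_server_only /kits`.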
The CFS will not relocate a partitioned file system if the member that mounted it fails. Since only that member was using it, the other cluster members are not going to miss the file system anyway, but your users might. If the file system must be highly available and partitioned, then you should configure the file system as part of a CAA application resource. If the member running the resource fails, then the CAA subsystem will automatically choose another member to run the application (and mount the partitioned file system). We will discuss application resources and show you how to incorporate a partitioned file system within an application resource in chapters 23 and 24 – stay tuned.
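Until we get there, the following is a rough sketch of the start/stop logic such a CAA action script might contain. The domain, fileset, and mount point names are hypothetical, and we stub out mount and umount with shell functions so the sketch can be exercised outside a cluster; a real action script would invoke the actual commands and handle failures.

```shell
# Stubs so the sketch can run outside a cluster; a real action script
# would call the real mount(8)/umount(8) commands instead.
mount()  { echo "mount $*"; }
umount() { echo "umount $*"; }

# Hypothetical action routine: mount the partitioned file system when the
# application resource starts, and unmount it when the resource stops.
caa_action() {
    case "$1" in
    start) mount -o server_only extra#kits /kits ;;  # hypothetical names
    stop)  umount /kits ;;
    *)     echo "usage: caa_action {start|stop}" >&2; return 2 ;;
    esac
}

caa_action start
caa_action stop
```

Because the file system is mounted by whichever member CAA chooses, it follows the application around the cluster even though CFS itself never relocates it.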
According to the rules, you cannot mix cluster-wide and partitioned file system mounts in the same domain. Let's explore how the operating system handles subsequent mounts given that the first fileset is mounted either cluster-wide or "-o server_only". We will be using a multi-fileset domain that we created for this demonstration, tcrhb. Let's see which filesets are in this domain.
# showfsets -b tcrhb
lola
fafrak
Let's mount one fileset cluster-wide.
# mount tcrhb#fafrak /fafrak
Now mount the second fileset partitioned.
# mount -o server_only tcrhb#lola /lola
Cannot mount fileset server_only when existing domain is not.
tcrhb#lola on /lola: Function not implemented
The second fileset is prevented from being mounted at all. Unmount the file system so that we can mount it "-o server_only" next.
# umount /fafrak
This time we will mount the first fileset partitioned.
# mount -o server_only tcrhb#fafrak /fafrak
The subsequent mounts will not fail but will also be partitioned. For example:
# mount tcrhb#lola /lola
WARNING: Domain is already specified server_only so this mount will be.
# mount | grep server_only
tcrhb#fafrak on /fafrak type advfs (rw, server_only)
tcrhb#lola on /lola type advfs (rw, server_only)
Alternatively, you could use the output of the cfs command piped into the grep command – the advantage being that you can also see which member is the server of the partitioned file system.
# cfs | grep @
molari    @ /fafrak    tcrhb#fafrak    AdvFS
molari    @ /lola      tcrhb#lola      AdvFS
What happens when you attempt to mount a file system underneath a partitioned file system's mount point? You get an error – really, no kidding. Don't believe us? Okay, how about if we prove it?
[sheridan] # mount -o server_only tcrhb#fafrak /fafrak
[sheridan] # mount extra#kits /fafrak/kits
extra#kits on /fafrak/kits: Permission denied
You cannot mount update a file system to make it "-o server_only". For example:
[sheridan] # umount /fafrak
[sheridan] # mount tcrhb#fafrak /fafrak
[sheridan] # mount -u -o server_only /fafrak
Cannot update existing mount to be server-only.
If you want to change a mounted file system from a cluster-wide mount to a partitioned mount, then you must unmount the file system and mount the file system "-o server_only".
You can, however, mount update a file system from read-only mode to read-write mode. Let's remount the tcrhb#fafrak file system "-o server_only,ro", so that we can attempt to update the mount to be read-write.
[sheridan] # umount /fafrak
[sheridan] # mount -o server_only,ro tcrhb#fafrak /fafrak
[sheridan] # mount | grep fafrak
tcrhb#fafrak on /fafrak type advfs (ro, server_only)

[sheridan] # mount -u /fafrak
WARNING: Domain is already specified server_only so this mount will be.

[sheridan] # mount | grep fafrak
tcrhb#fafrak on /fafrak type advfs (rw, server_only)
[1]Technically the first release was V5.0, but it was a very limited release.