10.4 Sharing Filesystems


In this final section of the chapter, we consider sharing local filesystems with other systems, including Windows systems. We cover the most common Unix filesystem sharing facility, NFS, as well as the Samba facility, which makes Unix filesystems available to Windows systems.

More information about NFS is available in NFS and NIS by Hal Stern, Mike Eisler and Ricardo Labiaga (O'Reilly & Associates). More information about Samba is available in the books Teach Yourself Samba in 24 Hours by Gerald Carter with Richard Sharpe (SAMS) and Using Samba by Robert Eckstein, David Collier-Brown, and Peter Kelly (O'Reilly & Associates).

10.4.1 NFS

The Network File System (NFS) enables filesystems physically residing on one computer system to be used by other computers in the network, appearing to users on the remote host as just another local disk.[35] NFS is universally available on Unix systems.

[35] However, NFS assumes that users will have accounts with the same UID on both systems.

The following configuration files are used by NFS:

/etc/fstab (/etc/vfstab under Solaris)

Remote filesystems are entered into the filesystem configuration file, using a syntax that varies only slightly from that of regular entries.

/etc/exports

This file controls which filesystems on the local system can be mounted by remote hosts and under what conditions and restrictions. On Solaris systems, this file is not used, but the file /etc/dfs/dfstab performs an analogous function.

Table 10-10 lists the daemons used by NFS and the files that start them in the various Unix versions.

Table 10-10. NFS daemons

Item                     AIX           FreeBSD           HP-UX                Linux              Solaris             Tru64
Main NFS daemon          nfsd          nfsd              nfsd                 rpc.nfsd           nfsd                nfsd
Handles mount requests   mountd        mountd            mountd               rpc.mountd         mountd              mountd
Block/asynch. I/O        biod          nfsiod            biod                                                        nfsiod
File locking             rpc.lockd     rpc.lockd         rpc.lockd            rpc.lockd          lockd               rpc.lockd
Network status monitor   rpc.statd     rpc.statd         rpc.statd            rpc.statd          statd               rpc.statd
RPC port mapper          portmap       portmap           portmap              portmap            rpcbind             portmap
Boot script(s)[36]       /etc/rc.nfs   /etc/rc.network   /sbin/init.d/nfs.*   /etc/init.d/nfs*   /etc/init.d/nfs.*   /etc/init.d/nfs*

[36] The portmap daemon is started by a different file, as part of general TCP/IP initialization.

A few remarks about some of these daemons are in order:

  • The nfsd daemon handles filesystem exporting and file access requests from remote systems. An NFS server (any system that makes its filesystems available to other computers) runs multiple instances of this daemon.

  • The biod daemon performs NFS (block) I/O operations for client processes. Multiple instances of this daemon typically run on NFS clients.

  • The mountd daemon handles mount requests from remote systems.

  • The rpc.lockd daemon manages file locking on both server and client systems.

  • The rpc.statd daemon handles lock, crash, and recovery services (client and server).

  • The portmap daemon facilitates initial connection between local and remote servers (not strictly an NFS daemon but required for the NFS server facility to function).

As Table 10-10 indicates, the names of these daemons vary on some systems.

10.4.1.1 Mounting remote directories

As we've noted, remote filesystems may be entered into the filesystem configuration file in order to allow them to be automatically mounted at boot time. The format for an NFS entry is:

host:pathname mount-pt nfs options 0 0

where the first field is a concatenation of the remote hostname and the pathname to the mount point of the desired filesystem on the remote host, joined with a colon. For example, to designate the filesystem mounted at /organic on host duncan, use duncan:/organic. The filesystem type field is set to nfs, and the remaining fields have their usual meanings. Note that the dump frequency and fsck pass fields should be zero.

Here is an example:

# device         mount             type  options  dump fsck
duncan:/organic  /duncan/organic   nfs   bg,intr   0    0

On Solaris systems, the /etc/vfstab entries look like this:

# mount          fsck
# device         dev   mount             type  pass  auto?  options
duncan:/organic  -     /remote/organic   nfs   -     yes    bg,intr

In addition to options for local filesystems, there are many other options available for remote filesystems. The most important are summarized in Table 10-11.

Table 10-11. Important NFS-specific mounting options

Option        Meaning
bg            If the NFS mount of this filesystem fails on the first try, continue
              retrying in the background. This speeds up booting when remote
              filesystems are unavailable.
retry=n       Number of mount retries before giving up (100,000 is the default).
timeo=n       Set the timeout (the length of time to wait for the first try of each
              individual NFS request before giving up) to the specified number of
              tenths of seconds. Each subsequent retry doubles the previous timeout
              value.
retrans=n     Retransmit a request n times before giving up (the default is 3).
soft, hard    Quit or continue trying to connect even after the retrans value is met.
intr          Allow an interrupt to kill a hung process.
rsize=n,      The size of the read or write buffer in bytes. Tuning these sizes can
wsize=n       have a significant impact on NFS performance on some systems.

The soft and hard options are worth special mention. They define the action taken when a remote filesystem becomes unavailable. If a remote filesystem is mounted as hard, NFS will try to complete any pending I/O requests forever, even after the maximum number of retransmissions is reached; if it is mounted soft, an error will occur and NFS will cancel the request.

If a remote filesystem is mounted hard and intr is not specified, the process will block (be hung) until the remote filesystem reappears. For an interactive process especially, this can be quite annoying. If intr is specified, sending an interrupt signal to the process will kill it. This can be done interactively by typing Ctrl-C (although it won't die instantly; you'll still have to wait for the timeout period). For a background process, sending an INT (2) or QUIT (3) signal will usually work (again not necessarily instantaneously):

#  kill -QUIT 34216

Sending a KILL signal (-9) will not kill a hung NFS process.

It would seem that mounting filesystems soft would get around the process-hanging problem. This is fine for filesystems mounted read-only. However, for a read-write filesystem, a pending request could be a write request, and so simply giving up could result in corrupted files on the remote filesystem. Therefore, read-write remote filesystems should always be mounted hard, and the intr option should be specified to allow users to make their own decisions about hung processes.

Here are some additional example /etc/fstab entries for remote filesystems:

duncan:/benzene    /rings    nfs  rw,bg,hard,intr,retrans=5  0  0
portia:/propel     /peptides nfs  ro,soft,bg,nosuid          0  0

The first entry mounts the filesystem located at /benzene on the host duncan under /rings on the local system. It is mounted read-write, hard, with interrupts enabled. The second entry mounts the /propel filesystem on the host portia under /peptides; this filesystem is mounted read-only, and the SetUID status of any of its files is ignored on the local host.

Under AIX, remote filesystems have stanzas in /etc/filesystems like local ones, with some additional keywords:

/rings:                                Local mount point.
    dev        = /benzene              Remote filesystem.
    vfs        = nfs                   Type is NFS.
    nodename   = duncan                Remote host.
    mount      = true                  Mount on boot.
    options    = bg,hard,intr          Mount options.

Once defined in the filesystem configuration file, the short form of the mount command may be used to mount the filesystem. For example, the following command mounts the proper remote filesystem at /rings:

# mount /rings

The mount command may also be used to mount remote filesystems on an ad hoc basis, for example:

# mount -t nfs -o rw,hard,bg,intr duncan:/ether /mnt

This command mounts the /ether filesystem from duncan under /mnt on the local system. Note that the option that specifies the filesystem type varies on some systems. In fact, specifying the filesystem type is usually superfluous, since the host:pathname form of the device argument already identifies the filesystem as a remote (NFS) one.

10.4.1.2 Exporting local filesystems

The /etc/exports file controls the accessibility of local filesystems to network access (except on Solaris systems; see below). Its traditional form consists of a series of lines, each containing a local filesystem mount point followed by one or more hostnames:

/organic    spain canada
/inorganic

This export configuration file allows the hosts spain and canada to mount the /organic filesystem and any remote host to remotely mount the /inorganic filesystem.

The preceding entries illustrate only the simplest filesystem export options. In fact, any filesystem, directory, or file can be exported, not just entire filesystems, and there is finer control over the type of access allowed. Entries in /etc/exports consist of lines of the form:

pathname -option,option...

pathname is the name of the file or directory to which network access will be allowed. If pathname is a directory, all of the files and directories below it within the same local filesystem are also exported, but not any filesystems mounted within it. The second field in the entry consists of options specifying the type of access to be given and to whom.

A filesystem should be exported only once to a given host. Exporting two different directories within the same filesystem to the same host doesn't work in general.

Here are some sample entries from /etc/exports (note that only the first option in the list is preceded by a hyphen):

/organic    -rw=spain,access=brazil:canada,anon=-1
/metal/3    -access=duncan:iago,root=duncan
/inorganic  -ro

This file allows the host spain to mount /organic for reading and writing and the hosts brazil and canada to mount it read-only, and it maps anonymous users (usernames from other hosts that do not exist on the local system) and the root user from any remote system to the UID -1. This UID corresponds to the nobody account, and it tells NFS not to allow such a user access to anything. On some systems, the UID -2 may be used instead to allow anonymous users access only to world-readable files. The -rw option exports the directory read-write to the hosts specified as its argument and read-only to all other allowed hosts; this type of access is referred to as read-mostly.

Note that hosts within a list are separated by colons.

The second entry grants read-write access to /metal/3 to the hosts duncan and iago, and allows root users on duncan to retain that status and its access rights when using this filesystem. The third entry exports /inorganic read-only to any host that wants to use it.

Table 10-12 lists the most useful exports file options.

Table 10-12. Useful exports file options

Option

Meaning

rw=list ro=list

Read-write and read-only access lists. rw is the default.

root=list

List of hosts where root status may be retained for this filesystem.

anon=n

Map remote root access to this UID.

maproot=n

Map remote root access to this UID (FreeBSD).

mapall=n

Map all remote users to this UID (FreeBSD).

root_squash

Map UID 0 and GID 0 values to the anonymous values (under Linux, to those specified in the anonuid and anongid options). This is the default.

anonuid=n anongid=n

UID/GID to which to map incoming root/group 0 access (Linux).

noaccess

Prohibits access to the specified directory and its subdirectories (Linux). This option is used to prevent access to part of a tree that has already been exported.

secure

Require access to be via the normal privileged NFS port (Linux). This is the default. I do not recommend ever using the insecure option.

If you modify /etc/exports, the exportfs command must be run to put the new access restrictions into effect. The following command puts all of the access information in /etc/exports into effect:

# exportfs -a
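On the systems that provide exportfs, running it with no arguments lists the filesystems that are currently exported, which is a quick way to verify that your changes took effect. For the example exports file above, the output would look something like this (the exact format varies between versions):

# exportfs
/organic      -rw=spain,access=brazil:canada,anon=-1
/metal/3      -access=duncan:iago,root=duncan
/inorganic    -ro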

FreeBSD does not provide the exportfs command. You can use this command instead:

# kill -HUP `cat /var/run/mountd.pid`

Tru64 also does not have exportfs. The NFS mountd daemon detects changes to the file automatically.

The showmount command may be used to list exported filesystems (using its -e option) or other hosts that have remotely mounted local filesystems (-a). For example, the following command shows that the hosts spain and brazil have mounted the /organic filesystem:

# showmount -a
brazil:/organic
spain:/organic

This data is stored in the file /etc/rmtab. This file is saved across boots, so the information in it can get quite old. You may want to reset it from time to time by copying /dev/null onto it (the system boot scripts take care of this automatically when NFS is started).
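The -e option lists the exported filesystems themselves along with their allowed clients. For the exports file shown earlier, the output would look something like this (some versions precede the list with a header line naming the host, and the exact format varies):

# showmount -e
/organic      spain,brazil,canada
/metal/3      duncan,iago
/inorganic    (everyone)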

NOTE


If you're having trouble allowing other systems to mount the local filesystems from some particular system, the first thing to check is that the NFS server daemons are running. These daemons are often not started by default. If they are not running, you can start them manually, using the boot script listed in Table 10-10.
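A quick way to make this check (daemon names vary by system, as Table 10-10 shows) is to look for the daemons in the process list and to verify that they have registered with the portmapper:

# ps -ea | grep nfsd
# rpcinfo -p                           Should list the nfs and mountd services.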

10.4.1.2.1 Exporting directories under Linux

The exports file has a slightly different format on Linux systems; options are included in parentheses at the end of the entry:

/organic     spain(rw) brazil(ro) canada(ro)
/metal/3     *.ahania.com(rw,root_squash)
/inorganic   (ro)

Based on this file, /organic is exported read-write to spain and read-only to brazil and canada. /metal/3 is exported read-write to any host in the domain ahania.com, with UID 0 access mapped to the nobody account. /inorganic is exported read-only to the world.

10.4.1.2.2 Exporting filesystems under Solaris

On Solaris systems, filesystem exporting is done via the /etc/dfs/dfstab configuration file, which stores the share commands needed to export filesystems. The following dfstab file is equivalent to the exports file we looked at previously:

share -F nfs -o rw=spain,access=brazil:canada,anon=-1 /organic
share -F nfs -o access=duncan:iago,root=duncan /metal/3
share -F nfs -o ro /inorganic

For example, the first line exports the /organic filesystem: it allows spain to mount it for reading and writing and brazil and canada to mount it read-only. Requests from usernames without accounts on the local system are denied.

These same commands need to be executed manually to put these access restrictions into effect prior to the next reboot (be sure that mountd is running).
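Alternatively, the Solaris shareall command executes all of the share commands listed in /etc/dfs/dfstab, and running share with no arguments displays what is currently being shared:

# shareall                             Execute the commands in /etc/dfs/dfstab.
# share                                List currently shared filesystems.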

10.4.2 The NFS Automounter

Once a network has even a moderate number of systems in it, trying to cross-mount even one or two filesystems from each system can quickly become a nightmare. The NFS automounter facility is designed to handle such situations by providing a means by which remote directories are mounted only when they are needed: when a user or process uses or refers to a file or subdirectory located within the remote directory. Directories that have not been used in a while are also unmounted automatically.

Using the automounter has the potential for simplifying remote directory management. The filesystem configuration file is made more straightforward because it lists only local filesystems and perhaps one or two statically mounted remote filesystems or directories. Booting is faster because NFS mounts are done later. Systems can also be shut down unexpectedly with fewer ill effects and hung processes.

The automounter works by detecting attempted access to any part of the remote directories under its control. When such an event occurs, the automounter generally mounts the remote filesystem into a directory known as its staging area (usually /tmp_mnt) and creates a (pseudo) symbolic link to the mount location expected by the user. For example, if a user attempts to copy the file /data/organic/strained/propell.com, and /organic is a directory on host spain, the automounter will mount that remote directory under /tmp_mnt and create a link to the local mount point, /data/organic. To the user, the file will look like it really is located in /data/organic/strained; however, if he changes to the directory /data/organic and issues a pwd command, the real mount point will be visible (confusion is also likely if he uses a command like cd .. after moving to an automounted directory, at least until he gets used to how the automounter works).

The automounter uses configuration files known as maps, which are of two types:

  • Direct maps hold entries for remote directories to be mounted on demand by the automounter. These entries are really just abbreviated versions of traditional NFS /etc/fstab entries.

  • Indirect maps are used for local directories whose subdirectories are each NFS-mounted, most likely from different remote hosts. For example, user home directories are usually managed with an indirect map. They are all automounted at a standard location within the filesystem on every system within a network, even though every one of them may be physically located on a separate system.

Indirect maps are used far more frequently than direct ones.

Direct maps are conventionally stored in /etc/map.direct. Here is a sample entry from a direct map:

/metal/3    -intr    dalton:/metal/3

This entry places the directory /metal/3 on host dalton under automounter control. The directory will be mounted when needed at /metal/3 on the local system; directories controlled by direct maps do not use the automounter staging area. The second field in the entry holds options for the mount command.

Indirect maps are generally named for the local directory whose (potential) contents they specify. Here is a short version of the indirect map /etc/auto.homes, which is used to configure the local directory /homes; its entries specify the remote locations of the various subdirectories of /homes:

chavez   -rw,intr   dalton:/home/chavez
harvey   -rw,intr   iago:/home/harvey
wang     -rw,intr   portia:/u/wang
stein    -rw,intr   hamlet:/home/stein4

The format is very similar to that for direct maps. In this case, the first field is the name of the subdirectory of /homes from which the remote directory will be accessed locally. Note that we have set up automounting at /homes, not in the usual location of /home, because it is illegal to mix local and automounted subdirectories within the same local directory.

Once the automounter is configured in this way on every system, user home directories will be the same regardless of which system the user happens to be using. No matter where he is, his home directory will always have the same files within it.

The automounting facility uses the automount daemon, which may be started with a command like this one:

# automount -tl 600 /homes /etc/auto.homes /- /etc/auto.direct

The -tl option specifies how long a directory must be idle before it is automatically unmounted (in seconds; five minutes is the default). The next two arguments illustrate the method for specifying a local directory for automounter control and its corresponding indirect map. The final two arguments illustrate how a direct map is specified; the local directory is always given as /- for a direct map. A command like the previous one needs to be added to (or uncommented in) the system initialization scripts for the automounter to be started at boot time.

If you want to stop the automounter process for some reason, use the kill command without any signal option; this will send the process a TERM signal and allow it to terminate gracefully and clean up after itself. For example:

# kill `ps -ea | grep automoun | awk '{print $1}'`

If you kill it with -9, hung processes and undeletable phantom files are the almost certain result.

10.4.3 Samba

The free Samba facility allows Unix filesystems to be shared with Windows systems. Samba does so by supporting the Server Message Block (SMB) protocol,[37] the native resource sharing protocol for Microsoft networks. It is available for all of the Unix versions we are considering.

[37] Also known as the Common Internet File System, CIFS (this week...).

With Samba, you can make Unix filesystems look like shared Windows filesystems, allowing them to be accessed using the normal Windows facilities and commands such as net use. Linux systems can also mount Windows filesystems within the Unix filesystem using a related facility.
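For example, a Samba share named chemdir on the Unix host india (both names are used in the examples later in this section) might be mapped to a drive letter on a Windows client with a command like this one:

C:\> net use G: \\india\chemdir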

Installing Samba is quite simple. The books I mentioned earlier have excellent discussions of the procedure. Once you have built Samba, the next step is to create the Samba configuration file, smb.conf, usually stored in the lib subdirectory of the main Samba directory or in /etc/samba.

Here is a simple version of this file:

[global]                               Global settings applying to all exports.
hosts allow = vala, pele
hosts deny = lilith
valid users = dagmar, @chem, @phys, @bio, @geo
invalid users = root, admin, administrator
max log size = 2000                    Log size in KB.

[chemdir]                              Define a directory (share) for export.
path = /chem/data/new                  Local (Unix) path to be shared.
comment = New Data                     Description of the filesystem.
read only = no                         Filesystem is not read-only.
case sensitive = yes                   Filenames are case sensitive.
force group = chemists                 Map all user access to this Unix group.
read list = dagmar, @chem, @phys       Users/groups allowed read access.
write list = @chem                     Users/groups allowed write access.

The first section of the configuration file, introduced by the [global] line, specifies global Samba settings that apply to all filesystems exported via the facility. Its first two lines specify remote systems that are allowed to access Samba filesystems and those that are forbidden from doing so, respectively. The next two lines similarly specify Unix users and groups that are allowed and denied access (note that group names are prefixed by an at sign: @chem). The final line of this first section specifies the maximum size of the Samba log file in KB.

The second section of the sample Samba configuration file defines a filesystem for exporting (i.e., a share). In this case, it consists of the local path /chem/data/new, and it will be accessed by remote systems using the share name chemdir (defined in the section's header line). This exported filesystem is exported read-write and uses case-sensitive filenames. All incoming access to the filesystem will take place as if the user were a member of the local Unix chemists group. Windows user dagmar and groups chem and phys are allowed read access to the filesystem, and members of Windows group chem are also given write access. Whether an individual file may be read or written will still be determined by its Unix file permissions.

User home directories are exported in a slightly different way via configuration file entries like these:

[homes]                                Create the special homes share.
comment = Home directories
writeable = yes
valid users = %S                       %S expands to the share name (here = username).

These entries create a share for each local Unix user home directory (as defined in the password file). These shares are actually created on the fly as they are accessed. For example, if user chavez attempts to access the share \\india\homes (where india is the Unix system), the share \\india\chavez will be created and presented to her. Only she will be able to access this share, due to the valid users line in the homes share definition; all other users will be denied access. User chavez can access the share as either \\india\homes or \\india\chavez.

You can use the testparm command to verify the syntax of a Samba configuration file before you install it. See the Samba documentation for full details on configuration file entries.
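For example (adjust the path if your smb.conf resides elsewhere):

# testparm /etc/samba/smb.conf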

Another useful Samba feature is the username mapping file, specified via a configuration file entry like the following:

username map = /etc/samba/smbusers

Entries within the file look like this:

# Unix = Windows
chavez = rachel
root = Administrator admin             Multiple names are allowed.
quigley = "Filbert Quigley"            Quote names with spaces.

Map files can have some unexpected effects. For one thing, when a password is required by the Unix system before access is granted, it is the password for the mapped Unix account that will be needed. This can be confusing if the mapping sends a user to an account that is different from the one he usually uses. Second, home share names will again reflect the mapped Unix username.

The smbstatus command may be used to display current remote users of local filesystems on the Unix system:

$ smbstatus
Samba version 1.9.16
Service  uid     gid      pid   machine
----------------------------------------------
chemdir  nobody  chemists 14810 vala (192.168.13.34) Jul 14 11:51:07
No locked files

10.4.3.1 Samba authentication

In general, Samba prompts the user for a password when one is required. By default, these passwords are sent across the network in unencrypted form (i.e., as clear text). This is an insecure practice that most sites will find unacceptable. Samba can be modified to use only encrypted passwords as follows:

  • Add the following entries to the global section of the Samba configuration file:

    encrypt passwords = yes
    security = user
  • Use the mksmbpasswd.sh script included with the Samba package source code to create the initial Samba password file. For example:

    # cat /etc/passwd | mksmbpasswd.sh > /etc/samba/private/smbpasswd
  • The smbpasswd file should be owned by root and have mode 600. The subdirectory in which it resides should have mode 500.

Once encrypted passwords are enabled, users must use the smbpasswd command in order to set their Samba passwords.
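For example, user chavez could set her own Samba password like this (the superuser can also set or reset any user's password by giving the username as an argument to smbpasswd):

$ smbpasswd
Old SMB password:                      Not echoed.
New SMB password:
Retype new SMB password: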

You can use a single Unix server to authenticate all Samba passwords by using these configuration file entries:

security = server
password server = host
encrypt passwords = yes

You can authenticate Samba using a Windows domain controller with these configuration file entries:

security = domain
workgroup = domain
password server = domain-controllers
encrypt passwords = yes

See the Samba documentation and the previously cited books for more details about this topic (including how to use a Samba server as a Windows domain controller).

10.4.3.1.1 Mounting Windows filesystems under Linux and FreeBSD

The Samba package includes the smbclient utility for accessing remote SMB-based shares from the Unix system. It uses an interface similar to that of the FTP facility.
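For example, a command like the following connects to the depot share on vala as user chavez; after prompting for a password, it presents an ftp-style prompt from which files can be retrieved and stored with get and put:

$ smbclient //vala/depot -U chavez
Password:                              Not echoed.
smb: \> get report.txt
smb: \> quit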

A much better approach is provided on Linux systems via the built-in smbfs filesystem type. For example, the following command mounts the depot share on vala as the local directory /win_stuff:

# mount -t smbfs -o username=user,password=xxx //vala/depot /win_stuff

This command makes the connection as the specified user account on the Windows system using the specified password. If the password option is omitted, you will be prompted for the proper password. If you do include a password in the /etc/fstab file, be sure to protect the file from ordinary users. In general, you should not use the Administrator password. Create an unprivileged user account to use for the mount process instead.
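If the credentials must be stored somewhere, recent versions of the Linux smbfs support also accept a credentials mount option pointing at a root-only file, which keeps the password out of /etc/fstab itself. Here is a sketch, assuming that option is available in your version (the filename is arbitrary):

# cat /etc/samba/cred.vala             Mode 600, owned by root.
username=chavez
password=xxxxxxxx
# mount -t smbfs -o credentials=/etc/samba/cred.vala //vala/depot /win_stuff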

A similar facility is available under FreeBSD Version 4.5 and later. For example:

# mount_smbfs -I vala //chavez@vala/depot /mnt
Password:                              Not echoed.

Passwords can be stored in a file named $HOME/.nsmbrc. In this case, add the -N option to the command to suppress the password prompt. Here is a sample file:

[VALA:CHAVEZ:DEPOT]                    server:user:share
password=xxxxxxxx

Yes, the first line really does have to be in uppercase (ugh!).

You can also enter such filesystems into /etc/fstab on either system, using entries like these:

# remote share       mount point  type   options
//chavez@vala/depot  /depot/vala  smbfs  noauto 0 0                              FreeBSD
//vala/depot         /depot/vala  smbfs  noauto,username=chavez,password=x 0 0   Linux

Under FreeBSD, you'll need to specify the password in the .nsmbrc file if you want the remote share to be mounted automatically.


