Managing the Network File System (NFS)

The Network File System (NFS) is a distributed file system that enables you to share files and directories on one computer and to access those resources from any other computer on the network. Users accessing the resources on an NFS server might or might not know that they're accessing files across the network. The actual location is made irrelevant, because resources accessed through NFS appear nearly identical to local files and directories.

One of the best features of NFS is that it enables Solaris to interface with a variety of network operating systems. Resources shared by NFS can be accessed by a Linux-based or Windows-based client, with relatively few configuration difficulties.

Files must be shared to be accessed across the network. You can share data manually with the share and shareall commands, or by adding an entry to the /etc/dfs/dfstab (distributed file system table) file. For a server that will be sharing numerous resources, using the dfstab file is the recommended option.

Shared NFS resources are known as file systems. Because NFS is supported across computer platforms, and because the term file system differs across platforms, an NFS file system simply refers to the portion of data being shared, even though this "file system" might be a single directory or a single file.

Here are some of the benefits of using NFS:

  • Multiple computers can use the same files, meaning that everyone on the network has access to the same data.

  • Data will be consistently reliable, because each user has access to the same data.

  • Computers can share applications as well as data, reducing the amount of storage needed on each computer.

  • The mounting and accessing of remote file systems is transparent to users.

  • Multiple network operating systems are supported as NFS clients.

A computer can become an NFS server simply by sharing files or directories on the network. Computers that access the NFS server are NFS clients. A computer can be both an NFS server and an NFS client to another machine. When a client machine mounts an NFS file system, the files on the NFS server are not actually copied to the client. Instead, NFS enables the client system to access files on the server's hard disk through a series of Remote Procedure Calls (RPCs).

The History and Evolution of NFS

NFS was originally developed by Sun Microsystems and has since been implemented on many other popular network operating systems. The implementations of NFS vary across operating systems, but you can get a sense of the evolution of NFS by looking at the history and features of Sun's version.

The first version of NFS in wide use was NFS 2. Although NFS 2 continues to be popular, it lacks many features of current NFS implementations. Solaris versions older than Solaris 2.5 support NFS 2 only. Among its limitations, NFS 2 doesn't support 64-bit file sizes, and it's restricted to 8KB transfer sizes.

With the release of Solaris 2.5, NFS 3 was introduced. As you might expect, a lot of enhancements were made to NFS 3. However, to enjoy the full functionality of NFS 3, both the client and server must support the version. Here are some of the more notable features of NFS 3 as released with Solaris 2.5:

  • The NFS server can batch requests, improving the server's response time.

  • NFS operations return file attributes, which are stored in local cache. Because the attributes don't need a separate operation to be updated, the number of RPCs being sent to the server is reduced, improving efficiency.

  • The default protocol for NFS 3 is the reliable Transmission Control Protocol (TCP) instead of the connectionless User Datagram Protocol (UDP).

  • The 8KB transfer size limit was eliminated. The client and server can negotiate a transfer size, with the default size being 32KB. Larger file transfers increase efficiency.

  • Improvements were made in verifying file access permissions. In version 2, if a user did not have permissions to read or write a file, they would get a generic "read error" or "write error." In version 3, users trying to access a file to which they don't have permissions receive an "open error."

  • Support for Access Control Lists (ACLs) was added. This increases the flexibility of security administration.

  • Improvements were made to the network lock manager. The more reliable lock manager reduces hanging from commands that use file locking, such as ksh and mail.

Solaris 2.6 also introduced improvements to NFS, although the NFS version number remained unchanged. Here are some NFS 3 enhancements released with Solaris 2.6:

  • Files larger than 2GB could be transferred.

  • Dynamic failover of read-only file systems was introduced. This increases availability, as multiple replicas of read-only data can be created. If one NFS server is not available, another one can take its place.

  • The authentication protocol for commands such as mount and share was updated from Kerberos v4 to Kerberos v5.

  • WebNFS was introduced, making file systems shared on the Internet available through network firewalls.

Solaris 8 introduced NFS logging. NFS logging enables an administrator to track all file operations that have been performed on the NFS server's shared file systems. With NFS logging, you can see which resources were accessed, when they were accessed, and by whom. The implications for security are tremendous, especially for sites that allow Internet-based or anonymous access.

NFS Files and Daemons

NFS provides a critical network resource-sharing service, and in terms of the number of files and daemons needed to support it, it's a complex service. Thirteen configuration files and six daemons are needed to support full NFS functionality.

The files used for NFS configuration are listed in Table 11.7.

Table 11.7: NFS Files

/etc/default/fs
    The default file system type for local file systems (usually UFS).

/etc/default/nfs
    Configuration information for the nfsd and lockd daemons.

/etc/default/nfslogd
    Configuration information for nfslogd, the NFS logging daemon.

/etc/dfs/dfstab
    A list of local resources to be shared.

/etc/dfs/fstypes
    Default file system types for remote file systems (usually NFS).

/etc/dfs/sharetab
    Local and remote resources that are currently shared. Do not edit this file.

/etc/mnttab
    A list of file systems that are currently mounted. Do not edit this file.

/etc/netconfig
    Transport protocols. Do not edit this file.

/etc/nfs/nfslog.conf
    General configuration information about NFS logging.

/etc/nfs/nfslogtab
    Information for NFS log processing by nfslogd. Do not edit this file.

/etc/nfssec.conf
    NFS security services. Do not edit this file.

/etc/rmtab
    A table of file systems remotely mounted by NFS clients. Do not edit this file.

/etc/vfstab
    File systems that are to be mounted locally.

Tip 

Questions about the functionality of and differences between /etc/default/fs and /etc/dfs/fstypes are commonly found on the exam. The /etc/default/fs file contains one entry, and that's the default local file system type. Of course, local file systems on hard disks usually use UFS. The /etc/dfs/fstypes file contains a list of remote file systems; the first entry is the default, which is usually NFS.
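
For reference, here is what these two files typically contain on a default installation (the exact contents can vary slightly by release):

 # cat /etc/default/fs
 LOCAL=ufs
 # cat /etc/dfs/fstypes
 nfs NFS Utilities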

Some files listed in Table 11.7 include the warning "Do not edit." These files are updated and maintained by Solaris, and require no configuration from the administrator. In fact, editing these files directly could cause NFS to malfunction.

When Solaris is booted into run level 3, the NFS daemons are started. Six daemons are used to support NFS. Two of them, mountd and nfsd, are run exclusively on the NFS server. Two others, lockd and statd, are run on both clients and servers to facilitate NFS file locking. The NFS daemons are listed in Table 11.8.

Table 11.8: NFS Daemons

automountd
    Handles the mounting and unmounting of file systems based on requests from the AutoFS service. AutoFS will be discussed in the "Using AutoFS" section later in this chapter.

lockd
    Manages record locking for NFS files.

mountd
    Handles file system mount requests from remote computers. When a remote user attempts to mount a resource, mountd checks the /etc/dfs/sharetab file to determine which file systems can be mounted and by whom.

nfsd
    After a remote file system is mounted, nfsd handles file system requests, such as file access permissions and opening and copying files. Older versions of Solaris required one instance of nfsd per remote file request. In Solaris 9, only one instance of nfsd is required to run.

nfslogd
    Manages NFS logging.

statd
    Interacts with the lockd daemon to provide crash and recovery functions for file locking services. If an NFS server crashes, upon reboot, statd allows the client computers to reclaim locks they had on NFS-shared resources.

The daemons that function over the network, such as mountd, nfsd, and statd, use the RPC protocol. The logging daemon, nfslogd, keeps records of all RPC operations. If your computer is having problems using the RPC protocol (or its corresponding rpcbind daemon, which helps establish RPC connections), NFS will not be able to work either.
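
If you suspect an RPC problem, the rpcinfo command can confirm that the NFS-related RPC services are registered on a server. Here is a sketch, assuming a server named bedrock; the port numbers shown are illustrative:

 # rpcinfo -p bedrock | egrep 'nfs|mountd'
     100003   2   udp   2049  nfs
     100003   3   tcp   2049  nfs
     100005   1   udp  32781  mountd
     100005   3   tcp  32778  mountd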

Setting Up NFS

Don't let the number of files and daemons required to support NFS scare you away from using it. The number of files to manage can make the service seem overly complex, but the principles behind setting up NFS are straightforward.

The first step is to install and configure an NFS server. This computer or computers will host resources for clients on the network. After a server is configured, resources need to be shared, so that clients can access them. To finish, you need to configure clients to access the shared resources.

Of course, there are many optional items you can configure, which adds to the complexity of setup. For example, you can configure shared resources to be shared automatically upon boot. Although this is optional, it's highly recommended that you do so; otherwise, you'll be manually sharing resources every time you reboot the server.

Another optional feature is NFS logging. As with any other type of logging, NFS logging adds overhead and will slightly slow the response time of the server. It's up to you to decide whether logging is important for your NFS servers. However, logging is strongly recommended if you have high security requirements or are allowing anonymous or Internet-based access.

Sharing Network File Systems

File systems on an NFS server can be shared in one of two ways. The first is to use the share (or shareall) command to manually share resources. The second is to configure the /etc/dfs/dfstab file to automatically share directories every time the server enters init state 3.

As you might imagine, if your NFS server has a large number of resources to share, it's both impractical and cumbersome to use the share command. However, the share command is useful for testing or troubleshooting purposes. For normal NFS use, it's recommended that you make entries in the /etc/dfs/dfstab file to automatically share directories with clients.

Here is a sample /etc/dfs/dfstab file:

 # more /etc/dfs/dfstab
 #   Place share(1M) commands here for automatic execution
 #   on entering init state 3.
 #
 #   Issue the command '/etc/init.d/nfs.server start' to run the NFS
 #   daemon processes and the share commands, after adding the very
 #   first entry to this file.
 #
 #   share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
 #   .e.g,
 #   share  -F nfs  -o rw=engineering  -d "home dirs"  /export/home2
 share -F nfs /export
 share -F nfs -o ro -d "phone lists" /data/phlist
 #

As you can see, the /etc/dfs/dfstab file contains a list of resources to share, as specified by the share command. The file also instructs you, in the commented-out section, to run the /etc/init.d/nfs.server start command after adding the first entry to the file; this starts the NFS server daemon processes. If you do not do this, NFS will not start properly. You need to run this command only after you make the first entry, though, because the next time Solaris enters run level 3, the nfsd and mountd daemons will start automatically. They won't start, however, if the /etc/dfs/dfstab file is empty.
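
Putting this together, a minimal first-time setup on a server might look like the following (the process IDs shown by pgrep are, of course, illustrative):

 # vi /etc/dfs/dfstab            (add your share entries)
 # /etc/init.d/nfs.server start
 # pgrep -l nfsd
 209 nfsd
 # pgrep -l mountd
 207 mountd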

To successfully use the /etc/dfs/dfstab file, you need to understand the share command syntax. Understanding the syntax is also helpful if you are going to use the share command manually. Here is the syntax for share:

 # share -F fstype -o options -d "text" pathname 

If you omit the -F fstype option, the first file system type listed in /etc/dfs/fstypes will be used. Although this is generally NFS, it's a good idea to specify a file system type just to be certain. The options you can use when sharing an NFS file system are explained in Table 11.9. You can add descriptive text with the -d argument; this can be useful for clients searching for a particular shared resource. The pathname is the path of the local resource (directory) to be shared.

Table 11.9: NFS Share Options

aclok
    Enables the NFS server to perform access control for NFS clients running NFS version 2. If set, aclok gives maximum access to all clients, meaning that if anyone has Read permission, everyone has Read permission. If aclok is not set, everyone is given minimal access.

anon=uid
    Sets uid to be the effective UID of unknown users. By default, unknown users are given the User ID of nobody. If this option is set to -1, access by unknown users is denied.

log=tag
    Enables NFS server logging. The optional tag specifies the location of the related log files. If no tag is specified, the default global log file, as defined in /etc/nfs/nfslog.conf, is used.

nosub
    Prohibits clients from mounting subdirectories of shared directories.

nosuid
    Disallows the use of SetUID and SetGID permissions on shared resources.

root=access_list
    Indicates that only root users from hosts specified in the access_list will have root access. By default, no host has root access.

ro
    Indicates that the shared resource will be Read-only to all clients.

ro=access_list
    Those specified in the access_list have Read-only access; all others are denied access.

rw
    Indicates that the shared resource will be Read and Write to all clients. This is the default.

rw=access_list
    Indicates that those specified in the access_list have Read and Write access; all others are denied access.

sec=mode
    Specifies one or more security modes used to authenticate clients. Options are sys for AUTH_SYS (clear-text) authentication, which is the default; dh for Diffie-Hellman public key authentication; krb5 (or krb5i or krb5p) for Kerberos v5 variations; and none for AUTH_NONE authentication, in which users have no identity and are mapped to the anonymous user nobody.

Note 

The share command with no arguments displays the shared resources on your computer.
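
For example, with the two entries from the sample dfstab file shared, the output would look something like this (exact spacing varies):

 # share
 -               /export   rw   ""
 -               /data/phlist   ro   "phone lists"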

If you are using an option that requires an access_list, multiple entries in the access_list are separated by colons. For example, if you want to share a file system named /files1 and you want three clients, named larry, curly, and moe, to have Read-only access, you would use:

 # share -F nfs -o ro=larry:curly:moe /files1 

Alternatively, if you want to give fred and wilma Read and Write access while limiting barney and betty to Read-only, you could use:

 # share -F nfs -o ro=barney:betty,rw=fred:wilma /files1 

You also have the option of using a minus sign (-) to exclude a computer from being able to mount a remote resource. For example, if you want everyone in the finance netgroup except the client1 computer to be able to access /files1, you could use:

 # share -F nfs -o rw=-client1:finance /files1 

If multiple share commands are issued for the same file system, the last instance will invalidate previous commands. The share options specified by the last instance will override any other share options.

Warning 

root permissions should not be enabled on NFS file system shares. Enabling root permissions could open a serious security hole in your network by allowing users to have root access to files on a server.

After all your shares are entered into the /etc/dfs/dfstab file, you can begin automatically sharing the file systems by rebooting or running the shareall command.

To stop the sharing of a shared file system, use the unshare command. For example, to stop the sharing of /files1, you could use:

 # unshare /files1 
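
Similarly, to stop the sharing of all currently shared resources in one step, you can use the unshareall command:

 # unshareall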

Enabling NFS Logging

NFS logging enables you to track who accessed which resources on your NFS server, and when. Although enabling logging slows the NFS server response time, the slowdown is not significant except on the most heavily utilized servers. In any case, the security and tracking benefits of NFS logging outweigh any possible inconveniences.

To enable NFS logging, add the log argument to the appropriate share command in /etc/dfs/dfstab. Each share point can be logged separately, but you must specify each share point you want to log. One of the questions you need to ask is: what do you want to log? If you wanted to log the /files1 file system, you could use the following command in your dfstab file:

 # share -F nfs -o log /files1 

You can configure two files to affect the behavior of NFS logging: /etc/nfs/nfslog.conf and /etc/default/nfslogd. The nfslog.conf file contains information on the location of log files. The locations are referenced by what is called a tag. By default, nfslog.conf contains one tag, named global. If you do not specify and create an alternate tag, the global tag will be used. Here's a sample nfslog.conf file:

 # more /etc/nfs/nfslog.conf
 #
 # NFS server log configuration file.
 #
 # <tag> [ defaultdir=<dir_path> ] \
 #       [ log=<logfile_path> ] [ fhtable=<table_path> ] \
 #       [ buffer=<bufferfile_path> ] [ logformat=basic|extended ]
 #
 global  defaultdir=/var/nfs \
         log=nfslog fhtable=fhtable buffer=nfslog_workbuffer

As you can see, the global tag uses the /var/nfs directory by default, and the log file name is nfslog.

After you have decided what you want to log, you can configure the nfslog.conf file with multiple tags and log file locations if you choose. If you are doing a lot of logging and want to be able to quickly access log file information for a specific share point, you will want to create a separate log for each share. If you do not use extensive logging or want all your logging to be in one location, you can use the default global tag.
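
For example, to give the /files1 share its own log, you could add a tag to nfslog.conf and reference it from the dfstab entry. The tag name files1 and the paths shown here are hypothetical:

 # excerpt from /etc/nfs/nfslog.conf
 files1 defaultdir=/var/nfs/files1 \
        log=files1log fhtable=files1fhtable buffer=files1workbuf

 # corresponding dfstab entry
 share -F nfs -o ro,log=files1 /files1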

The other file you can use to configure NFS logging behavior is /etc/default/nfslogd. Whereas the nfslog.conf file defines the individual logs to use, the nfslogd file contains more general NFS server logging configuration, such as the maximum number of old logs to preserve and default permissions for log files.

The nfslogd daemon needs to be started for NFS logging to work. Restarting the NFS daemons with the nfs.server start command will also start nfslogd, if an nfslog.conf file exists. If the nfslog.conf file does not exist, you must first run the /usr/lib/nfs/nfslogd command to create it. Then, subsequent restarts of NFS will automatically start NFS logging as well.

Accessing NFS Resources from Clients

Sharing NFS resources is a convenient way to make sure that network clients each have access to the same data located on a server. After you have configured your NFS server, you need to configure clients to access the server.

If you are unsure of which resources are shared and available to the client, you can use the dfshares command, which displays the available shared resources on a given computer. For example, if you wanted to see the shares available on the bedrock server, you could use:

 # dfshares bedrock 
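
The output lists each shared resource in server:pathname form and might look something like this (the resources shown are hypothetical):

     RESOURCE                  SERVER    ACCESS    TRANSPORT
     bedrock:/export           bedrock   -         -
     bedrock:/data/phlist      bedrock   -         -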

To access remotely shared resources from your local computer, you need to mount the shared file system locally. There are three ways to do this; the first is manually, with the mount command. You originally learned about the mount command in Chapter 7, and its usage does not change from the explanations in that chapter. For review, Table 11.10 lists the generic mount arguments, and Table 11.11 lists mount-specific options for NFS file systems.

Table 11.10: Generic mount Arguments

-F FSType
    Specifies the file system type to mount.

-m
    Mounts the file system but does not create an entry in /etc/mnttab.

-O
    Overlay mount. This enables you to mount a file system over an existing mount. The overwritten file system will not be accessible while the new file system is mounted.

-p
    Prints a list of mounted file systems in /etc/vfstab format. Must be used alone.

-r
    Mounts the file system as Read-only.

-v
    Prints the list of mounted file systems in verbose format. Must be used alone.

Table 11.11: NFS mount -o Options

bg | fg
    Specifies whether to retry the mount operation in the background or foreground if mounting fails the first time. Foreground is the default.

hard | soft
    Specifies how to proceed if the server does not respond. The soft option returns an error, and hard retries the request until the server responds. The default is hard.

intr | nointr
    Enables or disables keyboard interrupts to kill a process that hangs while waiting for a response from a hard-mounted file system on the NFS server. The default is intr, which enables clients to interrupt hung applications.

port=n
    Indicates the server IP port number. The default is NFS_PORT.

quota | noquota
    Checks whether the user is over the quota limits on the NFS server, if quotas are enabled, or prevents quota checking.

retry=n
    Retries the mount operation if it fails. The variable n determines the number of times to retry.

remount
    Changes features of an already mounted file system. It can be used with any options except ro.

ro | rw
    Indicates Read-only versus Read/Write. The default is Read/Write.

rsize=n
    Sets the read buffer size to n bytes. The default for NFS version 3 is 32,768 bytes.

suid | nosuid
    Enables or disables the use of SetUID. The default is suid.

timeo=n
    Sets the NFS time-out to n tenths of a second. For connectionless transports, the default is 11 tenths of a second; for connection-oriented transports, the default is 600 tenths of a second.

wsize=n
    Sets the write buffer size to n bytes. The default for NFS version 3 is 32,768 bytes.

The NFS-specific options listed in Table 11.11 are invoked with the -o argument.

Here is the syntax for mount:

 # mount -F FSType generic_options -o specific_options device_name mount_point 

For example, say you want to mount a shared file system named /files1 from the pebbles server. The mounted file system should be located at /localfiles. You could use:

 # mount -F nfs pebbles:/files1 /localfiles 

If you have remote resources to mount on a consistent basis, manual mounting is not very efficient. You have two other choices: use the /etc/vfstab file or use the automounter. The vfstab file is covered in Chapter 7, and the automounter is covered in detail later in this chapter, in the "Using AutoFS" section.
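
For example, to have pebbles:/files1 from the previous example mounted automatically at boot, you could add a line such as the following to /etc/vfstab. The seven fields are device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options:

 pebbles:/files1  -  /localfiles  nfs  -  yes  rw,hard,intr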

Here's a quick introduction to the automounter. By default, clients can automatically mount remote resources through the /net mount point. So, to mount the /export/files/data file system located on the pebbles server, you would use the following command:

 $ cd /net/pebbles/export/files/data 

The automounter enables ordinary users to mount file systems, so superuser access is not required. The automounter will also automatically unmount the file system after the user is finished using it.

File systems mounted as Read/Write, and those containing executable files, should always be hard mounted. Soft mounting such file systems can cause unexpected I/O errors. Read/Write file systems should also be mounted with the intr option, so that users can kill processes that appear to be hung.

If a file system is hard mounted and intr is not specified, a process can hang until the remote file system responds. This can cause annoying delays for terminal processes. If you use intr, an interrupt signal can be sent to the hung process to terminate it. For foreground processes, Control+C usually works. For background processes, you can send an INT or QUIT signal, such as:

 # kill -QUIT 11234 

Note 

KILL signals (-9) do not kill hung NFS processes.

To unmount NFS file systems, use the umount command.

Starting and Stopping NFS Services

In some cases, you will need to stop the NFS services. You might have an emergency on the NFS server, or you might want to perform system maintenance. To enable or disable NFS services, you must have superuser privileges.

To stop NFS services, use:

 # /etc/init.d/nfs.server stop 

And to restart NFS services, use:

 # /etc/init.d/nfs.server start 

You can also stop the automounter with the following:

 # /etc/init.d/autofs stop 

Or you can restart the automounter with:

 # /etc/init.d/autofs start 

Troubleshooting NFS

Good troubleshooters will tell you that solving problems is all about isolating the problem before you try to fix it. As logical as this sounds, many people think they know what's wrong and try to fix it before gathering all the information they need to make a proper decision. With NFS, the problem can lie in one of three areas: the client, the server, or the network connection.

To help isolate the problem, find out what works and what doesn't. For example, if only one client machine cannot attach to the NFS server, the problem is likely on the client side. However, if no one can get to the NFS server, a network problem or an NFS server problem is more likely.

Before beginning any NFS troubleshooting, ensure that the nfsd and mountd daemons are running on the NFS server. They should start automatically at boot, provided that there is at least one entry in the /etc/dfs/dfstab file.

Tip 

By default, all mounts are made with the intr option. If your remote program hangs, and you get a server not responding error message, pressing Control+C on your keyboard should kill the remote application.

Hard-mounted remote file systems will behave differently than soft-mounted remote file systems. Accordingly, you will receive different error messages if the server stops responding, depending on whether you have used a hard or soft mount. If your remote file system is hard mounted, and the server (named filesrv in this example) fails to respond, you will see the following error message:

 NFS server filesrv not responding still trying 

Because of the hard mount, though, your client computer will continue to try the mount. If you have used soft mounts, you will see the following error when the server fails to respond:

 NFS operation failed for server filesrv: error # (error message) 

Soft mounts increase the chance of corrupted data on Read/Write file systems or file systems that contain executable programs. Therefore, such file systems should only be hard mounted.

One useful troubleshooting command is nfsstat. If NFS response seems slow, the nfsstat command can return statistics about the NFS server and clients. To display client statistics, use nfsstat -c. Server statistics are displayed with nfsstat -s, and nfsstat -m shows statistics for each NFS-mounted file system.
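
The -m output is particularly handy for confirming which options, such as hard or soft, are in effect for a mount. Its output resembles the following (trimmed for space):

 # nfsstat -m
 /localfiles from pebbles:/files1
  Flags: vers=3,proto=tcp,sec=sys,hard,intr,rsize=32768,wsize=32768,retrans=5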

Using AutoFS

If your network has a large number of NFS resources, managing the mapping of these resources can become cumbersome. AutoFS, also called the automounter, is a client-side service that provides automatic mounting of remote file systems. If you use AutoFS, remote directories are mounted only when they are being accessed by a client and are automatically unmounted after the client is finished.

Using AutoFS eliminates the need for mounting file systems at boot, which both reduces network traffic and speeds up the boot process. Also, users do not need to use the mount and umount commands, meaning that they do not need superuser access to mount file systems.

Here is how AutoFS works: when a user attempts to access a remote file system, a mount is established by the automounter. When the file system has not been accessed for a certain period of time, the mount is automatically broken. AutoFS is managed by the automountd daemon, which runs continuously on the client machine and handles mounts and unmounts, and by the automount service, which sets up mount points and manages automount maps.

When the system boots, the automount command reads the master map file, named auto_master, and creates the initial set of AutoFS mounts. These file systems are not actually mounted at startup; instead, AutoFS sets up mount points, called trigger nodes, under which file systems will be mounted on demand.

After the initial AutoFS mounts are configured, they can trigger file systems to be mounted under them. So when a user tries to mount a file system by using AutoFS, automountd mounts the requested file system under the trigger node.

The automount command is used to invoke the AutoFS service. The syntax of automount is as follows:

 # automount -t time -v 

The -t option sets the time, in seconds, that a file system should remain mounted if not in use. The default is five minutes. However, on systems with a lot of automounted resources, you might want to increase this value to reduce the overhead caused by checking file systems to see whether they're active. The -v option reports automounting information in verbose mode, which can be useful in troubleshooting AutoFS problems.
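
For example, to keep idle file systems mounted for 15 minutes and report automounter activity verbosely, you could run:

 # automount -t 900 -v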

The automount service does not read the /etc/vfstab file for a list of file systems to mount. Instead, it's configured through its own set of files, known as AutoFS maps.

AutoFS Maps

AutoFS uses three types of maps: master, direct, and indirect. Maps can be located on each local system or centrally located on a name server such as NIS or NIS+. If you have a network with a large number of clients, using a name service is preferred over maintaining local files.

Master Map

The master map associates a directory with a map point, and also lists direct and indirect maps used. In a sense, the master map (/etc/auto_master) configures AutoFS. Here is the default master map:

 # more /etc/auto_master
 # Master map for automounter
 #
 +auto_master
 /net          -hosts      -nosuid,nobrowse
 /home         auto_home   -nobrowse
 /xfn          -xfn
 /-            auto_direct -ro

This map file begins with the +auto_master statement, which incorporates the contents of the auto_master map from the name service. Each line that follows is an entry containing the following information: mount point, map name, and mount options. For example, the line beginning with /net (the mount point) has -hosts as the map name and the nosuid and nobrowse mount options. Table 11.12 describes the master map fields.

Table 11.12: /etc/auto_master Fields

mount_point
    The full absolute pathname of a directory to be used as the mount point. AutoFS will create the directory if it does not already exist. The notation /- as a mount point indicates that the map is a direct map; no particular mount point is associated with the map.

map_name
    The map that AutoFS uses to find directories or mount information. A preceding slash (/) indicates that a local file is to be used; otherwise, AutoFS uses the name service specified in the name service switch file (/etc/nsswitch.conf) to locate mount information. The /net and /xfn mount points use special maps.

mount_options
    An optional, comma-separated list of options that apply to the mount point.

Without making any changes to the auto_master map, users can access remote file systems through the /net mount point. This is because of the /net entry, which uses a built-in special map named -hosts that uses only the hosts database.

For example, imagine that your network has an NFS server named gumby, which has a shared file system named /files. Clients using only the default map could access the resource by using the following command:

 $ cd /net/gumby/files 

The path used depends on the name of the server, though. For example, if you then wanted to access the /docs file system on the remote system pokey, you would need to use:

 $ cd /net/pokey/docs 

The /home mount point references the /etc/auto_home map, which is an indirect map that supports the mounting of home directories from anywhere on the network.

Again, keep in mind that although the master map sets up the map points for /net and /home automatically, its other primary responsibility is to point clients to direct and indirect maps for the automatic mounting of remote resources.

Anytime you modify a master map, you will need to stop and restart the automounter for the changes to take effect.
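
You can do this with the scripts shown earlier in the "Starting and Stopping NFS Services" section:

 # /etc/init.d/autofs stop
 # /etc/init.d/autofs start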

Direct Map

A direct map is an automount point that contains a direct association between a mount point on a client and a directory on a server. Direct maps require a full pathname to explicitly indicate the relationship. Here is a sample auto_direct map:

 # more /etc/auto_direct
 /usr/local   -ro \
      /bin    server1:/export/local/sun4 \
      /share  server1:/export/local/share
 /usr/man     -ro   server1:/usr/man  server2:/usr/man  server3:/usr/man
 /docfiles          filesrv:/docs
 #

A direct map has three fields: key, mount options, and location. The key is the pathname of the local mount point, for example, the local /docfiles directory. The mount options are standard options that you want to apply to the mount. Finally, the location is the absolute path of the remote file system that you want to mount. Locations should not contain relative mount point names, but full, absolute pathnames. For example, a home directory should be listed as server:/export/home/username, not as server:/home/username.

You will notice that for the /usr/man mount point, three servers are listed. Multiple locations can be used for failover. If the client attempts to access /usr/man, it can retrieve the information from any of the three servers. If the first server is busy or unavailable, the client can attempt to use the next one. By default, the client will attempt to connect to the closest server in the list, based on network distance and response time. However, if you want, you can indicate priorities for the listed servers, as in this example:

 /usr/man -ro serverx,servery(1),serverz(2):/usr/man 

The first server, serverx, does not have a priority and therefore defaults to the highest priority available (0). The servery server will be tried second, and the serverz server last.

Any time you modify a direct map, you will need to stop and restart the automounter for the changes to take effect.

Indirect Map

Whereas a direct map uses mount points that are specified in the named direct map file, an indirect map uses mount points that are defined in the auto_master file. Indirect maps establish associations between mount points and directories by using a substitution key value. Home directories are easily accessed through indirect maps, and the auto_home map is an example of an indirect map for this purpose. Here is a sample auto_home map:

 # more /etc/auto_home
 # Home directory map for automounter
 #
 +auto_home
 qdocter    Q-Sol:/export/home/qdocter
 kdocter    Q-Sol:/export/home/kdocter
 mgantz     Q-Sol:/export/home/mgantz
 sjohnson   Q-Sol:/export/home/sjohnson
 fredee     Q-Sol:/export/home/fredee
 ramini     Q-Sol:/export/home/ramini

As with direct maps, there are three fields: key, mount options, and location. In an indirect map, though, the key is a simple reference name in the indirect map. The location should always be an absolute path.

As mentioned in the introduction to this section, indirect maps use mount points as specified in the auto_master file. Here is an example:

 # more /etc/auto_master
 # Master map for automounter
 #
 +auto_master
 /net          -hosts       -nosuid,nobrowse
 /home         auto_home    -nobrowse
 /xfn          -xfn
 /files        auto_files

This auto_master file contains the /files mount point, which references the auto_files indirect map. Here is the auto_files map:

 # more /etc/auto_files
 #
 +auto_files
 docs      server1:/projects/data/files

When the /files directory is accessed (as indicated in the auto_master file), the automounter creates a trigger node for the /files/docs directory. After the /files/docs directory is accessed, AutoFS completes the mounting of the server1:/projects/data/files file system. The user can trigger this whole process by using directory navigation or management commands, such as cd or ls, and does not need to use the mount command at all.
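
For example, either of the following commands, run by an ordinary user, is enough to trigger the mount:

 $ cd /files/docs
 $ ls /files/docs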

Automount Summary

Although the automounter is convenient, it's best used for infrequently accessed file systems. Each time a file system is mounted, network traffic is generated. Also, the automounter must continually check whether the automounted resources are still being used and unmount the idle file systems, which adds some overhead.

For NFS resources that are frequently accessed, standard NFS mounting might be more efficient for your network.

Real World Scenario: Efficient File System Usage

You work for a natural gas management company, and your network has two physical locations: Denver and Cheyenne. The Denver office is the main corporate office and contains four of the network's five servers. Although the Cheyenne office has only about 30 employees, it has a dedicated connection to the Denver office.

One of your Denver servers is configured as an NFS server and contains user home directories, project files, and the company's database. Home directories for Cheyenne users are stored on the Cheyenne server. However, users in Cheyenne need to access the company database as well as critical project files located on the Denver NFS server. These users complain that access to files stored in Denver is very slow, as the dedicated connection is only 128Kbps. What can you do to speed up file access?

You're already using NFS, which is a good thing. The problem is speed, though. This is a good time to use the CacheFS file system to improve performance for Cheyenne users. CacheFS can be set up on computers in the Cheyenne office to cache the files that users need to perform their jobs. In fact, it would probably even be a good idea to pack the cache, if you know which files will be used on a consistent basis.

The CacheFS file system is designed to improve response time over slow network connections, and this seems like an ideal case in which to implement it.
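
Here is a minimal sketch of such a setup on a Cheyenne client, assuming the Denver server is named denver and shares /export/db (both names are hypothetical). You create the cache with cfsadmin and then mount the NFS resource through it; the cachefspack command can then be used to preload (pack) frequently used files into the cache:

 # cfsadmin -c /cache/cache1
 # mount -F cachefs -o backfstype=nfs,cachedir=/cache/cache1 denver:/export/db /db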




