The Network File System (NFS) is the standard for sharing directories and files over networks of Linux and Unix computers. It was originally developed by Sun Microsystems in the mid-1980s. Linux has supported NFS (both as a client and a server) for years, and NFS continues to be popular in organizations with Unix- or Linux-based networks.
You can create shared NFS directories directly by editing the /etc/exports configuration file, or you can create them with Red Hat's NFS Configuration tool. As you need an NFS server before you can configure an NFS client, I describe how to create NFS servers first.
Exam Watch
It is a good skill for all Linux administrators to know how to connect to a shared NFS directory. While the Red Hat Exam Prep guide does not explicitly require RHCT candidates to have this knowledge, it is consistent with the spirit of that exam. The Red Hat Exam Prep guide does explicitly require that RHCE candidates know how to configure an NFS server.
NFS servers are relatively easy to configure. All that is required is to export a filesystem, either generally or to a specific host, and then mount that filesystem from a remote client. I've shown you how to configure an NFS server to install RHEL 3 over a network. In this chapter, you'll learn the basics of NFS server configuration and operation.
Two RPM packages are closely associated with NFS: portmap and nfs-utils. They should be installed by default in RHEL 3. Just in case, you can use the rpm -q packagename command to make sure these packages are installed. The rpm -ql packagename command provides a list of files installed from that package. The nfs-utils package includes a number of key files. The following is not a complete list:
/etc/rc.d/init.d/nfs (control script for NFS)
/etc/rc.d/init.d/nfslock (control script for lockd and statd)
/usr/share/doc/nfs-utils-1.0.5 (documentation, mostly in HTML format)
Server daemons in /usr/sbin: rpc.mountd, rpc.nfsd
Server daemons in /sbin: rpc.lockd, rpc.statd
Control programs in /usr/sbin: exportfs, nfsstat, nhfsgraph, nhfsnums, nhfsrun, nhfsstone, showmount
Status files in /var/lib/nfs: etab, rmtab, statd, state, xtab
The portmap RPM package includes the following key files (also not a complete list):
/etc/rc.d/init.d/portmap (control script)
Server daemon in /sbin: portmap
Control programs in /usr/sbin: pmap_dump, pmap_set
Once configured, you can set up NFS to start during the Linux boot process, or you can start it yourself with the service nfs start command. NFS also depends on the portmap package; the portmap daemon maps RPC services such as NFS to the network ports where they listen, including for directories shared through /etc/exports. Because of this dependency, make sure to start the portmap daemon before starting NFS, and don't stop it until after stopping NFS.
On The Job
Remember that both the portmap and nfs daemons must be running before NFS can work.
The nfs service script starts the following processes:
rpc.mountd Handles mount requests
nfsd Starts the nfsd kernel processes (eight by default) that service NFS requests
rpc.rquotad Reports disk quota statistics to clients
If any of these processes are not running, NFS won't work. Fortunately, it's easy to check for these processes. Just run the rpcinfo -p command. As with other service scripts, if you want it to start when RHEL 3 boots, you'll need to run a command such as:
# chkconfig --level 35 nfs on
Alternatively, you can use the Red Hat Service Management utility described in Chapter 4 to make sure NFS starts the next time you boot RHEL 3.
NFS is fairly simple. The only major NFS configuration file is /etc/exports. Once configured, you can export these directories with the exportfs -a command. Each line in this file lists the directory to be exported, the hosts it will be exported to, and the options that apply to this export. You can export a given directory only once. Take the following examples from an /etc/exports file:
/pub                 (ro,sync) someone.mylocaldomain.com(rw,sync)
/home                *.mylocaldomain.com(rw,sync)
/opt/diskless-root   diskless.mylocaldomain.com(rw,no_root_squash,sync)
In the preceding example, the /pub directory is exported to all users as read-only. It is also exported to one specific computer with read/write privileges. The /home directory is exported, with read/write privileges, to any computer on the .mylocaldomain.com network. Finally, the /opt/diskless-root directory is exported with full read/write privileges (even for root users) on the diskless.mylocaldomain.com computer.
All of these options include the sync flag. This requires all changes to be written to disk before a command such as a file copy is complete. This is a fairly new change, which Red Hat first implemented on Red Hat Linux 8.0.
Be very careful with /etc/exports; one common cause of problems is an extra space between expressions. For example, if you type in a space after the comma in (ro,sync), your directory won't get exported, and you'll get an error message.
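To illustrate the pitfall, here is a small sketch (a hypothetical helper, written in POSIX sh) that flags an /etc/exports entry containing a space after the comma in its option list:

```shell
#!/bin/sh
# Hypothetical helper: flag an /etc/exports entry that has a space
# after the comma in its option list, e.g. "(rw, sync)".
check_exports_line() {
  case "$1" in
    *", "*) echo "suspect: space inside option list" ;;
    *)      echo "ok" ;;
  esac
}

check_exports_line '/home *.mydomain.com(rw,sync)'    # prints: ok
check_exports_line '/home *.mydomain.com(rw, sync)'   # flagged as suspect
```

A check like this only catches one class of typo, but it is exactly the class that causes the export error described above.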
In Linux network configuration files, you can specify a group of computers with the right wildcard. This process in Linux is also known as globbing. What you do for a wildcard depends on the configuration file. The NFS /etc/exports file uses 'conventional' wildcards; for example, *.mydomain.com specifies all computers within the mydomain.com domain. In contrast, /etc/hosts.deny is less conventional; .mydomain.com, with the leading dot, specifies all computers in that same domain.
For IPv4 networks, wildcards often require some form of the subnet mask. For example, 192.168.0.0/255.255.255.0 specifies the 192.168.0.0 network of computers with IP addresses that range from 192.168.0.1 to 192.168.0.254. Some services support the use of CIDR (Classless Inter-Domain Routing) notation. In CIDR, since 255.255.255.0 masks 24 bits, CIDR represents this with the number 24. If you're configuring a network in CIDR notation, you can represent this network as 192.168.0.0/24. For details, see the discussion for each applicable service in Chapters 7 through 11.
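The relationship between a dotted-quad subnet mask and its CIDR prefix is simply a count of the mask's one bits. A quick shell sketch of the conversion:

```shell
#!/bin/sh
# Count the one bits in a dotted-quad netmask to get the CIDR prefix,
# so 255.255.255.0 becomes 24.
mask_to_prefix() {
  bits=0
  for octet in $(echo "$1" | tr '.' ' '); do
    while [ "$octet" -gt 0 ]; do
      bits=$((bits + octet % 2))
      octet=$((octet / 2))
    done
  done
  echo "$bits"
}

mask_to_prefix 255.255.255.0   # prints 24
mask_to_prefix 255.255.0.0     # prints 16
```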
Once you've modified /etc/exports, you need to do more. First, this file is simply the default set of exported directories. You need to activate them with the exportfs -a command. The next time you boot RHEL 3, if you've activated nfs at the appropriate runlevels, the nfs start script automatically runs the exportfs -r command, which synchronizes exported directories. You can see this for yourself in the /etc/rc.d/init.d/nfs script.
When you add a share to /etc/exports, the exportfs -r command adds the new directories. However, if you're modifying, moving, or deleting a share, it is safest to first temporarily unexport all filesystems with the exportfs -ua command before reexporting the shares with the exportfs -a command.
Once exports are active, they're easy to check. Just run the showmount -e command on the server. If you're looking for the export list for a remote NFS server, just add the name of the NFS server. For example, the showmount -e enterprise3 command looks for the list of exported NFS directories from the enterprise3 computer. If this command doesn't work, you may have blocked NFS messages with a firewall.
Although it is easy to configure /etc/exports directly, the Red Hat NFS Server Configuration tool is also easy to use and reliable. To start this tool, type the redhat-config-nfs command in a GUI terminal, or click Main Menu | System Settings | Server Settings | NFS. This opens the NFS Server Configuration window shown in Figure 9-10. After you go through these steps, you'll see how much simpler it is to add a line to /etc/exports.
Figure 9-10: NFS Server Configuration
To add a shared NFS directory, take the following steps:
Click Add or File | Add NFS Share. This opens the Add NFS Share window shown in Figure 9-11.
Figure 9-11: The Add NFS Share window
Under the Basic tab, add the directory that you want to share. If you want to limit access to a specific host or domain, add the appropriate names to the Host(s) text box. If you want to allow access to all users, enter an asterisk (*). Set read-only or read/write permissions as desired. Click the General Options tab.
Under the General Options tab, you can set several parameters for this share, as described in Table 9-2. Note that sync is the only option that's active by default. The default is sufficient unless you receive specific instructions for an NFS share on your exam. Click the User Access tab.
Table 9-2 lists each GUI option, the corresponding /etc/exports option, and an explanation:

Allow connections from ports 1024 or higher (insecure): Accepts NFS requests that originate from ports above 1024, which most firewalls do not block.

Allow insecure file locking (insecure_locks): Supports older NFS clients; locking requests are not checked against user permissions on the file.

Disable subtree checking (no_subtree_check): If you export a subdirectory such as /mnt/inst, higher-level directories such as /mnt are not checked for permissions.

Sync write operations on request (sync): Data is written to disk when requested. Active by default.

Force sync of write operations immediately (no_wdelay): Data is written to the share immediately.
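Each of these GUI choices ends up as an option in /etc/exports. For example, a hypothetical read-only share combining several of them might look like this (the directory and domain name are assumptions):

```
/mnt/inst    *.mylocaldomain.com(ro,sync,insecure,no_subtree_check)
```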
Under the User Access tab, you can set different parameters for remote users, as defined in Table 9-3.
Table 9-3 lists each GUI option, the corresponding /etc/exports option, and an explanation:

Treat remote root user as local root (no_root_squash): Remote root users get root privileges on the shared directory.

Treat all client users as anonymous users (all_squash): All remote users are mapped to an anonymous user. In RHEL 3, that user is nfsnobody, which you can see in /etc/passwd.

Specify local user ID for anonymous users (anonuid=userid): Maps remote users to a specific user ID such as pcguest.

Specify local group ID for anonymous users (anongid=groupid): Maps remote groups to a specific group ID such as pcguest.
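As a sketch of how the user access options combine in /etc/exports, the following maps every remote user to a local account with UID and GID 600 (the directory, domain, and ID values are assumed examples):

```
/pub    *.mylocaldomain.com(ro,sync,all_squash,anonuid=600,anongid=600)
```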
Once you've finished configuring your shared NFS directory, click OK. The directory is automatically exported with the exportfs -r command and, as long as you aren't blocking access with firewalls, should now be ready for use.
If you have problems, check for firewalls. Check for limitations in /etc/hosts.allow and /etc/hosts.deny. For more information on firewall management, read Chapter 10. If necessary, use the service nfs stop and service nfs start commands to restart the NFS service. If there are still problems, you may find more information in your /var/log/messages file.
Exam Watch
If you use Red Hat's NFS Server Configuration tool, don't forget to activate NFS at the appropriate runlevels (3 and 5) so your shared directories are available when your exam proctor reboots your computer to see what you've done.
Unfortunately, the NFS Server Configuration tool does not activate NFS at the appropriate runlevels for the next time you boot Linux.
Now you can mount a shared NFS directory from a client computer. The commands and configuration files are similar to those used for any local filesystem.
Before doing anything elaborate, you should test the shared NFS directory from a Linux or Unix client computer. But first, you should check for the list of shared NFS directories. If you're on an NFS server computer named enterprise3, the command is easy:
# showmount -e
This command assumes that the NFS server is local. If you don't see a list of shared directories, review the steps described earlier in this chapter. Make sure you've configured your /etc/exports file properly. Remember to export the shared directories. And your NFS server can't work if you haven't started the NFS daemon on your computer.
If you're on a remote NFS client computer and want to see the list of shared directories from the enterprise3 computer, run the following command:
# showmount -e enterprise3
If it doesn't work, there are a couple more things to check: firewalls, and your /etc/hosts file or DNS server. If you have a problem with hostname resolution, you can substitute the IP address of the NFS server. You'll see output similar to the following:
Export list for enterprise3:
/mnt/inst *
Now if you want to mount this directory locally, you'll need an empty local directory. Create a directory such as /mnt/remote if required. You can then mount the shared directory from the enterprise3 computer with the following command:
# mount -t nfs enterprise3:/mnt/inst /mnt/remote
This command mounts the /mnt/inst directory from the computer named enterprise3. This command specifies the use of the NFS protocol (-t nfs), and mounts the share on the local /mnt/remote directory. Depending on traffic on your network, this command may take a few seconds. Be patient! When it works, you'll be able to access files on /mnt/inst as if it were a local directory.
You can also configure an NFS client to mount a remote NFS directory during the boot process, as defined in /etc/fstab. For example, the following entry in a client /etc/fstab mounts the /homenfs share from the computer named nfsserv, on the local /nfs/home directory:
## Server:Directory    Mount Point    Type    Mount Options    Dump  Fsckorder
nfsserv:/homenfs       /nfs/home      nfs     soft,timeo=100   0     0
Alternatively, an automounter, such as autofs or amd, can be used to dynamically mount NFS filesystems as required by the client computer. The automounter can also unmount these remote filesystems after a period of inactivity. For more information, see Chapter 4.
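A minimal autofs sketch for the same share, assuming the map file name /etc/auto.nfs (a hypothetical name; see Chapter 4 for the actual map files on your system):

```
# /etc/auto.master
/nfs    /etc/auto.nfs    --timeout=60

# /etc/auto.nfs: mounts nfsserv:/homenfs on /nfs/home on demand
home    -rw,soft    nfsserv:/homenfs
```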
When you start NFS as a client, it adds a few new system processes, including:
rpc.statd Tracks the status of servers, for use by rpc.lockd in recovering locks after a server crash
rpc.lockd Manages the client side of file locking
NFS supports diskless clients, which are computers without a hard drive. A diskless client may use a boot floppy or a boot PROM to get started. Then, embedded commands can mount the appropriate root (/) directory, swap space, the /usr directory as read-only, and other shared directories such as /home in read/write mode. If your computer uses a boot PROM, you'll also need access to DHCP and TFTP servers for network and kernel information.
Red Hat Enterprise Linux 3 includes features that support diskless clients. While not listed as part of the current Red Hat exam requirements or related course outlines, I would not be surprised to see such requirements in the future. You can find out more about how this works with the Network Installation and Diskless Environment tool, which you can start with the Main Menu | System Settings | Server Settings | Network Booting Service command.
NFS does have its problems. An administrator who controls shared NFS directories would be wise to take note of these limitations.
NFS is a 'stateless' protocol. In other words, you don't need to log in separately to access a shared NFS directory. Instead, the NFS client normally contacts rpc.mountd on the server. The rpc.mountd daemon handles mount requests. It checks the request against currently exported filesystems. If the request is valid, rpc.mountd provides an NFS file handle (a 'magic cookie'), which is then used for further client/server communication for this share.
The stateless protocol allows the NFS client to wait if the NFS server ever has to be rebooted. The software waits, and waits, and waits. This can cause the NFS client to hang as discussed later.
This can also lead to problems with insecure single-user clients. When a file is opened through a share, it may be 'locked out' from other users. When an NFS server is rebooted, handling the locked file can be difficult. The security problems can be so severe that NFS communication is blocked even by the default Red Hat Enterprise Linux firewall.
In theory, the recent change to NFS, setting up sync as the default for file transfers, should help address this problem. In theory, locked-out users should not lose any data that they've written with the appropriate commands.
If you have any symbolic links on an exported directory, be careful. The client interprets a symbolically linked file with respect to its own local filesystem. Unless the mount point and filesystem structures are identical, the linked file can point to an unexpected location, which may lead to unpredictable consequences.
You have a couple of ways to address this issue. You can take care to limit the use of symbolic links within an exported directory. Alternatively, NFS offers a server-side export option (link_relative) that converts absolute links to relative links; however, this can have counter-intuitive results if the client mounts a subdirectory of the exported directory.
By default, NFS is set up to root_squash, which prevents root users on an NFS client from gaining root access to a share on an NFS server. Specifically, the root user on a client (with a user ID of 0) is mapped to the nfsnobody unprivileged account.
This behavior can be disabled via the no_root_squash server export option in /etc/exports. In that case, root users who connect from a client gain root privileges on the shared NFS directory.
Because NFS is stateless, NFS clients may wait up to several minutes for a server. In some cases, an NFS client may wait indefinitely if a server goes down. During the wait, any process that looks for a file on the mounted NFS share will hang. Once this happens, it is generally difficult or impossible to unmount the offending filesystems. You can do several things to reduce the impact of this problem:
Take great care to ensure the reliability of NFS servers and the network.
Avoid mounting many different NFS servers at once. If several computers mount each other's NFS directories, this could cause problems throughout the network.
Mount infrequently used NFS exports only when needed. NFS clients should unmount these shares after use.
Set up NFS shares with the sync option, which should at least reduce the incidence of lost files.
Don't configure a mission-critical computer as an NFS client, if at all possible.
Keep NFS mounted directories out of the search path for users, especially that of root.
Keep NFS mounted directories out of the root (/) directory; instead, segregate them to a less frequently used filesystem such as /nfs/home or /nfs/share.
Consider using the soft option when mounting NFS filesystems. When an NFS server fails, a soft-mounted NFS filesystem will fail rather than hang. However, this risks the failure of long-running processes due to temporary network outages.
In addition, you can use the timeo option to set a timeout interval, in tenths of a second. For example, the following command would mount /nfs/home with a timeout of 30 seconds:
# mount -o soft,timeo=300 myserver:/home /nfs/home
An NFS server daemon checks mount requests. First, it looks at the current list of exports, based on /etc/exports. Then, it looks up the client's IP address to find its hostname. This requires a reverse DNS lookup.
This hostname is then finally checked against the list of exports. If NFS can't find a hostname, rpc.mountd will deny access to that client. For security reasons, it also adds a 'request from unknown host' entry in /var/log/messages.
Multiple NFS clients can be set up to mount the same exported directory from the same server. It's quite possible that people on different computers end up trying to use the same shared file. This is addressed by the file locking daemons, rpc.lockd and rpc.statd.
NFS has historically had serious problems making file locking work. If you have an application that depends on file locking over NFS, test it thoroughly before putting it into production.
It is impossible to export two directories in the same filesystem if one is inside the other. For example, /usr and /usr/local cannot both be exported unless /usr/local is mounted on a separate partition from /usr.
You can do several things to keep NFS running in a stable and reliable manner. As you gain experience with NFS, you might monitor or even experiment with the following:
Eight kernel NFS daemons, which is the default, is generally sufficient for good performance, even under fairly heavy loads. If your NFS server is busy, you may want to add additional NFS daemons through the /etc/rc.d/init.d/nfs script. Just keep in mind that the extra kernel processes consume valuable kernel resources.
NFS write performance can be extremely slow, particularly with NFS v2 clients, as the client waits for each block of data to be written to disk.
You may try specialized hardware with nonvolatile RAM. Data that is stored on such RAM isn't lost if you have trouble with network connectivity or a power failure.
In applications where data loss is not a big concern, you may try the async option. This makes NFS faster because async NFS mounts do not write files to disk until other operations are complete. However, a loss of power or network connectivity can result in a loss of data.
Hostname lookups are performed frequently by the NFS server; you can start the Name Service Cache Daemon (nscd) to speed lookup performance.
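On RHEL, the number of kernel NFS daemons is normally controlled through the RPCNFSDCOUNT variable read by the nfs init script (check your own /etc/rc.d/init.d/nfs script to confirm the variable name). A sketch of the change, demonstrated against a scratch file so it is safe to try anywhere:

```shell
#!/bin/sh
# Raise the kernel NFS daemon count from the default of 8 to 16.
# On a real server you would append this line to /etc/sysconfig/nfs
# and then run "service nfs restart"; a temporary file stands in here.
conf=$(mktemp)
echo 'RPCNFSDCOUNT=16' >> "$conf"
grep RPCNFSDCOUNT "$conf"   # confirm the setting took
```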
On The Job
NFS is a powerful file-sharing system. But there are risks associated with NFS. If an NFS server is down, it could affect your entire network. It's also not sufficiently secure to use on the Internet. NFS is primarily used on secure LAN/WAN networks.
NFS includes a number of serious security problems and should never be used in hostile environments (such as on a server directly exposed to the Internet), at least not without strong precautions.
NFS is an easy-to-use yet powerful file-sharing system. However, it is not without its problems. The following are a few security issues to keep in mind:
Authentication NFS relies on the host to report user and group IDs. However, this can expose your files if root users on other computers access your NFS shares. In other words, data that is accessible via NFS to any user can potentially be accessed by any other user.
Privacy Not even Secure NFS encrypts its network traffic.
portmap infrastructure Both the NFS client and server depend on the RPC portmap daemon. The portmap daemon has historically had a number of serious security holes. For this reason, portmap is not recommended for use on computers that are directly connected to the Internet or other potentially hostile networks.
If NFS must be used in or near a hostile environment, you can do some things to reduce the security risks:
Educate yourself in detail about NFS security. If you do not clearly understand the risks, you should restrict your NFS use to friendly, internal networks behind a good firewall.
Export as little data as possible, and export filesystems as read-only if possible.
Use root squash to prevent clients from having root access to exported filesystems.
If an NFS client has a direct connection to the Internet, use separate network adapters for the Internet connection and the LAN. Use the right firewall commands (iptables or ipchains) to block the routing on the TCP and UDP ports associated with portmapper, mountd, and nfsd.
Use a firewall system such as iptables or ipchains to deny access to the portmapper, mountd, and nfsd ports, except from explicitly trusted hosts or networks. The ports are
111 TCP/UDP   portmapper (server and client)
745 UDP       mountd (server)
747 TCP       mountd (server)
2049 TCP/UDP  nfsd (server)
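As a sketch, the following generates (rather than applies) iptables rules restricting those ports to a trusted network. The 192.168.0.0/24 network and the simple accept-then-drop layout are assumptions; review the output, then pipe it to sh as root to apply it:

```shell
#!/bin/sh
# Emit iptables rules that accept the NFS-related ports only from a
# trusted network and drop them from everywhere else.
trusted=192.168.0.0/24
rules=$(
  for spec in 'tcp 111' 'udp 111' 'udp 745' 'tcp 747' 'tcp 2049' 'udp 2049'; do
    set -- $spec
    echo "iptables -A INPUT -s $trusted -p $1 --dport $2 -j ACCEPT"
    echo "iptables -A INPUT -p $1 --dport $2 -j DROP"
  done
)
echo "$rules"
```

Generating rules this way keeps the port list in one place and avoids typing twelve nearly identical commands by hand.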
Use a port scanner to verify that these ports are blocked for untrusted network(s).
Exam Watch
While some may find it easier to learn with a GUI tool, these tools are usually more time-consuming and less flexible than direct action from the command line. As time is often short on the RHCE exam, I recommend that you learn how to configure and activate NFS and other services from the command line.
Exercise 9-2: NFS
This exercise requires two computers, one set up as an NFS server, the other as an NFS client. On the NFS server:
Set up a group named MIS for the Management Information Systems group in /etc/group.
Create the /MIS directory. Assign ownership to the MIS group with the chgrp command.
Set the SGID bit on this directory to enforce group ownership.
Update the /etc/exports file to allow read and write access from your local network. Run the following command to export it under NFS.
# exportfs -a
Restart the NFS service.
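One way to sketch steps 2 and 3, shown against a scratch directory so it can be tried without root privileges; on the real server you would use /MIS together with the groupadd and chgrp commands:

```shell
#!/bin/sh
# Demonstrate the SGID bit from the exercise: mode 2770 makes new
# files in the directory inherit the directory's group.
dir=$(mktemp -d)/MIS
mkdir "$dir"
chmod 2770 "$dir"
ls -ld "$dir" | cut -c1-10   # drwxrws--- ; the "s" is the SGID bit
```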
On an NFS client, take the following steps:
Create a directory for the server share called /mnt/MIS.
Mount the shared NFS directory on /mnt/MIS.
List all exported shares from the server and save this output as /mnt/MIS/thishost.shares.list.
Make this service a permanent connection in the /etc/fstab file. Assume that the connection might be troublesome and add the appropriate options, such as soft mounting.
Reboot the client computer. Check to see if the share is properly remounted.
Test the NFS connection. Stop the service on the server, and then try copying a file to the /mnt/MIS directory. While the attempt to copy will fail, it should not hang the client.
Restart the NFS server.
Edit /etc/fstab again. This time assume that NFS is reliable, and remove the special options that you added in step 4.
Reboot the client computer. Test the service with the new settings.
Now test what happens when you shut down the server. The mounted NFS directory on the client should hang when you try to access the service. Restart the server service and see if your client service resumes.
Exercise 9-3: Using the NFS Server Configuration tool
In this exercise, you'll use the options associated with the NFS Server Configuration tool to experiment with creating a shared directory in /etc/exports. While it's best and usually fastest to edit a Linux configuration file directly, Red Hat GUI configuration tools such as the NFS Server Configuration tool can help you learn about different options for Linux services.
Open a GUI on a RHEL 3 computer. If not already open, you can do so with the startx command.
Start the NFS Server Configuration tool. You can run redhat-config-nfs from a command line interface, or click Main Menu | System Settings | Server Settings | NFS.
In the NFS Server Configuration tool, click Add. This opens the Add NFS Share window with the Basic tab. Set up a share for your home directory. Share it with one specific host on your LAN, with read-only permissions.
Click the General Options tab. Select the options of your choice. It does not matter what you select; the purpose of this lab is to demonstrate the effect of the NFS Server Configuration tool on the /etc/exports file.
Click the User Access tab. Select the options of your choice.
Click OK. The settings you choose are saved in /etc/exports.
Open a command line window. Right-click on the desktop and select New Terminal from the pop-up menu that appears.
Open the /etc/exports file in the text editor of your choice. What is the relationship between the options you selected in the NFS Server Configuration tool and the command options associated with your home directory in /etc/exports? Close the /etc/exports file.
Back in the NFS Server Configuration tool, highlight the line associated with your home directory, and then click Properties. This opens the Edit NFS Share window with the settings that you just created.
Make additional changes under the three tabs in this window. After you click OK, check the results in /etc/exports. What happened?
If you don't want to actually export your home directory, highlight the appropriate line in the NFS Server Configuration tool and click Delete. What happens to /etc/exports?
Exit from the NFS Server Configuration tool.