Authentication


Solaris™ Operating Environment Boot Camp
By David Rhodes, Dominic Butler
Chapter 18.  NFS, DFS, and Autofs


We have now successfully mounted the resources on the clients, and all the users can access their data correctly. The only odd thing is that if we try to create a file as root, it doesn't get created with the correct ownership. In fact, depending on where we try to write it, we may not have permission to create the file at all!

For example, if we run the following commands, we can see that we end up with a file owned by someone known as nobody:

 server# share -F nfs /temp_dir
 server#
 client# mount -F nfs server:/temp_dir /temp_dir
 client# touch /temp_dir/testfile
 client# ls -l /temp_dir/testfile
 -rw-r--r--  1 nobody   nobody         0 Oct 22  2001 /temp_dir/testfile
 client#

We haven't created the user, so where has it come from? If we look in the password file, we can see the following entries for nobody:

 helium# grep "^nobody" /etc/passwd
 nobody:x:60001:60001:Nobody:/:
 nobody4:x:65534:65534:SunOS 4.x Nobody:/:
 helium#

The reason the nobody users are defined has to do with the way that users are authenticated within NFS (actually by the underlying RPC mechanism); let's see why.

"Authentication" is the term for the process of proving you are who you say you are. For example, on a local machine you authenticate by following the normal login process, listed below:

  1. The system presents the login prompt and asks you to log in.

  2. You say you are "Mike Smith" by entering his user ID, "msmith" in this case.

  3. The system asks you to prove this by entering the correct password.

  4. You do; the system is happy with this and logs you on.

A similar process is followed when using an RPC-based application, although the exact steps depend upon how the application has been written. RPC supports several different authentication levels, which are shown in Table 18.7; any RPC-based application can use these facilities to create a more secure product.

Table 18.7. RPC Authentication Levels

 Level        Description
 AUTH_NONE    No authentication. Supported by share_nfs only (not mount_nfs
              or automount). Maps clients to the nobody user.
 AUTH_SYS     The user's UID and GID are passed to, and trusted by, the
              NFS server.
 AUTH_DH      Also known as AUTH_DES. Uses the Diffie-Hellman public key
              system.
 AUTH_KERB4   Also known as AUTH_KERB. Uses the Kerberos Version 4
              authentication system.

NFS uses the AUTH_SYS level of security by default. With it, the user's UID and GID are passed to the server for authentication, which means each user's IDs should be the same on every machine on the network (a common problem is a user having different IDs on different machines, which causes files to appear to be owned by the wrong user).
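Because AUTH_SYS simply trusts the numeric IDs sent by the client, it is worth checking that a user's UID matches across machines. The sketch below is illustrative: uid_of is a hypothetical helper (not a standard command), and the file names are assumptions standing in for copies of each machine's passwd file.

```shell
# uid_of NAME FILE - print the numeric UID recorded for NAME in a
# passwd(4)-format file. Hypothetical helper, not a Solaris command.
uid_of() {
    awk -F: -v u="$1" '$1 == u { print $3 }' "$2"
}

# Compare a user's UID as recorded on two machines (assuming their
# passwd files have been copied locally as pw_server and pw_client):
#   uid_of msmith pw_server
#   uid_of msmith pw_client
```

If the two numbers differ, files created via NFS will appear to belong to whichever local user happens to hold that UID.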

To improve security at this level, the root user's special privileges are revoked on NFS-mounted filesystems: root's user ID is remapped to the user nobody, leaving root with normal user privileges. This means root must be explicitly granted read or write permission for anything it needs, just as a normal (nonprivileged) user would be.
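The remapping logic can be sketched as a small shell function. This is purely illustrative of the idea (map_uid is not part of NFS, and 60001 is the nobody UID from the passwd entries shown earlier): the server trusts the client-supplied UID except that UID 0 is squashed to nobody unless the share grants root access.

```shell
# map_uid UID ROOT_ALLOWED - sketch of AUTH_SYS root squashing.
# Illustrative only; the real mapping happens inside the NFS server.
map_uid() {
    uid="$1"; root_allowed="$2"
    if [ "$uid" -eq 0 ] && [ "$root_allowed" != "yes" ]; then
        echo 60001        # root squashed to nobody
    else
        echo "$uid"       # all other UIDs pass through unchanged
    fi
}
```

Granting "root=" access on the share corresponds to the ROOT_ALLOWED case, in which UID 0 passes through untouched.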

If, however, we do want to allow root access, we only need to add the "root=access_list" option to the share for the relevant resource. For example, to allow the machines named "client1" and "client2" to have root access to a directory named /temp_dir, we would run the following on the server:

 server# share -F nfs -o root=client1:client2 /temp_dir
 server#
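To make the share persistent across reboots, the same share command can be placed in /etc/dfs/dfstab, whose lines are executed at boot (or on demand with shareall). A minimal sketch of the entry, using the same illustrative client names:

```shell
# /etc/dfs/dfstab entry equivalent to the command above; each line in
# this file is a share command run at boot or by shareall.
share -F nfs -o root=client1:client2 /temp_dir
```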

Secure NFS

NFS can be configured to use one of the stronger authentication mechanisms (AUTH_DH or AUTH_KERB4). This is known as "Secure NFS." While it is beyond the scope of this book, we'll briefly note the steps that need to be taken to configure it. The example here is based on AUTH_DH authentication:

  1. Establish public and secret keys with newkey.

  2. Log in with keylogin.

  3. Configure the server resource (for example, "share -F nfs -o sec=dh /temp_dir").

  4. Mount the resource on the client with the correct options (for example, "mount -F nfs -o sec=dh server:/temp_dir /temp_dir").

Client Failover

NFS supports something known as "client failover." This is a concept similar to that available with Autofs, as we'll see later. The idea is that if there are a number of machines that can provide the same resource, we can list them all as redundant servers in /etc/vfstab.

A few caveats go with this type of configuration: The filesystems must all be mounted read-only (ro) on the client and the file layout should be exactly the same within the hierarchy. The data will be read from the first host until it fails, then the next will be used, and so on.

One of the best examples to show this is the standard manual page hierarchy (it should be the same across all the servers, and can easily be mounted read-only). The entry used to mount them would be something similar to this:

 lithium# cat /etc/vfstab
 #
 #device         device          mount           FS      fsck    mount   mount
 #to mount       to fsck         point           type    pass    at boot options
 #
 <lines removed for clarity>
 hydrogen,helium,lithium:/usr/share/man - /usr/share/man nfs - yes ro
 lithium#

NFS URL

Throughout the chapter, we have used the most common method of referring to a resource: "hostname:pathname." As an alternative, NFS also allows an Internet-style reference, known as an "NFS URL" and written as "nfs://hostname/pathname."

As an example, the following syntax shows the two ways of mounting the same resource:

 mount helium:/export/home/users/msmith /home/msmith
 mount nfs://helium/export/home/users/msmith /home/msmith




    Solaris Operating Environment Boot Camp
    ISBN: 0130342874
    Year: 2002
    Pages: 301
