Configuring the Server

So far, we have been looking at how the components tie together; now we will start to configure our actual systems. For this example, we'll use helium as the server, and assume we have a large disk connected to it that contains some important data. We're not worried about the type of data here; for all we know, it may be the company database or the internal telephone list! For now, we just know that we need access to it from every machine around the network.

The disk we are interested in is mounted on /data, which is shown in the output below:

 helium# df -k
 Filesystem         kbytes   used  avail capacity  Mounted on
 /proc                   0      0      0     0%    /proc
 /dev/dsk/c0t2d0s0   48349  16889  26626    39%    /
 /dev/dsk/c0t2d0s6  770543 522712 193893    73%    /usr
 fd                      0      0      0     0%    /dev/fd
 /dev/dsk/c0t2d0s1   61463   5592  49725    11%    /var
 /dev/dsk/c0t2d0s7  519718  82791 384956    18%    /export/home
 /dev/dsk/c0t2d0s5   38539   5928  28758    18%    /opt
 swap               130064     28 130036     1%    /tmp
 /dev/dsk/c0t1d0s0  504305 443210  53626    93%    /data
 helium#

The information we would like to access is actually located in subdirectories of the /data filesystem. We've already said that NFS works on a file/directory level rather than a filesystem level, which means that we don't have to share a complete filesystem from the machine; we can specify individual directories if we wish. In fact, if we list /data, we can see it contains two directories: local_files and remote_files. The local_files directory contains helium's local files, so we don't want that shared, but everyone else can safely mount remote_files:

 helium# ls /data
 local_files    remote_files
 helium#

We'll update /etc/dfs/dfstab with any share settings that we wish to apply. This means they will be shared automatically whenever the system is rebooted. (If we only require temporary access, we could simply run the share command from the command line instead, as shown below.)

 helium# cat /etc/dfs/dfstab
 <lines removed for clarity>
 share -F nfs /data/remote_files
 helium#
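
For comparison, a share created directly from the command line takes effect immediately but disappears at the next reboot. A minimal sketch (the -o ro option here is purely illustrative; our dfstab entry uses the default read-write access):

 helium# share -F nfs -o ro /data/remote_files
 helium# unshare /data/remote_files
 helium#

The unshare command removes the resource again once we no longer need it.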

Now we can start the NFS server. This will normally be carried out automatically at boot-time when the system detects that we have some filesystems to be shared. But since we have just created our dfstab file, we'll need to start it manually here:

 helium# /etc/init.d/nfs.server start
 helium#
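
If we want to confirm that the script really did start the daemons, a quick process check will show them both (illustrative output; the PIDs and start times will obviously differ):

 helium# ps -ef | grep -v grep | egrep 'mountd|nfsd'
     root   201     1  0 09:12:04 ?   0:00 /usr/lib/nfs/mountd
     root   203     1  0 09:12:05 ?   0:00 /usr/lib/nfs/nfsd -a 16
 helium#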

The startup script also performs a shareall for us, so let's check that we now have a valid entry in the /etc/dfs/sharetab file:

 helium# cat /etc/dfs/sharetab
 /data/remote_files      -       nfs     rw
 helium#
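
Incidentally, if we add further entries to dfstab while the server is already running, there's no need to restart anything; running shareall by hand reads dfstab and shares whatever it finds (a sketch; nothing is printed on success):

 helium# shareall
 helium#

In our case, the only entry we need is already shared, so this would have no visible effect.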

The sharetab file is up-to-date, which means that everything should have been shared from the system correctly:

 helium# dfshares
 RESOURCE                     SERVER    ACCESS    TRANSPORT
 helium:/data/remote_files    helium    -         -
 helium#
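
Since dfshares accepts a server name, the same check can be run from any machine on the network. For example, from a client (hydrogen is just a hypothetical client name here):

 hydrogen# dfshares helium
 RESOURCE                     SERVER    ACCESS    TRANSPORT
 helium:/data/remote_files    helium    -         -
 hydrogen#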

Good. Everything appears to be shared, but we haven't checked that mountd and nfsd are actually working correctly, so we'll do that next. We know that both of these are RPC-based processes, so let's check that they are responding using some of the RPC commands.

Checking RPC

The first task is to make sure that rpcbind has started OK; otherwise, any RPC-based servers won't be able to register with it. This would also cause the client side to fail, as it could not determine the server's port:

 helium# ps -ef | grep rpcbind
     root    98     1  0 08:50:33 ?   0:00 /usr/sbin/rpcbind
 helium# netstat -a
 <lines removed for clarity>
 TCP
    Local Address  Remote Address  Swind Send-Q Rwind Recv-Q   State
    *.sunrpc       *.*                 0      0     0      0   LISTEN
 <lines removed for clarity>
 helium#
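
As the full netstat listing can be very long, a quick filter on the sunrpc service name pulls out just the lines we care about (a sketch; the UDP listener shows up as well):

 helium# netstat -a | grep sunrpc
       *.sunrpc                               Idle
       *.sunrpc       *.*        0      0     0      0   LISTEN
 helium#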

The process is running and listening on its port, so now let's use rpcinfo to query it. First we'll get a list of the processes that are currently registered with rpcbind:

 helium# rpcinfo helium
 <lines removed for clarity>
    program version  netid      address          service    owner
    100005     1     udp     0.0.0.0.128.13      mountd     superuser
    100005     2     udp     0.0.0.0.128.13      mountd     superuser
    100005     3     udp     0.0.0.0.128.13      mountd     superuser
    100005     1     tcp     0.0.0.0.128.8       mountd     superuser
    100005     2     tcp     0.0.0.0.128.8       mountd     superuser
    100005     3     tcp     0.0.0.0.128.8       mountd     superuser
    100003     2     udp     0.0.0.0.8.1         nfs        superuser
    100003     3     udp     0.0.0.0.8.1         nfs        superuser
    100227     2     udp     0.0.0.0.8.1         nfs_acl    superuser
    100227     3     udp     0.0.0.0.8.1         nfs_acl    superuser
    100003     2     tcp     0.0.0.0.8.1         nfs        superuser
    100003     3     tcp     0.0.0.0.8.1         nfs        superuser
    100227     2     tcp     0.0.0.0.8.1         nfs_acl    superuser
    100227     3     tcp     0.0.0.0.8.1         nfs_acl    superuser
 <lines removed for clarity>
 helium#
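
The address column is in RPC "universal address" format: the last two dot-separated values encode the port number as high-byte.low-byte. For example, 0.0.0.0.8.1 means port 8 x 256 + 1 = 2049, the well-known NFS port, while 0.0.0.0.128.13 works out to port 32781. Running rpcinfo -p shows the same registrations in the more familiar port-based layout (a sketch; the dynamically assigned mountd ports will vary from system to system):

 helium# rpcinfo -p
 <lines removed for clarity>
    program vers proto   port  service
     100005    1   udp  32781  mountd
     100005    1   tcp  32776  mountd
     100003    2   udp   2049  nfs
     100003    2   tcp   2049  nfs
 <lines removed for clarity>
 helium#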

We can also get a more compact listing of the registrations if we wish, which can sometimes be easier to read:

 helium# rpcinfo -s
    program version(s)  netid(s)                         service   owner
 <lines removed for clarity>
    100005  3,2,1       ticots,ticotsord,tcp,ticlts,udp  mountd    superuser
    100003  3,2         tcp,udp                          nfs       superuser
    100227  3,2         tcp,udp                          nfs_acl   superuser
 <lines removed for clarity>
 helium#

From this, we can see that three versions of the mountd program (Versions 1, 2, and 3) are currently registered with rpcbind, and that they are available over a number of different transports, including TCP and UDP. Let's interrogate the server a little more and check that the program is actually responding. To do this, we'll use rpcinfo to run the equivalent of a ping against the process:

 helium# rpcinfo -T tcp localhost mountd
 program 100005 version 1 ready and waiting
 program 100005 version 2 ready and waiting
 program 100005 version 3 ready and waiting
 helium#
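
The nfs program can be pinged in exactly the same way (illustrative output, based on the versions we saw registered above):

 helium# rpcinfo -T tcp localhost nfs
 program 100003 version 2 ready and waiting
 program 100003 version 3 ready and waiting
 helium#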

Both programs respond, so all the RPC servers appear to be working correctly. Now let's move on to one of the clients and configure it to use the new resource.

